The WebWatch project, which was based at UKOLN, University of Bath, and funded by the British Library Research and Innovation Centre (BLRIC), involved the development of robot software to analyse web resources in a variety of (mainly UK) communities. The project analysed several communities and produced reports on the results. Following the successful completion of the WebWatch project, a final report has been produced. This article summarises the findings published in the report.
Following an initial review of robot software tools, it was decided to make use of the Harvest software to analyse web resources. A slightly modified version of the software was used for a number of trawls. However, analysis of the data collected showed that Harvest was of limited use for analysing (rather than indexing) web resources: Harvest was designed to index the content of HTML pages, whereas we wanted to analyse the HTML tags in HTML pages (discarding the content) and to analyse binary objects, such as images.
Due to the limitations of Harvest, it was decided to develop our own robot tool. The robot was written in Perl and made use of the libwww library.
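The WebWatch robot itself was written in Perl with libwww; as a rough illustration of the approach only (not the project's actual code), a tag-counting analysis of the kind described, which keeps the HTML tags and discards the content, might be sketched in Python as:

```python
# Illustrative sketch only: the WebWatch robot was written in Perl/libwww.
# This counts the HTML tags in a page and discards the textual content.
from collections import Counter
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    """Record how often each HTML tag occurs; ignore the text content."""
    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        self.counts[tag] += 1

def count_tags(html):
    parser = TagCounter()
    parser.feed(html)
    return parser.counts

page = '<html><head><title>Example</title></head><body><p>Hi</p><p>There</p></body></html>'
print(count_tags(page))
```

In the real robot, the page source would of course be fetched over HTTP for each entry point in the input list, and the per-tag counts aggregated across the trawl.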
Several trawls were carried out using the WebWatch robot software including:
Rather than repeat the observations from the trawls, which have been published elsewhere, this article summarises some of the trends observed during the three trawls of UK University entry points.
Trawls of UK University entry points took place on 27 October 1997, 31 July 1998 and 25 November 1998. The first trawl used a list of entry points supplied by NISS, but the other two trawls used one supplied by HESA. Slight variations between these two lists, together with incomplete coverage of the entry points (due to errors in the input file and servers being unavailable when the trawls were carried out), mean that exact comparisons cannot be made, although trends can be observed.
Analysis of web server software shows that both the Apache and Microsoft IIS servers are growing in popularity, at the expense of the CERN, Netscape and NCSA servers and a number of more specialist servers.
The size of entry points has not changed significantly between the second and third trawls (as mentioned previously, the original version of the WebWatch robot did not allow image files to be analysed, so data on the total size of entry points was not available for the first trawl). Two entry points have grown significantly in size (by over 100 Kb), although one has shrunk by 50 Kb.
"Splash screens" are growing in popularity, with a doubling in numbers (from 5 to 10) between October 1997 and November 1998. Splash screens typically use the <META REFRESH="value" HREF="url"> HTML element to automatically display the page url after a period of value seconds. Splash screens are often used to display an advertisment (typically containing a large image, before taking the user to the main enty point for a site. Although splash screens can help to advertise an organisation, without forcing users to click to move progress further, they can be counter-productive by slowing down access to the entry points for regular visitors, and, of course, do generate extra network traffic.
Use of Dublin Core metadata has shown an increase, from 2 sites in October 1997 to 11 in November 1998. However, it is still overshadowed by use of "AltaVista" metadata (i.e. <META NAME="description" CONTENT="xx"> and <META NAME="keywords" CONTENT="xx">).
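The distinction between the two kinds of metadata can be checked mechanically. A hypothetical Python sketch (not one of the WebWatch utilities) that classifies the NAME attributes of <META> elements might look like:

```python
# Illustrative sketch: classify <META NAME="..."> elements in a page as
# Dublin Core (DC.*) or "AltaVista-style" (description/keywords) metadata.
from html.parser import HTMLParser

class MetaClassifier(HTMLParser):
    def __init__(self):
        super().__init__()
        self.dublin_core = []   # e.g. DC.Title, DC.Creator
        self.altavista = []     # description, keywords

    def handle_starttag(self, tag, attrs):
        if tag != 'meta':
            return
        name = dict(attrs).get('name', '')
        if name.lower().startswith('dc.'):
            self.dublin_core.append(name)
        elif name.lower() in ('description', 'keywords'):
            self.altavista.append(name)

page = ('<head>'
        '<meta name="DC.Title" content="Example">'
        '<meta name="keywords" content="webwatch, robots">'
        '</head>')
mc = MetaClassifier()
mc.feed(page)
print(mc.dublin_core, mc.altavista)  # ['DC.Title'] ['keywords']
```

Counting the two lists across all trawled entry points gives the site totals reported above.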
A poster illustrating these trends has been produced; it is shown in Figure 1.
The analysis of the data collected in WebWatch trawls is carried out by a series of Perl scripts and by spreadsheet and statistical analysis packages (including Microsoft Excel and SPSS). However, it was felt desirable to provide web-based interfaces to a number of the utilities developed by the WebWatch project, so that they were available for use by everyone and not just UKOLN staff. Three WebWatch utility programs have been released with a Web interface:
An illustration of the Doc-info service is shown below.
As described elsewhere in Ariadne, these services can be integrated with a Netscape browser, which makes them much more accessible.
The final report for the WebWatch project: WebWatching UK Web Communities: The Final WebWatch Report  contains more detailed information about the project, including observations from the trawls and relevant recommendations. The report also includes the reports which have been published elsewhere. The report is available in a variety of formats including MS Word, Adobe PDF and HTML.
Ian Peacock, who was appointed as the WebWatch Computer Officer on 28 August 1997, left UKOLN to take up a post at Netcraft on 12 February 1999. Netcraft is a commercial organisation which carries out regular analyses of web sites. Netcraft is based in Bath, so Ian has not had far to move. I would like to take this opportunity to thank Ian for the hard work and dedication he put into ensuring the success of the WebWatch project.
University of Bath