This workshop, jointly sponsored by the DPC, JISC and UKWAC, aimed to bring together content creators and tool developers with key stakeholders from the library and archives domains, in the quest for a technically feasible, socially and historically acceptable, legacy for the World Wide Web.
Adrian Brown, Assistant Clerk of the Records at the Parliamentary Archives, set out the framework for 'securing an enduring Web' around the key elements of selection, capture, storage, access and preservation. He identified new selection challenges arising from today's dynamic, personalised Web sites, and issues of 'temporal cohesion', where capture cannot keep pace with the rapid rate of content change. Capture tools therefore needed to evolve in line with the changing nature of the Web itself, and we needed to find technical solutions for the longer-term accessibility and maintenance of very large quantities of interlinked, complex data.
Hanno Lecher from Leiden University highlighted a more immediate concern: that of maintaining reliable access to Web resources cited in academic publications. Whilst advocating the use of citation repositories to keep copies of Web-published content, he noted that this approach is very labour-intensive, and suggested applications such as SnagIt or Zotero, or the WebCite service, as other options.
Eric Meyer of the Oxford Internet Institute spoke about the World Wide Web of Humanities Project, which aimed to enable researchers to extract thematic collections from the Internet Archive, and to provide enhanced access to the associated metadata. Meyer also touched upon an identified need to move away from collecting snapshots of the Web towards more continuous data, in order to facilitate temporal studies on Web archives, such as the growth of news networks or the development of the climate change debate.
Helen Hockx-Yu, Web Archiving Programme Manager at the British Library, gave an overview of the software tools available to support and manage the Web archiving process, also noting gaps in current provision. Overall, she painted a picture of a Web archiving community always having to play catch-up with the inherent creativity of the Web itself. In terms of preservation of Web content, there is still little consensus over strategy, practices or the use of specific tools, although international collaboration in the field, led by the IIPC (International Internet Preservation Consortium), has brought some convergence on certain crawlers and the development of the new WARC file format as international standard ISO 28500:2009.
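To make the WARC format concrete, the sketch below serialises a single minimal record in the header-block-plus-payload layout that ISO 28500 standardises. This is an illustrative simplification, not production archiving code: real crawlers write richer records (paired request/response captures, payload digests, metadata records) and the field values here are examples only.

```python
import uuid
from datetime import datetime, timezone

def warc_record(target_uri: str, payload: bytes) -> bytes:
    """Serialise a minimal WARC 'resource' record (after ISO 28500).

    A WARC file is simply a sequence of such records: a version line,
    named header fields, a blank line, the payload, and a record separator.
    """
    headers = [
        ("WARC-Type", "resource"),
        ("WARC-Target-URI", target_uri),
        ("WARC-Date", datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")),
        ("WARC-Record-ID", f"<urn:uuid:{uuid.uuid4()}>"),
        ("Content-Type", "text/html"),
        ("Content-Length", str(len(payload))),
    ]
    head = "WARC/1.0\r\n" + "".join(f"{k}: {v}\r\n" for k, v in headers)
    return head.encode("utf-8") + b"\r\n" + payload + b"\r\n\r\n"
```

Because records are self-describing and simply concatenated, a WARC file can act as a container for an entire crawl, which is what makes it attractive as a long-term storage format.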
Cathy Smith, Collections Strategy Manager at The National Archives , gave an overview of a recent research study, looking at what audiences Web archives can anticipate and what the Web might look like as a historical source. Should Web archivists aim at building a holistic, but shallow view of the whole UK Web domain, or harvest specific sites in depth, along thematic lines? Preliminary findings suggest that users would prefer to use a national Web archive, although this does not necessarily imply a single repository. Existing institutions could continue to provide access to local Web collections, but there should be coordination to eliminate potential overlaps arising from differing thematic, legal and geographical collecting remits.
Amanda Spencer and Tom Storrar, also from The National Archives, spoke about TNA's Web Continuity Project, which combines comprehensive capture of UK central government Web sites with the deployment of redirection software to ensure persistent access from live sites to archived Web resources. The team has also been working to influence policy makers and content creators to promote best practice in Web site construction, leading to more successful harvesting of site content.
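The redirection idea can be sketched as follows: when a government site would otherwise return 'page not found', the server instead redirects the visitor to an archived snapshot of the requested URL. The archive base address, snapshot timestamp and URL scheme below are hypothetical placeholders, not the actual service's details.

```python
from urllib.parse import urlsplit

# Hypothetical archive base; the real service and its URL scheme may differ.
ARCHIVE_BASE = "http://webarchive.example.gov.uk"

def archive_redirect(requested_url: str, snapshot: str = "20090701000000") -> str:
    """Build the archived-copy URL a 404 handler could redirect to.

    Rather than breaking inbound links when a page is retired, the live
    server issues a redirect to a timestamped snapshot in the Web archive.
    """
    parts = urlsplit(requested_url)
    if parts.scheme not in ("http", "https"):
        raise ValueError("expected an absolute http(s) URL")
    return f"{ARCHIVE_BASE}/{snapshot}/{requested_url}"
```

In practice such a mapping would sit behind the Web server's error handling, so that citations and bookmarks to superseded government pages keep resolving.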
Richard Davis, Project Manager at the University of London Computer Centre, gave an introduction to the ArchivePress blog-archiving project being undertaken with the British Library Digital Preservation Department. Describing the complex (and expensive) Web harvesting tools currently available as 'using a hammer to crack a nut' when it comes to blogs, he explained that the ArchivePress team will instead exploit a universal feature of blogs – newsfeeds – as the key to gathering blog content for preservation. The approach might later be adapted to harvest content from Twitter.
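The attraction of newsfeeds is that a blog already publishes its posts in structured, machine-readable form, so a harvester can collect content without crawling the rendered site. The sketch below, which is illustrative and not ArchivePress code, pulls the key fields from an RSS 2.0 feed using only the standard library.

```python
import xml.etree.ElementTree as ET

def extract_posts(rss_xml: str) -> list:
    """Extract title, link, date and body text from an RSS 2.0 feed.

    Each <item> element in the feed corresponds to one blog post, so
    harvesting reduces to polling the feed and storing new items.
    """
    root = ET.fromstring(rss_xml)
    posts = []
    for item in root.iter("item"):
        posts.append({
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
            "published": item.findtext("pubDate", default=""),
            "content": item.findtext("description", default=""),
        })
    return posts
```

A preservation workflow would fetch each blog's feed on a schedule, deduplicate items against those already archived, and store the structured records rather than scraped HTML.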
Maureen Pennock, Web Archive Preservation Project Manager at the British Library, explained some of the issues involved in the longer-term preservation of Web content, beyond capture. Initiatives instigated by the British Library to protect the contents of the UK Web Archive from obsolescence include a technology watch blog, a regular risk review of file formats held within the archive, coupled with migration to the container WARC format, and the creation of a Web preservation test-bed. Maureen also outlined some challenges for the future, such as the growth of closed online communities and of personalised Web worlds. Finally, she made the important point that 'preservation is best if it begins at source', emphasising the need to produce Web content which is optimised for harvesting.
Thomas Risse introduced the Living Web Archive (LiWA) Project, which seeks to develop the 'next generation of Web content capture, preservation, analysis, and enrichment services'. The new approach will go beyond harvesting static snapshots of the Web to enable the capture of streaming media and link extraction from dynamic pages, and will include methods for filtering out Web spam.
Jeffrey van der Hoeven spoke about work on the emulation of old Web browsers in Global Remote Access to Emulation-Services (GRATE) as part of the Planets Project, and within the framework of KEEP (Keep Emulation Environments Portable). He pointed out that the current generation of crawlers assume a PC-based view of the Web. Given the ever-increasing capabilities of mobile presentation devices, he suggested that in future the focus will need to shift towards capturing content, with emulation used to recreate different views of the same content.
The conference concluded with a roundtable discussion, following Kevin Ashley's glimpse into the future of the Web's past. He argued that future researchers will not just want to browse individual Web pages, but will want to exploit the inherent properties of Web content in aggregate – introducing the concept of 'mashups in the past'. This assumes access to archived Web data in bulk, permitting machine-to-machine interaction with different sources of historical Web content.
Questions centred on how best to engage Web users in selecting and appraising Web content for preservation, obtaining permission for harvesting, and the potential impact of enhanced legal deposit legislation in the UK. Suggestions included the idea that popular Google searches might be used as one method of selecting content for capture, and a proposal to archive the UK domain name registry.