Survive or Thrive is the punchy title given to an event intended to stimulate serious consideration amongst digital collections practitioners about future directions in our field - the opportunities, but also the potential pitfalls. The event, which focused on content in HE, comes at a time of financial uncertainty, when proving value is of increasing importance in the sector, and at a point when significant investment has already been made in content creation in the UK, set against a backdrop of ever more content freely available on the open Web from a multitude of sources.
The premise of the event - we must survive in this context and seek to thrive - is a timely reminder of these realities, but also a motivation to explore some of the currently available digital collection technologies and models for user engagement that are out there and working successfully in this environment in order to make the most of our digital collections for our users.
Dan Greenstein is Vice Provost at the University of California, a role which includes oversight of the California Digital Library. He has previously been Director of the Digital Library Federation, the Arts and Humanities Data Service and the Resource Discovery Network.
Greenstein's opening point was that we would be hearing a lot about what can be done, but it would be important to focus on what should be done - and this will be different for each one of us. Something that is likely to apply to all of us, however, is that we will be working in a time of reduced budgets. While he is of the opinion that cuts of 10-15% are absorbable, cuts of 20-30% across the sector will require fundamental changes. Although student numbers have been rising steadily, funding per capita has been falling (as demonstrated by statistics from the UK's Department for Education and Skills for the period 1980/81-1999/2000).
Although the period after World War II saw an explosion in printed publications, we are now seeing an explosion in new distribution models - digital, mobile and print-on-demand - supported by e-publishing and retrospective digitisation. Therefore, Dan maintained, 'redundant management of print collections is insane' as is seeking to make savings on special collections: they are what make a library unique, not copies of print publications that will be increasingly available through other means.
Next generation information practice is likely to require:
What does it take to fix this? Changes in collection management:
The questions are political, not technical. Currently, core funding is used for print acquisition and e-licensing - commercial acquisition - while 'funding dust' is used for information literacy and digital curation. There is no trade-off being made, and this is unsustainable. Everything should be funded from the same collection budget, with a realistic approach to prioritisation - what ought to be done.
Discovery and delivery are orienting to the individual. If library services do not follow suit, then users will go around us to get at content. This is a 'massively heavy lift' - can we do it quickly enough before the industry bypasses us? (Dan advised the audience to take a look at what Apple had done to the music business with the iPod and iTunes. He maintained it is going after the publishing industry next.)
Greenstein's final point - an orderly retreat is better than a disorderly one: services and access to information can be maintained.
Following the economic theory of marginal utility, Mike Ellis described how, historically, value derived from scarcity and there was benefit from keeping content closed. In the networked age there are three phenomena which challenge this:
Nowadays, Mike maintained, value becomes about usability rather than scarcity:
It is worth noting that point 8 proved highly contentious during discussion, and was subsequently modified to agree that 'how' should include notions of interoperability, openness and Web accessibility, but beyond that specific technologies matter less than these principles.
Mike has placed his presentation on SlideShare.
The National Archives (TNA) have started to publish content to Flickr Commons. The primary purpose was to use Flickr as a 'shop window' to get across the scope of the collections.
Benefits and the business case for using third-party dissemination:
A key driver for the adoption of Flickr was to avoid rebuilding what was there already, and the motivation to avoid the 'build it and they will come' approach in favour of going to where the users are already. This represents a balance of benefits - costs vs. control of content. A key lesson is that Flickr is working at Web scale - TNA content is not!
It is also worth noting that the ambition for mashups of TNA content was regarded as overly optimistic and has not manifested itself. However, this is not seen as a disappointment - rather, TNA now better understand their users and their expectations of TNA content as a result of this foray into Web-scale dissemination.
What is the aim of being on Flickr?
James Reid began with a widely held assertion: 80% of an organisation's information has a geographic aspect, whether directly or indirectly referenced. Direct referencing is an explicit assertion of geographic information (e.g. open Ordnance Survey maps or geo-referenced digital collections), while indirect referencing is an implicit one (e.g. textual references to place names). EDINA Unlock provides services for geo-referencing and geo-parsing (text mining for geographic references).
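The direct/indirect distinction can be illustrated with a toy geo-parser: scan free text for known place names and attach coordinates, turning indirect (textual) references into direct (coordinate) ones. This is only a sketch of the principle - services such as EDINA Unlock use far richer gazetteers plus disambiguation - and the gazetteer entries here are invented for illustration.

```python
import re

# Hypothetical gazetteer: place name -> (latitude, longitude)
GAZETTEER = {
    "Edinburgh": (55.953, -3.189),
    "London": (51.507, -0.128),
    "Manchester": (53.480, -2.243),
}

def geoparse(text):
    """Return (place, lat, lon, offset) for each gazetteer match in text."""
    matches = []
    for place, (lat, lon) in GAZETTEER.items():
        # Whole-word match so 'London' does not match inside 'Londonderry'
        for m in re.finditer(r"\b%s\b" % re.escape(place), text):
            matches.append((place, lat, lon, m.start()))
    # Report matches in document order
    return sorted(matches, key=lambda t: t[3])

hits = geoparse("The archive holds letters sent from Edinburgh to London.")
```

A real geo-parser would also need to resolve ambiguous names (which of the many Newports?), which is where curated gazetteer services earn their keep.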
James has made his presentation available.
Tom Heath explained linked data using the analogy of a transport network - different transport types interoperate; we don't need to understand how trains or buses work in order to use them to get places. The historical lesson: building physical networks adds value to the places which are connected - building virtual networks adds value to the things which are connected.
How is linked data best implemented? There will be no big bang; expansion and benefits will be incremental. The first step is to build an infrastructure of identifiers for the things we care about as an organisation. This can be as simple as id.uni.ac.uk/thing URIs that capture the domain model (people, departments, things, etc.). The cost? As for any infrastructure development, the bootstrapping cost must be weighed against cost savings and the value of things that would not otherwise get done.
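The 'infrastructure of identifiers' idea can be sketched very simply: mint stable HTTP URIs for the things the organisation cares about, then publish triples linking them. The id.uni.ac.uk base comes from the talk; the vocabulary URI and the resulting N-Triples line are illustrative assumptions, not a real scheme.

```python
BASE = "http://id.uni.ac.uk"

def mint_uri(kind, slug):
    """Mint a URI such as http://id.uni.ac.uk/person/a-turing."""
    return f"{BASE}/{kind}/{slug}"

def triple(subject, predicate, obj):
    """Render one triple in N-Triples syntax (URI objects only, for brevity)."""
    return f"<{subject}> <{predicate}> <{obj}> ."

# Capture a tiny slice of a domain model: a person and their department
person = mint_uri("person", "a-turing")
dept = mint_uri("department", "computing")
link = triple(person, "http://example.org/vocab/memberOf", dept)
```

The value is incremental, as Heath said: each new class of identifier (courses, buildings, datasets) plugs into the same pattern, and anything already published can link to it without rework.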
Tom has made his slides available.
Sophia Ananiadou introduced applications of text mining that are being developed in various academic disciplines to support the analysis of large textual corpora. This is an emerging research discipline in its own right that is finding traction amongst the natural and social sciences as well as in the area of digital humanities. Text mining offers the ability to extract semantic information from unstructured textual documents through a sequence of techniques:
Unstructured text (implicit knowledge)
→ Information retrieval
→ Information extraction (named entities)
→ Semantic metadata annotations (enrich the document)
→ Structured content (explicit knowledge)
Existing tools can be adapted to a subject domain using annotated corpora. Applications include document classification, metadata extraction, summarisation and information extraction. Current NaCTeM domains are biological, medical and social sciences. 
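The pipeline above can be sketched end to end in miniature: extract named entities from unstructured text using a toy dictionary (standing in for a trained NER model), then emit the document with its semantic metadata attached as structured content. The entity lists are invented for illustration.

```python
# Toy stand-in for a trained named-entity recogniser
ENTITY_DICT = {
    "aspirin": "DRUG",
    "ibuprofen": "DRUG",
    "headache": "SYMPTOM",
}

def extract_entities(text):
    """Information extraction: find (entity, type) pairs in the text."""
    found = []
    for token in text.lower().replace(".", "").split():
        if token in ENTITY_DICT:
            found.append((token, ENTITY_DICT[token]))
    return found

def annotate(text):
    """Structured content: the document enriched with semantic metadata."""
    return {"text": text, "entities": extract_entities(text)}

doc = annotate("Aspirin is commonly taken for headache.")
```

Adapting such a tool to a new subject domain, as NaCTeM does, amounts to replacing the dictionary lookup with a model trained on an annotated corpus from that domain.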
The JISC and RLUK Resource Discovery Vision: the ultimate goal is a national strategy for resource discovery, supported by aggregation services for metadata.
More information is available at:
Digital New Zealand is a national-scale platform to support digital content, including infrastructure, metadata, and front-end search and delivery. There are currently 99 contributing organisations within New Zealand.
Four initial projects (2008) launched with 20 contributors:
Initial work focussed on building foundations for something bigger: designing an infrastructure to be extensible and building from the ground up. Using an agile methodology (Scrum), they were able to build the prototypes for the initial launch in 16 weeks, although a lot of background work had already been done on the concepts and on organisational buy-in.
To maintain momentum and energy and avoid the 'slow death of project wind-down' DigiNZ took a conscious decision to maintain the project atmosphere (although not technically a project anymore) by rejecting the idea of 'business-as-usual' and keeping to the two-week development sprint cycle. A current challenge is finding the balance between maintenance and development.
DigiNZ are currently looking at digitisation support services, including an advice service similar to JISC Digital Media. They are also running a central service for digitisation nominations, on which members of the public can vote. To date, they have had 100 nominations, the most popular of which has received 600 votes. This is considered to be a moderate amount of activity.
DigiNZ will continue its mission to lower barriers to getting content online, which includes hosting services for institutions without the resources to maintain their own. Future applications will include metadata enhancement such as geo-tagging, based on the principle of a game to crowdsource the required effort.
Current issues are the balance of effort between central and distributed functionality and tools. They are more interested in distributed access where partner organisations take on much of the effort of publicity. The scope of inclusion of content and the balance between maintenance and development are also issues.
Things that worked for DigiNZ:
Crowdsourcing developed in response to the deluge of astronomical data, but is increasingly being applied in other domains. Some astronomical data can be processed by machine, but not all - in particular, the pattern matching necessary to recognise galaxy shapes is not yet sufficiently sophisticated, and even neural nets have problems.
'Citizen science' is not new: it builds on a tradition of amateur ornithologists, astronomers and palaeontologists. However, crowdsourcing is a new kind of citizen science that inverts the normal model. Historically, scientists asked people to supply the data (e.g. observations of birds or fossils) while the scientists did the analysis. In the crowdsourcing model, the scientists supply the data and the people do the analysis.
Galaxy Zoo underwent an explosion in popularity after it was picked up by the BBC, which crashed the original servers. At its peak it was receiving 70,000 classifications per hour (more than one PhD-student-month of effort per hour) and by the end it had racked up over 200 FTE (full-time equivalent) years of effort from 300,000 volunteers. To put this in perspective, however, that is only the same attention as that given to six seconds of Oprah Winfrey's TV show.
Lessons learnt about volunteer recruiting:
It became clear that crowdsourcing could tackle different kinds of questions:
The platform is now being extended into a generalised platform for crowdsourcing, to lower the barriers to entry for early-career researchers. It will also act as an intermediary providing guarantees to both sides of the relationship (researchers and volunteers) - a 'fulfilment contract'. Similar widespread publicity is not expected in the future - people are no longer as amazed by what the Web can do - so the challenge of recruiting and engaging volunteers is seen as ongoing.
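The core of a Galaxy Zoo-style workflow - many volunteers classify the same object, and noisy individual answers are combined into a consensus - can be sketched with a simple majority vote. Real projects weight volunteers by measured reliability; this minimal version, with invented vote data, only shows the principle.

```python
from collections import Counter

def consensus(labels, threshold=0.5):
    """Return (label, agreement) if one label exceeds the agreement
    threshold across all volunteer classifications, else (None, agreement)."""
    counts = Counter(labels)
    label, n = counts.most_common(1)[0]
    agreement = n / len(labels)
    return (label, agreement) if agreement > threshold else (None, agreement)

# Five volunteers classify the same galaxy image
votes = ["spiral", "spiral", "elliptical", "spiral", "spiral"]
label, agreement = consensus(votes)
```

Objects that fail to reach consensus are the interesting ones: they can be routed to more volunteers, or flagged for expert review, which is exactly where serendipitous discoveries tend to surface.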
Attention or activity data harvested from OPACs can be used as a source of information about user behaviour and hence as a basis to inform collection management or to build user recommendation systems. There are three kinds of data:
The University of Huddersfield demonstrated an increased borrowing range by making suggestions based on current user behaviour (average number of books borrowed per person increased from 14 to 16 over the period 2005-9). A developer challenge based on Huddersfield data produced two applications to improve resource discovery, two to support recommended courses based on a user's behaviour, and two to support decision-making in collection management.
Findings of the CERLIM/MOSAIC Project are that 90% of students want to know what other people are using - to get the bigger picture of what is out there and to aid retrieval. Recommender systems are used commercially, so the benefits and mechanisms are well understood. It was also observed that they do not appear to lead people to waste money (at least this is not a reported outcome), so they are unlikely to lead people to waste attention in OPACs and other non-commercial settings.
There are two approaches to user-augmented retrieval: data analysis (of attention, activity data) and social networks (user engagement - reviews recommendations etc). It was opined that activity data are likely to be of more use for undergraduate courses, due to the volume of use and associated data, while research use exists in the long-tail of the collection, and will likely benefit from social network-type effects. However, it was observed in the Question and Answer session that the long-tail phenomenon only exists at institutional level - subject networks of researchers at national level will have their own corpora of high-use material, and analysis at that level is likely to be more productive.
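The 'people who borrowed this also borrowed' mechanism behind the Huddersfield work can be sketched by counting how often items co-occur in users' loan histories and recommending the most frequent companions. The loan data here is invented, and production systems would add normalisation and privacy safeguards.

```python
from collections import Counter
from itertools import combinations

# Invented circulation data: user -> set of borrowed items
loans = {
    "u1": {"A", "B", "C"},
    "u2": {"A", "B"},
    "u3": {"B", "C"},
    "u4": {"A", "C"},
}

# Count every ordered pair of items borrowed by the same user
cooccur = Counter()
for items in loans.values():
    for x, y in combinations(sorted(items), 2):
        cooccur[(x, y)] += 1
        cooccur[(y, x)] += 1

def recommend(item, k=2):
    """Top-k items most often borrowed alongside `item`."""
    pairs = [(other, n) for (i, other), n in cooccur.items() if i == item]
    return [other for other, n in sorted(pairs, key=lambda p: -p[1])][:k]

recs = recommend("A")
```

This is the data-analysis approach from the paragraph above; the social-network approach would instead surface explicit reviews and recommendations, which matters more in the sparse long tail where co-occurrence counts are too thin to be reliable.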
The Q&A session focussed on addressing the elephant in the room - financing. Despite Greenstein's admonition at the start of the day, there was a striking absence of specific financial discussion through the course of the day, in favour of a somewhat isolated appreciation of technical possibilities and associated user benefits. The basis for a cost-benefit analysis was not always clear.
The question was simple - "how do we pay for this?" - the answer, less clear. Greenstein was adamant that core budgets can and should be realigned to give appropriate focus to digital collections, while admitting the possibility of commercial partnerships. Commercial partnerships were considered optimistic by other respondents, with Pugh warning that "there is no pot of gold for all" and Ellis pointing out the significant overheads of running an in-house commercial operation such as an image library. Neale advocated 'smuggling' technical costs into other activities, which over time could build into a significant strand - the danger being that the 'other activities' (and the funding they bring in) could dry up, leaving significant, un-resourced technical commitments. Stuart Dempster (JISC) urged us to take a look at the Ithaka case studies (which are due to be refreshed in 2010) and also to pay close attention to government policy to see which way the financial winds are blowing for the sector.
One thing was clear - we cannot avoid opening up and preserving content. However, while possibilities were offered and successful financial models do exist, resourcing specific digital collections remains a challenge for their curators - much depends on the nature of the collection and its target audience.
This two-day event offered a fast-paced look at many areas of digital collections technologies and the ways in which user behaviours are changing. While it was interesting to hear about emerging digital technologies and models for user engagement, following the event there was something of a return to normality as the realities of implementing these innovative technologies became apparent. As mentioned above, despite serious financial considerations in the opening session, there was a striking absence of financial discussion throughout the sessions. It was not clear in many cases what the cost of adopting such technologies or approaches would be, nor the potential savings.
Another topic conspicuous by its absence was digital preservation. The ongoing costs of maintaining digital collections are only starting to be understood, and in many institutions making the case for infrastructure to support preservation is the first step before one can consider ways to innovate. Flipping this on its head - might it be that innovating with content (and thus providing the associated user benefits) will make collections so valuable that preservation becomes a de facto result of maintaining the user community through continued, supported access? In which case, starting from where we are, is it better to make the case for innovation or for preservation? (And of course the answer may be different for different communities, collections or user engagement models.) A caveat, however - we are all familiar with the project silo which, while innovative in the years which produced it, is now languishing on the equivalent of a dusty shelf and looking its age. As institutional digital collections multiply, so do the resources required to maintain them; and multiplying technology stacks on top of content only exacerbates this. Without closer engagement with the source of use - and, increasingly, the source of the data: researchers, teachers and students - we run the risk of multiplying our commitment to technologies without an associated increase in the technical resources to maintain them. If we are not demonstrating the value in doing so, then this is going to be a hard case to make, and rightly so.
Nevertheless, despite these potential pitfalls, this event filled me with a sense of optimism, as so much is changing and so much is becoming possible. The digital revolution is changing the way information is used, and if we understand our collections, their users and the possibilities, then there exists the chance to improve information availability and use throughout the sector.