Web Magazine for Information Professionals

Digital Libraries '97

David Nichols reports on the important international conference: Digital Libraries '97.

The ACM conference on Digital Libraries is, despite its short life, the premier conference for the field. This year the computer scientists, librarians and all those in-between travelled to Philadelphia in Pennsylvania for ACM Digital Libraries ‘97 [1]. The conference, from 23-26 July, took place in the Doubletree Hotel [2] in downtown Philly and was co-located with the more technical SIGIR ‘97 [3].

All of the usual conference elements were present, with 28 papers, 2 invited talks, tutorials, workshops and a session for posters/demonstrations. Predictably there was a strong US representation, both in papers (21) and in attendance. Only 2 papers came from the UK - one from an eLib project and one from a BL RIC project: this seemed disappointing, although perhaps many projects would have targeted ELVIRA [4] instead.


The main themes of the conference program were multimedia, interoperability (via 4 D-Lib panels) and metadata. Contrasting with the technical slant was an undercurrent of user-centered discussion, at times struggling to be heard amongst the systems talk. Many of these discussions continued into SIGIR '97.

The conference kicked off with a keynote address from Jim Reimer of IBM on Digital Libraries in the Media Industry. His main topics were the trend towards digital post-production, failing media (what to do when the magnetised particles start separating from your tapes) and the problems of supporting searching in very large collections. Several large numbers (250 terabytes for a digital version of a film, $3 billion per year spent couriering analogue tape around the USA) punctuated an entertaining talk that encapsulated many of the problems facing the digital libraries research community. Particularly salient were the anecdotes on searching: the (alleged) accidental discovery of the Beatles' 'Live at the BBC' footage, the media house with 20 copies of the same video circulating, and the advertising agency that re-shoots pictures of toasters because it cannot find the previous ones.

The first paper session was on the topic of multimedia. A potentially very useful application of image processing techniques (from the University of Massachusetts Multimedia Indexing and Retrieval Group [5]) allowed text in images (e.g. signs, maps, adverts, cheques) to be extracted and then used as index terms for the image. Two papers from the Informedia Project [6] at Carnegie Mellon University examined the role of abstractions in accessing video and the problems in accessing transcribed text from spoken documents. The video abstractions presentation was particularly interesting as it is symbolic of the general digital library problem: too much information. The key challenge is to summarise and filter the information to manageable limits for a human user. The techniques described (identifying relevant clips and then generating representative 'skims') are partly driven by bandwidth considerations but also by the temporal nature of video: you may have to watch a lot to determine that something is not worth watching.
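
To make the image-indexing idea a little more concrete, here is a minimal sketch of the general approach: recover whatever text is visible in an image and treat the recognised words as index terms. It is only an illustration, assuming a stock OCR library (pytesseract with Pillow) in place of the UMass group's own, far more sophisticated, extraction techniques.

from collections import defaultdict

from PIL import Image
import pytesseract

index = defaultdict(set)  # index term -> set of image filenames

def index_image(path):
    # OCR the image and record each recognised word as an index term
    text = pytesseract.image_to_string(Image.open(path))
    for term in set(text.lower().split()):
        if term.isalpha() and len(term) > 2:  # crude filtering of OCR noise
            index[term].add(path)

def find_images(term):
    # return the images whose embedded text mentions the term
    return sorted(index[term.lower()])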

Unsurprisingly the US Digital Library Initiative [7] was strongly represented at the conference - most notably by Stanford University [8]. Their InfoBus system is a prototype architecture that integrates metadata and proxies to achieve interoperability with heterogeneous services (or in English - it can work together with many different types of things). The direct-manipulation user interface component (DLITE) [9] was particularly interesting - integrating search and query objects into larger tasks by making intermediate results visible on the desktop. This seemed to me one of the more stimulating papers of the whole conference, showing an interface which has the potential to change work practices in dealing with search tasks. DLITE is just one of the services that can be attached to the InfoBus, and the architecture as a whole shows promise although, as one person asked during the D-Lib panel, "what about scalability?" The movement of the InfoBus away from Stanford should be closely watched.
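
The proxy idea at the heart of this kind of interoperability is easy to caricature in code, and the sketch below is just that: a caricature, with class and method names invented for illustration rather than taken from Stanford's actual protocols. Each heterogeneous service is wrapped in a proxy exposing one common search interface, so a client (a DLITE-like desktop, say) never talks to the back ends directly.

class SearchProxy:
    # the common interface every wrapped service must provide
    def search(self, query):
        raise NotImplementedError

class LibraryCatalogueProxy(SearchProxy):
    def search(self, query):
        # translate the query into the catalogue's native form, call the
        # service, and map its results back into a common representation
        return ["catalogue record about '%s'" % query]

class WebIndexProxy(SearchProxy):
    def search(self, query):
        return ["web page about '%s'" % query]

def federated_search(query, proxies):
    # the client sees one interface regardless of what sits behind it
    results = []
    for proxy in proxies:
        results.extend(proxy.search(query))
    return results

print(federated_search("digital libraries",
                       [LibraryCatalogueProxy(), WebIndexProxy()]))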

Further contributions from Stanford dealt with shopping models and the costs of translating queries between search engines. All these elements had a strong metadata component, and the twin themes of metadata and interoperability surfaced at many points during the conference. The expected scenario is certainly converging on near-seamless searching of multiple heterogeneous databases held together with a metadata glue. The session on agents was not particularly interesting, doing little to counteract some computer scientists' claim that 'agent' is just another word for 'program'.
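
To give a flavour of the query-translation problem mentioned above, the toy example below rewrites one structured query into two invented engine syntaxes; real engines differ in far messier ways (field names, operators, character sets, ranking), which is where the real costs lie.

def to_engine_a(fields):
    # hypothetical engine A expects field=value pairs joined by AND
    return " AND ".join("%s=%s" % (f, v) for f, v in sorted(fields.items()))

def to_engine_b(fields):
    # hypothetical engine B expects prefix-style field operators
    return " ".join("%s:(%s)" % (f, v) for f, v in sorted(fields.items()))

query = {"title": "digital libraries", "author": "samuelson"}
print(to_engine_a(query))  # author=samuelson AND title=digital libraries
print(to_engine_b(query))  # author:(samuelson) title:(digital libraries)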

Thursday night was the major social event - a banquet cruise on the Delaware River on the Spirit of Philadelphia. Advertised as an all-weather event, this showed prescient organization as Philadelphia was suffering unseasonably wet weather. Added unplanned excitement came from one of the coach drivers (of course the one I happened to be on!) who appeared not to know in which direction he was going. After an incident with a couple of police cars and an unusual three-point turn on a major road we finally reached the boat. The end of the evening was no less eventful when the same coach couldn't extricate itself from the parking lot. After all the other cars had been moved (20 minutes later) we did eventually head off for the hotel - but only after several jokes about the InfoBus being stuck in the car park.

Underneath the already choppy waters of digital libraries research lurks the dread multi-headed copyright monster. The conference addressed this squarely with a plenary session from Pamela Samuelson of the University of California at Berkeley. She took the computer scientists to task for 'flattening out complex copyright law to a pancake'. In an informative talk Samuelson provided a legal perspective on a wide variety of issues. One section asked whether it was 'lawful to link'. Citing both the Shetland case (see Charles Oppenheim's discussion of this issue [10]) and some US cases [11] [12] [13], she concluded that plain (vanilla) linking was probably acceptable as a case of interoperability (generally regarded as a good thing by courts). However, linking as part of a framed environment can cause problems: e.g. when adverts in different frames are for competing companies who may have exclusivity agreements. Samuelson commented that Web publication would probably be interpreted as an implied license to index the content.

Surprisingly, the talk also included the most interesting discussion of agents at the conference. Private copying is often permitted under conditions of fair use: so can agents exercise fair use? As many services rely (or intend to rely) on adverts to provide a revenue stream, the use of agents (not known for their appreciation of adverts) may not be considered fair use. Consequently, even the temporary copying that agents may undertake could fall foul of copyright restrictions. Perhaps we should be designing agents to read advertisements? Following on from this, Samuelson posed the question: can agents ever make an enforceable contract, or are contracts the exclusive property of humans?

Samuelson’s reports from the legal frontline can be found in frequent contributions [14] to the Communications of the ACM. As she commented, even children know stealing is wrong but it is the professors who have their computers full of illegal software.

The session on digital scholarship (whatever that is) contained the first UK paper. Steve Hitchcock from the eLib Open Journal project [15] described a service that users would actually find useful: online citations linking to abstracts in a separate bibliography database. This session also contained a typically stimulating talk from Cathy Marshall of Xerox PARC on annotation. Those of us who remember the early versions of Mosaic may recall the unimplemented Group Annotation menu items: Marshall's paper is a good example of the primary user-oriented research that should be done before rushing off to code solutions. She examined the annotations that students made in university textbooks by searching through the stock of a campus second-hand bookshop. Marshall noted the different functions annotation can serve (placemarking, tracing progress, problem-working etc.) and gathered some limited evidence that students preferred annotated copies to pristine ones.

Friday night was reserved for the poster session and demonstrations. I found the poster session a little disappointing, with little of real novelty. The notable exception was a poster/demonstration of a category-map-based browsing system [16] from the University of Arizona. The system, based on an underlying neural network algorithm, produces an easily understandable graphical representation of the underlying textual content of a Web page. The other demonstration of note was from IBM, showing some research software called LexNav (for Lexical Navigator): in this case displaying query results as networks of lexical links in a 2D view. This view is based on the actual contents of the collection and allows interactive query refinement through expanding the network of terms. These were exactly the sort of innovative interfaces I had expected to find at the conference and it was a pity there were not more of them.
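
As a rough illustration of the term-network idea, the sketch below builds a simple co-occurrence graph over a toy collection and uses a term's strongest neighbours to suggest query refinements. It is a drastic simplification of what either the Arizona system or LexNav actually did, but it conveys why such networks support interactive refinement.

from collections import Counter, defaultdict
from itertools import combinations

def build_term_network(documents):
    # count how often pairs of terms occur in the same document
    links = defaultdict(Counter)
    for doc in documents:
        terms = set(doc.lower().split())
        for a, b in combinations(sorted(terms), 2):
            links[a][b] += 1
            links[b][a] += 1
    return links

def suggest_refinements(term, links, k=3):
    # the k terms most strongly linked to the query term
    return [t for t, _ in links[term.lower()].most_common(k)]

docs = ["digital libraries need metadata",
        "metadata supports interoperability",
        "digital video libraries use metadata"]
links = build_term_network(docs)
print(suggest_refinements("metadata", links))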

Saturday morning saw the user-oriented community claim the stage. David Levy (Xerox PARC) captured the attention of the audience by describing the ways in which our attention is distributed and fragmented. He argued that information technology, and in particular the Web, encourages a fragmentation of attention by splitting wholes into parts which are then re-linked; the very act of navigating hypertext requires additional attentional resources. Computer screens make the problem worse as they promote multiple applications and windows, whereas the wholeness of books promotes concentrated, long-lasting attention. Gary Marchionini described a technically simple digital library for a community of teachers in Baltimore and showed, with several anecdotes, that the major problems are not to do with the technology - rather they are social and political. Successful use of the resources required a change in the teachers' style of work: greater explicitness in planning curriculum modules and less possessiveness about their own materials. This non-technical session closed with a paper from Lancaster University on ethnographic research methods, highlighting the difficulties of computer scientists working with ethnographers. Saturday ended with workshops on thesauri and metadata, curriculum issues and collaboration in the digital library.

In summary, an interesting conference which illustrated the diversity of research in the field of digital libraries. Ranging from the user-oriented research of Levy and Marshall to practical demonstrations of new interfaces, there was something for everyone, although it seemed there were more computer-related people than library people. The latter may have been put off by several technical papers which, it seemed to me, would have been more at home in the following SIGIR conference. But overall it was thoroughly worth attending, and it would have been even better to see more UK involvement. Next year, Digital Libraries '98 [17] is co-located with Hypertext '98 and moves a few hundred miles west to Pittsburgh.

The author is very grateful to the British Library Research and Innovation Centre for financial support which allowed him to attend this conference.

References

[1] The home page of Digital Libraries ‘97:
http://www.lis.pitt.edu/~diglib97/

[2] DoubleTree Hotel, Philadelphia
http://www.doubletreehotels.com/DoubleT/Hotel61/79/79Main.htm

[3] The home page of SIGIR ‘97:
http://www.acm.org/sigir/conferences/sigir97/

[4] ELVIRA 4
http://ford.mk.dmu.ac.uk/ELVIRA/ELVIRA4/

[5] University of Massachusetts Computer Science Department Multimedia Indexing and Retrieval Group:
http://hobart.cs.umass.edu/~mmedia/

[6] Informedia Project at Carnegie Mellon University:
http://www.informedia.cs.cmu.edu/

[7] Digital Library Initiative:
http://dli.grainger.uiuc.edu/national.htm

[8] Stanford Digital Library Project:
http://www-diglib.stanford.edu/diglib/

[9] DLITE:
http://www-diglib.stanford.edu/diglib/cousins/dlite/

[10] The Shetland Times dispute analysed by Charles Oppenheim:
http://www.ariadne.ac.uk/issue6/copyright/

[11] The TotalNews case:
http://www.wired.com/news/topframe/4204.html

[12] The TotalNews case (2):
http://www.wired.com/news/topframe/2230.html

[13] This is the site that caused all the arguments:
http://www.totalnews.com/

[14] Selected papers by Pamela Samuelson:
http://info.berkeley.edu/~pam/papers.html

[15] Open Journal Project:
http://journals.ecs.soton.ac.uk/

[16] Online demo of the Arizona system:
http://ai2.BPA.Arizona.EDU/ent/

[17] Digital Libraries ‘98:
http://www.ks.com/DL98/

Author details

David Nichols,
Research Associate,
Cooperative Systems Engineering Group,
Computing Department,
Lancaster University,
Lancaster LA1 4YR
Email: dmn@comp.lancs.ac.uk