This one-day conference was held by NISO (the US National Information Standards Organization) on Wednesday 29 October at the American Geophysical Union in Washington DC, USA. Attended by 150 people, it was so popular that it 'sold out' a week before the event. It was held 'back to back' with a Metasearch conference the following day, and a substantial majority of delegates attended both. The largest proportion of attendees were librarians, but vendors of library systems and publishers were also well represented. Some of the librarians already had OpenURL resolvers, whereas others were there to find out what is involved in hosting such a resolver and the benefits of doing so. Most participants were from the US, but there were also attendees from places as far afield as New Zealand, Japan, Canada, Sweden, Denmark, Belgium and the UK.
The workshop programme was comprehensive, covering both existing OpenURL technologies and the newly proposed OpenURL Framework standard (Z39.88-2004). Speakers gave the perspectives of librarians, publishers, resolver vendors and 'home-grown' resolver developers, as well as covering various aspects of the OpenURL Framework.
Following are brief reports of each of the talks during the day. The PowerPoint presentations and details of the sponsors of the event are available from the NISO Web site.
Pat welcomed attendees to the workshop, which she introduced as part of the new era of standards development. New publishing and delivery methods have raised users' expectations of the services provided. Standards are not static, cast in concrete, but organic, responsive and collaborative. Standards development by a committee involves many complex steps between brainstorming and final writing, such as requirements setting, problem solving and consensus building, not forgetting of course socialising (and, in my case, travelling!).
Eric, the chair of the NISO Committee AX that has developed the OpenURL Framework standard, formally handed over the finished Standard to Pat Harris for NISO, and introduced the members of the committee in attendance at the workshop. The committee held its final meeting on the two days preceding the conference and unanimously agreed to freeze version 1.0 of the OpenURL Framework apart from some minor editing. Pat announced that the standard will now be copy edited. It should be released in January 2004 for ballot to NISO voting members. It will then also be available for public review, reflecting NISO's commitment to a transparent, open process for standards development. It is expected that a Registration Agency to manage the OpenURL Registry will be appointed in the first quarter of 2004. The latest version of the OpenURL Framework standard is available on the NISO Committee AX Web site, along with the Implementation Guidelines for the KEV (Key/Encoded-Value) format.
Oliver gave an overview of OpenURL 1.0, indicating the advantages and new features that make it a marked improvement over the old draft version 0.1. He described the history of linking development, from early proprietary 'within service' linking to the interoperable links now expected, in fact demanded, by users. Pre-defined links can be inconsistent and require complex management. With a standard linking syntax like OpenURL it is only necessary to teach link resolvers how to link 'out' and teach services how to link to resolvers. It also makes possible 'appropriate copy' linking, so that users are not sent straight to a publisher's site.
OpenURL is about communicating information, not about what a resolver has to do. The new OpenURL Framework is self-describing, meaning it can be extended without needing to be redefined. Oliver showed how the simple draft OpenURL works, aided by an 'HTTP delivery truck' in his PowerPoint presentation. He then showed how the ContextObject has been introduced as an extensible framework, by changing the delivery truck into a flatbed truck that transports ContextObject containers.
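To make the contrast concrete, the difference between a draft 0.1 OpenURL and its 1.0 KEV ContextObject equivalent can be sketched as follows. The resolver address, source identifier and citation values are all invented for illustration; only the key names and namespace URIs come from the standard.

```python
from urllib.parse import urlencode

RESOLVER = "http://resolver.example.org/menu"  # hypothetical resolver base URL

# Draft OpenURL 0.1: a flat, unversioned set of citation keys.
v01 = urlencode({
    "sid": "demo:source",   # source identifier
    "genre": "article",
    "issn": "1234-5678",
    "volume": "12",
    "spage": "34",
})

# OpenURL 1.0 (KEV): the same citation wrapped in a self-describing
# ContextObject, with versioned, namespaced keys.
v10 = urlencode({
    "url_ver": "Z39.88-2004",
    "url_ctx_fmt": "info:ofi/fmt:kev:mtx:ctx",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.genre": "article",
    "rft.issn": "1234-5678",
    "rft.volume": "12",
    "rft.spage": "34",
    "rfr_id": "info:sid/demo:source",  # the referrer entity
})

print(f"{RESOLVER}?{v01}")
print(f"{RESOLVER}?{v10}")
```

The 1.0 form is longer, but every key declares which entity it describes (`rft.` for the referent, `rfr_id` for the referrer) and which registered metadata format its values use, which is what makes the framework extensible.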
At the point where the OpenURL Framework standard is agreed by NISO ballot, a maintenance agency and a registration agency can be set up. Once this infrastructure is in place, people could facilitate interest groups to develop new applications and new profiles for new communities.
Ed described Digital Object Identifiers (DOI), including the current status of the DOI community, and how DOIs are used in the CrossRef system for reference linking. CrossRef is used by publishers to deposit XML metadata for content identified by DOIs. They also use it to retrieve DOIs for references in journal articles, in order to embed DOI links with those references. Secondary databases and 'Abstracting and Indexing' services use CrossRef to create links from abstract records to full-text articles. Libraries, which now have free access to CrossRef, use it to discover DOIs from metadata, and also to look up metadata from DOIs. But there are administrative issues with libraries at present and more automation is needed. End users use CrossRef when they click on DOI links, and they are able to find DOIs using a free Web form on the CrossRef site. It is now even possible to search by DOI in Google.
Ed described how DOIs and OpenURL can work together. It is proposed that in future OpenURL will be the main way to interoperate with DOI services.
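One concrete way the two combine is to carry the DOI as the identifier of the referent in a ContextObject, so a resolver can de-reference it (for example via CrossRef) rather than relying on supplied citation metadata. A minimal sketch, with a hypothetical DOI and resolver address:

```python
from urllib.parse import urlencode

# A ContextObject identifying the referent by DOI alone; the resolver can
# look up full metadata for the DOI itself. All values here are invented.
query = urlencode({
    "url_ver": "Z39.88-2004",
    "rft_id": "info:doi/10.1234/example.5678",  # referent identified by DOI
})
openurl = "http://resolver.example.org/menu?" + query
print(openurl)
```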
(Or: 'How I learned to stop worrying and love standards')
' "Because" - is "because" good enough for you?'. Libraries are constrained by their budgets and so have a big interest in getting their people to the content they pay for. Hence the libraries' interest in OpenURL. Publishers will include linking if the libraries want it. But why should publishers care? 'What goes round comes around'. Links are reciprocal, some going back to the publishers, especially with metasearching, which needs standard data too. So ultimately OpenURL is to the publishers' benefit also. And reciprocity works. It may seem like a gamble to the publishers, but in fact there will be as many users coming in as going out. If this is not the case, it is not the fault of OpenURL but rather the fault of the content. But NISO needs to do some more publicity to publishers to encourage them to implement OpenURL linking.
Oren described the functionality of generic link resolvers, not specifically SFX, although inevitably his examples were from SFX.
Before the introduction of link resolvers, libraries had little say in how linking was done, but were involved in high maintenance of proprietary linking solutions. Libraries have expensive collections that were not being used optimally so users were not being well served. OpenURL makes possible links that are not hard-wired. The OpenURL linking workflow for users is: from link source; to link server menu; to link target. The link server menu is under a library's control with many possibilities of customisation. It is even possible for a library to implement direct linking, where a user doesn't see the link server menu at all, and in the OpenURL Framework there is the possibility of this being requested as a service type. The OpenURL Framework now introduces the possibility of server-to-server rather than menu-based linking.
Currently there are: 10+ commercial link servers, plus some home-grown ones; hundreds of resources actively using OpenURLs; hundreds or thousands of institutions with link servers; so hundreds of thousands of users are benefiting. The growth has been frightening.
At the heart of a resolver is the knowledge base, where a library's collections are defined, thus enabling a resolver to determine appropriate services for a user. The knowledge base contains information about: potential services; collections; rules. A resolver generally includes the capability of augmenting data where metadata is sparse or of poor quality. The resolver has to de-reference identifiers and by-reference metadata, for example fetching metadata from DOIs. The knowledge base will normalise and enhance the metadata provided by a link source, for instance determining a journal ISSN (International Standard Serial Number), which is not usually included in a reference, or expanding abbreviated journal titles. For a service to be a target in a knowledge base it must have a defined 'link-to' syntax. Currently a 'link-to' standard is missing but some services are adopting OpenURL for this. Also targets are lacking a standard for description of their collections, such as details of packages and date ranges. Source services should provide good metadata.
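The core knowledge-base lookup described above can be sketched very simply: given normalised citation metadata, the resolver checks which of the institution's subscribed targets cover that journal and the cited year. The data and target names below are entirely invented for illustration.

```python
# Hypothetical knowledge base: ISSN -> list of (target, first_year, last_year)
# describing which services hold which date ranges of a journal.
KNOWLEDGE_BASE = {
    "1234-5678": [
        ("AggregatorFullText", 1995, 2001),
        ("PublisherSite", 1999, 9999),  # 9999 marks open-ended coverage
    ],
}

def appropriate_targets(issn, year):
    """Return the targets whose holdings cover this citation."""
    return [name
            for name, start, end in KNOWLEDGE_BASE.get(issn, [])
            if start <= year <= end]

print(appropriate_targets("1234-5678", 1997))  # only the aggregator
print(appropriate_targets("1234-5678", 2003))  # only the publisher site
```

A real knowledge base also encodes the 'link-to' syntax of each target and the normalisation rules (ISSN lookup, journal-title expansion) mentioned above; this sketch shows only the coverage-matching step.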
Currently most services are based on institutional affiliation. For the future, more granular attributes of the user are needed, for example is the user: faculty, thus entitled to ILL (Inter-Library Loan); technical services; undergraduate; etc? Personalisation of services would make them really context-sensitive. Information about the requester is currently passed in environment variables, such as IP address, cookies, or certificates. More sophisticated authentication frameworks such as Shibboleth are being developed and details could be passed in the ContextObject's requester entity. The knowledge base describes an institution's collection. But collections can and will be distributed, rather than all being held at the home institution. This introduces the need for a distributed rights evaluation model. When knowledge bases become distributed it will be possible to use OpenURL to enable the interaction between link servers.
OpenURL could have other uses beyond linking. It could be used for document delivery, for instance, to query a link server to find holdings information including print versions. Such functionality could use multiple ContextObjects to return results. Link servers could be seen as a library's central repository: of ejournal subscriptions; of A-Z journal listings; providing an OpenURL generator for end-users; to populate OPACs; to integrate with research or course materials and tools; to become the centrepiece of ERM (Electronic Resource Management) systems.
Mark talked about 'Implementing an OpenURL Resolver - Challenges and Opportunities'. Why should a library system vendor implement a link server? It assists integration between the library system and the link server, providing more opportunity for customisation, and better support. The challenges with resolvers are: keeping the knowledge base up-to-date; keeping up-to-date records of 'link-to' syntaxes; dealing with variations in source data; providing services; maintaining relationships.
The OpenURL Framework introduces new opportunities and challenges: standardisation of data elements; expanded scope of linking opportunities; a larger set of services becoming possible; a potential for new applications such as linking to simulations of scientific experiments. Until now OpenURL has been seen as an end-user system but there are now possibilities for its use as server-to-server communication.
Matt highlighted the need for co-operation in future developments with:
Frances described the selection and setting up of a link resolver at LANL. Before selecting a resolver it is necessary to: list the collections in the library's electronic portfolio, ranking the importance of the resources; list the services you want to provide; and decide on the level of customisation and control you want and are able to provide. It is important to involve the whole library team because this will be a core product of the library and not just part of the systems department. After selection, staff training and system testing are very important, as is its introduction to users through focus groups and user support. Once a resolver is in place, usage statistics are vital to see how its use can be improved, as well as usage levels of particular sources and targets.
Thomas described the development of a 'home-grown' resolver within an environment where all major resources are hosted locally. Part of the reason for building their own resolver was budget constraints; the option of total customisability was also a consideration. One of the major issues is maintaining the knowledge base. The software will be released as open source soon, but it is unlikely to include a significant knowledge base of pre-loaded data. If they were looking at the issue now, they might consider a customisable vendor solution.
This talk was based on the OpenURL Framework Implementation Guidelines for the KEV format. I described the OpenURL Framework with emphasis on its support for the scholarly information community, where OpenURL began, with the San Antonio Profiles. I illustrated the talk with examples of OpenURL linking in the zetoc current awareness and document delivery service, showing how existing version 0.1 OpenURLs can be upgraded simply to version 1.0 using San Antonio Profile, Key/Encoded-Value, Inline OpenURLs. I briefly discussed the OpenURL Framework trial and zetoc's part in that. Examples from the zetoc OpenURL Trial application demonstrated the viability of using hybrid (version 0.1 and version 1.0) OpenURLs during the transition period, when it will be difficult for information sources to determine whether all their users' resolvers have been upgraded. Then I gave some pointers to conformance requirements on both referrers and resolvers.
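A hybrid OpenURL of the kind used in the transition period simply carries both key sets in one query string, so an upgraded resolver reads the namespaced 1.0 keys while a non-upgraded resolver still finds the flat 0.1 keys. A sketch with invented citation values and source identifier:

```python
from urllib.parse import urlencode

citation = {"genre": "article", "issn": "1234-5678",
            "volume": "12", "spage": "34"}

# Version 0.1 keys, readable by resolvers that have not been upgraded.
old_keys = dict(citation, sid="demo:source")

# Version 1.0 KEV keys describing the same citation.
new_keys = {
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rfr_id": "info:sid/demo:source",
}
new_keys.update({"rft." + k: v for k, v in citation.items()})

# One query string containing both vocabularies.
hybrid = urlencode({**old_keys, **new_keys})
print("http://resolver.example.org/menu?" + hybrid)
```

The cost is duplication, which roughly doubles the length of each link; the benefit is that a source need not know which version its users' resolvers support.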
Eric showed some of the similarities and differences between the functionality of OpenURLs and metasearching. He suggested ways they could be working together by giving each other 'a big hug'.
Herbert described the OpenURL Framework using the metaphor of climbing a mountain. OpenURL version 0.1 was at a camp on the first stages of the ascent. OpenURL version 1.0 represents the summit of the committee's climb while it was developing the standard. We have now descended to the base camp of the Profiles and Implementation Guidelines. Implementers need to read only the appropriate profiles and implementation guidelines not the whole standard.
Herbert gave indications of communities where OpenURL would be applicable beyond the scholarly information San Antonio profiles. The Simple Dublin Core profile could be used in any domain. The cultural heritage community is investigating implementing a virtual collection over their physical distributed collections, an application that will need context-sensitive linking. There is work in progress developing non-text metadata formats, and on using OpenURL within RSS (Really Simple Syndication). OpenURL should be disseminated to the digital library community. It could be used for integrating learning systems into digital libraries. The Semantic Web does not currently provide context-sensitive linking. This standard has the potential to change the linking experience of Web users in general. The OpenURL Registry is fundamental to the standard when expanding to other communities. A maintenance agency will be appointed by NISO and processes defined.
Use of OpenURL in current practice is menu-driven using Web pages. Herbert described some new developments being researched at LANL that go beyond current practice.
The OpenURL Framework has the potential to change the linking experience on the Web. Realising its use beyond the scholarly information community is a challenge. The registry and liaison by NISO are crucial to meeting this challenge.
Many questions from the audience arose during the day. This is a selection of some of the significant questions and their answers.
Many questions related to resolver behaviour. In fact resolver behaviour is specifically out of scope in the OpenURL Framework standard. Also the standard is not a protocol, and is not intended as a search interface although there is no proscription against its use for searching. However it is clear that some guidance is needed on the issues of resolver behaviour.
Q. What if an OpenURL sent to a resolver results in more than one item being matched?
A. This is up to the resolver. It may give the user a set of links, or it may return nothing.
Q. Service types. Can I ask for and get full text when I don't have subscription to it?
A. No. The resolver would not provide full text in this case. Service types can't be used to overcome subscription restrictions.
Q. OpenURL version 0.1 and the current metadata formats in the San Antonio Profiles of OpenURL version 1.0 do not include 'subject'.
A. The Simple Dublin Core metadata format (experimental) does include 'subject'.
Q. Is there a minimum recommendation of metadata that referrers should supply?
A. No, everything is optional and the standard is not a protocol. However, in practice minimalist source OpenURLs would not be useful. The Implementation Guidelines encourage referrers to supply as much data as they have available.
Q. Is there a length problem with hybrid OpenURLs?
A. OpenURLs over 2048 bytes sent by HTTP GET will not work in Microsoft Internet Explorer. Long OpenURLs are better sent by POST.
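The POST alternative mentioned in the answer can be sketched with the Python standard library. The resolver address is hypothetical, and the request is only constructed, not sent; the point is that the ContextObject travels in the request body rather than the URL, so no browser URL-length limit applies.

```python
from urllib.parse import urlencode
from urllib.request import Request

# A deliberately oversized KEV ContextObject (values are invented).
body = urlencode({
    "url_ver": "Z39.88-2004",
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",
    "rft.atitle": "A very long article title " * 100,  # pushes past 2048 bytes
    "rft.issn": "1234-5678",
}).encode("ascii")

# With a data payload, urllib builds a POST request; the ContextObject is
# carried in the body instead of the query string.
req = Request(
    "http://resolver.example.org/menu",  # hypothetical resolver
    data=body,
    headers={"Content-Type": "application/x-www-form-urlencoded"},
)
print(req.get_method(), len(body), "bytes in the request body")
```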
Q. Is there a way for a resolver to indicate, or send back, information about what it is able to resolve?
A. No. The standard is not a search protocol. A resolver can claim to support a profile, but this claim is made informally, not in a machine-readable way. There are machine-readable definitions of the profiles in the registry, so such a claim could subsequently be checked by machine, but this functionality is not part of the standard.
Q. One large information provider is tracking the quality of target links coming in from resolvers, the number of which is increasing dramatically. They are resolving at about a 90% rate. Is there any way of improving this?
A. Not really. This is a problem of metadata quality.
Q. What is the relationship between metadata formats and profiles?
A. Metadata formats are in the registry independent of profiles. Profiles can subscribe to a selection of metadata formats. One metadata format can be in several profiles. Profiles are really for supporting compliance claims when advertising / purchasing resolvers and referrers. A metadata format is indicated in actual ContextObjects for the particular entities described using it, whereas a profile is not.
Q. Are there plans for a SOAP binding for ContextObjects?
A. Not at the moment. This is outside the scope of the NISO Committee AX's remit. Maybe a future standard committee will define a SOAP binding.