A crisp spring day in Atlanta saw 50 participants gather for a series of presentations at the well-equipped and comfortable Georgia Tech Global Learning Center, Atlanta, Georgia, USA. They came from libraries, including many from the GALILEO consortium; from vendors, including sponsors Ex Libris and Innovative Interfaces, Inc.; and from content providers such as JSTOR. The agenda was an interesting mix of perspectives on a theme, switching focus between information resource users, particularly students, and how studying and interacting with them can inform our discovery and delivery systems, and the 'behind the scenes' details of those systems: technologies and standards such as OpenURL and SSO (Single Sign-on), the improvements needed to deliver more seamlessly what users want, and the development of new services such as bX recommender and BookServer. Many of these behind-the-scenes issues centred on metadata quality and interoperability.
NISO Managing Director Todd Carpenter introduced the talks and the theme. The Education Committee wanted a programme that got back to the basics: technology should 'get out of the way and give the users what they want'. He used a light switch analogy for the kind of seamlessness today's users are expecting.
Joan Lippincott's overview of college students as information users - their behaviour, attitudes and changes over the years - was grounded in her background as both provider (she was formerly head of Public Services at Cornell) and, more recently, consumer (researcher) of information services. It also drew on a substantial body of research data and reports in reaching its conclusions and recommendations. These documented trends amongst the 'Net Gen' (born 1982-1991) students, such as the rise in use of cell phones and laptops, increasingly participative and collaborative learning, and the importance of social networking. Student attitudes support the need to 'make search simpler': 80% think they are good searchers, and while they use databases in addition to Google, students usually stick with a small number of databases with which they're familiar.
Embedding trusted resources and services 'where the users are', with seamless navigation and simple, easy-to-follow instructions is essential to reach these users. Developers have to get out of their silos and listen to both librarians and users. Her recommendations were as follows: provide combined access to both purchased and freely available resources on the Web, such as other institutions' digitised special collections; offer more lively and interactive displays of information; provide more methods of discovery beyond indexes and catalogues, such as exhibits and visualisations; consider new technologies such as QR codes; provide more external links and social engagement, for example through tagging; and promote your institution's digital publications with links in socially developed resources like Wikipedia. Discussion after the presentation ranged from the technical barriers to connecting library applications with some learning management software (e.g. Blackboard), to Web site navigation links (Joan remarked, 'We've lost the right side - students think it's for ads!').
Phil Norman manages the OpenURL Maintenance Agency at OCLC. He used his presentation to explain some key concepts of the OpenURL Registry, with the goal of enabling institutions to make better use of the registry to optimise search and discovery services for their users.
Information contained in the OpenURL Registry can be used as a reference and implementation guide for developers. For service providers, it can function as a catalogue of what is already available, as well as a tool to create new applications. The registry uses the OpenURL 1.0 protocol to create OpenURL framework applications. Norman explained the importance of citation metadata in OpenURL ContextObjects and cited this data mark-up as the method by which link resolvers parse information to serve out the appropriate copy of a requested resource.
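Norman's point about citation metadata in ContextObjects can be made concrete with a small sketch. The resolver base URL and citation values below are invented for illustration; only the key names and the Z39.88-2004 version identifiers follow the OpenURL 1.0 KEV (key/encoded-value) convention.

```python
from urllib.parse import urlencode

# Hypothetical institutional link resolver; not a real service.
BASE_RESOLVER = "https://resolver.example.edu/openurl"

# Citation metadata for the referent, expressed as OpenURL 1.0 KEV keys.
referent = {
    "url_ver": "Z39.88-2004",                       # OpenURL version
    "ctx_ver": "Z39.88-2004",                       # ContextObject version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # journal-article metadata format
    "rft.atitle": "Seamless discovery services",    # invented article title
    "rft.jtitle": "Journal of Library Metadata",    # invented journal title
    "rft.issn": "1938-6389",
    "rft.volume": "10",
    "rft.spage": "1",
    "rft.date": "2010",
}

def build_openurl(base, fields):
    """Serialise citation metadata as an OpenURL query string."""
    return base + "?" + urlencode(fields)

link = build_openurl(BASE_RESOLVER, referent)
print(link)
```

A link resolver receiving such a URL parses these same keys back out to decide which copy of the article the requesting institution is entitled to.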
Norman explained the core components of ContextObjects and talked about the relevant encoding schemes required for contained elements. Additionally, he discussed the four currently existing community application profiles. Included among them is the RTM (Request Transfer Message) profile approved last year. This profile will be used for directing loan, copy, access to look-up, or item digitisation requests to Resource Delivery Systems and/or Electronic Resolver Systems.
Norman outlined the lightweight, three-step process of creating a registry entry and suggested that members start with an idea or an existing application in need of standardisation. He advised interested parties to solicit advice from members of the information and technology communities.
Adam Chandler followed Norman's discussion of the OpenURL registry by elaborating on some specific findings of the OpenURL Quality Metrics project at NISO. Working closely with the recommendations put forth by phase I of the KBART (Knowledge Base and Related Tools) project, group members are working to optimise the linking capability of OpenURLs by informing all stakeholders with a set of best practice guidelines.
Chandler gave a demonstration of the reporting software which was used to analyse the content of nearly 4.5 million OpenURLs. The process compares the link-to metadata supplied by content providers against that of an institutional knowledge base. The resulting metrics indicate the frequency of certain core elements appearing in OpenURLs generated from a wide range of providers. By examining the most granular aspects of OpenURL metadata, it is possible to determine what missing or incorrectly formatted elements are causing links to break.
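As a toy illustration of the kind of analysis Chandler described, the sketch below counts how often a handful of core citation elements appear across a batch of OpenURLs. The sample links and the 'core' element list are invented for illustration; they are not the project's actual data or element set.

```python
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Illustrative subset of core citation elements; the real project
# analyses a much richer set.
CORE_ELEMENTS = ["rft.issn", "rft.volume", "rft.spage", "rft.date"]

# Invented OpenURLs standing in for provider-generated links.
sample_openurls = [
    "https://resolver.example.edu/openurl?rft.issn=1234-5678&rft.volume=5&rft.spage=10&rft.date=2009",
    "https://resolver.example.edu/openurl?rft.issn=1234-5678&rft.date=2009",
    "https://resolver.example.edu/openurl?rft.volume=7&rft.spage=33",
]

def element_frequencies(openurls):
    """Count how many OpenURLs contain each core element."""
    counts = Counter()
    for url in openurls:
        params = parse_qs(urlparse(url).query)
        for key in CORE_ELEMENTS:
            if key in params:
                counts[key] += 1
    return counts

freqs = element_frequencies(sample_openurls)
for key in CORE_ELEMENTS:
    print(key, freqs[key], "/", len(sample_openurls))
```

Low frequencies for an element such as `rft.spage` would flag providers whose links are likely to break, which is precisely the kind of finding the best-practice recommendations aim to fix.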
Conducting such a detailed analysis on core element frequencies will guide the project team in developing recommendations for standardising metadata used in OpenURL knowledge bases. Chandler later remarked that focusing more on correcting or enhancing OpenURL metadata in the initial stages of implementation will result in a more seamless discovery process.
After two years, the Quality Metrics project will undergo evaluation and then a decision will be made as to whether the group will continue its work. In conjunction with the KBART recommendations, the group seeks to increase communication between content providers and their consumers and to co-ordinate a systematic method for metadata standardisation and updates. They are open to contributions from others, especially those who may have useful data.
Ameet Doshi thanked the organisers for the excellent lunch, and introduced Bob Fox who started off a lively and informative presentation by giving some history of why Georgia Tech Library's Student Advisory Board was formed and how it has evolved. After-the-fact comments from students on how a library renovation could have been done better prompted formation of the Board in 2005. Student membership eventually expanded from 8 to 17, to allow for unavoidable absences from the monthly meetings and so maintain momentum. The Board had some independence and students were to nominate their successors, but this proved problematic and the successors are now chosen from nominees furnished by the Library with suggestions from a scholarship committee, allowing for vetting and diversity of representation across a number of factors. They try to choose talkative contributors and leaders, then turn them loose but organise things to keep them involved: 'If you have food, a compelling agenda, and then follow through on ideas, they will come.' Email communication is important to keep students who may miss a meeting in the loop and to furnish quick response from the Library on student suggestions.
Dottie Hunt described the design charrette used by students to mock up their suggestions for space redesign for two renovation projects. She also shared videos of Board meeting participation and of Board members video-interviewing other students to capture how they used the Library's Web site and resources, using flip cameras made available to them.
Ameet Doshi concluded with some observations and points of best practice. Keeping a 'cohort' of students together on the Board for 2-4 years fostered commitment and ultimately, advocacy for the Library. When Board members heard that 24-hour service might be curtailed due to budget shortfalls, they took the initiative to start gathering student signatures on petitions to keep it open. Doshi said there is also a faculty advisory board; they tried combining the boards and found they have higher student involvement when the student board is separate. As regards return on investment, the library believes the Board's activity contributed to positive perceptions by students that the Library is actively listening, as shown in a recent LibQual survey.
Lagace of Ex Libris gave an overview of the bX™ recommender service and explained the rationale behind the aggregated harvesting of usage data for scholarly articles. Recommender services on commercial Web sites have proven to be of great assistance to consumers, suggesting similar items based on what past consumers have searched for and ultimately purchased. This type of service is now being extended to scholarly researchers, pointing them to specific articles of likely interest based upon their search parameters.
bX™ recommender is a product of the research done at Los Alamos National Laboratory by Johan Bollen and Herbert Van de Sompel, based on the mining and structural analysis of aggregated usage data. Linking servers, such as SFX, contain histories of digital scholarship across a multitude of databases. This usage data can be harvested through OAI-PMH (Open Archives Initiative Protocol for Metadata Harvesting) and analysed as click-through data indicative of relationships between scholarly resources. Over time, these patterns become representative of the research activities of scholars working within specific research domains and they are employed to present the user with similar items based on their search path.
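The mining idea can be caricatured in a few lines: treat articles requested within the same link-resolver session as related, and rank recommendations by how often pairs co-occur. The session log below is invented, and the real bX analysis is of course far more sophisticated than this sketch.

```python
from collections import defaultdict
from itertools import combinations

# Invented click-through sessions: each inner list holds the articles
# a single user requested via the link resolver in one session.
sessions = [
    ["doi:10.1000/a", "doi:10.1000/b", "doi:10.1000/c"],
    ["doi:10.1000/a", "doi:10.1000/b"],
    ["doi:10.1000/b", "doi:10.1000/d"],
]

def cooccurrence(session_log):
    """Count how often each pair of articles appears in the same session."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in session_log:
        for x, y in combinations(set(session), 2):
            counts[x][y] += 1
            counts[y][x] += 1
    return counts

def recommend(article, counts, n=3):
    """Rank related articles by co-occurrence with the given one."""
    related = counts.get(article, {})
    return sorted(related, key=related.get, reverse=True)[:n]

counts = cooccurrence(sessions)
print(recommend("doi:10.1000/a", counts))
```

Articles that repeatedly turn up in the same sessions rise to the top of the list, which is how aggregated usage patterns come to mirror the research behaviour of a scholarly community.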
bX™ services can be incorporated into any library's user environment whether or not it is using SFX. Although it is not required, institutions can also contribute their local usage statistics from their own SFX logs, the idea being that the higher the number of contributors, the higher the quality of the recommendations. bX™ currently benefits from over 200 implementations worldwide.
Besides receiving specific article recommendations, users of bX™ may discover new search terms or explore new fields of interest based upon their initial query. Additionally, bX™ is offered as an on-demand service, so it does not have to be installed or maintained locally. Once enabled, it displays within the menu of services offered by the library's link resolver, or it can be accessed through an API and delivered in formats such as Atom and RSS. Lagace demonstrated the integration of bX™ into the interfaces of several discovery environments such as Xerxes, Primo, and Google Scholar.
Aggregated discovery usually means that there are several different network platforms that a user must sign on to in order to access resources on the Web. NISO's Single Sign-on (SSO) Authentication Working Group is developing a set of recommended practices to improve single sign-on functionality in hopes of creating seamless access across all service providers. The Working Group's goal is to provide SSO functionality across licensed resources and authenticated portals. A set of deliverables has been defined in order to achieve this goal. Terminology will be analysed from across Web authentication processes and login procedures will be classified. The group hopes to develop a set of best practice recommendations for the relationships between users and service providers.
Another issue related to authentication across domains is the lack of a user interface allowing for a more seamless user experience. Existing SSO technologies like Shibboleth make it possible for the user to maintain one identity across all systems. Such technologies work reliably when sites are compliant and authentication is somewhat automatic. Yet, there are many factors to consider. Users should have the ability to establish a home site, preferred service provider and automatic login as well as an automatic trigger to aid the discovery process.
The group is focusing on implementation of SSO in federated search systems because of the array of challenges associated with points of access. They are creating use cases that will detail how a user arrives at a service and what login process is required to access the information.
The group will promote its set of best practices for SSO access by eventually developing a promotional plan but Kaplanian cited some problems. Standardising terminology across different domains has been difficult. Varying levels of technical or business knowledge associated with parts of the process have highlighted the need to communicate across a network of campus administrators and IT stakeholders.
Peter Brantley described BookServer, a community-driven project, which is sponsored by the Internet Archive. The project aims to provide an open architecture for discovery and delivery of e-books, including purchase, borrowing or delivery of free content, from any source, to any device.
Brantley gave a little history of how and why the idea developed, spurred by reaction to Amazon, the Google Books settlement and the development of the e-book marketplace. It's no surprise that users are confused and frustrated over all the differing formats, publishers, devices, discovery pathways, modes of acquisition and DRM (Digital Rights Management) methods.
The community is drafting the Open Publication Distribution System (OPDS) XML standard, a very simple implementation built on the Atom format, using Dublin Core descriptive elements in preference to the more complex and publisher-centric ONIX. Using an existing schema is allowing them to get the 'Catalogue', or manifest, component off the ground quickly. They are working on tools for transforming ONIX, MARC, and spreadsheet data to this format. 'Catalogue' feeds could be assembled and pushed out in a number of ways, indexed using OpenSearch, sorted and faceted, potentially tying in reviews, recommendations, fan fiction... there are many possibilities.
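As a rough illustration of what such a 'Catalogue' feed looks like, the sketch below builds a single Atom entry carrying Dublin Core terms, in the spirit of the OPDS draft. The titles, identifiers and acquisition link are all invented, and a real feed carries considerably more (timestamps, pagination, multiple link relations).

```python
import xml.etree.ElementTree as ET

# Atom and Dublin Core Terms namespaces used by OPDS catalogues.
ATOM = "http://www.w3.org/2005/Atom"
DC = "http://purl.org/dc/terms/"
ET.register_namespace("", ATOM)
ET.register_namespace("dc", DC)

# A minimal feed with one book entry; all values are invented.
feed = ET.Element(f"{{{ATOM}}}feed")
ET.SubElement(feed, f"{{{ATOM}}}title").text = "Example OPDS Catalogue"
ET.SubElement(feed, f"{{{ATOM}}}id").text = "urn:example:catalogue"

entry = ET.SubElement(feed, f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "An Example Book"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:example:book:1"
ET.SubElement(entry, f"{{{DC}}}language").text = "en"   # Dublin Core, not ONIX
ET.SubElement(entry, f"{{{DC}}}issued").text = "2010"

# The acquisition link tells a reading application where to get the book.
link = ET.SubElement(entry, f"{{{ATOM}}}link")
link.set("rel", "http://opds-spec.org/acquisition")
link.set("href", "https://books.example.org/1.epub")
link.set("type", "application/epub+zip")

xml = ET.tostring(feed, encoding="unicode")
print(xml)
```

Because the feed is plain Atom, any existing feed reader or aggregator can already consume it, which is much of the appeal of building on an existing schema.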
Some of the same challenges surface here as in other projects described in this forum: metadata need to be 'cleaned up' and standardised; matching titles (for works) is not easy; identifiers are key. Other obstacles needing consideration are: publishing is segmented by geography and language; DRMs are messy; standardising vending and lending workflow is difficult.
There are many potential applications and benefits to the standardisation envisioned in this project. Some publishers and institutions such as the Library of Congress are taking an interest, and the group is open to more participants. The question/answer session showed listeners' interest in independent e-book reader software and in how OPDS could be used for any kind of content (even library catalogues), as well as in services which could include print-on-demand or centralised search of aggregated databases.
This was a stimulating and informative forum that maintained a dual focus on information users and on the essential (and mostly voluntary) work that goes on behind the scenes to develop and promulgate standards that will provide new and improved 'seamless' discovery and delivery services.