Managing the development and delivery of electronic library services is one of the major current challenges for university library and information services. This article provides a brief overview of some of the key issues facing information professionals working in higher education institutions (HEIs). In doing so, it also picks up some of the real-world lessons which have emerged from the eLib (Electronic Libraries) programme now that it has come to a close. These lessons have been highlighted in a number of recent reports coming out of eLib, including the formal eLib programme evaluation (1).
The issues discussed here are grouped under a number of broad thematic headings.
This is a practical view of e-library issues. The term ‘electronic library’ is being used here in broad terms to mean a collection of networked digital information resources and associated technical and managerial infrastructure. The electronic library is assumed to include data and metadata in various formats which are created or assembled in order to provide a service to end users. The terms ‘electronic library’, ‘e-library’, and ‘digital library’ are used as synonyms(2).
The e-library market is an immature one and pricing models have not yet stabilised. Take e-journals for example. There is enormous variation in how e-journals are priced – individually, in subject clusters, in ‘take-it-or-leave-it’ packages. Pricing levels are often determined by complex formulae which are in many cases based on print subscriptions. The complexity associated with e-journal pricing is not unique. The criteria on which pricing models should be based for all electronic products are still very unclear. Should price be based on use? If so, how is use determined? Should it be based on size of user community? If so, how is size calculated?
The library and information community should perhaps be more proactive in developing clearer ideas of what it would regard as acceptable pricing models for e-resources. Projects such as PELICAN can certainly help with this(3), but there should be a much wider debate on this issue. We should not just leave this to the national agencies. All too often information professionals react negatively to publishers’ deals without being able to suggest clear alternatives to replace them.
One thing is clear – developing the electronic library is currently not an easy way to save money. It can often be forgotten that the Follett report (the publication of which in 1993 gave rise to eLib) was written in a context of rising student numbers and full library buildings. It identified the e-library as a possible way of saving money and space, and eLib was set up partly to address these issues. And yet eight years down the line, “one of the lessons of eLib as a whole…. is that electronic media do not save money or library space and are not likely to in the near future.”(4) As far as prices are concerned, a few examples illustrate the point. E-journal deals often cost seven or eight per cent on top of existing paper subscriptions, and often prevent cancellation of the paper. Digitised texts for electronic short loan, such as those provided by the HERON service, are clearly too expensive, and involve too great an administrative burden, for HEIs to scale them up easily(5). Emerging e-book services, such as NetLibrary, are quoting prices far in excess of paper prices(6). There is also the added problem of VAT, which is chargeable on electronic resources whereas printed publications are exempt (at least for the moment).
This situation needs to be emphasised to senior managers in HEIs who still often have a naïve view that increases in electronic resources should lead to immediate and clear reductions in expenditure. The electronic library may create new and improved services for users but it is not cheap. In fact investment in e-library facilities may for the foreseeable future require additional money. This is especially true when libraries are often expected to maintain print-based services in parallel with the electronic.
All users in the institution can usefully be kept informed about pricing issues in order to gain their support for investment in resources. Sometimes there is a surprising level of ignorance amongst users about how electronic resources are paid for (or even that they are paid for at all). In a recent University of Nottingham library user group meeting, a postgraduate student asked the question, ‘what has Science Direct got to do with the library? It’s on the Internet!’ This statement carries with it a raft of common misunderstandings: first, that the library and the Internet operate in separate domains; and secondly, that the Internet always delivers information free of charge. Because the content is free at the point of use and easily accessible to users on campus, they often forget it comes at a price (in the case of Science Direct, a very large price). More should perhaps be done to get this message across to help ensure the library is provided with funds to enhance resources.
But it is not just the level of funding but also the structure of the funding which is important. Many libraries in HEIs allocate funds to different budget heads based on a formula, taking into account factors such as student numbers, staff numbers, and average book prices. These formulae often divide up the available funds in an inflexible subject-based way. Strict subject specificity is however often inappropriate in the electronic environment. Many products come as cross-disciplinary packages of material and working out which subject pays (or part pays) for a particular resource can be unnecessarily complex. Library funding allocation models need to be constructed to ensure that libraries have the flexibility to respond to the available deals on behalf of all of their users.
Libraries may need to reappraise their overall budgeting priorities and expectations in the context of the e-library. As we moved from the ‘traditional library’ model to the ‘automated library’ model (where traditional library processes, such as cataloguing and circulation, were automated), librarians became used to spending large amounts of money on library management systems (LMSs) and on the staff to maintain them. It has been commented, however, that in the new ‘electronic library’ (which provides access to extensive data and full text online) we are expecting to provide access on the cheap. A new generation of e-library systems (such as cross-searching services) is currently being launched; these systems complement the LMS but are often far more complex than an LMS module, and yet we expect them to be much cheaper.
There is an argument that priorities in staff time should also be re-examined. Managers often expect staff to manage access to e-journal collections or catalogue web resources in their ‘spare’ time. John MacColl argues that this has got to change. “If we are to be persuasive in the world of digital information in which we now present our services as ‘hybrid libraries’….we have to be much quicker.” He goes on to say that “the cataloguing of ‘toll-gated’ electronic information – e-journals and e-books – should now be the highest priority for our cataloguing departments (or metadata units).”(7)
Systems and technical issues present major challenges. The fundamental challenge is (still) integration – bringing the different components of the library together as a coherent whole. This is the challenge addressed by the eLib ‘hybrid library’ projects, and some interesting and useful work has been done, including informative work to conceptualise the role of the library in the hybrid information environment(8). But the practical challenge has not gone away.
One specific practical question highlighted by the hybrid library projects is the relationship between the OPAC and the other elements of the e-library. Is the OPAC at the centre of the hybrid or electronic library, or is it ‘just another database’? For example, many libraries have developed their web sites to be gateways to information resources available to users. They often contain direct links to e-journals, CD-ROMs, and quality web resources. Does the library also put records for all of these items on their OPACs or are these items only accessible via the library web site? Many libraries have developed databases behind their web sites to provide access to these kinds of sources(9). The web site has become a search tool in its own right. With this, and the fact that web sites and web OPAC interfaces have increasingly been given a similar ‘look and feel’, the distinction between library web site and OPAC has become blurred.
Integration is also fast becoming a bigger problem. Libraries are no longer just dealing with digital textual resources but a wide range of different types of data. These include statistical, mapping, graphical, sound, and moving image materials to name a few. The challenge of creating a navigable online library environment which allows users to move between and around key resources continues to be a big one.
Designing ways of navigating the wide range of resources is a major challenge. The use of various cross-searching and linking technologies is now high on the agenda. Z39.50 is a major facility in this area. As a protocol designed for the exchange of bibliographic data, it is often seen as the most likely solution to the problem of integration. But it is still not widely used in the UK (or elsewhere). The question is: does it work? The eLib phase 3 ‘clumps’ projects were designed to “kick start” the use of Z39.50 in the UK HE sector. They have certainly carried out some successful project work but it is clear (partly as a result of their work) that Z39.50 is far more complex to implement than was expected four years ago. Some major problems remain, particularly with the exchange of holdings and serials data. There are also issues to do with differences in the way the protocol is implemented (although the Bath Profile(10) should do something to improve this). Perhaps one of the biggest problems is that there are major inconsistencies in the way in which MARC has been implemented in library OPACs. When the source data is this inconsistent, it becomes very difficult to implement Z39.50 in a way that guarantees reliable results.
So, does it work? Perhaps it depends on what is meant by ‘work’. Technical advances have been and are still being made. In fact all of the clumps projects are continuing their work in one form or another even though the eLib funding has dried up. Other Z projects are now also being funded by the Joint Information Systems Committee. But Z39.50 has not really proved itself as a ‘service-strength’ technology, as opposed to something used by projects. It is interesting that the recent UK National Union Catalogue investigation, after carrying out extensive tests of Z39.50, has recommended that an NUC in the UK should not be a service with Z39.50 at its core(11). Perhaps this demonstrates that the jury is still out on the long-term future of Z39.50.
New linking facilities are also beginning to emerge. Many of these provide links between different datasets – most usefully between bibliographic and full text resources. Services are now available from many database providers which allow these links to be set up for libraries. However, these linking services are often laborious to create, difficult to maintain and patchy in their coverage. Often the links provided by one data supplier do not correspond to the subscriptions of the library. This so-called ‘appropriate copy problem’ has led to the development of the new technology of OpenURLs(12). The practical value of this technology is still being investigated but it has the potential to play a crucial role in helping users navigate between accessible information resources(13). It is the basis of the Ex Libris SFX product(14).
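As a concrete illustration, an OpenURL is essentially an ordinary web link that carries citation metadata to a resolver chosen by the library rather than by the data supplier; the resolver then decides which subscribed copy to offer the user. The sketch below (the resolver address is a hypothetical example) shows how such a link might be assembled:

```python
from urllib.parse import urlencode

# Hypothetical institutional resolver address; each library runs its own.
RESOLVER_BASE = "http://resolver.example.ac.uk/openurl"

def make_openurl(citation):
    """Build an OpenURL-style link from citation metadata.

    The library's resolver, not the data supplier, decides which of
    the subscribed copies to point the user at -- this is how OpenURL
    addresses the 'appropriate copy problem'.
    """
    return RESOLVER_BASE + "?" + urlencode(citation)

link = make_openurl({
    "genre": "article",
    "issn": "0002-9327",
    "volume": "121",
    "spage": "318",
    "date": "1999",
})
print(link)
```

Because every data supplier emits the same kind of link, the library maintains its holdings knowledge in one place (the resolver) instead of configuring links separately with each supplier.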
SFX is just one of a number of commercial products which deliver new cross-searching and linking possibilities and which have come on the market recently. These include Ex Libris’ MetaLib which can be used alongside SFX(15), WebExpress / iPort from OCLC(16), and Fretwell Downing’s VDX/Agora product(17). The last of these is of course partly based on work carried out on eLib projects, particularly the Agora hybrid library project(18). They are all delivering ‘hybrid library like’ functionality which is available for the first time as commercial products rather than having to be developed in-house. They create new possibilities for libraries in delivering integrated services. It remains to be seen what kind of maintenance implications they have for individual libraries, and whether libraries will be able to afford them in the first place.
One interesting alternative to cross-searching that has recently emerged is the Open Archives Initiative (OAI) protocol(19). Rather than dynamically searching across different databases in response to a command from a user, the OAI protocol allows metadata to be harvested from OAI-compliant databases which can then be collected into a single searchable database. It is difficult to predict at this stage how widely this technology will be adopted but it has potential. Adoption may partly depend on whether a formal metadata schema for OAI-compliant datasets can be worked out in detail.
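To illustrate the harvesting model: an OAI harvester issues plain HTTP GET requests carrying a ‘verb’, and parses the XML it gets back into records for a central searchable database. The sketch below uses a hypothetical archive address, with a canned fragment standing in for the network reply:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Hypothetical e-print archive address used for illustration.
BASE = "http://eprints.example.ac.uk/oai"

def harvest_url(base, **params):
    """Build the HTTP GET request a harvester would issue."""
    return base + "?" + urlencode(params)

# The request asking for all records in Dublin Core form.
print(harvest_url(BASE, verb="ListRecords", metadataPrefix="oai_dc"))

# The harvester then parses the XML response; this canned fragment
# stands in for the real network reply.
SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>An example e-print</dc:title>
      <dc:creator>A. Author</dc:creator>
    </oai_dc:dc>
  </metadata></record></ListRecords>
</OAI-PMH>"""

root = ET.fromstring(SAMPLE)
titles = [t.text for t in root.iter("{http://purl.org/dc/elements/1.1/}title")]
print(titles)
```

The harvested records from many archives are then loaded into a single database, which is what distinguishes this approach from dynamic cross-searching: the searching happens against one local index, not across many remote targets.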
Metadata remains a crucial issue. Some of the technologies and commercial products above create the potential to cross-search records produced according to different schemas – MARC (for libraries), ISADG (for archives), Dublin Core (for web sites). The problem of the relationship between these different schemas however remains. Can they be successfully cross-searched? Some have suggested that collection level descriptions (CLDs) may partly address this problem. Searching descriptions of collections of material may be a useful preliminary to searching at the item level. UKOLN (the UK Office for Library and Information Networking), working with various partners, has developed a preliminary standard for CLDs(20), and this has begun to be implemented by some eLib and RSLP (Research Support Libraries Programme) projects(21). This work needs to be continued and extended so that CLDs begin to make a difference in institutional, rather than just project, work.
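Cross-searching different schemas in practice means ‘crosswalking’ each one into a common searchable form. The toy sketch below suggests the idea; the field choices are deliberately simplified assumptions, not a full MARC-to-Dublin-Core mapping:

```python
# Illustrative crosswalk only: a tiny mapping from two schemas to a
# common search record. Real mappings cover far more fields and
# involve many judgement calls about near-equivalent elements.

def from_marc(record):
    """Map a (much simplified) MARC record to a common search record."""
    return {
        "title": record.get("245a"),    # MARC 245 $a: title proper
        "creator": record.get("100a"),  # MARC 100 $a: main entry, personal name
    }

def from_dublin_core(record):
    """Dublin Core already uses the element names we search on."""
    return {"title": record.get("title"), "creator": record.get("creator")}

marc = {"245a": "The hybrid library", "100a": "Smith, J."}
dc = {"title": "The hybrid library", "creator": "Smith, J."}

# Once crosswalked, records from both sources can be searched together.
assert from_marc(marc) == from_dublin_core(dc)
```

The hard part, of course, is not the code but agreeing what each field genuinely corresponds to across schemas – which is precisely why the relationship between MARC, ISADG and Dublin Core remains an open problem.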
Another major technical issue that has major service implications is security, authentication and authorisation. The picture here is a complex one. The nationally provided authentication system, Athens(22), has done a great deal to improve the situation in the UK, but is still not universally used by data providers. It is also not flexible enough to be used to serve internal institutional needs. It is hoped that the Athens-replacement, Sparta, will improve the situation. If institutions could easily use Sparta for locally delivered services (such as an electronic short loan service) this would be a major step forward. At the moment, we tend to fall back on IP-range authentication. Whilst this is suitable for much use, it is not sufficient. Off-campus access is becoming increasingly important to many users. Some institutions have responded to this pragmatically by setting up proxy servers. These allow off-campus users to appear to external data sources to be on-campus. But this is only a temporary solution. The issue remains a crucial one. The place of authentication in a cross-searching world (where users want to use a number of data sources simultaneously), and the possibility of ‘quiet’ authentication (where authentication and authorisation information is passed behind the scenes without the user intervening) remain key questions.
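IP-range authentication itself amounts to little more than a membership test, which is both its appeal and its weakness: it says where the user is, not who they are. A minimal sketch, assuming a hypothetical campus address range:

```python
import ipaddress

# Hypothetical campus network range for illustration.
CAMPUS = ipaddress.ip_network("192.0.2.0/24")

def on_campus(addr):
    """IP-range authentication: is this client inside the licensed range?"""
    return ipaddress.ip_address(addr) in CAMPUS

print(on_campus("192.0.2.45"))    # on-campus workstation: allowed
print(on_campus("203.0.113.7"))   # home user: denied unless proxied
```

A proxy server ‘solves’ the off-campus problem precisely by making the home user's requests originate from an address inside the licensed range – which is why it is only a workaround, not a genuine statement of who the user is or what they are entitled to.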
The related issue of personalisation is also an ongoing challenge. Many of the eLib hybrid library projects did some useful work in this area. Much of this is now relevant for institutions beginning to look at developing portal-type facilities for their users to cover a wider range of services than just the e-library. One key issue, as with personalisation for e-library services, is the extent to which user intervention is required. Whilst users often respond positively to the suggestion that they should be able to tailor a service to their own needs, in practice most users do not do this to any large extent. Tailoring can however be done at a system level for users, presenting them with key information and services which fit their profile. Whilst this needs to be done sensitively, it is often more effective than services which require users to do it themselves. Issues of security and data protection need of course to be considered here.
Of course, the infrastructure is only there to deliver the content. This leads on to collection development. Much e-library development in HEIs has up until now been opportunistic and unsystematic. Creating an integrated collection development policy which covers all media is then an important challenge. Electronic acquisitions have to fit into the overall provision. One major issue here is the balance between print and electronic services. The question of substitution is becoming an urgent one for many libraries. Should electronic sources replace printed ones? This is a question that was addressed for bibliographic resources several years ago (who still takes the ISI Citation Indexes in print form?) but is only now beginning to be faced for full text resources. As the situation is moving all of the time, this issue needs constant monitoring. It may however become more immediately relevant in particular areas. Key teaching texts are an example. It is conceivable that a combination of copyright cleared readings and e-books may in the relatively short term be used to replace multiple copies of printed text books in short loan collections.
The procedures for selecting and acquiring e-materials still need to be more settled in many institutions. There need to be clear criteria for selection. What are the judgements which need to be made about the content and functionality of an e-resource? A large number of issues need to be considered, ranging from technical requirements through content to training and support implications(23). At the University of Nottingham we have tried to develop a clear protocol for selecting e-journals which involves a subject librarian analysing the deal against a set of criteria. Such a system is often necessary if the library has some kind of electronic resources group which makes acquisition decisions.
The selection and acquisition process itself is often far more complex for electronic materials and includes liaising with suppliers, organising trials and demonstrations, and formal evaluation. Once a decision has been made, it is ironic that the acquisition of e-resources can often take longer than that of paper ones. Sometimes just getting a price from suppliers can take several weeks. These delays are perhaps due to the inexperience of both publishers and libraries. They can however sometimes frustrate the expectations of users.
The proliferation of electronic information materials means that the relationship between publishers and libraries is changing. In broad terms, there is a movement from the use of resources being determined by public law (copyright and fair dealing) to private agreement (licences). Licence agreements are private arrangements between two parties. They place the provider in a much stronger position to specify how the information is used and who uses it. Libraries in the UK have been helped to negotiate licences assertively since much of this is done through national agencies, CHEST (for software and datasets) and NESLI (for e-journals)(24). These agencies are currently in a state of flux and it is important that whatever replaces them in the long term is able to maintain a robust stance with publishers. In particular, the need for publishers to allow off-campus and multi-campus access to materials needs to be maintained. The need for off-campus access is one of the clear lessons highlighted by many eLib phase 3 projects in their reports and is certainly the experience of those involved in day-to-day service delivery.
Once material has been acquired it needs to be managed. The management of e-journals is, for example, a particular problem. It is not possible to buy an e-journal package, make it available and then forget about it. There is always an ongoing maintenance problem. Packages seem to add and subtract titles on a regular basis. Access problems occur very frequently. Libraries have begun to develop in-house databases to streamline e-journal management for staff and delivery to users. Commercial products are also now being released. The most advanced of these is TDNet(25). Perhaps there is a role here for subscription agents. Libraries will certainly welcome opportunities to hand over some of the laborious administration associated with e-journals to a reliable third party. But whoever manages e-journal delivery, there is an argument that suppliers themselves should be providing a better service here. Suppliers should provide regular updates on the contents of the service, MARC records and assistance as standard.
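The kind of in-house database referred to above need not be elaborate: at its core it records which titles a package currently includes, so that supplier changes can be spotted. A minimal sketch, with purely illustrative package and title names:

```python
# Illustrative in-house record of an e-journal package; all names
# and fields here are hypothetical examples, not a real product.
holdings = {
    "Example Publisher Package": {
        "titles": {"Journal of Examples", "Example Letters"},
        "url": "http://journals.example.com/",
        "renewal_due": "2002-07-31",
    }
}

def titles_changed(package, latest_title_list):
    """Compare a supplier's latest title list against what we recorded.

    Packages add and subtract titles on a regular basis, so this kind
    of check is part of the ongoing maintenance burden.
    """
    recorded = holdings[package]["titles"]
    latest = set(latest_title_list)
    return {"added": latest - recorded, "dropped": recorded - latest}

diff = titles_changed("Example Publisher Package",
                      ["Journal of Examples", "New Example Review"])
print(diff)
```

Even this trivial comparison illustrates why regular machine-readable title lists from suppliers would matter: without them, every comparison starts with staff re-keying a list from a web page.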
Another important aspect of the management of e-materials is preservation. In the main, electronic resources have been acquired by libraries to satisfy immediate need. The issue of preservation of electronic materials has been side-stepped. In many areas, libraries have continued to acquire paper in parallel with e-versions. However, as the prospect of e-only versions of material becomes more immediate, the preservation issue becomes more pressing. The eLib programme, through the CEDARS project(26), has made a significant contribution to the debate. The DNER Office has also assigned a preservation responsibility to senior members of staff. There is an important role for national agencies (including JISC and the British Library) to play in preservation. However, the preservation question remains a key one for all libraries. It will have an impact on the whole direction of local e-collection development policies. Until the issue is nearer resolution, library managers will not have the confidence to move decisively from paper to electronic-only resources.
Of course, electronic resources are not only being acquired by libraries but also created by them. Libraries are increasingly active in digitising materials both to preserve the originals and to add value to them. Library staff are also increasingly involved in creating content for learning and teaching purposes. Such activity needs to be built into general collection development policies of libraries. Its importance needs to be recognised by library managers and academic staff.
Publishers have embraced the electronic future at different rates and with different levels of enthusiasm but most academic publishers are now operating in the electronic environment. They are working to maintain (or increase) their market shares and profits in this new area. In many ways, this has led to the same problems created in the print environment being transferred to the electronic. Perhaps the most significant of these is the so called ‘serials crisis’.
Statistics produced for the Consortium of University Research Libraries show that between 1986 and 2000 journal prices rose by 291%, at a time when the Retail Price Index (the standard measure of inflation) increased by less than 70%. Journal price inflation is still running at about 8% per year. E-journals have not helped alleviate this. Although many e-journal packages provide access to additional titles, they typically involve at least 7% on top of existing print expenditure. Publishers have shown determination to lock their content away in subscription-based databases. And yet, paradoxically, this is content largely produced by members of HEIs who would like to see their work disseminated as widely as possible. The situation is becoming increasingly unsustainable.
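The compound annual rates implied by these figures can be checked with a few lines of arithmetic:

```python
# Compound annual growth rates implied by the 1986-2000 figures above.
journal_factor = 1 + 2.91   # journal prices rose by 291%
rpi_factor = 1 + 0.70       # RPI rose by (less than) 70%
years = 2000 - 1986

journal_rate = journal_factor ** (1 / years) - 1
rpi_rate = rpi_factor ** (1 / years) - 1
print(f"journals: {journal_rate:.1%} per year, RPI: {rpi_rate:.1%} per year")
```

This gives roughly 10% a year for journals against under 4% for general inflation over the period, consistent with the quoted figure of about 8% for current journal price inflation.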
Library and information professionals can respond to this in a number of ways. Negotiating vigorously with publishers is one way already mentioned. Perhaps academics working as journal editors and editorial board members might also be encouraged to engage more with their publishers about the price of the journal. At the same time support should also be given to initiatives such as SPARC which has led on a number of new publishing ventures which create more competition in the publishing market(27). In some cases, SPARC has set up new titles specifically to compete ‘head to head’ with existing expensive titles. As SPARC is soon to have a European base, UK libraries should be active in supporting it.
Another possibility is the e-print initiative. Since this has the potential to create a new paradigm in academic publishing, it is appropriate to deal with it in a little detail here. It is now possible to create cross-searchable databases of research papers and make them freely available on the web. The Open Archives Initiative has created the technical standards to do this, and ‘eprints.org’ has now provided free software to install OAI-compliant ‘e-print archives’(28). E-print archives are already well established in a few subject disciplines such as Physics (with the Los Alamos ArXiv service)(29) and Cognitive Science (the CogPrints archive)(30). Some institutions have also begun to set up archives. At the University of Nottingham, we have set up an experimental archive using the free software and are currently devising strategies to encourage academic staff to archive their papers(31).
One key issue here is copyright. Some publishers allow authors to retain the copyright for their papers and permit them to deposit material in public archives. Others do not. Authors should be encouraged to insist on retaining copyright, or at least the right to deposit a copy of their paper on a not-for-profit e-print server. Where publishers do not allow this, it is now widely accepted that authors can deposit a pre-refereed version of the paper, followed by post-refereed corrections, on an e-print server without contravening copyright. Such approaches have been championed by Stevan Harnad, who has written widely on the subject and has been behind the setting up of the CogPrints archive and eprints.org(32).
Harnad is insistent that ‘self archiving’ can and should begin now. Authors should be encouraged to continue to submit papers to peer-reviewed journals as well as placing their papers on e-print archives. He suggests that this will in the medium term pare down the role of publishers to organising the peer review process and adding value to the basic content. It might also be suggested that the long-term future of peer review itself remains a matter of debate. At present it is wrapped up closely with the publication process, but this need not be the case. It is conceivable that in future peer review could be provided by professional or learned societies, who might provide a ‘kite mark’ for a publication which could then be published on an e-print archive.
Although the future shape of scholarly communication still remains unclear, what is clear is that library and information professionals have a key role to play here. First of all, the library is the natural home for an institutional e-print service. Libraries have traditionally managed the key academic information resources of institutions. In the short term, librarians should be active in installing e-print servers locally and smoothing the path for academics to contribute to them. Secondly, librarians should take the role of raising awareness about these issues amongst their academic colleagues and institutional managers. As the professionals who currently interface between publishers and academics, library staff are in a good position to be well informed on the issues.
Strategies need to be devised to encourage academic colleagues to contribute to e-print archives. A key question that needs addressing is: what is the hook? How can academics be encouraged to contribute? How can their fears that contributing to e-print archives threatens their ability to publish in high impact journals be allayed? Librarians need to establish a dialogue with academics to address these issues. This means finding out what they want and need, rather than simply making assumptions about how we think they ought to work. Engaging with institutional managers is also important. The more senior managers can be persuaded to embrace the institutional archiving concept, the more they will encourage staff on the ground to contribute. Librarians have to take this advocacy role seriously.
One of the concerns about electronic resources is ‘market penetration’. To what extent are they actually being used? Who is using them? Who is not using them? In order to help to ensure that quality resources are used widely and effectively it is important to have marketing strategies in place and also information skills training opportunities. It is also essential to have good formal and informal liaison mechanisms set up to ensure that the resources purchased are the right ones and once they are purchased that they are used.
The complexity of the current information environment means that users more than ever need assistance in navigating the resources. In particular, users need assistance in identifying high quality resources. Whilst under-use of e-resources may be a problem amongst some users, for others, indiscriminate over-use is the difficulty. Some of the recent results of research carried out by the JUSTEIS project show the extent to which students rely on information on the web(33). All too often the information they are using is of a questionable quality when information in other places and in different formats may be better. Some users (particularly some students) are not being critical enough about the information they are encountering on the web. There is a crucial role to play here for information professionals in guiding and training users to help them find and use high quality (both free and paid for) resources.
Once resources are there, it is essential to have good support systems in place. Universities are usually good at buying and making available resources but often do not invest in support. At the University of Nottingham, a recent consultation exercise with heads of schools about their information needs demonstrated that by far the most popular stated requirement was more support for IT use. This is of course a much broader requirement than just electronic library services but is nevertheless closely connected with them. Many users feel they are not adequately supported in their use of IT-based resources and this means they are not using those resources, or if they are, they are not using them as effectively as they might. Support can take various forms, including FAQ services, knowledge bases, well constructed feedback forms and so on. But there is of course still no alternative to people. As well as face-to-face help desks, telephone help lines are often popular with users for IT-based resources. Some libraries are also experimenting with live online reference desks which use ‘chat room type’ functionality.
Provision and support has also to extend to off-campus use. At Nottingham, we have a pilot electronic short loan service and the clear message coming out of evaluations is that students want off-campus access (something which we are now planning to do). This message comes from full time ‘traditional’ students, as well as the growing numbers of ‘non-traditional’ students. The need to support these latter students in their use of these resources is also acute.
The needs and behaviour of these and other users are often not fully known. All too often the views of information professionals on user behaviour are little more than ‘best guesses’. More could perhaps be done to acquire evidence about needs or at least monitor usage of resources more systematically. Usage statistics are now available for many e-resources from suppliers. At the same time, information professionals ought to perhaps try to ensure greater clarity and uniformity in these statistics across different providers. At present, most usage statistics from web resources have to be treated with caution especially when trying to draw comparisons across different services. This makes the production of reliable performance indicators for e-libraries a challenge, although useful work has been done in this area by groups, such as the SCONUL Advisory Committee on Performance Indicators and the EQUINOX project(34). Quality assurance in the e-library remains a major issue.
Many issues associated with the delivery of electronic library resources do not involve the library alone. Libraries now more than ever deliver their e-services in the context of a wider ICT-based provision. Electronic library delivery relies on the infrastructure, hardware and expertise provided by university computing services. It is therefore essential that the library works in close partnership with the computing service. This partnership can take a variety of forms, ranging from organisational merger to strategic alliance, and will vary from institution to institution. But whatever form it takes, it must be ‘an ever closer union’ if users are to be properly served(35).
Whether or not they are part of formally merged organisations, libraries often need to remould themselves in order to deliver electronic and hybrid library services more effectively. Greater team and project working, flatter structures and improved communication channels are among the key developments which libraries are pursuing to this end(36).
The importance of partnership between different information professionals is highlighted by the new campus-wide services which are beginning to emerge. In particular, the development of Managed Learning Environments (MLEs) is now at the top of the agenda for many institutions. MLEs are systems which facilitate the learning and teaching process by providing resources and tools online. They will normally include a Virtual Learning Environment (VLE) tool and integrate it with student record databases and other campus information systems(37). As such they form a central part of the so-called ‘web-enabled campus’ and the ‘e-university’.
One of the key strategic questions that has to be addressed is: what is the relationship between the electronic library and the MLE? This is one of the key questions to emerge from eLib phase 3. When the hybrid library and clumps projects were conceived, it was seen as important to try to integrate various systems and services within the library’s provision. As the projects progressed, it became clear that integration was a much bigger challenge than this. As well as integrating services specifically within the remit of the library, it is also necessary to integrate these with other university systems. But how? This question has been recognised as so important at a national level that JISC has funded the INSPIRAL project to investigate the concepts involved(38). The ANGEL project is also working on some practical answers to the question(39).
Nevertheless, it is crucial that institutions address this question at a local level as well. In practical terms this often requires library staff to be actively involved in MLE development at various levels. Senior members of library staff should be involved at a strategic level on the university MLE group (if one exists), and other staff should work on particular MLE implementations alongside teaching and technical staff. All too often different people are involved in e-library and MLE development, which can mean that the connections are missed.
Local e-library development takes place in the context of ongoing national initiatives. In particular, the development of the Distributed National Electronic Resource (DNER) is important in e-library service delivery(40). Many of the key strategic, technical and content issues associated with the DNER have now begun to be addressed in detail(41). As the DNER develops, the crucial question for institutions is: what is the relationship between the national and the institutional provision? How will institutions integrate DNER services with their own? This will be one of the key questions for library and information services to address in the next year. An immediate example is the Resource Discovery Network, which co-ordinates the work of the various subject hubs (most of which began life as eLib projects)(42). How do resources available through the RDN fit with locally-produced web gateways? Perhaps we need some more exemplars in this area.
The national situation is also changing. We are used to the Joint Information Systems Committee (JISC), which funds the DNER, being active in e-library development. But a new group funded directly by the Funding Councils has recently been set up. This group, the Research Libraries Strategy Group, is chaired by Sir Brian Follett and it is expected that its work will have a significant impact on library (including e-library) provision in many areas alongside that of JISC. It is expected, for example, that it will look at the possibility of taking forward a UK National Union Catalogue and also at ways in which the British Library can collaborate to a much greater extent with HEIs(43).
The implications of all of this for library and information services staff are profound. Information professionals are now required to take on a wider variety of roles, demanding a broader range of skills, than perhaps ever before. A number of eLib projects have helped to highlight these issues in recent years. Library managers need to address how staff are recruited, trained and retained to carry out this work. Many eLib projects reported problems recruiting and retaining staff with the right skills, and this is a general problem across the sector which needs to be addressed. Thought also needs to be given to staffing structures, which are often biased in favour of traditional library roles; electronic library roles are often tacked on to the side of organisations. In many organisations there may be a need to review the fundamental organisational structure to see whether it is best able to deliver the wide range of services now required.
These services involve library staff taking on new roles. The librarian is now:
And these roles require a wide range of new or enhanced skills, including:
Some of these are hard skills (such as technical knowledge); others are soft skills (such as vision). All are important to have in the organisation.
The last ten years have seen enormous change in library and information services. National initiatives, such as eLib, have helped to facilitate rapid development. But change is set to continue in the next decade. Libraries will be expected more than ever to be fast-moving, innovative organisations which can still deliver stable services. Achieving this will involve energetic technical and content development. But it will also involve developing organisations with the right staff, with the right skills, working in the right structures. It is in this way that we will be better able to support the needs of our users.
This paper is based on a presentation first given to a CALIM (Consortium of Academic Libraries in Manchester) seminar in March 2001. Thanks to those involved for their feedback. Thanks also to Dr Mike Gardner, Dick Chamberlain, John MacColl and Philip Hunter for comments on drafts of the article.
Stephen Pinfield is Academic Services Librarian at the University of Nottingham. Before this he managed the BUILDER hybrid library project at the University of Birmingham. He is acting as part-time Programme Consultant to the DNER Office, overseeing eLib Phase 3 projects.