I am pleased to introduce the content of Issue 68, and to take the opportunity to remind you that you now have far more channels into the publication’s content. You can use the Archive (for back issues), Authors or Articles tabs on the front page to search for material or information (in addition to the general search field, top right); casually browse the material offered by Today’s Choice; view the passing articles in the Gallery block to the right; or drill down into the Gallery from its tab. You will also note that it is possible to browse the Keywords that have been generated across all the content of the Magazine. In addition, you will find FAQs on the new functionality to explain how it is best employed. I trust you will find that Ariadne is considerably better connected as a result of this development, and I sincerely hope it will encourage you to range farther afield within its content, whether recent or not.
I am also announcing that from Issue 69, Ariadne intends to publish material under Creative Commons licensing. We trust that this will meet with general approbation and hope that you will consider writing to offer or discuss a contribution of your own.
Data Citation and Publication
At a time when the true importance of datasets in support of research is becoming increasingly apparent, Sarah Callaghan, Roy Lowry and David Walton point out in their article Data Citation and Publication by NERC’s Environmental Data Centres that data are the basis on which scientific endeavour operates. Yet as data have become more and more abundant, the difficulty of publishing them has grown commensurately. They also point to the importance of uninterrupted access to data, and how that is not always easily maintained when they are published on the Web. Furthermore, where such data can no longer be accessed, they cease to fulfil their role of validating the research that produced them.
The authors describe the role of the six NERC-funded data centres in supporting researchers in the management of their data; and the importance of the effort involved in ensuring that datasets archived are supported by the contextual information such as calibration and location data which ensure they remain of use to researchers far into the future.
This article is about their approach to promoting access to data while at the same time working to ensure that data creators are properly credited for the considerable work involved. It describes the path they took towards implementing sound data citation, though the authors do not claim they have achieved perfection. The authors provide us with some background to the work they are undertaking by examining projects funded by JISC which have preceded and informed their own. They also explain some of the issues surrounding the publication of data with which data centres need to engage.
They point out how the traditional versioned publishing model sits uneasily with the ever-changing nature of data for publication. They also provide an overview of BODC pilot projects centred on the concept of the BODC Published Data Library. They explain the principal resources of data from within and outside BODC, their attendant requirements for publication and the procedure through which the Published Data Library works to publish them.
The authors go on to describe the involvement of data centres in the publication of data in two different areas. The first is the day-to-day serving of data, and the Publication of datasets that have been enhanced and strengthened by a process of peer review and management. But they also point to the need to bridge the gap between the mundane provision of data and the enhanced process of data Publication by formalising a method of dataset citation.
In the context of NERC data centres’ adoption of DOIs for datasets, they also draw a distinction between the technical quality of datasets destined for peer review and publication as against the scientific quality of their content which is evaluated by scientists with the relevant expertise. But the assurance that the dataset meets the necessary technical standards in terms of format, access, etc means that peer review can be undertaken.
They provide an overview of the principal technical requirements relating to datasets, in particular the format of files, together with a description of the associated XML metadata documents. The authors go on to detail the responsibilities data centres undertake when they assign a DOI to a dataset, including the proper formulation of landing pages. They also provide a number of prescriptions in respect of such pages to ensure not only that users can obtain as much information as possible, but also that the correct dataset is identified.
The authors explain the structure of citations and DOI-specific metadata employed by the NERC centres and examine the principles that govern the assignment of DOIs, and how centres approach the whole matter of revised datasets as well as the granularity involved in assigning DOIs to measurements in datasets. At the same time, they also realise that some datasets become scientifically significant before they reach maturity and can be frozen forever. They explain how ongoing sources of data can be handled and how their DOI assignment is best approached.
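The general shape of such a dataset citation can be illustrated with a short sketch. Note that the field names, citation layout and the DOI below are illustrative assumptions for this editorial, not the actual NERC or DataCite schema:

```python
# Illustrative sketch: assembling a human-readable dataset citation from
# a handful of DOI-related metadata fields. The field names, layout and
# DOI are assumptions for illustration, not NERC's actual scheme.

def format_citation(metadata):
    """Build a dataset citation string from a metadata record."""
    authors = "; ".join(metadata["creators"])
    return (f"{authors} ({metadata['year']}). {metadata['title']}. "
            f"{metadata['publisher']}. doi:{metadata['doi']}")

record = {
    "creators": ["Callaghan, S.", "Lowry, R."],
    "year": 2011,
    "title": "Example ocean temperature dataset",
    "publisher": "British Oceanographic Data Centre",
    "doi": "10.5285/abcd-1234",  # hypothetical DOI
}

print(format_citation(record))
```

The point of assigning a DOI is precisely that the final element of such a string resolves, via a landing page, to the frozen dataset it describes.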
Open Educational Resources
In his article Delivering Open Educational Resources for Engineering Design, Mansur Darlington describes two methods for presenting online OERs for engineering design that were explored and developed as part of the Higher Education Academy/JISC-funded DelOREs (Delivering Open Educational Resources for Engineering Design) Project. He points out how little of the material of potential use to staff and students actually comes with effective guidance on its educational exploitation. He describes the aims and work of the DelOREs Project and how its two collections have been mounted for dissemination and management purposes. He also raises the central difficulty of obtaining sufficiently clear information, particularly licensing details, with which to make discovery and adaptation of OERs a simple task. He offers an incisive view of the surrounding culture and the changes necessary if OER collections are to thrive.
In this article, Mansur also explains the background behind the choice of software to present the DelOREs Selections material, the configurability and extensibility of WordPress being major determinants, together with many users’ familiarity with it. Mansur goes on to explain how descriptions of OER resources are created, how such information is organised, and the effective exploitation of RSS to disseminate them. A subject-specific taxonomy designed by team members underlies this description process. Furthermore, the ‘custom fields’ functionality of WordPress affords, he maintains, the opportunity to record other data, while other familiar aspects of WordPress are also moulded to these description purposes.
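Dissemination by RSS means that downstream services can harvest the resource descriptions mechanically. A minimal sketch of consuming such a feed follows; the feed content here is an inline made-up sample rather than the actual DelOREs feed:

```python
# Minimal sketch: reading OER resource descriptions from an RSS 2.0 feed
# of the kind WordPress generates. SAMPLE_FEED is a made-up example; a
# real consumer would fetch the live feed URL instead.
import xml.etree.ElementTree as ET

SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Engineering Design OERs</title>
    <item>
      <title>Gear design tutorial</title>
      <link>https://example.org/oer/gears</link>
      <description>An open tutorial on gear design.</description>
    </item>
  </channel>
</rss>"""

def read_items(feed_xml):
    """Return (title, link) pairs for each item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(read_items(SAMPLE_FEED))
```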
He explains that while the Selections content is static, its stable-mate DelOREs Extensions is a dynamic collection which requires the means for users to select, update and discard content. Mansur explains the role of Waypoint software for user access and how it has been adapted by the project. The collection is augmented through conventional Web-based discovery techniques. He also describes how the project has organised the filtering of potential content, which benefits from a two-pronged approach, one strand manual, to allow for human selection, and the other automated. In this way it is possible to develop system ‘knowledge’ which provides criteria for the retention or discarding of resources.
Mansur describes the very varied formats of candidate material for adoption as OERs but points out that it must conform to certain minimal requirements relating to quality, relevance and clarity of legal status. As regards legal clarity, all that is required is a machine-readable statement of the material’s acceptable usage. The basic notion of licensing centres on the spirit, if not the letter, of Creative Commons.
However, Mansur also notes that material displaying minimal constraints on reuse is not very common, nor is the culture for making it ‘open’ very mature. Neither can the irony be lost on OER collators that, even where material has been placed in ready view of potential reusers, the lack of clarity about its legitimate reuse discourages collection owners for fear of being seen to advocate anything less than clearly legitimate reuse. As a consequence, Mansur bemoans the absence of a clear ‘open-practice’ culture to date. Nor, he continues, is the process of resource discovery and adoption a straightforward one for OER collection providers, since even resources obviously intended as OERs may lack the required information, though he cites the exemplary MIT OpenCourseWare approach as the ideal.
The lack of machine-readable data therefore makes the discovery of OERs a difficult task. Mansur describes the solution trialled by the DelOREs Project. He goes on to consider the approaches to resource description and the value of formats such as XML over native formats; advances are being made in the standardisation of descriptive data. The fact remains, in his view, that fully automated OER discovery and processing relies on the provision of easily harvested, machine-readable information that satisfies the above-mentioned minimal requirements.
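A machine-readable licence statement of the kind described is often expressed in a resource’s HTML with a rel="license" link, and detecting one takes only a few lines of code. The page below is a made-up example, not DelOREs code:

```python
# Sketch: discovering a machine-readable licence statement in an OER's
# HTML page. Creative Commons licences are commonly declared with
# rel="license" links; the page variable is a made-up example.
from html.parser import HTMLParser

class LicenseFinder(HTMLParser):
    """Collect href values of <a>/<link> elements marked rel="license"."""
    def __init__(self):
        super().__init__()
        self.licenses = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag in ("a", "link") and a.get("rel") == "license":
            self.licenses.append(a.get("href"))

page = ('<html><body><a rel="license" '
        'href="https://creativecommons.org/licenses/by/4.0/">CC BY</a>'
        '</body></html>')

finder = LicenseFinder()
finder.feed(page)
print(finder.licenses)
```

Material lacking any such statement is exactly what makes fully automated discovery so hard.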
Finally, Mansur compares the different cultures reigning in the USA and UK in relation to the sharing of material, and notes that the current climate here is unlikely to help matters. He identifies a decided need to define appropriate financial and organisational mechanisms if that unhelpful climate is to change, and argues that resource provision efforts in UK HE need to be incentivised if they are to be sustained and extended.
Public Libraries in Africa
In their article Perceptions of Public Libraries in Africa, Monika Elbert, David Fuegi and Ugne Lipeikaite describe the principal findings of the ground-breaking study of the same name which served to provide evidence of how public libraries are perceived by their stakeholders. Ariadne has been fortunate indeed to receive articles from David and Monika in the past about the work they have been doing with libraries in developing countries, and so I am very pleased that they and Ugne have been able to offer these insights into the perceptions not only of users of libraries, but also non-users - and the officials who influence their development.
While we have heard much about the views of users of public libraries in developed countries recently, sadly so often in the context of contesting cuts in budgetary support, it is thought-provoking to be offered the perceptions of citizens whose use of public services is rarely taken for granted. It is also important to underline that these stakeholder perception studies were undertaken the better to understand not just the public’s perceptions but also those of the national and municipal decision makers who influence library finance and planning.
This article gives not only an overview of the usage behaviour of the public but also their opinions of library service quality as well as where and how those opinions are formed. The authors contend that the manner in which these perceptions were gathered may prove of use to researchers seeking to gather evidence to support advocacy for improved library investment and planning elsewhere. In their interpretation for us of the study Perceptions of Public Libraries in Africa they highlight the fact that opinions were sought of citizens who do not currently use public libraries, a far-seeing approach since such a constituency could so easily be overlooked. I have no doubt that identifying such subjects may have proven less than straightforward.
It becomes apparent in reading this article that the views of library users in Africa and, say, here in the UK are not so very divergent at times, for example in relation to the relevance of material on offer to readers and the quality and ambience of their library’s physical surroundings. In other respects we may consider ourselves as more than a little fortunate: for example in the degree to which new media and ICT in general have penetrated public consciousness and have influenced expectations. One interpretation of certain data might be that the greater availability of such media can blind us to the fact that the expertise of the librarian in underpinning library services remains central, something arguably more readily understood by less fortunate library users in Africa.
A rather encouraging finding was that, whether they prioritised library funding or not, the local government officials surveyed did largely recognise the economic benefits to be derived from public libraries in addition to the more readily recognised advantages such as literacy and education. On the other hand, it is clear that how libraries in Africa engage pro-actively with that economic agenda is less well perceived.
It is very much to be hoped that, as the authors conclude, the ‘findings of the study, once they have been validated by the local library communities in the countries concerned, will constitute a substantial body of evidence that potentially can be used to inform evidence-based library management and advocacy campaigns.’
Second Life in Education
In his article Has Second Life Lived up to Expectations?, Paul Gorman examines to what degree Second Life (SL) has justified the claims made for it by its evangelists with particular regard to education. His initial remarks relate to the manner in which some adopting institutions fail to grasp its primary characteristic, its interactivity, and are often doomed to repeat the errors of their real-world organisation.
Paul begins his article by tracing the visibility and popularity of SL from its launch in 2003 to the present day. He examines whether and to what extent SL has realised its potential for education and refers to the interest that was shown in UK Higher Education institutions (HEIs), so much so that by summer 2010 the involvement of British HEIs in virtual worlds of one variety or another was almost universal, although the degree to which that was true varied considerably. Paul then goes on to give his personal view of SL’s virtual world as an education user. In that context he contends it would be inaccurate to describe SL as widely used among students and goes on to question to what extent it can be argued that SL has trumped the value of computer-mediated communication such as instant messaging. He also points to what one might describe as SL’s ‘Achilles Heel’ in comparison with, say, Skype or Facebook.
Paul is not convinced that the approach adopted by certain institutions, in which they ‘slavishly’ recreate in SL the dimensions of their physical entity is of any value at all. By and large, he contends, they simply reproduce the non-interactive (my emphasis) characteristics of conventional teaching, for example the copying of notes off a screen.
Adopting the other tack, however, Paul goes on to extol the virtues of those same interactive characteristics where they are allowed to apply. He provides a clear example in the matter of role-play. It can range from barristers’ advocacy, and customer service, to that of medical students (though I was unclear whether this included the much needed ‘bedside manner’ in some instances) and provides opportunities to practise beyond the circumstances that could easily be reproduced in real life. However, the author does issue a word of caution on the degree to which role-play can be employed as a means of representing ‘true’ experience. Paul points out that SL also offers other forms of person-to-object intervention that provide scenarios that cannot be replicated in reality, such as scenes from the distant past.
Nonetheless, the use of virtual worlds has been beneficial to the understanding and practice of students from a wide range of disciplines.
Though he is a strong advocate of its interactive potential, Paul is obliged to point out some long-term failings. He identifies the principal weakness in the SL experience in terms of its failure to provide a convincing sense of ‘being there’ and why that has come about. All in all, I feel the author has taken a balanced view of the educational benefits and weaknesses of SL’s offer, reserving his strongest criticism not for a particular failing so much as the length of time that some have persisted.
Increasing Arts Research Deposit
In Kultivating Kultur: Increasing Arts Research Deposit, Marie-Therese Gramstadt discusses how the JISC-funded Kultivate Project is encouraging arts research deposit in UK institutional repositories. In its mission to increase arts research deposit in institutional repositories, Kultivate works to engage with the arts research community to share and support best practice. The Kultur Group is a key element in this community and has set up IRs that are better suited to arts researchers. Marie-Therese describes her research methodology, including desk research and surveys, to gain a picture of the arts research repository landscape.
She begins her examination of the barriers that hinder arts research deposit by identifying the wide range of stakeholders and relationships that exist in an IR. Even the terminology used to describe an IR affects potential users’ attitudes, and with them how easily repositories can become an integral part of a researcher’s day-to-day workflow. Moreover, the requirements of arts researchers in the use of an IR differ from other disciplines in terms of presentation and context of outputs. As a result, enhancements to the EPrints platform are in hand.
Marie-Therese admits that there is a continuing debate about what constitutes artistic research and which aspects of work would be suitable for deposit. Surveys conducted pointed to the singularity of arts research output in IRs and how its theory differs markedly from other disciplines. She also indicates the possible conflict between the nature of IR handling of arts output and how that is viewed by researchers discomfited by notions of audit. Similarly, institutional control could exclude researchers from their own output when they move on; instances where work could not be accessed caused great resistance to using IRs. Ease of access to amend work held in an IR was another source of debate. In this context, Marie-Therese describes the model operated by the Journal for Artistic Research, which works in a manner more sympathetic to arts researchers’ needs.
No one will be surprised to learn that the operation of rights in this field is far from straightforward. Much could be characterised as a forest of different combinations and layers of rights issues. A Kultivate survey highlighted copyright as a significant barrier to arts research deposit, though the use of depositors’ work by others can be better controlled thanks to repository software.
Marie-Therese points to correlations between Kultivate and the JISC MRD programme, in particular Project CAiRO (Curating Artistic Research Output) and JISC Incremental. Unsurprisingly, their findings reflected the usual dichotomy between researchers’ generally favourable stance on the principle of data sharing and the much lower rate of actual deposit. Kultivate’s survey identified the time required to deposit as a major determinant. Feedback from Kultur II Group meetings and other JISC-funded projects indicated that a reduction in the complexity of the deposit process would increase participation.
The group also examined the DepositMO work, which supports deposit from Microsoft Word on the desktop using the SWORD protocol and provides automatic cataloguing of deposited images even if subject terms and keywords are not yet included. Progress has been made in reducing the number of fields and improving workflow in the deposit process; a demonstrator, ARTSUK, has been made available to the Kultur II Group by Kultivate.
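The essence of a SWORD deposit is an HTTP POST of a package to a repository collection, with headers declaring its type and packaging. The following sketch only constructs such a request; the endpoint URL is hypothetical, and this is an illustration of the protocol’s general shape rather than DepositMO’s actual code:

```python
# Sketch of a SWORD-style deposit request of the general kind DepositMO
# builds behind the scenes. The collection URI is hypothetical, and the
# request is constructed but deliberately never sent.
import urllib.request

COLLECTION_URI = "https://repository.example.ac.uk/sword/collection"  # hypothetical

def build_deposit(payload: bytes, filename: str) -> urllib.request.Request:
    """Prepare an HTTP POST carrying a package for a SWORD collection."""
    req = urllib.request.Request(COLLECTION_URI, data=payload, method="POST")
    req.add_header("Content-Type", "application/zip")
    req.add_header("Content-Disposition", f"filename={filename}")
    # SWORD 1.3 uses X-Packaging to declare the package format.
    req.add_header("X-Packaging", "http://purl.org/net/sword-types/METSDSpaceSIP")
    return req

req = build_deposit(b"...zip bytes...", "artwork.zip")
print(req.get_method(), req.get_header("Content-disposition"))
```

Hiding exactly this plumbing behind a familiar desktop application is what makes the deposit process feel simpler to researchers.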
The Kultivate project proposal draws attention to the assertion by OpenDOAR that arts content is noticeably under-represented in UK IRs (to something in the order of only 3%). The project has been investigating the upload of content via Mahara software. Marie-Therese points to the importance of advocacy to encourage regular and further deposits once the initial wave of deposits dissipates, and how embedding the deposit process in research practice is a central issue. She also mentions the work of the Repositories Support Project (RSP) in advising Kultivate.
However, this article clearly recognises that advocacy for arts research differs considerably from other disciplines given ‘the cultural and specialised needs of artistic researchers who often have different workflow processes, complex multimedia research outputs, and operate in a different context in terms of their relationship to the institution and their research.’ Descriptions of work which seek to highlight for depositors the effect their work may be having are very reminiscent of the approach developed at Hokkaido. The Kultivate Project survey work also highlights the central position of the forthcoming Research Excellence Framework (REF) in the motivation of researchers and institutions, to the point where deposit in the IR is the only way to enter evidence for the REF 2014.
In her conclusions, Marie-Therese includes a mixture of technical, managerial and organisational solutions in order to increase arts research deposit in UK institutions.
The CLIF Project
In The CLIF Project: The Repository as Part of a Content Lifecycle, Richard Green, Chris Awre and Simon Waddington describe how a digital repository can become part of the technical landscape within an institution and support digital content lifecycle management across systems. They describe the JISC-funded CLIF (Content Lifecycle Integration Framework) Project. Part of its work was to co-operate with partners to understand the interaction of the authoring, collaboration and delivery of materials using three systems used within Higher Education institutions: Fedora with SharePoint on the one hand, and Fedora with Sakai on the other. There is no overestimating the importance of embedding the Institutional Repository in the digital content lifecycle, nor the extensive integration work required, which has highlighted the need for more up-to-date standards to be adopted by systems and greater concentration on making the export of content as easy as import.
The authors point to the work at Hull on which CLIF builds and the danger that repository content could become another silo of digital material. The plan to develop software which would reduce this danger was strengthened by the collaboration between Hull and KCL. The authors raise the matter of standards in the development of software to transfer content between systems and the approach to constructing digital objects, particularly where Fedora Commons software is concerned. To this end, Hydra-compliant objects are seen as a way of managing the greater flexibility of Fedora digital objects.
The authors’ literature review identified considerable diversity in the approach to digital content lifecycle management, and highlighted the need to identify the appropriateness of different systems at different stages of the management lifecycle. They then describe how, in gathering user requirements, they recognised different needs among colleagues working mostly with research data and others operating largely with text-based materials. This led to the development of two generic use cases to reflect that dichotomy, and CLIF worked towards supporting functionality for both Sakai and SharePoint.
The project’s technical review considered the functionality offered by Fedora, SharePoint and Sakai, and where interaction among them could take place in the content lifecycle. It also examined the best approach to software integration, favouring point-to-point architecture over Enterprise Service Buses (ESBs).
Supported by its technical review, the project began work on integrating SharePoint with Fedora, and Sakai with Fedora. The authors describe how CLIF extends the functionality of SharePoint’s MySite. They describe how document deposit to a repository is configured for users by an administrator, and, where possible, takes place together with depositors’ assignment of metadata.
The authors then move to the integration of Fedora into the Sakai resources tool. It is possible to configure the level in the repository structure at which movement of content between Sakai and repository folders operates. This management functionality facilitates the preservation of material. The authors remind us that the provision and transfer of metadata for Sakai is somewhat limited.
The authors then describe how the apparently simple ‘copy-and-paste’ functionality of the Sakai-Fedora integration work conceals a high degree of complexity. They also consulted many of the stakeholders involved in the development of use cases to seek their opinions of the integration work, and summarise their views; users’ feedback was on the whole encouraging. They also describe the steps still to be completed at Hull in terms of Sakai integration code that will support Hull’s QA processes, while at the same time creating the opportunity to address the limited nature of Sakai content’s descriptive metadata.
In their conclusion the authors consider the profile of such practical considerations in the literature. They point to the considerable strength repositories possess in the area of archival capability and the need for institutions to understand better the potential of differing content management systems in operating over the different stages of the digital content lifecycle.
Digitisation in Armenia
In Peculiarities of Digitising Materials from the Collections of the National Academy of Sciences, Alan Hopkinson and Tigran Zargaryan give an overview of their experience of digitising paper-based materials in the Fundamental Scientific Library (FSL) of the National Academy of Sciences (NAS), Armenia, including some of the obstacles encountered during image processing and optical-character recognition.
While providing us with some background to Armenian scientific publications, the authors describe the situation in the light of the enormous effect the Internet has had upon the scene, and the aims of the project they are steering to address the present and ever-increasing demands for digitisation. They provide a wealth of practical detail in their approach to the digitisation process and the difficulties encountered, not least the problems relating to multiple alphabet forms which will give some practitioners pause for thought.
They begin by providing a broad-brushstroke picture of the stage to which publishing in Armenia has developed and the anticipated and increasing sophistication of electronic resource description and information discovery and retrieval. They go on to describe the evolution of scientific publication in Armenia, produced in a variety of languages as well as Armenian and how Armenian publishing spread in particular to urban concentrations of the Armenian diaspora.
Alan and Tigran describe the principal aims of their project and the nature of the bid they submitted to the British Library together with a description of the digitisation and preservation processes they established on receipt of a BL grant. They also outline their strategy based on experience of photographing originals. They provide some very useful advice for practitioners about to undertake similar digitisation activity in their position, particularly in terms of camera hardware and colour.
The authors go on to describe the processes they developed in recording and storing image data and give readers an indication of the resources that must be dedicated if digitisation on such a scale is to succeed. They also describe the decisions taken with regard to open-source or proprietary solutions and their associated thinking. Heartened by the success of their initial digitisation project, staff at the FSL were encouraged to initiate another: placing NAS scholarly content online. They did so despite the hurdles generated by the scale of the work they were undertaking.
The authors explain the significance of the OCR conversion process in the digitisation effort. The capacity to render text held in image format into text that is searchable, and even executable, represents an enhancement of a major order, and a development of significant benefit to all using their material.
Adapting VuFind
In the first ‘Tooled Up’ article in the new Ariadne, Graham Seaman describes, in Adapting VuFind as a Front-end to a Commercial Discovery System, the adaptation of an open-source discovery tool, VuFind, to local needs, discusses the decisions which needed to be made in the process, and considers the implications of this process for future library discovery systems. He also provides an understanding not only of the requirements involved in this development work, but also of the approaches adopted to meeting them in the context of the needs of Royal Holloway Library (RHL).
In addition to providing us with detail of the problems and solutions he encountered along the way, Graham also shares with us thoughts on the nature of next-generation discovery systems and how he sees them in the context of his work described in this 'Tooled Up' article.
In his opening Graham reminds us that VuFind ‘was one of the first next-generation library discovery systems in the world, made possible by the open source Solr/Lucene text indexing and search system which lies at the heart of VuFind’. He goes on to explain the most striking features of this next-generation discovery system, comparing it with the look and functionality of older OPAC interfaces. As adoption increased, Graham notes, so proprietary offerings also emerged, which were capable of accessing data outside library control, and publisher metadata as well. Nonetheless, he contends, VuFind and its stablemate Blacklight remain easy to adapt to local needs and supply a single front end for services as well as a single interface for adopters choosing open source solutions. But they will also act as a front end to proprietary discovery services such as Summon. This article covers Royal Holloway Library’s interest in interfaces to new services.
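To give a flavour of what ‘the heart of VuFind’ means in practice, the sketch below constructs the kind of Solr ‘select’ query that such a front end issues against its index. The base URL is an assumption about a typical local installation, and the request is only built, not sent:

```python
# Sketch: the kind of Solr 'select' query that underlies a VuFind
# search. The base URL is an assumed local-installation default; here
# we only construct the request URL rather than contacting a server.
from urllib.parse import urlencode

SOLR_BASE = "http://localhost:8080/solr/biblio/select"  # assumed local default

def search_url(query, rows=20, fmt="json"):
    """Build a Solr select URL for a simple keyword search."""
    params = {"q": query, "rows": rows, "wt": fmt}
    return f"{SOLR_BASE}?{urlencode(params)}"

print(search_url("open access"))
```

Proprietary discovery services expose comparable search APIs, which is what makes fronting them with VuFind feasible at all.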
Graham continues by explaining how RHL opted to move from federated search using Xerxes and Metalib to combined discovery across catalogue, repository, archives and journal articles. He explains how the intention to reduce the number of interfaces for readers via data aggregation was foiled and examines the solutions considered, while also detailing some of the difficulties that were encountered, both within and outside his institution.
Under such circumstances, he relates, VuFind started to look like a possible means of providing a practical and user-friendly front end to Summon. He goes on to identify the key features of the requirements for implementing such a solution, focusing on the ILS driver, the Summon interface and integration with RHL’s OpenURL resolver, among others. The requirements described represented a range of difficulties for the VuFind system, ranging from the relatively simple to the complex, involving modifications to the core functions of VuFind. He explains the RHL policy of feeding their modifications back into the main VuFind development path. He also provides an insight into the maintenance of trunk and experimental versions of VuFind under development, and also into the implementation decisions confronting the RHL team.
In describing the creation of their local instance of VuFind, Graham goes into detail about the decisions made and the problems encountered on the way, associated with matters such as the LMS driver, OpenURLs, the rigidity of SFX look-and-feel, and catalogue browsing. He goes on to describe the work involved in modifying VuFind to operate with Summon as its primary search engine, and at the same time avoid duplication of effort. He also describes the project’s reactions to the changes in the format of record identifiers and the relevant solutions adopted. But he also provides yet another example in software development of how few changes are entirely free of unwelcome consequences.
In his conclusions, Graham provides a balanced view of the benefits and difficulties involved in the approach adopted. He points to two possible useful lessons for fellow developers relating to the evaluation of open-source applications and the nature of ‘next-generation’ discovery, a view formed by RHL’s experience of working with VuFind.
Welsh Libraries and Social Media
In Welsh Libraries and Social Media: A Survey, Alyson Tyler looks at the usefulness of social media sites to Welsh libraries and the outcomes of two surveys conducted to investigate the current degree of access to, as well as interest in exploiting, such sites. Alyson analyses responses from different kinds of libraries and goes on to advise on how they might effect a change in their organisations’ policy towards social media usage. In offering, to this end, a business case approach, she supplies not only the essential considerations to address but also some sound advice on how to handle the social media tools once access is gained.
The picture that Alyson’s analysis of the survey responses produced was decidedly mixed in all but the Higher Education libraries, where no barriers to social media existed. In other sectors, a variety of policies seemed to apply, denying or restricting use on the basis of status, age and even the time of day. Respondents to her survey gave equally varied reasons as to why access to social media was blocked or restricted, although security and organisational policy featured most frequently. Nonetheless it also emerged during the course of the survey that the majority of libraries contacted already used or planned to use Web 2.0 technologies.
Alyson goes on to supply us with the flavour of the responses received, which demonstrate the technologies being employed by responding libraries. She remarks that, as is no doubt often the case, staff who are early adopters at work have frequently adopted the same tools even earlier in a personal capacity, which has fuelled their professional enthusiasm.
Alyson then moves on to findings from an internal survey of opinion conducted by the Society of Chief Librarians (Wales). She writes that results indicate that in the intervening year between the two surveys, more local authorities have relented and permitted library staff to access social media sites. She asserts that library staff need to advise and support readers even if they personally are unimpressed by such media.
In offering her own business case, she encourages potential applicants to think through a variety of considerations that, if properly addressed, will strengthen their application to access social media. These include aims, objectives, audience, evaluation processes, resource and risk implications, and, by no means least, how such use relates to an overall communications strategy. Alyson enjoins potential adopters to consider the functions of differing social media tools, their expected audiences, and how to engage those audiences successfully. She also considers bilingual blogging. Nor should one overlook legal issues such as FOI requests received over Twitter and incidents of defamation.
In concluding, Alyson reminds us that social media simply provide additional channels of communication through which libraries can talk to their users.
Events and Reviews
I am also pleased to assure you that Ariadne continues to offer a range of reports of events from the calendar, some well established, others new, as well as its usual range of reviews, which in this issue includes: an anthology of perceptive essays on the challenges presented to archival thought and practice by Web 2.0; a spirited defence of public libraries; a collection of essays that examine the transformation of academic libraries as they become part of digital learning environments; a book which formalises the processes of being innovators and entrepreneurs of the Information Age; an O’Reilly work entitled Making Software: What Really Works, and Why We Believe It; and a collection of essays on a wide range of current topics and challenges in information retrieval.
I hope you will enjoy Issue 68.