Web Magazine for Information Professionals

Editorial Introduction to Issue 70

The editor introduces readers to the content of Ariadne Issue 70.

Welcome to Issue 70 of Ariadne, which is full to the brim with feature articles and a wide range of event reports and book reviews.

In Gold Open Access: Counting the Costs Theo Andrew explains the significance of the recent RCUK amendment to its Open Access policy requirements of researchers, and the importance assumed by the cost of publishing via the Gold Open Access route. Unsurprisingly, there is currently great variability in such costs to research institutions, while, with few exceptions, publishers are as yet slow to impart what effect the move to charging for article processing will have on current institutional subscription costs. It is Theo’s aim in his article to cast what light he can on the matter by presenting data on article processing charges (APCs) gathered over the last five years. He begins by stating ‘The Problem’, which, simply enough, calculates the publication costs to an institution’s research departments were they to attempt to follow RCUK’s preferred Gold OA route. Immediately it is clear that the sums involved will require rationing on their part. Theo demonstrates that, across what information is available on APCs, estimations of unit cost vary; there have been valiant attempts to gain a view of the likely costs, but the picture gained from the data available from over 100 publishers remains patchy and inconsistent. Theo therefore goes on to supply data on articles published by Edinburgh where an APC was applied. He compares APC costs with the prestige of the journals in which the articles subject to those charges were published, and provides a calculation of the mean APC costs and the related impact factor. He draws our attention to how little is known about the cost of peer review – or the practices of offsetting unavoidable costs. While emphasising, quite rightly, the limited size of the data available, Theo does direct us to some interesting conclusions about the types of journal involved and the fees charged for article processing. While it is important to recognise the constraints described by the author, the findings and questions he raises in his Discussion section are not without interest.
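
By way of illustration, the arithmetic behind ‘The Problem’ can be sketched in a few lines of Python; every figure below is a hypothetical placeholder rather than data from the article, but the shape of the calculation is the same: multiply annual output by a mean APC and compare the result with the funds available.

```python
# Back-of-the-envelope estimate of what full Gold OA uptake could cost an
# institution. All figures are hypothetical placeholders, not data from
# the article.
papers_per_year = 3000       # articles the institution publishes annually
mean_apc_gbp = 1500          # assumed mean article processing charge (GBP)
oa_funds_gbp = 1_000_000     # assumed funds earmarked for APCs

total_cost = papers_per_year * mean_apc_gbp
shortfall = total_cost - oa_funds_gbp

print(f"Estimated annual APC bill: £{total_cost:,}")   # £4,500,000
print(f"Shortfall against funds:   £{shortfall:,}")    # £3,500,000
```

With placeholder numbers of this order, the bill dwarfs the funds available, which is why rationing of the Gold OA route quickly becomes unavoidable.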

In their article Upskilling Liaison Librarians for Research Data Management, Andrew Cox, Eddy Verbaan and Barbara Sen explore the design of a curriculum to train academic librarians in the competencies to support Research Data Management. They begin by considering the new roles HE libraries are likely to adopt in meeting new demands from researchers in connection with the expansion in research data management (RDM). To address these demands, JISC has funded the White Rose Consortium to produce learning materials, in particular for liaison librarians. This article seeks to summarise the project’s approach to the scope and standard of such learning materials. The authors maintain that libraries are already widely recognised as being well-positioned within HEIs to adopt a major role in RDM because of their existing skills in advocacy and communication. Nonetheless, they will encounter problems. Not least will be how to accommodate their new roles amongst all the others. In addition, librarians will have to contend with the complexity, large scale and transitory nature of current data practices. Organising new roles related to RDM within liaison librarians’ existing activities is not automatically straightforward and could require a radical change in some practitioners’ professional outlook. The authors provide an overview of likely librarian roles and their associated competencies in RDM and how they map to existing librarian roles. They also highlight a further raft of roles which operate in support of the research effort. The authors consider existing curricula in RDM and digital curation, and identify three key issues arising in UK and US LIS schools. The RDMRose Project’s planning took account of the daunting challenge confronting some librarians and made suitable provision. In addition to producing material on concepts and policy, the project also addressed how to give ‘students’ a practical understanding of core methods and tools, including hands-on experience and the opportunity to discuss them. The authors’ conclusions on sound approaches to developing learning materials that will provide the desired learning outcomes are succinctly expressed and are supported by an appendix detailing the course content.

In The LIPARM Project: A New Approach to Parliamentary Metadata, Richard Gartner explains that the study of United Kingdom parliamentary proceedings has been facilitated by the fact that it is possible to consult them in digital form. However, since they do not appear in one homogeneous collection, it is not yet possible to search the material over one single system. Richard provides an effective illustration of how something as seemingly straightforward as a politician’s name can present searchers with considerable difficulties. He goes on to explain how JISC was instrumental in promoting the realisation of a long-sought integrated approach to parliamentary metadata. Such an approach would make it possible for researchers to track individual parliamentary careers and voting patterns. At the other end of the spectrum of granularity, it would allow the identification of themes and topics across all the parliamentary proceedings of the nation. He explains how the JISC-funded project employs XML to link the essential elements of the parliamentary record within a unified metadata scheme, together with the use of its own Parliamentary Metadata Language (PML) and controlled vocabularies. The author explains the key components of the PML XML schema, which defines seven concepts, deliberately defined in generic terms to ensure that they can be applied to more than one legislature. He goes on to demonstrate how the schema is able to accommodate information on the activities of parliamentarians through the employment of generic elements. He explains that the PML schema is able to accommodate the often quite complex data relating to the voting behaviour of any UK parliamentarian. The LIPARM architecture is currently being trialled on two collections of parliamentary proceedings in order to test its viability. While much has been made, and rightly, of the digitisation of parliamentary proceedings as a valuable source of reference for the ordinary citizen, it should also be recognised that the diligence applied by LIPARM in the construction of a sound metadata structure will serve to expose far more helpfully the digitised data we already have, and those that will surely follow.
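
To give a flavour of what such linkage makes possible, the sketch below resolves a vote record against a controlled identifier for a person. The element and attribute names are invented for the example and are not the actual PML schema; the sketch illustrates only the general pattern of identifier-based linkage that the article describes.

```python
# Purely illustrative sketch of querying PML-style linked parliamentary
# metadata. Element and attribute names here are invented for the example;
# they are not taken from the actual PML schema.
import xml.etree.ElementTree as ET

FRAGMENT = """
<proceedings>
  <person id="person-001" name="A. Politician"/>
  <vote id="vote-042" subject="Second Reading">
    <voteRecord personRef="person-001" position="aye"/>
  </vote>
</proceedings>
"""

root = ET.fromstring(FRAGMENT)
# Build a lookup from controlled identifiers to names, then resolve each
# vote record against it -- the kind of linkage a unified scheme enables
# even when names vary across source collections.
people = {p.get("id"): p.get("name") for p in root.iter("person")}
for record in root.iter("voteRecord"):
    print(people[record.get("personRef")], "voted", record.get("position"))
```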

In “Does He Take Sugar?”: The Risks of Standardising Easy-to-read Language, Brian Kelly, Dominik Lukeš and Alistair McNaught highlight the risks of attempting to standardise easy-to-read language in online resources for the benefit of readers with disabilities. In so doing, they address a long-standing issue in respect of Web content and writing for the Web, i.e. standardisation of language. They explain how, in the wake of the failure of Esperanto and similar artificial tongues, the latest hopes have been pinned on plain English, and ultimately standardised English, to improve accessibility to Web content. Their article seeks to demonstrate the risks inherent in attempts to standardise language on the Web in the light of the W3C/WAI Research and Development Working Group (RDWG) hosting of an online symposium on the topic. They describe the aids suggested by the RDWG, such as readability assessment tools, as well as the intended beneficiaries of the group’s aims, such as people with cognitive, hearing and speech impairments, as well as readers with low language skills, including readers not fluent in the target language. To provide readers with further context, they go on to describe earlier work which, had it been enshrined in the WCAG guidelines, would have had significant implications for content providers seeking to comply with WCAG 2.0 AAA. They interpret what is understood by ‘the majority of users’ and the context in which content is being written for the Web. They contend that the context in which transactional language should be made as accessible to everyone as possible differs greatly from that of education, where it may be essential to employ the technical language of a particular subject, as well as figurative language, and even, on occasion, cultural references outside the ordinary. They argue that attempts to render language easier to understand by imposing limitations upon its complexity will inevitably lose sight of the nuances that form part of language acquisition. In effect, they supply a long list of reasons why the use and comprehension of language is considerably more complex than many would imagine. However, the authors do not by any means reject out of hand the attempt to make communication more accessible. But they do highlight the significance of context. They introduce the characteristics that might be termed key to Accessibility 2.0, which concentrate on contextualising the use of content as opposed to creating a global solution, instead laying emphasis on the needs of the user. They proceed to detail the BS 8878 Code of Practice 16-step plan on Web accessibility and indicate where it overlaps with the WCAG guidelines. Having provided readers with an alternative path through the BS 8878 approach, they go on to suggest further research in areas which have received less attention from the WCAG guidelines approach. They touch upon the effect of lengthy text, figurative language, and register, among others, upon the capacity of some readers to understand Web content. The authors’ conclusions return to an interesting observation on the effect of plain English which might not have been anticipated – but is nonetheless welcome.
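
For context, readability assessment tools of the kind the symposium considered typically rest on surface formulae such as Flesch Reading Ease, sketched below with a deliberately naive syllable counter. The sketch also illustrates the authors’ point rather neatly: such a score sees only sentence and word length, and nothing of register, figurative language or cultural reference.

```python
# A minimal sketch of a surface readability formula (Flesch Reading Ease).
# The syllable counter is deliberately naive; real tools are more careful,
# but the score still captures only word and sentence length.
import re

def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels -- crude but illustrative.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))

print(round(flesch_reading_ease("The cat sat on the mat."), 1))  # ~116.1
```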

In their ‘tooled up’ article Motivations for the Development of a Web Resource Synchronisation Framework, Stuart Lewis, Richard Jones and Simeon Warner explain some of the motivations behind the development of the framework. They point to the wide range of scenarios in which it is necessary to synchronise resources between Web sites, including preservation services and aggregators, and to the care that must be applied in considering the protocols and standards that exist to underpin such operations. The core of their article resides in the range of use cases they supply, together with the concrete examples and discussion of related issues they also provide.
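
The basic decision at the heart of any such synchronisation can be pictured with a small sketch: given a source inventory of resource URIs and last-modified timestamps, a destination works out what to fetch, refresh or delete. The inventory format below is a simplified placeholder, not the framework’s actual specification.

```python
# Simplified sketch of the core synchronisation decision: compare a
# source inventory of (URI, last-modified) pairs against local copies.
# The inventory format is a placeholder, not the framework's actual spec.

source = {
    "http://example.org/res/1": "2012-11-01T10:00:00Z",
    "http://example.org/res/2": "2012-11-05T09:30:00Z",
}
local = {
    "http://example.org/res/1": "2012-10-20T08:00:00Z",  # stale copy
    "http://example.org/res/3": "2012-09-01T12:00:00Z",  # removed upstream
}

to_create = set(source) - set(local)
to_delete = set(local) - set(source)
to_update = {uri for uri in source.keys() & local.keys()
             if source[uri] > local[uri]}  # ISO timestamps sort lexically

print("create:", to_create)   # res/2
print("update:", to_update)   # res/1
print("delete:", to_delete)   # res/3
```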

In his article entitled The ARK Project: Analysing Raptor at Kent, Leo Lyons confidently predicts that the advent of many more mobile devices will only serve to increase the trend towards using electronic resources. He reminds us too that electronic resources are no longer confined to journals or databases, but include a host of other services, local and real-time among them, on which library users increasingly rely. With usage of these resources ever on the increase, the ability of libraries to predict demand and provide ready access is being tested as never before, and librarians come under growing pressure to ensure the usage they predict is correct. Leo highlights the importance of log files in any audit of electronic resource usage, while admitting they are not easy for non-technical staff to interpret, and no simpler to compile into a cohesive overview of resource usage. Leo explains the purpose of Raptor in its interrogation of the log files generated by access to e-resources, and why the Kent team, as early adopters, decided to continue work on its potential use. He goes on to explain the team’s aims, which went beyond the assessment of Raptor’s direct outputs and extended in the direction of use of its database by other reporting systems. Leo explains how the Analysing Raptor at Kent team is currently examining the uses to which it can put Microsoft Reporting Services in order to provide library managers with usable analysis and presentation tools without lengthy training. Leo goes on to identify a raft of uses for Raptor which go deeper than the direct identification of e-resource downloads, such as serving as a diagnostic tool for learning problems, subject to proper provisions in terms of user privacy.
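
The kind of log interrogation involved can be pictured with a toy example; the log format below is invented for illustration and is not the format Raptor actually consumes.

```python
# Toy illustration of compiling raw access-log lines into a usage
# overview. The log format here is invented for the example and is not
# the format of any real authentication or e-resource log.
from collections import Counter

LOG_LINES = [
    "2012-11-01T09:14 user=abc123 resource=journal-archive",
    "2012-11-01T09:20 user=def456 resource=ebook-platform",
    "2012-11-01T10:02 user=abc123 resource=journal-archive",
]

usage = Counter()
for line in LOG_LINES:
    # Split off the timestamp, then parse the key=value fields.
    fields = dict(part.split("=") for part in line.split()[1:])
    usage[fields["resource"]] += 1

for resource, count in usage.most_common():
    print(f"{resource}: {count} accesses")
```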

In their article entitled SUSHI: Delivering Major Benefits to JUSP, Paul Meehan, Paul Needham and Ross MacIntyre begin by highlighting the benefits to UK HE and publishers afforded by the automated data harvesting of the Journal Usage Statistics Portal (JUSP), which employs the Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol. They maintain that SUSHI provides enormous savings in both cost and time to the centralised JUSP service. They point to the effect on efficient statistics gathering of the introduction of the COUNTER (Counting Online Usage of Networked Electronic Resources) Code of Practice, which has contributed to more effective decision-making in purchasing. A report commissioned by JISC identified a distinct demand for usage statistics, which resulted ultimately in a prototype service, but its reach was hardly nationwide. The advent of SUSHI in 2009 vastly improved the machine-to-machine transfer of COUNTER-compliant data, and the authors explain the other benefits that SUSHI confers. Such was the scale of expansion that the development of SUSHI support became central to JUSP’s capacity to handle data from over 100 institutions. However, the lack of any standard client software, or, in the case of some major publishers, of support for SUSHI, also presented a headache at the outset. Therefore a major aim was to develop a single piece of software to interact with any SUSHI repository – not a trivial undertaking. Nonetheless, the team’s work on the Oxford Journals server proved successful in implementing the retrieval of reports with a SUSHI client written in Perl. This work provided JUSP with a springboard into work to meet the needs not only of publishers deploying usage statistics platforms such as MetaPress, but also of those without any SUSHI service at all. The authors report general satisfaction among libraries, where not only has the manual work of reporting been drastically reduced, but there is now the reassurance that machine-to-machine operations via SUSHI are compliant with COUNTER guidelines. The increased scale of these operations is evident when one notes that over 100 million data entries had been collected by September 2012. The authors provide a run-down of a typical monthly data collection – and the savings in effort are significant, to put it mildly. Even when one adds up the manual operations, few though they are, involved in a typical month’s data checking, they represent less than one per cent of the manual effort that would be required were such an operation executed without the use of SUSHI. They also provide a review of their findings and present the timings of automated SUSHI operations. The authors also describe JUSP’s plans to handle the next release of the COUNTER Code of Practice at the end of 2013. Moreover, the automated nature of the SUSHI-driven process also means that, should one of JUSP’s publishers be obliged to re-state its usage data on a large scale, JUSP is able to replace such data in short order, despite the lack of notice for such a large-scale revision. The authors, convinced of the efficacy of SUSHI-driven automated harvesting of usage statistics, have been working on a stand-alone instance of the SUSHI code to enable institutions to establish a Web-based client for themselves.
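
For readers unfamiliar with the protocol, a SUSHI harvest is in essence a SOAP request asking for a COUNTER report over a date range. The sketch below shows the general shape of such a request in Python; the endpoint and identifiers are placeholders, the element names follow the general pattern of the NISO SUSHI schema rather than any particular server’s requirements, and a production harvester such as JUSP’s would add validation, error handling and retries.

```python
# Sketch of the general shape of a SUSHI request: a small SOAP envelope
# asking for a COUNTER report for a given month. Endpoint, requestor and
# customer IDs are placeholders.
import requests  # third-party HTTP library

SUSHI_ENDPOINT = "https://sushi.example.org/service"  # placeholder URL

ENVELOPE = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
               xmlns:s="http://www.niso.org/schemas/sushi"
               xmlns:c="http://www.niso.org/schemas/sushi/counter">
  <soap:Body>
    <c:ReportRequest>
      <s:Requestor><s:ID>example-harvester</s:ID></s:Requestor>
      <s:CustomerReference><s:ID>institution-001</s:ID></s:CustomerReference>
      <s:ReportDefinition Name="JR1" Release="4">
        <s:Filters>
          <s:UsageDateRange>
            <s:Begin>2012-09-01</s:Begin>
            <s:End>2012-09-30</s:End>
          </s:UsageDateRange>
        </s:Filters>
      </s:ReportDefinition>
    </c:ReportRequest>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(SUSHI_ENDPOINT, data=ENVELOPE.encode("utf-8"),
                         headers={"Content-Type": "text/xml; charset=utf-8"})
print(response.status_code)
print(response.text[:500])  # the COUNTER report comes back as XML
```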

Stepping down from his pivotal role as CEO at ALT in May 2012, Seb Schmoller may not have expected to be pursued by the media for his thoughts on his career, but such was the impression that he made upon some of his audience that I was prompted to ask him to respond to a few questions from Ariadne. In Seb Schmoller Replies, he talks about online learning – which, it will come as a great surprise to some, began to develop before the Web – and what he has gleaned along the way in his own career. Seb also supplies some of his philosophy and his thoughts about the people whom he considers to have had a telling impact on the development of learning, the Web and more.

In 21st-century Scholarship and Wikipedia, Amber Thomas states that, as the fifth-most used site on the Web, Wikipedia is evidence of the growing credibility of online resources, and points to instances of discourse about how academics relate to Wikipedia. But she does highlight Wikipedia’s own statement that its entries should not be cited as primary sources in academic work. Nevertheless, Amber contends that scholarship is in the process of changing, and sets out to describe four key trends linking scholarship and Wikipedia. She begins by examining ways in which researchers work which, while not mainstream, are nonetheless of interest to information professionals. Two trends she sees as important in emerging scholarly practice are the notion of continual iteration, ‘Perpetual Beta’, and, typified by a high degree of collaboration and feedback, the ‘Wiki Way’. Another development that she highlights is occurring within scholarly methodology, whereby it is becoming more common to work in the open as one develops one’s thesis, inviting feedback with a view to increasing the impact of one’s work. As such, this placing of research work more in the public eye adds a new dimension to the process of peer review. Amber admits that this call to extended participation in the judgement of research is not without controversy. In addressing the third trend, Amber points to the manner in which digital presentation of scholarly information has moved us beyond the traditional style of Dewey classification. Without wishing to dispense with formal classification, she does contend that the public has already moved into a ‘post-Dewey’ period. Given this new state of affairs, she recognises the need for new forms of digital literacy and points to work done by JISC in this regard. Addressing the fourth trend, Amber describes how Wikipedia so amply illustrates the richly linked multidimensional landscape that has evolved. She looks more closely at her own citation practice these days and how much it is influenced by ’the inherent linkiness of the Web’. She also reminds us that Higher Education has begun to benefit from Wikipedia, since it has now become a very large source of referrals to academic work in its own right. Moreover, its inherent openness, in which editing history and discussion of topics are visible, serves as a ’visible collective construction of knowledge itself’.

In his article entitled Case Studies in Web Sustainability, Scott Turner highlights the fact that Web resources built with public money may be wasted unless thought is given to their future sustainability in the event of their hosts being closed down. He draws attention to JISC-funded work in this area. Scott provides readers with some general principles behind the notion of Web site sustainability before moving on to a consideration of two different case studies. As part of the approach to possible solutions, a number of Web-hosting options were investigated, and Scott shares his findings on their relative merits, detailing the advantages and drawbacks of using Google Sites, DropBox and Amazon Web Services. The key issue in the first case study was the amount of existing material on the original site, which inevitably influenced the choices made. The second study concerned a smaller Web site, where the aim was to achieve cost-free sustainability together with ease of management and transfer. He then goes on to explain his choice of software in each case. In considering the general principles surrounding Web site sustainability, Scott takes us further than the question of taxpayers’ money, important though that is.

In his article entitled Mining the Archive: The Development of Electronic Journals, Martin White declares at the outset a fascination with the history of the development of information resource management. He finds that Ariadne is able to expose her archives far more usefully through her new platform, and as a result he has found it far easier to mine the Journal’s content in his analysis for this article. In his investigation of the descriptions of how information professionals have striven to ‘meet emerging user requirements with very limited resources’, Martin admits that the new-found ease with which he can access archive material using Ariadne’s tags [1] has led him down interesting by-ways as well as highways. Martin begins his exploration of the Ariadne archive by looking at the electronic mark-up of chemical journals, and then moves to the appearance of e-only journals in the mid-1990s. He also admits that his investigations of the archive dovetail well with his wish to highlight those practitioners who have made ‘a substantial contribution to our current information environment.’ Martin then turns his attention to the ground-breaking HEFCE initiative to create a Pilot Site Licence Initiative for e-journals nationwide, citing an article from July 1997 looking at journals in transition. Martin goes on to identify material from an Ariadne at-the-event report which conveys the reactions of information professionals to these developments, including the open stance of the serials community towards the situation. Martin moves us on from the debate which opened with the advent of e-journals, i.e. the fate of print journals, to the even longer debate that surrounds Open Access. In tracing the development of electronic journal delivery, Ariadne, as Martin seeks to emphasise, does not overlook the issue of preservation and access; he calls upon articles on JSTOR and Portico to support this point. Martin’s conclusions in looking back over the development of e-journals resonate very clearly in this editorial office when he asserts the importance of vision, of professional co-operation, and of the value of content.

In addition, Issue 70 offers an interesting range of event reports, for which I am most grateful to their contributors, while the scope of the book reviews is equally worthy of your attention.

I hope you will enjoy Issue 70.

References 

  1. Overview of keywords. Ariadne: Web Magazine for Information Professionals
    http://www.ariadne.ac.uk/keywords/tags

Author Details

Richard Waller
Ariadne Editor

Email: ariadne@ukoln.ac.uk
Web site: http://www.ariadne.ac.uk/