As I depart this chair after the preparation of what I thought would be the last issue of Ariadne, I make no apology for the fact that I did my best to include as much material in her ‘swan song’ as possible. With the instruction to produce only one more issue this year, I felt it was important to publish as much of the content in the pipeline as I could. This will make for a long summary of main articles; I can only console you with the thought that the first version was considerably longer… Nonetheless, as you will see, there is a wide range of contributions: technical, theoretical, practical, maybe even heretical for some. I trust you will, as I have always tried to ensure over the last ten years, find something that will inform and engage.
In his tooled up article Implementing a Resource or Reading List Management System, Gary Brewerton takes us step by step through the various stages from writing the business case to training and data entry. He warns that while a Reading List Management System (RLMS) provides users with an easy and useful facility, implementing it is not as straightforward as it may first appear. Gary’s section on the business case for an RLMS points to the core of institutional priorities, namely student retention. He demonstrates how an RLMS supports a library’s management of its collection in line with the teaching needs of the university while providing lecturers with important feedback. He highlights the need to plan with regard to scale, complexity, data entry (or migration effort), maintenance and access. Close examination of these initial issues serves as a means of fleshing out system requirements. He examines the advantages of acquiring proprietary and open source systems. He then considers the third option, development of one’s own RLMS, though he begins by warning us of the significant amount of development effort involved. Once the scope, and in particular the number of reading lists intended, has been established, standard project management tools ought to make the process altogether manageable. But failure to achieve a critical mass of lists can be fatal, and Gary provides remedies for project leaders. Gary provides an overview of the main considerations concerning training and support, where each institution and its RLMS will have different requirements, for example, whether library staff alone, or academics as well, are involved. In addition to underlining their importance, Gary emphasises that the stakeholders involved will assume different significance as adoption and implementation move on to RLMS launch and (no doubt) maturity.
In their article on The Potential of Learning Analytics and Big Data, Patricia Charlton, Manolis Mavrikis and Demetra Katsifli discuss how the emerging trend of learning analytics and big data can support and empower learning and teaching. They begin by providing a definition of learning analytics; based on that definition, the authors examine how learning analytics differs from the analytical process that has always accompanied teaching. They point out that in the age of Big Data, the difficulty lies in determining which data are truly significant. They explain that the amount of data now involved in investigations is enormous and its analysis time-consuming. They also describe how the world of technology has changed since the turn of the 21st century and the consequences that have arisen. They explain Kenneth Cukier’s approach to Big Data and how we are obliged to relinquish to a degree our understanding of how insights into mass data have been determined. They make the point that in order to employ Big Data in operating learning analytics, practitioners must know what to look for in large volumes of data and know how to handle them. The authors turn our attention to the two key research perspectives covering current data analysis tools. They explain that educational data mining is an emerging discipline and that working with any form of data analysis calls for an initial notion of the kinds of patterns anticipated – in teaching and learning they are the critical factors that influence outcomes. They then proceed to describe a series of scenarios containing a wide range of factors which can be analysed to determine the degree of risk of failure to which a cohort of students might be subject. They describe the key tools that automate the learning analytics models. In their discussion section, the authors look at the value of data to teachers and policy makers as well as their value to students in reflecting upon the effectiveness of their learning.
In their conclusion, the authors warn of the danger of misuse of learning analytics if one becomes enticed unduly by the notion of measurability.
Jason Cooper and Gary Brewerton describe the development of a prototype WebApp to improve access to Library systems at Loughborough University for mobile devices in Developing a Prototype Library WebApp for Mobile Devices. The authors describe how a review of the statistics for access to LUL’s Web site and blog over mobile devices prompted staff to improve such access through the development of a prototype mobile app. They explain the basis upon which they selected or rejected a range of functionalities, the determining criteria being the avoidance of duplicating planned improvements, creating new APIs, or altering data in related systems. The authors then explain the importance of deciding whether to adopt a Web app approach to developing a mobile app, or to opt for a number of native apps. They describe the various options of page or pop-up, and the availability to developers of standard HTML content, jQuery Mobile widgets, or both. jQuery Mobile, they felt, worked well in a mobile device context. They describe the services with which users are presented following successful log-in and how they are displayed for optimal clarity on mobile devices. They explain how data on users’ personalised reading lists are derived from the Loughborough Online Reading List System (LORLS), which they have described in detail in Issue 69. The authors move us on to the benefits they have identified in the use of HTML5 to develop their Web app. They point out that the early decisions to limit the scope of development effort, that is, the choice of a Web approach and limited initial functionality, did make it possible to limit developer effort on an initial prototype. They also indicate where their next release will take their development work. They feel that financial and time demands can be kept within practical limits.
In his article The Effect of a Performance-based Funding Model on Research Databases in Sweden and Norway, Leif Eriksson describes how the introduction of Performance-based Research Funding Systems (PRFS) has created new forms of research databases in Sweden and Norway. The PRFS has proven one of the most significant changes in HE funding systems, moving the sector towards greater market-styled incentives. A major benefit of the PRFS model is that it avoids the allocation of funding for (often historical) reasons which are no longer valid. However, the adoption of publications as an indicator of research quality in PRFS-based models has not been without controversy. Nonetheless, publication outputs do figure significantly in first-order indicators of the PRFS models in Scandinavia and elsewhere. The author also points to the use of indexes as a means of overcoming methodological difficulties associated with citation analysis. Where assessors do not choose to rely on peer review, there are difficulties in the use of databases in the humanities and social sciences. In his article, Leif undertakes to examine and compare the two different PRFS-oriented systems employed in Norway and Sweden. He describes the national system for research documentation set up to support the Norwegian model. He also describes how the new CRIStin system, introduced in 2011, has not only assumed but extended the functionality of its predecessor, offering information on more informal publication as well as researcher and project data. Leif, in his description of the Swedish model SwePub, points out that it relies on only two indicators, and that its publication indicator, by relying on Web of Science, has attracted criticism from some quarters.
He explains that new technology and publishing practices have increased the use of publication databases, but that in some Swedish fields, such as the humanities and social sciences, publications may only be accessible from local databases. He provides a detailed breakdown of publications and publication types in Sweden between 2004 and 2011. His next analysis is of language distribution in relation to ISI coverage across the various domains, commenting on particular situations in respect of Sweden. Leif reminds us that SwePub has evolved as a result of local HEIs’ initiatives and that discussion of the portal as a data source for the Swedish PRFS model is ongoing. Its database, in his view, will require improvements and stricter quality control and data validation to make this feasible. A leaf may well be taken from Norway’s book.
In their article entitled KAPTUR the Highlights: Exploring Research Data Management in the Visual Arts, Leigh Garrett, Marie-Therese Gramstadt, Carlos Silva and Anne Spalding describe the exploration of the importance and nature of research data in the visual arts and the requirements for their appropriate curation and preservation. In their introduction, the authors explain that the KAPTUR Project at its outset could not draw upon any significant research data management practice within the visual arts. Part of the project’s aim was to redress this situation. They summarise the aims of the article in its description of the Project and the work of its four partners towards developing a ‘sectoral model of best practice in the management of research data in the [visual] arts.’ They describe the increasing pressures to improve transparency and to manage research data more systematically for a variety of benefits. The authors explain some of the complexity that attends research in the visual arts and in particular that the initial ‘data holders’ of research are physical artefacts such as sketch-books and journals. They add that the field generates more than its share of technical difficulties as regards standards, formats and other matters. They go on to explore the tangible nature of some outputs as opposed to the tacit knowledge accompanying artistic practice that must nonetheless be captured in some way. While practice-based research in the visual arts is relatively recent and may derive in part from social sciences and other disciplines, visual arts data can be as complex as, if not more complex than, those in longer-established fields of research. They lay emphasis on the far greater variety of ‘formats’ of data that visual arts researchers may produce. The authors then explain how work was undertaken after the publication of the Project’s environmental assessment report.
They describe technical solutions to meet researchers’ requirements in the partner institutions and the basis on which two parallel pilots of those solutions were arranged as part of the user and technical analysis. They explain the barriers that existed to effective policy formation, and how it became possible to develop successful high-level research data management policies.
In their article on The Wellcome Library, Digital, Christy Henshaw and Robert Kiley describe how the Wellcome Library has transformed its information systems to enable mass digitisation of historic collections. They recognise that the balance between physical libraries and their virtual counterparts is changing and that the Wellcome Library has developed a transformation strategy as a consequence. Its strategy comprises both the mass digitisation of unique collections and the creation of an online experience that transcends the generic offerings of mass distribution. They describe how much progress the Library has made towards these key aims, explaining that such development is driven by the need to create a framework that integrates new and existing systems, provides scope for growth, and meets the requirements of policy surrounding both licensing and access. In considering the Wellcome Library’s “Principles for Technical Change”, the authors examine where it can be difficult to achieve convergence with existing functionality. They identified three main areas: searching, viewing and archiving. They report that much has been achieved in addressing issues of convergence. They describe the feasibility study carried out in 2010 in respect of database modification, standards and workflow requirements. This work led to the formal development of the business requirements of the Library’s digital delivery system (DDS). The authors provide some thoughts on the advantages and pitfalls of the agile development approach agreed with its developers. They then move us on to the metadata standards that have been employed by the Wellcome Library DDS. They explain the technology supporting the DDS player where, ironically, delivery of video material proved less demanding than that of image-based content. They go on to explain the various and complex forms of access (governed by a range of licences and restrictions).
They also explain how the high level of granularity employed by the authentication and access system in the DDS ensures that an optimal number of items can be made accessible while efficiently restricting the few items that cannot be made available to all.
In their article eMargin: A Collaborative Textual Annotation Tool, Andrew Kehoe and Matt Gee describe their Jisc-funded eMargin collaborative textual annotation tool, showing how it has widened its focus through integration with Virtual Learning Environments at the Research and Development Unit for English Studies (RDUES) at Birmingham City University (BCU). They explain that while they were involved in developing automated compilation and analysis tools, such as WebCorp, their eMargin tool represented a departure since it is a manual annotation and analysis tool. They explain the project’s genesis in 2007 during analysis work on English literature. Ironically, eMargin grew out of the resistance on the part of some literary scholars to the highly automated, top-down approach the team applied, which was inimical to the conventional approach of the discipline. In discussions with staff and students at the School of English at BCU, they soon discovered the limitations of the analogue approach to close-reading annotation – frankly, a mess. Further research on their part revealed that annotation software had done little to keep pace and largely failed to meet the requirements of close-reading analysis and online sharing. Armed with a clearer set of requirements, the team developed a prototype Web-based annotation system which supported (and distributed to all users) close annotation right down to single-word level. This approach permitted immediate display and response, solving the problem of online sharing. Their description of the eMargin tool includes the design of basic annotation and the flexibility accorded to users, tagging of sub-sections of text or even single words, the various methods and formats of uploading the primary or ‘target’ text, the management of different user groups, look-up or reference functionality, search and retrieval of primary or annotated text, and various output formats. The authors then provide us with the architecture of their solution.
They go on to recount the technical difficulties they met in their initial design in respect of the granularity of the eMargin tool. Their solution was both pragmatic and practical! They also describe different, and also mobile-friendly, methods of text selection. The authors are pleased to report that eMargin is attracting considerable interest and is being trialled across disciplines from law and fine art to specialist healthcare.
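The core design idea the authors describe – annotations that attach to spans of a shared text right down to single-word granularity, visible to a whole user group – can be sketched in a few lines. This is purely an illustrative model, not eMargin’s actual implementation; all class and method names here are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start_word: int          # index of first annotated word (inclusive)
    end_word: int            # index of last annotated word (inclusive)
    author: str
    note: str
    tags: list = field(default_factory=list)

class AnnotatedText:
    """A target text tokenised into words, so that notes can attach
    to spans as small as a single word (hypothetical sketch)."""

    def __init__(self, text: str):
        self.words = text.split()
        self.annotations = []

    def annotate(self, start: int, end: int, author: str, note: str, tags=None):
        # Reject spans that fall outside the tokenised text.
        if not (0 <= start <= end < len(self.words)):
            raise IndexError("annotation span outside text")
        ann = Annotation(start, end, author, note, tags or [])
        self.annotations.append(ann)
        return ann

    def annotations_for_word(self, i: int):
        """All notes whose span covers word i -- the lookup a reader
        would trigger by selecting a word."""
        return [a for a in self.annotations
                if a.start_word <= i <= a.end_word]

text = AnnotatedText("April is the cruellest month breeding lilacs out of the dead land")
text.annotate(3, 4, "student1", "Ironic inversion of spring imagery")
text.annotate(3, 3, "student2", "Superlative form", tags=["grammar"])
print([a.author for a in text.annotations_for_word(3)])
```

Because every annotation is stored against word indices rather than against a marked-up copy of the text, overlapping notes from different group members coexist without conflict, which is what makes immediate shared display straightforward.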
Brian Kelly, Jonathan Hassell, David Sloan, Dominik Lukeš, E.A. Draffan and Sarah Lewthwaite advise Bring Your Own Policy: Why Accessibility Standards Need to Be Contextually Sensitive and argue that, rather than having a universal standard for Web accessibility, Web accessibility practices and policies need to be sufficiently flexible to cater for the local context. The authors explain that despite the increased pressures on conformance with Web accessibility guidelines, large-scale surveys have shown that they have had relatively little impact. Having reviewed previous critiques, they examine the code of practice BS 8878. They argue for a wider application than just to Web content, and that an alternative strategy could be adopted which would employ measures that are more context-sensitive. The authors point out that little attention has been paid to the principles underlying Global Accessibility Standards, and that in non-Western environments these may even prove to be counter-productive. They highlight the alternative of more evidence-based standards and examine their disadvantages. Having used the example of simple language to illustrate the difficulties, the authors offer another example in the provision of accessibility support to publicly available video material. They argue that standardisation of the deployment of Web products is more important than the conformance of the products themselves. The authors summarise the aims of BS 8878. They explain the scope of the framework that it adds to WCAG 2.0 and how it encourages Web site designers to think more strategically about all accessibility decisions surrounding their product. They conclude that globalisation is not limited to users: owners of sites do not wish to be constrained in their choice of international suppliers and products, but the latter are by no means standardised globally – and yet the benefits of an international standard are enormous.
In his tooled up article Improving Evaluation of Resources through Injected Feedback Surveys, Terry Reese suggests a novel approach for providing intercept survey functionality for librarians looking to simplify the gathering of user feedback on library-provided materials. He begins by explaining the difficulties librarians face in obtaining a truly representative assessment of the use of their electronic resources, and describes the drawbacks in current evaluation methods such as usage statistics. While pointing to the improvements in the analysis of raw usage data provided by tools like COUNTER and SUSHI, Terry accepts that they cannot be so successful in capturing the impact of the resources used on the research of students and staff. He emphasises that usage and impact arise from different sets of information, the latter coming best from users directly and ‘at the point of need.’ An injection survey, that is, a survey that seeks to obtain users’ opinions as they are working with the library’s resource, is regarded as one of the most appropriate means of obtaining impact data, but is far from easy to implement. This article explains the approach adopted in conjunction with a proxy server. Terry’s institution, Oregon State University Libraries (OSUL), was able to determine the optimal location at which to place an injection survey. In order to meet the institutional need for a fully fledged survey framework in the online delivery process, OSUL adopted a novel approach to determining how to route traffic in the context of potential survey injections. The new approach took a fraction of the time required when manual rewrites of URLs represented the best solution. The redesign of OSUL’s proxy workflow meant that a more user-directed assessment programme became possible, with all the benefits to be derived from injection surveys.
Another aspect of considerable interest to OSUL staff is whether it will be possible to share surveyed impact data with, for example, institutions with which they share collections. In conclusion, the author sees the collection of these targeted impact data as a means not only of understanding how libraries’ resources affect the work of users, but also of improving their services as well.
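The mechanism at the heart of an injection survey – the proxy serving a resource’s pages unchanged, except that for a sampled subset of sessions a small survey loader is spliced into the HTML on its way through – can be illustrated very simply. This is a minimal sketch of the general technique, not OSUL’s implementation; the snippet path and function names are hypothetical.

```python
# Hypothetical survey loader injected by the proxy into outgoing pages.
SURVEY_SNIPPET = '<script src="/survey/intercept.js"></script>'

def inject_survey(html: str, should_survey: bool) -> str:
    """Splice the survey loader just before </body> of a proxied page.

    `should_survey` would be decided per session (e.g. by sampling rate
    or by which resource is being accessed), so that only a fraction of
    users are ever interrupted with a survey.
    """
    if not should_survey or "</body>" not in html:
        return html  # pass the page through untouched
    return html.replace("</body>", SURVEY_SNIPPET + "</body>", 1)

page = "<html><body><h1>Journal article</h1></body></html>"
print(inject_survey(page, True))
print(inject_survey(page, False) == page)
```

The appeal of doing this at the proxy, as the article’s approach suggests, is that the survey reaches users at the point of need, inside the resource itself, without any manual rewriting of the resource’s URLs.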
In DataFinder: A Research Data Catalogue for Oxford, Sally Rumsey and Neil Jefferies explain the context and the decisions guiding the development of DataFinder, a data catalogue for the University of Oxford. The authors begin by describing their institution’s policy on research data and record management. The emerging infrastructure is developing across four areas: RDM planning; managing live data; discovery and location; and access, reuse and curation. It is anticipated that the catalogue, DataFinder, will provide the ‘glue’ across the four services mentioned. The authors go on to explain that DataFinder was designed to use metadata that support discovery and standard citation. They describe the anticipated applications as forming a data chain, linked to one another as a series of services relevant to the different stages of the research data lifecycle. They explain that metadata created to describe datasets may begin the cycle in quite a minimal form; however, they may be extended as the lifecycle proceeds. A core minimum metadata set has been defined, while contextual metadata, for example those required by funders’ policies, can be inserted where relevant. The application of controlled vocabularies and auto-completion to the manual creation process is seen both as user-friendly and as a means of ensuring consistency. The authors explain that the selection of a single subject classification scheme was rejected because of too many conflicting interests, but they explain why the FAST scheme has been favoured. The authors then move on to explain how the inherent architecture of their solution has the necessary characteristics to handle the anticipated increase in volume. DataReporter will be able to employ both usage and content statistics in the preparation of reports to comply with a number of types of funder mandate as well as for purposes such as capacity planning, budgeting and the REF.
They then pass on to the approach taken in the development of DataFinder’s user interface and the reuse of Solr for indexing and search. They explain how the generation of a UUID for each object assures robust uniqueness, which permits export without danger of duplication. They also reveal their method of handling duplicate records and mismatches in this context, which obviates the need for a highly complex process of programmatic record reconciliation. In their conclusions, they summarise how design decisions have inevitably been influenced by funder requirements, but that central to the design has been the means of simplifying the deposit process for the primary users, researchers.
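The uniqueness property behind the authors’ export argument is easy to demonstrate: a version-4 UUID carries 122 random bits, so identifiers minted independently by different catalogues are, for all practical purposes, guaranteed not to collide. The sketch below is an illustration of that general principle using Python’s standard `uuid` module, not DataFinder’s actual code; the record structure is invented.

```python
import uuid

def new_record_id() -> str:
    # uuid4 draws 122 random bits, so two independently minted ids are
    # effectively guaranteed to differ: records can be exported to, or
    # merged from, another catalogue without identifier clashes.
    return str(uuid.uuid4())

# Two catalogues minting ids with no coordination between them.
local_catalogue = {new_record_id(): {"title": "Survey dataset"} for _ in range(1000)}
imported_catalogue = {new_record_id(): {"title": "Imported dataset"} for _ in range(1000)}

# Merging needs no reconciliation step: the key spaces do not overlap.
merged = {**local_catalogue, **imported_catalogue}
print(len(merged))
```

Contrast this with locally sequential identifiers (record 1, record 2, …), where any merge of two catalogues would require exactly the kind of programmatic record reconciliation the authors say UUIDs let them avoid.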
In Augmented Reality in Education: The SCARLET+ Experience, Laura Skilton, Matt Ramirez, Guyda Armstrong, Rose Lock, Jean Vacher and Marie-Therese Gramstadt describe augmented reality in education case studies from the University of Sussex and the University for the Creative Arts. While the use of Augmented Reality (AR) has been largely concentrated in advertising and high-profile events, the SCARLET and SCARLET+ projects provide a better understanding of its use not only in learning and teaching but also with regard to special collections materials and objects. The subsequent project, SCARLET+, benefited from the experience garnered by the initial project. The work at the University of Sussex has been with the Mass Observation collection, which older readers will know dates back to the 1930s and World War II, and continues to the present day. The SCARLET+ application at Sussex created a structure which greatly benefited users by concentrating upon the interpretation of archival material. Such a structure could be applied to any discipline or collection. Initial investigation of AR applications indicated that they seemed to divide into two categories: very user-friendly but limited in scope; or very flexible but technically difficult. This impasse was resolved by the intervention of the project team. The authors give an example of the problems encountered in the development process and how they were solved. The authors explain how the project addressed the somewhat different environment of the University for the Creative Arts (UCA). Together with UCA staff, it was agreed to raise students’ awareness and exploitation of AR to the benefit of their learning. Working with UCA has allowed the project to develop approaches to AR development that support the handling of artefacts. When AR works well, it can be very impressive. Getting it to work well depends on the technology and infrastructure working properly, while training is essential for effective AR development.
In their article Engaging Researchers with Social Media Tools: 25 Research Things@Huddersfield, Graham Stone and Ellen Collins investigate whether 25 Research Things, an innovative online learning programme, could help researchers understand the value of Web 2.0 tools. 25 Research Things, an online learning course developed at the University of Huddersfield, is a pilot which seeks to do for researchers what 23 Things did for public librarians. In their article they investigate to what degree this innovation has benefited researchers. The programme was offered to both postgraduates and staff. As with 23 Things, 25 Research Things encouraged participants to experiment with new technologies and to record and share their reactions with other pilot members. The authors review the development of Web 2.0 usage, point to research on the culture of digital natives, and note how it is wise not to presume that all researchers will conform to the stereotype. The authors point to a study by Ian Rowlands and colleagues which indicates that researchers are beginning to employ social media at all stages of the research cycle, yet it may not be possible to claim that adoption among researchers is anywhere near universal. But acceptance and experimentation are definitely growing. They suggest an overall picture of decided interest in social media on the part of researchers, but little clarity as yet on how best to exploit Web 2.0 technologies. The authors describe how the two runs of the course were operated and the ways in which participation was encouraged, together with the role of the project authors in each cohort’s activity. They then describe their methodology, explaining the two forms of data collection for their evaluation process. The authors explore the themes that arose in the participants’ reflective blog posts in order to discover to what degree the online course succeeded in its aims.
The reactions to Google Docs, Mendeley, Diigo, Prezi, Slideshare, LinkedIn, Twitter, and Commoncraft were of interest, not just in terms of supporting research, but even on occasion in teaching and learning. They also point to the value that participants placed upon project team support, and occasionally upon its absence. In their discussion section, the authors mention the tools which seemed to attract participants in terms of supporting their research activity, one of which was not even included in the course! Once again, they also mention how Web 2.0 tools were seen as a support to postgraduates’ teaching work. Furthermore, the value of social media to participants in terms of staying up to date in areas beyond their immediate research activity was also appreciated. In their conclusions on this small pilot project, the authors do see connections between the level of drop-outs and the amount of material and time required to cover the course, the degree of project support, and other factors. Encouraging above all, to my mind, is the decision to include in future iterations of 25 Research Things for Researchers a face-to-face element for all participants together with the project team. Wise move.
In their article Hita-Hita: Open Access and Institutional Repositories in Japan Ten Years On, Ikuko Tsuchide, Yui Nishizono, Masako Suzuki, Shigeki Sugita, Kazuo Yamamoto and Hideki Uchijima introduce a number of ideas and projects that have enhanced the progress of the Open Access movement and institutional repositories in Japan over the last ten years. The term ‘hita-hita’ means to be ‘tenacious, persevering and to work step by step without giving up.’ To the authors this term bespeaks the attitude of repository managers in their promotion of repository development and support for the Open Access movement in the country. One of the groups that contributes a great deal to that sense of determination is the Digital Repository Federation (DRF), a community of repository managers from 145 universities and research institutions across Japan which the authors mention in their article. They also describe the environment and ethos of Japanese library professionals and in particular explain the effect of the 3-year tours of duty they carry out in different departments or institutions throughout their career and how this affects the accumulation of repository management experience and expertise. A major part of this contribution concentrates on the methodology of Japanese librarians in advocating the adoption of Open Access and the awareness-raising activities they undertake to encourage researchers to deposit. One of the aspects which I found so striking was the very effective low-tech components of their grass-roots activities which derive, I suspect, as much from Japanese culture as conscious policy, namely the face-to-face contact that is a central part of their various promotional campaigns. I suspect there are some worthwhile lessons to be drawn from this article, and readers interested in this subject will be pleased to learn that a follow-up article on consortial repositories in Japan will appear in Issue 72.
In his latest contribution, Mining the Archive: eBooks, Martin White opts to track the development of ebooks through the articles and reviews on the topic since 2001. While he would maintain there are pleasures particular to the printed book, and that it can render up information more quickly than an electronic search, Martin cannot deny that an ebook reader is clearly a practical alternative when one is in cramped conditions. In considering a contribution by Sarah Ormes in 2001, Martin reminds us of work funded by Jisc investigating the needs of UK academics and students. Martin’s analysis shows that 2000–2004 was a period of heightened interest in ebooks with the arrival of the first ebook standard and the advent of new ebook readers. He also highlights the importance of OCLC’s acquisition of netLibrary together with the emphasis, possibly influenced by Ariadne’s readership, on electronic textbooks. He also points to the value placed on the emerging technology by Penny Garrod of UKOLN, who had examined its importance within public library provision by late 2003, and recognised the need to see ebooks integrated with existing collections. Martin then moves on to 2006–2010, which he sees as a further and distinct period in ebook development. He warmly recommends the article by Brian Whalley which describes the situation just before the Amazon Kindle reached the market, and analyses how it might work with other learning applications. In his conclusions, Martin, writing as an ebook author himself, comments on the value of the format and on the higher currency many feel it achieves over its printed equivalent.
I trust you will enjoy Issue 71. Goodbye.
- Richard Waller. "Ariadne Carries On". July 2013, Ariadne Issue 71
- Jon Knight, Jason Cooper, Gary Brewerton. "Redeveloping the Loughborough Online Reading List System". July 2012, Ariadne Issue 69
- Paul Meehan, Paul Needham, Ross MacIntyre. "SUSHI: Delivering Major Benefits to JUSP". November 2012, Ariadne Issue 70
- Helen Leech. "23 Things in Public Libraries". July 2010, Ariadne Issue 64
This article has been published under a Creative Commons Attribution 3.0 Unported (CC BY 3.0) licence. Please note this CC BY licence applies to the textual content of this article; some images or other non-textual elements may be covered by special copyright arrangements.