Editorial Introduction to Issue 66: Sanity Check
With institutions seeking to increase the impact of their work, and conscious of the immediate impact of any event they organise, many readers will be interested in 10 Cheap and Easy Ways to Amplify Your Event, in which Marieke Guy provides a raft of suggestions to enhance participants' experience of, and involvement in, the event they are attending. The unconvinced will be pleased to hear that it is all Lorcan Dempsey's fault: it was he who, in 2007, made reference to the 'amplified conference'. But as Marieke points out, the suggestions in her article do not amount to a dismissal of professional events teams; rather, they constitute a range of strategies such teams might wish to adopt in an environment where the expectation is of doing more with fewer resources.
I make no apology for going off at a tangent here, since the All-staff Contact Day at UKOLN at which Marieke gave the presentation on which her article is based also provided a debating space in which it was possible to involve all colleagues, something which is not always so easy given the deep specialisms that abound in such an organisation. The topic of the discussion was: are information overload and too much multi-tasking impairing our ability to focus? By way of exposition, the meeting was treated to the tale of the Web-savvy entrepreneur who somehow managed to miss the message offering a $1.3 million deal amongst all the other messages reaching him over a multiplicity of channels. The topic of information overload has attracted considerable interest in this publication in the past, and it certainly animated the debate held at UKOLN that day. Readers unwise enough to have read my editorials in the past will not be too surprised to learn that I incline to the ayes in the debate. I well remember my confusion when a colleague described the prospect of half the conference hall typing away on laptops during a session as a good thing. They may be more surprised to learn that I nonetheless subscribe to the notion that careful use of a number of channels can be beneficial, and I suspect Marieke Guy would say the same for practitioners dipping their toe in the water of amplified events. The adoption of supplementary channels and technologies has to evolve like anything else: well planned, carefully adapted and subject to review. And with the advent of quieter keyboards, at least the sensory impairment is localised!
Marieke proceeds to work through the ten strategies of her article, beginning with video: how to make recordings and where to locate them for best exploitation. While video is essentially there for retrospective consideration, the second item, streaming, is very much real-time and offers a variety of opportunities to include the audience, physical and virtual. Marieke reminds us that the files derived from podcasting take up less space and can be hived off to be heard elsewhere, e.g. on the train to work. While taking photos is hardly revolutionary, the use of sites like Flickr to tag and store them is a welcome development, since they make reuse for the purposes of, say, a blog or Web page far easier. Similarly, slide-hosting as offered by SlideShare is proving to be a means of maximising the potential of someone accessing your presentation before, during or after the event. Whatever reactions one may have to the effect Twitter may have on the operation of one's brain, one can see from Marieke's usage of Twitter and related applications that it can be a powerful means of sending immediate and immediately readable messages about reactions, resources, what will happen next, etc. It also permits quantitative evaluation of the event involved. Meanwhile blogging, a medium that permits greater expansiveness than Twitter, allows organisers to keep participants interested and informed, as well as providing extended content for absentees to visit. Live blogging, which operates within the time window of the actual event, is closer to Twitter but is less restrictive as to content. In a sense one might argue that using webinars as a form of event is not so much amplification as replacement but, hair-splitting aside, it is worth noting that webinars are a good way of organising the material they cover.
Likewise the process of event collation, where a number of resources such as the live stream, Twitter feed, etc. are pulled together to form a lucid and comprehensive picture, can be very helpful. Marieke admits that not all can make use of outside agencies to collate their events, but it is possible to employ free collation tools or dashboard services, e.g. Netvibes, which rely principally on RSS feeds, though when combined they can grind exceedingly slowly. Finally, to round off her list of 'dos', Marieke reminds us to promote quickly, remember to archive, and preserve. Her list complete, Marieke wisely adds a few caveats about the employment of amplification strategies. Most significant in my view, not surprisingly, is the matter of copyright. Just as pertinent is her warning about the danger of compromising the quality of one's output by over-reliance on DIY tools and insufficient expertise. I would contend that working leaner is possible, but not without sufficient rehearsal and testing.
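For the curious, the RSS-based collation that dashboard services perform can be approximated in a few lines of code. The sketch below is purely illustrative (the feed contents are invented and inlined; a real collator would fetch live feed URLs): it merges items from several RSS documents into a single chronological timeline, which is essentially what an event dashboard does.

```python
# Minimal sketch of RSS-based event collation, in the spirit of
# dashboard services such as Netvibes. Feed contents are inlined
# and hypothetical; a real collator would fetch them over HTTP.
import xml.etree.ElementTree as ET
from email.utils import parsedate_to_datetime

TWITTER_FEED = """<rss><channel>
<item><title>Keynote starting now #ev11</title>
<pubDate>Mon, 14 Mar 2011 09:00:00 +0000</pubDate></item>
</channel></rss>"""

BLOG_FEED = """<rss><channel>
<item><title>Live blog: morning session</title>
<pubDate>Mon, 14 Mar 2011 09:30:00 +0000</pubDate></item>
</channel></rss>"""

def collate(*feeds):
    """Merge items from several RSS documents into one timeline."""
    items = []
    for feed in feeds:
        for item in ET.fromstring(feed).iter("item"):
            title = item.findtext("title")
            when = parsedate_to_datetime(item.findtext("pubDate"))
            items.append((when, title))
    return [title for _, title in sorted(items)]

timeline = collate(TWITTER_FEED, BLOG_FEED)
```

The sluggishness Marieke notes when many feeds are combined follows naturally from this design: each source must be polled and parsed in turn before the merged view can be refreshed.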
In Reading Van Gogh Online? Peter Boot invites us to consider how much may be learnt from examining a Web site's server logs to discover the patterns of behaviour of its users. His initial interest was to discover who accessed the online edition of the Van Gogh letters, to see if his assumptions about them were correct. While introducing the work of the LAIRAH Project, Peter points out that his article represents the first reported study of actual usage data of a scholarly digital edition. He clearly explains the value of server log analysis to site owners in terms of usability, content appreciation and even enhancements such as user personalisation, as well as the characterisation of site users. While identifying the advantages of log data analysis over other methods of information collection, e.g. questionnaires, Peter admits there is a limit to how much one can accurately infer from the requests. For example, he rightly describes the limitations in respect of user behaviour that no server log can properly quantify, such that even the description of a single user's session is invariably flawed, though he points out ways of mitigating this difficulty. He goes on to explain the process adopted to analyse his data. It is soon evident that he has been able to determine which functions within specific sub-page types are most and least preferred by visitors. He equally manages to determine users' preferences in respect of search and tables of contents (ToCs). Inevitably, in Peter's description, the whole concept of site navigability emerges, and how it can influence what users seek. Peter then turns his attention to the user behaviour identified by evidence of their search processes which, not altogether surprisingly, points to the limitations in many visitors' search skills; in particular, the analysis of terms entered for simple search highlighted the need for spell-checker support.
Finally, Peter turns to what may be determined by examination of individual sessions, though he admits to certain limitations. Nonetheless, it was possible to discern certain patterns in visitors' behaviour. While some of Peter's findings are unlikely to surprise, such as confirmation of the discerned dislike among visitors for advanced search options, other findings pointed to assured improvements, such as thumbnails to support textual ToCs; he also points to the value of application developers producing better ways of handling server data, with less reliance on data filtering in the user's browser, among other things. While recognising the limitations of his current work, Peter sees the potential of this approach to server data and is inspired to go further!
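The session reconstruction that underlies this kind of analysis can be sketched briefly. The toy example below is not Peter's method but a common heuristic of the sort his article discusses: requests are grouped by client address (real analyses often add the user-agent), and a gap of more than 30 minutes starts a new session. The log lines are invented and follow a simplified Apache-style format.

```python
# Minimal sketch of session reconstruction from server logs.
# Heuristic: group requests per client IP, and split into a new
# session whenever two requests are more than 30 minutes apart.
import re
from datetime import datetime, timedelta

# Hypothetical, simplified Apache-style log lines for illustration.
LOG_LINES = [
    '1.2.3.4 - - [10/Oct/2010:13:55:36 +0000] "GET /letters/001 HTTP/1.1" 200 512',
    '1.2.3.4 - - [10/Oct/2010:13:57:10 +0000] "GET /search?q=sunflowers HTTP/1.1" 200 812',
    '1.2.3.4 - - [10/Oct/2010:15:10:00 +0000] "GET /letters/002 HTTP/1.1" 200 640',
]

LINE_RE = re.compile(r'(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+)')

def sessions(lines, gap=timedelta(minutes=30)):
    """Split each client's requests into sessions at 30-minute gaps."""
    per_client = {}
    for line in lines:
        ip, ts, method, path = LINE_RE.match(line).groups()
        when = datetime.strptime(ts, "%d/%b/%Y:%H:%M:%S %z")
        per_client.setdefault(ip, []).append((when, path))
    result = []
    for ip, hits in per_client.items():
        hits.sort()
        current = [hits[0][1]]
        for (prev, _), (now, path) in zip(hits, hits[1:]):
            if now - prev > gap:          # long silence: close the session
                result.append((ip, current))
                current = []
            current.append(path)
        result.append((ip, current))
    return result

result = sessions(LOG_LINES)
```

The flaws Peter describes are visible even here: the same person returning after lunch counts as a second "session", while two people behind one proxy address would be merged into one.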
In her article entitled Turning off Tap into Bath, Ann Chapman describes for us the lifecycle of a demonstrator database and the development of a preservation policy for its content and software, in respect of the project she has guided and reported on for Ariadne in a number of articles. The project Tap into Bath successfully demonstrated the value of a database of collection descriptions with a common geographic focus. Ann's article describes both the preservation strategy she developed and the steps that were taken to preserve information about the database and the software. There were two components to Tap into Bath: the demonstrator database for collections in the Bath area and a downloadable open source resource that others could reuse to create their own collections database. The downloadable resource was a blank version of the database used for Tap into Bath plus the associated search and results display interface software. The look and feel of the database search and results display pages could be customised to reflect the new resource; the labels for the data elements could also be altered. Ann then proceeds to describe the decisions that arose from the realisation in 2010 that the database could not be continued in its present form, and the preservation strategy that evolved through consideration of a series of pertinent questions. It was decided that providing inaccurate data was not an option, so the live database would be taken down; that since there was still value in offering the resource for reuse, the relevant files would be hosted on the UKOLN Web site; and that an archive copy of the populated database would be made while the database and software continued to be offered as a free download. How would one describe the approach that was adopted?
A careful and thoughtful review of the components of this project, followed by equally careful and comprehensive action to ensure that the project was wrapped up without loose ends or loss of database content, project information or, perhaps most important of all, the potential to reuse this work in the future.
At the core of the Phytomedicine Programme of the Faculty of Veterinary Science, University of Pretoria, is research in which its post-graduate students collaborate with a wide range of professionals. RefShare: A Community of Practice to Enhance Research Collaboration by Tertia Coetsee looks at the role of RefShare and the information specialist in supporting the Programme's research. The difficulty facing the Programme was the lack of personnel to provide and obtain up-to-date information in respect of the post-graduate students' work. Tertia explains how RefWorks and, in particular, RefShare seemed to meet the Programme's requirements. She then describes how the research environment is altering, with an increased drive towards information specialists engaging more closely with researchers, or as she terms them, 'clients'. Tertia goes on to explain what benefits are to be derived from research collaboration. She further explains how the library supporting veterinary medicine in her institution has deployed information specialists to support staff and students. They are there to add value, which involves in part the inclusion and generation of electronic information such as Web portals, digital collections and e-publications. Moreover, they are becoming far more involved in pedagogical activity than would be expected of an information specialist in a more traditional setting. In describing the work of the specialists working with phytomedicine, she identifies the need for a 'community of practice' for its students and researchers. It would support their needs by supplying a platform for core articles, publications, pre-prints, etc. Having detailed the content that was pre-loaded onto the RefShare platform, Tertia then describes how its widespread use by the Programme was organised. She also details how the information specialist involved is able to measure participants' usage of the RefShare databases.
Tertia also candidly details the limitations of the RefShare system, which principally revolved around the lack of functionality to self-upload research findings to the database, as well as collaborators' concerns over privacy. As is often the case, it was recognised that many of the initial difficulties experienced by RefShare users related to their own level of ease with the system, and that more targeted training would reduce their reluctance to use it. A particular advantage was identified among the benefits offered by the platform in relation to the asynchronous progress of students across the Programme: it mattered much less if students were at different stages of completion of their studies. Furthermore, from the standpoint of the information specialists in the library, the support they offered the Programme's staff and students earned their library a good deal of kudos. It is interesting to note the changing role of the information specialist as described here by Tertia Coetsee; it reminds me of a similar theme elsewhere in Ariadne.
Readers who recall the contribution from Mona Hess and colleagues, E-Curator: A 3D Web-based Archive for Conservators and Curators, will be tempted by the next article. It would seem that the difficulty encountered in moving a cultural artefact some 400 metres across his campus to be 3D-imaged inspired in Richard Collmann a whole new project: a portable object imaging rig. In Developments in Virtual 3D Imaging of Cultural Artefacts he describes the path he followed in making improvements not just at the technical level but to the whole operation of digital artefact scanning. As work proceeded on the creation of a cross-disciplinary learning object, it soon became clear to all interested parties that the V3D digital representation of its focal artefact, Ned Kelly's post-mortem plaster head, had much to offer. Moreover, the problem of moving artefacts to static imaging rigs from institutions as widely distributed as they are in Australia only made the case for a mobile rig the more compelling. Richard also describes some of the benefits of university-based development he was able to exploit to keep costs and delays to an advantageous minimum. He explains that extensive use and evaluation of the initial rig led on to an improved model which has provided not only reductions in production costs but also new features, some of which can be retro-fitted to the original model. Richard also illustrates how elements of the imaging workflow have generated efficiencies. He points out that one of the side-effects of increasingly sophisticated imaging devices is vastly increased image metadata: it is now possible to log camera settings as well as camera positional and rig incremental settings. He also describes possible future directions in this work, and how forthcoming developments in hardware are affecting strategy.
He closes his article by emphasising the role played by the multi-disciplinary approach adopted by the Project and the effect it is having on the use and analysis of cultural artefacts. All technical developments apart, it seems that just as the instruction to the creator of Coca-Cola to 'bottle it' really put the beverage on the map, so portability may well make the 3D scanning rig a major leap forward.
Frequent readers of Ariadne will know that file formats are central to any consideration of digital preservation strategies, as expressed in previous articles, but as Steve Hitchcock and David Tarrant explain in Characterising and Preserving Digital Repositories: File Format Profiles, they can also be used to reveal the fingerprints of emerging types of institutional repositories. They begin by describing the changes to the preservation of scholarly content brought about by the spread of born-digital material. Likewise, the advent of digital content, open access and institutional repositories has greatly increased exposure. They contend that just as institutional repositories raise the profile of scholarly material, so must they fulfil the need for preservation, which they argue 'should be rooted in access and usage.' They also point out that institutional repositories frequently possess preservation tools embedded in their interfaces, which places them well ahead of most Web sites in the progress towards effective digital preservation. They also identify the common characteristic of larger volumes of digital content in comparison with printed resources, and all that implies. They reiterate the frequently made point that preservation spans the whole content lifecycle, but emphasise the need to plan for digital content one is yet to receive, and for the emergence of new applications and file formats as yet unknown. They then ask the reader to consider how institutional repositories might evolve and, as they term it, 'coalesce', and point to the work of the JISC KeepIt Project and its focus on preservation concerns. In terms of future content and its format, the authors point to methods of auditing institutional repositories' content with tools such as the Digital Asset Framework (DAF).
They go on to explain the central role of file format identification in preservation operations and the emergence of a number of tools to effect this important task. The authors contend that file format identification is central to the comparison of the profiles of the KeepIt Project exemplars. They then explain how determining the predominant format profile of an institutional repository enables one to identify the differing natures of repositories. They do point out that, given the scale of some repositories' content, even a digital analysis process can take time. They also state that the subsequent interpretation of a file format profile analysis depends on knowledge of the features of the various audit tools and awareness of the significance of unknown and unclassified formats. The future ability to add the characteristics of manually identified formats to the audit tool is not insignificant; they illustrate this capacity to add identifications to the tool's memory by describing the assimilation of specialised files for crystal structures. The authors move on from consideration of the KeepIt exemplars to the characteristics by which one determines risk to varying formats. They point to the need to determine the technical data of the content held, including, naturally, file formats, in any effort to plan preservation. They propose better economic management of preservation operations through finer tuning of, and discrimination between, format identification and preservation-related decision-making, e.g. format migration. The approach they recommend will, they contend, mean that 'each exemplar profile gives the respective managers a new insight into their repositories.'
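For readers unfamiliar with how such identification works, the core technique can be illustrated in miniature. The sketch below is not the authors' tooling: it shows only the general principle of signature-based identification, matching a file's leading bytes ("magic numbers") against a small registry of three well-known signatures, where real tools draw on far larger registries such as PRONOM. A repository's format profile then falls out of counting identifications across its holdings.

```python
# Toy illustration of signature-based file format identification:
# match a file's leading bytes against known "magic number" signatures.
# The three signatures below are well known; real audit tools use
# much larger registries (e.g. PRONOM).
from collections import Counter

SIGNATURES = [
    (b"%PDF-", "PDF document"),
    (b"\x89PNG\r\n\x1a\n", "PNG image"),
    (b"PK\x03\x04", "ZIP container (also DOCX/ODF/EPUB)"),
]

def identify(first_bytes: bytes) -> str:
    """Return a format name for the leading bytes, or 'unknown format'."""
    for magic, name in SIGNATURES:
        if first_bytes.startswith(magic):
            return name
    return "unknown format"

# A format profile is then a count of identifications across holdings;
# the byte strings here stand in for the opening bytes of real files.
profile = Counter(identify(b) for b in
                  [b"%PDF-1.4 ...", b"PK\x03\x04...", b"%PDF-1.7"])
```

The 'unknown format' branch is where the interpretive work the authors describe begins: an unrecognised signature may mean a genuinely novel format, or simply a gap in the registry that a manual identification could fill.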
In their article about Saving the Sounds of the UK in the UK SoundMap, Maureen Pennock and Chris Clark point to the changes being witnessed in libraries and the experience they are offering users, including the impact born-digital materials are having on library collections. One such example is the UK SoundMap, an online crowd-sourcing activity driven by the British Library in partnership with the Noise Futures Network, designed to engage and build the community in the development of a new born-digital audio-visual research resource. This activity forms part of the BL's Unlocking and Integrating Audio Visual Content (UIAVC) Project and works in parallel with a range of other initiatives which they describe for us. All such activities, they explain, 'express the commitment the Library now has towards integrating audio-visual media within the research experience.' They then turn to the significance of mobile social networking in the development of crowd-sourced real-time data. UK SoundMap (UKSM), they contend, is ground-breaking in its exploitation of recent technological innovations, including GIS and geo-tagging technologies combined with audio-visual recording and metadata creation delivered over the mobile phone. UKSM aims to exploit the different means of using such crowd-sourced data to create worthwhile new digital resources about the UK landscape and the reactions of ordinary people towards it. The authors then move on to a description of the technology that delivers such aims, including Audioboo and a Google Map interface. Contributors to UKSM are advised on best practice in recording and tagging their audio file (or 'boo') and, if accepted, their boo is entered on the RSS feed of all UKSM submissions. The use of mapping as the means of search allows site users to locate the position of all accepted sound recordings and play them immediately.
There is little doubt that experts in the field of crowd-sourcing, proponents of citizen science, etc. will be interested in the profile and origins of the contributors, though detailed data are somewhat lacking at present. The authors emphasise the importance of reducing barriers to participation in this crowd-sourcing effort through the choice of accessible, user-friendly technologies. They also provide information on the nature of the sound recordings that have been submitted: while it is difficult to identify what might be characterised as a typical recording, they do describe frequent common features. Maureen and Chris point out the inevitably lower quality of sound recordings made by amateurs with the ubiquitous but non-specialist mobile phone microphone, but indicate that most rejections by the Project relate not to technical quality but to reasons such as copyright or obscene content. In truth, they are more concerned about the quality of the file metadata, which is frequently wanting. While there is the usual caution about use of third-party technology, they feel UKSM is low-risk in terms of technological sustainability. They are no doubt very wise to consider inevitable human failings the greater risk in what seems to be a very good example of Web 2.0 collaboration, where the rapid and cost-effective accumulation of material from amateurs is balanced against the guarantee of authenticity through expert moderation.
Marking 10 Years of Zetoc, Jane Ronson provides a history of the service and an overview of developments expected in the near future. Jane describes what the three main features of Zetoc (Search, Alert and RSS) are designed to do to support its users. Following her history of the service, Jane provides an overview of the evaluation conducted in 2010. She goes on to explain that funding for enhancements to Zetoc has been acquired and that it will cover 'the development of personalisation features, aggregation of Table of Contents (TOC) services, the expansion of Open Access (OA) content and a new look interface.'
As usual, we offer our At the Event section, as well as reviews of works covering Resource Description and Access, what innovation and creativity really mean and the myths that surround them, how RSS and blogging can be used by librarians, and the open source community and open source software.
I hope you will enjoy Issue 66.
- Matt Richtel, "Attached to Technology and Paying a Price", New York Times, 6 June 2010 http://nyti.ms/bWGtvV
- Sarah Houghton-Jan, "Being Wired or Being Tired: 10 Ways to Cope with Information Overload", July 2008, Ariadne Issue 56
- Alison Baud and Ann Chapman, "Tap into Bath", July 2004, Ariadne Issue 40 http://www.ariadne.ac.uk/issue40/baud-chapman/
- Stephanie Round, "Tap into Bath Takes Off", January 2005, Ariadne Issue 42 http://www.ariadne.ac.uk/issue42/tapintobath-rpt/
- Allan Parsons, "Academic Liaison Librarianship: Curatorial Pedagogy or Pedagogical Curation?", October 2010, Ariadne Issue 65
- Mona Hess, Graeme Were, Ian Brown, Sally MacDonald, Stuart Robson and Francesca Simon Millar, "E-Curator: A 3D Web-based Archive for Conservators and Curators", July 2009, Ariadne Issue 60
- Chris Rusbridge, "Excuse Me... Some Digital Preservation Fallacies?", February 2006, Ariadne Issue 46 http://www.ariadne.ac.uk/issue46/rusbridge/
- Dave Thompson, "A Pragmatic Approach to Preferred File Formats for Acquisition", April 2010, Ariadne Issue 63 http://www.ariadne.ac.uk/issue63/thompson/