Overview of content related to 'xml' http://www.ariadne.ac.uk/taxonomy/term/70/all RSS feed with Ariadne content related to the specified tag Digitisation and e-Delivery of Theses from ePrints Soton http://www.ariadne.ac.uk/issue72/ball-fowler <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue72/ball-fowler#author1">Julian Ball</a> and <a href="/issue72/ball-fowler#author2">Christine Fowler</a> describe the partnership between the University of Southampton’s Library Digitisation Unit and its institutional repository for digitising and hosting theses.</p> </div> </div> </div> <p>The Hartley Library at the University of Southampton has in excess of 15,000 bound PhD and MPhil theses on 340 linear metres of shelving. Consultation of the hard-copy version is now restricted to readers making a personal visit to the Library, as no further microfiche copies are being produced by the British Library and no master copies of theses are lent from the Library.
Retrieval of theses from storage for readers and their subsequent return requires effort from a large number of staff.</p> <p><a href="http://www.ariadne.ac.uk/issue72/ball-fowler" target="_blank">read more</a></p> issue72 feature article Tue, 30 Jul 2013 13:13:08 +0000 LinkedUp: Linking Open Data for Education http://www.ariadne.ac.uk/issue72/guy-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue72/guy-et-al#author1">Marieke Guy</a>, <a href="/issue72/guy-et-al#author2">Mathieu d’Aquin</a>, <a href="/issue72/guy-et-al#author3">Stefan Dietze</a>, <a href="/issue72/guy-et-al#author4">Hendrik Drachsler</a>, <a href="/issue72/guy-et-al#author5">Eelco Herder</a> and <a href="/issue72/guy-et-al#author6">Elisabetta Parodi</a> describe the activities carried out by the LinkedUp Project looking at the promotion of open data in education.</p> </div> </div> </div> <p>In the past, discussions around Open Education have tended to focus on content, primarily Open Educational Resources (OER): freely accessible, openly licensed resources that are used for teaching, learning, assessment and research purposes.
However, Open Education is a complex beast made up of many aspects, of which the opening up of data is one important element.</p> <p><a href="http://www.ariadne.ac.uk/issue72/guy-et-al" target="_blank">read more</a></p> issue72 feature article Tue, 04 Feb 2014 13:12:30 +0000 Developing a Prototype Library WebApp for Mobile Devices http://www.ariadne.ac.uk/issue71/cooper-brewerton <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue71/cooper-brewerton#author1">Jason Cooper</a> and <a href="/issue71/cooper-brewerton#author2">Gary Brewerton</a> describe the development of a prototype WebApp to improve access to Library systems at Loughborough University for mobile devices.</p> </div> </div> </div> <p>Reviewing Loughborough University Library’s Web site statistics over a 12-month period (October 2011 – September 2012) showed a monthly average of 1,200 visits via mobile devices (e.g. smart phones and tablet computers).
These visits account for 4% of the total monthly average visits, but plotting the percentage of visits per month from such mobile devices showed a steady increase over the period, rising from 2% to 8%. These figures were supported by comparison with statistics from the Library’s blog, where, over the same period, there was also a steady increase in the percentage of visits from mobile devices. This increase was on a smaller scale than for the Web site, rising from 0.5% up to 4%.</p> <p>Having identified this increase in the usage of mobile devices, it was decided to investigate ways to support mobile access more effectively. As part of this investigation, the Library's Systems Team undertook the development of a prototype mobile app.</p> <h2 id="Deciding_the_Prototype-s_Features">Deciding the Prototype's Features</h2> <p>The first task undertaken was to produce a list of functionality that could be included in the Library WebApp. The list was based upon current Library services and consisted of the following:</p> <ul> <li>Support library catalogue searching</li> <li>Display opening hours (pulled from the Library Web site so data can be maintained in one location)</li> <li>Display current item loans, requests and holds <ul> <li>Indicate overdue items</li> <li>Indicate recalled items</li> <li>Offer option to renew loaned items</li> <li>Offer option to cancel requests for items</li> </ul> </li> <li>Reading lists <ul> <li>Ensure module list displays all modules for which the user is registered</li> <li>Should handle multiple levels of reading lists</li> <li>Include thumbnails</li> <li>Include library holding information</li> </ul> </li> <li>Display current room/PC bookings <ul> <li>Display list of bookings including resource name, start time and end time for each booking</li> <li>Offer option to cancel a room/PC booking</li> <li>Offer option to make a room/PC booking</li> </ul> </li> <li>Display upcoming library events (pulled from the Library Web site) <ul> <li>Include both upcoming workshops and events</li> </ul> </li> <li>Display library news (taken as a feed from our Library blog)</li> <li>Offer feedback option</li> </ul> <p>After reviewing this list, it was decided to leave out the library catalogue searching feature, as the Library's discovery tool (Ex Libris’s Primo [<a href="#1">1</a>]) was scheduled for a number of updates that would improve its support for mobile devices. It was therefore decided to wait and see how the improved mobile interface performed before deciding how to integrate it into the mobile app.</p> <p>Additionally, it was decided not to implement a number of the other features: those that would either require new APIs to be created for other systems or those that would alter the information stored in those systems. These features would be carried forward for implementation in a future version of the mobile app. Consequently, the features excluded from the pilot version were:</p> <ul> <li>library catalogue searching</li> <li>the option to renew loaned items and cancel requested items</li> <li>the option to make or cancel a room/PC booking</li> </ul> <h2 id="WebApp_versus_Native_Apps">WebApp versus Native Apps</h2> <p>An important early decision was whether to create the mobile app as a WebApp or as a number of native apps. A native app is one developed in the native language of the platform (Objective-C for iPhone/iPad devices, Java for Android devices, etc.) and usually delivered via an app store (iTunes for Apple, Google Play for Android, etc.). A WebApp is developed in HTML5 and JavaScript, and is delivered to the mobile device via the World Wide Web.</p> <p>There are pros and cons to developing a mobile app as a native app or as a WebApp.
Native apps have full access to a mobile device's resources but need to be developed as a separate app for each platform on which they are to be made available. Conversely, developing a mobile app as a WebApp restricts the resources that can be accessed to those available to the device's Web browser, although a single WebApp can work on multiple platforms.</p> <p><a href="http://www.ariadne.ac.uk/issue71/cooper-brewerton" target="_blank">read more</a></p> issue71 tooled up Mon, 10 Jun 2013 13:33:09 +0000 eMargin: A Collaborative Textual Annotation Tool http://www.ariadne.ac.uk/issue71/kehoe-gee <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue71/kehoe-gee#author1">Andrew Kehoe</a> and <a href="/issue71/kehoe-gee#author2">Matt Gee</a> describe their Jisc-funded eMargin collaborative textual annotation tool, showing how it has widened its focus through integration with Virtual Learning Environments.</p> </div> </div> </div> <p>In the Research and Development Unit for English Studies (RDUES) at Birmingham City University, our main research field is Corpus Linguistics: the compilation and analysis of large text collections in order to extract new knowledge about language.
We have previously developed the WebCorp [<a href="#1">1</a>] suite of software tools, designed to extract language examples from the Web and to uncover frequent and changing usage patterns automatically. eMargin, with its emphasis on <em>manual</em> annotation and analysis, was therefore somewhat of a departure for us.</p> <p>The eMargin Project came about in 2007 when we attempted to apply our automated Corpus Linguistic analysis techniques to the study of English Literature. To do this, we built collections of works by particular authors and made these available through our WebCorp software, allowing other researchers to examine, for example, how Dickens uses the word ‘woman’, how usage varies across his novels, and which other words are associated with ‘woman’ in Dickens’ works.</p> <p>What we found was that, although our tools were generally well received, there was some resistance amongst literary scholars to this large-scale automated analysis of literary texts. Our top-down approach, relying on frequency counts and statistical analyses, was contrary to the traditional bottom-up approach employed in the discipline, relying on the intuition of literary scholars. In order to develop new software to meet the requirements of this new audience, we needed to gain a deeper understanding of the traditional approach and its limitations.</p> <p style="text-align: center; "><img alt="logo: eMargin logo" src="http://ariadne-media.ukoln.info/grfx/img/issue71-kehoe-gee/emargin-logo.png" style="width: 250px; height: 63px;" title="logo: eMargin logo" /></p> <h2 id="The_Traditional_Approach">The Traditional Approach</h2> <p>A long-standing problem in the study of English Literature is that the material being studied – the literary text – is often many hundreds of pages in length, yet the teacher must encourage class discussion and focus this on particular themes and passages. 
Compounding the problem is the fact that, often, not all students in the class have read the text in its entirety.</p> <p>The traditional mode of study in the discipline is ‘close reading’: the detailed examination and interpretation of short text extracts down to individual word level. This variety of ‘practical criticism’ was greatly influenced by the work of I.A. Richards in the 1920s [<a href="#2">2</a>] but can actually be traced back to the 11<sup>th</sup> Century [<a href="#3">3</a>]. What this approach usually involves in practice in the modern study of English Literature is that the teacher will specify a passage for analysis, often photocopying this and distributing it to the students. Students will then read the passage several times, underlining words or phrases which seem important, writing notes in the margin, and making links between different parts of the passage, drawing out themes and motifs. On each re-reading, the students’ analysis gradually takes shape (see Figure 1). 
Close reading takes place either in preparation for seminars or in small groups during seminars, and the teacher will then draw together the individual analyses during a plenary session in the classroom.</p> <p><a href="http://www.ariadne.ac.uk/issue71/kehoe-gee" target="_blank">read more</a></p> issue71 tooled up Thu, 04 Jul 2013 17:20:45 +0000 DataFinder: A Research Data Catalogue for Oxford http://www.ariadne.ac.uk/issue71/rumsey-jefferies <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue71/rumsey-jefferies#author1">Sally Rumsey</a> and <a href="/issue71/rumsey-jefferies#author2">Neil Jefferies</a> explain the context and the decisions guiding the development of DataFinder, a data catalogue for the University of Oxford.</p> </div> </div> </div> <p>In 2012 the University of Oxford Research Committee endorsed a university ‘Policy on the management of research data and records’ [<a href="#1">1</a>]. Much of the infrastructure to support this policy is being developed under the Jisc-funded Damaro Project [<a href="#2">2</a>].
The nascent services that underpin the University’s RDM (research data management) infrastructure have been divided into four themes:</p> <p><a href="http://www.ariadne.ac.uk/issue71/rumsey-jefferies" target="_blank">read more</a></p> issue71 feature article Thu, 13 Jun 2013 20:23:22 +0000 Editorial Introduction to Issue 70 http://www.ariadne.ac.uk/issue70/editorial <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/editorial#author1">The editor</a> introduces readers to the content of <em>Ariadne</em> Issue 70.</p> </div> </div> </div> <p>Welcome to Issue 70 of <em>Ariadne</em>, which is full to the brim with feature articles and a wide range of event reports and book reviews.</p> <p><a href="http://www.ariadne.ac.uk/issue70/editorial" target="_blank">read more</a></p> issue70 editorial Fri, 14 Dec 2012 14:20:23 +0000 The LIPARM Project: A New Approach to Parliamentary Metadata http://www.ariadne.ac.uk/issue70/gartner <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/gartner#author1">Richard Gartner</a> outlines a collaborative project which aims to link together the digitised UK Parliamentary record by providing a metadata scheme, controlled vocabularies and a Web-based interface.</p> </div> </div> </div> <p>Parliamentary historians in the United Kingdom are particularly fortunate as their key primary source, the record of Parliamentary proceedings, is almost entirely available in digitised form. Similarly, those needing to consult and study contemporary proceedings as scholars, journalists or citizens have access to the daily output of the UK's Parliaments and Assemblies in electronic form shortly after their proceedings take place.</p> <p>Unfortunately, the full potential of this resource for all of these users is limited by the fact that it is scattered throughout a heterogeneous information landscape and so cannot be approached as a unitary resource. It is not a simple process, for instance, to distinguish the same person if he or she appears in more than one of these collections or, for that matter, to identify the same legislation if it is referenced inconsistently in different resources.
As a result, using it for searching or for more sophisticated analyses becomes problematic when one attempts to move beyond one of its constituent collections.</p> <p>Finding some mechanism to allow these collections to be linked and so used as a coherent, integrated resource has been on the wish-list of Parliamentary historians and other stakeholders in this area for some time. In the mid-2000s, for instance, the History of Parliament Trust brought together the custodians of several digitised collections to examine ways in which this could be done. In 2011, some of these ideas came to fruition when JISC (Joint Information Systems Committee) funded a one-year project named LIPARM (Linking the Parliamentary Record through Metadata) which aimed to design a mechanism for encoding these linkages within XML architectures and to produce a working prototype for an interface which would enable the potential offered by this new methodology to be realised in practice.</p> <p>This article explains the rationale of the LIPARM Project and how it uses XML to link together core components of the Parliamentary record within a unified metadata scheme. 
It introduces the XML schema, Parliamentary Metadata Language (PML), which was created by the project and the set of controlled vocabularies for Parliamentary proceedings which the project also created to support it.&nbsp; It also discusses the experience of the project in converting two XML-encoded collections of Parliamentary proceedings to PML and work on the prototype Web-based union catalogue which will form the initial gateway to PML-encoded metadata.</p> <h2 id="Background:_The_Need_for_Integrated_Parliamentary_Metadata">Background: The Need for Integrated Parliamentary Metadata</h2> <p>The UK's Parliamentary record has been the focus of a number of major digitisation initiatives which have made its historical corpus available in almost its entirety: in addition, the current publishing operations of the four Parliaments and Assemblies in the UK ensure that the contemporary record is available in machine-readable form on a daily basis. Unfortunately, these collections have limited interoperability owing to their disparate approaches to data and metadata which renders the federated searching and browsing of their contents currently impossible. 
In addition, the disparity of platforms on which they are offered, and the wide diversity of user interfaces they use to present the data (as shown by the small sample in Figure 1), render extensive research a time-consuming and cumbersome process if it is necessary to extend its remit beyond the confines of a single collection.</p> <p style="text-align: center; "><img alt="Figure 1: Four major collections of Parliamentary proceedings, each using a different interface" src="http://ariadne-media.ukoln.info/grfx/img/issue70-gartner/liparm-figure1.png" style="width: 640px; height: 231px;" title="Figure 1: Four major collections of Parliamentary proceedings, each using a different interface" /></p> <p style="text-align: left; "><strong>Figure 1: Four major collections of Parliamentary proceedings, each using a different interface</strong></p> <p>A more integrated approach to Parliamentary metadata offers major potential for new research: it would, for instance, allow the comprehensive tracking of an individual's career, including all of their contributions to debates and proceedings. 
It would allow the process of legislation to be traced automatically, voting patterns to be analysed, and the emergence of themes and topics in Parliamentary history to be analysed on a large scale.</p> <p>One example of the linkages that could usefully be made in an integrated metadata architecture can be seen in the career of Sir James Craig, the Prime Minister of Northern Ireland from 1921 to 1940.&nbsp; Figure 2 illustrates some of the connections that could be made to represent his career:-</p> <p style="text-align: center; "><img alt="Figure 2: Sample of potential linkages for a Parliamentarian" src="http://ariadne-media.ukoln.info/grfx/img/issue70-gartner/figure2-james-craig-v3.jpg" style="width: 640px; height: 331px;" title="Figure 2: Sample of potential linkages for a Parliamentarian" /></p> <p style="text-align: center; "><strong>Figure 2: Sample of potential linkages for a Parliamentarian</strong></p> <p>The connections shown here are to the differing ways in which he is named in the written proceedings, to his tenures in both Houses, the constituencies he represented, the offices he held and the contributions he made to debates. Much more complex relationships are, of course, possible and desirable.</p> <p>The advantages of an integrated approach to metadata which would allow these connections to be made have long been recognised by practitioners in this field, and several attempts have been made to create potential strategies for realising them. But it was only in 2011 that these took more concrete form when a one-day meeting sponsored by JISC brought together representatives from the academic, publishing, library and archival sectors to devise a strategy for integrating Parliamentary metadata. 
Their report proposed the creation of an XML schema for linking core components of this record and the creation of a series of controlled vocabularies for these components which could form the basis of the semantic linkages to be encoded in the schema [<a href="#1">1</a>]. These proposals then formed the basis of a successful bid to JISC for a project to put them into practice: the result was the LIPARM (Linking the Parliamentary Record through Metadata) Project.</p> <p><a href="http://www.ariadne.ac.uk/issue70/gartner" target="_blank">read more</a></p> issue70 feature article Fri, 30 Nov 2012 19:41:15 +0000 Motivations for the Development of a Web Resource Synchronisation Framework http://www.ariadne.ac.uk/issue70/lewis-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/lewis-et-al#author1">Stuart Lewis</a>, <a href="/issue70/lewis-et-al#author2">Richard Jones</a> and <a href="/issue70/lewis-et-al#author3">Simeon Warner</a> explain some of the motivations behind the development of the ResourceSync Framework.</p> </div> </div> </div> <p>This article describes the motivations behind the development of the ResourceSync Framework. The Framework addresses the need to synchronise resources between Web sites. Resources cover a wide spectrum of types, such as metadata, digital objects, Web pages, or data files. There are many scenarios in which the ability to perform some form of synchronisation is required.
Examples include aggregators such as Europeana that want to harvest and aggregate collections of resources, or preservation services that wish to archive Web sites as they change.</p> <p><a href="http://www.ariadne.ac.uk/issue70/lewis-et-al" target="_blank">read more</a></p> issue70 tooled up Mon, 03 Dec 2012 15:58:46 +0000 The ARK Project: Analysing Raptor at Kent http://www.ariadne.ac.uk/issue70/lyons <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/lyons#author1">Leo Lyons</a> describes how University of Kent librarians are benefitting from Raptor's ability to produce e-resource usage statistics and charts.</p> </div> </div> </div> <p>It is indisputable that the use of e-resources in university libraries has increased exponentially over the last decade and there would be little disagreement with a prediction that usage is set to continue to increase for the foreseeable future.
The majority of students both at undergraduate and post-graduate level now come from a background where online access is the <em>de facto</em> standard.</p> <p><a href="http://www.ariadne.ac.uk/issue70/lyons" target="_blank">read more</a></p> issue70 feature article Tue, 04 Dec 2012 17:21:49 +0000 SUSHI: Delivering Major Benefits to JUSP http://www.ariadne.ac.uk/issue70/meehan-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/meehan-et-al#author1">Paul Meehan</a>, <a href="/issue70/meehan-et-al#author2">Paul Needham</a> and <a href="/issue70/meehan-et-al#author3">Ross MacIntyre</a> explain the enormous time and cost benefits in using SUSHI to support rapid gathering of journal usage reports into the JUSP service.</p> </div> </div> </div> <p>A full-scale implementation of the Journal Usage Statistics Portal (JUSP) would not be possible without the automated data harvesting afforded by the Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol. Estimated time savings in excess of 97% compared with manual file handling have allowed JUSP to expand its service to more than 35 publishers and 140 institutions by September 2012. An in-house SUSHI server also allows libraries to download quality-checked data from many publishers via JUSP, removing the need to visit numerous Web sites. The protocol thus affords enormous cost and time benefits for the centralised JUSP service and for all participating institutions.
JUSP has also worked closely with many publishers to develop and implement SUSHI services, pioneering work to benefit both the publishers and the UK HE community.</p> <p style="text-align: center; "><img alt="Journal Usage Statistics Portal (JUSP)" src="http://ariadne-media.ukoln.info/grfx/img/issue70-meehan-et-al/jusp-logo.png" style="width: 145px; height: 133px;" title="Journal Usage Statistics Portal (JUSP)" /></p> <h2 id="JUSP:_Background_to_the_Service">JUSP: Background to the Service</h2> <p>The management of journal usage statistics can be an onerous task at the best of times. The introduction of the COUNTER [<a href="#1">1</a>] Code of Practice in 2002 was a major step forward, allowing libraries to collect consistent, audited statistics from publishers. By July 2012, 125 publishers offered the JR1 report, providing the number of successful full-text downloads. In the decade since COUNTER reports became available, analysis of the reports has become increasingly important, with library managers, staff and administrators increasingly forced to examine journal usage to inform and rationalise purchasing and renewal decisions.</p> <p>In 2004, JISC Collections commissioned a report [<a href="#2">2</a>] which concluded that there was a definite demand for a usage statistics portal for the UK HE community; with some sites subscribing to more than 100 publishers, just keeping track of access details and downloading reports was becoming a significant task in itself, much less analysing the figures therein. There followed a report into the feasibility of establishing a ‘Usage Statistics Service’ carried out by Key Perspectives Limited and in 2008 JISC issued an ITT (Invitation To Tender). 
By early 2009 a prototype service, known as the Journal Usage Statistics Portal (JUSP) had been developed by a consortium including Evidence Base at Birmingham City University, Cranfield University, JISC Collections and Mimas at The University of Manchester; the prototype featured a handful of publishers and three institutions. However, despite a centralised service appearing feasible [<a href="#3">3</a>], the requirement to download and process data in spreadsheet format, and the attendant time taken, still precluded a full-scale implementation across UK HE.</p> <p style="text-align: center; "><img alt="COUNTER" src="http://ariadne-media.ukoln.info/grfx/img/issue70-meehan-et-al/counter-header.png" style="width: 640px; height: 45px;" title="COUNTER" /></p> <p>Release 3 of the COUNTER Code of Practice in 2009 however mandated the use of the newly-introduced Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol [<a href="#4">4</a>], a mechanism for the machine-to-machine transfer of COUNTER-compliant reports; this produced dramatic efficiencies of time and cost in the gathering of data from publishers. The JUSP team began work to implement SUSHI for a range of publishers and expanded the number of institutions. By September 2012, the service had grown significantly, whilst remaining free at point of use, and encompassed 148 participating institutions, and 35 publishers. To date more than 100 million individual points of data have been collected by JUSP, all via SUSHI, a scale that would have been impossible without such a mechanism in place or without massive additional staff costs.</p> <p>JUSP offers much more than basic access to publisher statistics, however; the JUSP Web site [<a href="#5">5</a>] details the numerous reports and analytical tools on offer, together with detailed user guides and support materials. 
The cornerstone of the service though is undeniably its SUSHI implementation, both in terms of gathering the COUNTER JR1 and JR1a data and - as developed more recently - its own SUSHI server, enabling institutions to re-harvest data into their own library management tools for local analysis.</p> <h2 id="JUSP_Approach_to_SUSHI_Development_and_Implementation">JUSP Approach to SUSHI Development and Implementation</h2> <p>Once the decision was made to scale JUSP into a full service, the development of SUSHI capability became of paramount importance. The team had been able to handle spreadsheets of data on a small scale, but the expected upscale to 100+ institutions and multiple publishers within a short time frame meant that this would very quickly become unmanageable and costly in staff time and effort - constraints that were proving to be a source of worry at many institutions too: while some sites could employ staff whose role revolved around usage stats gathering and analysis, this was not possible at every institution, nor especially straightforward for institutions juggling dozens, if not hundreds, of publisher agreements and deals.</p> <p>Two main issues were immediately apparent in the development of the SUSHI software. Firstly, there was a lack of any standard SUSHI client software that we could use or adapt, and, more worryingly, the lack of SUSHI support at a number of major publishers. While many publishers use an external company or platform such as Atypon, MetaPress or HighWire to collect and provide usage statistics, others had made little or no progress in implementing SUSHI support by late 2009 - where SUSHI servers were in place these were often untested or unused by consumers.</p> <p>An ultimate aim for JUSP was to develop a single piece of software that would seamlessly interact with any available SUSHI repository and download data for checking and loading into JUSP. 
However, the only client software available by 2009 was written and designed to work in the Windows environment, or used Java, which can be very complex to work with and with which the JUSP team had limited experience. The challenge therefore became to develop a much simpler set of code using Perl and/or PHP, common and simple programming languages which were much more familiar to the JUSP team.</p> <p></p><p><a href="http://www.ariadne.ac.uk/issue70/meehan-et-al" target="_blank">read more</a></p> issue70 feature article paul meehan paul needham ross macintyre birmingham city university cranfield university elsevier intute jisc jisc collections mimas niso university of manchester university of oxford jusp nesli pirus2 zetoc archives authentication csv data data set database digital library dublin core html identifier interoperability java multimedia openurl passwords perl php portal raptor repositories research shibboleth software standards sushi windows xml Wed, 05 Dec 2012 17:54:19 +0000 lisrw 2396 at http://www.ariadne.ac.uk 23rd International CODATA Conference http://www.ariadne.ac.uk/issue70/codata-2012-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/codata-2012-rpt#author1">Alex Ball</a> reports on a conference on ‘Open Data and Information for a Changing Planet’ held by the International Council for Science’s Committee on Data for Science and Technology (CODATA) at Academia Sinica, Taipei, Taiwan on 28–31 October 2012.</p> </div> </div> </div> <p>CODATA was formed by the International Council for Science (ICSU) in 1966 to co-ordinate and harmonise the use of data in science and technology. One of its very earliest decisions was to hold a conference every two years at which new developments could be reported. 
The first conference was held in Germany in 1968, and over the following years it would be held in&nbsp; 15 different countries across 4 continents.</p> <p><a href="http://www.ariadne.ac.uk/issue70/codata-2012-rpt" target="_blank">read more</a></p> issue70 event report alex ball codata datacite dcc elsevier icsu jisc library of congress national academy of sciences niso oais orcid royal meteorological society sheffield hallam university stm ukoln university college london university of bath university of edinburgh university of queensland university of washington dealing with data europeana ojims accessibility algorithm api archives bibliographic data big data blog cataloguing cloud computing creative commons crm curation data data citation data management data mining data model data set data visualisation database digital archive digital curation digitisation dissemination doi dvd e-learning facebook framework geospatial data gis google maps handle system identifier infrastructure intellectual property interoperability java knowledge base knowledge management licence linux lod metadata mobile moodle oer ontologies open access open data open source operating system optical character recognition portfolio preservation privacy provenance repositories research restful search technology sharepoint smartphone software standardisation standards tagging usb video visualisation vocabularies web resources web services widget wiki xml xmpp Sat, 15 Dec 2012 12:41:16 +0000 lisrw 2430 at http://www.ariadne.ac.uk euroCRIS Membership Meeting, Madrid http://www.ariadne.ac.uk/issue70/eurocris-2012-11-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/eurocris-2012-11-rpt#author1">Rosemary Russell</a> and <a href="/issue70/eurocris-2012-11-rpt#author2">Brigitte Jörg</a> report on the bi-annual euroCRIS membership and Task Groups meetings which took place in Madrid on 5-6 November 2012.</p> 
</div> </div> </div> <p>euroCRIS membership meetings [<a href="#1">1</a>] are held twice a year, providing members and invited participants with updates on strategic and Task Group progress and plans, as well as the opportunity to share experience of Current Research Information System (CRIS)-related developments and seek feedback. A CERIF (<em>Common European Research Information Format</em>) tutorial is usually included on the first morning for those new to the standard, and the host country reports on local CRIS initiatives in the ‘national’ session.</p> <p><a href="http://www.ariadne.ac.uk/issue70/eurocris-2012-11-rpt" target="_blank">read more</a></p> issue70 event report brigitte jorg rosemary russell codata elsevier eurocris imperial college london jisc orcid ukoln university of bath reposit adobe aggregation bibliometrics blog cerif data data model data set database digital repositories dublin core framework identifier infrastructure institutional repository interoperability lod ontologies open access open source portal preservation rdf repositories research research information management software standards visualisation vocabularies xml Thu, 13 Dec 2012 09:07:57 +0000 lisrw 2408 at http://www.ariadne.ac.uk Editorial Introduction to Issue 69 http://www.ariadne.ac.uk/issue69/editorial <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue69/editorial#author1">The editor</a> introduces readers to the content of <em>Ariadne</em> Issue 69.</p> </div> </div> </div> <p>Never blessed with any sporting acumen, I have to confess to a degree of ambivalence towards the London Olympics unfolding around this issue as it publishes. That does not mean that I do not wish all the participants well in what after all is an enormous achievement just to be able to compete there at all. 
While I admit to not watching every team walk and wave, I cannot deny that the beginning and end of the Opening Ceremony [<a href="#1">1</a>] did grab my attention. Who could blame me? I suspect we sat as a nation terrified to discover what this would say about us all.</p> <p><a href="http://www.ariadne.ac.uk/issue69/editorial" target="_blank">read more</a></p> issue69 editorial richard waller bbc blackboard jisc jisc collections loughborough university ukoln university of bath university of glamorgan university of pretoria devcsi wikipedia accessibility aggregation api archives authentication blog cache content management data database digital preservation drupal ebook framework internet explorer json knowledge management licence metadata ocr opac open source perl refworks repositories research schema search technology shibboleth standards usability visualisation wiki xml Tue, 31 Jul 2012 11:45:13 +0000 lisrw 2372 at http://www.ariadne.ac.uk Moving Ariadne: Migrating and Enriching Content with Drupal http://www.ariadne.ac.uk/issue69/bunting <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue69/bunting#author1">Thom Bunting</a> explains some of the technology behind the migration of <em>Ariadne</em> (including more than 1600 articles from its back issues archive) onto a Drupal content management platform.</p> </div> </div> </div> <p>Tools and strategies for content management are a perennial topic in <em>Ariadne. </em> With&nbsp;<a href="/category/buzz/content-management?article-type=&amp;term=&amp;organisation=&amp;project=&amp;author=" title="Link to overview of articles including references to 'content management'">more than one hundred articles</a>&nbsp;touching on content management system (CMS) technologies or techniques since this online magazine commenced publication in 1996,&nbsp;<em>Ariadne</em>&nbsp;attests to continuing interest in this topic. 
Authors have discussed this topic within various contexts, from&nbsp;<a href="/category/buzz/content-management?article-type=&amp;term=intranet&amp;organisation=&amp;project=&amp;author=&amp;issue=#content-overview" title="Link to articles discussing 'content management', within 'intranet' context">intranets</a> to&nbsp;<a href="/category/buzz/repositories?article-type=&amp;term=content+management&amp;organisation=&amp;project=&amp;author=&amp;issue=#content-overview" title="Link to overview of articles referring to 'content management', within 'repositories' context">repositories</a>&nbsp;and&nbsp;<a href="/category/buzz/content-management?article-type=&amp;term=web+2.0&amp;organisation=&amp;project=&amp;author=&amp;issue=#content-overview" title="Link to overview of articles discussing 'content management', within context of Web 2.0">Web 2.0</a>, &nbsp;with some notable&nbsp;<a href="/sites/all/datacharts/hc/72-chart-wp.html#timeline" title="Link to timeline: articles referring to 'content management'">surges in references to 'content management' between 2000 and 2005</a>&nbsp;(see Figure 1 below). &nbsp;Although levels of discussion are by no means trending, over recent years it is clear that&nbsp;<em>Ariadne</em> authors have taken note of and written about content management tools and techniques on a regular basis.&nbsp;</p> <p>In the light of this long-established interest, it is noteworthy that&nbsp;<em>Ariadne</em> itself migrated into a content management system only recently. Although the formatting of its articles did change a few times since 1996, <em>Ariadne</em>&nbsp;remained 'hand-coded' for more than fifteen years. 
&nbsp;None of its articles had been migrated into a database-driven content management system until March 2012, when&nbsp;<a href="/issue68" title="Link to table of contents for Ariadne issue 68">issue 68</a>&nbsp;was published.&nbsp;&nbsp;</p> <p>As mentioned in the&nbsp;<a href="/issue68/editorial1" title="Editorial introduction: Welcome to New Ariadne">editorial introduction</a>&nbsp;to that first issue, launching the new content management arrangements, and as discussed in some more detail below (see 'Technical challenges in content migration'), the considerable size of&nbsp;<em>Ariadne</em>'s archive of back issues was daunting. &nbsp;With <a href="/articles" title="Overview of more than 1600 articles in Ariadne">more than 1600 articles</a>&nbsp;in hand-coded 'flat'-html formats,&nbsp;the process of migration itself required careful planning to result in a seamless, graceful transition into an entirely new content management arrangement. &nbsp;Over time, the sheer size of the <em>Ariadne</em> corpus had made it both increasingly rich in content and increasingly more challenging to convert retrospectively into a database-driven CMS as the total number of articles published within this online magazine steadily expanded.&nbsp;</p> <p>In looking back over the recent process of migrating <em>Ariadne</em> onto a CMS platform, this article discusses some tools and techniques used to prepare content for transfer, testing, and then re-launch. 
&nbsp;After explaining some of the background to and objectives of this work, this article focuses on key features of content management supported by Drupal.&nbsp;</p> <p style="text-align: center; "><img alt="Figure 1: Timeline of references in Ariadne to content management" src="http://ariadne-media.ukoln.info/grfx/img/issue69-bunting/content%20management-timeline.png" style="height: 453px; width: 500px; " title="Figure 1: Timeline of references in Ariadne to content management" /></p> <p style="text-align: center; "><strong>Figure 1: Ariadne timeline of references to content management</strong></p> <h2 id="Requirements_Analysis:_Planning_the_Way_Forward">Requirements Analysis: Planning the Way Forward</h2> <p>Based on surveys of readers and authors conducted in late 2010, the <em>Ariadne</em>&nbsp;management team analysed the range of feedback, drew up sets of re-development requirements, and then considered the options available.</p> <p>The following table provides an overview of key findings regarding the range of enhanced functionality and features considered:</p> <table align="center" border="1" cellpadding="1" cellspacing="1" id="500wtable" style="width: 500px; "> <tbody> <tr> <td colspan="2" style="text-align: center; "><strong>Overview of findings derived from survey responses</strong></td> </tr> <tr> <td style="text-align: center; "><em>enhanced functionality or feature</em></td> <td style="text-align: center; "><em>interest recorded in surveys</em></td> </tr> <tr> <td>browsing by keywords</td> <td>73.4% of respondents</td> </tr> <tr> <td>updated look and feel</td> <td>62.3% of respondents</td> </tr> <tr> <td>browsing by title</td> <td>50.0% of respondents</td> </tr> <tr> <td>enhanced use of search engine</td> <td>48.0% of respondents</td> </tr> <tr> <td>improved display for portable devices</td> <td>34.0% of respondents</td> </tr> <tr> <td>more summative information on articles</td> <td>32.1% of respondents</td> </tr> <tr> <td>improved navigability from 
article level</td> <td>32.1% of respondents</td> </tr> <tr> <td>improved social media options</td> <td>29.5% of respondents</td> </tr> <tr> <td>browsing by author</td> <td>28.0% of respondents</td> </tr> <tr> <td>improved RSS feeds</td> <td>27.0% of respondents</td> </tr> </tbody> </table> <p>In addition to these findings derived from surveys, the management team also recognised the need for some other functionalities to support monitoring of <em>Ariadne</em>'s on-going engagement with various domains and institutions across the UK and beyond.</p> <table align="center" border="1" cellpadding="1" cellspacing="1" id="500wtable" style="width: 500px; "> <tbody> <tr> <td colspan="2" style="text-align: center; "><strong>Additional features to support monitoring of engagement</strong></td> </tr> <tr> <td style="text-align: left; ">identification of author domains (higher education, further education, research, commercial, etc)</td> <td style="text-align: left; ">to support analysis of <em>Ariadne</em> connections and reach across various sectors</td> </tr> <tr> <td>identification of authors by organisation</td> <td>to support analysis of <em>Ariadne</em> connections and reach in UK and worldwide</td> </tr> </tbody> </table> <p>Taking into account the key findings derived from survey questions as well as the additional functionality identified as useful in monitoring UK and worldwide engagement, the <em>Ariadne</em>&nbsp;management team drew up sets of re-development requirements and considered how to proceed.&nbsp;Migration into a content management system represented the obvious way forward, as it became clear that <em>Ariadne</em>'s&nbsp;previous tradition of 'hand-coded' production (dating from the early days of the Web) had little chance of coping gracefully with the new sets of requirements.</p> <p>In a review of CMS options available, it also became clear that&nbsp;&nbsp;<a href="http://en.wikipedia.org/wiki/Drupal" title="Wikipedia article: 
Drupal">Drupal</a>&nbsp;[<a href="#1">1</a>] was well positioned as a content management system (or, emphasising its highly modular and extensible design, <em>content management framework </em>&nbsp;[<a href="#2">2</a>] ) to supply required functionality and features.</p> <p></p><p><a href="http://www.ariadne.ac.uk/issue69/bunting" target="_blank">read more</a></p> issue69 tooled up thom bunting ibm microsoft ukoln university of bath datagovuk gnu wikipedia apache api archives bibliographic data content licence content management css data data set database drupal framework further education graphics higher education html identifier jquery json licence linux metadata mysql open source perl php preservation python rdf repositories research rss search technology software sql server sqlite standards taxonomy usability video visualisation web 2.0 xml Fri, 27 Jul 2012 16:47:36 +0000 lisrw 2348 at http://www.ariadne.ac.uk Redeveloping the Loughborough Online Reading List System http://www.ariadne.ac.uk/issue69/knight-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue69/knight-et-al#author1">Jon Knight</a>, <a href="/issue69/knight-et-al#author2">Jason Cooper</a> and <a href="/issue69/knight-et-al#author3">Gary Brewerton</a> describe the redevelopment of Loughborough University’s open source reading list system.</p> </div> </div> </div> <p>The Loughborough Online Reading Lists System (LORLS) [<a href="#1">1</a>] has been developed at Loughborough University since the late 1990s.&nbsp; LORLS was originally implemented at the request of the University’s Learning and Teaching Committee simply to make reading lists available online to students.&nbsp; The Library staff immediately saw the benefit of such a system in not only allowing students ready access to academics’ reading lists but also in having such access themselves. 
This was because a significant number of academics were bypassing the library when generating and distributing lists to their students, who were then surprised when the library did not have the recommended books either in stock or in sufficient numbers to meet demand.</p> <p>The first version of the system produced by the Library Systems Team was part of a project that also had a ‘reading lists amnesty’ in which academics were encouraged to provide their reading lists to the library, which then employed some temporary staff over the summer to enter them into the new system.&nbsp; This meant that the first version of LORLS went live in July 2000 with a reasonable percentage of lists already in place.&nbsp; Subsequently the creation and editing of reading lists was made the responsibility of the academics or departmental admin staff, with some assistance from library staff.</p> <p>LORLS was written in Perl, with a MySQL database back-end.&nbsp; Most user interfaces were delivered via the web, with a limited number of back-end scripts that helped the systems staff maintain the system and alert library staff to changes that had been made to reading lists.</p> <p>Soon after the first version of LORLS went live at Loughborough, a number of other universities expressed an interest in using or modifying the system. Permission was granted by the University to release it as open source under the General Public Licence (GPL)[<a href="#2">2</a>].&nbsp; New versions were released as the system was developed and bugs were fixed. 
The last version of the original LORLS code base/data design was version 5, which was downloaded by sites worldwide.</p> <h2 id="Redesign">Redesign</h2> <p>By early 2007 it was decided to take a step back and see if there were things that could be done better in LORLS.&nbsp; Some design decisions made in 1999 no longer made sense eight years later.&nbsp; Indeed some of the database design was predicated on how teaching modules were supposed to work at Loughborough and it had already become clear that the reality of how they were deployed was often quite different.&nbsp; For example, during the original design, the principle was that each module would have a single reading list associated with it.&nbsp; Within a few years several modules had been found that were being taught by two (or more!) academics, all wanting their own independent reading list.</p> <p>Some of the structuring of the data in the MySQL database began to limit how the system could be developed.&nbsp; The University began to plan an organisational restructuring shortly after the redesign of LORLS was commenced, and it was clear that the simple departmental structure was likely to be replaced by a more fluid school and department mix.</p> <p>Library staff were also beginning to request new features that were thus increasingly awkward to implement.&nbsp; Rather than leap through hoops to satisfy them within the framework of the existing system, it made sense to add them into the design process for a full redesign.</p> <p>It was also felt that the pure CGI-driven user interface could do with a revamp.&nbsp; The earlier LORLS user interfaces used only basic HTML forms, with little in the way of client-side scripting.&nbsp; Whilst that meant that they tended to work on any web browser and were pretty accessible, they were also a bit clunky compared to some of the newer dynamic web sites.</p> <p>A distinct separation of the user interface from the back-end database was decided upon to improve 
localisation and portability of the system as earlier versions of LORLS had already shown that many sites took the base code and then customised the user interface parts of the CGI scripts to their own look and feel.&nbsp; The older CGI scripts were a mix of user interaction elements and database access and processing, which made this task a bit more difficult than it really needed to be.</p> <p>Separating the database code from the user interface code would let people easily tinker with one without unduly affecting the other.&nbsp; It would also allow local experimentation with multiple user-interface designs for different user communities or devices.</p> <p>This implied that a set of application programming interfaces (APIs) would need to be defined. As asynchronous JavaScript and XML (AJAX)[<a href="#3">3</a>] interactions had been successfully applied in a number of recent projects the team had worked on, XML was chosen as the format to be used.&nbsp; At first, Simple Object Access Protocol (SOAP)-style XML requests and responses were experimented with, but it was soon realised that SOAP was far too heavyweight for most of the API calls, so a lighter ‘RESTful’ API was selected.&nbsp; The API was formed of CGI scripts that took normal parameters as input and returned XML documents for the client to parse and display.</p> <p></p><p><a href="http://www.ariadne.ac.uk/issue69/knight-et-al" target="_blank">read more</a></p> issue69 tooled up gary brewerton jason cooper jon knight google harvard university loughborough university microsoft gnu access control ajax api archives authentication bibliographic data blog cache chrome cookie data database digital library e-learning framework google books gpl html javascript jquery json library management systems licence metadata moodle mysql open source perl refworks restful schema shibboleth soap software sql standards web browser xml z39.50 zip Sat, 28 Jul 2012 14:32:55 +0000 lisrw 2354 at http://www.ariadne.ac.uk 
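<p>The request/response pattern the LORLS team describes above - ordinary CGI parameters in, an XML document out for the client to parse and display - can be sketched as follows. This is a minimal illustration in Python rather than the Perl used by LORLS itself, and every element name, attribute and module code in it is hypothetical, not taken from the actual LORLS API schemas.</p>

```python
import xml.etree.ElementTree as ET

# An illustrative response from a LORLS-style RESTful CGI API.
# All element and attribute names here are invented for the sketch;
# the real response formats are defined by the LORLS project.
SAMPLE_RESPONSE = """<?xml version="1.0" encoding="UTF-8"?>
<reading_list module="MAT101">
  <item rank="1">
    <title>Linear Algebra</title>
    <author>Hypothetical, A.</author>
  </item>
  <item rank="2">
    <title>Calculus</title>
    <author>Example, B.</author>
  </item>
</reading_list>
"""

def parse_reading_list(xml_text):
    """Turn an XML reading-list response into a list of dicts."""
    root = ET.fromstring(xml_text)
    items = []
    for item in root.findall("item"):
        items.append({
            "rank": int(item.get("rank")),
            "title": item.findtext("title"),
            "author": item.findtext("author"),
        })
    return items

items = parse_reading_list(SAMPLE_RESPONSE)
print(items[0]["title"])  # first item on the list
```

<p>The appeal of the lighter ‘RESTful’ approach is visible even in this toy example: no SOAP envelope or WSDL machinery is required, just a query string on the request side and a small XML parser on the client side.</p>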
JISC Research Information Management: CERIF Workshop http://www.ariadne.ac.uk/issue69/jisc-rim-cerif-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue69/jisc-rim-cerif-rpt#author1">Rosemary Russell</a> reports on a two-day workshop on research information management and CERIF held in Bristol over 27-28 June 2012.</p> </div> </div> </div> <script type="text/javascript">toc_collapse=0;</script><div class="toc" id="toc1"> <div class="toc-title">Table of Contents<span class="toc-toggle-message">&nbsp;</span></div> <div class="toc-list"> <ol> <li class="toc-level-1"><a href="#Workshop_Scope_and_Aims">Workshop Scope and Aims</a></li> <li class="toc-level-1"><a href="#The_New_CERIF_Support_Project_at_the_ISC_UKOLN">The New CERIF Support Project at the ISC, UKOLN</a></li> <li class="toc-level-1"><a href="#UK_CERIF_Landscape">UK CERIF Landscape</a></li> <li class="toc-level-1"><a href="#UK_Involvement_in_euroCRIS_and_Other_International_Initiatives">UK Involvement in euroCRIS and Other International Initiatives</a></li> </ol> </div> </div><p>A workshop on Research Information Management (RIM) and CERIF was held in Bristol on 27-28 June 2012, organised by the Innovation Support Centre [<a href="#1">1</a>] at UKOLN, together with the JISC RIM and RCSI (Repositories and Curation Shared Infrastructure) Programmes. It was a follow-up to the CERIF Tutorial and UK Data Surgery [<a href="#2">2</a>] held in Bath in February.</p> <h2 id="Workshop_Scope_and_Aims">Workshop Scope and Aims</h2> <p>The aim was to bring together people working on the various elements of the UK RIM jigsaw to share experience of using CERIF and explore ways of working together more closely. While the first day focused specifically on RIM, the second day widened to explore synergies with the repositories community. 
Participants therefore included JISC RIM and MRD projects and programme managers, support and evaluation projects, Research Councils, funders and repository infrastructure projects. There were around 30 participants [<a href="#3">3</a>] in total, with some variation across the two days, given the different content. The event was chaired by Josh Brown, RIM Programme Manager, and Neil Jacobs, Programme Director, Digital Infrastructure, both at JISC. All presentations as well as breakout session outputs are available via the UKOLN ISC Events site [<a href="#4">4</a>].</p> <h2 id="The_New_CERIF_Support_Project_at_the_ISC_UKOLN">The New CERIF Support Project at the ISC, UKOLN</h2> <p>The UK community was pleased to welcome Brigitte Jörg [<a href="#5">5</a>] to the meeting, in the first week of her new role at UKOLN’s Innovation Support Centre as National Coordinator for the CERIF Support Project. Brigitte is already well known to British practitioners working with CERIF – both in her role as CERIF Task Group Leader [<a href="#6">6</a>] at euroCRIS and as advisor to several existing JISC projects. We look forward to working with her on further initiatives – her CERIF expertise will be a huge asset for Research Information Management support and coordination in British Higher Education.</p> <h2 id="UK_CERIF_Landscape">UK CERIF Landscape</h2> <p>There is certainly extensive RIM-related activity in the UK currently, which looks set to continue. The landscape was outlined in the scene-setting sessions by myself, based on the CERIF adoption study [<a href="#7">7</a>] carried out earlier this year. The rate of CRIS (Current Research Information System) procurement has increased very rapidly in the last few years, particularly during 2011. For example the first Pure system in the UK was procured jointly by the Universities of Aberdeen and St Andrews in May 2009; now there are 19 UK universities using Pure. 
Since all CRIS on the market are CERIF-compatible (to a greater or lesser extent) this means that a large number of UK institutions are CERIF users (again, to varying degrees) – around 31% [<a href="#7">7</a>]. The two other CERIF CRIS being used in the UK are CONVERIS (Avedas, Germany) and Symplectic Elements (UK-based); only one UK CERIF CRIS is being developed in-house, at the University of Huddersfield. There is therefore a significant potential user base for the many CERIF-based services discussed over the course of the workshop. Particularly as more institutions reach the end of their CRIS implementation phase, they are going to be looking for opportunities to exploit the interchange benefits offered by CERIF.</p> <h2 id="UK_Involvement_in_euroCRIS_and_Other_International_Initiatives">UK Involvement in euroCRIS and Other International Initiatives</h2> <p>As a reflection of the intensity of UK CRIS activity, the UK has the largest number of institutional members of euroCRIS – 25. The next country in terms of membership is Germany, with just 13 members (and then the Netherlands, with seven). It is also notable that there were six UK papers (up from three in 2010) at the recent euroCRIS conference in Prague (all openly accessible from the euroCRIS website [<a href="#8">8</a>]), reflecting the growing UK presence at international level. This indicates the significant impact of JISC programmes - both RIM and MRD (Managing Research Data). 
At euroCRIS meetings other European countries have expressed some envy of the resources currently available in the UK to support RIM development!</p> <p></p><p><a href="http://www.ariadne.ac.uk/issue69/jisc-rim-cerif-rpt" target="_blank">read more</a></p> issue69 event report rosemary russell cornell university edina elsevier eurocris hefce imperial college london jisc orcid ukoln university of bath university of huddersfield university of oxford university of st andrews devcsi wikipedia blog cerif curation data data model data set dublin core file format framework higher education identifier infrastructure institutional repository metadata ontologies open access open source repositories research research information management schema software standards vocabularies xml Sun, 29 Jul 2012 19:46:13 +0000 lisrw 2367 at http://www.ariadne.ac.uk Data Citation and Publication by NERC’s Environmental Data Centres http://www.ariadne.ac.uk/issue68/callaghan-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue68/callaghan-et-al#author1">Sarah Callaghan</a>, <a href="/issue68/callaghan-et-al#author2">Roy Lowry</a>, <a href="/issue68/callaghan-et-al#author3">David Walton</a> and members of the Natural Environment Research Council Science Information Strategy Data Citation and Publication Project team describe their work in NERC’s Environmental Data Centres.</p> </div> </div> </div> <p>Data are the foundation upon which scientific progress rests. Historically speaking, data were a scarce resource, but one which was (relatively) easy to publish in hard copy, as tables or graphs in journal papers. 
With modern scientific methods, and the increased ease in collecting and analysing vast quantities of data, there arises a corresponding difficulty in publishing this data in a form that can be considered part of the scientific record.</p> <p><a href="http://www.ariadne.ac.uk/issue68/callaghan-et-al" target="_blank">read more</a></p> issue68 feature article david walton roy lowry sarah callaghan badc british antarctic survey british library british oceanographic data centre codata datacite jisc ncas royal meteorological society science and technology facilities council claddier ojims archives ascii cataloguing cd-rom curation data data citation data management data set digital curation digital repositories doi dspace dublin core e-science framework geospatial data google scholar guid higher education html identifier infrastructure internet explorer interoperability library data metadata open access rdf repositories research schema standards uri url vocabularies xml Fri, 09 Mar 2012 14:06:59 +0000 lisrw 2223 at http://www.ariadne.ac.uk Delivering Open Educational Resources for Engineering Design http://www.ariadne.ac.uk/issue68/darlington <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue68/darlington#author1">Mansur Darlington</a> describes two methods for presenting online OERs for engineering design that were developed and explored as part of the Higher Education Academy/JISC-funded DelOREs (Delivering Open Educational Resources for Engineering Design) Project.</p> </div> </div> </div> <p>A great deal of information is accessible on the World Wide Web which might be useful to both students and teachers. This material, however, is of variable quality and usefulness and is aimed at a wide spectrum of users. Moreover, such material rarely appears accompanied by guidance on how it may be most effectively used by potential users. 
To make information more usable it must be made more readily discoverable and there should be clear – and preferably machine-readable – indications of its provenance and quality and the legitimate uses to which it may be put.</p> <p><a href="http://www.ariadne.ac.uk/issue68/darlington" target="_blank">read more</a></p> issue68 feature article mansur darlington hea heriot-watt university jisc massachusetts institute of technology university of bath jorum mrc aggregation algorithm blog copyright creative commons data e-learning framework google search higher education html identifier intellectual property json licence metadata microdata oer provenance rdf repositories research resource description resource discovery rss schema search technology software standardisation standards taxonomy ukoer url vocabularies wordpress xhtml xml Fri, 09 Mar 2012 14:06:59 +0000 lisrw 2234 at http://www.ariadne.ac.uk Editorial Introduction to Issue 68 http://www.ariadne.ac.uk/issue68/editorial2 <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue68/editorial2#author1">The editor</a> introduces readers to the content of <em>Ariadne</em> issue 68.</p> </div> </div> </div> <p>I am pleased to introduce you to the content of Issue 68, and to have the opportunity to remind you that you have a far larger number of channels into the publication’s content.</p> <p><a href="http://www.ariadne.ac.uk/issue68/editorial2" target="_blank">read more</a></p> issue68 editorial richard waller british library jisc massachusetts institute of technology national academy of sciences royal holloway sakai clif depositmo hydra opendoar repositories support project rsp aggregation archives blog cataloguing content management copyright creative commons data data citation data set digital repositories digitisation dissemination doi eprints facebook fedora commons foi framework higher education ict identifier information retrieval 
instant messaging institutional repository library management systems lucene metadata ms word multimedia ocr oer opac open source openurl preservation repositories research resource description resource discovery rss search technology second life sfx sharepoint software solr standardisation sword protocol taxonomy twitter vufind web 2.0 wordpress xml Mon, 12 Mar 2012 15:17:06 +0000 lisrw 2322 at http://www.ariadne.ac.uk Data Science Professionals: A Global Community of Sharing http://www.ariadne.ac.uk/issue68/iassist-2011-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p align="left"><a href="/issue68/iassist-2011-rpt#author1">Sylvie Lafortune</a> reports on the 37th annual conference of the International Association for Social Science Information Services and Technology (IASSIST), held over 30 May – 3 June 2011 in Vancouver, British Columbia, Canada.</p> </div> </div> </div> <p>The IASSIST [<a href="#1">1</a>] Conference is a long-standing annual event which brings together researchers, statistical analysts as well as computer and information professionals interested in all aspects of research data, from discovery to reuse. This 37<sup>th</sup> meeting spanned five days where participants could attend workshops, IASSIST business meetings and a myriad of presentations. 
This year, the event focused on the sharing of tools and techniques which ‘improves capabilities across disciplines and along the entire data life cycle’.</p> <p><a href="http://www.ariadne.ac.uk/issue68/iassist-2011-rpt" target="_blank">read more</a></p> issue68 event report sylvie lafortune association of research libraries cessda datacite dcc iassist laurentian university massachusetts institute of technology national science foundation simon fraser university university of alberta yale university data without boundaries ddi algorithm archives controlled vocabularies data data citation data management data set digital repositories e-science framework gis identifier infrastructure lod metadata microdata ms word nesstar ontologies open data open source portal rdf repositories research schema software standards visualisation xml Mon, 27 Feb 2012 19:36:45 +0000 lisrw 2238 at http://www.ariadne.ac.uk Towards Interoperability of European Language Resources http://www.ariadne.ac.uk/issue67/ananiadou-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue67/ananiadou-et-al#author1">Sophia Ananiadou</a> and colleagues describe an ambitious new initiative to accelerate Europe-wide language technology research, helped by their work on promoting interoperability of language resources.</p> </div> </div> </div> <!-- start main content --><p>A core component of the European Union is a common market with a single information space that works with around two dozen national languages and many regional languages.
This wide variety of languages presents linguistic barriers that can severely limit the free flow of goods, information and services throughout Europe.</p> <p><a href="http://www.ariadne.ac.uk/issue67/ananiadou-et-al" target="_blank">read more</a></p> issue67 feature article dean andrew jackson john keane john mcnaught paul thompson philip j r day sophia ananiadou steve pettifer teresa k attwood yoshinobu kano ibm meta-net university of manchester university of oxford university of tokyo data database e-science framework ict identifier information retrieval information society interoperability java metadata named entity recognition natural language processing plain text programming language repositories research search technology software standards tagging text mining uima web services xml Sun, 03 Jul 2011 23:00:00 +0000 editor 1619 at http://www.ariadne.ac.uk Image 'Quotation' Using the C.I.T.E. Architecture http://www.ariadne.ac.uk/issue67/blackwell-hackneyBlackwell <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue67/blackwell-hackneyBlackwell#author1">Christopher Blackwell</a> and <a href="/issue67/blackwell-hackneyBlackwell#author2">Amy Hackney Blackwell</a> describe with examples a digital library infrastructure that affords canonical citation for 'quoting' images, useful for creating commentaries, arguments, and teaching tools.</p> </div> </div> </div> <p>Quotation is the heart of scholarly argument and teaching, the activity of bringing insight to something complex by focused discussion of its parts. Philosophers who have reflected on the question of quotation have identified two necessary components: a name, pointer, or citation on the one hand and a reproduction or repetition on the other. Robert Sokolowski calls quotation a 'curious conjunction of being able to name and to contain' [<a href="#1">1</a>]; V.A. 
Howard is more succinct: quotation is 'replication-plus-reference' [<a href="#2">2</a>]. We are less interested in the metaphysical aspects of quotation than in the practical ones.</p> <p>The tools and techniques described here were supported by the National Science Foundation under Grants No. 0916148 &amp; No. 0916421. Any opinions, findings and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the National Science Foundation (NSF).</p> <h2 id="Quotation">Quotation</h2> <p>Quotation, when accompanied by citation, allows us to bring the reader's attention to bear on a particular part of a larger whole efficiently and without losing the surrounding context. A work of Biblical exegesis, for example, can quote or merely cite 'Genesis 1:29' without having to reproduce the entire Hebrew Bible, or even the Book of Genesis; a reader can resolve that citation to a particular passage about the creation of plants, and can see that passage as a discrete node at the bottom of a narrowing hierarchy: Hebrew Bible, Genesis, Chapter 1, Verse 29. We take this for granted.</p> <p>Quoting a text is easy. But how can we quote an image? This remains difficult even in the 21st century where it is easy to reproduce digital images, pass them around through networks, and manipulate them on our desks.</p> <p>A scholar wishing to refer to a particular part of an image will generally do something like this: She will open one version of an image in some editing software, select and 'cut' a section from it, and 'paste' that section into a document containing the text of her commentary or argument. She might add to the text of her argument a reference to the source of the image. The language that describes this process is that of mechanical work&nbsp;– cutting and pasting&nbsp;– rather than the language of quotation and citation. 
The process yields a fragment of an image with only a tenuous connection to the ontological hierarchy of the object of study. The same scholar who would never give a citation to '<em>The Bible</em>, page 12' rather than to 'Genesis 1:29' will, of necessity, cite an image-fragment in a way similarly unlikely to help readers find the source and locate the fragment in its natural context.</p> <p></p><p><a href="http://www.ariadne.ac.uk/issue67/blackwell-hackneyBlackwell" target="_blank">read more</a></p> issue67 feature article amy hackney blackwell christopher blackwell clemson university furman university google harvard university national academy of sciences national science foundation university of virginia gnu homer multitext archives browser creative commons css data digital library doi dublin core firefox free software html identifier infrastructure java licence metadata namespace openoffice research safari schema software standards stylesheet tei thesaurus url urn vocabularies web browser xhtml xml xsl xslt zip Sun, 03 Jul 2011 23:00:00 +0000 editor 1620 at http://www.ariadne.ac.uk MyMobileBristol http://www.ariadne.ac.uk/issue67/jones-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue67/jones-et-al#author1">Mike Jones</a>, <a href="/issue67/jones-et-al#author2">Simon Price</a>, <a href="/issue67/jones-et-al#author3">Nikki Rogers</a> and <a href="/issue67/jones-et-al#author4">Damian Steer</a> describe the rationale, aims and progress of MyMobileBristol, highlighting some of the challenges and opportunities that have arisen during the project.</p> </div> </div> </div> The MyMobileBristol Project is managed and developed by the Web Futures group at the Institute for Learning and Research Technology (ILRT), University of Bristol [<a href="#1">1</a>]. 
The project has a number of broad and ambitious aims and objectives, including collaboration with Bristol City Council on the development or adoption of standards with regard to the exchange of time- and location-sensitive data within the Bristol region, with particular emphasis on transport, the environment and sustainability. <p><a href="http://www.ariadne.ac.uk/issue67/jones-et-al" target="_blank">read more</a></p> issue67 feature article damian steer mike jones nikki rogers simon price ilrt jisc jisc techdis ordnance survey ukoln university of bristol w3c web futures datagovuk devcsi mca mobile campus assistant mymobilebristol apache api atom authentication blog browser bsd cataloguing content management data data set database dissemination e-research e-science framework geospatial data gis higher education html intellectual property java javascript jena ldap licence machine learning mobile mobile phone native app native applications open data open source operating system portal portfolio rdf research resource description restful rss search technology semantic web smartphone software sparql sql standards usability web app web browser web services wiki wireless xml Sun, 03 Jul 2011 23:00:00 +0000 editor 1622 at http://www.ariadne.ac.uk From Link Rot to Web Sanctuary: Creating the Digital Educational Resource Archive (DERA) http://www.ariadne.ac.uk/issue67/scaife <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue67/scaife#author1">Bernard M Scaife</a> describes how an innovative use of the EPrints repository software is helping to preserve official documents from the Web.</p> </div> </div> </div> <!-- start main content --><p>When I started as Technical Services Librarian at the Institute of Education (IOE) in September 2009, one of the first tasks I was given was to do something about all the broken links in the catalogue.
Link rot [<a href="#1">1</a>] is the bane of the Systems Librarian's life and I was well aware that you had to run fast to stand still.</p> <p><a href="http://www.ariadne.ac.uk/issue67/scaife" target="_blank">read more</a></p> issue67 feature article bernard m scaife bbc becta google jisc national library of australia oai the national archives uk data archive university of london university of southampton archives bibliographic data cataloguing content management copyright creative commons data data mining digital preservation digitisation dspace eprints fedora commons higher education html identifier infrastructure interoperability lcsh library management systems licence metadata ms word multimedia national library oai-pmh open access preservation provenance repositories research schema search technology software thesaurus ulcc url xml Sun, 03 Jul 2011 23:00:00 +0000 editor 1625 at http://www.ariadne.ac.uk Characterising and Preserving Digital Repositories: File Format Profiles http://www.ariadne.ac.uk/issue66/hitchcock-tarrant <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue66/hitchcock-tarrant#author1">Steve Hitchcock</a> and <a href="/issue66/hitchcock-tarrant#author2">David Tarrant</a> show how file format profiles, the starting point for preservation plans and actions, can also be used to reveal the fingerprints of emerging types of institutional repositories.</p> </div> </div> </div> <p><a href="http://www.ariadne.ac.uk/issue66/hitchcock-tarrant" target="_blank">read more</a></p> issue66 feature article david tarrant steve hitchcock amazon google harvard university jisc microsoft mpeg the national archives university of illinois university of northampton university of southampton university of the arts london wellcome library jisc information environment keepit wikipedia accessibility adobe archives bibliographic data blog cloud computing css csv curation data data 
management database digital curation digital preservation digital repositories dissemination document format droid eprints file format flash flash video framework gif graphics html hypertext identifier institutional repository java jpeg latex linked data metadata mpeg-1 open access open source photoshop php plain text preservation quicktime repositories research schema semantic web software standards vector graphics video web 2.0 wiki windows windows media xml xml schema Sun, 30 Jan 2011 00:00:00 +0000 editor 1608 at http://www.ariadne.ac.uk International Digital Curation Conference 2010 http://www.ariadne.ac.uk/issue66/idcc-2010-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue66/idcc-2010-rpt#author1">Alex Ball</a> reports on the 6th International Digital Curation Conference, held on 7-8 December 2010 in Chicago.</p> </div> </div> </div> <!-- version v2: final edits after author review 2011-01-12 REW --><p>The International Digital Curation Conference has been held annually by the Digital Curation Centre (DCC) [<a href="#1">1</a>] since 2005, quickly establishing a reputation for high-quality presentations and papers.
So much so that, as co-chair Allen Renear explained in his opening remarks, after attending the 2006 Conference in Glasgow [<a href="#2">2</a>] delegates from the University of Illinois at Urbana-Champaign (UIUC) offered to bring the event to Chicago.</p> <p><a href="http://www.ariadne.ac.uk/issue66/idcc-2010-rpt" target="_blank">read more</a></p> issue66 event report alex ball cni coalition for networked information cornell university datacite dcc indiana university johns hopkins university leiden university massachusetts institute of technology michigan state university national library of australia national science foundation research information network rutgers university ukoln university of arizona university of bath university of california berkeley university of cambridge university of chicago university of edinburgh university of illinois university of oxford university of sheffield university of southampton datashare i2s2 idmb myexperiment sagecite sudamih aggregation archives ark authentication blog cataloguing content management curation data data citation data management data model data set database digital curation digital library e-science eprints framework identifier infrastructure intellectual property interoperability irods linked data linux metadata mobile national library ontologies open access open data operating system persistent identifier preservation preservation metadata provenance rdf repositories research resource description search technology semantic web sharepoint software standards tagging tei text mining twitter video virtual research environment visualisation wiki windows xml Sun, 30 Jan 2011 00:00:00 +0000 editor 1611 at http://www.ariadne.ac.uk From Passive to Active Preservation of Electronic Records http://www.ariadne.ac.uk/issue65/briston-estlund <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue65/briston-estlund#author1">Heather Briston</a> 
and <a href="/issue65/briston-estlund#author2">Karen Estlund</a> provide a narrative of the process adopted by the University of Oregon in order to integrate electronic records management into its staff's workflow.</p> </div> </div> </div> <!-- v2 of article incorporating edits from XHTML view 20101123 - rew --><p>Permanent records of the University of Oregon (UO) are archived by the Special Collections and University Archives located within the University Libraries. In the digital environment, a new model is being created to ingest, curate and preserve electronic records. This article discusses two case studies working with the Office of the President to preserve electronic records.</p> <p><a href="http://www.ariadne.ac.uk/issue65/briston-estlund" target="_blank">read more</a></p> issue65 feature article heather briston karen estlund google microsoft oais the national archives university of oregon adobe archives blog cataloguing content management data management digital asset management digital preservation digital record object identification digital repositories droid dspace dvd ead eportfolio file format identifier infrastructure institutional repository microsoft office ocr optical character recognition preservation privacy repositories standards tagging video web 2.0 xml Fri, 29 Oct 2010 23:00:00 +0000 editor 1584 at http://www.ariadne.ac.uk Developing Infrastructure for Research Data Management at the University of Oxford http://www.ariadne.ac.uk/issue65/wilson-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue65/wilson-et-al#author1">James A. J. Wilson</a>, <a href="/issue65/wilson-et-al#author2">Michael A. 
Fraser</a>, <a href="/issue65/wilson-et-al#author3">Luis Martinez-Uribe</a>, <a href="/issue65/wilson-et-al#author4">Paul Jeffreys</a>, <a href="/issue65/wilson-et-al#author5">Meriel Patrick</a>, <a href="/issue65/wilson-et-al#author6">Asif Akram</a> and <a href="/issue65/wilson-et-al#author7">Tahir Mansoori</a> describe the approaches taken, findings, and issues encountered while developing research data management services and infrastructure at the University of Oxford.</p> </div> </div> </div> <!-- v4., incorporating late edits and reference increment by ++1; 2010-11-26-11-57 rew --><p>The University of Oxford began to consider research data management infrastructure in earnest in 2008, with the 'Scoping Digital Repository Services for Research Data' Project [<a href="#1">1</a>]. Two further JISC (Joint Information Systems Committee)-funded pilot projects followed this initial study, and the approaches taken by these projects, and their findings, form the bulk of this article.</p> <p><a href="http://www.ariadne.ac.uk/issue65/wilson-et-al" target="_blank">read more</a></p> issue65 feature article asif akram james a. j. wilson luis martinez-uribe meriel patrick michael a. 
fraser paul jeffreys tahir mansoori ahds dcc google hefce ibm jisc microsoft oxford university computing services research information network uk data archive university of east anglia university of essex university of melbourne university of oxford university of southampton datashare eidcsr jisc information environment sudamih algorithm archives bibliographic data browser cloud computing curation data data management data set database digital asset management digital curation digital repositories e-research flash framework geospatial data gis google maps ict identifier infrastructure infrastructure service intellectual property interoperability j2ee jpeg metadata multimedia open access portal preservation provenance qt repositories research research information management schema search technology sharepoint software standards visualisation web 2.0 web portal xml xml schema Fri, 29 Oct 2010 23:00:00 +0000 editor 1590 at http://www.ariadne.ac.uk CIG Conference 2010: Changes in Cataloguing in 'Interesting Times' http://www.ariadne.ac.uk/issue65/cig-2010-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue65/cig-2010-rpt#author1">Rhiannon McLoughlin</a> reports on a three-day conference on cataloguing in a time of financial stringency, held by the CILIP Cataloguing and Indexing Group at Exeter University, from 13-15 September 2010.</p> </div> </div> </div> <p>The focus of this conference was initiatives to get through the current economic climate. Cataloguing departments are under threat of cutbacks as never before. Papers on streamlining, collaborative enterprises, shared catalogues and services, recycling and repurposing of content using metadata extraction techniques combined to give a flavour of the new thrift driving management. 
The continuing progress of the long awaited Resource Description and Access (RDA)[<a href="#1">1</a>][<a href="#2">2</a>] towards becoming the new international cataloguing standard was another hot topic.</p> <p><a href="http://www.ariadne.ac.uk/issue65/cig-2010-rpt" target="_blank">read more</a></p> issue65 event report rhiannon mcloughlin british library british museum cilip google ifla jisc leeds metropolitan university library of congress mla research information network sconul ukoln university of aberdeen university of exeter university of leeds university of strathclyde university of warwick aacr2 aggregation archives bibliographic data blog cataloguing cidoc-crm crm data data management digital repositories digitisation ebook frbr google search higher education lcsh learning object metadata learning objects lom marc marc21 metadata ontologies open data open source repositories research resource description and access resource discovery resource sharing schema search technology semantic web software standards vle wiki xml Fri, 29 Oct 2010 23:00:00 +0000 editor 1595 at http://www.ariadne.ac.uk Retooling Libraries for the Data Challenge http://www.ariadne.ac.uk/issue64/salo <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue64/salo#author1">Dorothea Salo</a> examines how library systems and procedures need to change to accommodate research data.</p> </div> </div> </div> <p>Eager to prove their relevance among scholars leaving print behind, libraries have participated vocally in the last half-decade's conversation about digital research data. On the surface, libraries would seem to have much human and technological infrastructure ready-constructed to repurpose for data: digital library platforms and institutional repositories may appear fit for purpose. 
However, unless libraries understand the salient characteristics of research data, and how they do and do not fit with library processes and infrastructure, they run the risk of embarrassing missteps as they come to grips with the data challenge.</p> <p>Whether managing research data is 'the new special collections,'[<a href="#1">1</a>] a new form of regular academic-library collection development, or a brand-new library specialty, the possibilities have excited a great deal of talk, planning, and educational opportunity in a profession seeking to expand its boundaries.</p> <p>Faced with shrinking budgets and staffs, library administrators may well be tempted to repurpose existing technology infrastructure and staff to address the data curation challenge. Existing digital libraries and institutional repositories seem on the surface to be a natural fit for housing digital research data. Unfortunately, significant mismatches exist between research data and library digital warehouses, as well as the processes and procedures librarians typically use to fill those warehouses. Repurposing warehouses and staff for research data is therefore neither straightforward nor simple; in some cases, it may even prove impossible.</p> <h2 id="Characteristics_of_Research_Data">Characteristics of Research Data</h2> <p>What do we know about research data? What are its salient characteristics with respect to stewardship?</p> <h3 id="Size_and_Scope">Size and Scope</h3> <p>Perhaps the commonest mental image of research data is terabytes of information pouring out of the merest twitch of the Large Hadron Collider Project. So-called 'Big Data' both captures the imagination of and creates sheer terror in the practical librarian or technologist. 'Small data,' however, may prove to be the bigger problem: data emerging from individual researchers and labs, especially those with little or no access to grants, or a hyperlocal research focus. 
Though each small-data producer produces only a trickle of data compared to the likes of the Large Hadron Collider Project, the tens of thousands of small-data producers in aggregate may well produce as much data (or more, measured in bytes) as their Big Data counterparts [<a href="#2">2</a>]. Securely and reliably storing and auditing this amount of data is a serious challenge. The burgeoning 'small data' store means that institutions without local Big Data projects are by no means exempt from large-scale storage considerations.</p> <p>Small data also represents a serious challenge in terms of human resources. Best practices instituted in a Big Data project reach all affected scientists quickly and completely; conversely, a small amount of expert intervention in such a project pays immense dividends. Because of the great numbers of individual scientists and labs producing small data, however, immensely more consultations and consultants are necessary to bring practices and the resulting data to an acceptable standard.</p> <h3 id="Variability">Variability</h3> <p>Digital research data comes in every imaginable shape and form. Even narrowing the universe of research data to 'image' yields everything from scans of historical glass negative photographs to digital microscope images of unicellular organisms taken hundreds at a time at varying depths of field so that the organism can be examined in three dimensions. The tools that researchers use naturally shape the resulting data. When the tool is proprietary, unfortunately, so may be the file format that it produced. When that tool does not include long-term data viability as a development goal, the data it produces are often neither interoperable nor preservable.</p> <p>A major consequence of the diversity of forms and formats of digital research data is a concomitant diversity in desired interactions. 
The biologist with a 3-D stack of microscope images interacts very differently with those images than does a manuscript scholar trying to extract the underlying half-erased text from a palimpsest. These varying affordances <em>must</em> be respected by dissemination platforms if research data are to enjoy continued use.</p> <p>One important set of interactions involves actual changes to data. Many sorts of research data are considerably less usable in their raw state than after they have had filters or algorithms or other processing performed on them. Others welcome correction, or are refined by comparison with other datasets. Two corollaries emerge: first, that planning and acting for data stewardship must take place throughout the research process, rather than being an add-on at the end; and second, that digital preservation systems designed to steward only final, unchanging materials can only fail faced with real-world datasets and data-use practices.</p> <p></p><p><a href="http://www.ariadne.ac.uk/issue64/salo" target="_blank">read more</a></p> issue64 feature article dorothea salo california digital library dcc google oai university of wisconsin hydra algorithm api archives bibliographic data big data blog cookie curation data data management data set database digital curation digital library digital preservation digitisation dissemination drupal dspace dublin core eprints fedora commons file format flickr google docs infrastructure institutional repository interoperability library management systems linked data marc metadata mods oai-pmh open source preservation rdf repositories research search technology software standardisation standards sword protocol wiki xml Thu, 29 Jul 2010 23:00:00 +0000 editor 1566 at http://www.ariadne.ac.uk FRBR in Practice http://www.ariadne.ac.uk/issue64/taylor-teague <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a 
href="/issue64/taylor-teague#author1">Wendy Taylor</a> and <a href="/issue64/taylor-teague#author2">Kathy Teague</a> describe what they learnt about how FRBR is used at the Celia Library for the Visually Impaired in Helsinki, during their Ulverscroft/IFLA-funded visit.</p> </div> </div> </div> <p><a href="http://www.ariadne.ac.uk/issue64/taylor-teague" target="_blank">read more</a></p> issue64 feature article kathy teague wendy taylor ifla rnib ukoln bibliographic data bibliographic record cataloguing copyright data file sharing frbr library management systems licence marc21 national library opac search technology standards xml Thu, 29 Jul 2010 23:00:00 +0000 editor 1567 at http://www.ariadne.ac.uk Institutional Web Management Workshop 2010 http://www.ariadne.ac.uk/issue64/iwmw-2010-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue64/iwmw-2010-rpt#author1">Keith Doyle</a> provides a personal perspective on a conference organised by UKOLN for those involved in the provision of institutional Web services.</p> </div> </div> </div> <p>This was the 13th Institutional Web Management Workshop [<a href="#1">1</a>] to be organised by UKOLN [<a href="#2">2</a>] held at the University of Sheffield from 12 to 14 July 2010.&nbsp;The theme was 'The Web in Turbulent Times' [<a href="#3">3</a>]. 
As such, there was a healthy balance of glass-half-empty-doom-and-gloom, and glass-half-full-yes-we-can.</p> <p><a href="http://www.ariadne.ac.uk/issue64/iwmw-2010-rpt" target="_blank">read more</a></p> issue64 event report keith doyle canterbury christ church university eduserv google ilrt oxford university computing services terminalfour ukoln university college london university of bristol university of cambridge university of oxford university of salford university of sheffield university of the west of england w3c iwmw memento mobile campus assistant wikipedia accessibility apache blog browser cocoa content management css curation data data visualisation datamining facebook firefox framework geospatial data gis hashtag higher education html html5 hypertext information architecture linked data mashup metadata mobile mobile phone opera plone portal qr code rdfa research rss search technology sharepoint smartphone social web software taxonomy twitter usability video videoconferencing visualisation web app web development web services webkit widget wookie wordpress xcri xml Thu, 29 Jul 2010 23:00:00 +0000 editor 1569 at http://www.ariadne.ac.uk Moving Towards Interoperability: Experiences of the Archives Hub http://www.ariadne.ac.uk/issue63/stevenson-ruddock <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue63/stevenson-ruddock#author1">Jane Stevenson</a> and <a href="/issue63/stevenson-ruddock#author2">Bethan Ruddock</a> describe the work that the Archives Hub team has been doing to promote the sharing of content.</p> </div> </div> </div> <p><a href="http://www.ariadne.ac.uk/issue63/stevenson-ruddock" target="_blank">read more</a></p> issue63 feature article bethan ruddock jane stevenson courtauld institute of art jisc mimas university of london university of manchester archives hub dealing with data aggregation archives cataloguing data database digital archive ead 
interoperability portal repositories research resource discovery search technology software standards thesaurus ukad usability xml Thu, 29 Apr 2010 23:00:00 +0000 editor 1546 at http://www.ariadne.ac.uk A Pragmatic Approach to Preferred File Formats for Acquisition http://www.ariadne.ac.uk/issue63/thompson <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue63/thompson#author1">Dave Thompson</a> sets out the pragmatic approach to preferred file formats for long-term preservation used at the Wellcome Library.</p> </div> </div> </div> <p>This article sets out the Wellcome Library's decision not to specify preferred file formats explicitly for long-term preservation. It discusses a pragmatic approach in which technical appraisal of the material is used to assess the Library's likelihood of preserving one format over another. The Library takes as its starting point work done by the Florida Digital Archive in setting a level of 'confidence' in its preferred formats. The Library's approach provides for nine principles to consider as part of appraisal. These principles balance economically sustainable preservation and intellectual 'value' with the practicalities of working with specific, and especially proprietary, file formats. Scenarios are used to show the application of principles (see <a href="#annex">Annex</a> below).</p> <p>This article takes a technical perspective on assessing material for acquisition by the Library. In reality, technical factors are only part of the assessment of material for inclusion in the Library's collections. Other factors, such as intellectual content, significance of the material, significance of the donor/creator and any relationship to material already in the Library, also play a part. 
On this basis, the article considers 'original' formats accepted for long-term preservation, and does not consider formats appropriate for dissemination.</p> <p>This reflects the Library's overall approach to working with born digital archival material. Born digital material is treated similarly to other, analogue archival materials. The Library expects archivists to apply their professional skills regardless of the format of any material, to make choices and decisions about material based on a range of factors and not to see the technical issues surrounding born digital archival material as in any way limiting.</p> <h2 id="Why_Worry_about_Formats">Why Worry about Formats?</h2> <p>Institutions looking to preserve born digital material permanently, the Wellcome Library included, may have little control over the formats in which material is transferred or deposited. The ideal intervention point from a preservation perspective is at the point digital material is first created. However this may be unrealistic. Many working within organisations have no choice in the applications they use, cost of applications may be an issue, or there may simply be a limited number of applications available on which to perform specialist tasks. Material donated after an individual retires or dies can prove especially problematic. It may be obsolete, in obscure formats, on obsolete media and without any metadata describing its context, creation or rendering environment.</p> <p>Computer applications 'save' their data in formats, each application typically having its own file format. The Web site filext [<a href="#1">1</a>] lists some 25,000 file extensions in its database.</p> <p>The long-term preservation of any format depends on the type of format, issues of obsolescence, and availability of hardware and/or software, resources, experience and expertise. 
Any archive looking to preserve born digital archival material needs to have the means and confidence to move material across the 'gap' that exists between material 'in the wild' and holding it securely in an archive.</p> <p>This presents a number of problems: first, in the proliferation of file formats; second, in the use of proprietary file formats, and third, in formats becoming obsolete, either by being incompatible with later versions of the applications that created them, or by those applications no longer existing. This assumes that proprietary formats are more problematic to preserve as their structure and composition are not known, which hinders preservation intervention by imposing the necessity for specialist expertise. Moreover, as new software is created, so new file formats proliferate, and consequently exacerbate the problem.</p> <p></p><p><a href="http://www.ariadne.ac.uk/issue63/thompson" target="_blank">read more</a></p> issue63 feature article dave thompson microsoft mpeg wellcome library aggregation archives born digital cd-rom data database digital archive digital preservation dissemination drm file format framework internet explorer jpeg jpeg 2000 metadata microsoft office open source openoffice preservation provenance real audio repositories software standards tiff usb video xml Thu, 29 Apr 2010 23:00:00 +0000 editor 1547 at http://www.ariadne.ac.uk Turning on the Lights for the User: NISO Discovery to Delivery Forum http://www.ariadne.ac.uk/issue63/niso-d2d-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue63/niso-d2d-rpt#author1">Laura Akerman</a> and <a href="/issue63/niso-d2d-rpt#author2">Kim Durante</a> report on Discovery to Delivery, Creating a First-Class User Experience, a NISO Forum on today's information seekers and current standards developments held in March 2010 at the Georgia Tech Global Learning Center.</p> </div> </div> </div> <p><a 
href="http://www.ariadne.ac.uk/issue63/niso-d2d-rpt" target="_blank">read more</a></p> issue63 event report kim durante laura akerman amazon blackboard coalition for networked information cornell university emory university georgia institute of technology google library of congress niso oai oclc serials solutions internet archive wikipedia aggregation api application profile archives atom authentication cataloguing data database digital library digitisation drm dublin core ebook framework google books google scholar identifier interoperability jstor knowledge base marc metadata oai-pmh onix open archives initiative openurl qr code research resource sharing rss schema search technology sfx shibboleth software standardisation standards tagging video visualisation xml Thu, 29 Apr 2010 23:00:00 +0000 editor 1548 at http://www.ariadne.ac.uk Volcanic Eruptions Fail to Thwart Digital Preservation - the Planets Way http://www.ariadne.ac.uk/issue63/planets-2010-rome-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue63/planets-2010-rome-rpt#author1">Matthew Barr</a>, <a href="/issue63/planets-2010-rome-rpt#author2">Amir Bernstein</a>, <a href="/issue63/planets-2010-rome-rpt#author3">Clive Billenness</a> and <a href="/issue63/planets-2010-rome-rpt#author4">Manfred Thaller</a> report on the final Planets training event Digital Preservation - The Planets Way held in Rome over 19 - 21 April 2010.</p> </div> </div> </div> <div align="center"> <p style="text-align: left;">In far more dramatic circumstances than expected, the Planets Project [<a href="#1">1</a>] held its 3-day training event<em> Digital Preservation – The Planets Way</em> in Rome over 19 - 21 April 2010. 
This article reports its proceedings.</p> </div><p><a href="http://www.ariadne.ac.uk/issue63/planets-2010-rome-rpt" target="_blank">read more</a></p> issue63 event report amir bernstein clive billenness manfred thaller matthew barr austrian national library british library national library of the netherlands oais open planets foundation opf swiss federal archives university of cologne university of glasgow archives bibliographic data browser cataloguing cloud computing data database digital preservation digital repositories digitisation file format framework graphics identifier interoperability java metadata national library operating system preservation repositories research software usb visualisation web browser web services xml youtube zip Thu, 29 Apr 2010 23:00:00 +0000 editor 1549 at http://www.ariadne.ac.uk Towards a Toolkit for Implementing Application Profiles http://www.ariadne.ac.uk/issue62/chaudhri-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue62/chaudhri-et-al#author1">Talat Chaudhri</a>, <a href="/issue62/chaudhri-et-al#author2">Julian Cheal</a>, <a href="/issue62/chaudhri-et-al#author3">Richard Jones</a>, <a href="/issue62/chaudhri-et-al#author4">Mahendra Mahey</a> and <a href="/issue62/chaudhri-et-al#author5">Emma Tonkin</a> propose a user-driven methodology for the iterative development, testing and implementation of Dublin Core Application Profiles in diverse repository software environments.</p> </div> </div> </div> <p><a href="http://www.ariadne.ac.uk/issue62/chaudhri-et-al" target="_blank">read more</a></p> issue62 feature article emma tonkin julian cheal mahendra mahey richard jones talat chaudhri cetis jisc oai ukoln university of bath geospatial application profile gnu iemsr images application profile jisc information environment lmap opendoar tbmap wikipedia application profile archives blog cerif data data model database dcap dcmi digital 
repositories domain model dspace dublin core dublin core metadata initiative e-government eprints fedora commons framework frbr geospatial data gis higher education identifier information architecture institutional repository interoperability metadata metadata model oai-ore open access open archives initiative open source rdf repositories research resource description ruby schema scholarly works application profile search technology software standards sword protocol uri usability virtual research environment vocabularies xml Sat, 30 Jan 2010 00:00:00 +0000 editor 1522 at http://www.ariadne.ac.uk Get Tooled Up: Xerxes at Royal Holloway, University of London http://www.ariadne.ac.uk/issue62/grigson-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue62/grigson-et-al#author1">Anna Grigson</a>, <a href="/issue62/grigson-et-al#author2">Peter Kiely</a>, <a href="/issue62/grigson-et-al#author3">Graham Seaman</a> and <a href="/issue62/grigson-et-al#author4">Tim Wales</a> describe the implementation of an open source front end to the MetaLib federated search tool.</p> </div> </div> </div> <!-- v4. completion of author details: institution - 2010-02-22-10-30- rew --><p>Rarely is software a purely technical issue, though it may be marketed as 'technology'. Software is embedded in work, and work patterns become moulded around it. Thus the use of a particular package can give rise to an inertia from which it can be hard to break free.</p> <p>Moreover, when this natural inertia is combined with data formats that are opaque or unique to a particular system, the organisation can become locked in to that system, a potential victim of the pricing policies or sluggish adaptability of the software provider. 
The speed of change in the information world in recent years, combined with the actual or expected crunch in library funding, has made this a particular issue for library management system (LMS) users. While there is general agreement on the direction to take - more 'like Google' - LMS suppliers' moves in this direction can prove both slow and expensive for the user.</p> <p>Open source software has often been suggested as an alternative, but the nature of lock-in means that the jump from proprietary to open system can be all or nothing; in effect too big (and complex) a risk to take. No major UK university libraries have yet moved to Koha, Evergreen, or indeed any open source LMS [<a href="#1">1</a>].</p> <p>The alternative, which brings its own risks, is to take advantage of the pressures on LMS suppliers to make their own systems more open, and to use open source systems 'around the edges' [<a href="#2">2</a>]. This has the particular benefit of creating an overall system which follows the well-established design practice of creating a clean separation of 'view' (typically the Web interface) from 'model' (here the LMS-managed databases) and 'controller' (the LMS core code). The 'view' is key to the user experience of the system, and this separation gives the ability to make rapid changes or to integrate Web 2.0 features quickly and easily, independently of the system back-end. The disadvantage of this approach is that it is relatively fragile, being dependent on the willingness of the LMS supplier to provide a detailed and stable application programming interface (API).</p> <p>There are several current examples of this alternative approach. Some, like the Vufind OPAC, allow the use of plug-ins which adapt the software to a range of different LMSs. Others, like Xerxes, are specialised front-ends to a single system (MetaLib from ExLibris [<a href="#3">3</a>]). 
This has an impact on evaluating the software: in particular, the pool of active developers is likely to be smaller in the latter case.</p> <h2 id="Royal_Holloway_Library_Services">Royal Holloway Library Services</h2> <p>Within this general context, Royal Holloway Library Services were faced with a specific problem. The annual National Student Survey had given ratings to the Library well below those expected, with many criticisms centred on the difficulty in using the Library's MetaLib federated search system.</p> <p>MetaLib is a key access point to the Library's e-resources, incorporating both A-Z lists of major online databases available to library users, and a federated search tool. Feedback showed that many users found the interface less than satisfactory, with one user commenting that:</p> <blockquote><p><em>'MetaLib is possibly the worst and most confusing library interface I have ever come across'</em></p></blockquote> <p>The Library Management Team decided to remedy this as a matter of urgency and set a deadline of the start of the 2009 Autumn term. There was no funding available to acquire an alternative discovery system, so the challenge was to identify a low-cost, quick-win solution for the existing one. With this work in mind, the incoming Associate Director (E-Strategy) had already recruited two new colleagues over the Summer vacation: one a systems officer with Web development experience, the other an experienced e-resources manager.</p> <p>The first possible route to the improvement of MetaLib was modification of the existing MetaLib Web interface. 
This was technically possible but presented several major difficulties: the underlying ExLibris designs were based on the old HTML 4.0 and pre-dated current stylesheet-based design practice; the methods to adapt the designs were opaque and poorly documented, based on numbered variables with semantics that changed depending on context; and perhaps most importantly, the changes were to be made over the summer months, giving no time for user feedback on the details of the changes to be made.</p> <p>The second possibility was the use of Xerxes [<a href="#4">4</a>]. Xerxes offered the advantage of an interface design which had been user-tested on a range of (US) campuses, partially solving the user feedback issue. It was not, however, entirely cost-free, as ExLibris charges an annual maintenance fee for the MetaLib X-server API on which Xerxes depends.</p> <p></p><p><a href="http://www.ariadne.ac.uk/issue62/grigson-et-al" target="_blank">read more</a></p> issue62 feature article anna grigson graham seaman peter kiely tim wales google jisc jisc collections kingston university microsoft royal holloway sconul university of london gnu api authentication data database ebook ejournal free software gpl html interoperability library management systems licence linux mysql opac open source php portal refworks repositories research search technology sfx software solaris standards stylesheet vufind web 2.0 web development web services wiki xml xslt Sat, 30 Jan 2010 00:00:00 +0000 editor 1525 at http://www.ariadne.ac.uk Abstract Modelling of Digital Identifiers http://www.ariadne.ac.uk/issue62/nicholas-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue62/nicholas-et-al#author1">Nick Nicholas</a>, <a href="/issue62/nicholas-et-al#author2">Nigel Ward</a> and <a href="/issue62/nicholas-et-al#author3">Kerry Blinco</a> present an information model of digital identifiers, to help bring clarity to 
the vocabulary debates from which this field has suffered.</p> </div> </div> </div> <!-- v2, incorporating author review edits inc. lead-ins to bullet lists - 2010-02-12-19-30-rew--><p>Discussion of digital identifiers, and persistent identifiers in particular, has often been confused by differences in underlying assumptions and approaches. To bring more clarity to such discussions, the PILIN Project has devised an abstract model of identifiers and identifier services, which is presented here in summary. Given such an abstract model, it is possible to compare different identifier schemes, despite variations in terminology; and policies and strategies can be formulated for persistence without committing to particular systems. The abstract model is formal and layered; in this article, we give an overview of the distinctions made in the model. This presentation is not exhaustive, but it presents some of the key concepts represented, and some of the insights that result.</p> <p>The main goal of the Persistent Identifier Linking Infrastructure (PILIN) project [<a href="#1">1</a>] has been to scope the infrastructure necessary for a national persistent identifier service. There are a variety of approaches and technologies already on offer for persistent digital identification of objects. But true identity persistence cannot be bound to particular technologies, domain policies, or information models: any formulation of a persistent identifier strategy needs to outlast current technologies, if the identifiers are to remain persistent in the long term.</p> <p>For that reason, PILIN has modelled the digital identifier space in the abstract. 
It has arrived at an ontology [<a href="#2">2</a>] and a service model [<a href="#3">3</a>] for digital identifiers, and for how they are used and managed, building on previous work in the identifier field [<a href="#4">4</a>] (including the thinking behind URI [<a href="#5">5</a>], DOI [<a href="#6">6</a>], XRI [<a href="#7">7</a>] and ARK [<a href="#8">8</a>]), as well as semiotic theory [<a href="#9">9</a>]. The ontology, as an abstract model, addresses the questions 'what is (and isn't) an identifier?' and 'what does an identifier management system do?'. This more abstract view also brings clarity to the ongoing conversation about whether URIs can be (and should be) universal persistent identifiers.</p> <h2 id="Identifier_Model">Identifier Model</h2> <p>For the identifier model to be abstract, it cannot commit to a particular information model. The notion of an identifier depends crucially on the understanding that an identifier only identifies one distinct thing. But different domains will have different understandings of what things are distinct from each other, and what can legitimately count as a single thing. (This includes aggregations of objects, and different versions or snapshots of objects.) In order for the abstract identifier model to be applicable to all those domains, it cannot impose its own definitions of what things are distinct: it must rely on the distinctions specific to the domain.</p> <p>This means that information modelling is a critical prerequisite to introducing identifiers to a domain, as we discuss elsewhere [<a href="#10">10</a>]: identifier users should be able to tell whether any changes in a thing's content, presentation, or location mean it is no longer identified by the same identifier (i.e. whether the identifier is restricted to a particular version, format, or copy).</p> <p>The abstract identifier model also cannot commit to any particular protocols or service models. 
In fact, the abstract identifier model should not even presume the Internet as a medium. A sufficiently abstract model of identifiers should apply just as much to URLs as it does to ISBNs, or names of sheep; the model should not be inherently digital, in order to avoid restricting our understanding of identifiers to the current state of digital technologies. This means that our model of identifiers comes close to the understanding in semiotics of signs, as our definitions below make clear.</p> <p>There are two important distinctions between digital identifiers and other signs which we needed to capture. First, identifiers are managed through some system, in order to guarantee the stability of certain properties of the identifier. This is different to other signs, whose meaning is constantly renegotiated in a community. Those identifier properties requiring guarantees include the accountability and persistence of various facets of the identifier—most crucially, what is being identified. For digital identifiers, the <strong>identifier management system</strong> involves registries, accessed through defined services. An HTTP server, a PURL [<a href="#11">11</a>] registry, and an XRI registry are all instances of identifier management systems.</p> <p>Second, digital identifiers are straightforwardly <strong>actionable</strong>: actions can be made to happen in connection with the identifier. Those actions involve interacting with computers, rather than other people: the computer consistently does what the system specifies is to be done with the identifier, and has no latitude for subjective interpretation. This is in contrast with human language, which can involve complex processes of interpretation, and where there can be considerable disconnect between what a speaker intends and how a listener reacts. 
Because the interactions involved are much simpler, the model can concentrate on two actions which are core to digital identifiers, but which are only part of the picture in human communication: working out what is being identified (<em>resolution</em>), and accessing a representation of what is identified (<em>retrieval</em>).</p> <p>So to model managing and acting on digital identifiers, we need a concept of things that can be identified, names for things, and the relations between them. (Semiotics already gives us such concepts.) We also need a model of the systems through which identifiers are managed and acted on; what those systems do, and who requests them to do so; and what aspects of identifiers the systems manage.</p> <p>Our identifier model (as an ontology) thus encompasses:</p> <ul> <li><strong>Entities</strong> - including actors and identifier systems;</li> <li><strong>Relations</strong> between entities;</li> <li><strong>Qualities</strong>, as desirable properties of entities. Actions are typically undertaken in order to make qualities apply to entities.</li> <li><strong>Actions</strong>, as the processes carried out on entities (and corresponding to <strong>services</strong> in implementations);</li> </ul> <p>An individual identifier system can be modelled using concepts from the ontology, with an identifier system model.</p> <p>In the remainder of this article, we go through the various concepts introduced in the model under these classes. We present the concept definitions under each section, before discussing issues that arise out of them. <em>Resolution</em> and <em>Retrieval</em> are crucial actions for identifiers, whose definition involves distinct issues; they are discussed separately from other Actions. 
We briefly discuss the standing of HTTP URIs in the model at the end.</p> <p><a href="http://www.ariadne.ac.uk/issue62/nicholas-et-al" target="_blank">read more</a></p> issue62 feature article kerry blinco nick nicholas nigel ward d-lib magazine dest ietf oasis internet archive aggregation archives ark ascii browser cataloguing cool uri cordra curation data database digital object identifier dns document management doi e-learning ftp identifier infrastructure interoperability learning objects metadata mobile mobile phone namespace ontologies openurl persistent identifier purl repositories research rfc search technology semantic web semiotic service usage model uri url vocabularies wayback machine web browser xml xml namespaces Sat, 30 Jan 2010 00:00:00 +0000 editor 1528 at http://www.ariadne.ac.uk Fedora UK & Ireland / EU Joint User Group Meeting http://www.ariadne.ac.uk/issue62/fedora-eu-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue62/fedora-eu-rpt#author1">Chris Awre</a> reports on the first coming together of two regional user groups for the Fedora digital repository system, hosted by the University of Oxford in December 2009.</p> </div> </div> </div> <!-- v2. edits from author incorporated into this version - 2010-02-12-22-47 rew --><p>The Fedora digital repository system [<a href="#1">1</a>] (as opposed to the Fedora Linux distribution, with which there is no connection) is an open source solution for the management of all types of digital content. Its development is managed through DuraSpace [<a href="#2">2</a>], the same organisation that now oversees DSpace, and carried out by developers around the world. 
The developers, alongside the extensive body of Fedora users, form the community that sustains Fedora.</p> <p><a href="http://www.ariadne.ac.uk/issue62/fedora-eu-rpt" target="_blank">read more</a></p> issue62 event report chris awre bbc duraspace ieee jisc kings college london stanford university technical university of denmark university of edinburgh university of hull university of oxford university of southampton university of virginia bril datashare hydra idmb cloud computing content management data data management database digital repositories dspace e-research e-science eprints fedora commons flickr framework geospatial data gis infrastructure institutional repository linux metadata mobile open source portal qr code rdbms rdf repositories research search technology software usability virtual research environment wiki xml youtube Sat, 30 Jan 2010 00:00:00 +0000 editor 1531 at http://www.ariadne.ac.uk
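The article 'Abstract Modelling of Digital Identifiers' above distinguishes two core actions on a digital identifier: resolution (working out what is being identified) and retrieval (accessing a representation of what is identified), both carried out through an identifier management system. A minimal sketch of that distinction follows; all class, method, and identifier names here are purely illustrative, not taken from PILIN's actual ontology or service model, which are far richer.

```python
# Illustrative sketch of the resolution/retrieval distinction described
# in the PILIN abstract identifier model. Names are hypothetical.

class IdentifierSystem:
    """A toy identifier management system (registry).

    It guarantees one property the article highlights: each identifier
    is bound to exactly one distinct thing. What counts as 'distinct'
    is a domain decision, not something this model imposes.
    """

    def __init__(self):
        self._bindings = {}         # identifier -> the thing identified
        self._representations = {}  # identifier -> a representation of it

    def bind(self, identifier, thing, representation=None):
        # Re-binding replaces the old binding: one identifier, one thing.
        self._bindings[identifier] = thing
        if representation is not None:
            self._representations[identifier] = representation

    def resolve(self, identifier):
        """Resolution: work out what is being identified."""
        return self._bindings.get(identifier)

    def retrieve(self, identifier):
        """Retrieval: access a representation of what is identified."""
        return self._representations.get(identifier)


# Hypothetical usage: the identifier string below is made up.
registry = IdentifierSystem()
registry.bind("ark:/99999/example-thesis",
              thing="final deposited version of a thesis",
              representation=b"%PDF-1.4 ...")

print(registry.resolve("ark:/99999/example-thesis"))
print(registry.retrieve("ark:/99999/example-thesis"))
```

Note that resolution can succeed where retrieval fails (a thing can be identified without any representation being accessible), which is one reason the model treats them as separate actions.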