Overview of content related to 'csv' http://www.ariadne.ac.uk/taxonomy/term/4007/all?article-type=&term=&organisation=&project=&author=&issue= RSS feed with Ariadne content related to specified tag en The ARK Project: Analysing Raptor at Kent http://www.ariadne.ac.uk/issue70/lyons <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/lyons#author1">Leo Lyons</a> describes how University of Kent librarians are benefitting from Raptor's ability to produce e-resource usage statistics and charts.</p> </div> </div> </div> <p>It is indisputable that the use of e-resources in university libraries has increased exponentially over the last decade and there would be little disagreement with a prediction that usage is set to continue to increase for the foreseeable future. The majority of students both at undergraduate and post-graduate level now come from a background where online access is the <em>de facto</em> standard.</p> <p><a href="http://www.ariadne.ac.uk/issue70/lyons" target="_blank">read more</a></p> issue70 feature article leo lyons cardiff university jisc microsoft newcastle university university of huddersfield university of kent ark project authentication blog cataloguing csv data data set database further education identifier infrastructure internet explorer ldap licence microsoft reporting services mobile native app raptor repositories research sharepoint shibboleth software sql standards wiki xml Tue, 04 Dec 2012 17:21:49 +0000 lisrw 2394 at http://www.ariadne.ac.uk SUSHI: Delivering Major Benefits to JUSP http://www.ariadne.ac.uk/issue70/meehan-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue70/meehan-et-al#author1">Paul Meehan</a>, <a href="/issue70/meehan-et-al#author2">Paul Needham</a> and <a href="/issue70/meehan-et-al#author3">Ross MacIntyre</a> explain the enormous time and 
cost benefits in using SUSHI to support rapid gathering of journal usage reports into the JUSP service.</p> </div> </div> </div> <p>A full-scale implementation of the Journal Usage Statistics Portal (JUSP) would not be possible without the automated data harvesting afforded by the Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol. Estimated time savings in excess of 97% compared with manual file handling have allowed JUSP to expand its service to more than 35 publishers and 140 institutions by September 2012. An in-house SUSHI server also allows libraries to download quality-checked data from many publishers via JUSP, removing the need to visit numerous Web sites. The protocol thus affords enormous cost and time benefits for the centralised JUSP service and for all participating institutions. JUSP has also worked closely with many publishers to develop and implement SUSHI services, pioneering work to benefit both the publishers and the UK HE community.</p> <p style="text-align: center; "><img alt="Journal Usage Statistics Portal (JUSP)" src="http://ariadne-media.ukoln.info/grfx/img/issue70-meehan-et-al/jusp-logo.png" style="width: 145px; height: 133px;" title="Journal Usage Statistics Portal (JUSP)" /></p> <h2 id="JUSP:_Background_to_the_Service">JUSP: Background to the Service</h2> <p>The management of journal usage statistics can be an onerous task at the best of times. The introduction of the COUNTER [<a href="#1">1</a>] Code of Practice in 2002 was a major step forward, allowing libraries to collect consistent, audited statistics from publishers. By July 2012, 125 publishers offered the JR1 report, providing the number of successful full-text downloads. 
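The manual file handling that SUSHI replaced typically meant downloading JR1 spreadsheets or CSV exports publisher by publisher and totalling them by hand. Purely as an illustration (the column layout below is an invented simplification, not the audited COUNTER format), summing full-text downloads from a JR1-style CSV might look like this:

```python
import csv
import io

# Hypothetical JR1-style rows: journal title, then monthly full-text
# download counts. Real COUNTER JR1 files carry extra header rows,
# totals columns and audited formatting; this is only a sketch.
JR1_SAMPLE = """\
Journal,Jan-2012,Feb-2012,Mar-2012
Journal of Examples,120,95,143
Annals of Placeholder Studies,40,62,55
"""

def total_downloads(csv_text):
    """Sum full-text downloads per journal from a JR1-style CSV."""
    reader = csv.DictReader(io.StringIO(csv_text))
    totals = {}
    for row in reader:
        title = row.pop("Journal")  # remaining columns are monthly counts
        totals[title] = sum(int(count) for count in row.values())
    return totals

print(total_downloads(JR1_SAMPLE))
# {'Journal of Examples': 358, 'Annals of Placeholder Studies': 157}
```

Multiplied across dozens of publishers and over a hundred institutions, even this simple step becomes the bottleneck that the automated harvesting described below was introduced to remove.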
In the decade since COUNTER reports became available, analysis of the reports has become increasingly important, with library managers, staff and administrators increasingly obliged to examine journal usage to inform and rationalise purchasing and renewal decisions.</p> <p>In 2004, JISC Collections commissioned a report [<a href="#2">2</a>] which concluded that there was a definite demand for a usage statistics portal for the UK HE community; with some sites subscribing to more than 100 publishers, just keeping track of access details and downloading reports was becoming a significant task in itself, much less analysing the figures therein. There followed a report into the feasibility of establishing a ‘Usage Statistics Service’ carried out by Key Perspectives Limited, and in 2008 JISC issued an ITT (Invitation To Tender). By early 2009 a prototype service, known as the Journal Usage Statistics Portal (JUSP), had been developed by a consortium including Evidence Base at Birmingham City University, Cranfield University, JISC Collections and Mimas at The University of Manchester; the prototype featured a handful of publishers and three institutions. However, despite a centralised service appearing feasible [<a href="#3">3</a>], the requirement to download and process data in spreadsheet format, and the attendant time taken, still precluded a full-scale implementation across UK HE.</p> <p style="text-align: center; "><img alt="COUNTER" src="http://ariadne-media.ukoln.info/grfx/img/issue70-meehan-et-al/counter-header.png" style="width: 640px; height: 45px;" title="COUNTER" /></p> <p>Release 3 of the COUNTER Code of Practice in 2009, however, mandated the use of the newly-introduced Standardized Usage Statistics Harvesting Initiative (SUSHI) protocol [<a href="#4">4</a>], a mechanism for the machine-to-machine transfer of COUNTER-compliant reports; this produced dramatic efficiencies of time and cost in the gathering of data from publishers.
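In outline, a SUSHI exchange is a SOAP request for a named COUNTER report over a date range. A minimal sketch of building such a ReportRequest body follows; the element names track the published SUSHI (ANSI/NISO Z39.93) schema in outline, but the requestor and customer IDs here are placeholders, the SOAP envelope is omitted, and individual publishers differ in the report releases and identifiers they accept:

```python
import xml.etree.ElementTree as ET

SUSHI_NS = "http://www.niso.org/schemas/sushi"

def build_report_request(requestor_id, customer_id, begin, end):
    """Build a SUSHI ReportRequest body asking for a COUNTER JR1 report.

    Sketch only: real requests are wrapped in a SOAP envelope and sent to
    the publisher's SUSHI endpoint; the response carries the COUNTER
    report as XML for machine processing.
    """
    req = ET.Element(f"{{{SUSHI_NS}}}ReportRequest")
    requestor = ET.SubElement(req, f"{{{SUSHI_NS}}}Requestor")
    ET.SubElement(requestor, f"{{{SUSHI_NS}}}ID").text = requestor_id
    customer = ET.SubElement(req, f"{{{SUSHI_NS}}}CustomerReference")
    ET.SubElement(customer, f"{{{SUSHI_NS}}}ID").text = customer_id
    # Ask for the JR1 report (full-text article requests by journal).
    report = ET.SubElement(req, f"{{{SUSHI_NS}}}ReportDefinition",
                           Name="JR1", Release="3")
    filters = ET.SubElement(report, f"{{{SUSHI_NS}}}Filters")
    date_range = ET.SubElement(filters, f"{{{SUSHI_NS}}}UsageDateRange")
    ET.SubElement(date_range, f"{{{SUSHI_NS}}}Begin").text = begin
    ET.SubElement(date_range, f"{{{SUSHI_NS}}}End").text = end
    return ET.tostring(req, encoding="unicode")

print(build_report_request("jusp-requestor", "example-customer",
                           "2012-01-01", "2012-01-31"))
```

Because the whole exchange is machine-to-machine, a harvester can loop over every publisher and customer each month with no spreadsheet handling at all.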
The JUSP team began work to implement SUSHI for a range of publishers and to expand the number of institutions. By September 2012, the service had grown significantly, whilst remaining free at point of use, and encompassed 148 participating institutions and 35 publishers. To date, more than 100 million individual data points have been collected by JUSP, all via SUSHI, a scale that would have been impossible without such a mechanism in place or without massive additional staff costs.</p> <p>JUSP offers much more than basic access to publisher statistics, however; the JUSP Web site [<a href="#5">5</a>] details the numerous reports and analytical tools on offer, together with detailed user guides and support materials. The cornerstone of the service, though, is undeniably its SUSHI implementation, both in terms of gathering the COUNTER JR1 and JR1a data and - as developed more recently - its own SUSHI server, enabling institutions to re-harvest data into their own library management tools for local analysis.</p> <h2 id="JUSP_Approach_to_SUSHI_Development_and_Implementation">JUSP Approach to SUSHI Development and Implementation</h2> <p>Once the decision was made to scale JUSP into a full service, the development of SUSHI capability became of paramount importance. The team had been able to handle spreadsheets of data on a small scale, but the expected upscale to 100+ institutions and multiple publishers within a short time frame meant that this would very quickly become unmanageable and costly in staff time and effort - constraints that were proving to be a source of worry at many institutions too: while some sites could employ staff whose role revolved around usage stats gathering and analysis, this was not possible at every institution, nor especially straightforward for institutions juggling dozens, if not hundreds, of publisher agreements and deals.</p> <p>Two main issues were immediately apparent in the development of the SUSHI software.
Firstly, there was a lack of any standard SUSHI client software that we could use or adapt, and, more worryingly, a lack of SUSHI support at a number of major publishers. While many publishers use an external company or platform such as Atypon, MetaPress or HighWire to collect and provide usage statistics, others had made little or no progress in implementing SUSHI support by late 2009 - where SUSHI servers were in place, these were often untested or unused by consumers.</p> <p>An ultimate aim for JUSP was to develop a single piece of software that would seamlessly interact with any available SUSHI repository and download data for checking and loading into JUSP. However, the only client software available by 2009 was written and designed to work in the Windows environment, or used Java, which can be very complex to work with and in which the JUSP team had limited expertise. The challenge therefore became to develop a much simpler set of code using Perl and/or PHP, common and simple programming languages which were much more familiar to the JUSP team.</p> <p><a href="http://www.ariadne.ac.uk/issue70/meehan-et-al" target="_blank">read more</a></p> issue70 feature article paul meehan paul needham ross macintyre birmingham city university cranfield university elsevier intute jisc jisc collections mimas niso university of manchester university of oxford jusp nesli pirus2 zetoc archives authentication csv data data set database digital library dublin core html identifier interoperability java multimedia openurl passwords perl php portal raptor repositories research shibboleth software standards sushi windows xml Wed, 05 Dec 2012 17:54:19 +0000 lisrw 2396 at http://www.ariadne.ac.uk Retooling Special Collections Digitisation in the Age of Mass Scanning http://www.ariadne.ac.uk/issue67/rinaldo-et-al <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a
href="/issue67/rinaldo-et-al#author1">Constance Rinaldo</a>, <a href="/issue67/rinaldo-et-al#author2">Judith Warnement</a>, <a href="/issue67/rinaldo-et-al#author3">Tom Baione</a>, <a href="/issue67/rinaldo-et-al#author4">Martin R. Kalfatovic</a> and <a href="/issue67/rinaldo-et-al#author5">Susan Fraser</a> describe results from a study to identify and develop a cost-effective and efficient large-scale digitisation workflow for special collections library materials.</p> </div> </div> </div> <!-- start main content --><p>The Biodiversity Heritage Library (BHL) [<a href="#1">1</a>] is a consortium of 12 natural history and botanical libraries that co-operate to digitise and make accessible the legacy literature of biodiversity held in their collections and to make that literature available for open access and responsible use as a part of a global 'biodiversity commons.' [<a href="#2">2</a>] The participating libraries hold more than two million volumes of biodiversity literature collected over 200 years to support the work of scientists, researchers and students in their home institutions.</p> <p><a href="http://www.ariadne.ac.uk/issue67/rinaldo-et-al" target="_blank">read more</a></p> issue67 feature article constance rinaldo judith warnement martin r.
kalfatovic susan fraser tom baione american museum of natural history california digital library harvard university ifla library of congress new york botanical garden oclc smithsonian institution university of cambridge university of oxford internet archive open library wikipedia archives bibliographic data cataloguing csv data database digital library digitisation dublin core framework infrastructure intellectual property librarything metadata opac open access repositories research tagging url video web services wiki z39.50 Sun, 03 Jul 2011 23:00:00 +0000 editor 1624 at http://www.ariadne.ac.uk Characterising and Preserving Digital Repositories: File Format Profiles http://www.ariadne.ac.uk/issue66/hitchcock-tarrant <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue66/hitchcock-tarrant#author1">Steve Hitchcock</a> and <a href="/issue66/hitchcock-tarrant#author2">David Tarrant</a> show how file format profiles, the starting point for preservation plans and actions, can also be used to reveal the fingerprints of emerging types of institutional repositories.</p> </div> </div> </div> <p><a href="http://www.ariadne.ac.uk/issue66/hitchcock-tarrant" target="_blank">read more</a></p> issue66 feature article david tarrant steve hitchcock amazon google harvard university jisc microsoft mpeg the national archives university of illinois university of northampton university of southampton university of the arts london wellcome library jisc information environment keepit wikipedia accessibility adobe archives bibliographic data blog cloud computing css csv curation data data management database digital curation digital preservation digital repositories dissemination document format droid eprints file format flash flash video framework gif graphics html hypertext identifier institutional repository java jpeg latex linked data metadata mpeg-1 open access open source photoshop php plain text 
preservation quicktime repositories research schema semantic web software standards vector graphics video web 2.0 wiki windows windows media xml xml schema Sun, 30 Jan 2011 00:00:00 +0000 editor 1608 at http://www.ariadne.ac.uk Repository Fringe 2010 http://www.ariadne.ac.uk/issue65/repos-fringe-2010-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue65/repos-fringe-2010-rpt#author1">Martin Donnelly</a> (and friends) report on the Repository Fringe "unconference" held at the National e-Science Centre in Edinburgh, Scotland, over 2-3 September 2010.</p> </div> </div> </div> <p>2010 was the third year of Repository Fringe, and slightly more formally organised than its antecedents, with an increased number of discursive presentations and less in the way of organised chaos! The proceedings began on Wednesday 1 September with a one-day, pre-event SHERPA/RoMEO API Workshop [<a href="#1">1</a>] run by the Repositories Support Project team.</p> <p><a href="http://www.ariadne.ac.uk/issue65/repos-fringe-2010-rpt" target="_blank">read more</a></p> issue65 event report martin donnelly cetis dcc duraspace edina google jisc open university sherpa ukoln university of cambridge university of edinburgh university of glasgow university of hull university of southampton university of st andrews addressing history crispool datashare depositmo hydra jorum memento repomman reposit repositories support project romeo sharegeo sneep wikipedia aggregation api archives bibliographic data blog content management content negotiation csv curation data data management data set database digital curation digital library digital preservation digitisation dissemination doi dspace eprints fedora commons file format framework geospatial data gis google maps hashtag html hypertext identifier infrastructure institutional repository ipad kml learning objects mashup metadata national library oer ontologies open access 
open source preservation repositories research rss search technology social networks solr standards tagging twitter uri video visualisation wordpress yahoo pipes Fri, 29 Oct 2010 23:00:00 +0000 editor 1592 at http://www.ariadne.ac.uk Data Services for the Sciences: A Needs Assessment http://www.ariadne.ac.uk/issue64/westra <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue64/westra#author1">Brian Westra</a> describes a data services needs assessment for science research staff at the University of Oregon.</p> </div> </div> </div> <p>Computational science and raw and derivative scientific data are increasingly important to the research enterprise of higher education institutions. Academic libraries are beginning to examine what the expansion of data-intensive e-science means to scholarly communication and information services, and some are reshaping their own programmes to support the digital curation needs of research staff. These changes in libraries may involve repurposing or leveraging existing services, and the development or acquisition of new skills, roles, and organisational structures [<a href="#1">1</a>].</p> <p>Scientific research data management is a fluid and evolving endeavour, reflective of the high rate of change in the information technology landscape, increasing levels of multi-disciplinary research, complex data structures and linkages, advances in data visualisation and analysis, and new tools capable of generating or capturing massive amounts of data.</p> <p>These factors can create a complex and challenging environment for managing data, and one in which libraries can have a significant positive role supporting e-science. 
A needs assessment can help to characterise scientists' research methods and data management practices, highlighting gaps and barriers [<a href="#2">2</a>], and thereby improve the odds for libraries to plan appropriately and effectively implement services in the local setting [<a href="#3">3</a>].</p> <h2 id="Methods">Methods</h2> <p>An initiative to conduct a science data services needs assessment was developed and approved in early 2009 at the University of Oregon. The initiative coincided with the hiring of a science data services librarian, and served as an initial project for the position. A researcher-centric approach to the development of services was a primary factor in using an assessment to shape services [<a href="#4">4</a>]. The goals of the project were to:</p> <ul> <li>define the information services needs of science research staff;</li> <li>inform the Libraries and other stakeholders of gaps in the current service structures; and</li> <li>identify research groups or staff who would be willing to participate in, and whose datasets would be good subjects for, pilot data curation projects.</li> </ul> <p>The library took the lead role on the assessment, consulting with other stakeholders in its development and implementation. Campus Information Services provided input on questions regarding campus information technology infrastructure, and helped to avoid unnecessary overlap with other IT service activities focused on research staff. The Vice President for Research and other organisational units were advised of the project and were asked for referrals to potential project participants. These units provided valuable input in the selection of staff contacts. Librarian subject specialists also suggested staff who might be working with data and interested in participating.
Librarians responsible for digital collections, records management, scholarly communications, and the institutional repository were involved in the development of the assessment questions and project plan.</p> <p>The questions used in the assessment were developed through an iterative process. A literature and Web review located several useful resources and examples. These included the University of Minnesota Libraries' study of scientists' research behaviours [<a href="#3">3</a>], and a study by Henty et al. on the data management practices of Australian researchers [<a href="#5">5</a>]. The Data Audit Framework (DAF - now called the Data Asset Framework) methodology was considered to provide the most comprehensive set of questions with a field-tested methodology and guidelines [<a href="#6">6</a>][<a href="#7">7</a>][<a href="#8">8</a>][<a href="#9">9</a>][<a href="#10">10</a>][<a href="#11">11</a>]. The stages outlined in the DAF methodology were also instructive, although we elected not to execute a process for identifying and classifying assets (DAF Stage 2), since the organisational structure of our departments and institutes is not conducive to that level of investigation.
From the beginning, it was recognised that the recruitment of scientists was based as much on their willingness to participate as on their responsibility for any specific class or type of research-generated data.</p> <p><a href="http://www.ariadne.ac.uk/issue64/westra" target="_blank">read more</a></p> issue64 feature article brian westra arl edina imperial college london jisc johns hopkins university microsoft uk data archive university of edinburgh university of essex university of glasgow university of illinois university of oregon university of oxford university of washington archives authentication csv curation data data management data set data visualisation database digital curation digital library drupal e-research e-science file format framework gis higher education infrastructure institutional repository metadata mysql open access provenance repositories research usability visualisation Thu, 29 Jul 2010 23:00:00 +0000 editor 1568 at http://www.ariadne.ac.uk Get Tooled Up: SeeAlso: A Simple Linkserver Protocol http://www.ariadne.ac.uk/issue57/voss <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue57/voss#author1">Jakob Voss</a> combines OpenSearch and unAPI to enrich catalogues.</p> </div> </div> </div> <!-- 2008-11-11 REW v2 to take in final edits on code fragments, etc --><p>In recent years the principle of Service-oriented Architecture (SOA) has grown increasingly important in digital library systems.
More and more core functionalities are becoming available in the form of Web-based, standardised services which can be combined dynamically to operate across a broader environment [<a href="#1">1</a>].</p> <p><a href="http://www.ariadne.ac.uk/issue57/voss" target="_blank">read more</a></p> issue57 feature article jakob voss d-lib magazine google ieee oai w3c cpan jisc information environment wikipedia api archives atom bibliographic data blog browser cataloguing cloud computing creative commons csv data database digital library firefox framework html hypertext identifier javascript json library management systems librarything licence lod metadata microformats namespace oai-pmh opac open archives initiative open data open source open standard openurl perl rdf rfc search technology soa software sparql sql sru standards syndication tag cloud uri url web 2.0 web services wiki xml xslt Thu, 30 Oct 2008 00:00:00 +0000 editor 1436 at http://www.ariadne.ac.uk The 2008 Mashed Museum Day and UK Museums on the Web Conference http://www.ariadne.ac.uk/issue56/ukmw08-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue56/ukmw08-rpt#author1">Mia Ridge</a> reports on the Mashed Museum day and the Museums Computer Group UK Museums on the Web Conference, held at the University of Leicester in June 2008.</p> </div> </div> </div> <p>Following the success of the inaugural event last year [<a href="#1">1</a>], the Mashed Museum day was again held the day before the Museums Computer Group UK Museums on the Web Conference. 
The theme of the conference was 'connecting collections online', and the Mashed Museum day was a chance for museum ICT staff to put this into practice.</p> <h2 id="The_Mashed_Museum_Day">The Mashed Museum Day</h2> <p>Earlier this year I received an email that read:</p> <p><a href="http://www.ariadne.ac.uk/issue56/ukmw08-rpt" target="_blank">read more</a></p> issue56 event report mia ridge bbc ibm library of congress massachusetts institute of technology museum of london oai university of leicester europeana freebase romeo wikipedia accessibility aggregation api archives blog cataloguing copyright csv data data set data visualisation database digital library digital media digitisation exif file format flickr foi framework geospatial data gis ict infrastructure metadata ontologies rdf rdfa repositories research resource description rss search technology semantic web standardisation syndication twitter video visualisation vocabularies web 2.0 web services xml Tue, 29 Jul 2008 23:00:00 +0000 editor 1415 at http://www.ariadne.ac.uk The KIDMM Community's 'MetaKnowledge Mash-up' http://www.ariadne.ac.uk/issue53/kidmm-rpt <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue53/kidmm-rpt#author1">Conrad Taylor</a> reports on the KIDMM knowledge community and its September 2007 one-day conference about data, information and knowledge management issues.</p> </div> </div> </div> <h2 id="About_KIDMM">About KIDMM</h2> <p>The British Computer Society [<a href="#1">1</a>], which in 2007 celebrates 50 years of existence, has a self-image around engineering, software, and systems design and implementation. 
However, within the BCS there are over fifty Specialist Groups (SGs); among these, some have a major focus on 'informatics', or the <em>content</em> of information systems.</p> <p><a href="http://www.ariadne.ac.uk/issue53/kidmm-rpt" target="_blank">read more</a></p> issue53 event report conrad taylor anglia ruskin university bsi google library of congress nhs ordnance survey the national archives ukoln university of bolton university of london university of manchester wikipedia adobe algorithm archives ascii born digital browser cataloguing controlled vocabularies csv cybernetics data data management data mining data set database digital archive digital asset management dublin core e-government e-learning ead eportfolio foia framework geospatial data gis google maps identifier information retrieval information society interoperability location-based services metadata mis named entity recognition ontologies portfolio preservation provenance repositories research search technology sgml software standards tagging text mining thesaurus vocabularies wiki xml Tue, 30 Oct 2007 00:00:00 +0000 editor 1358 at http://www.ariadne.ac.uk Book Review: Mastering Regular Expressions, 3rd Edition http://www.ariadne.ac.uk/issue53/tonkin-tourte-rvw <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue53/tonkin-tourte-rvw#author1">Emma Tonkin</a> and <a href="/issue53/tonkin-tourte-rvw#author2">Greg Tourte</a> take a look at the new edition of an O'Reilly classic.</p> </div> </div> </div> <h2 id="Introduction:_Needles_Haystacks_and_Magnets">Introduction: Needles, Haystacks and Magnets</h2> <p>Since the early days of metadata, powerful textual search methods have been, as Wodehouse's Wooster might have put it, 'of the essence'. Effective use of search engines is all about understanding the use of the rich query syntax supported by that particular software. 
Examples include the use of Boolean logic (AND, OR and NOT), and wildcards, such as <em><strong>*</strong></em> and <em><strong>?</strong></em>.</p> <p><a href="http://www.ariadne.ac.uk/issue53/tonkin-tourte-rvw" target="_blank">read more</a></p> issue53 review emma tonkin greg tourte google oreilly ukoln university of bristol archives ascii csv data database digital library eprints html interoperability java metadata perl php programming language search technology software text mining url Tue, 30 Oct 2007 00:00:00 +0000 editor 1363 at http://www.ariadne.ac.uk The LEODIS Database http://www.ariadne.ac.uk/issue27/leodis <div class="field field-type-text field-field-teaser-article"> <div class="field-items"> <div class="field-item odd"> <p><a href="/issue27/leodis#author1">Jonathan Kendal</a> on the creation of LEODIS, a Public Libraries sector digitization and database project.</p> </div> </div> </div> <h3 id="Personal_Background">Personal Background</h3> <p>To begin with, as this is predominantly a libraries publication I feel an introduction to my background may be helpful in understanding this approach to digitisation.</p> <p><a href="http://www.ariadne.ac.uk/issue27/leodis" target="_blank">read more</a></p> issue27 feature article jonathan kendal manchester metropolitan university microsoft oracle archives browser cataloguing csv data database digitisation dublin core identifier internet explorer intranet javascript jpg programming language purl research search technology software sql standards tiff url video Fri, 23 Mar 2001 00:00:00 +0000 editor 775 at http://www.ariadne.ac.uk
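The search-syntax examples in the review of <em>Mastering Regular Expressions</em> above - Boolean operators and the <strong>*</strong> and <strong>?</strong> wildcards - have straightforward regular-expression counterparts. As a minimal illustration (the filenames and patterns here are invented for demonstration), glob-style <strong>*</strong> maps onto the regex quantifier <code>.*</code> and alternation plays the role of Boolean OR:

```python
import re

# Invented filenames for demonstration only.
titles = ["metadata.csv", "metadata.xml", "meta.csv", "notes.txt"]

# '*' in a glob becomes '.*' in a regex: any run of characters.
glob_star = re.compile(r"^meta.*\.csv$")       # like the glob meta*.csv

# Alternation (csv|xml) acts like a Boolean OR on the extension.
glob_query = re.compile(r"^metadata\.(csv|xml)$")

print([t for t in titles if glob_star.match(t)])
# ['metadata.csv', 'meta.csv']
print([t for t in titles if glob_query.match(t)])
# ['metadata.csv', 'metadata.xml']
```

The single-character wildcard <strong>?</strong> corresponds to the regex <code>.</code> (any one character); regular expressions generalise both wildcards into a far richer query syntax.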