CURATEcamp is ‘A series of unconference-style events focused on connecting practitioners and technologists interested in digital curation.’ The first CURATEcamp was held in the summer of 2010, and there have been just over ten Camps since then. The activity at CURATEcamps is driven by the attendees; in other words, ‘There are no spectators at CURATEcamp, only participants.’ Camps follow the ‘open agenda’ model: while organisers will typically build the activity around a particular theme within the field of digital curation, and sometimes (but not always) collect topics for discussion in advance, there is no preset agenda. The event’s structure is determined at the beginning of the day by having participants propose and vote on topics of interest. The selected topics are then placed in an outline or ‘grid’ that records the day’s activity.
18 people attended the iPres 2012 camp. The group selected four projects to work on from the list compiled prior to the Camp. In addition, some participants gave lightning talks during the lunch break.
In September, a working group from the National Digital Stewardship Alliance (NDSA) released for public comment a document describing levels of digital preservation. This document arose from a perceived lack of guidance on how organisations should prioritise resources for digital preservation; the levels do not address what to preserve, workflows, preservation platforms, or other policy and operational details.
The CURATEcamp iPres 2012 grid
The document, which is presented in the form of a matrix of columns and rows, defines four levels (‘Protect Your Data’, ‘Know Your Data’, ‘Monitor Your Data’, and ‘Repair Your Data’) and, across these levels, five functional areas (Storage and Geographic Location, File Fixity and Data Integrity, Information Security, Metadata, and File Formats).
In general, each successive level requires more organisational resources to perform the preservation functions than the one before it.
One of the CURATEcamp groups assembled to provide feedback on the document. Their feedback has been posted to the NDSA blog as a comment. In summary, the group saw a need for a ‘Level Zero’, one identifying ‘something you can point at that you suspect you are responsible for.’ Content identified at this level may not necessarily ever be preserved; the point is that an organisation needs to identify that it has content that should be evaluated for preservation. Furthermore, the group felt that the level captions (such as ‘Know Your Data’) were not very useful and recommended removing them from the chart. Finally, the group noted that the ‘File Formats’ functional area does not follow the general pattern of each successive level requiring more resources than the previous one, although a case could be made for making file format requirements more substantial as preservation levels increase.
This group attempted to tackle a lot for a single session: curation and the Cloud, new strategies and tools for new technology, and practical digital preservation solutions for production entities. Ultimately, the group focused on tools and strategies for preserving cloud-based services and social media while maintaining authenticity, one of the myriad issues that the advent of the Cloud brings to digital preservation.
Preservation of email is not a new issue, and there are many documented workflows around it. However, preservation and curation of Web-based email raises new issues. Retrieving the actual email is fairly straightforward if credentials are supplied, but once the email is curated, its presentation can be difficult. How do we, or should we, present the way a given user flags items, or how his or her email is organised? Content is king, but what about context?
The group focused on curating email from Hotmail, Gmail, and similar services, raising a number of questions along the way.
Finally, email is not the only cloud-based service that digital curators will need to work with: Dropbox, online backup services, Google Docs, and social media services will also pose their own challenges. In the very limited time remaining in the session, the group raised questions about these other cloud-based services similar to those it had raised about Web-based email, and noted that the community needs to develop standards and best practices for archiving social media and other cloud-based content.
Central Toronto, venue for iPRES 2012 events
A popular design pattern in digital curation and preservation is micro-services: single-purpose, loosely coupled tools that are combined into workflows. Another CURATEcamp iPres 2012 group examined the feasibility of formulating a standardised set of functional requirements that could aid micro-service users, developers, and integrators.
Participants offered examples of ‘good’ and ‘bad’ tools that can be integrated into workflows as micro-services. As an example of the ‘good’, they cited bagit from the Library of Congress, which is well documented (both within the source code and externally), can be included in Python scripts as a library or used as a standalone script, and is easy to install and use. As an example of a ‘bad’ micro-service, participants nominated FITS, which they saw as poorly documented and difficult to install.
Some guidelines for developers of digital preservation micro-services already exist in the form of David Tarrant’s ‘Software Development Guidelines’. Furthermore, participants familiar with work being performed at SCAPE (SCAlable Preservation Environments) revealed that the project was planning to produce Debian packages for its preservation tools, making them easy to deploy on standard Linux infrastructure.
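The micro-services pattern the group discussed, single-purpose tools chained into workflows, can be sketched as follows. The step names and the shared record structure here are hypothetical illustrations, not part of any of the tools mentioned above.

```python
import hashlib

# Each micro-service is a small, single-purpose step; a workflow is just
# a list of steps applied in order to a shared record. The steps are
# loosely coupled: they share nothing but the record passed between them.

def compute_checksum(record: dict) -> dict:
    record["md5"] = hashlib.md5(record["content"]).hexdigest()
    return record

def measure_size(record: dict) -> dict:
    record["size"] = len(record["content"])
    return record

def run_workflow(record: dict, steps) -> dict:
    for step in steps:
        record = step(record)
    return record

result = run_workflow({"content": b"example payload"},
                      [compute_checksum, measure_size])
```

Because each step only reads and annotates the record, steps can be reordered, replaced, or tested in isolation, which is the property that makes the pattern attractive for preservation workflows.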
File format identification and characterisation are important tasks in digital curation workflows, since successful application of downstream processes (like validation and normalisation) relies on accurate identification of a file’s format. Many tools exist that attempt to identify file formats; too many, in some people’s opinion. JHOVE, FITS, FIDO, Apache Tika, and the recently released Unified Digital Format Registry (UDFR), which unifies two other services, PRONOM and the Global Digital Format Registry, are all services or pieces of software that perform functions related to format identification and characterisation, but they all do it in slightly different ways.
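As a rough illustration of how signature-based identification works: these tools generally match a file’s leading ‘magic’ bytes against a registry of known signatures. The tiny hand-picked table below is an assumption for illustration only; real tools such as FIDO and DROID draw on much larger registries like PRONOM.

```python
# A minimal signature table: (leading magic bytes, MIME type guess).
MAGIC_SIGNATURES = [
    (b"%PDF-", "application/pdf"),
    (b"\x89PNG\r\n\x1a\n", "image/png"),
    (b"\xff\xd8\xff", "image/jpeg"),
    (b"PK\x03\x04", "application/zip"),
]

def identify(data: bytes) -> str:
    """Return a MIME type guess based on the file's leading magic bytes."""
    for signature, mime in MAGIC_SIGNATURES:
        if data.startswith(signature):
            return mime
    return "application/octet-stream"  # unknown format
```

Even this sketch shows why the tools disagree in practice: container formats like ZIP share one signature across many real formats (Office documents, EPUB, JAR), so accurate identification often requires inspecting more than the leading bytes.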
Building on the work of Andy Jackson, Paul Wheatley, and the Archivematica Format Policy Registry Requirements, this CURATEcamp group discussed the limitations of existing tools and approaches, the need for better use cases and clearer functional requirements, and performance optimisations of popular tools.
This discussion continued after the CURATEcamp and resulted in the organisation of the ‘CURATEcamp 24 Hour Worldwide File ID Hackathon’, held on 16 November 2012 and coordinated across time zones from GMT +12:00 to GMT -8:00. The day began in New Zealand and crossed continents with the rising sun, joined intermittently by participants interested in enhancing best practice for format identification and validation. The group communicated on Twitter via the event hashtag (#fileidhack) and in the IRC #openarchives chatroom. The event was a resounding success and ended with summaries provided by the Vancouver, Canada team in the final time zone. Highlights of the outcomes, with the names of their principal contributors, were shared at the end of the day.
These accomplishments all happened within the same day, and involved people from organisations ranging from independent consultancies to national libraries. Overall, the 24 Hour Worldwide File ID Hackathon proved to be an excellent example of the digital curation community’s ability to tackle a specific set of problems in a loosely coordinated yet highly focused burst of activity.
The Camp’s organisers offered participants the opportunity to deliver a 5-minute lightning talk during the lunch break. These impromptu presentations covered a broad variety of topics, including one institution’s experience of becoming certified as a Trustworthy Digital Repository, workflows for transferring content into digital preservation systems, specific work done to identify file formats in a large Web site archive, and low-cost disk storage options.
Lunchtime Lightning Talks at CURATEcamp iPres 2012
Participants at this CURATEcamp felt it was a success: it generated useful discussion, resulted in some concrete outcomes, and provided a venue for digital curation practitioners and researchers to meet face to face. Moreover, unconference events like this one are spreading in the library and archives community, and are proving to be a productive alternative to traditional conferences and other forms of collaboration. Part of CURATEcamp’s appeal is that one is easy to organise; interested readers need only consult the CURATEcamp ‘How it works’ Web page for more information.
Mark Jordan is Head of Library Systems at Simon Fraser University. His interests include digital preservation, repository platforms, and metadata reuse and exchange. Mark is a contributor to several open source applications and the chief developer of several more. He is the author of Putting Content Online: A Practical Guide for Libraries (Chandos, 2006).
Courtney manages Archivematica system requirements, product design, technical support, training, and community relations. She has been a researcher and co-investigator on the InterPARES 3 Project, researcher on the UBC-SLAIS Digital Records Forensics Project, and is a member of the Professional Experts Panel on the BitCurator Project. Courtney has been published in Archivaria and has delivered many presentations on the practical application of digital preservation strategies.
Nick Ruest is the Digital Assets Librarian at York University. He oversees the development of data curation, asset management and preservation strategies, along with creating and implementing digital preservation policies. He is also active in the Islandora community, and occasionally contributes code to the project. He is currently the President of the Ontario Library and Technology Association.
This article has been published under a Creative Commons Attribution 3.0 Unported (CC BY 3.0) licence. Please note that this CC BY licence applies to the textual content of this article, and that some images or other non-textual elements may be covered by special copyright arrangements. For guidance on citing this article (giving attribution as required by the CC BY licence), please see our recommendation under ‘How to cite this article’ below.