Perhaps one of the current benchmarks for gauging when a Web technology has migrated from the cluttered desks of the technorati to the dining tables of the chatterati is whether it becomes a topic for BBC Radio 4's The Moral Maze. More accustomed to discussing matters such as child-rearing or a controversial pronouncement of the Archbishop of Canterbury, the panel members, who over the years have ranged from the liberal to the harrumphing illiberal (and in one case, both at the same time), recently did battle over Twitter. They were discussing its recent mass use to cause considerable discomfort to two professions that, some might still argue, benefit from a degree of unaccountability not enjoyed by lesser mortals. It is not my place here to rehearse the excellent arguments put forward in some cases by both the programme's witnesses (brave individuals clearly unschooled in the story of Daniel in the lions' den) and their interviewing panel. However, it was heartening to hear a debate about the benefits of the apparently democratising technology that Twitter arguably is, as opposed to the disservice such incidents might very well do to traditional political engagement and debate (with some mention of the 'wisdom of the mob' or similar). What I concluded will be of no interest, but the programme served to highlight the essential nature of the Web: it is capable of producing developments that benefit and disadvantage not just users, but people in general, at one and the same time. In the case of Twitter, I am sure there were a few bus drivers in Bath (where UKOLN produces Ariadne) who were glad of it when snowstorms beleaguered the city in February 2009. Messages were sent out asking householders to make drivers a cup of tea if they saw their bus stuck in the snow outside their home.
'bathcsc' was put to immediate good use, and it is the immediacy of Twitter that makes it so powerful; its staying power may be another matter: at the time of writing, 'bathcsc' is no longer with us.
Indeed it is the immediacy of Twitter and of real-time search engines that equally appeals to our expert on search engines, Phil Bradley. I am very pleased to say that The Moral Maze does not have a monopoly on philosophical musings, because in Search Engines: Real-time Search Phil raises the question of what exactly real-time search is - 'is the emphasis on real-time search, or is it on real-time search?', he enquires. Is one '...looking through material that literally is published in real time. In other words, material where there's practically no delay between composition and publishing'? Or is it more a matter of 'finding the right answer to your question based on what's available right now, about the subject you care about right now...'? I will leave you to discover where Phil stands on the matter, and promise equally that he provides in his inimitable fashion a cornucopia of sites to visit in pursuit of real-time search facilities. However, I welcomed his preface to that list, offering a few salient remarks on the nature of real-time searching that are relevant to my train of thought above. Phil advises us to take account of 'the usual suspects such as spam and authority.' It is the latter suspect, the concept of authority, which represents a major issue for information providers and their users, and not just in the area of real-time search engines, as will become apparent. Phil writes, 'If the idea of authority on the Web is taxing, at least with traditional Web publishing it's possible to check a Web site, read back through a series of blog posts or see who links to a specific page. With real-time search we're depending on the fact that someone who says that they're at the scene of a specific event actually is; it's all too easy for a malicious individual to 'report' something that's happening when it really isn't.
As we have little to go on other than brief details that person has supplied in their biography (if indeed they have), it becomes much more difficult to sort the wheat from the chaff.' I think I mention the problem of sorting wheat from chaff elsewhere, but in this instance it is the provenance of the material that poses searchers considerable difficulties. A misleading tweet, for all that it may be considerably shorter than a misleading or ill-informed article, may do just as much harm.
I was extremely pleased to receive the views of Michael Kennedy as recounted in his Cautionary Tales: Archives 2.0 and the Diplomatic Historian, which describes his own brush with the two-edged nature of 2.0-ness. Michael describes his reactions thus: 'Preparing for the CRESC Conference, I was introduced to the new world of Web 2.0 and Archives 2.0. The central tenets of both were welcome, but at times worrying, territory. With the definition of Archives 2.0 as a changed mindset towards archives and as the application of Web 2.0 technologies to archives, I could see why, as a researcher, Archives 2.0 would be of benefit to me. On the other hand, as an editor and publisher of diplomatic documents, I felt the brakes of caution apply.' And little wonder, since Michael stands diametrically opposite the tweeter shooting from the hip in an attempt to keep up with developments he is picking off with real-time searches. 'The openness of Archives 2.0, with its encouragement of direct user intervention with sources, clashes with the requirement placed upon diplomatic document editors to provide a neutral environment for the documents they are publishing with care and expertise.' In effect we confront yet again the issue that Phil Bradley raised in his contribution, albeit from a slightly different angle: authority. Or, to put it another way, trust. In this instance we see Michael quite properly concerned for the authority of the documents he is placing online and the need for neutrality in the way in which they are presented. Given the nature of the documents in question, it is entirely understandable that he has reservations about 'the ability Web 2.0 technology gives to archive end-users to manipulate and augment published data to their own ends, perhaps even to augment the source itself, on the hosting Web site.'
What impresses, however, is that even given the perceived threat to the very essence of his service, the author remains open-minded and can recognise the benefits the Web 2.0 technologies may be able to provide. 'What if DIFP allowed users limited freedom to add references to online documents to give details of, say, corresponding files or documents in other archives? For example, if DIFP published a letter from Eamon de Valera to British Prime Minister Neville Chamberlain, a user of the Chamberlain papers in the University of Birmingham could add in a reference to the corresponding file in the Chamberlain papers.' He also considers other examples, such as using geo-locational technologies to provide supplementary information. Closer investigation may prove that this might not work for DIFP, but the important point, as is the case with Paul Bevan below, is to keep an open mind. Michael's writing demonstrates a readiness to consider new approaches and technologies combined with a natural and entirely professional caution in defence of the very essence of what his work represents to society.
Ultimately, when considering the usefulness of new Web technologies, while it is increasingly apparent, given the wide range of vehicles for content now available, that it ought to be a question of horses for courses, a considered approach would still have to place the nature of the content in the driving seat of new developments. Paola Marchionni also argues in her article Why Are Users So Useful?: User Engagement and the Experience of the JISC Digitisation Programme that, when considering content, keeping the nature of its users in mind is equally important. In particular she points to the significant role of user engagement in the creation of digitised scholarly resources. She writes, 'The prospect of free and democratic access to high-quality digitised content over the World Wide Web has often resulted in institutions undertaking digitisation projects with the emphasis more on making the digitised content accessible online rather than carefully researching the needs of their target audiences. Nor have they always considered providing them with content and functionalities that respond to their demands. 'If you build it, they will come' has been perhaps too common a mantra, underpinned by the belief that it will be enough to publish rare or unique content on the Web in order for users to flock in and for a Web site to thrive.' Whatever else, the Web, like so many other things in the 2000s, has ceased to be one single entity, or to be considered as such. The adoption of new technologies must be guided by considerations of content and audience just as much as by the choice of envelope into which one places that content.
With all the hype and confusion that surrounds Web 2.0, it is refreshing to come across a submission that speaks in terms of an approach to the subject that is both measured and considered. It is clear from his article Share. Collaborate. Innovate. Building an Organisational Approach to Web 2.0 that Paul Bevan and his colleagues have done a great deal of homework while shaping the new Web Strategy of the National Library of Wales. What soon becomes apparent from Paul's description is the importance of keeping an open mind when considering the adoption of Web 2.0 technologies. He discovered that reactions ranged from those of organisations which were soon disenchanted by the indifference shown to their offering, to others which had been surprised at the degree of take-up of new Web 2.0-related services. To listen to one persuasion to the exclusion of the other might very well prove unwise, but even more importantly, Paul points out that any success is rooted in a realistic and practical notion of what is actually achievable - as opposed to the really marvellous and unworkable. Two further points he makes are of equal importance in the approach he details: a refusal to allow the 'hot topic' of Web 2.0 to predominate to the exclusion of other important aspects of a Web strategy, by firmly placing it alongside other Web technologies, exciting or otherwise; and the clear recognition that the 'advantage of an organisational approach is that rather than simply exercising the potential and passion of individuals, the potential for long-term and widespread engagement can be harnessed through a cohesive network of individuals with a shared strategic aim'.
The emergence of data journals would seem to be a welcome development. Many would recognise the potential they provide for a framework to permit the citation, and indeed peer review, of datasets with the same force as is true of scholarly papers in general. Such data publication is surely a positive trend, in which scientists will begin to see the value of spending more time on the proper curation of their data, and the potential for datasets to earn them as much credit for their work, perhaps one day, as do their published papers. I am very grateful to Sarah Callaghan, Sam Pepler, Fiona Hewer, Paul Hardaker and Alan Gadian for returning to us with a follow-up article to their contribution in Issue 60, which covered the Overlay Journal Infrastructure for Meteorological Sciences (OJIMS) Project and offered an introduction to the concept of overlay journals and their potential impact on the meteorological sciences. In How to Publish Data Using Overlay Journals: The OJIMS Project they provide us with more technical detail about the OJIMS Project, giving details of the software used to deploy a demonstration data journal and operational document repository, and of the form of the submission processes for each.
Few are likely to dispute that the landscape in scholarly publications has been radically altered by the advent of the World Wide Web, nor that the disruptive technologies it brought with it have completely changed the way research is published and accessed. The emergence of the Open Access movement in this regard remains a constantly shifting topic, and I am most grateful to Arjan Hogenaar for his contribution on yet another emerging element in the form of the Enhanced Publication (EP), which Arjan feels will significantly improve society's understanding of science. In his article Enhancing Scientific Communication through Aggregated Publications he draws our attention to the way researchers can implement EPs in Aggregated Publications Environments and, equally significantly, points to the importance of the move away from the traditional scholarly publication process towards the development of a scholarly communication which combines the recognised benefits of the traditional process with the new properties derived from scientific collaboratories.
It will come as no surprise to anyone that as soon as a development on the Web begins to deliver dividends to research, it is not very long before its very usefulness attracts difficulties in its wake. One such area arises in the matter of search and discovery of digital resources, even when hosted by institutional repositories (IRs). In their article UK Institutional Repository Search: Innovation and Discovery, Vic Lyte, Sophia Jones, Sophia Ananiadou and Linda Kerr point out that a simple search box which returns a long list of results derived from the holdings of an individual IR is quite likely to swamp the searcher - and if the search system is offering federated search, then the list will be considerably more daunting. As it stands, such a blizzard of results is not only likely to detain researchers while they sort the wheat from the chaff; worse, there is a strong likelihood that in such circumstances the once clear objectives of the researcher will start to lose their focus during the literature search stage of their work. As the authors point out, 'How the aggregated results are re-organised presents major challenges to a federated higher-level simple search facility'. The aim of the UK Institutional Repository Search Project, funded by the JISC and led by MIMAS in partnership with SHERPA, UKOLN and NaCTeM, is to address the underlying difficulties that confront searchers across IRs with a solution in the form of a free, targeted search tool that makes searching faster and more relevant to them.
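The re-organisation problem the authors describe can be illustrated with a minimal sketch. The repository names, records and relevance scores below are entirely hypothetical, and the dedupe-then-rank strategy is just one simple possibility, not the project's actual approach:

```python
# Illustrative sketch of federated repository search aggregation.
# Repository names, records and scores are invented for the example.

def federated_search(results_per_repo):
    """Naively concatenate per-repository result lists.

    This is the 'blizzard of results' problem: every repository's hits
    arrive in one undifferentiated pile."""
    merged = []
    for repo, records in results_per_repo.items():
        for title, score in records:
            merged.append({"repo": repo, "title": title, "score": score})
    return merged

def reorganise(merged):
    """One simple re-organisation: collapse duplicate titles across
    repositories (keeping the best-scoring copy), then rank the
    survivors by relevance rather than by source repository."""
    best = {}
    for rec in merged:
        key = rec["title"].lower()
        if key not in best or rec["score"] > best[key]["score"]:
            best[key] = rec
    return sorted(best.values(), key=lambda r: r["score"], reverse=True)

if __name__ == "__main__":
    hits = {
        "repo-a": [("Snowfall statistics 1950-2000", 0.91),
                   ("Urban transport in winter", 0.40)],
        "repo-b": [("Snowfall Statistics 1950-2000", 0.85),
                   ("Meteorological data curation", 0.77)],
    }
    for rec in reorganise(federated_search(hits)):
        print(f'{rec["score"]:.2f}  {rec["title"]}  [{rec["repo"]}]')
```

Even this toy version shows why the re-organisation step, rather than the fetching step, carries the design weight: deciding what counts as a duplicate and how to rank across heterogeneous sources is where the real difficulty lies.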
I am indebted to Peri Stracchino and Yankui Feng, who describe a year's progress in building the digital library infrastructure outlined by Julie Allinson and Elizabeth Harbord in their article SHERPA to YODL-ING: Digital Mountaineering at York in our last issue. Their article Learning to YODL: Building York's Digital Library will meet the needs of readers who were hoping to drill down a little further into the development of digital repositories for the University of York. In this issue the authors have provided a more detailed technical description of their work on York's repository infrastructure. Readers of Issue 60 will already know that the project had opted to adopt the now widely used Fedora Commons as the underlying repository. More controversial perhaps was their decision to produce their user interface by means of Muradora. Peri and Yankui reflect upon how they managed the implementation and development of these two elements, but they also focus in particular on their development of fine-grained access control integrated with York's LDAP, and on the use of XForms technology to develop a workflow structure.
I hope you will enjoy Issue 61.