I was excited to see that Facet had brought out a “practical guide” to altmetrics. As far as I can see, there is only one other textbook on the market, released in 2015, and in this fast-growing area I would say there is definitely room for another. However, as I ploughed further and further through it, I must confess to feeling a bit confused. Firstly, while the title suggests the book is a practical guide for librarians, researchers and academics, it was almost exclusively aimed at librarians (as you might expect from a book from Facet) and as much theoretical as practical. But it was more than this. I came to the book expecting an edited work containing contributions from some key players in the field, on practical topics relating to the development, meaning, and implementation of altmetrics - maybe some case studies - and certainly some in-depth discussion on what these numbers mean and how we can apply them sensibly. But it wasn’t quite that.
Now there were certainly some contributions from key players: excellent chapters from Euan Adie (founder of Altmetric.com), Ben Showers (formerly Head of Scholarly & Library Futures at Jisc), and William Gunn (Head of Academic Outreach at Mendeley). However, these were the only three chapters (out of 12) not written by members of the School of Health & Related Research (ScHARR) at the University of Sheffield. Seven were written by Andy Tattersall himself, and two others were written by colleagues, Andrew Booth and Clare Beecroft. This would not necessarily be a bad thing if the authors were experts in the field of altmetrics, but whilst they are undeniably experts in their respective fields, I’m not sure from their biographies whether those fields stretch to the practical implementation of altmetrics. Indeed, I felt some of their contributions - on citation metrics; considerations on implementing new technologies; and mobile apps - whilst interesting, were stretching the remit of the book a bit, particularly as there were other key topics that I felt were missing (see below).
I guess my biggest disappointment was the chapter on resources and tools. This should be the heart of any practical guide on altmetrics. What are these tools and how can we use them? What are their advantages and limitations? Who is using them and how? The chapter lists 41 tools in total, but most were social media services, leaving just seven altmetric tools. Now that’s OK - I certainly learned of the existence of some altmetric tools I’d not heard of before. The disappointing thing was the lack of thorough analysis. Some resources were given a full explanation and a quick guide to how they could be used; others received just a paragraph; and still others just a single line of text. Some were repeated (Mendeley and Impact Story). A further seven tools were listed under the heading “Other notable academic tools” without a single line of explanation. Snowball Metrics were listed as a resource, but there was no discussion of which altmetrics are ‘Snowball-compliant’ (and how it’s only actually currently possible to generate Snowball-compliant metrics using Elsevier products…). What this chapter sorely lacked - and indeed what the rest of the book could have really benefitted from - was some case studies. We’re all at an early stage of engaging with altmetrics, and some short stories from librarians, academics and research offices about how they are using the various tools would have been invaluable.
The other key omission for me - and something that is big news in the world of metrics at the moment - is the validity of altmetrics and how they should be used responsibly. What are they actually measuring? Can they be normalised? Should they be standardised? Who should be doing the standardising? Is it possible to compare the value of a blog mention, a tweet mention, a Wikipedia mention, and a policy mention? In the heavily metricised world of academia, these are big issues for the information professional, researcher and academic alike.
I’m aware this review is coming over as almost completely negative, and I don’t mean it to be. I definitely learned some stuff - it just wasn’t exclusively in the area of altmetrics. I’ve come to the conclusion that the book is just wrongly labelled. If it had been called “New technologies: from web 2.0 to open peer review and everything in between”, I think I would be writing a much more positive review! So my verdict is: read it if you want a 200-page romp through some of the key scholarly communication developments of the last 30 years. However, if they get round to a second edition, it would be good to see the focus narrowed, the author list broadened, and the inclusion of some helpful case studies.