I have longed to write that headline for 30 years, and now Twitter and the Scholarly Web have done it for me! Go to @AcademicsSay on Twitter and you will see what I mean. Stuff, not that other stuff, you understand (http://www.timeshighereducation.co.uk/comment/opinion/the-scholarly-web-30-january-2014/2010843.article explains everything). Appropriately for Twitter, this new service organized something very topical: the Six Word Peer Review. Some items truly representative of academic states of mind (and of all our states of mind) emerged. I liked “Why didn’t I think of this”, for example, and “Your data contradict my theory Reject” has the right touch to become a classic, while THES notes the accuracy of an astronomer whose contribution was “Cite Me Cite ME Cite Me”.

Elsewhere the calm waters of academe were less disturbed, though it seems to me to have been another momentous week for STM announcements. As an indicator of change, this interview with Duke University about using articles instead of textbooks has real resonance (thanks to Adam Hodgkin of Exact Editions):
http://blogs.plos.org/blog/2014/02/03/an-interview-with-david-johnston/
“Students are asked to read open access journal articles that cover the main aspects taught in the course. In this case we have focused on using PLOS ONE articles that are now all collected into the Marine Megafauna Collection over at PLOS Collections. We have also developed an iPad app that is useful for teaching marine megafauna-based classes called Cachalot. This app, available on the iTunes store for free, incorporates the PLOS ONE articles with other content written by experts around the world and is released under an open access license. We are not using the app directly in the online class this time as it is only available on iOS, not through the android or web-based platforms – yet.”

So whatever we think about the ongoing debate in scholarly communications concerning the limited impact of OA on research, we may be looking at a much greater impact in Higher Education. How ironic it would be if the real impact of OA fell on textbook publishers and not on journal publishers. And how equally ironic if the journal research publishers, so long the butt of academic malice, were able to flex their business models and move into fresh territory just as pressure mounts on the journal as the first-instance, first-peer-review point of publication. Macmillan, through their Digital Science subsidiary, have long been the laboratory of experimentation in software and services for supporting research workflow, which I would broadly argue is the direction of progress for those who wish to escape the flood of self-publishing and post-publication peer review which is to follow (flood metaphors come easily in the UK this month). Each to his own Ark, say I, but I am very interested that two large and historically traditional players have chosen Macmillan Digital Science vessels this week.

I was impressed in the first instance by the Taylor & Francis decision to adopt figshare. Putting all of the evidential data, videos, tabular matter, graphs, filesets and datasets for each T&F article onto figshare immediately gives T&F authors a clickable link that they and their readers can use in T&F Online, but it also creates a new route to the online service and a new source of metrics. Each figshare entry has a DataCite DOI, so the evidential material can be cited in its own right. This is a practical step which puts users first.
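For anyone curious about the plumbing behind data citation, here is a minimal sketch, in Python, of what citing a figshare deposit might look like in practice: resolving its DOI through the public DataCite REST API and assembling a simple citation string. The endpoint, the response layout and the DOI itself are my assumptions for illustration; none of this comes from the T&F or figshare announcement.

```python
# A minimal sketch, not part of the T&F/figshare announcement: resolve a
# (hypothetical) figshare DOI via the public DataCite REST API and build a
# simple citation for the evidential material. Endpoint and field names are
# assumptions based on DataCite's documented JSON:API response format.
import requests

def fetch_datacite_metadata(doi: str) -> dict:
    """Fetch DOI metadata from the DataCite REST API."""
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]

def format_data_citation(doi: str) -> str:
    """Build a simple, human-readable citation for a deposited dataset."""
    meta = fetch_datacite_metadata(doi)
    creators = "; ".join(c.get("name", "") for c in meta.get("creators", []))
    title = meta["titles"][0]["title"] if meta.get("titles") else "Untitled"
    year = meta.get("publicationYear", "n.d.")
    return f"{creators} ({year}). {title}. figshare. https://doi.org/{doi}"

if __name__ == "__main__":
    # The DOI below is invented purely for illustration.
    print(format_data_citation("10.6084/m9.figshare.0000000"))
```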

This new service went live on 30 January, just as Springer were consummating another deal with the Macmillan Digital Science people (http://www.springer.com/about+springer/media/pressreleases?SGWID=0-11002-6-1453458-0):
“Whereas altmetrics were used in the past at Springer for annual journal reports and editorial board meetings, or to track a journal’s performance, now this information is being gathered and shared widely with authors, SpringerLink users and the general public as well,” commented Martijn Roelandse, Senior Editor at Springer. “Springer is always trying to find new ways that it can make SpringerLink and the research we publish more useful, and partnering with Altmetric to provide this data fits perfectly with that mission.” Altmetric said:
“Providing this information on SpringerLink to readers, researchers and the general public is a great way of showcasing the wider impact and influence of each article, which is increasingly important to scientists everywhere.”
The number of shares for any given article will now be listed alongside citations on articles’ abstract pages on SpringerLink. While the “citations” link will redirect users to springer.com, the “shares” link will send users to Altmetric (altmetric.com), where they can dive into the discussions around any given piece of research.
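For the technically minded, this is roughly the kind of lookup that sits behind such a “shares” link. A minimal sketch, assuming Altmetric’s public v1 API endpoint and its documented field names; the code is mine, not anything Springer or Altmetric have published.

```python
# A minimal sketch, assuming the public Altmetric v1 API (api.altmetric.com):
# fetch a few attention counts for a DOI of interest. Field names are
# assumptions based on Altmetric's public documentation; nothing here is
# Springer's or Altmetric's own code.
import requests

def get_altmetric_summary(doi: str) -> dict | None:
    """Return a small attention summary for a DOI, or None if Altmetric tracks nothing for it."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        return None  # no attention data recorded for this DOI
    resp.raise_for_status()
    data = resp.json()
    return {
        "score": data.get("score"),
        "tweets": data.get("cited_by_tweeters_count", 0),
        "news_stories": data.get("cited_by_msm_count", 0),
        "details_url": data.get("details_url"),  # where a "shares" link might point
    }
```

A platform would presumably cache such counts rather than call the API on every page view, but the principle is the same.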

So in the space of a few days two major players indicated that they could no longer withstand the pressure to provide data for articles and data about articles. And at the same time the traditional provider of services to researcher/authors, and of information about impact, also gave notice of changes to come. First, Thomson Reuters made a major announcement about the renewal of Web of Science. In its tug of war with Elsevier’s SciVal, doing nothing at this point clearly spells disaster, so we find the emergence of a next-generation strategy that embraces further development of the landmark agreement with Google Scholar; the provision of the Chinese Science Citation Database and of SciELO (citation data from Spanish- and Portuguese-language sources) will help, with the Korean database to come. Google means going Open Web, and that helps too. Users who have complained about delays in Open Access article coverage will be pleased to see that being addressed as well.

Yet for many watchers the most interesting Thomson Reuters announcement of the year so far came on 31 January, with the launch of the ProView eReader Platform 1.8. While it feels to me as if ProView has been around a long time, I recognize that this may be because of an in-bred scepticism that the eBook is the answer, rather than a very transitory step towards an answer. But I have never seen a giant publisher do something like this: an eReader, globally available, on Windows and Mac, iPad and Android, capable of importing ebook content from any Thomson source (law, tax, science, finance, etc.) along with the requisite productivity tools. Users can filter and search notes, highlights and bookmarks; they can move those elements into new editions, even where the text changes; and of course they can create and export PDFs of their own. And you can get this app in either app store. Seems to me like one of those changes where we all scratch our heads and say, “Wonder how we got by without doing this already!” (http://thomsonreuters.com/proview)
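How highlights could survive a change of text is an interesting little puzzle in its own right. Here is a toy sketch of one way an eReader might re-anchor an annotation in a revised edition, by fuzzy-matching the highlighted snippet against the new text. This is purely my illustration and says nothing about how ProView actually does it.

```python
# A toy sketch (not Thomson Reuters' actual ProView mechanism) of carrying a
# highlight into a new edition whose text has changed: slide a window over the
# new text and keep the offset whose contents best resemble the old snippet.
# All text and thresholds here are invented for illustration.
from difflib import SequenceMatcher

def reanchor(snippet: str, new_text: str, min_ratio: float = 0.8) -> int | None:
    """Return the best-matching character offset for the snippet in the new edition,
    or None if nothing sufficiently similar is found."""
    best_offset, best_ratio = None, 0.0
    window = len(snippet)
    for i in range(max(1, len(new_text) - window + 1)):
        ratio = SequenceMatcher(None, snippet, new_text[i:i + window]).ratio()
        if ratio > best_ratio:
            best_offset, best_ratio = i, ratio
    return best_offset if best_ratio >= min_ratio else None

old_highlight = "the court held that the notice period was unreasonable"
new_edition = ("In its revised judgment the court held that the notice period "
               "was plainly unreasonable on the facts.")
print(reanchor(old_highlight, new_edition))  # offset of the re-anchored highlight, or None
```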

A long week of many announcements and rather too many publishing press releases. Come to think of it, we might post some of them at “Shit Publishers Say”. Could be wildly popular on Twitter.

New announcements in science publishing are falling faster than snowflakes in Minnesota this week, and it would be a brave individual who claimed to be on top of a trend here. I took strength from Tracy Vence’s review, The Year in Science Publishing (www.the-scientist.com), since it did not mention a single publisher, confirming my feeling that we are all off the pace in the commercial sector. But it did mention the rise, or resurrection, of “pre-print servers” (now an odd expression, since no one has printed anything since Professor Harnad was a small boy, but a way of pointing out that PeerJ’s PrePrints and Cold Spring Harbor’s bioRxiv are becoming quick and favourite ways for life sciences researchers to get the data out there and into the bloodstream of scholarly communication). And Ms Vence clearly sees the launch of NCBI’s PubMed Commons as the event of the year, confirming the trend towards post-publication peer review. Just as I was absorbing that, I also noticed that F1000, which still seems to me to be the pacemaker, had just recorded its 150,000th article recommendation (and a very interesting piece it was, about the effect of fish oil on allergic sensitization, but please do not make me digress…).

The important things about the trend to post-publication peer review are all about the data. Both F1000 and PubMed Commons demand the deposit or availability of the experimental data alongside the article, and I suspect that this will be a real factor in determining how these services grow. With reviewers looking at the data as well as the article, comparisons are already being drawn with other researchers’ findings, and the evidential data throws up connections that do not appear if the article alone is analysed. F1000Prime now has 6000 leading scientists in its Faculty (including two who received Nobel prizes in 2013) and a further 5000 associates, but there must still be questions about the scalability of the model. And about its openness. One of the reasons why F1000 is the poster child of post-publication peer review is that everything is open (or, as they say in these parts, Open). PubMed Commons, on the other hand, has followed the lead of PubPeer and demanded strict anonymity for reviewers. While this follows the lead of the traditional publishing model, it does not allow the great benefit of F1000: if you know who you respect and whose research matters to you, then you also want to know what they think is important in terms of new contributions. The PubPeer folk are quoted in The Scientist as saying in justification that “A negative reaction to criticism by somebody reviewing your paper, grant or job application can spell the end of your career.” But didn’t that happen anyway despite blind, double blind, triple blind and even SI (Slightly Intoxicated) peer reviewing?

And surely we now know so much about who reads what, who cites what and who quotes what that this anonymity seems out of place, part of the old lost world of journal brands and Open Access. The major commercial players, judging by their announcements as we were all still digesting turkey, see where the game is going and want to keep alongside it, though they will milk the cash cows until they run dry. Take Wiley (www.wiley.com/WileyCDA/pressrelease), for example, whose fascinating joint venture with Knode was announced yesterday. This sees the creation of a Knode-powered analytics platform provided as a service to Learned Societies and industrial research, allowing Wiley to deploy “20 million documents and millions of expert profiles” to provide society executives and institutional research managers with “aggregated views of research expertise and beyond”. Anyone want to be anonymous here? Probably not, since this is a way of recognizing expertise for projects, research grants and jobs!

And, of course, Elsevier can use Mendeley as a guide to what is being read and by whom. Their press release (7 January) points to the regeneration of the SciVal services, “providing dynamic real-time analytics and insights into the… (Guess What?)… Global Research Landscape”. The objective here is one dear to governments in the developed world for years: to help research managers benchmark themselves and their departments, so that they know how they rank and where it will be most fruitful to specialize. So we seem, quite predictably, to be entering an age where time to read is coming under pressure from the volume of available research articles and evidential data, and it is vital to know, and know quickly, what is important, who rates it, and where to put the most valuable departmental resources – time and attention span. And Elsevier really do have the data and the experience to do this job. Their Scopus database of indexed abstracts, all purpose-written to the same taxonomic standard, now covers some 21,000 journals from over 5,000 publishers. No one else has this scale.

The road to scientific communication as an open, rather than a disguised, form of reputation management will have some potholes of course. CERN found one, well reported in Nature’s News on 7 January (www.nature.com/news) under the headline “Particle Physics papers set free”. CERN’s plan to use its SCOAP3 project to save participating libraries money, which was then to be disbursed to force journals to go Open Access, met resistance, though from the APS rather than the for-profit sector. Meanwhile the Guardian published a long article (http://www.theguardian.com/science/occams-corner/2014/jan/06/radical-changes-science-publishing-randy-schekman) arguing against the views of Nobel laureate Dr Randy Schekman, the proponent of boycotts and bans aimed at leading journals and at supporters of impact factor measurement. Perhaps he had a bad reputation management experience on the way to the top? The author, Steve Caplan, comes out in favour of those traditional things (big brands and impact factors), but describes their practices in a way which would encourage an uninformed reader to support a ban!

More valuably, the Library Journal (www.libraryjournal.com/2014/01) reports this month on an AAP study of the half-life of articles. Since this was done by Phil Davis it is worth some serious attention, and the question is becoming vital – how long does it take for an article to reach half of the audience who will download it in its lifetime? Predictably the early results are all over the map: health sciences are quick (6-12 months), but maths and physics, as well as the humanities, have long half-lives. So this is another log on the fire of the argument between publishers and funders over the length of Green OA embargoes. This problem would not exist, of course, in a world that moved to self-publishing and post-publication peer review!
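For the numerically inclined, the half-life question boils down to simple arithmetic on a download history: find the month by which cumulative downloads first pass half of the lifetime total. A toy sketch with invented figures, and no claim to reproduce Phil Davis’s methodology:

```python
# A toy illustration (not the AAP study's method) of the usage half-life idea:
# given monthly download counts over an article's observed lifetime, find the
# month by which half of all those downloads had occurred. All figures invented.
def usage_half_life(monthly_downloads: list[int]) -> int:
    """Return the 1-based month in which cumulative downloads first reach 50% of the total."""
    if not monthly_downloads:
        raise ValueError("empty download history")
    total = sum(monthly_downloads)
    cumulative = 0
    for month, count in enumerate(monthly_downloads, start=1):
        cumulative += count
        if cumulative * 2 >= total:
            return month

# Hypothetical profiles: a fast-decaying health-sciences article vs a slow-burning maths one.
health = [400, 250, 150, 90, 60, 40, 30, 20, 15, 10, 8, 5]
maths  = [40, 38, 36, 35, 34, 33, 32, 31, 30, 30, 29, 29]
print(usage_half_life(health))  # reaches half of its lifetime downloads early
print(usage_half_life(maths))   # takes considerably longer
```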

POSTSCRIPT For the data trolls who pass this way: the Elsevier SciVal work mentioned here is powered by HPCC (High Performance Computing Cluster), now an Open Source Big Data analytics engine, but created for and by LexisNexis Risk to manage their massive data analytics tasks as ChoicePoint was absorbed and they set about creating the risk assessment system that now predominates in US domestic insurance markets. It is rare indeed among major information players to see technology and expertise developed in one area used in another, though of course we all think it should be easy.
