New announcements in science publishing are falling faster than snowflakes in Minnesota this week, and it would be a brave individual who claimed to be on top of the trend. I took strength from Tracy Vence’s review, The Year in Science Publishing (www.the-scientist.com), since it did not mention a single publisher, confirming my feeling that we are all off the pace in the commercial sector. But it did mention the rise, or resurrection, of “pre-print servers” (an odd expression now, since no one has printed anything since Professor Harnad was a small boy, but a way of pointing out that PeerJ’s PrePrints and Cold Spring Harbor’s bioRxiv are becoming quick and favoured ways for life sciences researchers to get their data out there and into the bloodstream of scholarly communication). And Ms Vence clearly sees the launch of NCBI’s PubMed Commons as the event of the year, confirming the trend towards post-publication peer review. Just as I was absorbing that, I also noticed that F1000, which still seems to me to be the pacemaker, had just recorded its 150,000th article recommendation (and a very interesting piece it was, about the effect of fish oil on allergic sensitization, but please do not make me digress…).

The important things about the trend to post-publication peer review are all about the data. Both F1000 and PubMed Commons demand the deposit or availability of the experimental data alongside the article, and I suspect that this will be a real factor in determining how these services grow. With reviewers looking at the data as well as the article, comparisons are already being drawn with other researchers’ findings, and the evidential data throws up connections that do not appear if the article alone is searched. F1000Prime now has 6,000 leading scientists in its Faculty (including two who received Nobel prizes in 2013) and a further 5,000 associates, but there must still be questions about the scalability of the model. And about its openness. One of the reasons why F1000 is the poster child of post-publication peer review is that everything is open (or, as they say in these parts, Open). PubMed Commons, on the other hand, has followed the lead of PubPeer and demanded strict anonymity for reviewers. While this follows the lead of the traditional publishing model, it does not allow the great benefit of F1000: if you know whom you respect and whose research matters to you, then you also want to know what they think is important in terms of new contributions. The PubPeer folk are quoted in The Scientist as saying in justification that “A negative reaction to criticism by somebody reviewing your paper, grant or job application can spell the end of your career.” But didn’t that happen anyway, despite blind, double-blind, triple-blind and even SI (Slightly Intoxicated) peer reviewing?

And surely we now know so much about who reads what, who cites what and who quotes what that this anonymity seems out of place, part of the old lost world of journal brands and Open Access. The major commercial players, judging by their announcements as we were all still digesting turkey, see where the game is going and want to keep alongside it, though they will farm the cash cows until they are dry. Take Wiley (www.wiley.com/WileyCDA/pressrelease), for example, whose fascinating joint venture with Knode was announced yesterday. This sees the creation of a Knode-powered analytics platform, provided as a service to learned societies and industrial research, allowing Wiley to deploy “20 million documents and millions of expert profiles” to provide society executives and institutional research managers with “aggregated views of research expertise and beyond”. Does anyone want to be anonymous here? Probably not, since this is a way of getting your expertise recognized for projects, research grants and jobs!

And, of course, Elsevier can use Mendeley as a guide to what is being read and by whom. Their press release (7 January) points to the regeneration of the SciVal services, “providing dynamic real-time analytics and insights into the… (Guess What?)… Global Research Landscape”. The objective here is one dear to governments in the developed world for years: to help research managers benchmark themselves and their departments, so that they know how they rank and where it will be most fruitful to specialize. So we seem, quite predictably, to be entering an age where time to read is coming under pressure from the volume of available research articles and evidential data, and it is therefore vital to know, and know quickly, what is important, who rates it, and where to put the most valuable departmental resources – time and attention span. And Elsevier really do have the data and the experience to do this job. Their Scopus database of abstracts, all purpose-written to the same taxonomic standard, now covers some 21,000 journals from over 5,000 publishers. No one else has this scale.

The road to scientific communication as an open, and not a disguised, form of reputation management will have some potholes, of course. CERN found one, well reported in Nature’s News on 7 January (www.nature.com/news) under the headline “Particle physics papers set free”. CERN’s plan to use its SCOAP3 project to save participating libraries money, which was then to be disbursed to force journals to go Open Access, met resistance – but from the APS, rather than the for-profit sector. Meanwhile the Guardian published a long article (http://www.theguardian.com/science/occams-corner/2014/jan/06/radical-changes-science-publishing-randy-schekman) arguing against the views of Nobel laureate Dr Randy Schekman, the proponent of boycotts and bans for leading journals and for supporters of impact factor measurement. Perhaps he had a bad reputation management experience on the way to the top? The author, Steve Caplan, comes out in favour of those traditional things (big brands and impact factors), but describes their practices in a way which would encourage an uninformed reader to support a ban! More valuably, the Library Journal (www.libraryjournal.com/2014/01) reports this month on an AAP study of the half-life of articles. Since this was done by Phil Davis it is worth some serious attention, and the question is becoming vital: how long does it take for an article to reach half of the audience who will download it in its lifetime? Predictably, the early results are all over the map: the health sciences are quick (6–12 months), but maths and physics, as well as the humanities, have long half-lives. So this is another log on the fire of the argument between publishers and funders over the length of Green OA embargoes. That problem would not exist, of course, in a world that moved to self-publishing and post-publication peer review!
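For those who like their metrics made concrete, here is a minimal sketch of what that half-life calculation amounts to: the month in which cumulative downloads first pass half of the lifetime total. The figures are invented for illustration only – the Davis study’s actual data is not reproduced here.

```python
# Sketch: computing an article's download half-life from monthly counts.
# All figures below are hypothetical; they are not taken from the AAP/Davis study.

def download_half_life(monthly_downloads):
    """Return the (1-indexed) month in which cumulative downloads
    first reach half of the lifetime total, or None if the list is empty."""
    total = sum(monthly_downloads)
    cumulative = 0
    for month, count in enumerate(monthly_downloads, start=1):
        cumulative += count
        if cumulative >= total / 2:
            return month
    return None

# A hypothetical health-sciences article: usage is front-loaded,
# so the half-life falls within the first few months.
fast = [900, 700, 500, 300, 200, 150, 100, 80, 60, 40, 30, 20]

# A hypothetical mathematics article: steady usage over ten years.
slow = [50] * 120

print(download_half_life(fast))   # 2  -> half the lifetime downloads in two months
print(download_half_life(slow))   # 60 -> half the lifetime downloads in five years
```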

POSTSCRIPT For the data trolls who pass this way: the Elsevier SciVal work mentioned here is powered by HPCC (High-Performance Computing Cluster), now an Open Source Big Data analytics engine, but created by and for LexisNexis Risk to manage its massive data analytics tasks as ChoicePoint was absorbed and it set about creating the risk assessment system that now predominates in US domestic insurance markets. It is rare indeed among major information players to see technology and expertise developed in one area used in another, though of course we all think it should be easy.

A good headline attracts attention. And I do have some lame excuse, since my headliners were both in the top 10 mentions on Facebook. As a commentator on these matters I am deficient in only two small details: I am not on Facebook and, ahem, I did have to look up Ms Cyrus. But in other respects I come by the subject matter honestly. Both parties here had their 2013 fame boosted by a “new publishing” combo of Facebook, YouTube and Twitter. Neither was reliant on “old publishing” in the form of TV, radio or even newspapers or books; indeed, old media spent the year covering what these individuals did in new media terms. And while old media compete for the attention scraps (and I am sure Mike Shatzkin is right when he says that once TV becomes a world of self-scheduled downloads it will compete more effectively with the time slots currently held by reading), our thoughts should turn naturally to the complete disintermediation of access in the network.

Which mine did when I saw Cambridge Assessment announce a conference on “The School in the Cloud” for next February (http://www.cambridgeassessment.org.uk/insights/schools-in-the-cloud/). In some ways we have already removed “teaching”: what school, teacher or publisher would not subscribe to the idea of the “learner-centric” world today? And quite right too. Yet we have scarcely begun to cope with the real social ramifications of everyone learning at their point of learning readiness. Teachers have not been repositioned as mentors and moderators, the mobile, multi-site nature of education using technology is not yet clear, we still expect everyone to reach the same levels of achievement at the same time, and we blame the teachers when results fail expectations. Arguably, we should put the resources online, create the programming that links resources into learning journeys, watch the outcomes and abandon formal assessment. But the world that moves lightning fast in some places grinds very slowly in others. This week I at last saw an advertisement for a course which cheerfully announced its BYOD (Bring Your Own Device) status. And while I was delighted the same day to see the announcement that McGraw-Hill Education would produce all their content to IMS Interoperability standards, allowing users to deploy it with true digital flexibility, I still wonder why we did not do this years ago: I recall a discussion at an ELIG meeting in Sestri Levante in the early years of this century when we all agreed on the necessity – but did nothing.

And in all these discussions we keep ignoring the powerful things that happen when someone educates themselves. Yesterday came the announcement of the death of Colin Wilson, whose books “The Outsider” and “Religion and the Rebel” lit up my teenage years and sent me to university as an existentialist. My first visit to Paris was dominated by the need to haunt the Boul’ Mich for a sight of Sartre, Camus or de Beauvoir. Wilson was a complete autodidact, the son of a shoe industry worker, who left school at 14. There is something wonderful about knowledge gathered the way he gained his, and small wonder he expounded it with such enthusiasm, given that he had quarried it himself. And how sad it is that, now that both of my younger children are at university themselves, I can confess to my entire dissatisfaction with the way they were educated at both public and private schools. “What do you recommend in terms of reading around the subject at A level?” I asked the head of Classics. “Not on my course,” he replied. “All the pupil needs to know is the mark scheme. This is about Results. Reading around the subject?” he repeated. “It takes far too much time, and any additional knowledge gained only confuses them.” Well, Mr Gradgrind is now over 150 years old. Dickens’ brilliant creation should be left where he belongs and removed from current teaching/anti-learning practice. Do we want education as workflow (which, paradoxically, is what assessment has given us) – or as Discovery?

So the disintermediation of the teacher may bring some unexpected rewards. Along with the same process in most other professions. The shattering of the legal profession in the last downturn is typical. As we turn the Cloud into everyone’s back office, so we grow in the realization that most back offices are all similar. And once you get into workflow, you are moving away from reference and research – and reading around the subject. Legal services which began as support activities, like PLC, acquired this year by Thomson Reuters, end up as a wholly new way of outsourcing areas like corporate law. With operations like Axiom Law (http://www.axiomlaw.co.uk/) growing rapidly – and globally from the start – this type of disintermediation may be quicker. And, incidentally, they will need the same skill sets as current publishing/legal services. It is no accident that Axiom have appointed, it is said, a leading Lexis executive to run one of their regional businesses.

Current publishers will react in two ways. One will be to develop the software which enables learners to learn, by creating journeys and relearning experiences and linking relevant content, their own and other people’s, to them. And the other will be to ensure that base-level primary content is available in the system at all points. In this regard the science publishers who worked with CERN on SCOAP3, the largest physics archive ever assembled, which goes live on 1 January 2014, are to be congratulated. And so, incidentally, is Pope Francis, who beat Miley Cyrus by seven places to head up the Facebook league table (http://www.independent.co.uk/news/media/online/pope-francis-miley-cyrus-and-a-royal-baby-what-facebook-talked-about-in-2013-8994023.html). Now that’s a rare victory for the archaic tongue – pity I never managed to teach myself Latin successfully!
