Aug 17
A Month in the STM Country
Ah, the slow-moving waters of academic publishing! Take your eye away from them for a day or a week, let alone a month, and everything is irretrievably changed. Last month it was the sale of Thomson Reuters IP and Science that attracted the headlines: this month we see the waves crashing on the shores of academic institutional and Funder-based publishing, as well as a really serious attempt to supplant Web of Science as the metrics standard of good research and critically influential science. And, as always, all unrelated events are really closely connected.
So let’s start with the wonderful world of citation indexes. Inspired by Vannevar Bush (who wasn’t in that generation?), the 30-year-old Eugene Garfield laid out his ideas on creating a science citation index and a journal impact factor in 1955. His Institute for Scientific Information was bought by Thomson (later Thomson Reuters) in 1992, and I am pleased to record that in my daily note to the then EPS clients (we were all testing the concept “internet” at the time), I wrote “It is widely thought that in a networked environment ISI will be a vital information resource”! Full marks then for prescience! As Web of Science, the ISI-branded service, became the dominant technique for distinguishing good science, and funding good science, so Thomson Reuters found they had a cash cow of impressive proportions on their hands.
But this history is only significant in light of the time scale. While there have been updates and improvements, we are using a 60-year-old algorithm despite knowing that its imperfections become more obvious year by year, mostly because the whole marketplace uses it and it was very inconvenient for anyone to stop. Although altmetrics of all sorts have long made citation indexes look odd, no move to rebase them or separate them from a journal-centric view took place. Yet that may be exactly what is happening now. The inclusion of RCR (Relative Citation Ratio) in the National Institutes of Health iCite suite fits the requirement that change is effected by a major Funder/official body and can then percolate downwards. RCR (I do hope they call it iCite – RCR means Responsible Conduct of Research to many US researchers) now needs widespread public-facing adoption and use, so its implementation across the face of Digital Science is good news. Having once thought that Digital Science in its Nature days should acquire Web of Science and recreate it, I can now see that this is happening without such an investment, and companies like figshare, Uber Research and ReadCube will be in the front line exploiting it.
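For readers who want the arithmetic rather than the politics, the contrast between the journal-level and article-level metrics is easy to sketch. The Python fragment below is a simplified illustration only: the function names and figures are invented, and the real RCR benchmarks each article’s citation rate against a field norm derived from its co-citation network, not the flat expected rate assumed here.

```python
# A simplified sketch of the two metrics discussed above; not NIH iCite code
# and not Clarivate's implementation. All names and figures are illustrative.

def journal_impact_factor(citations_to_prev_two_years: int,
                          citable_items_prev_two_years: int) -> float:
    """Garfield's two-year impact factor: citations received this year to items
    published in the previous two years, divided by the number of those items."""
    return citations_to_prev_two_years / citable_items_prev_two_years

def relative_citation_ratio(article_citations_per_year: float,
                            expected_field_citations_per_year: float) -> float:
    """RCR-style article-level ratio: an article's citation rate benchmarked
    against the rate expected for its field (iCite derives that benchmark from
    the article's co-citation network; a flat figure is assumed here)."""
    return article_citations_per_year / expected_field_citations_per_year

if __name__ == "__main__":
    # Hypothetical journal: 460 citations this year to 200 items from the last two years.
    print(journal_impact_factor(460, 200))    # 2.3
    # Hypothetical article cited 6 times a year in a field where 4 is typical.
    print(relative_citation_ratio(6.0, 4.0))  # 1.5
```

The point the sketch makes is the one the text makes: the first number describes a journal, the second describes an article, and only the second travels with the researcher.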
And then, at a recent meeting someone said that there would be 48 new university presses created this year for Open Access publishing of both articles and monographs. I cannot verify the number – more than Team GB’s initial expectation of Olympic medals! – but the emerging trend is obvious. Look only at the resplendent UCL Press, decked out in Armadillo software and producing some very impressive BOOCs (Books as Open Online Content). In September they launch the AHRC-British Library Academic Book of the Future BOOC, if that is not a contradiction. Free, research-orientated and designed to high standards.
Just up the road in London’s Knowledge Quarter is Wellcome, and it is interesting to see the first manifestation of the predictable (well, in this arrondissement anyway) move by funders into self-publishing. As author publication fees mount (one major Funder already spends over a billion US dollars on publishing) there has to be a cheaper way. And at the same time, if you could actually improve the quality of scholarly communication by bringing together all of a grant holder’s research outputs in one place, that would seem to make sense. It simplifies peer review, which fundamentally becomes a function of the funder’s project selection – saying in effect that if we thought it right to fund the work then we should publish the results. It does have some objective checks, presumably like PLOS, but the object is to publish very quickly whatever is available: research articles, evidential data, case reports, protocols, and, interestingly, null and negative results. This latter is the stuff that never gets into journals, yet, as they say at Wellcome, “Publishing null and negative results is good for both science and society. It means researchers don’t waste time on hypotheses that have already been proved wrong, and clinicians can make decisions with more evidence”. The platform Wellcome are using is effectively F1000, and so is designed for speed of process – 100 days is Wellcome’s aspiration – and for post-publication peer review, allowing full critical attention to be paid after materials are made available. And the emphasis on data very much reflects the F1000 dynamic, and the increasing demand for repeatability and reproducibility in research results.
So, what a month for demonstrating trends – towards more refined metrics of research impact, towards the emergence of universities and research funders as publishers, towards another successful development from the Vitek Tracz stable, and towards a further justification of the Digital Science positioning at Macmillan. In an age of powerful users focussed on productivity and reputation management, these developments reflect that power shift, with implications for the commercial sector and the content-centric world of books and journals.
Jun 4
In Praise of Elsevier
How do I explain not blogging in May? Too much to do and too little thinking time. I shall try harder. How do I explain the title of this blog? I want to write about people who understand change, and despite the complaints of a vigorous minority of academics and the banshee wailing of the professional OA zealots, Elsevier, from Mendeley to SSRN, can certainly be reckoned to read the directional signs. And they have done so over many years – from BioMedNet (memory test for younger readers!) onwards. And the fact that they have always pre-emptively bought in front of the market direction has earned them no praise from academics (“buying innovation and institutionalising it”) or their conventional competitors (“buying innovation before it has grown real margins at prices we could not have paid”). Since both of these are just what market leaders do in all sorts of markets, I imagine that Elsevier management are unmoved: having been market leader since they swallowed Robert Maxwell’s Pergamon empire over 20 years ago, they have huge experience at being unmoved.
And they have been unmoved in their own peculiar, schizophrenic way. It always seems to me as if two companies struggle inside the corporate cloak. I imagine one as a hugely successful and conservative journal publisher, still defending the ramparts of paid-for journals by crouching in the slit trench of peer review and high-impact branded publications. But that of course is just one aspect: the online ideological long march of Elsevier’s techno-Maoists displays quite another. Is there any former publishing company in any information sector that can point to a record of technology application and successful re-investment to match the story which starts with ScienceDirect, then goes to Scirus, to Scopus and at length to SciVal? This comprehensive re-assessment of the needs of scientists and researchers for databased content, for advanced search, for consistent abstracting and indexing across the industry’s entire production, and for evaluation and measurement tools that matched and competed with Web of Science is one of the heroic stories in the awakening of scientific publishing to the digital age and its realities.
Why is this important? Since it now becomes clearer every year that the age of journal publishing has ended, and article publishing itself is becoming deeply commoditised, Elsevier have to conjure up a new company which represents the direction of flow in scholarly communications. In this age of increasing investment in global research, and the importance of publication in the cycle of tenure and gaining research funding, researchers are being set genuine problems in handling the crush of articles and distinguishing what is important. In a data-driven society, with an emphasis on the analysis of results and the repeatability of experimentation, evidential data can be more important to other researchers than the editorial state of the finished article. And with branded journals moving ever closer to selecting offerings from what is already available on pre-print servers and project or institutional repositories, the end-product emphasis changes. And when peer review is post publication in many sectors, another element of the old defensive system falls apart.
But it has not fallen apart yet. And, as ever in the information industry, management find that they have one leg astride the old nag which, despite a threadbare appearance, still produces revenues and high margins, while the other is across a skittish mare who bucks and plunges in all directions and whose gyrations need more corn for fuel than the revenues from races won can provide. While Wiley, though a good internal innovator, looks more to education than STM for acquisitions, and Springer-Nature is hog-tied by the need to wait for its IPO, now postponed for a further year, before dreaming of competitive acquisition, Elsevier has the field to itself. Mendeley gave it invaluable data on who is reading what, and it is now beginning to exploit the advantage that real data about downloads brings. SSRN brings experience of pre-print servers and the way they work and can be turned into publishing platforms. While Web of Science and the Thomson Science stable are available, an Elsevier bid would probably not survive a competition enquiry in Europe. And anyway, they have built much of that already. And ResearchGate and Academia.edu are clearly buy-able, if one needed to…
The only feasible competitive innovation nexus lies in Macmillan Digital Science (separated from Springer-Nature by the need to exclude their losses from the IPO, though presumably in line to be re-united whenever an IPO is concluded). Here start-ups like ReadCube and figshare are beginning to move powerfully. And F1000 is also presumably available as a play in the post-publication peer review and data publishing sectors. But as Elsevier have found several times already in the past 20 years, change can come up rapidly from the blind spot in the rear-view mirror. In a marketplace now unclothed of its aspirational scholarly lineaments and more nakedly directed by reputation management on the input side, and discoverability and relevance on the output side, the real competitor is not other publishers but the market itself, its readiness to create co-operative institutions by scholars for scholars, and its willingness to allow Elsevier to co-invest and create margins.
As Elsevier ponders its latest data-mining licences in the context of scientists who want to search an entire scholarly corpus of knowledge in one sweep across all published content, it is as well to rethink the nature of networked communication and outmoded ideas like products, content ownership, IP and “barriers to entry”. In the fashionable metaphor, think of science as a mycelium, a vast, unseen connectivity with the power of such an organism (among the largest organisms on Earth) to recreate, innovate and grow from the edge. The Elsevier question may not be old-style competition but how much, in the networked service economy of scholarly communication, they will be allowed to do to facilitate the way the network runs and its services function.