Sep 2
The Rise of the Super-Platform?
Filed Under Big Data, Blog, data analytics, eBook, Industry Analysis, Reed Elsevier, Search, semantic web, STM, Uncategorized, Workflow
Here on the South Shore of Nova Scotia it is easy to dream dreams and support fantasies, so treat what follows with caution. Also, I am entirely without the benefit of talking to anyone at either Wiley or Atypon for several weeks, so my thinking about the purchase of the latter by the former is not supported by any external knowledge or expertise. Yet I have had a stream of requests wondering whether I know what it means, so walking the shoreline has given me plenty of opportunity to think about it. And whenever I do, I find myself thinking both about the business of consolidation and about the pressing needs of academic and industrial researchers in a data-driven society.
Let’s start with the first and get thoroughly lost en route to the second. Just suppose that the subtle strategists at Wiley have at length decided that buying more journals is no longer a growth strategy. Wiley can grow revenues in education through acquisition, as it has demonstrated. But in the scholarly research market? This is Wiley’s cash cow, and what they want to grow here is margins – or at least drive margins back to where they once were in the glory days of the 1990s, when 45% EBITDA expectations were the norm. So what if the Wiley choice was this: recreate InterScience as an updated rival to SpringerLink or ScienceDirect, the Springer and Elsevier platforms, or buy Atypon, move their data to that platform and consolidate there at less expense, with the ability to earn a margin on all the other independent players already on that platform? Just as Wiley bought a whole swathe of institutional publishing rights when it bought Blackwell a generation ago, here it would be buying distribution rights, including a whole tranche of outfits beginning in American and ending in Society, with a whole lot of words in between that I have difficulty spelling. Add a few more major Atypon players like Sage (coming soon) or McGraw or Emerald or Taylor and Francis, and I begin to envisage a conversation that begins “join our super Big Deal on the Atypon platform and we will rival Springer as a power bloc within the librarian’s budget”!
All of this made me recall the early days of ScienceDirect, when Elsevier offered to host the entire marketplace in the interests of scholarship, and were deeply miffed when the rest of the market thought it was more in the interests of Elsevier and politely declined. But surely the age of the Super Hosts is over? Does no one else remember Dialog and what happened to it? Surely the Grand Strategy cannot just be this? While the margins on hosting would be a useful support to net revenues derived from elsewhere in the Wiley STM business, this seems an elaborate way to put another $30 million on the bottom line. So time to think again?
Maybe they bought it for the technology? Atypon is probably best in class, but Wiley have a strong tech skill set, and already produce very serious toolsets for researchers – Wiley Spectra Labs would be a prime example. There is nothing Atypon do that Wiley could not do if they devoted the time and investment to it, so this does not seem a fruitful line of enquiry. Another walk – try again!
And on the shoreline of Kingsburg Bay it is easy to reflect on the way in which the data-driven society we live in is reflected in the lives of researchers. In the past year I have heard countless stories of research being embedded in the data created by previous research. Repeatability is becoming a key factor, and often one driven by data analytics. I have heard stories of research teams who do not read the literature where it is too voluminous to handle, but search it carefully and semantically for signs of issues that concern them. And I keep on hearing scholars complaining about how hard it is to cross-search files located in many different places and governed by different access rules, both at the level of formal data mining and at a less formal level. And does this get better as more evidential data is made available in places like figshare and F1000? Or as the use and value of preprints grows, since no one can wait for the published version?
In other words, the big publishers, if they are to stay in the game, must begin to envisage what life will be like in the Age of Data Licensing, when library Big Deal revenues begin to decline even though individual download revenues still grow. The data licensing stream could be a very valuable way to maintain margins, especially since these fees mostly go directly to the bottom line. And here it is worth considering what Elsevier have been about during the quiet dog days of “nothing happens in August”. On 24 August the Mendeley blog invited us all to try out the beta version of Elsevier’s DataSearch, a cross-file tool set to allow users to cross-search ScienceDirect and certain other files – arXiv, for example – in conjunction with it. So would it not be a good idea for Wiley to act for the rest of the market, and work on a deal which allowed all Atypon-hosted data, owners permitting, to be subject to this sort of licensed searching? And if the user terms were the same for Wiley Atypon as for Elsevier, and, who knows, even Springer Nature, that would surely be in the interests of users and not really a competition issue?
Before fantasy takes flight, let’s recall that there are still a number of smaller hosts around, like Ingenta, HighWire and Silverchair. Will they get squeezed out by the availability of sophisticated cross-file searching tools on the major repository sites? Atypon in its new guise could be an instrument of consolidation there. And will there be more preprint sites bound into this? Undoubtedly, and Elsevier’s quiet announcement in August of its patent for a machine-based peer review process will help to answer calls for greater speed in article workflow processing, just as it annoys the OA lobbyists.
Taken in the context of the past, the Wiley Atypon deal looks odd and untimely. Place it in the context of the data licensing future, and it could be a prescient stroke which retains the cash cow potency of STM at Wiley while the education story unwinds.
Aug 17
A Month in the STM Country
Filed Under Big Data, Blog, data analytics, eBook, eLearning, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, STM, Thomson, Uncategorized, Workflow
Ah, the slow moving waters of academic publishing! Take your eye away from them for a day or a week, let alone a month, and everything is irretrievably changed. Last month it was the sale of Thomson Reuters IP and Science that attracted the headlines: this month we see the waves crashing on the shores of academic institutional and Funder-based publishing, as well as a really serious attempt to supplant Web of Science as the metrics standard of good research and critically influential science. And, as always, all unrelated events are really closely connected.
So let’s start with the wonderful world of citation indexes. Inspired by Vannevar Bush (who wasn’t in that generation?), the 30-year-old Eugene Garfield laid out his ideas on creating a science citation index and a journal impact factor in 1955. His Institute for Scientific Information was bought by Thomson (later Thomson Reuters) in 1992, and I am pleased to record that in my daily note to the then EPS clients (we were all testing the concept “internet” at the time), I wrote “It is widely thought that in a networked environment ISI will be a vital information resource”! Full marks then for prescience! As Web of Science, the ISI-branded service, became the dominant technique for distinguishing good science, and funding good science, so Thomson Reuters found they had a cash cow of impressive proportions on their hands.
But this history is only significant in light of the time scale. While there have been updates and improvements, we are using a 60-year-old algorithm despite knowing that its imperfections become more obvious year by year, mostly because the whole marketplace uses it and it was very inconvenient for anyone to stop. Although altmetrics of all sorts have long made citation indexes look odd, no move to rebase them or separate them from a journal-centric view took place. Yet that may be exactly what is happening now. The inclusion of RCR (Relative Citation Ratio) in the National Institutes of Health iCite suite fits the requirement that change is effected by a major Funder/official body and can then percolate downwards. RCR (I do hope they call it iCite – RCR means Responsible Conduct of Research to many US researchers) now needs widespread public-facing adoption and use, so its implementation across the face of Digital Science is good news. I once thought that Digital Science in its Nature days should acquire Web of Science and recreate it; it is now becoming clear that this is happening without such an investment, and companies like figshare, ÜberResearch and ReadCube will be in the front line exploiting this.
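For anyone who has not looked at the arithmetic lately, it is worth seeing how modest the underlying formulas are. What follows is a rough sketch only: the classic two-year impact factor essentially as Garfield framed it, and RCR as described in the NIH-associated methodology (the published iCite benchmarking details are simplified here).

\[
\mathrm{JIF}_{Y} \;=\; \frac{C_{Y \leftarrow Y-1} + C_{Y \leftarrow Y-2}}{N_{Y-1} + N_{Y-2}}
\]

% C counts citations received in year Y to items the journal published in year Y-1 or Y-2;
% N counts the citable items the journal published in each of those years.

\[
\mathrm{RCR} \;=\; \frac{\mathrm{ACR}}{\mathrm{FCR}}
\]

% ACR is the article's own citations per year; FCR is the expected citation rate for its
% field, the field being defined by the article's co-citation network, with the ratio
% benchmarked so that a representative NIH-funded paper scores 1.0.

The first formula rewards the journal; the second rewards the individual paper, which is precisely the move away from a journal-centric view described above.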
And then, at a recent meeting, someone said that there would be 48 new university presses created this year for Open Access publishing of both articles and monographs. I cannot verify the number – more than Team GB’s initial expectation of Olympic medals! – but the emerging trend is obvious. Look only at the resplendent UCL Press, decked out in Armadillo software and producing some very impressive BOOCs (Books as Open Online Content). In September they launch the AHRC–British Library Academic Book of the Future BOOC, if that is not a contradiction. Free, research-orientated and designed to high standards.
Just up the road in London’s Knowledge Quarter is Wellcome, and it is interesting to see the first manifestation of the predictable (well, in this arrondissement anyway) move by funders into self-publishing. As author publication fees mount (one major Funder already spends over a billion dollars US on publishing), there has to be a cheaper way. And at the same time, if you could actually improve the quality of scholarly communication by bringing together all of a grant holder’s research outputs in one place, that would seem to make sense. It simplifies peer review, which fundamentally becomes a function of the funder’s project selection – saying in effect that if we thought it right to fund the work, then we should publish the results. It does have some objective checks, presumably like PLOS, but the object is to publish what is available very quickly: research articles, evidential data, case reports, protocols, and, interestingly, null and negative results. This latter is the stuff that never gets into journals, yet, as they say at Wellcome, “Publishing null and negative results is good for both science and society. It means researchers don’t waste time on hypotheses that have already been proved wrong, and clinicians can make decisions with more evidence”. The platform Wellcome are using is effectively F1000, and so is designed for speed of process – 100 days is Wellcome’s aspiration – and for post-publication peer review, allowing full critical attention to be paid after materials are made available. And the emphasis on data very much reflects the F1000 dynamic, and the increasing demand for repeatability and reproducibility in research results.
So, what a month for demonstrating trends – towards more refined metrics in research impact, towards the emergence of universities and research funders as publishers, and towards another successful development from the Vitek Tracz stable, and a further justification of the Digital Science positioning at Macmillan. In an age of powerful users focussed on productivity and reputation management, these developments reflect that power shift, with implications for the commercial sector and the content-centric world of books and journals.