Oct 19
The Futurist Panel Post
Filed Under Big Data, Blog, data analytics, eBook, eLearning, healthcare, Industry Analysis, internet, Reed Elsevier, Search, semantic web, social media, STM, Thomson, Uncategorized, Workflow
Greetings from Frankfurt, where I find myself attending, for the 49th time, the greatest book show on Earth, despite claiming for 25 years that my days here are done. Yesterday I moderated the STM Association’s Futurist Panel, where three brilliant men (Phil Jones of Digital Science, John Connolly of Nature and Richard Padley of Semantico) spoke brilliantly about the Future of Science Publishing. In order to get us all in the mood for change, I introduced them by quoting the Outsell market report for science information and scholarly communication for the year 2027. Yes, I said 2027 (not difficult if you are a real Worlock!). And here it is:
Outsell Annual Report 2027
“The clear market leaders, IBM Watson Science and Gates Science Services, announced their intention to secure the complete commoditisation of content in a new accord to be signed in 2028. Brad Biscotti, Gates Chairman, announced in his annual statement that they felt that content-based competition was no longer appropriate. “By creating and maintaining a huge central database of scholarly communication between us, we can best serve science by competing vigorously in supporting the research process with intelligent software tools. Our two companies have created a self-publishing marketplace – now it is time to move on to increase the value derived from research funding. We shall be changing our name to Gates Smart Research as we roll out our first generation of virtual laboratories.”
His opposite number at Watson Science, CEO Jed Gimlet, issued a matching statement: “This long decade of buying publishers and building self-publishing draws to an end as any research team anywhere has available to it online services and solutions for concluding and publishing research articles and evidential data within days, or at most a week, of project completion. Our tried and tested post-publication peer review systems give an accurate guide to good science, and continue to re-rate research over time. We have maintained some of our strong brands, like Nature, Science and Cell, so that republication there can add additional rating value. But our duty to science is to ensure that everything is in one searchable place and subject to cross-searching by any scholar using his own data mining protocols. In making this move we recognise that the production of research findings is now so vast, in terms of numbers of articles and available data, that creating content silos creates a risk of non-discovery of prior research.”
Outsell comments on these statements: “There is some special pleading here, of course, since the decline of library budgets in the last ten years meant that article downloads rapidly declined, while rising volumes of self-published papers create problems for researchers who have fundamentally ceased to read new research. IBM’s intelligent science module, Repeatability, used in over 80% of laboratories, needs far more data than an article typically contains, leading to calls to reassess the usefulness, format and content of articles. And when a Repeatability process succeeds or fails, it automatically creates a new citation, enriching the metadata attached to the database and requiring a mandatory notice to all previous users. IBM think this is a cost they should share with Gates.
Gates in turn would point out that almost no one reads articles now. Almost all enquiry is robotic, governed by research protocols mandated by funders and implemented at project inception and regularly during the research process. This may lead researchers to check some findings, though many of the enquiries are satisfied at a metadata level. Their major program, Gates Guru, uses this type of intelligent machine reading to provide a metrics-based rating system for scholarship and institutions. Guru, following Gates’ landmark deal with the Chinese Government in 2025, is the universally accepted standard, and there is no university or researcher who does not subscribe to it at some level.”
(DISCLAIMER – this is a work of imagination, not of Outsell, and they should not be blamed for my heresies)
Unusually for such events, we had a good 45 minutes of discussion, and many intriguing points were raised. There were fewer change-deniers than usual, though a few arguments were tinged with the “say I can go on doing it like this for a few more years – please” frame of mind that consultants to this sector are well used to encountering. It almost seemed for a moment as if we as an industry accepted that real change is afoot – and that we are several phases in already.
Until I got a beer in my hand, and a smiling, intelligent, successful publisher said quietly “that deal you mentioned between Wellcome and F1000 – you don’t think that will succeed, do you? I mean, they will never make money!” And all of a sudden the best part of a decade flashed past and I was back in that same room, at the same time, interviewing Harold Varmus, co-founder of PLoS, in front of the same crowd. He told them about the launch of PLoS ONE; they said megajournals would never succeed. I rest my case!
Aug 17
A Month in the STM Country
Filed Under Big Data, Blog, data analytics, eBook, eLearning, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, STM, Thomson, Uncategorized, Workflow
Ah, the slow moving waters of academic publishing! Take your eye away from them for a day or a week, let alone a month, and everything is irretrievably changed. Last month it was the sale of Thomson Reuters IP and Science that attracted the headlines: this month we see the waves crashing on the shores of academic institutional and Funder-based publishing, as well as a really serious attempt to supplant Web of Science as the metrics standard of good research and critically influential science. And, as always, all unrelated events are really closely connected.
So let’s start with the wonderful world of citation indexes. Inspired by Vannevar Bush (who of that generation wasn’t?), the 30-year-old Eugene Garfield laid out his ideas on creating a science citation index and a journal impact factor in 1955. His Institute for Scientific Information was bought by Thomson in 1992, and I am pleased to record that in my daily note to the then EPS clients (we were all testing the concept “internet” at the time), I wrote “It is widely thought that in a networked environment ISI will be a vital information resource”! Full marks then for prescience! As Web of Science, the ISI-branded service, became the dominant technique for distinguishing good science, and for funding good science, so Thomson Reuters found they had a cash cow of impressive proportions on their hands.
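For readers who have never looked under the bonnet, Garfield’s two-year impact factor really is just simple arithmetic, which is part of why it proved so durable. A minimal sketch, with illustrative numbers of my own invention:

```python
# The classic two-year journal impact factor: citations received in
# year Y to items the journal published in years Y-1 and Y-2, divided
# by the number of citable items it published in those two years.

def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# Illustrative figures only: 1,200 citations in year Y to a journal's
# 300 citable items from Y-1 and Y-2 gives an impact factor of 4.0.
print(impact_factor(1200, 300))
```

The simplicity is the point: a single journal-level average, easily computed and easily gamed, which is exactly the weakness the newer article-level metrics set out to address.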
But this history is only significant in light of the time scale. While there have been updates and improvements, we are using a 60-year-old algorithm despite knowing that its imperfections become more obvious year by year, mostly because the whole marketplace uses it and it was very inconvenient for anyone to stop. Although altmetrics of all sorts have long made citation indexes look odd, no move to rebase them or separate them from a journal-centric view took place. Yet that may be exactly what is happening now. The inclusion of RCR (Relative Citation Ratio) in the National Institutes of Health iCite suite fits the requirement that change is effected by a major Funder/official body and can then percolate downwards. RCR (I do hope they call it iCite – RCR means Responsible Conduct of Research to many US researchers) now needs widespread public-facing adoption and use, so its implementation across the face of Digital Science is good news. Having once thought that Digital Science in its Nature days should acquire Web of Science and recreate it, it is now becoming clear that this is happening without such an investment, and companies like figshare, Uber Research and ReadCube will be in the front line exploiting this.
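For those new to the metric, RCR works at the article level rather than the journal level. A rough sketch of the idea, with made-up numbers (the real NIH calculation derives the expected rate from each article’s co-citation network and regresses it against NIH-funded benchmarks):

```python
# Relative Citation Ratio, in outline: an article's citations per year
# divided by the citation rate expected for its field, where the field
# is inferred from the article's co-citation network and the ratio is
# benchmarked so a typical NIH-funded paper scores about 1.0.

def relative_citation_ratio(article_citations_per_year: float,
                            expected_field_rate: float) -> float:
    return article_citations_per_year / expected_field_rate

# Made-up numbers: a paper cited 10 times a year in a field where
# comparable papers average 4 citations a year has an RCR of 2.5.
print(relative_citation_ratio(10.0, 4.0))
```

The crucial design choice is that the denominator belongs to the article, not to the journal it happened to appear in, which is what frees the metric from the journal-centric view described above.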
And then, at a recent meeting, someone said that there would be 48 new university presses created this year for Open Access publishing of both articles and monographs. I cannot verify the number – more than Team GB’s initial expectation of Olympic medals! – but the emerging trend is obvious. Look only at the resplendent UCL Press, decked out in Armadillo software and producing some very impressive BOOCs (Books as Open Online Content). In September they launch the AHRC–British Library Academic Book of the Future BOOC, if that is not a contradiction. Free, research-orientated and designed to high standards.
Just up the road in London’s Knowledge Quarter is Wellcome, and it is interesting to see the first manifestation of the predictable (well, in this arrondissement anyway) move by funders into self-publishing. As author publication fees mount (one major Funder already spends over a billion US dollars on publishing) there has to be a cheaper way. And at the same time, if you could actually improve the quality of scholarly communication by bringing together all of a grant holder’s research outputs in one place, that would seem to make sense. It simplifies peer review, which fundamentally becomes a function of the funder’s project selection – saying in effect that if we thought it right to fund the work then we should publish the results. It does have some objective checks, presumably like PLoS, but the object is to publish very quickly what is available: research articles, evidential data, case reports, protocols, and, interestingly, null and negative results. This latter is the stuff that never gets into journals, yet, as they say at Wellcome, “Publishing null and negative results is good for both science and society. It means researchers don’t waste time on hypotheses that have already been proved wrong, and clinicians can make decisions with more evidence”. The platform Wellcome are using is effectively F1000, and so is designed for speed of process – 100 days is Wellcome’s aspiration – and for post-publication peer review, allowing full critical attention to be paid after materials are made available. And the emphasis on data very much reflects the F1000 dynamic, and the increasing demand for repeatability and reproducibility in research results.
So, what a month for demonstrating trends – towards more refined metrics in research impact, towards the emergence of universities and research funders as publishers, and towards another successful development from the Vitek Tracz stable, and a further justification of the Digital Science positioning at Macmillan. In an age of powerful users focussed on productivity and reputation management, these developments reflect that power shift, with implications for the commercial sector and the content-centric world of books and journals.