Jan 9
Post-Pub and Preprint - The Science Publishing Muddle
Filed Under B2B, Big Data, Blog, data analytics, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, STM, Uncategorized, Workflow | 2 Comments
New announcements in science publishing are falling faster than snowflakes in Minnesota this week, and it would be a brave individual who claimed to be on top of a trend here. I took strength from Tracy Vence’s review, The Year in Science Publishing (www.the-scientist.com), since it did not mention a single publisher, confirming my feeling that we are all off the pace in the commercial sector. But it did mention the rise, or resurrection, of “pre-print servers” (now an odd expression, since no one has printed anything since Professor Harnad was a small boy, but a way of pointing out that PeerJ’s PrePrints and Cold Spring Harbor’s bioRxiv are becoming quick and favourite ways for life sciences researchers to get the data out there and into the bloodstream of scholarly communication). And Ms Vence clearly sees the launch of NCBI’s PubMed Commons as the event of the year, confirming the trend towards post-publication peer review. Just as I was absorbing that, I noticed that F1000, which still seems to me to be the pacemaker, had just recorded its 150,000th article recommendation (and a very interesting piece it was, about the effect of fish oil on allergic sensitization, but please do not make me digress…)
The important things about the trend to post-publication peer review are all about the data. Both F1000 and PubMed Commons demand the deposit or availability of the experimental data alongside the article, and I suspect that this will be a real factor in determining how these services grow. With reviewers looking at the data as well as the article, comparisons are already being drawn with other researchers’ findings, and the evidential data throws up connections that do not appear if the article alone is searched. F1000Prime now has 6,000 leading scientists in its Faculty (including two who received Nobel prizes in 2013) and a further 5,000 associates, but there must still be questions about the scalability of the model. And about its openness. One of the reasons why F1000 is the poster child of post-publication peer review is that everything is open (or, as they say in these parts, Open). PubMed Commons, on the other hand, has followed the lead of PeerJ’s PubPeer and demanded strict anonymity for reviewers. While this follows the lead of the traditional publishing model, it does not allow the great benefit of F1000: if you know who you respect and whose research matters to you, then you also want to know what they think is important in terms of new contributions. The PubPeer folk are quoted in The Scientist as saying in justification that “A negative reaction to criticism by somebody reviewing your paper, grant or job application can spell the end of your career.” But didn’t that happen anyway despite blind, double-blind, triple-blind and even SI (Slightly Intoxicated) peer reviewing?
And surely we now know so much about who reads what, who cites what and who quotes what that this anonymity seems out of place, part of the old lost world of journal brands and Open Access. The major commercial players, judging by their announcements as we were all still digesting turkey, see where the game is going and want to keep alongside it, though they will milk the cash cows until they are dry. Take Wiley (www.wiley.com/WileyCDA/pressrelease), for example, whose fascinating joint venture with Knode was announced yesterday. This sees the creation of a Knode-powered analytics platform provided as a service to learned societies and industrial research, allowing Wiley to deploy “20 million documents and millions of expert profiles” to provide society executives and institutional research managers with “aggregated views of research expertise and beyond”. Anyone want to be anonymous here? Probably not, since this is a way of recognizing expertise for projects, research grants and jobs!
And, of course, Elsevier can use Mendeley as a guide to what is being read and by whom. Their press release (7 January) points to the regeneration of the SciVal services, “providing dynamic real-time analytics and insights into the… (Guess What?)… Global Research Landscape”. The objective here is one dear to governments in the developed world for years: to help research managers benchmark themselves and their departments so that they know how they rank and where it will be most fruitful to specialize. So, quite predictably, we are entering an age in which time to read is coming under pressure from the volume of available research articles and evidential data, and it is vital to know, and know quickly, what is important, who rates it, and where to put the most valuable departmental resources: time and attention span. And Elsevier really do have the data and the experience to do this job. Their Scopus database of indexed abstracts, all purpose-written to the same taxonomic standard, now covers some 21,000 journals from over 5,000 publishers. No one else has this scale.
The road to scientific communication as an open and not a disguised form of reputation management will have some potholes, of course. CERN found one, well reported in Nature’s News on 7 January (www.nature.com/news) under the headline “Particle Physics papers set free”. CERN’s plan to use its SCOAP3 project to save participating libraries money, which would then be disbursed to push journals towards Open Access, met resistance, but from the APS rather than the for-profit sector. Meanwhile the Guardian published a long article (http://www.theguardian.com/science/occams-corner/2014/jan/06/radical-changes-science-publishing-randy-schekman) arguing against the views of Nobel laureate Dr Randy Schekman, the proponent of boycotts and bans aimed at leading journals and at supporters of impact factor measurement. Perhaps he had a bad reputation management experience on the way to the top? The author, Steve Caplan, comes out in favour of those traditional things (big brands and impact factors), but describes their practices in a way which would encourage an uninformed reader to support a ban! More valuably, the Library Journal (www.libraryjournal.com/2014/01) reports this month on an AAP study of the half-life of articles. Since this was done by Phil Davis it is worth some serious attention, and the question is becoming vital: how long does it take for an article to reach half of the audience who will download it in its lifetime? Predictably the early results are all over the map: health sciences are quick (6-12 months) but maths and physics, as well as the humanities, have long half-lives. So this is another log on the fire of the argument between publishers and funders over the length of Green OA embargoes. This problem would not exist, of course, in a world that moved to self-publishing and post-publication peer review!
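To make that metric concrete, here is a minimal Python sketch of the usage half-life calculation as the term is used above: the point at which cumulative downloads first pass half of an article’s lifetime total. The function and the monthly download figures are my own illustration, not data from the Davis study.

```python
# Minimal sketch of an article "usage half-life": the month in which
# cumulative downloads first reach half of the lifetime total.
# The download curves below are invented for illustration only.

def usage_half_life(monthly_downloads):
    """Return the 1-indexed month in which cumulative downloads
    first reach half of the lifetime total."""
    total = sum(monthly_downloads)
    cumulative = 0
    for month, downloads in enumerate(monthly_downloads, start=1):
        cumulative += downloads
        if cumulative >= total / 2:
            return month
    return None

# Hypothetical curves: a health-sciences article front-loads its readership,
# while a mathematics article accumulates readers slowly over years.
health_sciences = [400, 250, 150, 90, 60, 40, 30, 20, 15, 10, 8, 7]
mathematics = [40, 35, 35, 30, 30, 30, 28, 28, 27, 27, 26, 26,
               25, 25, 24, 24, 23, 23, 22, 22, 21, 21, 20, 20]

print(usage_half_life(health_sciences))  # early: month 2 in this made-up example
print(usage_half_life(mathematics))      # much later: month 11 of a 24-month series
```

The shape of the curve, not the absolute volume, is what drives the half-life, which is why the embargo argument splits so cleanly along disciplinary lines.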
POSTSCRIPT For the data trolls who pass this way: the Elsevier SciVal work mentioned here is powered by HPCC (High-Performance Computing Cluster), now an open-source Big Data analytics engine, but created for and by LexisNexis Risk to manage their massive data analytics tasks as ChoicePoint was absorbed and they set about creating the risk assessment system that now predominates in US domestic insurance markets. It is rare indeed among major information players to see technology and expertise developed in one area used in another, though of course we all think it should be easy.
Dec 30
From BRICS to MINTs
Filed Under B2B, Blog, Education, eLearning, Financial services, healthcare, Industry Analysis, internet, mobile content, news media, Publishing, Search, social media, Uncategorized | 1 Comment
This is the season of the year for predictions. You will find little of that here. I feel like a fortunate seer in that none of my predictions have actually failed. I feel like a disappointed seer in that very few ever happened within the timeline of the prediction, and indeed a few are still out there, ready to come screaming into focus on the “I told you so” arc of probability, in order to demonstrate once again that if you just forget the timing, everything you can envisage does eventually happen. And I don’t like predictions of the “whatever was beginning to happen last year will go on happening next year” variety, since I regard this as the province of newspapers with holiday space to fill. Technology prediction, in its turn, is a mug’s game, and ever since I heard Alan Kay say that “everything that will be launched in the next 15 years has already been invented”, I have resolved to steer clear.
Which only leaves us markets to talk about, and since they are ever-present, prediction becomes a matter of when they come into focus rather than anything else. When we invented BRICS (and that last capital S is important if we are recognizing South Africa, as we should be) we were really saying, five years ago, that the long age of US global economic imperium was drawing to a close. A host of new nations was about to challenge that supremacy, and while the US is not minded to give it up easily, as demonstrated last year by its role in leading the global market once more out of cyclical downturn, economists now have a clear handle on when, in the next few years, China will resume its historic role of global market leadership, which it last held in the fourteenth century (think paper, gunpowder, printing and language).
This poses vital questions for information marketplaces. The Information Revolution has been led from the US both in terms of technology and in terms of services and languages. China seems well equipped in the latter area, with players like Alibaba and Baidu, and the ability to use English very effectively, or to buy its use. However, both India and Korea show more promise as the next hub of Silicon Valley proportions. And of course the US will not go away, though it may find it easier to go protectionist and isolationist in some respects, living off its huge and wealthy internal marketplace, and no longer allowing itself to be the place where all information market prospects have to be proved. In many ways we are already seeing this, since success in the US no longer means automatic global market success. But if this is the outcome, it leaves the rest of the world with an issue: where do I go for growth if not to the USA?
Well, there is a very specific information markets answer to that. There is still huge and dynamic growth in BRICS. And beyond that, look at every country where half the population is under 25 and approaching half of those are smartphone users. These are markets where the smartphone is already the most important network connector and bridge to cloud-based computing, because there is no infrastructure built around small populations of laptops or tablets to perform the role that we have identified for embedded network connectivity in Europe, Russia, and the US. These new fast-growth markets will teach us a great deal about cloud working which we will bring back to the old world. For reasons best known to the economists, the first of these markets to show have been christened MINTs: Malaysia (or should that be Mexico? Or are Mexico and Canada too much part of a Greater US economy?), Indonesia, Nigeria, Turkey. If it were not for sanctions, Iran would head this list. And note that we do not have Korea, the best-networked country I have ever visited (10 Mbit broadband on a railway platform in Busan!), on either of these lists.
The ITU statistics tell the story (http://www.itu.int/en/ITU-D/Statistics/Documents/facts/ICTFactsFigures2013-e.pdf), although they are now a year old. But if half of the world’s population is under 25, and if only 25% globally have smartphones at the moment, then we are looking at one of the most exciting growth prospects that any industry has ever seen in global history. It may astound some that 40% of the world’s population is now online, but it seems to me vital to concentrate both on the services we supply them with now, and on the way those services draw more of the remaining 60% online as well. And as we look at that 2.7 billion online total, it is as well to remember that in a global population of 7 billion, the planet supports 6.7 billion mobile/cellular subscriptions. As we go along, each of the cultures that come into play will add something distinctive and exciting to our knowledge of the way in which information services and solutions work to change society.
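For anyone who wants to sanity-check those headline numbers, here is a rough back-of-the-envelope calculation in Python; the inputs are the rounded figures quoted above from the ITU fact sheet, not a re-analysis of the underlying data.

```python
# Back-of-the-envelope check of the rounded ITU figures quoted above.
world_population = 7.0e9       # roughly 7 billion people
online_share = 0.40            # roughly 40% of the population online
mobile_subscriptions = 6.7e9   # mobile/cellular subscriptions

online_population = world_population * online_share
subs_per_100 = mobile_subscriptions / world_population * 100

print(f"People online: about {online_population / 1e9:.1f} billion")     # ~2.8 billion
print(f"Mobile subscriptions per 100 people: about {subs_per_100:.0f}")  # ~96
```

The rounded 40% share gives roughly 2.8 billion people online, consistent with the 2.7 billion total quoted once the rounding in both figures is allowed for.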
Finally, what about the Old World? Well, as I have indicated, much of the market that we are discussing was created in the US, and will continue to flourish there. And do not write off Europe. Just imagine what it would be like, in ten years’ time, if politicians had cast aside the petty nationalisms and regionalisms that bedevil progress today, and a really integrated marketplace was emerging: a trading entity from Ireland to Ukraine that thrived on being the world’s largest free trade zone, used new memberships among the poorer countries of Eastern Europe to drive growth, and used the technology (Europe is the most online region of the world) to regenerate itself. Stranger things have happened, though not much stranger, I admit! Meanwhile, pour another libation, accept my very best wishes for every success in 2014 and venture out into those newly MINTed global marketplaces!