Aug 17
A Month in the STM Country
Filed Under Big Data, Blog, data analytics, eBook, eLearning, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, STM, Thomson, Uncategorized, Workflow
Ah, the slow-moving waters of academic publishing! Take your eye away from them for a day or a week, let alone a month, and everything is irretrievably changed. Last month it was the sale of Thomson Reuters IP and Science that attracted the headlines: this month we see the waves crashing on the shores of academic institutional and Funder-based publishing, as well as a really serious attempt to supplant Web of Science as the metrics standard for good research and critically influential science. And, as always, all these ostensibly unrelated events are really closely connected.
So let’s start with the wonderful world of citation indexes. Inspired by Vannevar Bush (who wasn’t in that generation?), the 30-year-old Eugene Garfield laid out his ideas on creating a science citation index and a journal impact factor in 1955. His Institute for Scientific Information was bought by Thomson in 1992, and I am pleased to record that in my daily note to the then EPS clients (we were all testing the concept “internet” at the time), I wrote “It is widely thought that in a networked environment ISI will be a vital information resource”! Full marks then for prescience! As Web of Science, the ISI-branded service, became the dominant technique for distinguishing good science, and funding good science, so Thomson Reuters found they had a cash cow of impressive proportions on their hands.
But this history is only significant in light of the time scale. While there have been updates and improvements, we are using a 60-year-old algorithm despite knowing that its imperfections become more obvious year by year, mostly because the whole marketplace uses it and it was very inconvenient for anyone to stop. Although altmetrics of all sorts have long made citation indexes look odd, no move to rebase them or separate them from a journal-centric view took place. Yet that may be exactly what is happening now. The inclusion of the RCR (Relative Citation Ratio) in the National Institutes of Health iCite suite fits the requirement that change is effected by a major Funder/official body and can then percolate downwards. RCR (I do hope they call it iCite – RCR means Responsible Conduct of Research to many US researchers) now needs widespread public-facing adoption and use, so its implementation across the face of Digital Science is good news. Having once thought that Digital Science in its Nature days should acquire Web of Science and recreate it, it is now becoming clear that this is happening without such an investment, and companies like figshare, ÜberResearch and ReadCube will be in the front line exploiting this.
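For readers who like to see the mechanics, the contrast is easy to sketch. Garfield’s two-year impact factor is a journal-level average: citations received this year to a journal’s items from the previous two years, divided by the citable items published in those two years. RCR works at article level, dividing an article’s citations per year by the expected citation rate of its field, which iCite derives from the article’s co-citation network and benchmarks so that NIH R01-funded papers have a median RCR of 1.0. The figures below are invented for illustration and are not iCite data:

```python
# Illustrative sketch only: toy numbers, not iCite output.

def journal_impact_factor(cites_to_prev_two_years: int,
                          citable_items_prev_two_years: int) -> float:
    """Classic Garfield two-year JIF: a journal-level average."""
    return cites_to_prev_two_years / citable_items_prev_two_years

def relative_citation_ratio(article_cites_per_year: float,
                            field_citation_rate: float) -> float:
    """RCR: an article's own citation rate over the expected rate of
    its field. The field rate is an assumed input here; iCite derives
    it from the article's co-citation network, benchmarked so NIH
    R01-funded papers have a median RCR of 1.0."""
    return article_cites_per_year / field_citation_rate

# A journal collecting 12,000 citations against 3,000 recent items...
print(journal_impact_factor(12_000, 3_000))   # 4.0

# ...says nothing about this particular article, cited 6 times a year
# in a field whose expected rate is 4 citations a year.
print(relative_citation_ratio(6.0, 4.0))      # 1.5: above field average
```

The point of the comparison is the one made above: the journal-level number says nothing about any individual article, which is precisely the imperfection the RCR sets out to address.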
And then, at a recent meeting, someone said that 48 new university presses would be created this year to publish Open Access articles and monographs. I cannot verify the number – more than Team GB’s initial expectation of Olympic medals! – but the emerging trend is obvious. Look only at the resplendent UCL Press, decked out in Armadillo software and producing some very impressive BOOCs (Books as Open Online Content). In September they launch the AHRC and British Library Academic Book of the Future BOOC, if that is not a contradiction. Free, research-orientated and designed to high standards.
Just up the road in London’s Knowledge Quarter is Wellcome, and it is interesting to see the first manifestation of the predictable (well, in this arrondissement anyway) move by funders into self-publishing. As author publication fees mount (one major Funder already spends over a billion US dollars on publishing) there has to be a cheaper way. And at the same time, if you could actually improve the quality of scholarly communication by bringing together all of a grant holder’s research outputs in one place, that would seem to make sense. It simplifies peer review, which fundamentally becomes a function of the funder’s project selection – saying in effect that if we thought it right to fund the work then we should publish the results. It does have some objective checks, presumably like PLOS, but the object is to publish what is available very quickly: research articles, evidential data, case reports, protocols, and, interestingly, null and negative results. The latter is the stuff that never gets into journals, yet, as they say at Wellcome, “Publishing null and negative results is good for both science and society. It means researchers don’t waste time on hypotheses that have already been proved wrong, and clinicians can make decisions with more evidence”. The platform Wellcome are using is effectively F1000, and so is designed for speed of process – 100 days is Wellcome’s aspiration – and for post-publication peer review, allowing full critical attention to be paid after materials are made available. And the emphasis on data very much reflects the F1000 dynamic, and the increasing demand for repeatability and reproducibility in research results.
So, what a month for demonstrating trends – towards more refined metrics in research impact, towards the emergence of universities and research funders as publishers, and towards another successful development from the Vitek Tracz stable, and a further justification of the Digital Science positioning at Macmillan. In an age of powerful users focussed on productivity and reputation management, these developments reflect that power shift, with implications for the commercial sector and the content-centric world of books and journals.
Jun 24
On Barriers to Entry
Filed Under B2B, Big Data, Blog, data analytics, Education, eLearning, Financial services, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, STM, Thomson, Uncategorized, Workflow
On the morning after the British electorate performed the largest mass suicide attempt in even our eccentric history, thoughts naturally turn to the future. Will a Europe that envies English as a lingua franca, and which would like Start-up City Europe to be Berlin or Barcelona instead of London’s Silicon Roundabout, find ways, in this messy divorce, to challenge the status quo in European information marketplaces? Like everything else I have heard this morning, it is “too early to tell”, but while thinking about competition and competition rules, it may be worth speculating on something else. Is competition what it used to be, and what has happened to the “barriers to market entry” that seemed so important to us all in pre-digital, non-networked marketplaces?
And it may be necessary to remind those under a certain age what those historical barriers to entry were. The greatest of them was Ownership. Primarily the ownership of Intellectual Property. And first and foremost the possession of Copyright in Proprietary Information, Data or Content. For 70 years from the death of the author. This Ownership position was also reflected in Brand, and with luck one could combine the two to create quasi-monopolistic positions. Then add domination of distribution networks and exclusive positions with third-party agents in important subsidiary markets. Then look at the Know-how created to run these businesses, and the way it was passed like an inheritance from generation to generation of long-serving staff, and one can easily see how intimidating and expensive it was to attempt to compete. As the tyro CEO of the European Law Centre in 1980, I looked at Sweet and Maxwell (late eighteenth century) and Butterworth (late nineteenth century) and wondered, although my online product was a wonderful innovation, how I could possibly compete.
The short answer is that you couldn’t. Thomson bought one and Reed the other, acknowledging that if you wanted market share you had to buy it, or condemn yourself to niche plays in subsidiary markets that these Titans disdained. But now turn your mind to market entry today. Established players who have put down roots are almost a challenge to disruptive start-ups rather than a threat. I grapple now with the opposite problem of valuation: how do you place a value on ex-print companies migrating to digital when it is easier for a start-up to enter their markets than it is to rent a garage in London from which to do the disrupting? In an amazingly short time, the Age of Content has collapsed around us in all but entertainment marketplaces. It is not just that content became commoditised. It is also to do with our expectations. As the costs of computing and storage continue to collapse in relative terms, volume is no longer a factor here. I read the suggestion in Ars Technica this week that it will soon be possible to download major data collections like Elsevier’s ScienceDirect and provide them to every user. Which reminded me that you can already download major collections (Sci-Hub) and provide them free from Kazakhstan.
While major publishers still own the copyrights, these ownerships no longer present barriers to entry. As I have so often written here, users want solutions, and preferably ones that slot into workflow. So where do we look now for barriers? In a world where users want a comprehensive view of all of the content/data/information which may be pertinent to a solution, we can always simulate the content we do not have, even if we cannot acquire it as Open Data or find it on the Open Web. But we can add value to it, both in terms of semantic web treatment and entity extraction for building taxonomies and ontologies. Our knowledge system is both a differentiator and a barrier if it becomes a market standard. Indeed, much of our software performs barrier roles, even if it is hard to protect; and even if our techniques achieved patent protection, that would be a short-term gain at best.
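To make that concrete, here is a minimal sketch of the kind of entity extraction meant here, using the open-source spaCy library; the model name and the sample sentence are my own assumptions for demonstration, not a reference to any publisher’s actual stack:

```python
# Minimal entity-extraction sketch using spaCy (an assumed toolchain,
# chosen only for illustration).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")

text = ("Thomson Reuters sold its IP and Science business, "
        "while Elsevier's ScienceDirect hosts millions of articles.")

doc = nlp(text)

# Count entity surface forms by type: the raw material from which a
# taxonomy or ontology team would curate preferred terms and relations.
entities = Counter((ent.text, ent.label_) for ent in doc.ents)
for (surface, label), freq in entities.most_common():
    print(f"{label:10s} {surface!r} x{freq}")
```

The extracted entities are only the starting point; the defensible asset, as argued above, is the curated knowledge system built on top of them, and then only if it becomes a market standard.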
But where else can we turn? Well, for some, an acquired skills base may be a barrier against raw start-up competition. With many players seriously concerned about price competition from well-funded second-stage offerings seeking to buy market share on price, it is also important to inspect the state of the Golden Handcuffs that hold the employed skills base in place. And the same applies to customer relationships. Since customers are a very likely source of competitive pressure, it becomes important to “value” the customer and his relationship with you: are you so expensive that it will soon be cheaper for him to acquire your technology on the market and do your process internally for himself? What was the cost of acquiring that relationship, and how quickly could you build another one? What parts of the relationship are defensible from competitive attack on either price or value?
It now becomes harder to value a company in terms of barriers to entry because many of these elements involve valuing intangibles. In a networked society, location is no longer a very important factor, and in a network where brands can be created and built in a remarkably short time, there is little sacred about trust and brand authority in the abstract. Yet markets still keep asking about defensible value positions, and none of the old answers work anymore.