Aug 2
After Science Journal Publishing is Over…
Filed Under Big Data, Blog, data analytics, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, social media, STM, Thomson, Uncategorized, Workflow
Despite a beautifully written blog on the F1000 site (https://blog.f1000.com), the launch of ORC (Open Research Central) did not get quite the blaze of commentary that I expected. Perhaps it was the timing, as researchers move away on summer holidays. Perhaps it was a bit sparing in terms of detail – more of a land claim than a plan. Perhaps it was unfashionably big thinking – most of the great conceptualisations of Vitek Tracz have taken some years before publishers have realised what they meant, and in that same moment realised that they have to buy them. And after all, F1000 sat there for a year or two before leading funders realised it was the perfect funder publishing vehicle. So we should not expect an ORC, be it a Tolkien nasty or a Blakean benign, to be an immediate success, but it certainly lays down potential answers to one of the two key post-journal publishing questions.
As we move remorselessly into a world where no individual or team can hope either to read or to keep track of the published research in any defined field without machine learning or AI support, primary publishing becomes less important than getting into the dataflow, and thus into the workflow, of scholarship. It still helps to be published in Nature or Cell, but that could take place after visibility on figshare or F1000. Get the metadata right and ensure visibility, and reputation management can commence. So the first question about the post-journal world is: "Who keeps score, and how is worth measured?" And then we come to the next question. If the article is simply a way-stage data report; if all the other materials of scholarly communication (blogs, presentations and so on) can be tracked; if the data from an experimental sequence can be as important for reproducibility as the article; and if reports of successfully repeated experiments are, in some instances, as important as innovation – then the scheme of notification, communication and cross-referencing must be open, community-owned and universally available. So how does it get established?
As I see it, Vitek is proposing the answer to the second question. His majestic conception is to establish an open channel which completely substitutes for current commercial publishing. Using the ideas of open post-publication peer review that he piloted successfully with F1000 for Wellcome and Gates, he will try to cut off the commercial publishers at source by depriving them of article flows for second- and third-tier journals, even if branded journals still survive as republishers of the best of the best. This is a well-aimed blow, since second-tier journals with high circulations and less costly peer review are often the most profitable. Of course, China, India and Russia may not move at the same rate as Europe and the USA. And, again, the move in some disciplines to erode article publishing into a data dump, a summary finding and a citation will happen more slowly in other fields, and may never happen at all in still others. But the challenge of ORC is quite clear – here is an open vehicle with open governance that can do the job in a funder-dominated marketplace.
But I am still intrigued by the answer to the first question: who is the accountable scorer who provides the summary reputation scoring? The data leader in the current market is almost certainly Elsevier, but can they become the ultimate player in reputation while remaining the largest publisher of journals? Wiley appears to be in strategic schizophrenia, and Springer Nature need to clear an IPO hurdle (and decide on buying Digital Science – a critical decision here), so the Big Publisher market seems a long way from coming up with any form of radical initiative. As I have suggested, peer review, if it ceases to be a pre-publication requirement, may once again be the key to all of this. If indeed peer review becomes important at the initiation of a research project – project proposal selection and the evaluation of researchers (the funding award) – and post-publication, where continual re-evaluation will take place for up to three years in some disciplines, then several attributes are required. This is about a system of measurement that embraces both STM and HSS, yet is flexible enough to allow for discipline-based development. It requires a huge ability to process and evaluate metadata. It needs to be able to score the whole value chain of researcher activity, not just the publishing element. And for neutrality, and for trust by researchers, funders and governments, it cannot be a journal publisher who does this.
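To make the shape of such a system concrete, here is a minimal sketch of what scoring the whole value chain with discipline-specific weights might look like. This is purely my own illustration – the activity types, weights and field names are hypothetical, and nothing like this has been published by ORC, Clarivate or anyone else.

```python
from dataclasses import dataclass

# Hypothetical activity types spanning the research value chain,
# not just the published article.
ACTIVITY_TYPES = ["proposal_review", "funding_award", "dataset",
                  "preprint", "article", "post_pub_review", "replication"]

# Illustrative discipline-specific weights; a real system would derive
# these from each community rather than hard-coding them.
WEIGHTS = {
    "life_sciences": {"proposal_review": 0.8, "funding_award": 1.0,
                      "dataset": 1.5, "preprint": 0.7, "article": 1.0,
                      "post_pub_review": 1.2, "replication": 2.0},
    "humanities":    {"proposal_review": 1.0, "funding_award": 1.0,
                      "dataset": 0.6, "preprint": 0.4, "article": 1.5,
                      "post_pub_review": 1.0, "replication": 0.5},
}

@dataclass
class Activity:
    kind: str      # one of ACTIVITY_TYPES
    signal: float  # normalised quality signal, e.g. review scores or reuse

def reputation_score(activities: list[Activity], discipline: str) -> float:
    """Composite score over the whole value chain of researcher activity."""
    weights = WEIGHTS[discipline]
    return sum(weights[a.kind] * a.signal for a in activities)

print(reputation_score(
    [Activity("article", 0.9), Activity("dataset", 0.8),
     Activity("post_pub_review", 0.7)],
    discipline="life_sciences",
))
```

The point of the sketch is the requirement it exposes: the scorer needs trusted metadata for every activity type, in every discipline, which is exactly why a neutral, non-publisher operator is needed.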
In fact, the only company who can do it without starting again is the one who has done it already, in the transition from print to digital. Much of the skills requirement is there already at Clarivate Analytics, the former Thomson IP and Science. The old Web of Science unit, inheritors of the world of ISI and Gene Garfield, pointed clearly in this direction with the purchase of Publons, the peer review record system, earlier this year. After years of working the librarian market, however, the focus has to change. As Vitek demonstrates, funders and researchers are primary markets, though there will be a real spin-off of secondary products from the data held in a comprehensive datasource of evaluation. And new relationships will be needed to create trusted systems for all user types. The current private equity players still need to invest – in a semantic data platform which can unsilo multi-sourced data and analyse it, and in some innovative AI plays like Wizdom.AI, bought recently by Taylor and Francis. Although it is relatively late in the day, and I could argue that Thomson should have been investing in this opportunity five years ago, there is still time to recreate the old Web of Science positioning in a new, rapidly changing marketplace. When Clarivate’s PE owners break it up and sell it on, as they will within three to five years, I am sure there will be good competition for the patent businesses.
But the jewel in the crown, with a huge value appreciation (and a potential exit to funders), could be the integrated science side of the business. And in order to get there, all that Clarivate need to find is the strategic leadership to carry out this huge transformation. When we see what they do in this regard, we shall see whether they are up for the challenge.
Jul 20
On Classifying Markets and Competition
Filed Under B2B, Big Data, Blog, data analytics, Industry Analysis, internet, mobile content, Publishing, STM, Workflow
Halfway through another year, safe in this foggy bolt hole in Nova Scotia, and time to reflect on what is becoming one of the most annoying aspects of the maturing digital age – we cannot seem to give up classifications derived from the pre-networked world. All around me I hear people describing what they do and who they target in entirely antediluvian terms – B2B, B2C, financial services, STM, pharma, agriculture, energy, environment etc etc – as if these terms were useful in describing anything at all. I know, I do it myself. Speed and convenience sometimes seem to demand it. Grouping companies together as sectors or competitors seems to demand it. So, now, on the first day of annual leave, I want to issue myself – and my friends who may come across this – the following stern warning. These words, and their sector-classification ilk, may once have been descriptive. Then they were simply vague but convenient leftovers. Now they are dangerously misleading, and it is becoming strategically important to find better and more accurate descriptors for segmentation, developed and accepted in a networked age. What we are doing is as weird as all the information services and solutions companies walking around calling themselves “publishers”.
Look at the keywords at the top of this page and you will see that I am as caught in this trap as anyone. To mitigate the problem I have scattered in a few keywords like “workflow” or “search”, but I have not tackled the real job at all. Increasingly the network is becoming an expression of individual and corporate workflows. Content, as data, can be ingested into those workflows from public or private sources at any point. Data designed for one market use may find far greater utility in “sectors” not envisaged by the original developers. Integral to the use is the software which fashions the usability and activates the workflow: pure-play content is not generally a solution, but can be a problem looking for one. In recent months we have covered here solution-building software players who license in data from the largest suppliers to create custom solutions for major banks or investment houses. While Thomson Reuters and Bloomberg are competitors in the ancient world of desktop terminals, in the wider market of data solutions they are both suppliers to these software players – notional allies in trying to bid up the deal value and ensure copyright protection. Perhaps they may buy one of these agencies in the longer term, but if competition is about getting the attention of end users and supplying them direct, then a great deal of rethinking needs to take place.
When in doubt I tend to return to legal information, where I cut my teeth in the early 1980s. There, a whole generation of legal services companies has come, in the past decade, to provide a real challenge to publishers and information providers. If the word “publisher” denotes the passive availability of content which, if discovered by practitioners at the right time, can help to solve problems, then the whole concept is exhausted in these markets. The growing realisation of this forced the sale of PLC to Thomson and the development of practice law at Lexis, but this growing engagement with the daily workflow of the law office has not prevented the development of Axiom Law or its equivalents in the US or the UK. Again, there is only one way to compete, and a number of City law firms in London are growing software solutions businesses based on AI and machine learning. And again, the competitive focus has shifted, and will shift again, reshaping traditional players as they seek to reposition. The competing stresses are all along the line of networked communications within an information workflow chain. The essence is function – buying a house, doing a compliance audit, preventing money laundering and so on – and the participants could be from several traditional sectors, with fresh data mixed into the solutioning process at each stage.
So, if this is true, why do so many of the people I speak to in recent months seem to want to use the word “platform” so defensively? I hear content companies claiming that they have all their data on a platform as if that asserted a value, rather like a medieval castle. The much-abused platform word should, in my view, express the accessibility of content: its utility is as a palette upon which content as data can be remixed to create solutions, using proprietary or commercially available software. All of the participants in the workflow chain are in one sense “publishers”, and they must all share common platform characteristics in order that each can participate in the process. It seems to me likely that NoSQL-based environments will triumph here, and that the greatest exchanges of data in the network will in fact be descriptive – metadata. As AI and machine learning get smarter, knowing where things are gets a higher value, and “platforms” will need automated on- and off-ramps, auto-licensing and, in many instances, common platform characteristics that link major players.
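As an illustration of what an automated on-ramp might actually exchange, here is a minimal sketch of a self-describing metadata envelope carrying machine-readable licence terms. The schema, field names and URLs are all my own invention for the purpose of the example, not any existing standard.

```python
import json

# A hypothetical self-describing envelope: the payload travels with the
# metadata that makes automated ingestion and auto-licensing possible.
envelope = {
    "schema": "example.org/data-envelope/v1",  # hypothetical schema id
    "source": "supplier-a",
    "content_type": "market-prices",
    "licence": {
        "terms": "redistribution-with-attribution",
        "expires": "2018-12-31",
    },
    "payload_uri": "https://example.org/datasets/123",
}

def on_ramp(raw: str) -> dict:
    """Validate an incoming envelope before it enters the workflow."""
    record = json.loads(raw)
    required = {"schema", "source", "licence", "payload_uri"}
    missing = required - record.keys()
    if missing:
        raise ValueError(f"rejected: missing fields {missing}")
    return record  # accepted into the platform's data flow

print(on_ramp(json.dumps(envelope))["licence"]["terms"])
```

The design point is that the exchange is almost entirely descriptive: the platform never needs to understand the payload, only the metadata that says where it is, who supplied it and on what terms it may be reused.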
There was a time when I served as Chairman of Fish 4, a development company hosted by EPS and then turned into a vehicle for the regional press, carrying the classified advertising of 800 local and regional newspapers. Its board members were the CEOs of the six largest players in that market. When asked one day who they thought their biggest competitors were, they smiled and pointed to each other. Five years later, when Rightmove and AutoTrader and Monster had eaten them up and spat them out, I realised we had learnt some valuable lessons about misclassifying markets by looking backwards instead of forwards. We cannot afford to do the same again.