Halfway through another year, and safe in this foggy bolt-hole in Nova Scotia, it is time to reflect on what is becoming one of the most annoying aspects of the maturing digital age – we cannot seem to give up classifications derived from the pre-networked world. All around me I hear people describing what they do and whom they target in entirely antediluvian terms – B2B, B2C, financial services, STM, pharma, agriculture, energy, environment and so on – as if these terms were useful in describing anything at all. I know, I do it myself. Speed and convenience sometimes seem to demand it. Grouping companies together as sectors or competitors seems to demand it. So now, on the first day of annual leave, I want to issue myself – and my friends who may come across this – the following stern warning. These words and their sector-classification ilk may once have been descriptive. Then they were simply vague but convenient leftovers. Now they are dangerously misleading, and it is becoming strategically important to find better and more accurate descriptors for segmentation, developed and accepted in a networked age. What we are doing is as weird as all the information services and solutions companies walking around calling themselves “publishers”.

Look at the keywords at the top of this page and you will see that I am as caught in this trap as anyone. To mitigate the problem I have scattered in a few keywords like “workflow” or “search”, but I have not tackled the real job at all. Increasingly the network is becoming an expression of individual and corporate workflows. Content, as data, can be ingested into those workflows from public or private sources at any point. Data designed for one market use may find far greater utility in “sectors” not envisaged by the original developers. Integral to that use is the software which fashions the usability and activates the workflow: pure-play content is not generally a solution, but can be a problem looking for one. In recent months we have covered here solution-building software players who license in data from the largest suppliers to create custom solutions for major banks or investment houses. While Thomson Reuters and Bloomberg are competitors in the ancient world of desktop terminals, in the wider market of data solutions they are both suppliers to these software players – notional allies in trying to bid up the deal value and ensure copyright protection. Perhaps they may buy one of these agencies in the longer term, but if competition is about getting the attention of end users and supplying them directly, then a great deal of rethinking needs to take place.

When in doubt I tend to return to legal information, where I cut my teeth in the early 1980s. There a whole generation of legal services companies has come, in the past decade, to provide a real challenge to publishers and information providers. If the word “publisher” denotes the passive availability of content which, if discovered by practitioners at the right time, can help to solve problems, then the whole concept is exhausted in these markets. The growing realisation of this forced the sale of PLC to Thomson and the development of practical law services at Lexis, but this growing engagement with the daily workflow of the law office has not prevented the rise of Axiom Law or its equivalents in the US and the UK. Again, there is only one way to compete, and a number of City law firms in London are growing software solutions businesses based on AI and machine learning. And again, the competitive focus has shifted, and will shift again, reshaping traditional players as they seek to reposition. The competing stresses all lie along the line of networked communications within an information workflow chain. The essence is function – buying a house, doing a compliance audit, preventing money laundering and so on – and the participants could come from several traditional sectors, with fresh data mixed into the solution-building process at each stage.

So, if this is true, why do so many of the people I have spoken to in recent months seem to want to use the word “platform” so defensively? I hear content companies claiming that they have all their data on a platform as if that in itself asserted a value, rather like a medieval castle. The much-abused platform word should, in my view, express the accessibility of content and its utility: a palette upon which content as data can be remixed to create solutions, using proprietary or commercially available software. All of the participants in the workflow chain are in one sense “publishers”, and they must all share common platform characteristics in order that each can participate in the process. It seems to me likely that NoSQL-based environments will triumph here, and that the greatest exchanges of data in the network will in fact be descriptive – metadata. As AI and machine learning get smarter, knowing where things are acquires a higher value, and “platforms” will need automated on- and off-ramps, auto-licensing and, in many instances, shared platform characteristics that link major players.
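To make the metadata point a little more concrete, here is a minimal sketch, in Python, of the kind of descriptive record a NoSQL document store might hold. Everything in it – the field names, the licence vocabulary, the fee – is hypothetical and illustrative, not any real standard; the point is simply that when licence terms travel with the metadata in machine-readable form, an automated “on-ramp” can decide whether data may enter a workflow without a human negotiation.

```python
# Illustrative only: a hypothetical metadata record of the sort a
# NoSQL document store might hold. No real schema or standard is implied.
record = {
    "id": "doi:10.9999/example.12345",        # hypothetical identifier
    "title": "Example transactions dataset",
    "subjects": ["compliance", "anti-money-laundering"],
    "source": "hypothetical-data-supplier",
    "licence": {                               # machine-readable terms
        "allowed_uses": ["text-mining", "internal-analytics"],
        "fee_per_use_usd": 0.02,
        "auto_licensable": True,
    },
}

def can_auto_license(rec: dict, intended_use: str) -> bool:
    """An automated 'on-ramp' check: may this data enter a workflow
    for the stated use without a human negotiation?"""
    lic = rec["licence"]
    return lic["auto_licensable"] and intended_use in lic["allowed_uses"]

if __name__ == "__main__":
    print(can_auto_license(record, "text-mining"))     # True
    print(can_auto_license(record, "redistribution"))  # False
```

The design point is that the content itself never needs to move for this exchange to happen: only the description does, which is why metadata, not data, may become the dominant traffic between platforms.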

There was a time when I served as Chairman of Fish4, a development company hosted by EPS and then turned into a vehicle for the regional press, carrying the classified advertising of 800 local and regional newspapers. Its board members were the CEOs of the six largest players in that market. When asked one day who they thought their biggest competitors were, they smiled and pointed to each other. Five years later, when Rightmove and AutoTrader and Monster had eaten them up and spat them out, I realised we had learnt some valuable lessons about misclassifying markets by looking backwards instead of forwards. We cannot afford to do the same again.

At the splendid Publishers Forum meeting in Berlin last month, I had the pleasure of chairing a panel that included Mark van Mierle, CEO of the veteran education publisher Cornelsen. Our panel was looking at Virtual and Augmented Reality, so it seemed natural to ask why his company was making a significant investment in these technologies. After reminding me that it was an area in which they thought they could make money, and that it took them away from failed or failing product lines in print textbooks and re-orientated them towards the service economy of education, he refocused us on another truth. Every now and then, he said, we need to rebadge and rebrand, so that markets see us differently and the sort of people we want to employ are more likely to be attracted to us. With this valuable lens firmly in place, the two important acquisitions in scholarly communications that have taken place this week take on a new importance. Neither deal moves the graph of market size or share: both have huge significance for the market and the companies concerned.

It is always refreshing and slightly shocking when one’s wishes come true. When I wrote in this blog about Colwiz and Wizdom.ai in February (https://www.davidworlock.com/2017/02/) that, as I left their Oxford offices, the most frequent thought in my head was “why hasn’t a publisher invested in and acquired this yet!”, I suppose I was father to a thought that had already crossed the minds of others. But Taylor & Francis have made a really valuable acquisition here, and one that puts them at the forefront of the emerging service economy. A collaborative research platform (Colwiz – collective wizdom) backed by a prototype big data environment for using artificial intelligence and machine learning in the discovery and categorisation of results represents a five-year forward programme of service derivation and development for T&F, while Colwiz will benefit hugely from the widely differing range of HSS and STM communities within T&F as the experimental base for its work. The usual messages apply, of course: start-ups are tender plants, and grow best when managed less; keeping inventive minds happy in process-driven companies can be tough.

But at the moment this is an event to celebrate. Rebadging T&F is long overdue. In former management contexts T&F was the milch cow that went on giving, but as academic research marketplaces change, content as data becomes commoditised, researchers cannot keep pace with the global rate of research reporting, and more and more machine reading is needed to keep the map of what is known current and valuable, companies like T&F have to re-invest and reposition. For a company currently without a CEO and caught in a swirl of private equity supposition about its own future, this announcement must be hugely re-assuring to staff and researchers alike: Informa clearly have a plan for asset enhancement and are driving the company towards the future of research marketplaces.

Meanwhile, another staple of the industry is signalling its determination to rebuild and refocus. The Science side of Clarivate Analytics, based around Web of Science, was a famous Thomson Reuters cash cow. When Thomson bought ISI a quarter of a century ago, the ideas of Eugene Garfield and the use of the impact factor were already industry standards. While all sorts of evolutionary changes took place along the way (ScholarOne, Web of Science and so on), no one fundamentally wanted to rethink the model for research in a digital, networked research community, one where library budgets were under huge pressure. And although many librarians felt that Web of Science was a cornerstone acquisition, as soon as alternative metrics became available and grant-funding bodies became uneasy that the impact factor was too narrow a gauge, the pressure began to be felt to develop a response. Yet the attractions of the business model, and the thought that the business might be divested, seem to have slowed the thinking, so it is wonderful now to see Clarivate, under new ownership, new management and with a lively board of non-executive thinkers, getting stuck into change with the announcement, today, of the acquisition of Publons.

Peer review, once regarded as the last bastion of publisher control of journal publishing, has itself become a contentious area of activity. Set aside the questioning of pre-publication review, the suppression of ground-breaking work by self-interested elites, and the “fake reviews” issues. Think about the huge value of post-publication reviewing, the adherence of both Gates and Wellcome to F1000, and the continued growth of blogging and social media commentary around the scholarly workflow, from idea generation to post-publication. Publons is the leading exponent of concentrating the gamut of critical input around scholarly communication and creating a reference environment within which all of this material can be shared. Of course, Publons could not exist if we had not made huge strides – ORCID, Crossref and the like – in categorising authors, articles and contributions within the network. But all of these enablers post-date the impact factor. If Clarivate is to re-establish itself as the value register of record, then this is just the type of acquisition it must make. Its neutral position – Thomson sold its journals, via Wolters Kluwer, to Springer many years ago – is vital here, and a move of this type gives renewed faith that the job can yet be done. Certainly researchers yearn for the certainty that the impact factor once delivered.

And let’s conclude where we started. Two important industry players who once appeared to be playing below their strengths have re-asserted themselves this week. This sends a clear signal to researchers, to markets, and above all to the young staff they will need to employ. We still do not know if, as was mooted in the sale process, Clarivate Analytics will split, with the patents data business going in a different direction to the Science business. But we do know that the Clarivate Science management, and the T&F management, are in a determined mood to rebuild their positions, which makes this one of the most re-assuring weeks in STM this year.
