Sep 11
Towards the Open Book
Filed Under Big Data, Blog, data analytics, Education, eLearning, Industry Analysis, internet, Publishing, Reed Elsevier, semantic web, social media, STM, Workflow
The great days of Open Access are past, but the age of the Open Book is just about to begin. Witness the trajectory of the Elsevier acquisition strategy, ever the most reliable guide to trends in academic and research markets: no major journal publishing investments there. Beyond the slackening of demand for more paid-for journals, the thrust went first into social media and data via Mendeley, then towards pre-print servers like SSRN, followed by repositories like bepress. What does this tell us? That the historical demand for enhancing reputation by coupling your renown to that of the journal that accepted you is giving way to the necessity, in a networked society, of getting quickly into the workflow of your discipline, regardless of where you may subsequently be published. Here Open Access proved a game changer that influenced, in most disciplines, less than 25% of the game. The mere fact, however, that a game played according to many of the same rules since the 1660s could be changed at all means that the next generations are fully entitled to make their own rules.
But while all this has been going on, what of the related worlds not subject to the imperatives of Green, Gold and mandates? By this I mean book publishing in all disciplines, but particularly in HSS, with its dependence on getting the full research answer out in monograph form, into a market where research institutions have found it ever harder to buy books, a problem that has been growing for at least 50 years. What it is about HSS and books that makes their problems less attractive to public campaigns and political pressure we should leave until another time. But it really is fascinating, for those interested in the history of scholarship, to reflect that we have moved from years of publishers saying that monograph publishing was untenable to what is now concentrated attention on the facts of digital publishing as it affects monograph production: it is quicker, easier, cheaper and more effective than ever before, and the real remaining problems lie in circulation, discoverability and finding an appropriate business model.
The first person to demonstrate a practical business development that could march alongside the aspirations of academics and the constraints of librarians was Dr Frances Pinter with her Knowledge Unlatched (KnowledgeUnlatched.org) solution, which allows librarians to “subscribe” collectively to opening the books on a proposal list to universal access. Now run from Berlin by Dr Sven Fund, it demonstrated her virtuosity as a publisher coupled with real insight into the lives of researchers and the stresses within the scholarly communication system. In the past three years, in this infant marketplace neglected by publishers, it has grown appreciably, in a context where small commercial players (Cambridge Scholars) have begun to work on scholarly self-publishing, and a range of new university presses, in both the US and the UK, have created “pop-up” publishing for those who can afford it. And here lies a real threat to publishers with important book programmes in STM or HSS: the tools and processes get cheaper, the staff commitment becomes less significant (ex-librarians can do it brilliantly!), and the speed requirement, for reputational purposes, gets greater, so the need to get material into the scholarly workflow becomes more urgent.
All of which ignores a real problem on the road to open-access self-publishing, and it is fascinating to see how the team at Knowledge Unlatched have got their teeth into it. The problem with self-publishing is discoverability. Adding metadata to ensure discovery, and then placing that information in the critical junction boxes of a wired society so that what is available is what is found, becomes fundamental when simply “outing” content onto the web results in loss of discoverability. Publishers have always known this, so the logic for Knowledge Unlatched in opening up its services as Open Services is inescapable, and will predictably lead to KU becoming a hub for this rapidly growing sector as well as a service to sector players. The data accumulated here becomes one of the important assets across the board in HSS and STM.
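What that discovery metadata looks like in practice can be sketched simply. Here is a hypothetical open-monograph record expressed as schema.org JSON-LD, the kind of structured description that lets search engines and aggregators find a self-published book; all field values are illustrative, not drawn from KU’s actual service:

```python
import json

# A hypothetical open-access monograph record as schema.org JSON-LD.
# Every value below is invented for illustration.
book = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "An Example Open Monograph",
    "author": {"@type": "Person", "name": "A. Scholar"},
    "isbn": "978-0-000-00000-0",
    "inLanguage": "en",
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "isAccessibleForFree": True,
    "publisher": {"@type": "Organization", "name": "Example University Press"},
}

# Serialise for embedding in a landing page or depositing with an aggregator.
print(json.dumps(book, indent=2))
```

Simply posting a PDF to a website carries none of this machine-readable description; embedding a record like the above is what turns “available” into “findable”.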
Predictably, now this market has kick-started, developments will move rapidly. The ability to run semantic enquiry across these texts, to cross-search groups of them, and to discover and explore them at sub-chapter and paragraph level will help. Index compilation and the generation of custom, machine-generated indices will follow, with automatic updating and cross-referencing, as well as linkage to related evidential data, clearly part of the trajectory. But this only happens if the sector has a hub and some standards, so the move by KU to create Open Services is really the first movement towards a hub that we have seen, and, for that reason, it is hugely welcome.
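The paragraph-level cross-search imagined above can be sketched as a toy inverted index over a small corpus; this is an illustration of the principle only, not any vendor’s implementation, and the book titles and texts are invented:

```python
from collections import defaultdict

def build_index(books):
    """Map each lowercase word to (book, paragraph_number) locations,
    so a corpus can be cross-searched below chapter level."""
    index = defaultdict(set)
    for title, text in books.items():
        # Treat blank lines as paragraph breaks.
        for p_num, para in enumerate(text.split("\n\n")):
            for word in para.lower().split():
                index[word.strip(".,;:")].add((title, p_num))
    return index

# Two invented monographs, two paragraphs each.
books = {
    "Monograph A": "Open access changes scholarship.\n\nMetadata drives discovery.",
    "Monograph B": "Discovery depends on metadata.\n\nIndices can be machine generated.",
}

index = build_index(books)
# Which paragraphs, in which books, mention "metadata"?
print(sorted(index["metadata"]))  # → [('Monograph A', 1), ('Monograph B', 0)]
```

The same location data that answers a cross-search query is exactly what an automatically updated, machine-generated index would be compiled from, which is why a shared hub and shared standards matter.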
Aug 2
After Science Journal Publishing is Over…
Filed Under Big Data, Blog, data analytics, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, social media, STM, Thomson, Uncategorized, Workflow
Despite a beautifully written blog on the F1000 site, the launch of ORC (https://blog.f1000.com) did not get quite the blaze of commentary that I expected. Perhaps it was the timing, as researchers move away on summer holidays. Perhaps it was a bit sparing in terms of detail: more of a land claim than a plan. Perhaps it was unfashionably big thinking; most of the great conceptualisations of Vitek Tracz have taken some years before publishers have realised what they meant, and in that same moment realised that they have to buy them. And after all, F1000 sat there for a year or two before leading funders realised it was the perfect funder publishing vehicle. So we should not expect ORC (Open Research Central), be it a Tolkien nasty or a Blakean benign, to be an immediate success, but it certainly lays down potential answers to one of the two key post-journal-publishing questions.
As we move remorselessly into a world where no individual or team can hope either to read or to keep track of the published research in any defined field without machine learning or AI support, primary publishing becomes less important than getting into the dataflow, and thus into the workflow, of scholarship. It still helps to be published in Nature or Cell, but that could take place after visibility on figshare or F1000. Get the metadata right and ensure visibility, and reputation management can commence. So the first question about the post-journal world is “Who keeps score, and how is worth measured?” And then we come to the next question. If the article is simply a waystage data report; if all the other materials of scholarly communication (blogs, presentations etc.) can be tracked; if the data from an experimental sequence can be as important for reproducibility as the article; and if reports of successfully repeated experiments are, in some instances, as important as innovation, then the scheme of notification, communication and cross-referencing must be open, community-owned and universally available. So how does it get established?
As I see it, Vitek is proposing the answer to the second question. His majestic conception is to establish an open channel which completely substitutes for current commercial publishing. Using the ideas of open post-publication peer review that he piloted successfully with F1000 for Wellcome and Gates, he will try to cut off the commercial publishers at source by depriving them of article flows for second- and third-tier journals, even if branded journals survive as republishers of the best of the best. This is a well-aimed blow, since second-tier journals with high circulations and less costly peer review are often the most profitable. Of course, China, India and Russia may not move at the same rate as Europe and the USA. And, again, the move in some disciplines to erode article publishing into a data dump, a summary finding and a citation will happen more slowly in other fields, and may never happen at all in still others. But the challenge of ORC is quite clear: here is an open vehicle with open governance that can do the job in a funder-dominated marketplace.
But I am still intrigued by the answer to the first question: who is the accountable scorer who provides the summary reputation scoring? The data leader in the current market is almost certainly Elsevier, but can they become the ultimate player in reputation while remaining the largest publisher of journals? Wiley appears to be in strategic schizophrenia, and Springer Nature need to clear an IPO hurdle (and decide on buying Digital Science, a critical decision here), so the Big Publisher market seems a long way from coming up with any form of radical initiative. As I have suggested, peer review, if it ceases to be a pre-publication requirement, may once again be the key to all of this. If peer review becomes important at the initiation of a research project, in project proposal selection and the evaluation of researchers (the funding award), and post-publication, where continual re-evaluation will take place for up to three years in some disciplines, then several attributes are required. This is about a system of measurement that embraces both STM and HSS, yet is flexible enough to allow for discipline-based development. It requires a huge ability to process and evaluate metadata. It needs to be able to score the whole value chain of researcher activity, not just the publishing element. And for neutrality and trust by researchers, funders and governments, it cannot be a journal publisher who does this.
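Scoring the whole value chain rather than just publications can be sketched as a weighted sum over activity types. The weights and categories below are entirely hypothetical, a toy illustration of the shape of such a metric rather than any real scoring system:

```python
# Hypothetical weights over the researcher value chain -- invented for
# illustration; no real reputation service uses these numbers.
WEIGHTS = {
    "proposal_reviews": 2.0,
    "datasets_shared": 1.5,
    "preprints": 1.0,
    "post_publication_reviews": 2.5,
    "articles": 1.0,
}

def reputation_score(activity):
    """Weighted sum over counted activities; unknown activity types score zero."""
    return sum(WEIGHTS.get(kind, 0.0) * count for kind, count in activity.items())

# An invented researcher profile spanning more than the publishing element.
researcher = {"articles": 4, "datasets_shared": 2, "post_publication_reviews": 3}
print(reputation_score(researcher))  # → 14.5
```

Even this toy makes the governance point visible: whoever sets the weights sets the values of the system, which is exactly why a journal publisher cannot be the neutral scorer.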
In fact the only company which could do it without starting again is the one that has done it already in the transition from print to digital. Much of the skills requirement is there already at Clarivate Analytics, the former Thomson IP and Science. The old Web of Science unit, inheritors of the world of ISI and Eugene Garfield, pointed clearly in this direction with the purchase of Publons, the peer review record system, earlier this year. After years of working the librarian market, however, the focus has to change. As Vitek demonstrates, funders and researchers are the primary markets, though there will be a real spin-off of secondary products from the data held in a comprehensive datasource of evaluation. And new relationships will be needed to create trusted systems for all user types. The current private equity players still need to invest: in a semantic data platform which can unsilo multi-sourced data and analyse it, and in some innovative AI plays like Wizdom.AI, bought recently by Taylor and Francis. Although it is relatively late in the day, and I could argue that Thomson should have been investing in this opportunity five years ago, there is still time to recreate the old Web of Science positioning in a new, rapidly changing marketplace. When Clarivate’s PE owners break it up and sell it on, as they will within 3-5 years, I am sure there will be good competition for the patent businesses.
But the jewel in the crown, with a huge value appreciation (and a potential exit to funders), could be the integrated science side of the business. And in order to get there, all that Clarivate need to find is the strategic leadership to carry out this huge transformation. When we see what they do in this regard, we shall see whether they are up for the challenge.