We are living in an uneasy transitional period between the “online” world to which we have become relatively accustomed, and the agentic, bot-based, AI-driven world of the future.

Over the past 50 years I have watched as the information communication industries have either narrowed and specialised into unrelated segments or broadly generalised into a pan-digital shared experience. My feeling is that neither is necessary nor desirable. At the moment, scholarly communications, about which I feel a certain passion, is going through one of its isolationist phases. Perhaps this is a mirror of the hateful political and nationalistic isolationism of our times, which seems equally unproductive to me. However, just as I feel that the world of journal publishing is approaching a period of major change, so I also feel that its huge problems will not be addressed by introspection. Nor is it likely that all of the solutions will be internally sourced. Scholarly communications needs to look at the whole world of information handling and communication for relevant and helpful answers.

These thoughts come to mind while reading an excellent report by Phill Jones, commissioned by International STM, on the problem of verifying images used in science journal articles. It is a real problem, and both author and funder are to be congratulated on tackling it. I am now partially blind, and my reader software, although increasingly sophisticated, is not always good at picking up footnotes and references, so I might have missed something important. If I did, my apologies in advance.

But what I missed in this excellent document was any reference to the work done by Adobe and Microsoft, together with the New York Times and other newspapers, in developing the verification system C2PA. Now adopted by Associated Press and Reuters, and installed in Sony and Leica cameras, this system gives an image a provenance record from the beginning. And then again I searched for any mention of ISCC, the International Standard Content Code, which is now a draft ISO standard. This foundation, and its developer Sebastian Posth, have a huge amount to offer in the world of academic research. At the same time that this report was published, Digital Science launched its Dimensions Author Check system. Morressier have made real strides in integrity checking; the work of Clear Skies and of Research Signals is really exciting and progressive. Just because we are concentrated on the problems of researchers, their institutions and their funders does not mean that we cannot learn from people involved in credit rating, banking services, healthcare or education.

These thoughts may explain my own long interest in the overall communication between the segments of what we used to call, very many years ago, “publishing.” Last week I had the pleasure of lunch with Clive Bradley, now aged 90 and formerly the chief executive of the UK Publishers Association. At his behest, I worked on the founding of CICI, the Confederation of Information Communication Industries in the UK, an organisation which still exists as a way of communicating the wider interests of the broadly defined information industry to government and others. Our conception, 20 years ago, was that trade bodies would become de facto standards setters as well as special interest groups. Thus, in the crisis of integrity which currently afflicts science research publishing, helping journal publishers to band together to set minimum standards of proof for the acceptability of articles and images would seem to me to be a very proper thing to do. Refusing to accept articles which did not demonstrate alignment with data standards that rendered the work verifiable, complete, and free from post-creation tampering might seem fairly obvious to some. Yet while we sit wringing our hands about retractions, falsified special issues and papermills, we are in the midst of an industry with no kitemarks, no assurance standards, no logo of trust and integrity that readers can invest any faith in at all.

Whatever I have written here about the challenges and opportunities of AI, I think it remains true that the continuing issues of most concern in the transfer of knowledge in the network society remain exactly as in the mid-1980s. Trust and identity are paramount issues. As I think about this, I think about the writings of my friend David Birch, the hugely respected commentator on the financial services marketplace. His maxim is that “identity is the new money”. He sees banks ceasing to earn their margins by trading in money and, in an agented society, becoming the agencies which establish the identity, credentials, protocols and standards that allow our agents and bots to work in digital networks on our behalf. For my part, I believe that journal publishers will make, and in many cases are already making, a similar pivot. Acts of publishing will become increasingly individual, institutional or professional-society based: commercial services around them will be concerned with data, integrity, and connectivity, including many of the services that banks of the future will offer to their customers.

Last month the world’s journal publishers gathered at the Frankfurt Book Fair. After 51 years of annual worship there, I no longer attend, but I am sure that the STM Conference and its great dinner covered all the issues of the current integrity problem. The same people were this month at the Charleston Conference and will have been doing the same thing. I just wonder how much of the discussion was devoted to how public and researcher trust in journal article publishing can be restored without concerted action by all market players to create standards of trust and integrity for their industry. Publishers have shown that they can act in concert in the past. They have accomplished really important things with collaborations like CrossRef and CHORUS. They are currently doing really important work on integrity. But we all have to ask: is it enough, and is it fast enough? And will it result in enforceable trade standards which all participants in the ecosystem can trust?

Of course, none of this will be relevant to the journal if in fact the journal does not survive. When I wrote last about metrics, I was trying to make the point that unless the metrics embrace the experimentation, the experiments in new business models are handicapped from the beginning.

It may well be that the sort of integrity standards that will emerge, deploying AI effectively, in current journal article publishing will have a profound influence on integrity standards in all other forms of communication in the network, and vice versa. Or, I could speculate, the journal-based business model and system will never resolve the integrity issues: that will have to wait for advanced AI development in agent-based systems, where it is the individual researcher who sets the integrity standard for the research data that their bot can accept and use as legitimate. It would not be the first time in the age of digital network communications that the decision point devolved to the end user.

So what has been more exciting? The neck-and-neck race for the American presidency and the future of the free world? Or the continuing argument about the future of citation indexing in the age of PRC (publish, review, curate)? I may be feeling a little jaded by the coverage that I have read, but here I stand, one day before the election, and the latter is becoming more and more attractive as a topic of conversation!

In both instances, of course, it depends where you stand. On the two occasions when I had the great pleasure and privilege of a conversation with Eugene Garfield, the great man, the master of citation indexing as a metric, pointed out that he thought what he had done was create the “least bad” methodology for comparing science research articles and journals. His acceptance that all systems would be gamed by some people contrasted with the reluctance of many of his successors to admit that citation falsification was a real problem. In these conversations he was never less than completely scholarly and completely detached. When asked why he was selling ISI to Thomson, he pointed out that, for age and health reasons, he could no longer run it himself, and in any case new insights and perspectives would be needed to reinvent it. I had a strong impression of a scientist and researcher who felt that everything, in time, needed to be reassessed, to change and to be reinvented.

These thoughts come to me this week as the Web of Science service, essentially Garfield’s citation indexing environment, made a series of announcements that make me worry about its future. Rather than seeking to change with the increasingly turbulent world of science research article publishing, it seems to be painting itself into a corner. Gene Garfield believed passionately that his metrics were helping scientists to compare and measure. He did his work in a world where there was a routine orthodoxy around the review and processing of an article submitted to a peer-reviewed journal. He would, I think, be the first to say that things have to change once that conventional picture begins to fragment.

And it has been fragmenting for many years now. From the beginning of open access through to the post-publication review model launched by eLife two years ago, it has been changing, with more and more articles appearing first of all on preprint servers and only subsequently (and not always) in versions of record in journals. We are moving into the age of megajournals and self-publishing. Some would argue that we need more metrics, different metrics, or better combinations of metrics. Others believe that we need ways of accounting for the subsequent performance of articles post-publication, that we need to recognise their review history, and not just a snapshot taken pre-publication.

Web of Science have moved, but it does not seem to me that these announcements reach out to the changing state of the journal publishing marketplace. By suspending journals like Cureus, or telling eLife that it is under investigation, I seem to see a service in retreat from the Garfield objectives of better universal standards of measurement. The danger for Clarivate is that they will end up owning a measurement guideline which measures a shrinking segment of the market. They may feel safe while the impact factor that arises from these measurements is still linked to the publishing outcomes vital to academic jobs and promotions. Unfortunately, the things that we say will never change have a habit of changing, and when they do change, it can be very quickly indeed. And Eugene Garfield, were he amongst us now, would be the foremost scientist trying to invent the better mousetrap.

It is not of course true that Web of Science are these days the only measure in the marketplace. The positioning of Scopus and of Dimensions is now extremely important to the future of research metrics. And then add the AI potential to this. Our increasing ability to explore the relationships between articles, and the performance and reliability of reviewers, and to test the validity of citation indexing itself, should be seen as a key pointer to the future of science article metrics. Maybe the announcements made by Web of Science this month are an indication of something else – the curtain coming down on an age of apparent (but not always quite real) certainty, and the beginnings of a new age of uncertainty before new standards are created and established.
