So what has been more exciting? The neck-and-neck race for the American Presidency and the future of the free world? Or the continuing argument about the future of citation indexing in the age of PRC (publish, review, curate)? I may be feeling a little jaded by the coverage I have read, but here I stand, one day before the election, and the latter is becoming more and more attractive as a topic of conversation!

In both instances, of course, it depends where you stand. On the two occasions when I had the great pleasure and privilege of a conversation with Eugene Garfield, the great man, the master of citation indexing as a metric, pointed out that he thought what he had done was create the “least bad” methodology for comparing science research articles and journals. His acceptance that all systems would be gamed by some people contrasted with the reluctance of many of his successors to admit that citation falsification was a real problem. In these conversations he was never less than completely scholarly and completely detached. When asked why he was selling ISI to Thomson, he pointed out that, for reasons of age and health, he could no longer run it himself, and that in any case new insights and perspectives would be needed to reinvent it. I had a strong impression of a scientist and researcher who felt that everything, in time, needed to be reassessed, to change and to be reinvented.

These thoughts come to me this week as the Web of Science service, essentially Garfield’s citation indexing environment, made a series of announcements that make me worry about its future. Rather than seeking to change with the increasingly turbulent world of science research article publishing, it seems to be painting itself into a corner. Gene Garfield believed passionately that his metrics were helping scientists to compare and measure. He did his work in a world where there was a routine orthodoxy around the review and processing of an article submitted to a peer-reviewed journal. He would, I think, be the first to say that things have to change once that conventional picture begins to fragment.

And it has been fragmenting for many years now. From the beginning of open access through to the post-publication review model launched by eLife two years ago, the landscape has been changing, with more and more articles appearing first of all on preprint servers and only subsequently (and not always) as versions of record in journals. We are moving into the age of megajournals and self-publishing. Some would argue that we need more metrics, different metrics, or better combinations of metrics. Others believe that we need ways of accounting for the subsequent performance of articles post-publication, and that we need to recognise their review history, not just a snapshot taken pre-publication.

Web of Science have moved, but it does not seem to me that these announcements reach out to the changing state of the journal publishing marketplace. In suspending journals like Cureus, or telling eLife that it is under investigation, I seem to see a service in retreat from the Garfield objectives of better universal standards of measurement. The danger for Clarivate is that they will end up owning a measurement guideline which measures a shrinking segment of the market. They may feel safe while the impact factor that arises from these measurements is still linked to publishing outcomes vital to academic jobs and promotions. Unfortunately, the things that we say will never change have a habit of changing, and when they do change, it can happen very quickly indeed. And Eugene Garfield, were he amongst us now, would be the foremost scientist trying to invent the better mousetrap.

It is not, of course, true that Web of Science are these days the only measure in the marketplace. The positioning of Scopus and of Dimensions is now extremely important to the future of research metrics. And then add the potential of AI to this. Our increasing ability to explore the relationships between articles, to assess the performance and reliability of reviewers, and to test the validity of citation indexing itself should be seen as a key pointer to the future of science article metrics. Maybe the announcements made by Web of Science this month are an indication of something else: the beginning of the lowering of the curtain on an age of apparent (but not always quite real) certainty, and the beginnings of a new age of uncertainty before new standards are created and established.
