Amongst the many things accelerated by the COVID pandemic, the advance of Open Science could be the most marked. While the grateful world rejoices in the speed with which vaccines were produced, the unprecedented sharing of knowledge, techniques and data between the major labs – in Berkeley, the Broad (MIT and Harvard), Oxford and elsewhere – was a major element, alongside the setting aside of normal competitive feelings between research teams. This enabled roll-out within a year rather than the usual vaccine cycle of three to five years. Add the fact that wholly new science was being deployed – messenger RNA vaccine technology – and we are left with a remarkable conclusion: in a collaborative environment and under Open Science protocols, things can go faster and become effective sooner than we had ever imagined.

With that in mind, it is worth considering the role of publishing in all of this. Whenever I become too strident when talking to publishers in the science research communications sector about the changing role of journals and the incoming marketplace of data and analysis, I usually get a fusillade of questions back about the opportunities in data, and the claim that significant flags have already been planted on that map. And they are right, though they often ignore the issues raised by increasingly targeted and specific intelligent analysis. They also ignore the fact that, outside of eLife and, to an extent, PLoS, no one of scale and weight in the commercial publishing sector has really climbed aboard the Open Science movement with a recognition of the sort of data and communication control that Open Science will require.

So what is that requirement? In two words – replicability and retraction. While we still live in a world where the majority of evidential data is not available with the research article, and is not always obviously linked to data in institutional repositories, it is hard to imagine moving forward from the position reported by Bayer’s researchers – that only 25% of the research they want to use can be reproduced in their laboratories. Other studies have shown even lower figures. What does “peer review” actually mean if it produces this sort of result? Yet publishers have for years disdained publishing articles that “merely” reproduced existing results and validated previous claims. A publisher interested in Open Science would open up publishing channels specific to reproducibility, link successful and unsuccessful replication attempts to the original article and encourage others to do the same, while building collective data on replication work for analysis – including analysis of widely cited papers which cannot be reproduced outside the ambit of the originating team.

Open Science advocates would go further and push for the pre-registration of research methodology, peer reviewing the submission of the research plan and publishing it. This would prevent a subtle twist in the reporting that would allow the aims to be slightly adjusted subsequently to fit the evidence actually collected. To my knowledge, and I hope I am wrong, only PLoS has a facility for this at present. Searching and analysis of pre-registration data could be immensely useful to science, just as the activity itself could add greater certainty to scientific outcomes. In particular it might lead to fewer retractions, and it is in this area that publishers can again make a huge contribution to Open Science. Retraction Watch, and the US charitable foundations that support the two principals there, do a brilliant job. Between 2010 and 2019 they reported on 20,000 retractions of journal articles, but the problem keeps growing: the number of retractions between October 2019 and September 2020 rose by another 4064. The fact that researchers are reporting and recording this data wherever they can find it is admirable, but surely publishing should be doing its own housekeeping, collecting and referencing this data in a central registry. There have to be analytics, in an Open Science environment, which point to the effectiveness of peer review, and if peer review is as important as publishers claim, then protecting its standards should be a critical concern. Along with another Open Science mandate – the publishing of signed peer review reports alongside articles – this constant monitoring of retractions is vital if researchers are not to be misled. This is not about fraud, but about over-ambitious and unjustified claims. Publishers should not try to hide the number of retractions they have made, but use the open display of the results to demonstrate how effectively they work in the vast majority of cases.

The last element here is time. Publishers can use data and analytics far more effectively to track article lifetimes, to show that work diminished in its first five years can come back into importance in its second, and to show how retractions issued late in the life of an article can affect other work which cited it or was built around it. By the time we reach 2025, the data around the article life cycle will be far more important than much of the data in all but the most important research articles.

The old Latin tag, lingering still from my childhood scrummage with the classical tongues, and from which I emerged without glory, was “Quis custodiet ipsos custodes?” Who guards the guards? It comes to mind as the Scholarly Kitchen debates trust in science and scientists, and recalls the long debates in the UK during my lifetime about trust and self-regulation. Nobody will trust doctors to discipline themselves, we argued, so away with the British Medical Association’s self-regulatory role and bring on the General Medical Council: professional, detached regulation without conflicts of interest. This, the politicians proclaimed, was the foundation of trust. And the argument has raged throughout professional life ever since. Can we “trust” the Law Society to discipline lawyers, or the Catholic Church to punish miscreant priests? Can we expect politicians to follow the Ministerial Code without regulators and investigators? (Plainly not!) Can we trust scientists not to falsify evidence, report misleading information, or spread disinformation about those who disagree with them? After all (and after reading some of the post-vaccine stories this may come as a shock), scientists are human beings. They seek fame, fortune and preferment much as other professions do, and they have their own statistical headcount of charlatans to root out.

Yet it seems that we should not be afraid of any trust deficit issues in science. What I take from the Scholarly Kitchen panellists is the feeling that it is peer review that is our protection, and that as long as we maintain it in its current form, public trust in science can be assured. And I am left wondering, mentally rehearsing the forty or more years of argument I have already heard on this topic. I recall the reviews, from Cochrane and from others, revealing bias and favouritism, cronyism and the existence of domain theories that could not be contradicted if career advancement was to be secured. Scientists are human too. And peer review is much better than it was, especially where it is now more evidence-based, using AI-inspired investigation to create the battery of tests employed by, for example, UNSILO in the Cactus Communications systems supporting peer review used by Springer Nature and others. Yet, even now, is it not worth asking who guards these guards? Why are retractions so hard to secure? And to communicate? What independent review takes place? Where does a researcher go for appeal? Is arbitration available? And how would the public ever discover that a retraction had been made?

If many of the answers to these questions turn back on the publisher as the arbiter of all things trustworthy, then I have to say that, having been one for many years, and knowing, trusting and loving many others, I still do not see this as the sort of absolute trust guarantee that the general public might like. If we want people to say “It must be right – the publisher would not have published it if it wasn’t!”, then we must also recognise the operational limitations at work here. First of all, the publisher is not an independent arbitrator, but remains very bound up in the whole process of scholarly communications. This signifies classic conflicts of interest. He will surely want to publish True and Trustworthy Science, but he must also have an eye to his commercial realities – and these are as present in not-for-profit operators as in fully commercial ones. No one is more attentive to the importance of journal branding in encouraging submissions. Great names and institutions help establish reputations; publishing work which confirms earlier results as correct rarely does. Branding and reputation affect levels of APCs. Peer review is a cost, and an arbiter of profitability, and cost control remains vital to survival. Can you be an effective guard if you survive by publishing at a volume not dictated by quality and at a cost not dictated by quality control?

In very many ways the major science publishers of the world do a brilliant job in publishing so much material of such great quality. Many years ago I thought that publishers would be better off if they gave up peer review. Yet even now there must be doubts as to whether pre-publication is the right point – surely pre-funding and post-publication have equal claims? And continuous or periodic re-review post-publication, to account for changes wrought by subsequent science to the importance of one particular finding, has its logic as well. If pre-registration of research aims and scope is, as many Open Science adherents demand, separated in time and publication from evidence, results and findings, to obviate the problem of aims subtly migrating to fit the evidence gathered, then this would have a salutary effect; but publishers should be wary – such a system would involve both parts being published separately and in different places, though thoroughly virtually linked as far as the user was concerned. Finally, if publishers do see their work as an important element of trust in science, they would be well advised to set up an international council to register and list retractions, and to act as an arbitration point in disputes over trust issues. It might also decide what the minimum standards for peer review might be. Is a PLOS ONE-type examination of methodology and adherence to scientific principles acceptable as a baseline?

I might well offer myself as a judge on such a tribunal. In my years as a law publisher I learnt that judges can never be wrong in the UK. The wording used then, when the Court of Appeal reversed a judgement of the High Court, was the wonderfully mild “It seems My Lord Justice has mistook himself.” And which judge judges the judges, may I ask?
