It has been a month of contrasts. Good, solid results at Reed Elsevier have the market analysts demanding the sale of Lexis Legal: the chief break-up irritant at Bernstein forecasts a 20% increase in value as soon as it is done. Reed Elsevier now trades at a discount to last year’s valuation as well as to the wider quoted marketplace. And how does this come about? Simply by representing the company to analysts as a diversified investment portfolio, and then disappointing them with the results, which always prompts a demand for the sale of the weakest bit and the purchase of something stronger.

Meanwhile, over at Thomson Reuters, it has been a dynamic July in terms of forward progress. You can measure that in acquisitions if you like, but to me the key element is the strategic positioning of these purchases and what they do to pursue the goal of market leadership in services and solutions for corporate finance, tax and regulatory work: from banking, equity trading, law and tax/accountancy on the practitioner side at one end, right across to the desktops of corporate finance, tax and legal departments at the other. If the Thomson Reuters vision works out, it will connect up all of these functions and activities into a series of solutions which will compel big, and then medium and small, corporates into easier methods of information handling, methods that get easier the more reliant they become on inter-related services and solutions from Thomson Reuters. This is about integration in the face of user need, about recognizing the primacy of the network, and about bringing one huge company with many specializations into focus on the issues of service and solution. This is not a diverse portfolio of disparate elements: the company buys the bits it needs and sells the bits that do not fit, but the test of an acquisition is whether these global aims are satisfied as well as whether the purchase makes financial sense and earns the required return.

Let’s take a few July examples. The headline purchase was the acquisition of FX Alliance for $625 million. Here, then, is Thomson Reuters, a leading player in the sell-side interbank foreign exchange market, one of its traditional strengths, pulling in a major player from the bank-focused currency trading business for corporates, asset managers and hedge funds. Foreign exchange is a huge, diversified marketplace, involving some $5 trillion of transactions per day, and this deal gives Thomson Reuters the ability to work in both the internal institutional markets and the corporate-facing external market, using electronic platforms and high-speed trading techniques all the while.

By comparison, my next two examples are smaller in scale, but demonstrate other aspects of the process that is going on. Having written extensively about the launch of the Thomson Reuters GRC division – bringing legal and tax into focus with financial services in the areas of Governance, Risk and Compliance – I want to mark the arrival of “Eikon for Compliance Management” with a special commendation. It seems to me that this now closes a huge loop, and provides a service environment that was never more urgently needed. It is said that there are now 60 new regulatory announcements a day from some 230 regulators and exchanges in financial markets, yet fewer than a third of traders report having had any compliance training or update in the last three months. But to join up solutions for your customers you need to start with a joined-up company yourself.

My last example dates back to my experience of some 25 years as an external director on the international side of the Bureau of National Affairs, which has now disappeared into Bloomberg. An issue that intrigued me there, which attracted a great deal of attention and was crying out for a service solution, was Transfer Pricing. Boring? More likely stupefying! Here was an area that always demanded a software-based solution, since most tax lawyers and finance specialists were deeply reluctant to get entrapped in its intricacies, and access to someone who knew what they were talking about was rare and expensive. BNA produced great books on the subject, and so did WK and others. But Thomson Reuters has produced ONESOURCE Transfer Pricing, with an Analyser to keep up to date with the compliance requirements facing corporates who trade across borders, and a Documenter and Benchmarking solution, ensuring that users have the right forms and, vitally, that they are benchmarking against corporations whose solutions have already been accepted by the authorities. Here, then, is a vital but expensively neglected field of corporate activity, which reflects much of what Thomson Reuters is now about.

The final reflection is upon “platform”. At all levels Thomson Reuters operates on a multiplicity of platforms, and while content integration and re-use have eased access and allowed common metadata standards to evolve, this still clearly has a good way to go. And there is strength in this multiplicity – no one wants to interrupt the steady absorption of Eikon, now beginning to fulfill expectations, or damage the market primacy of WestlawNext. I expect, however, in the age of data, to see back-end integration of this very large player’s systems remain a continuing theme. At the moment its greatest rival is a company, Bloomberg, which is swaddled in the limitations of a Victorian corset – the Terminal. That too will have to go, as Bloomberg limits its own future through an inability to get its new plays in law and government to sell to end users who do not want even a flat-priced, all-you-can-eat deal on a box originally built for traders. There is a midpoint, and Thomson Reuters’ migration looks like getting them there first.

This is the third attempt in a week to record the thinking that initiated the first piece and pervaded the second. So here goes: “Science is based upon measurement and evaluation, yet the activities of scientists have themselves been less measured and evaluated than the subjects of their research.” In a society that now seeks the ROI of everything, this is becoming an important consideration. In the past we have been happy to measure secondary effects, like the shadow on the wall called “impact factor”, when it came to measuring and evaluating “good science”. Now that we can see clearly who is reading what and how they rate it (look only at Mendeley and ReadCube), and what they say about it on Twitter and the blogs, we have a much broader measure of what is influential and effective. The market leaders in measurement a decade ago were Thomson (ISI), using the outstanding heritage of Eugene Garfield. They were followed by Elsevier, who today, by way of Scopus and its offshoots, probably match them in some ways and exceed them in others. Today these players find themselves in a very competitive market space, and one in which pressure is mounting. Science will be deluged by data unless someone can signpost high quality quickly, and use it in filters to protect users from the unnecessary, while keeping everything available so that some people can search the totality.
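For context, and not something spelled out in the post itself: the “shadow on the wall” is the conventional two-year journal impact factor of Garfield’s ISI, which is simply a ratio of citations received to citable items published:

\[
\mathrm{IF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
\]

where \(C_{y}(k)\) is the number of citations received in year \(y\) by items a journal published in year \(k\), and \(N_{k}\) is the number of citable items it published in year \(k\). It is a journal-level, citation-only measure, which is exactly why the paragraph above treats it as a secondary effect rather than a direct reading of influence.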

I started to get interested in this last year, when the word “alt-metrics” first showed up. A PLoS blog by Jan Leloup in November 2011 asked for data:

“We seek high quality submissions that advance the understanding of the efficacy of altmetrics, addressing research areas including:

So a wide range of new measuring points is required, alongside new techniques for evaluating the measurement data gathered from a very wide variety of sources. And what is “altmetrics”? Simply the growing business of using social media data collection as a new evaluation point in order to triangulate measurements that point to the relative importance of various scientific inputs. Here the founders make the point at www.altmetrics.org:

“altmetrics is the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship.

Our vision is summarized in:

J. Priem, D. Taraborelli, P. Groth, C. Neylon (2010), Altmetrics: A manifesto, (v.1.0), 26 October 2010. http://altmetrics.org/manifesto

These scholars plainly see as well that it is not just the article that needs to be measured and evaluated, but the whole chain of scholarly communication, and they indicate particular pressure points where the traditional article needs to be supported by other publishing types in the research communication cycle:

“Altmetrics expand our view of what impact looks like, but also of what’s making the impact. This matters because expressions of scholarship are becoming more diverse. Articles are increasingly joined by:

  • The sharing of “raw science” like datasets, code, and experimental designs
  • Semantic publishing or “nanopublication,” where the citeable unit is an argument or passage rather than entire article.
  • Widespread self-publishing via blogging, microblogging, and comments or annotations on existing work.

Because altmetrics are themselves diverse, they’re great for measuring impact in this diverse scholarly ecosystem. In fact, altmetrics will be essential to sift these new forms, since they’re outside the scope of traditional filters. This diversity can also help in measuring the aggregate impact of the research enterprise itself.”

So a new science of measurement and evaluation is being born, and, as it emerges, others begin to see ways of commercialising it. And rightly so, since without some competition here progress will be slow. The leader at present is a young London start-up called, wisely, Altmetric. It has created an algorithm, encased it in a brightly coloured “doughnut” with at-a-glance scoring, and its first implementation is on PLoS articles. I almost hesitate to write that it is a recent investment of Macmillan Global Science and Education’s Digital Science subsidiary, since they seem to crop up so often in these pages. But it is also certainly true that if this observant management has noted the trend then others have as well. Watch out for a crop of start-ups here, and the rapid evolution of new algorithms.
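To make the scoring idea concrete, here is a minimal, purely illustrative sketch of the kind of weighted aggregation such an at-a-glance score might perform. The sources and weights below are invented for the example; they are not Altmetric’s actual algorithm or data.

```python
# Illustrative only: a toy "attention score" that weights mention counts
# from different sources. Sources and weights are hypothetical, chosen to
# show the triangulation idea, not Altmetric's published method.

SOURCE_WEIGHTS = {
    "news": 8.0,       # mainstream news coverage counts for more
    "blogs": 5.0,      # research blog posts
    "twitter": 1.0,    # tweets and retweets
    "mendeley": 0.5,   # reader counts in reference managers
}

def attention_score(mentions: dict) -> float:
    """Combine per-source mention counts into a single weighted score."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

if __name__ == "__main__":
    article_mentions = {"news": 2, "blogs": 3, "twitter": 40, "mendeley": 120}
    # 2*8 + 3*5 + 40*1 + 120*0.5 = 131.0
    print(attention_score(article_mentions))
```

The interesting design question for any real entrant is not the arithmetic but the choice and calibration of those weights, and how quickly they can be recalculated as new mentions arrive.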

Which really brings me back to the conclusion already written in my previous pieces but not fully drawn out. Measurement and Evaluation – M&E – is the content layer above metadata in our content stack. It has the potential to stop us from drowning in our own productivity. It will have high value in every content vertical, not just in science. Readers will increasingly expect signposts and scores so that they do not waste their time. And, more importantly than anything else, those who buy data in any form will need to predict the return that they are likely to get in order to buy with security. They will not get their buying decisions right, of course, but the altmetrics will enable them to defend themselves to budget officers, taxpayers, and you and me when we complain that so much funding is wasted!

 
