Feb 18
Clinical Killing in Ann Arbor
Filed Under B2B, Big Data, Blog, data analytics, Financial services, healthcare, Industry Analysis, internet, Publishing, Thomson, Uncategorized, Workflow
What is the difference between the value of a database company in 2012 and then, forty-three inflation-free months later, in February 2016? According to IBM Watson Health, who have just bought Truven Health for $2.6 billion from Veritas Capital, which bought it from Thomson in 2012 for $1.25 billion, a neat 100%. Now, private equity is capable of wondrous things in short time periods, and Mike Boswood and his team are great managers and have made great strides with the company, but when things double in value over such a short period, other factors must be coming into play. Maybe there is a sudden perceived shortage of clinical evidence data in healthcare? Hardly seems likely. Fresh stuff is produced daily, and better recorded than ever before. The good folk at Hearst Business will be looking at the balance sheet valuations of Zynx and First Databank, and over at the BMJ, Clinical Evidence takes on a bright new gleam of interest. A gusher in Ann Arbor, Truven’s base, could float an awful lot of other boats.
But, just a minute, what is Truven Health Analytics? Well, at base, dear old Micromedex, an indexing and recording database for clinical evidence. And is it sparkly and new? No, it’s about 40 years old. It records decisions and costs, centred on drug use, and has spawned great tools for doctors and hospitals: its Formulary Advisor, Red Book and PDR Electronic Library are the tool sets that enable administrators and clinicians to know if they are using the right products in the right combinations at the right dosages and for the right price. Pretty valuable stuff, then? Yes, it fully justified the price that Veritas paid in 2012. In earlier times it had been hard to deploy the data effectively – I remember initially encountering it on microfiche (use Google to look this up if you are under 50), when it was densely impenetrable. But Truven has, in recent years, added another word to its title. It is now Truven Health Analytics. And therein lies our tale.
Truven, I must be clear, does more than pure clinical evidence. Services like Marketscan or the patient information services support policy, administration and patient welfare. Its client roster begins with the 100 top US hospitals and includes another 8400 healthcare data buyers. Anyone entering the sector would love to get their hands on its client list. So has IBM bought this as a prestige, flagship purchase? There is no evidence of that. Watson Health has made three acquisitions already. Phytel and Explorys have been with them for ten months, and the most recent, Merge Healthcare, cost a mere billion dollars. However, this is the first large data play they have made, and one of the key issues here is probably people – one thing that IBM would have been short of in healthcare was sector data scientists, curators and architects. Now it has a whole cadre. And yet, a 100% price hike?
So let’s get back to that Analytics word that has added itself to the Truven name. Truven, I learnt four years ago, stands for a concatenation of “trusted” and “proven”. Neat, eh? Health is Health. And Analytics… stands for a burgeoning ambition to turn analysis into solutions. The use of the word in the company name was, I suspect, a signpost pointing towards a destination. But for IBM it represents an arrival point – where they now are and intend to be. The coming together of Watson and Truven is thus the wheel hitting the road, or the data hitting the fan. It had to be a big price if it is to carry the weight of problems solved, solutions created, revolutionary productivity gains and quality hurdles surmounted. You could call this the marketing price, securing not so much the value of Truven as the power and importance of what happens next at IBM Watson Health.
Yet I still have two niggling reservations. One concerns healthcare data. It’s a sector famous for data profusion. All that stuff so arduously recorded and so expensively stored in administrative systems built to withstand the storms of litigation by insurers and patients. If Truven and Hearst can collect it at source, why couldn’t IBM? If IBM is creating a big data solution, then the highly organised and structured databases of Hearst and Truven are a drop in their ocean. Why not lease the data, already becoming commoditised, rather than owning it? We come back to all those data scientists, but by now they are beginning to sound like very expensive staffers.
And my final concern, of course, is for my friends and colleagues at Thomson Reuters. Mercifully the trustees of the Woodbridge Trust, beneficial owners of 53% of that company, are supremely unlikely to read this blog, but surely they will find out at some point that an asset their management disposed of four years ago has doubled in price in the intervening period. They will then reflect that their managers are currently selling Thomson Reuters Science and IP. In patent information and in science citation indexation, these are two of the most data-rich information companies in the market place. As ever, it seems that Thomson Reuters want to sell both units together, always looking at a clean deal with low fees. But what if a private equity player should buy them, split them up and add the word Analytics on the end of each – and then sell them for double the purchase price? Thomson Reuters only has one shareholder who matters, but that would not entirely reduce the mutterings about valuations and returns. In fact, someone in the City is probably running some analytics on it right now!
Feb 16
Points of Interest on the Road to Innovation
Filed Under B2B, Big Data, Blog, data analytics, Industry Analysis, internet, Publishing, semantic web, Uncategorized, Workflow
Like rhubarb, innovation can be forced. And both are often best served mixed with other elements, from mash-up to crumble. But there, worried reader, this rather superficial comparison ends. The results of forcing innovation are often disastrous: would-be dotcom tycoons forget about market readiness, and that valuable old maxim “nothing succeeds on the Internet until it has failed on the Internet” is set aside. Think of the UK’s Independent newspaper. If management had not innovated around the “I” versioning they would have had nothing to sell this week. Thirty years after launch the print paper has come to an end, leaving a website and the sale of the I to Johnston Press, a company which, ten years ago, turned over 600 million pounds from some of the most valuable regional newspaper franchises in Britain to create margins of 130 million pounds. Today its revenues stand where its margins did then. It reminds us that innovation in the face of change is often not a choice – more a means of survival.
Which makes it even more odd that the two most innovative cultures experienced in my working life – AI and GIS – have taken so very long to get to market. They have had to do name changes and relaunches as entrepreneurs despaired of bringing these technologies to life in user-appealing fashion. So I well recall the advice of an experienced property man when I said that I was doing some work for an emerging property database service. “Maps and data?”, he said, “people just want to buy and sell – and cheat – when it comes to property”. But I was in thrall to the galvanic energy and enthusiasm of Christopher Roper at the time, and believed differently. The company he sold to DMGT, Landmark Information Group, has gone from strength to strength, adding great network acquisitions which bring together the needs of surveyors, conveyancing solicitors, mortgage parties and, eventually, buyers and sellers in the same workflow. From providing a non-mandatory environmental check Landmark has become an essential part of buying and selling property. And all of this was sparked by an agreement with the UK’s Ordnance Survey mapping agency to allow re-use of historical survey data going back to the mid-nineteenth century.
But it was the other deal that Christopher Roper did at this time which has left me wondering all these years. He created a joint venture company between Ordnance Survey and Landmark called Point X. The object here was to gather Point of Interest data. The conventional wisdom then was that you would need this data to customise and innovate. Do you need to know the nearest pub? Or vet surgery? Or are your salesmen calling on newsagents? Or sweet shops? We would gather up the data and then offer it like toppings on ice cream. In the age of mash-up this seemed a very obvious strategy. Unsurprisingly to watchers of the glacial progress of change, it has taken a decade for this to come about. You can imagine my joy, therefore, when I saw the Landmark Solutions Point of Interest Web Portal launched last month. All of that delicious information combined with the Ordnance Survey Open Source mapping environment. If, as some think, web service innovation is a cookbook which begins “first take a map, and then add data”, then a new age has begun.
But many things have happened to GIS in the meanwhile. In local and national government, in logistics and distribution, in planning, in agribusiness and in very many other walks of life we are totally GIS dependent. The icy grasp of systems providers like ESRI means that many of these are very idiosyncratic, so achieving compatibility with an installed base – backwards compatibility – is no easier than it was in the early days of Microsoft and Apple. While GIS clearly failed as a religion, it has succeeded as millions of real solutions. So what do you have to do to cope with that? Well, make a customisable solution for a start. Point X has over 4 million points of interest, but they come in 9 groups, with 52 data categories and some 620 different classifications. Go to the portal and you can mix and match, and then select what you need to do to align your custom creation to the GIS system you are using. And the business model? This is, as it has to be, a Pay As You Go model! (http://www.landmark.co.uk/news-archive/landmark-solutions-launches-points-interest-demand-web-portal)
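To make the mix-and-match idea concrete, here is a minimal sketch of that kind of selection, written against a hypothetical CSV extract of Point X data. The column names, group labels, file name and GeoJSON output are my assumptions for illustration only; they are not the actual Point X schema or the portal’s interface.

```python
# Hypothetical illustration: filter a POI extract by group / category /
# classification, then reshape it for whatever GIS system is in use.
import csv
import json

def select_points(csv_path, group=None, category=None, classification=None):
    """Filter a POI extract down to the slice a customer actually wants."""
    selected = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if group and row["group"] != group:
                continue
            if category and row["category"] != category:
                continue
            if classification and row["classification"] != classification:
                continue
            selected.append(row)
    return selected

def to_geojson(points):
    """Reshape the selection as GeoJSON so most GIS systems can load it."""
    return {
        "type": "FeatureCollection",
        "features": [
            {
                "type": "Feature",
                "geometry": {
                    "type": "Point",
                    # Coordinates are left in the source grid; a real workflow
                    # would reproject to match the target GIS.
                    "coordinates": [float(p["easting"]), float(p["northing"])],
                },
                "properties": {
                    "name": p["name"],
                    "classification": p["classification"],
                },
            }
            for p in points
        ],
    }

if __name__ == "__main__":
    # Example: pull out just the pubs and hand them to a GIS as GeoJSON.
    pubs = select_points("poi_extract.csv",
                         group="Eating and Drinking",
                         classification="Public Houses and Bars")
    print(json.dumps(to_geojson(pubs), indent=2))
```

The point is less the code than the shape of the workflow: pick a slice of the classification tree, pay for that slice, then bend it to fit whichever GIS environment is already installed.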
In a sense, this now seems obvious to the user-centric world in which we live. Yet service developers always think first about prescribing service values and limiting options, so when they let users do what they want to do it seems like a big deal. Here is Reed’s ICIS chemicals database announcing its new Data Express service a week ago: “Data updates are highlighted automatically and price history updates are sorted in the same column, making it easy to benchmark prices, identify opportunities, and manage risks. This facilitates smoother decision-making, and increased competitiveness for ICIS customers.” This announcement effectively launches an API and an Excel plug-in! (www.icis.com/data express)
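As a sketch of what that plumbing enables – and only a sketch, since the excerpt above is all we have to go on – here is how a customer might pull a price history series and benchmark the latest quote against it. The endpoint, parameters and JSON layout are invented for the example; they are not the real ICIS Data Express interface.

```python
# Hypothetical illustration of "an API plus an Excel plug-in": fetch a price
# series and compare the latest assessment to its one-year average.
from statistics import mean
import requests

API_BASE = "https://api.example.com/prices"   # invented endpoint
API_KEY = "YOUR_KEY_HERE"

def price_history(commodity, region, weeks=52):
    """Fetch weekly assessed prices for a commodity in a region (hypothetical API)."""
    resp = requests.get(
        API_BASE,
        params={"commodity": commodity, "region": region, "weeks": weeks},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()
    return [point["price"] for point in resp.json()["series"]]

def benchmark(commodity, region):
    """Compare the latest price to its one-year average, much as a buyer might in Excel."""
    series = price_history(commodity, region)
    latest, average = series[-1], mean(series)
    return {
        "latest": latest,
        "year_average": round(average, 2),
        "premium_pct": round(100 * (latest - average) / average, 1),
    }

if __name__ == "__main__":
    print(benchmark("benzene", "Europe"))
```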
So we are in a familiar place in the industry on the road to innovation. We cannot believe what customers tell us about how they want to use information, and we hoard information we cannot believe they want. What happened to making service value for the few and then iterating for the many? Or even watching how the customer works and makes decisions – and then helping?