Jul 30
Good Bank, Bad Bank, Big Bank, Sad Bank
Filed Under B2B, Big Data, Blog, eBook, Education, eLearning, Financial services, healthcare, Industry Analysis, internet, news media, online advertising, Pearson, Publishing, Search, semantic web, STM, Uncategorized, Workflow
From Olympic exile on the splendid South Shore of Nova Scotia, I can observe that the banking crisis continues apace, and that the original Swedish solution – put all the smelly bits into a special container called a Bad Bank and cut it free from the Mother Ship – still holds great appeal. I can also see that the financial market analysts' demand to cut media companies up into "high growth, strong margins" companies and "low growth, declining margins" companies also has great appeal. We have seen it with McGraw-Hill and now with News International. The equity market analysts' view (and media markets are almost always at their most dangerous when those who lead companies feel forced to follow the views of those ultimate exemplars of power without responsibility – or experience) seems at the moment to be that the assets which have responded least well to the digital revolution, or have been slowest to react, should be cordoned off and cut free. Very strange: I thought the whole idea of "portfolio" in media ownership was that assets developed at different speeds, and that the fast-growth ones thus gave "cover" – time and capital – to allow low-growth assets to become fast-growth again, perhaps with the help of judicious bolt-on acquisitions along the way.
And then there is the question of cycles. Some of us apparently work in mini-cycles – the turn of markets within an 18-month period, according to an analyst friend – while others are "macro-cycle minded", which is apparently where I belong. So if I thought that the reason for McGraw-Hill to hold onto its Education division was that education, alongside healthcare, is the most enduring long-term growth market we have, and that the portfolio duty of Standard & Poor's was to enable McGraw's education unit to get back on its feet, challenge Pearson's leadership, and buy the right catalytic add-ons, then I was clearly wrong. Yet it seems clear to me that the future of rating agencies is quite as murky, from a regulatory as well as a digital standpoint, as that of any other market. And is McGraw's B2B, despite some distinguished work, really in the forefront of digital services and solutions in its verticals? Yet these are Good Bank assets, and Education is Bad Bank.
I could write the same about News Corp, television, and newspapers. I am certain that no broadcast media have really absorbed the meaning of a networked society, and this is as true of the world of TV stations and cable companies as it is of newspapers. Of course, one way around the problem is to sell while the going is good, as DMGT so signally failed to do in 2008 when it refused an offer of £1 billion for Northcliffe (regional press), an asset worth around £250m today. Sentiment forbade such a move, as it once did at News Corp, so are players like DMGT destined to split to please investors? Quite apart from my respect for the bravery and capacity for change involved in creating the new, B2B-orientated DMGT out of the old newspaper DMGT, who is to say that no local digital manifestation can be created here to replace traditional local newspapers? And how valuable, since DMGT already owns them, would those local brands and franchises become in the new local market – especially in helping B2B2C plays in markets like property reach ultimate consumers?
And where does the splitting end? The arguments that apply here apply equally to the Guardian Media Group, and are complicated by the fact that one investment made to give cover for the newspapers, EMAP, has faded faster than the newspapers themselves. Hopefully selling its half share of this and of Autotrader will offset the losses, and digital revenues (now up to £14.7m and growing by 26% this year) will do the rest. But here we hit another problem: digital businesses may be more profitable, but they are also smaller. Digital newspaper advertising models are small (Mail Online now stands at a forecast of £32.7m, with a target of £45m in 2013), as are paywall models (Times Online now reaches £27m per annum after a price hike). And the story of digital books is "less revenue, more margin, cannibalising customers to create a slightly smaller, slightly more profitable company". What happens when we finish that short cycle?
Maybe the answer to the scale problem is that scale is becoming less important anyway. In a digital world, if you have 50% of the workflow and solutions business in agriculture, why should you be in the same group as a content provider to the oil industry? Certainly our current ideas of scale came directly from the print world – you needed to be big enough to finance print runs that took a day, a week, or a year to sell. The cash flow model demanded scale. That is not so today, though I can well imagine a world where deploying common (and very expensive) technologies, and having sufficient internal know-how to do so, becomes a scale argument. Few B2B players "re-platforming" these days, even at quite a modest scale, can be doing so for less than $1.5m, even if their content is already in good XML order. Larger players face bigger bills, and these will be ongoing as we all go semantic web and Big Data. Then again, you may need to be big to finance this as well as to invest in collaboration with third parties – content-sharing, delivery-mechanism-sharing, solution-sharing. And you may need to be big and diversified to fight off the next round of investors in this sector – the enterprise software vendors who will want to add your B2B solutions to their architecture (or maybe you will need to be big enough to attract them: it can be hard to tell).
So settle back for summer and await the next wave of split rumours. Back to splitting up Informa? EMAP is already, like Gaul, divided into three parts and ready for resale. Pearson, in the analysts' view, should certainly sell Penguin and the FT (despite the fact that both are appreciating nicely now, and will only be needed as a votive offering to the markets when their sale can finance the next big education push or acquisition). Surely Wolters Kluwer should be subject to this one too – financial analysts sought the sale of its education and academic publishing assets, and, having succeeded, still hunger for news that Health is being sold away from law and tax.
Or maybe we should say that it is customer markets that change the size and scale of assets, not investment analysts, who have a vested interest in the outcomes they recommend. Maybe we would get richer listening to our customers than listening to these back-seat drivers?
Jul 22
The Science of Measurement
Filed Under Big Data, Blog, Education, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, social media, STM, Thomson, Uncategorized, Workflow
This is the third attempt in a week to record the thinking that initiated the first piece and pervaded the second. So here goes: "Science is based upon measurement and evaluation, yet the activities of scientists have themselves been less measured and evaluated than the subjects of their research." In a society that now seeks the ROI of everything, this is becoming an important consideration. In the past we have been happy to measure secondary effects, like the shadow on the wall called "impact factor", when it came to measuring and evaluating "good science". Now that we can see clearly who is reading what and how they rate it (look only at Mendeley and ReadCube), and what they say about it on Twitter and the blogs, we have a much broader measure of what is influential and effective. The market leader in measurement a decade ago was Thomson (ISI), using the outstanding heritage of Eugene Garfield. They were followed by Elsevier, who today, by way of Scopus and its offshoots, probably match them in some ways and exceed them in others. Today these players find themselves in a very competitive market space, and one in which pressure is mounting. Science will be deluged by data unless someone can signpost high quality quickly, and use those signposts in filters that protect users from the unnecessary, while keeping everything available for those who need to search the totality.
I started to get interested in this last year, when the word "altmetrics" first showed up. A PLoS blog by Jan Leloup in November 2011 asked for data:
“We seek high quality submissions that advance the understanding of the efficacy of altmetrics, addressing research areas including:
- Validated new metrics based on social media.
- Tracking science communication on the Web.
- Relation between traditional metrics and altmetrics including validation and correlation.
- The relationship between peer review and altmetrics.
- Evaluated tools for gathering, analyzing, or disseminating altmetrics."
So a wide range of new measuring points is required, alongside new techniques for evaluating measurement data gathered from a very wide variety of sources. And what is "altmetrics"? Simply the growing business of using social media data collection as a new evaluation point in order to triangulate measurements that point to the relative importance of various scientific inputs. Here the founders make the point at www.altmetrics.org:
“altmetrics is the creation and study of new metrics based on the Social Web for analyzing, and informing scholarship.
Our vision is summarized in:
“Altmetrics expand our view of what impact looks like, but also of what’s making the impact. This matters because expressions of scholarship are becoming more diverse. Articles are increasingly joined by:
- The sharing of “raw science” like datasets, code, and experimental designs
- Semantic publishing or “nanopublication,” where the citeable unit is an argument or passage rather than entire article.
- Widespread self-publishing via blogging, microblogging, and comments or annotations on existing work.
Because altmetrics are themselves diverse, they're great for measuring impact in this diverse scholarly ecosystem. In fact, altmetrics will be essential to sift these new forms, since they're outside the scope of traditional filters. This diversity can also help in measuring the aggregate impact of the research enterprise itself."
So a new science of measurement and evaluation is being born, and, as it emerges, others begin to see ways of commercialising it. And rightly so, since without some competition here progress will be slow. The leader at present is a young London start-up called, wisely, Altmetric. It has created an algorithm, encased it in a brightly coloured "doughnut" with at-a-glance scoring, and its first implementation is on PLoS articles. I almost hesitate to write that it is a recent investment of Macmillan Global Science and Education's Digital Science subsidiary, since they seem to crop up so often in these pages. But it is also certainly true that if this observant management has noted the trend, then others have as well. Watch out for a crop of start-ups here, and for the rapid evolution of new algorithms.
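To make the scoring idea concrete, here is a minimal sketch, in Python, of how an altmetrics-style attention score might be assembled: per-source mention counts combined in a weighted sum, on the assumption that a news story or blog post signals more editorial effort than a tweet. The source names and weights below are my own illustrative assumptions – Altmetric's actual algorithm and weightings are not public and are not described above.

```python
# A minimal sketch of an altmetrics-style attention score.
# All weights here are illustrative assumptions, not Altmetric's
# real (proprietary) weightings.

# Hypothetical weights: a news story or blog post is assumed to
# signal more editorial effort than a tweet or a bookmark.
SOURCE_WEIGHTS = {
    "news": 8.0,
    "blogs": 5.0,
    "twitter": 1.0,
    "readers": 0.5,  # e.g. Mendeley/ReadCube readership counts
}

def attention_score(mentions):
    """Combine per-source mention counts into one weighted score.

    `mentions` maps a source name (e.g. "twitter") to a raw count.
    Sources we have no weight for are ignored rather than guessed at.
    """
    return sum(
        SOURCE_WEIGHTS[source] * count
        for source, count in mentions.items()
        if source in SOURCE_WEIGHTS
    )

# Example: two news stories, one blog post and forty tweets
# score (2 * 8.0) + (1 * 5.0) + (40 * 1.0) = 61.0
print(attention_score({"news": 2, "blogs": 1, "twitter": 40}))
```

A real service would of course also normalise by discipline and article age, and reward diversity of sources, but even this toy version shows how social signals can be triangulated into a single at-a-glance number.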
Which really brings me back to the conclusion already written in my previous pieces but not fully drawn out. Measurement and Evaluation – M&E – is the content layer above metadata in our content stack. It has the potential to stop us from drowning in our own productivity. It will have high value in every content vertical, not just in science. Readers will increasingly expect signposts and scores so that they do not waste their time. And, more importantly than anything else, those who buy data in any form will need to predict the return that they are likely to get in order to buy with confidence. They will not always get their buying decisions right, of course, but the altmetrics will enable them to defend themselves to budget officers, to taxpayers, and to you and me when we complain that so much funding is wasted!