Jan 19
There is nothing on TV…
Filed Under B2B, Blog, Education, eLearning, Financial services, healthcare, Industry Analysis, internet, mobile content, online advertising, social media, Uncategorized | 1 Comment
“Well,” she said, “there is nothing on the television. I don’t know why we have one, since it’s certainly not for what we watch. Well, my husband watches the cricket, of course. He’s cricket crazy, up half the night watching the Ashes from Australia, he was. And I like crime. Of course, it only takes me 10 minutes to find out who the killer is, and you’ve got to watch all those ads before you can see you were right, so I mostly record them and take out the ads as I watch them…”
This extract from a 2013 survey (this family watched 45 hours of television a week) demonstrates once again that television remains the dominant source of entertainment in many developed societies, even if we no longer switch on at 6pm and stay with it until the National Anthem closes the night’s broadcasting, as British social critics of the 1970s feared we would. Barry Parr, now Outsell’s lead analyst in the sector, has started to examine what is happening in a series of very well-argued articles (“TV’s complexity crisis is an opportunity for content owners”, 13 January 2014, www.outsellinc.com). He set me thinking about what content owners in the print industry did a decade ago when they went through the parallel process – the traditional delivery format is broken and the traditional delivery mechanisms, with all of their complex supply chain relationships, are beginning to fail. Do the reactions of the print world and the record industry give a clue to the likely reactions of their television peers?
A first reaction in print was disbelief, followed swiftly by denial that the speed or range of change could be anywhere near as severe as commentators reported. Just as book publishers were saying that “they will always want narrative and always in book form”, so cable operators and channel owners in television are talking about brand loyalty, the high value attached to scheduling, and the importance of holding the line on pricing and packaging. And just as brand does not attach to publishers in entertainment markets but to authors, so brand does not attach to channels but to programming/shows. As a result, both types of middleman – publishers and channel operators – misjudged their users, as almost all intermediaries did in the analogue world, because it was impossible for them to see how content was consumed, and their knowledge of their audiences, despite all the surveys, the focus groups and the market research, was stale by the time it reached them. Only in a digitally networked world do you begin to overcome the problems of knowing audiences, and even then, asking them questions is less informative than watching their behaviour and mapping their reactions (recommendations etc).
Shortly we shall see television distribution, which five years ago in Europe was diminishing its creative efforts and outsourcing everything, beginning to buy back the outsourcers and talk about the value of “content”, as in “content will always be king” (book publishing, c.1995). This will be followed by a great wailing and gnashing of teeth around further falls in channel advertising revenues, while every effort is made to seek alternative revenue sources. I quoted Jim Dolan, chairman of Cablevision, last year, when he pointed out that his many children, of all ages, now used Netflix on Cablevision. He saw this as a signal to work on Cloud libraries and the ability in the network to download and store up to 10 programmes simultaneously. And surely this is a very proper reaction, but only if the cable players really see their future as utilities, with very well-regulated margins, under competitive pressure from telcos in a similar bind, and subject to fickle consumers who can change broadband suppliers on a click. So here is the second thing which television will find hard to buy: digitally networked markets increase consumer power immensely, and the contractual tie-in so beloved of cable and satellite is now a very shaky foundation indeed. And alongside Netflix, who can fail to see Google and Amazon as major market players here? They really do understand Cloud libraries and downloads.
If Jim Dolan really thinks that the US cable industry is “living in a bubble with its focus on TV packages that people must pay for as offered” (www.hollywoodreporter.com/print/599574) then there is at least a hope that a note of realism may be afoot which was absent in print and music. Yet TV has always been about mass audiences and numbers of eyeballs sold to the advertiser. Can it work on a niche-interest, subscription model? Spending time last year with Fred Perkins, a survivor of print (ex FT.com, ex McGraw-Hill), at Information Television (http://www.information.tv/?cid=3) in London, I saw convincing demonstrations (caravanning and mobile holiday homes formed a classic model for this) which made me wonder why more niches, alongside other B2B or B2C digital content vehicles, do not use niche TV effectively. OK, I know that many magazine publishers invested in studios in the hope of getting into an aligned television market, and this never worked. But that was before the digital broadband network had further blurred and softened the edges between content formats and packages. My conversations with Fred point the same way, as do stories like the BBC News item (www.bbc.co.uk/news/business-25457001) last week which, in describing the prospects for internet television broadcasting, profiled NTVE (Nautical TV Europe), based in Magaluf, Mallorca, and financed not by advertising but by sponsorship and product placement. But its distribution model is, well, fairly cheap… or free, if you don’t find that four letter word offensive.
So this may not be an option for Mr Murdoch, who was rather hoping that the satellite/pay TV model would continue to fund him through the next three decades. The signs at the moment are that, under the same digital network stress as print and music, TV programme distribution will change radically to a “My network” model, still paid for by subscription but no longer powerful as an advertising medium. And beyond that? Maybe the subscription will be to a programme guide that enables you to decide what you watch, when and where and on what sort of device, with monthly billing depending on the choices made, the storage used in the Cloud and the deals that you make with programme makers? It is Your Choice!
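As a back-of-the-envelope illustration of that kind of billing, here is a minimal sketch in Python. Every price, rate and figure in it is invented for the purpose – it shows how a guide fee, per-programme deals and Cloud storage might add up, not anyone’s actual tariff.

```python
# Illustrative sketch only: a hypothetical "My network" monthly bill.
# All prices and rates below are invented for illustration.

GUIDE_SUBSCRIPTION = 4.99      # flat monthly fee for the programme guide (hypothetical)
CLOUD_STORAGE_PER_GB = 0.05    # per-gigabyte rate for the viewer's Cloud library (hypothetical)


def monthly_bill(programme_deals, cloud_storage_gb):
    """One month's charge: guide fee + deals made with programme makers + Cloud storage.

    programme_deals  -- prices agreed directly with programme makers this month
    cloud_storage_gb -- gigabytes of programmes held in the viewer's Cloud library
    """
    total = GUIDE_SUBSCRIPTION + sum(programme_deals) + CLOUD_STORAGE_PER_GB * cloud_storage_gb
    return round(total, 2)


if __name__ == "__main__":
    # A viewer who bought three programmes directly and stores 40 GB of recordings.
    print(monthly_bill([1.99, 0.99, 2.49], 40))   # -> 12.46
```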
Jan 9
Post-Pub and Preprint – The Science Publishing Muddle
Filed Under B2B, Big Data, Blog, data analytics, healthcare, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, STM, Uncategorized, Workflow | 2 Comments
New announcements in science publishing are falling faster than snowflakes in Minnesota this week, and it would be a brave individual who claimed to be on top of a trend here. I took strength from Tracy Vence’s review, The Year in Science Publishing (www.the-scientist.com), since it did not mention a single publisher, confirming my feeling that we are all off the pace in the commercial sector. But it did mention the rise, or resurrection, of “pre-print servers” (now an odd expression, since no one has printed anything since Professor Harnad was a small boy, but a way of pointing out that PeerJ’s PrePrints and Cold Spring Harbor’s bioRxiv are becoming quick and favourite ways for life sciences researchers to get the data out there and into the blood stream of scholarly communication). And Ms Vence clearly sees the launch of NCBI’s PubMed Commons as the event of the year, confirming the trend towards post-publication peer review. Just as I was absorbing that, I also noticed that F1000, which still seems to me to be the pacemaker, had just recorded its 150,000th article recommendation (and a very interesting piece it was, about the effect of fish oil on allergic sensitization, but please do not make me digress…)
The important things about the trend to post-publication peer review are all about the data. Both F1000 and PubMed Commons demand the deposit or availability of the experimental data alongside the article, and I suspect that this will be a real factor in determining how these services grow. With reviewers looking at the data as well as the article, comparisons are already being drawn with other researchers’ findings, and the evidential data throws up connections that would not appear if the article alone were searched. F1000Prime now has 6000 leading scientists in its Faculty (including two who received Nobel prizes in 2013) and a further 5000 associates, but there must still be questions about the scalability of the model. And about its openness. One of the reasons why F1000 is the poster child of post-publication peer review is that everything is open (or, as they say in these parts, Open). PubMed Commons, on the other hand, has followed the lead of PubPeer and demanded strict anonymity for reviewers. While this follows the lead of the traditional publishing model, it does not allow the great benefit of F1000: if you know who you respect and whose research matters to you, then you also want to know what they think is important in terms of new contributions. The PubPeer folk are quoted in The Scientist as saying in justification that “A negative reaction to criticism by somebody reviewing your paper, grant or job application can spell the end of your career.” But didn’t that happen anyway, despite blind, double-blind, triple-blind and even SI (Slightly Intoxicated) peer reviewing?
And surely we now know so much about who reads what, who cites what and who quotes what that this anonymity seems out of place, part of the old lost world of journal brands and Open Access. The major commercial players, judging by their announcements as we were all still digesting turkey, see where the game is going and want to keep alongside it, though they will farm the cash cows until they are dry. Take Wiley (www.wiley.com/WileyCDA/pressrelease), for example, whose fascinating joint venture with Knode was announced yesterday. This sees the creation of a Knode-powered analytics platform provided as a service to learned societies and industrial research, allowing Wiley to deploy “20 million documents and millions of expert profiles” to provide society executives and institutional research managers with “aggregated views of research expertise and beyond”. Anyone want to be anonymous here? Probably not, since this is a way of recognizing expertise for projects, research grants and jobs!
And, of course, Elsevier can use Mendeley as a guide to what is being read and by whom. Their press release (7 January) points to the regeneration of the SciVal services, “providing dynamic real-time analytics and insights into the… (Guess What?)… Global Research Landscape”. The objective here is one dear to governments in the developed world for years – to help research managers benchmark themselves and their departments so that they know how they rank and where it will be most fruitful to specialize. So we seem, quite predictably, to be entering an age where time to read is coming under pressure from the volume of available research articles and evidential data, so it is vital to know, and know quickly, what is important, who rates it, and where to put the most valuable departmental resources – time and attention span. And Elsevier really do have the data and the experience to do this job. Their Scopus database of indexed abstracts, all purpose-written to the same taxonomic standard, now covers some 21,000 journals from over 5000 publishers. No one else has this scale.
The road to scientific communication as an open and not a disguised form of reputation management will have some potholes, of course. CERN found one, well reported in Nature’s News on 7 January (www.nature.com/news) under the headline “Particle physics papers set free”. CERN’s plan to use its SCOAP3 project to save participating libraries money, which was then to be disbursed to force journals to go Open Access, met resistance – but from the APS, rather than from the for-profit sector. Meanwhile the Guardian published a long article (http://www.theguardian.com/science/occams-corner/2014/jan/06/radical-changes-science-publishing-randy-schekman) arguing against the views of Nobel laureate Dr Randy Schekman, the proponent of boycotts and bans for leading journals and supporters of impact factor measurement. Perhaps he had a bad reputation management experience on the way to the top? The author, Steve Caplan, comes out in favour of those traditional things (big brands and impact factors), but describes their practices in a way which would encourage an uninformed reader to support a ban! More valuably, the Library Journal (www.libraryjournal.com/2014/01) reports this month on an AAP study of the half-life of articles. Since this was done by Phil Davis it is worth some serious attention, and the question is becoming vital – how long does it take for an article to reach half of the audience who will download it in its lifetime? Predictably, the early results are all over the map: health sciences are quick (6-12 months) but maths and physics, as well as the humanities, have long half-lives. So this is another log on the fire of the argument between publishers and funders over the length of Green OA embargoes. This problem would not exist, of course, in a world that moved to self-publishing and post-publication peer review!
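To make that metric concrete, here is a minimal sketch, with wholly invented download figures, of how a download half-life can be read off an article’s monthly download history. It illustrates the definition being measured, not Davis’s actual methodology or data.

```python
# Minimal sketch with invented numbers: an article's download "half-life" is the
# point by which half of its lifetime downloads have occurred. Nothing below is
# taken from the AAP/Davis study; the figures are purely illustrative.

def download_half_life(monthly_downloads):
    """Return the month (1-based) in which cumulative downloads reach 50% of the total."""
    total = sum(monthly_downloads)
    cumulative = 0
    for month, downloads in enumerate(monthly_downloads, start=1):
        cumulative += downloads
        if cumulative >= total / 2:
            return month
    return None


if __name__ == "__main__":
    # A hypothetical health-sciences article: heavily downloaded early, then tailing off.
    fast = [400, 250, 120, 80, 50, 40, 30, 20, 10, 10]
    # A hypothetical mathematics article: downloaded steadily for five years.
    slow = [20] * 60
    print(download_half_life(fast))   # -> 2 (within the first year)
    print(download_half_life(slow))   # -> 30 (two and a half years in)
```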
POSTSCRIPT: For the data trolls who pass this way: the Elsevier SciVal work mentioned here is powered by HPCC (High Performance Computing Cluster), now an open source Big Data analytics engine, but created for and by LexisNexis Risk to manage its massive data analytics tasks as ChoicePoint was absorbed and it set about creating the risk assessment system that now predominates in US domestic insurance markets. It is rare indeed among major information players to see technology and expertise developed in one area used in another, though of course we all think it should be easy.