Mar 23
The Odd Future of Aggregation
Filed Under B2B, Big Data, Blog, data analytics, Education, eLearning, Financial services, healthcare, Industry Analysis, internet, mobile content, news media, Publishing, Search, semantic web, social media, STM, Workflow
Mr Bezos and his cohorts at WaPo (the Washington Post to you and me, earthlings) have decided, we are told, on an aggregation model. As far as skilled translators of Bezos-speak can tell, this will mean bringing together licensed news content from all over the globe – spicy bits of the Guardian, a thin but meaty slice of the London Times, a translated and processed sausage segment of FAZ, a little sauerkraut from Bild, a fricassee of Le Monde… well, you get the picture, and the fact that I am writing this before dinner. These ingredients will get poured into a WaPo membership pot, heated and served to members who want to feel that they are on top of global goings-on, from the horse's mouth, and without having to read the endless recyclings and repetitions which characterize the world media at source.
Well, I see the point, and the idea of membership and participation seems to me one which has endless energy these days. But I have been thinking for several years now that the aggregation business model as experienced from 1993 onwards on the Web is on its last legs. Believing that "curation" is too often a word which we use when we are trying to maintain or defend a job, I have tried to steer clear of imagining that storage, the ultimate network commodity, was a good place to start building a business. In the early days of the Web it was certainly different. Then we could create the whole idea of the "one stop shop" as a way of simplifying access and reducing friction for users. All of the things we were collecting and storing, for the purposes of aggregation, were in fact "documents", and their owners wanted them to be stored and owned as documents, bought as documents and downloaded as documents. The early reluctance of STM publishers to apply DOI identity beyond the article level and make citations, references or other document sub-divisions separately searchable seems in retrospect to demonstrate the willingness of IP owners to manipulate access in order to protect the business model.
Three clear developments have comprehensively undermined the utility of content aggregation:
* the desire of users to move seamlessly from one part of one document through a link to another part of a different document seems to them a natural expression of their existence as Web users – and in the content industries we encouraged this belief.
* the availability of search tools in the Web which permit this self-expression simply raises the frustration level when content is locked away behind subscription walls, and increases the likelihood that such content will be outed to the Open web.
* the increasing use of semantic analysis and the huge extension of connectivity and discoverability which it suggests makes the idea that we need to collect all or sufficient content into a storehouse and define it as a utility for users just by the act of inclusion a very outdated notion indeed.
It seems to me that for the past decade the owners of major service centres in the aggregation game – think Nexis, or Factiva, or Gale or ProQuest – have all at various times felt a shiver of apprehension about where all of this is going, but with sufficient institutional customers thinking that it is easier to renew than rethink, the whole aggregation game has gone gently onwards, not growing very much, but not declining either. And while this marriage of convenience between vendors and payers gives stability, end users are getting frustrated by a bounded Web world which increasingly does not do what it says on the tin. And since the Web is not the only network service game in town, innovators look at what they might do elsewhere on internet infrastructure.
So, if content aggregation seems old-fashioned, will it be superseded by service aggregation, creating cloud-based communities of shared interests and shared/rented software toolsets? In one sense we see these in the Cloud already, as groups within Salesforce, for example, begin to move from a tool-using environment to user-generated content and, more recently, the licensing of third party content. This is not simply, though, a new aggregation point, since the content externally acquired is now framed and referenced by the context in which users have used and commented upon it. Indeed, with varying degrees of enthusiasm, all of the great aggregators mentioned above have sought to add tools to their armoury of services, but usually find that this is the wrong way round – the software must first enhance the end user performance, then lend itself to community exploitation – and then you add the rich beef stock of content. For me, Yahoo were the guys who got it right this week when they bought Vizify (www.vizify.com), a new way of visualizing data derived from social media. This expresses where we are far more accurately than the lauded success of Piano Media (www.pianomedia.com). I am all for software companies emerging as sector specialists from Slovakia onto a world stage, but the fact that there is a whole industry, exemplified by Newsweek's adoption of Piano this week, concerned with building higher and harder paywalls instead of climbing up the service ladder to higher value seems to me faintly depressing.
And, of course, Mr Bezos may be right. He has a good track record in this regard. And I am told that there is great VC interest in "new" news: Buzzfeed $46m; Vox $80m; Business Insider $30m, including a further $12m last week; Upworthy $12m. Yet I still think that the future is distributed, that collection aggregation has a sell-by date, and that the WaPo membership could be the membership that enables me to discover the opinions of the world rather than the news, through a smartly specialized search tool that exposes editorial opinion and thinking – and saves us from the drug of our times – yet more syndicated news!
Mar 6
The Media Regeneration Game
Filed Under B2B, Blog, eBook, Education, Industry Analysis, internet, mobile content, news media, online advertising, Publishing, Reed Elsevier, social media, Uncategorized
The last two days at Digital Media Strategies (Kings Place, London, 4-5 March 2014) have been amongst the best that I have spent in a conference hall in a decade. And I have wide experience to call upon! But Neil Thackray and Rory Brown and their team at the Media Briefing company pulled out all the stops to advance the game on their inaugural effort last year, and in the process pulled over 340 delegates and some first class "big names" and an even better class of "previously unknowns" from this diverse industry. And they really set me thinking: where were all these newspaper bosses and magazine tycoons during the long years when "it will never happen here" was the rule? Some still looked a bit nervous – Simon Fox, CEO of Trinity Mirror, caught in the headlights of a tigerish interrogation from Thackray, looked as if he were about to confess to war crimes at HMV and indecent assault on "The People" – but most of his colleagues were self-assured to the point of near-arrogance.
That at least could explain the decision of Mike Darcey, the News Corp CEO, to spend 8 minutes of his own allotted time taking apart what he fancied to be the strategy of the next speaker, Andrew Miller, CEO of The Guardian Media Group. At least this precluded further dwelling upon the comparative failure of paywalls and the comparative lack of impact of digital advertising. And it enabled everyone to say that they were faithfully following the user experience. Yet it had the odd effect of making News Corp into a sort of John the Baptist warm-up act for the Guardian, to which one felt that Andrew Miller responded by indicating that he had a better plan, but not revealing fully what he had up his sleeve. To those in the audience inured to the media having no plan at all, this was a tonic. At the moment the Guardian seems to be a connectivity junkie, rightly glorying in its content re-use and the amount of referral traffic it gets, celebrating its brand positioning as a global voice of liberal values and trying to draw the advertising it can get on this pitch. But I get an underlying feeling that they know that advertising is not the answer, and the room sat up when the topic turned to Guardian Membership.
Clearly if the Guardian can monetize its community effectively then it may be possible to get millions of people to subscribe to its values and buy into aspects of its content feed. Andrew Miller showed a picture of C P Scott and laid tributes before the lares and penates of great journalism, as indeed he should (and neither he nor I care that Edward Snowden appears to be a right wing Republican with a wholly eighteenth century view of the rights of the individual). However, if you have large populations of like-minded people with a strong community ethic then you can create – Guardian (Eye)witness. I well remember, while chairing Fish4, the frustration of the regional press competing with Mr Miller as he distributed free AutoTrader software to every used car dealer, enabling them to organize their inventory and upload easily – to AutoTrader. As a Guardian Member will I get the equivalent, thus broadening the scope (and reducing the cost?) of Great Journalism? Too early to say, perhaps? The editors would be talking quality control and the journos would be talking to the National Union of Journalists, but…
But at least we are all talking now. As a digital participant from 1980 and an internet-watcher from 1993, I am interested by how much of the industry response was fear and loathing. Hearst, at this conference a great example of digital thinking, spent the early years of the internet buying medical databases and B2B applications. Brilliant purchases, but what did they say about management's view of their existing media futures? In the same terms, DMGT has turned itself in these time periods from being a newspaper company into a B2B player. No harm in that, but could earlier action have preserved the original structures? Or maybe the media is best re-invented not by its current practitioners but by complete outsiders – great examples in this conference from Buzzfeed and from Business Insider? And then, what do we make of what seems to be a very European trend at present – letting the staff who know the markets create and test the ideas for recreating media and beyond-media services?
I had heard a little about Sanoma's regenerative Accelerator programmes before, and so was full of anticipation when Lassi Kurkijarvi covered the stage with energy and enthusiasm. With both internal and external venture activity he had a lot to cover. It is now fairly common for media players to invest in start-ups and develop incubators (Reed Elsevier have been venture capital investors for a decade; Holtzbrinck have their seed corn funding and efforts like Macmillan Digital Science and Digital Education; Gruner und Jahr spoke of their activities here), but getting 150 employees into a boot camp and encouraging ideation? Only for the Finns? Not at all, said Lassi. Here was a planned process of open innovation: starting with a mass kick-off meeting, a webinar-based process, staff making quick pitches to get support for ideas, an initial selection of 20, crowd-sourced selection of 5 for a boot camp experience, and the result is 3 ideas which the company is now developing. So look out for Spot-a-shop, Huge or ClipScool – they did not come from Silicon Valley or Tech City, Old Street, but they may be none the less valuable as they express the knowledge of customers built up within a diversified media conglomerate like Sanoma.
So what does this mean? That media corporations can be regenerated from within? What would we have given to know that in 1993!