Feb 6
Robotic Journalism and Death Row
Filed Under B2B, Blog, Financial services, Industry Analysis, internet, mobile content, news media, online advertising, Publishing, Uncategorized, Workflow
I am still rolling across America, journeying last week from San Diego to New York (again) for the DeSilva+Phillips Media Dealmakers Summit at the Pierre, and now on to Nashville, Tennessee. More below on the conference, but first back to a theme started in my blog "News not fit to Print". I am becoming obsessed with the science around automated story development, and now see it everywhere I look. And everywhere I look I see a Western culture obsessed with fact-based journalism. As in Europe, much of the core material in reportage is statistical. Today is Super Bowl Sunday and the stats are coming down like dandruff, but I already wrote about Statsheets in the previous article, so let's not go there. Instead, I have a copy of the Tennessean for 6 February in front of me. Let's try that.
First off, this is a good newspaper and nothing I say is intended to denigrate it. But the urge to "factualize" is all over it. The front page headline reads "Teaching immigrants is a growing challenge". Apparently 22% of Metro Nashville public school students now need to learn English as a second language, compared to 15% in 2005. In an annual student enrolment of 78,000, the city has 10,692 students whose first language is Spanish, 1,749 Arabic, 999 Kurdish, and more and more breakdowns until we reach the Burmese and Karen speakers at 169 and Amharic speakers at 154. Think this is a naturally statistical story? Let's go to the local news section, whose arresting headline is "Execution Drug Options Limited". Here we learn that Tennessee has 86 inmates on death row but only enough drugs to execute 8 of them. The pre-fatal-injection anesthetic sodium thiopental is no longer made in the US, so state governments are having to use veterinary anesthetics or buy the drug covertly in Europe – a dealer based in the offices of a British driving school in London is intriguingly mentioned in this connection…
But I am getting carried away. The point is that the core "facts" of the narrative in these stories are based on the figures, and that is where Narrative Science (http://www.narrativescience.com/) comes in. As I was writing my first story this company announced a $6 million funding round led by Battery Ventures. The company was founded by a group whose experience includes Google, DoubleClick, and computing and journalism at Northwestern in Evanston, Ill. Their idea is to take all those fact-based stories, turn the facts into computer-based narrative, create templates around their recurrence, and generate a new story with each update: employment statistics, oil production, share price movements, population change and so on. We are constantly comparing this quarter to last, or to the same quarter last year, or to the best or worst in the last 5, 10 or 50 years. Where these are recurrent interests a computer can write them very effectively – and, a cynic would say, is more likely to report them accurately.
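To make the template idea concrete, here is a toy sketch in Python of how such a recurrence-driven story might be assembled. Narrative Science's actual system is proprietary and far more sophisticated; every figure, name and phrase below is invented for illustration.

```python
# A toy illustration of template-driven story generation: compare the
# latest figure with earlier periods and pour the results into stock
# journalistic phrasing. All data, names and wording here are invented.

def describe_change(current, previous):
    """Return a verb phrase for the move from `previous` to `current`."""
    pct = (current - previous) / previous * 100
    if pct > 0:
        return f"rose {pct:.1f}%"
    if pct < 0:
        return f"fell {abs(pct):.1f}%"
    return "was unchanged"

# Quarterly output figures, in million barrels a day (hypothetical).
oil_output = {"Q4 2009": 1.82, "Q3 2010": 1.91, "Q4 2010": 1.87}

story = (
    "Oil production {vs_last} from the previous quarter and {vs_year} "
    "from the same quarter a year earlier, at {latest} million barrels a day."
).format(
    vs_last=describe_change(oil_output["Q4 2010"], oil_output["Q3 2010"]),
    vs_year=describe_change(oil_output["Q4 2010"], oil_output["Q4 2009"]),
    latest=oil_output["Q4 2010"],
)

print(story)
# -> Oil production fell 2.1% from the previous quarter and rose 2.7%
#    from the same quarter a year earlier, at 1.87 million barrels a day.
```

Point a template like this at a feed that refreshes every quarter and the "journalist" rewrites the story with every update, which is exactly the recurrence the business model depends on.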
And the implications of this are immense, and were brought home to me by a casual conversation last week with the digital director of a leading B2B player. He is a Narrative Science triallist and his service is due to be launched during February. He noted both the need for very rapid updates to market stats in his sector and the fact that standard conventions around comparisons made these stories ideal for computerized updates. These too were stories that needed to be squirted quickly onto mobile platforms – comment could follow later, once everyone had the core narrative. He then alluded to the cost savings and the annual cost of journalists. I walked away with the idea in mind that the critical path to saving B2B, as advertising fails to return, will be a massive change in the cost base of the industry. Ironically, efforts to create a new computerized journalism at Northwestern may well end in the employment of fewer journalists, though those who are needed will be needed at a much higher level of intellectual input.
Finally, a footnote on the conference. My panel of B2B players were all stars (Mason Slaine, Clare Hart and Scott Schulman), but beyond them I was very taken with David Liu, CEO of The Knot, and the two founders of Gilt Groupe: B2C is certainly coming into its own. But the session that made me most thoughtful was an interview with David Levin, the CEO of UBM. His intellectually rigorous approach to a careful acquisition and disposal programme was very admirable. But is the old niche-based B2B model still viable? I see Thomson Reuters creating an increasingly cross-sectoral approach as they build bridges between legal, tax and regulatory on the one hand and financial services on the other. Instead of unrelated niches, are we going back to cross-selling related sectors to get growth leverage? And if we are, is the Informa/UBM/EMAP model beginning to creak, since these players have too little in any one niche to cross-sell effectively? It depends how you define sector and niche, of course, but we could be in line for another age of Happy Families card-game swaps, aka vertical sector consolidation.
Jan 9
Decline and Fall of the Google Empire
Filed Under B2B, Blog, Industry Analysis, internet, mobile content, online advertising, Search, semantic web, Uncategorized
In the course of this year I need to find a local source of shredding services in my desperate fight to stop this hut from drowning in paper. By the end of the year I shall need to have bought a new car. In the idle twilight between Christmas and New Year I found myself Googling both of these topics – and the process took longer, and took me to more places, than I had ever imagined. I also read more advertising, dodgy reviews and spam than I had ever imagined, so when I read that Paul Kedrosky had had an identical experience (http://broadstuff.com/archives/2370-On-the-increasing-uselessness-of-Google……html) I perked up a bit. It is always good to find really clever people reacting just as you did. I then discovered a whole band of bloggers through December and January basically arguing that the Web of spam and misleading search of a decade ago, which Google had cleaned up effectively in its early days, had now returned to haunt us – on Google.
Whether this is the fault of Google is debatable. Some argue that it is SEO which causes the damage, others that it is the insatiable hunger for Google advertising. Some appear to think that a search environment without advertising will do the trick, and Vivek Wadhwa at UC Berkeley argues convincingly for Blekko (http://techcrunch.com/2011/01/01/why-we-desperately-need-a-new-and-better-google-2/). Both of these blogs demonstrate key facets of the debate, but to my mind the debate is couched in entirely the wrong terms. What we must think about is not who replaces Google, but whether keyword searching has a future.
Now I must declare a prejudice. I have never been a huge fan of keyword searching. My experience of search began in the early 1980s when, as a (younger) Thomson manager, I was deputed to build an online service for lawyers. We used a search package called STATUS which had been created at the UK's Atomic Energy Research Establishment to search UK statutes and statutory instruments for references to the enactments which had set up the AERE. Both of its inventors worked for me, one as an advisor, the other as my CTO. Both warned me daily that the system we were operating could do no more than find words in documents, and cautioned me not to fall victim to the idea that we were thereby creating "answers" or "solutions". The result was that I was never a victim of the "myth of infallibility" that pervaded early web search engines and has become an essential Google quality in the past 5 years. Infallible? A system that cannot distinguish the grossly out of date from today, that can be spoofed into presenting advertising copy as answers, and that cannot represent a thought or a concept?
As a result of this early inoculation, my sights have long been set on finding search solutions, so I checked back with some of my legal market successors this week to see how they were faring. Was Google law going to sweep them away? Would the service principles of Google Scholar, once applied to law, as Google have claimed, create universal free service values that would separate the lawyer from his dependence on subscription-based legal retrieval engines? Not so, I learnt from LexisNexis. In fact, the opposite is the case. The body of law is finite, its authorship necessarily limited. In any legal domain, retrieval engine investment is now dedicated to tagging content with semantic metadata, developing inference rules within the ontological structures created as taxonomies are refined and redeveloped, and emerging as semantic search. As law is increasingly defined in conceptual blocks which can be developed into a classification system for the ideas and arguments that lie behind legal concepts, systems are emerging which owe little to the world that Google still inhabit. And what Lexis (and undoubtedly Westlaw) are doing today will be the way in which law offices search contextually in their own and third-party content tomorrow.
Is this just a law market phenomenon? Well, the techniques and ideas mentioned here have been heavily used in scientific research, especially in the life sciences, for the past five years. The standards environment created by Tim Berners-Lee and the World Wide Web Consortium anticipated this development, and the SPARQL query language for semantic data exemplifies a direction taken by a number of semantic search start-ups. The drawback has been the tendency for searching on concepts to become very domain-focussed, since within a single domain the taxonomy can be more precise and the concepts easier to describe. But as we move forward, this may be the next big push behind vertical search. Despite (or perhaps because of) the fact that we have stopped talking about them, community-based vertical sector players like Globalspec have been able to take a strong grip on the way in which professionals work in a sector like engineering. Once community activity – making engineering design specs available for cross-searching – becomes susceptible to semantic enquiry, the ability of vertical business players to lock in users and establish themselves as the performance benchmark (and compliance engine) of the sector becomes realistic. The scenario that results is sometimes monopolistic, often duopolistic, and seldom capable of sustaining rafts of competing content players.
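For the curious, here is a toy illustration of what concept-level retrieval looks like in practice, written in Python with the open-source rdflib library. Everything in it – the vocabulary, the cases, the one-line taxonomy – is invented for the purpose; real legal ontologies run to many thousands of concepts and inference rules.

```python
# A toy concept-level retrieval using the open-source rdflib library:
# documents are tagged with legal concepts rather than bare keywords,
# and SPARQL queries retrieve by concept. The vocabulary, cases and
# taxonomy below are all invented for illustration.

from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/law#")
g = Graph()

# Tag two hypothetical judgments with the concepts they turn on.
g.add((EX.case_smith, RDF.type, EX.Judgment))
g.add((EX.case_smith, EX.title, Literal("Smith v Acme Ltd")))
g.add((EX.case_smith, EX.concerns, EX.ConstructiveDismissal))
g.add((EX.case_jones, RDF.type, EX.Judgment))
g.add((EX.case_jones, EX.title, Literal("Jones v Widget plc")))
g.add((EX.case_jones, EX.concerns, EX.UnfairDismissal))

# One line of taxonomy: constructive dismissal is a narrower form of
# unfair dismissal, so a search on the broader concept should find both.
g.add((EX.ConstructiveDismissal, EX.broader, EX.UnfairDismissal))

results = g.query("""
    PREFIX ex: <http://example.org/law#>
    SELECT ?title WHERE {
        ?case a ex:Judgment ;
              ex:title ?title ;
              ex:concerns ?concept .
        ?concept ex:broader* ex:UnfairDismissal .
    }
""")

for row in results:
    print(row.title)   # Smith v Acme Ltd, Jones v Widget plc
```

The point of the sketch is the last triple: because constructive dismissal is declared a narrower species of unfair dismissal, a query on the broader concept retrieves both judgments – something no keyword match against the documents' text could guarantee.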
So does Google remain in place just as a consumer environment? No, I think that Facebook and its successors become the consumer research environment. Search by asking someone you know, or at least have a connection with, and get recommendations and references which take you right to the place where you buy. Search in mobile environments is already taking too long and throwing up too many false leads. Anyone here used a shredding company in South Bucks? How did you rate them? How do I contact them? I have this fantasy that I mention "Google" to my grandchildren and they say "did you mean the phone company?" What is the best strategy job in the industry? The one that defines the line of migration for Google out of search and towards the next big marketplace (pity they missed Groupon!).