May 11
Decline and Fall of the Google Empire: Revisited
Filed Under B2B, Big Data, Blog, Financial services, healthcare, Industry Analysis, internet, mobile content, news media, online advertising, Publishing, Search, semantic web, social media, STM, Uncategorized, Workflow
I have been waiting to write this post for four months. Ever since I wrote a piece with this title in January 2011 friends and colleagues have been asking “And now…?”, and the question has intensified since Google’s results announcement in January 2012. 25% revenue growth? Breaking $10 billion revenue in a single quarter? In anyone else’s results statement this would have been sparkling news in a recession. Google’s shares dropped 10% on the news. And then the analysis. Cost-per-click – Google’s revenue from advertisers – fell 8% in the quarter, and by the same amount in the previous quarter. This is a company still totally dependent on advertising. Imagine a newspaper company whose yield from classifieds fell 8% a quarter, and you see the wonderful way in which “velocity”, as Larry Page describes growth, disguises performance.
When I last wrote on this subject I was trying to describe an advertising-based search company that was trying to kick the habit and migrate elsewhere. Clearly Android, now on 250 million handsets, is the most obvious escape hatch. Analysts forecast that 2012 will see Android account for 12% of gross revenues, which demonstrates that migration is slow and old habits die hard. So if my grandchildren do not grow up thinking of Google as a phone company, as I suggested in the original blog, what will they think of the mature Google, shuffling along in the carpet-slippers of 10% growth? Well, they could imagine it as an operating system – Chrome is still growing strongly and Chrome OS has not been fully exploited. Or they could think of it as a social network environment: Google+ is now up to 90 million members, still a fraction of Facebook, but up from 40 million the previous quarter. Indeed, social networking may be a “must win”, or at least a “must compete strongly”, environment for Google if the search-advertising market is to be prolonged long enough for these other options to emerge from under the strategy umbrella. With Google taking the axe to so many of its product development fields directly related to search, that requirement only becomes more pressing.
However, what really gets me writing this evening is the strong suspicion that Google themselves think that the answer lies elsewhere. An interview with Ben Fried, the Google CIO, in the Wall Street Journal yesterday has him saying that the Cloud is reaching a tipping point (http://blogs.wsj.com/cio/2012/05/10/google-cio-ben-fried-says-cloud-tipping-point-is-at-hand/?mod=google_news_blog). Google clearly feel that Cloud computing, in the age of ubiquitous broadband (whenever that happens), will be their route to a business base in the individual and small business sectors. As Google has used the Cloud to take costs out of its own core business, which given the comments above it has needed to do, so it can use its global data centre coverage to do the same for others. In this world, where we can fondly imagine two remotely sited workers watching each other’s real-time edits on a document in Google Docs, small development teams can access a wide range of tools and pursue the sort of “fail fast”, constantly iterating, development strategies beloved of major corporates.
But this is a place where the competition is established, hot and strong, and despite Google’s history as a solutions developer, Apple and Microsoft go back further. iCloud, dependent on a syncing environment rather than on broadband, moves all the files to the Cloud, with users retaining copies and, as Steve Jobs is always quoted as saying, demoting “the PC to be just a device”. There is a different philosophy of Cloud here, but one that seems based more on now than on when. And then again there is Amazon, inspired, as was Google, by the long struggle to use the Cloud to solve its own back office issues, now offering AWS as a solution in the very markets that Google thinks should be its own.
So it cannot be just the Cloud that Google see as their platform for escaping advertising dependence. But the Cloud and Big Data? This article’s timing is much influenced by the announcement of Google BigQuery, which, although semi-publicly trialled since December last year, was formally launched on 1 May (http://www.zdnet.com/blog/big-data/googles-bigquery-goes-public/405). Since it covers databases of up to two terabytes (seems big to me!), it has been described as a business intelligence tool by some commentators who expected larger database environments from the inventor of MapReduce (which works in petabytes), the company that kicked off this Big Data thing to begin with and is clearly working here, as elsewhere, from the “solve our own problems, then generalize to solve yours” standpoint indicated above. But here is a real irony: if you are working in a Big Data context much of what you will be looking for is indexed on Google, but not searchable in a Google Cloud context. Again, contrast Amazon, where they have now begun adding public databases to their Cloud offering, searchable in their EC2 (Elastic Compute Cloud) context. Here are some of the first offerings (a short sketch of a BigQuery query over hosted public data follows the list):
- “Annotated Human Genome Data provided by ENSEMBL
The Ensembl project produces genome databases for human as well as almost 50 other species, and makes this information freely available.
- Various US Census Databases from The US Census Bureau
United States demographic data from the 1980, 1990, and 2000 US Censuses, summary information about Business and Industry, and 2003-2006 Economic Household Profile Data.
- UniGene provided by the National Center for Biotechnology Information
A set of transcript sequences of well-characterized genes and hundreds of thousands of expressed sequence tags (EST) that provide an organized view of the transcriptome.
- Freebase Data Dump from Freebase.com
A data dump of all the current facts and assertions in the Freebase system. Freebase is an open database of the world’s information, covering millions of topics in hundreds of categories. Drawing from large open data sets like Wikipedia, MusicBrainz, and the SEC archives, it contains structured information on many popular topics, including movies, music, people and locations – all reconciled and freely available.”
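To make the BigQuery announcement a little more concrete, here is a minimal sketch of what querying hosted public data looks like. It uses Google’s current Python client library and the public Shakespeare sample table; both are my assumptions for illustration, not anything in the launch announcement itself.

    # Minimal sketch: count distinct words per corpus in BigQuery's public
    # Shakespeare sample table. Assumes the google-cloud-bigquery client
    # library is installed and application credentials are configured.
    from google.cloud import bigquery

    client = bigquery.Client()

    query = """
        SELECT corpus, COUNT(DISTINCT word) AS distinct_words
        FROM `bigquery-public-data.samples.shakespeare`
        GROUP BY corpus
        ORDER BY distinct_words DESC
        LIMIT 5
    """

    for row in client.query(query).result():
        print(row.corpus, row.distinct_words)

The detail of the syntax matters less than the model: the data never leaves Google’s data centres, and only the query and its result travel over the network.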
In all, Google now face a struggle. As they move to a new service environment, we need to remember that they created the original company not by inventing search but by improving it. PageRank was a big step forward in its day and created a meteoric growth company. From this they built an Empire, now maturing. Edward Gibbon, commenting upon the fall of Rome and the rise of its rivals, marked a certain point of no return: “If all the barbarian conquerors had been annihilated in the same hour, their total destruction would not have restored the empire of the West: and if Rome still survived, she survived the loss of freedom, of virtue, and of honour.”
Is this where Google now is, and can its still youthful originators recreate it?
Apr 27
Open Up Your APIs!
Filed Under B2B, Big Data, Blog, eBook, Education, eLearning, Financial services, Industry Analysis, internet, mobile content, news media, Publishing, Search, semantic web, social media, STM, Thomson, Uncategorized, Workflow | 3 Comments
In this industry five years is enough to benchmark fundamental change. This week I have been at the 9th Publishers’ Forum, organized as always by Klopotek, in Berlin. This has become, for me, a must-attend event, largely because while the German information industry is one of the largest in Europe, German players have been marked by a conservative attitude to change, and a cautious approach to what their US and UK colleagues would now call the business model laws of the networked information economy. At some level this connects to a deep German cultural love affair with the book as an object, and how could that not be so in the land that produced Gutenberg? On another level, it demonstrates that German business needs an overwhelming business case justification to institute change, and that it takes time for these proofs to become available. Which is not to say that German businesses in this sector have not been inventive. An excellent two-part case study run jointly by Klopotek and de Gruyter was typical: de Gruyter are the most transformed player in the STM sector because they have seized upon distribution in the network and selling global access as a fast growth path, and Klopotek were able to supply the necessary eCommerce and back office attributes to make this ambition feasible. And above all, in a room of more than 300 newspaper, magazine and book executives, we were at last able to fully exploit the language and practice of the network in information handling terms. This dialogue would have been impossible in Germany five years ago. A huge attitudinal change has taken place. Now we can deploy our APIs and allow users to get the value and richness of our content, contextualised to their needs, instead of covering them with the stuff and hoping they get something they want.
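What “deploy our APIs” can mean in practice is easily sketched. The example below is hypothetical: a publisher exposes a single article together with its descriptive metadata as JSON, so that third parties can pull and contextualise the content rather than being handed whole pages. The endpoint, fields and identifiers are invented for illustration, not any real publisher’s schema.

    # Hypothetical sketch of a minimal content API using Flask: one article,
    # returned with its subject metadata, ready to be recombined elsewhere.
    from flask import Flask, jsonify, abort

    app = Flask(__name__)

    # Stand-in for the publisher's content store.
    ARTICLES = {
        "1001": {
            "title": "Open Up Your APIs!",
            "subjects": ["publishing", "metadata", "semantic web"],
            "published": "2012-04-27",
            "full_text_url": "https://example.com/articles/1001/fulltext",
        }
    }

    @app.route("/api/articles/<article_id>")
    def get_article(article_id):
        article = ARTICLES.get(article_id)
        if article is None:
            abort(404)
        return jsonify(article)

    if __name__ == "__main__":
        app.run()

The point is the shift of control: the user, or the user’s software, asks for exactly the slice of content and metadata it needs, in a form it can recombine.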
In some ways the Day 2 keynote from Andrew Jordan, CTO at Thomson Reuters GRU business, exemplified the extent of this. The incomparable Brian O’Leary had started us off on Day 1 in good guru-ish style by placing context in its proper role and reminding us that it is not content as such but its relationships that increasingly concern us. You could not listen to him and still believe that content was the living purpose of the industry, or that the word “publishing” had not changed meaning entirely. Michael Healy of CCC and Peter Clifton of +Strategy followed him to hammer home the new world of collaboration and licensing, and the increasing importance of metadata in identifying and describing tradeable entities, so we were well on the way towards a recognition of new realities. Jim Stock of MarkLogic ferried us there before dinner, using the connected content requirements of BBC Sport in an Olympic year to get us started in earnest on semantic approaches to discovery and on our urgent need to create platform environments that allow us to use our content fluently in this context.
So the ground was well prepared for Andrew Jordan. He took us on a journey from the acquisition of ClearForest by Reuters while it was itself being acquired by Thomson, to the use of this software by the new company to create OpenCalais, allowing third parties (over 60 of them) to get into entity extraction (events and facts, essentially) and then into the creation of complex cross-referencing environments, and finally to the use of this technology by Thomson Reuters themselves in the OneCalais and ContentMarketplace environments. So here was living proof of the O’Leary thesis, on a vast scale: building business-orientated ontologies, employing social tagging in a business context, and drawing together the whole data assets of a huge player to service the next customer set or market gap. And no longer feeling obliged to wrap all of this in a single instance database, but searching across separately-held corporate datasets in a federated manner, using metadata to find and cross-reference entities or perform disambiguation mapping. Daniel Mayer of Temis was able to drive this further and provide a wide range and scale of cases from a technology provider of note. The case was made: whether or not what we are now doing is publishing, it is fundamentally changed once we realize that what we know about what we know is as important as our underlying knowledge itself.
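For readers who want a feel for what “entity extraction” means at the keyboard, here is a stand-in sketch. It uses the open-source spaCy library rather than OpenCalais itself, and the sample sentence is invented; the point is simply that unstructured text goes in and typed entities come out, ready to be mapped to ontology identifiers or cross-referenced across datasets.

    # Stand-in sketch of entity extraction with spaCy (not OpenCalais).
    # Requires: pip install spacy && python -m spacy download en_core_web_sm
    import spacy

    nlp = spacy.load("en_core_web_sm")

    text = ("Thomson Reuters acquired ClearForest and later released "
            "OpenCalais, a public tagging service used by newsrooms.")

    doc = nlp(text)
    for ent in doc.ents:
        # Each entity carries the matched text and a type label, e.g. ORG.
        print(ent.text, ent.label_)

Everything beyond that step, the disambiguation, the ontology building, the federated search across separately held datasets, is where the real engineering and the real value lie.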
And of course we also have to adjust our business models and our businesses to these new realities – patient Klopotek have been exercising expertise in enabling that systems re-orientation to take place for many years. And we must recognize that we have not arrived somewhere, but that we are now in perpetual trajectory. One got a real sense of this from an excellent presentation to a very crowded room by Professor Tim Bruysten of richtwert on the impact of social media, and, in another way, from Mike Tamblyn of Kobo when he spoke of the problems of vertical integration in digital media markets. And, in a blog earlier this week, I have already reported on the very considerable impact of Bastiaan Deplieck of Tenforce.
Speaking personally, I have never before attended a conference of this impact in Germany. Mix up everything in the cocktail shaker of Frank Gehry’s great Axica conference centre alongside the Brandenburg Gate, with traditional book publishers rubbing shoulders with major information players and chatting to software gurus, industry savants, newspaper and magazine companies, enterprise software giants and business service providers, and you create a powerful brew in a small group. Put them through separate German and English streams, then mix them up in Executive Lounge seminars and discussion Summits, and the inventive organizers give everyone a chance to speak and to talk back. This meeting had real energy and, for those who look for it, an indication that the changes wrought by the networked economy and its needs in information/publishing terms now burn brightly in the heart of Europe.