Aug 30
Paradigm Lost
Filed Under Blog, data protection, Industry Analysis, internet, news media, online advertising, Publishing, Search, semantic web, Uncategorized
So what happened in August? While I was on vacation the world seemed to change in mysterious ways, or, at least, I awoke to mysteries long in the making. Quite apart from England starting to win cricket matches on a regular basis, that is, or summer turning sunny in these parts. Something fundamental happened.
I had the first inkling of this from the headline "Lycos sold for $35 million" (http://www.medianama.com/2010/08/223-lycos-ybrant/). So now we are 16 years into the glorious history of the networked globe, and we have our first example of a start-up with an almost complete cradle-to-grave financial history. Founded with a $2m investment from CMGI, Lycos was the fastest company from founding to flotation when it listed on Nasdaq in 1996. By 1999, with a range of subsidiaries in some 40 countries, it had become the most popular website in the world. In May 2000, Telefonica of Spain bought it for $12.5 billion, according to cnet.com, and with the acquisition of Tripod, Lycos had created that characteristic creature of the Bubble years – the Portal. When enterprise search failed for it, Lycos began to shed its subsidiaries and sell off its local manifestations (to Bertelsmann in Germany, for example). The now diminished company was still innovative (remember Lycos Phone in 2006), despite its sale for $94m to Daum of Korea in 2004. This latest sale, to the Indian advertising services player Ybrant, emphasises that the current migration is towards web advertising services. Revenues in 2009 were reported as $24.76m; Ybrant made the acquisition for $36m.
This is not a "how are the mighty fallen" story. It tells us instead how fast brands grow in the networks, and above all how fast and threatening the steep slope of the success graph can seem to established players: Lycos and the creation of Terra Lycos were Telefonica's vastly greater equivalent of the Murdoch Moment over MySpace. And it is not a story about lack of ingenuity and innovation: Lycos genuinely moved with the tidal waters of business model change, and its history shows managers trying hard to re-position and re-use their access and brand position. This is a story about search.
At the root of the Lycos story is Google and its growth. In many ways Lycos was a John the Baptist project, and the work which Google's founders did was not so much an exercise in replacing the fundamentals of search created by Lycos and its competitors as in adding back into the mix something of the experience of previous users (PageRank), in such a way that the user perception was "better results".
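Since PageRank carries the weight of that argument, it may be worth pausing on what "adding back the experience of previous users" actually computes: each page's score is, roughly, the probability that a reader who keeps following links at random ends up on that page, so a link from a well-visited page counts for more than a link from an obscure one. The sketch below is a minimal, illustrative power-iteration in Python over a made-up four-page web; the damping factor of 0.85 and the toy graph are assumptions for the example, not Google's implementation.

```python
# Illustrative PageRank by power iteration over a toy link graph.
# Not Google's implementation; damping factor and graph are example values.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}            # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:                      # dangling page: share evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:                                 # pass score along each outlink
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A four-page toy web: pages "vote" for the pages they link to.
toy_web = {
    "home": ["news", "about"],
    "news": ["home"],
    "about": ["home", "news"],
    "orphan": ["news"],
}
print(pagerank(toy_web))
```

Run on the toy graph, the heavily linked-to pages end up with the highest scores, which is the whole of the "better results" trick in miniature.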
Today Google has some 85% of the search market, and this year its share has begun to decline slightly. Not much, mind you. A peak of 86% market share followed by a near two percentage point decline is not a disaster, but it underscores something else: unless you are in India or China (and Google's numbers are still roaring away in the former despite its well-publicized problems in the latter), the most significant global user communities are already on the Web – or unlikely to use the Web in significant numbers for very many years.
So will Google also and inevitably follow the path mapped out by Lycos? The pressure from the Semantic Web and the world of Linked Data certainly points in the opposite direction from keyword searching. But clearly not if the acquisition programme comes through, if the new business development programme matches it, and if Google are able to grow a new business alongside Search. The sector has never seen a company like Google for using its wealth to pursue opportunity outside of its core markets. From YouTube to Android, from DoubleClick to Aardvark, from Google Earth to Google Energy, the company sometimes seems to be restlessly evading its destiny while remaining 98% tied to advertising for its revenues.
For its destiny is surely now reasonably clear. There will be a decline in search as an apps-orientated world moves more fundamentally towards solutions. Already Google is feeling some of this, as well as the continuing movement of advertising markets away from traditional forms of contextualization. And there will be continuing pressure, within solutions created for professional and business services, for search to be customized to need and good enough for active purposes (which may mean better targeted, more rigorously selective, or more representative of niche user groups than public search environments).
On its track record you would have to say that Google, driven by its current management, will diversify and survive. But it may be a closer call than many expected at IPO time, and some of this is reflected in the current share price decline. And if they do accomplish the building of a new company out of the old (an Internet first in itself), it may be by rediscovering what users do, in the way that the apps market already does. As someone wittily commented, "if they had really cared about users all these years, the service would have been called Find, not Search". But in the meantime, business and professional information service providers may be relieved to find that seemingly insuperable Google pressures lessen a little, allowing integrated solutions to grow. This will create opportunities that are time-limited, so nobody should sit around waiting for users to ask or rival revenues to grow.
And a final sob story. In 1994 our favourite comparison was the pornography marketplace, which blazed a trail in viral marketing and online portal techniques. Porn established itself as a sector to watch closely if you were in advertising markets, and as a model of content protection and business model evolution if you weren't. According to an article in Technology Review (www.technologyreview.com/web/26074/), porn is now blighted by mass evasion of copyright on peer-to-peer networks and by the rise of user-generated content. Wage rates are falling in the industry, and so is programme production. I do NOT know what the "solution" is here, but it is only to be expected that, when all the other models created in the early web days are changing, this one would change as well.
Jul 8
Gribbling in the Dark
Filed Under Blog, Education, Industry Analysis, internet, Publishing, semantic web, STM, Thomson
So there was a word for it after all. Some kindly soul at a conference last week, seeing that I was unable to describe the strange digital burbling that took place when you dialled up a database in 1979 and inserted the telephone handset into the acoustic coupler, shouted out the correct expression – the noise was "gribbling", and I was delighted to be reunited with a term which should never have been lost. And it allows me to remark, if I have not lost you already, that it is a mature industry whose terms of art, invented for a purpose, have now fallen into disuse because the processes they describe have become redundant. I expect to have to explain to my children how my typographer's ruler works, or what slug setting, or galleys, or heavy leading, or hot metal meant. The fact that the first-generation digital expressions are already themselves redundant (who last saw an acoustic coupler?) tells an important story.
And that story is particularly relevant to the fascinating conference I was attending. Last week's seminar "Ready for Web 3.0?", organized by ALPSP and chaired by Louise Tutton of Publishing Technologies, was just what the doctor ordered in terms of curing us of the idea that we still have time to consider whether or not we embrace the semantic web. It is here, and in scholarly publishing it is becoming the default embedded value: the new plateau onto which we must all struggle in order to catch our breath while building the next level of value-add that users coming to grips with a networked information society now expect. And from the scholarly world it will spread everywhere. I will put my own slides from the introductory scene-setting on this site, but if you can find any of the meaty exemplar presentations from ALPSP (it is worth joining them if they are going to do more sessions of this quality) or elsewhere, then please review them carefully. They are worth it.
Particularly noteworthy was a talk by Professor Terri Attwood and Dr Steve Pettifer from the University of Manchester (how good to see a biochemistry informatician and a computer scientist sharing the same platform!). They spoke about Utopia Documents, a next-generation document reader developed for the Biochemical Journal which identifies features in PDFs and semantically annotates them, seamlessly connecting documents to online data. All of a sudden we are emerging onto the semantic web stage with very practical and pragmatic demonstrations of the virtues of Linked Data. The message was very clear: go home and mark up everything you have, for no one now knows what content will need to link to what in a web of increasing linkage universality and complexity. At the very least, everyone who considers themselves a publisher, and especially a science publisher, should read the review article by Attwood, Pettifer and their colleagues in the Biochemical Journal ("Calling International Rescue: knowledge lost in literature and data landslide!", http://www.biochemj.org/bj/424/0317/bj4240317.htm). Incidentally, they cite Amos Bairoch and his reflections on annotation in Nature Precedings (http://precedings.nature.com/documents/3092/version/1), and this is hugely useful if you can generalize from the problems of biocuration to the chaos that each of us faces in our own domains.
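To make "mark up everything you have" a little more concrete, here is a minimal sketch, assuming Python and the rdflib library: it records, as Linked Data triples, that an article mentions a particular protein and points that mention at an external identifier, which is roughly what a semantic annotation amounts to. The property names and article URI are invented for illustration, and the UniProt address is simply an example identifier; none of this is the actual Utopia Documents or Biochemical Journal schema.

```python
# A minimal Linked Data sketch using rdflib: annotate an article with the
# entity it mentions and link that entity to an external record.
# All property names and the article URI are illustrative, not a published schema.
from rdflib import Graph, Namespace, URIRef, Literal

EX = Namespace("http://example.org/terms/")                  # hypothetical vocabulary
article = URIRef("http://example.org/articles/bj4240317")    # hypothetical article URI
protein = URIRef("http://purl.uniprot.org/uniprot/P69905")   # example external identifier

g = Graph()
g.bind("ex", EX)
g.add((article, EX.title, Literal("Calling International Rescue")))
g.add((article, EX.mentions, protein))                       # the semantic annotation
g.add((protein, EX.label, Literal("Haemoglobin subunit alpha")))

# Serialise as Turtle triples, the form in which such links travel on the web.
print(g.serialize(format="turtle"))
```

Once serialized, those three statements are machine-readable links that any other service can follow, merge with its own data, or query, which is the practical payoff of marking up content now rather than later.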
Two other aspects were intriguing. Utopia Documents had some funding from the European Commission, EPSRC, BBSRC, the University of Manchester and, above all, the BJ's publisher, Portland Press. One expects the public bodies to do what they should be doing with the taxpayer's cash: one respects a small publisher putting its money where its value is. And in another session, on the semantic web collaboration between the European Respiratory Society and the American Thoracic Society, felicitously called "Breathing Space", we heard that the collaborators created some 30% of the citations in respiratory medicine, and that their work had the effect of "helping their authors towards greater visibility". Since that is why the industry exists, it would seem that the semantic promise underpins the original publication promise. Publishers should be creating altars for the veneration of St Tim Berners-Lee and dedicating devotions to the works of Shadbolt and Hall, scholars of Southampton.
Sadly they are not, but coming out of this day of intense knowledge sharing one could not doubt that the semantic web, aka Linked Data, had arrived and taken up residence these several years in scientific academe. Now if it will only bite government information and B2B, then we shall be on our way. And, as Leigh Dodds of Talis reminded us, we shall have to learn a new language along the way. Alongside new friends like ontologies and entity recognition and RDF, add RDFa, SKOS (Simple Knowledge Organization System to you!), XCRI education mark-up, OpenCalais (go to Thomson Reuters for more), triples, Facebook Open Graph, and Google Rich Snippets. Even that wonderful old hypertext heretic Ted Nelson got quoted later in the day: "Everything is deeply intertwingled". And let's remember, this is not a "let's tackle these issues at our own pace when we think the market is ready" sort of problem: it is a "we are sinking under the weight of our own data and the lifeboat was needed yesterday" sort of problem. Publishers must tackle it: if the rest of us learn how to resolve it without intermediaries, then we certainly shall not need publishers.
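For anyone daunted by that vocabulary list, SKOS at least is small enough to show in a few lines. Below is a minimal sketch, again assuming Python and rdflib (whose namespace module bundles the SKOS terms): it declares two concepts, gives each a preferred label, and says that one is broader than the other, which is most of what a simple knowledge organization scheme does. The concept scheme URI and the concept names are made up for the example.

```python
# A tiny SKOS example with rdflib: two concepts, their preferred labels, and a
# broader/narrower relationship between them.
# The concept scheme URI and concept names are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

VOCAB = Namespace("http://example.org/vocab/")        # hypothetical concept scheme

g = Graph()
g.bind("skos", SKOS)
g.add((VOCAB.LinkedData, RDF.type, SKOS.Concept))
g.add((VOCAB.LinkedData, SKOS.prefLabel, Literal("Linked Data", lang="en")))
g.add((VOCAB.LinkedData, SKOS.broader, VOCAB.SemanticWeb))   # Linked Data sits under Semantic Web
g.add((VOCAB.SemanticWeb, RDF.type, SKOS.Concept))
g.add((VOCAB.SemanticWeb, SKOS.prefLabel, Literal("Semantic Web", lang="en")))

print(g.serialize(format="turtle"))
```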