Remember those days when intermediary businesses in information markets were going to be taken out of the loop by savvy operators who could increase margins by collapsing processes in the service cycle? In the far-off nineties, before bookshops had disappeared and while libraries were still functioning as they had for the previous century, this disintermediation stuff was really hot. We spoke of “disintermediating the disintermediators”, and even “re-intermediation” – well, I did at least, and I rather hoped that you might have nodded off through some of this, since it is all changing again now, and in ways that demonstrate that we were not always entirely right in our prognostications. No, let me rephrase that – I was more often wrong about this than I am now comfortable about admitting.

There are many reasons for this but the most obvious is the most painful – pure failure of imagination. I convict myself of the crime for which I have so often harangued others: a simple failure to remember that when one relationship in a chain changes, it changes everything else in the chain. A month of illness, recuperation and holidays has given me time to catch up on a backlog of reading – and thinking. And reminded me to remember my roots. As a farmer’s son in the Cotswolds, I knew that the bane of our lives on small farms was the regimented slavery of milking cows at 6am and 4pm. Now that slavery is abolished, as avid followers of the UK radio soap The Archers will be aware (North Americans can start here: http://www.cbc.ca/news/canada/new-brunswick/robot-milkers-gaining-in-popularity-at-dairy-farms-in-n-b-1.2756987). Think these changes through in terms of the chain-relationship idea and we end up in a discussion about the future of farmers and the way we organize access to, and curation of, the land in our society.

So what we have to discuss is whether, in information (and often entertainment) markets, our intermediary role is worth saving. Whether we call ourselves publishers or information service solution vendors matters not a whit. Do we do enough to stay in the loop as other relationships change in our client base, and as other players threaten to subvert our value by combining it with theirs? When, as an online law publisher, I crowed that I had “captured” the user desktop, all I was actually saying was that I had beaten the law firm’s library budget to a pulp. Very many law firms no longer have librarians, but, in recession, many have found that more and more legal process in commercial law can be outsourced. And, as I have noted here before, as outsourcers like Obelisk (www.obelisksupport.com/) band together unemployed lawyers to provide a service base that re-aligns where the work is actually done, and as outsourcers to corporate counsel like Axiom (www.axiomlaw.com) replace much of the service value that private law firms once offered to corporate customers, the tectonic plates are moving in that most conservative world of law, just as re-regulation after recession is creating a new marketplace around risk management and compliance. So take the most conservative of professions, with highly protective union rules around membership and practice, which you would think would entomb change in mummified procedure – and even here we can see real evidence that, within comparatively short periods of time, far-reaching change is massively afoot.

Then look at the organization of medicine, and medical advice. Or PR, and the ability of marketing department analytics to subvert much of the value of the PR businesses. Or insurance. Or construction, BIM and planning processes. Or engineering design. Or property transactions. Or almost any field in the world of work or transactions that you can imagine. From the taxi drivers who resent Uber to the private drivers who park with RingGo, these changes in relationships are live on the streets of London today, yet we still take each change as a piecemeal development and not as a link in a fundamental shift. And we are very good at describing over-arching movement, but not at all good at detecting what those movements may mean on the ground. If you are still reading in the next few months, I shall want to write about the Internet of Things, about M2M, about “Big” metadata, about ubiquitous computing, about semantic analysis, about additive manufacturing, about open and linked data, and so on. But I am now more determined than ever to describe those things in the clothing of work and business as it is now.

So what is the Future of Law Publishers, in the sense that I have used them as an example in this piece? Well, I think that the logic of what I have been looking at this month implies that they themselves will be dis-intermediated. Clearly the small players will successfully cope with the diminishing ranks of practitioners who want texts in some form or other, until that small market becomes a self-publishing function. I can imagine that the large players, like Thomson Reuters, Lexis or Bloomberg BNA, will be able to migrate through acquisition into the workflow outsourcing business – acquisition, because their data is becoming highly commoditized and they have too little in-house expertise to customize it themselves. So I see them becoming service bureaux, providing cloud-based services either to their former clients, or to their clients’ clients. The decisions they make for their clients will be insurable, and a good number of their employees will be legally qualified. Gradually, in some service areas, it will be hard to tell them apart from law firms. And that is a prevalent conclusion from research in these areas – only our physical, non-networked world could have sustained these separate service functions in the value chain. Put them all in the same virtual network, and inexorably they mutate into one solution. Before the summer break, I wrote about this here under the title “If it’s a Service, Outsource it…”. Reviewing that piece, I now realize that we are seeing the first stages of a much more fundamental re-alignment. And it cannot be postponed or delayed simply because media and information corporations wish it so.

You can tell the sort of industry we are becoming by the language we use to describe what is happening. Does an industry which refers to data “mining”, or entity “extraction”, seem to you to want to align itself with the softer values of literary publishing? Our senior management teams are now replete with data or content “architects” working alongside data or process “engineers” to ensure that we handle data as content in the right way for today, while staying “agile” in terms of new product development. We are “solutions” orientated because now, for the first time in history, we really can tell how our content is being used, what problems users commonly encounter, and how we can ease their processes, help their learning, improve their workflow or deepen their insight by adjusting, or helping them to self-adjust. The way in which data-as-content is recorded in our systems creates new dataflows which are all about those reactions. We used to throw that data away, some of us, because we could not “read” it. Now the “exhaust data” blown out of the back of our machines when they are running full tilt could be just the place to pan for gold.

And just as diesel is apparently more noxious than petrol, and a heavy vehicle more so than a modest family car, there are clearly many different varieties of exhaust. I have always worried greatly about the use to which events organizers have put the rich data derived from registrations, exhibitor profiles, attendee tracking and preference listings. Even within privacy constraints, there is clearly scope here to add third-party data from venues and elsewhere, and to go beyond the needs of an individual show into service development for the target group more generally. I have been told in the past that there is too much data to handle, or too little to give significant results – all excuses which become increasingly pale in the age of data. And the same opportunities exist in the creation of usage data in online services universally.
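To make “panning for gold” concrete, here is a minimal sketch – my own illustration, with an invented log format and an invented zero-results heuristic, not any vendor’s actual system – of how discarded usage exhaust might be sifted for unmet user needs:

```python
# A minimal sketch of "panning the exhaust": aggregating raw usage events
# to surface where users stumble. The column names and the zero-result
# heuristic are hypothetical, invented for this example.
import csv
from collections import Counter

def pan_for_gold(log_path: str, top_n: int = 10):
    """Count the search queries that returned no results - a crude proxy
    for unmet user needs hiding in data we used to throw away."""
    failed_queries = Counter()
    with open(log_path, newline="") as f:
        for event in csv.DictReader(f):
            # Assumed columns: event_type, query, result_count
            if event["event_type"] == "search" and int(event["result_count"]) == 0:
                failed_queries[event["query"].strip().lower()] += 1
    return failed_queries.most_common(top_n)

if __name__ == "__main__":
    for query, misses in pan_for_gold("usage_events.csv"):
        print(f"{misses:5d}  {query}")
```

Crude, certainly – but the point is that the raw material is already being generated every day, whether or not anyone sifts it.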

But much of the exhaust data potential is less obvious. Jose Ferreira, founder and CEO at Knewton, notes in his latest blog (http://www.knewton.com/blog/ceo-jose-ferreira/):

“OER (Open Education Resources) represents a tectonic shift in education materials. Try typing “mitosis” into Google. Almost every search result on the first few pages is for OER exploring the process of cell division. The same is true for nearly any other concept you type in: “subject-verb agreement,” “supply and demand,” “Pythagorean theorem” — you name it. And what you can find today on the Internet is probably less than one tenth of one percent of the OER out there. Most is trapped on teachers’ PCs.”

And I bet he is right. Services already exploit this exhaust from the teaching processes of individual teachers (TES Connect, www.teacherspayteachers.com). But Jose’s argument goes further. If you are able to employ the OER (what I think I used to call the “learning object”), then you are able to see who stumbled over it and what the exhaust data of assessment shows about understanding and the accomplishment of learning objectives, and you should then be able to move towards a genuinely adaptive learning that understands learning difficulty and recognises speed of learning acquisition.
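A toy sketch may help here. This is emphatically not Knewton’s algorithm – the objects, scores and mastery threshold below are all invented – but it shows the simplest possible loop from assessment exhaust to an adaptive choice of the next learning object:

```python
# A toy illustration, not Knewton's algorithm: choose the next learning
# object (OER) from assessment "exhaust". Object names, history and the
# mastery threshold are invented for the sketch.
from collections import defaultdict

MASTERY = 0.8  # assumed pass rate that counts as "understood"

def next_object(history, sequence):
    """history: list of (object_id, correct: bool) assessment events.
    sequence: ordered list of object_ids, easiest to hardest.
    Re-serve the earliest object the learner has not yet mastered."""
    attempts, correct = defaultdict(int), defaultdict(int)
    for obj, ok in history:
        attempts[obj] += 1
        correct[obj] += ok
    for obj in sequence:
        if attempts[obj] == 0 or correct[obj] / attempts[obj] < MASTERY:
            return obj  # stumble detected: slow down and revisit
    return None  # sequence complete

history = [("mitosis-1", True), ("mitosis-2", False), ("mitosis-2", False)]
print(next_object(history, ["mitosis-1", "mitosis-2", "mitosis-3"]))
# -> "mitosis-2": the learner is stumbling here, so the system adapts
```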

Another form of feedback loop came to light this week in a note from f1000Research, the Open Access service in STM which is clearly bent on adding fresh layers of meaning to the expression “on the fly”. In a paper on genetic variation in different populations of Drosophila melanogaster (fly – geddit?) (http://f1000research.com/articles/3-176/v1), the Professor of Neurogenetics at Regensburg and f1000 release for the first time an article in which not only are the professor’s data changeable as fresh evidence emerges, but other labs are invited to add their own data to one part of the dataset to get a comparative view. The article is thus a “living” entity, showing fresh results – “on the fly” – every time it is opened. It also, of course, allows every lab to make comparative studies of its own results against the Regensburg results, introducing a fresh instance of the “repeatability” principle to peer review. And the interactions of other labs with the article produce a fresh stream of exhaust data, some of which may itself be citable.
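For the structurally minded, here is a sketch of what such a “living” article might look like as data – the shape and the numbers are mine, not f1000’s format – with labs appending comparative results that are re-summarised every time the article is opened:

```python
# A sketch of the "living article" idea: a canonical dataset plus
# lab-contributed comparison data, re-summarised on every opening.
# Structure and measurements are illustrative, not f1000's format.
from statistics import mean

article = {
    "title": "Genetic variation across Drosophila populations",
    "canonical": {"Regensburg": [0.42, 0.39, 0.44]},  # original measurements
    "contributed": {},                                # grows as labs add data
}

def contribute(article, lab, measurements):
    """Another lab appends its own results for comparison - and in doing
    so emits a fresh stream of exhaust data about replication attempts."""
    article["contributed"][lab] = measurements

def render(article):
    """Each 'opening' of the article shows results fresh - on the fly."""
    for lab, values in {**article["canonical"], **article["contributed"]}.items():
        print(f"{lab:12s} mean={mean(values):.3f} n={len(values)}")

contribute(article, "LabB", [0.47, 0.45, 0.50])
render(article)
```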

Like “robotic milking”, the new craze in farming, this should be seen as a great gift to publishers. A robotic cash cow that milks itself! But I fear it will be very specialised in its applications, since looking a gift cow in the mouth and swatting a data fly are more traditional pastimes for those-once-called-publishers than searching for gold in the exhaust.
