Paradox: nothing is more measured, assessed or examined than education, yet we still seem to know remarkably little about how people “learn” in the full sense of the word. And while the world is full of learned academics with impressive qualifications in “cognitive processing” and the like, try to build a “learning system” for humans and you encounter immediate design problems. Indeed, it is easier to teach machines to learn. So each generation approaches the problem – that each of us learns differently, under different stimuli and at different ages – in a different way. Once it was a matter of coursework and the textbook. In this age, the Age of Assessment, satisfactory proof of learning is accomplished by testing. Never mind that the learner may be unable to deploy his or her learning in any context other than a test; we are developing people who can jump immediate hurdles but may not be able to navigate the great oceans of life in front of them. This applies to schools and universities, but also to the rapidly growing vocational and training sectors.

Over in the medical environment, we have had evidence-based practice for over a decade. This is now becoming a discipline in its own right, combining systematic review of the literature (for example, the Cochrane Library) with statistical analysis, meta-analysis and risk-benefit analysis to produce, in combination with the patient record, some really effective results in diagnostic terms. These are now widely deployed in different configurations by information service solution providers like Elsevier, WK Health and Hearst Medicine. As genetic analysis and custom drug treatment become more common, this will no doubt develop further, but even as we have it today, the information service players are fully plugged into the system. How different to this is education!

Despite the huge collection of indicative statistics, there is still no feedback loop in education which tells teachers what works with certain types of learning profiles. As they develop and test digital learning environments, private sector learning systems developers (not just systems houses but content developers too) are getting significant feedback on the efficacy of their work. Schools store an ever-growing amount of performance data, and much of this can be related to learning systems. Examination boards have yet more. (Digression: my most depressing moment in education this year – going to a parents’ evening with a sixth former studying classical civilizations. Question to teacher: what do you recommend as additional reading? I have shelves full of possibilities. Answer: we do not recommend reading around the subject. It only confuses people to have several interpretations and inhibits their ability to secure high pass grades!) And yet all of this content or evidence is disaggregated, not plumbed for learning experience significance, and there is no tradition of building ideas about what input might secure learning gains – just give the learner another diagnostic test!

These notes were sparked in the first place by the announcement last month of the creation of a Coalition for Evidence-Based Education by the Institute for Effective Education at the University of York. I also know of the TIER project in the Netherlands (involving the Universities of Amsterdam, Groningen and Maastricht) and have great respect for the ongoing work of Better magazine, created by the Johns Hopkins Center for Research and Reform in Education. But all of these seem to me as much concerned with applying evidence to changing policy at government or school administration level as they are with developing practitioner tools. And they exemplify something else – there is not a publisher or education solutions supplier anywhere around any of them. True, no one ever field-trialled a textbook (though I once did this with a UK Schools Council course in the 1970s called “Geography for the Young School Leaver” – and it had dramatic effects on the presentation and construction of the learning journeys involved). Yet here we are in the age of Pearson’s MyLab or Nature Education’s Principles of Biology online learning experience. The age of iterative learning devices, wired for feedback and capable both of recording anonymized statistical performance data and of giving diagnostic input to a single user or teacher on what needs support and reinforcement in a learning process. Yet I know of no developer who trades use for feedback by co-operating with government and schools in trialling, testing and developing new learning environments. And given that these are iterative – they tend to change over time as refinements are made and non-statistical feedback is procured – I know of no schemes which are able to demonstrate the increasing efficiency of their learning tools.

ELIG (the European Learning Industry Group) has issued members of its marketing board, like myself, with an urgent requirement to uncover good case studies which demonstrate the efficacy of learning tools in practice. I can find plenty, but they are all based on the findings of the supplier. I can even find some where a headmaster says “exam results increased X% while we were using this system” – but they never indicate whether this was the sole change that led to the finding. If I were a teacher with a poor reader with real learning difficulties, where would I go for the MD Consult or UpToDate equivalent of a medical review – a way of defining my pupil’s problem, relating it to others who succeeded with similar problems, and getting feedback on which learning systems worked for them? The answer is that you do not go anywhere, since education, one of the most lonely and secretive jobs in the professional world, is still not quite prepared to enter the digital age with the rest of us. And its suppliers, sharing something of that culture, still operate in an isolated way that also predates the new world of consolidation and massive systems development now beginning in this marketplace. And the Learner? Processed or Educated? It all depends on the feedback loop.

As a Thomson man of the generation of ’67, I was well schooled in the dictum “it’s not what you buy, but what and when you sell that makes the real difference.”* And having spent almost three decades button-holing anyone who would listen, like some crazed digital ancient mariner, on the importance of building digital presence in B2B publishing and information markets, I should probably be pleased to see headlines in the Financial Times (3 March 2012) heralding the sale of EMAP’s print assets (“Analysts say EMAP faces challenge to move away from print”). But I am not. I know exactly when these print assets should have been sold: in 2002, at the end of the Dotcom Bust. And I cannot persuade myself that a wrong move then will be rectified by a pointless move now, or that value will be added to anything by selling the subscription/advertising print stable at EMAP – or at UBM, or at Haymarket, or Centaur, or Incisive – to someone who is simply going to live on a declining annuity until it expires. There will in any case be few buyers, and those who do appear will not want the stable, but just one or two of the old nags. The analysts who shriek the headline of this piece are simply transaction mongers who have a firmer grip on deal commissions than on the current strategic realities of B2B. So let’s go back to 2002 and see what happened after the managements of B2B information, publishing and events decided that it was far too early to exit print subscriptions and that, like the regional press, the market would come back to them.

By 2005 it was becoming clear that the bits that worked in B2B, outside of events, were information services and solutions. By that year, controlled circulation magazines and newsletters – which had proliferated, and at times been generated online, at the end of the previous decade – began to wilt. Just as in the pre-2005 period we had spoken of VANs and VADs, so we began to talk about “vertical search” (it turned out to be much the same anyway) and started providing tailored information to self-defined users in commerce and industry. We were beginning to experience for the first time what it was going to be like to live in a “networked society/economy”. A small revolution was taking place: managers were beginning to have to find out what their users did for a living and construct solutions around their daily lives. This meant specialization and expertise in particular verticals: managers could no longer be shifted from title to title on the basis that they knew journalists and advertisers and everything else was the same whether you were publishing in machine tools or in ladies’ fashions.

And then we came to workflow. If we were really entering an information solutions-type world (where Thomson Reuters had already gone in IP and GRC, and Lexis Risk in insurance), then we had to provide our content directly to the desk of the user, sliced so that it modelled his working patterns, and supported by software tools that added value to it and kept us essential to his processes, and thus too important to be lightly discontinued. And how did we plan to earn his trust in this guise? By either inventing a new brand (think Globalspec in engineering) or by using our old print brands to ensure user confidence (think Bankers Almanac at RBI). Never mind that the print which supported those brands had eroded away, since they were now there for entirely different reasons.

And now we are laying another layer in digital development on top of all of this. We now talk of Big Data, of using the services we have created for users as a sort of focussing glass so that we can go out from them to the client’s own content and all sorts of other datasets and find linkages through data mining and extraction, squeezing fresh insight all the time into the workflow of users who, wherever they work, have increasingly become, like us, knowledge workers. And our events activities increasingly morph into always-on trading and learning experiences, where we do introduce clients to the range of products and services in the sector, update and inform on new releases to people who have said they want to know, and move increasingly into the training and professional development of the sectors that we have chosen. Do you see where we are going? We are going to be the full service providers to a handful of vertical markets which we feel confident about dominating.

Why are we confident about that domination? Because we have the brands, many of them over a hundred years old in this country, which our verticals were brought up upon. And behind those brands are archival morgues, full of data with residual value in a Big Data sense. We did not sell those brands in 2002 when they were a going concern, so why sell them now when they are a cause for concern? By all means close the print; by all means reconstruct the service values using far fewer journalists in targeted niche environments online. By all means drive towards areas where you have real data intensity, but on the way remember the community and its existing brand affiliations. You want to take them with you.

Which brings us back round to EMAP. I see no point in hanging on to peripheral services, even data-based services like DeHavilland, bought as recently as 2007, if they have no strategic coherence in terms of the markets that give EMAP positions of strength. I take these to be construction, local government, broadcast media and fashion. If strength in automotive cannot be linked to the Guardian’s position in Trader Media, then sell that too. But hold onto brands where they can be used to give community credibility, and data where it can give archival searchability. By selling the peripherals you get a smaller but more profitable business. And that is also the result of digital network development of the type described here – smaller and more profitable businesses. Just don’t throw away something which is pretty worthless now on its own, but which may be needed on a journey to a much better place.

* Note that the companies that Thomson SOLD in the mid-1980s in the UK form the majority of EMAP and Trinity Mirror today, as well as large chunks of Springer and Infinitas, and elsewhere and afterwards the bulk of Cengage and a big portion of the US regional press. Were they right or not?

