Now, class, this is a moment of real liberation. You are now free to learn on your own or collaboratively using new methods of learning which are as old as the hills and which depend on the acknowledgement of two Lessons:

LESSON 1: all learning is narrative. Unless it is conveyed in a story form we have no way of relating odd facts to each other.

LESSON 2: all true learning is enjoyable, whether it is done alone, in groups of learners, or by learners grouped around an inspired teacher.

We are now watching the far from inspiring sight of the world’s educational publishers, at all levels, trying to breathe fresh life into the calcified corpses of print textbooks by recreating them as eTextbooks. This will fail. While we cannot simply transplant print learning into the digital environment, we can provide an entirely new learning experience, and it is an insult to the intelligence of learners to give them a book-look-alike format that apes print without adding value from digital. And to say that notes and bookmarks are significant value is rubbish. Only if you build a textbook ab initio online (Nature’s Principles of Biology is a case in point) can you claim some credit from instant updating and lifelong ownership. I spent a year of my life – 1969–1970 – editing and structuring Biology: A Functional Approach, which became a bestseller at its level for a decade. The narrative created was around deserting the study of plants and animals as classifications and species, the rote learning of a previous generation, and building a storyline around the way life on earth functions – from respiration to reproduction. A narrative about how life works. But that was telling stories then, in the great age of print. This week, I have seen two glimpses of the future, one expressed as a business organization, and the other as highly innovative technology. Both of them undermine completely the idea that the future has anything to do with the reconstituted formats of print.

In the first instance I found myself this week in the prestigious Mayfair offices of Direct Learning Marketplace (www.DLMplc.com). This, in the jargon of the investor, is a “buy and build” vehicle for acquiring future-facing business assets in the field of business education. Driven by the entrepreneurial energies of Andy Hasoon, it has at its core an idea about learning which is one sustainable arm of the two-pronged approach to what I now believe are the only viable methodologies for recreating learning in a networked society. By his purchase of Pixelearning, a Coventry company long on my map as an ideas centre in serious gaming, Andy signals an intention to place games at the heart of the learning experiences that he is tackling across the hugely fragmented territory of training, development, in-servicing and the rest in the business and industrial context. And since scale is a vital component here, and he works in a country with a games design tradition to be proud of, the acquisition approach is very appropriate. So to those traditional book publishers who have always said to me “Gaming is interesting but you can never build a big business around it”, I can now say “watch this space”!

And alongside gaming let’s place the other future development strategy. In the 1990s, as an external director at Dorling Kindersley before it was bought by Pearson, I revelled in the development of CD-ROM-based multimedia learning experiences. The fact that this year, with the arrival of EPUB 3, we are at last able to do online what we could do on disc in 1995 is surely a signal for something to happen. And it has, in Boulder, Colorado. There, a team with huge experience in multiple media development for education, led by Jeff Larsen, Larry Pape and Kevin Johnson, have begun to create video-based narratives that to me exemplify where we are going with tablet-based experiences. Their focus has been the iPad, and their initial field of engagement has again been business education (which says a lot about how stroppy businesses can be when served “same old, same old” by training companies). If you have reached this point please go immediately to http://www.inthetelling.com/tellit.html and then play the demo video (also on YouTube, where we, as learners and students, watch some 4 billion videos a day!). Here you will see a narrative core in video on one side of the iPad screen, with chapters, references and linkage on the other. Here you will also see navigation to other related resources. This is a licensable technology, backed by Cloud-based storage and streaming, and surrounded with the developer tools needed to create narrative-based video learning on the TellIt technology.

And I thank this team for something else as well. They have avoided the over-hyped, near-meaningless term “multimedia”, which lost its meaning and its way in the dotcom boom/bust, and settled for Transmedia to express what they are doing. This is a good term for a new age of narrative-led, video-based, learning experiences and I hope it catches on. And one last note: everything spoken of here fits wonderfully onto the infrastructure of LMS/VLE/digital repositories that we have oversold to schools and learning institutions, and which now comes into its own. Alongside and around the installation of that infrastructure we also failed to persuade teachers, as well as learners, that learning could be recreated in the network, and improve in the process. Here are two initiatives – in games and video narrative – which at last make good that promise.

My personal voyage in the world of software for search and data service development continues. I had the pleasure last week of hearing a Tableau (http://www.tableausoftware.com/) user talk about the benefits of visualization, and came away with a strong view that we do not need to visualize everything. After all, visualization is either a solution – a way of mapping relationships to demonstrate a point not previously understood – or a way of summarizing results so that we can take them in quickly. I had not thought of it as a communication language, and if that is what it is then clearly we are only in the foothills. Pictures do not always sustain narrative, and sometimes we kid ourselves that once we have the data in a graph we all know what it means. Visualization needs a health warning: “The Surgeon General suggests that before inhaling any visualization you should first check the axes!” However, when data visualization gets focussed it becomes really exciting. Check out HG Data (www.hgdata.com), a way of analysing a corporation’s complete span of relationships:

“While LinkedIn tracks the relationships between people in business, HG Data tracks the underlying relationships between the business entities themselves.”

Now that is a seriously big claim, but you can begin here to see plug-in service values from Big Data which will shape the way we look at companies in future. But my real object this week was elsewhere – in deep and shallow Space. A subject of speculation to me over 20 years ago was whether we would ever be able to analytically control the floods of data then beginning to be received from satellites and inundating space research centres. In its day, this was the first “drinking from the firehose” phenomenon, and it would appear to me retrospectively that we never really cracked this one, so much as learnt to live with our inadequacies. In the intervening time we have become experts at handling very large dataflows, because Google was forced to learn how to do it. And in the intervening years the flood has grown past tsunami, ceased to be an issue about space research, and become an issue about how we run Earth.

So first let’s update on the Space side of things. Those few research satellites that I encountered in 1985 have now been joined, according to Frost & Sullivan, by a vast telemetry and measurement exercise in the skies above us which will result in around 927 satellites by 2020. Some 405 will be for communication, with earth observation (151), navigation (including automatic aircraft landing) and reconnaissance figuring high. Only 75 will be devoted to the R&D which initially piqued my interest in this. But since the communication, navigation and observation functions will measure accurately down to one metre, we shall inevitably find our lives governed in similar micro-detail by what these digital observers discover.

Now step over and look at SpaceCurve (http://spacecurve.com/). I had the pleasure of speaking to its founder, Andrew Rogers, a week or so ago and came away deeply impressed by the position they have taken up. Andrew is a veteran of Google Earth (and a survivor of the UK Met Office!). He is also a problem solver, big time. Taking the view that Google may have cracked its own problems but was not going to crack anything of this scale, he left, and the result is SpaceCurve:

“Immediately Actionable Intelligence
SpaceCurve will deliver instantaneous intelligence for location-based services, commodities, defense, emergency services and other markets. The company is developing cloud-based Big Data solutions that continuously store and immediately analyze massive amounts of multidimensional geospatial, temporal, sensor network and social graph data.
The new SpaceCurve geospatial-temporal database and graph analysis tools will enable application developers and organizations to leverage the real-time models required for more powerful geospatial and other classes of applications and to extend existing applications.”

As I understand it, what SpaceCurve is about is solving the next generation problem before we have rolled out the current partial solution. This is 2.0 launching before 1.0 is fully out of beta. The problems that Andrew and his colleagues solved in interval indexing and graph analysis are not a part of the current Big Data market leaders’ output, but they are very much in line with the demands of geospatial data flows. Here real-time analytics just do not do the job if they are dependent on column stores assuming an order relationship. The thing to do is to abandon those relationships. SpaceCurve is not just looking at far bigger data environments: it suggests that they cannot be handled in ways that we currently envisage as being “big data”.
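To make the interval indexing point concrete, here is a minimal sketch in Python – my own illustration, emphatically not SpaceCurve’s design – of indexing records by the time spans they cover rather than relying on a column sorted by arrival order. The bucket width, record layout and satellite-pass example are all assumptions made purely for the illustration.

from collections import defaultdict

BUCKET_SECONDS = 3600  # hypothetical bucket width of one hour, chosen only for this sketch

class IntervalIndex:
    def __init__(self):
        # bucket id -> list of (start, end, payload) records overlapping that bucket
        self.buckets = defaultdict(list)

    def insert(self, start, end, payload):
        # Register the record in every time bucket its interval touches,
        # so lookup never depends on the order in which records arrived.
        for b in range(int(start) // BUCKET_SECONDS, int(end) // BUCKET_SECONDS + 1):
            self.buckets[b].append((start, end, payload))

    def query(self, q_start, q_end):
        # Scan only the buckets the query window touches, then filter exactly.
        hits, seen = [], set()
        for b in range(int(q_start) // BUCKET_SECONDS, int(q_end) // BUCKET_SECONDS + 1):
            for start, end, payload in self.buckets[b]:
                if start <= q_end and end >= q_start and payload not in seen:
                    seen.add(payload)
                    hits.append(payload)
        return hits

# Three hypothetical satellite observation windows, inserted out of time order.
idx = IntervalIndex()
idx.insert(7200, 10800, "pass B")   # 02:00-03:00
idx.insert(0, 5400, "pass A")       # 00:00-01:30
idx.insert(3600, 9000, "pass C")    # 01:00-02:30
print(idx.query(5000, 8000))        # all three passes overlap this query window

The point of the toy example is simply that the index is built on the intervals themselves, so nothing breaks when the data arrives in no particular order – which is the situation a sorted column store quietly assumes away.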

Despite the hugely increased scale of the data being handled, SpaceCurve sees itself searching in a partially federated manner, since many data holders, and in particular governments, will not allow the data off the premises. Government and corporations share the need to be able to see provenance and determine authenticity, so SpaceCurve’s role in these massive data collections may be in part as an outsourcing custodial authority, looking after the data on the owner’s site (a sketch of that pattern follows below). And indeed, the problem for SpaceCurve may be one of which markets it chooses first and where the key interest comes from – government and public usage, or the enterprise markets.
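For readers who have not met the pattern, here is a minimal sketch – purely illustrative, and not SpaceCurve’s API – of a partially federated query: each data holder runs the query inside its own premises and hands back only summary results, so the raw data never leaves the owner’s site. The site names, query shape and helper functions are all invented for the example.

from typing import Callable, Dict, List

# A "site" is stood in for here by a local function; in a real deployment it
# would be a remote call executed behind the data owner's own firewall.
Site = Callable[[dict], List[dict]]

def federated_query(sites: Dict[str, Site], query: dict) -> Dict[str, List[dict]]:
    # Fan the same query out to every participating site and merge the answers,
    # without ever copying the underlying records to a central store.
    results = {}
    for name, run_on_site in sites.items():
        results[name] = run_on_site(query)
    return results

# Hypothetical data holders: each returns aggregate rows, never the raw data.
def gov_archive(query):
    return [{"site": "gov", "matches": 12, "region": query["region"]}]

def commercial_feed(query):
    return [{"site": "commercial", "matches": 3, "region": query["region"]}]

print(federated_query({"gov": gov_archive, "commercial": commercial_feed},
                      {"region": "51N 0W", "window": "2012-10"}))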

The next major release is due in 2013, so we shall soon find out. Meanwhile, it is striking that a major investor here, Reed Elsevier Ventures, has a parent which invested, through Lexis, in Seisint, also a deeply government-aligned environment, and more recently in the Open Source Big Data environment, HPCC. Investing in the next generation is always going to make sense in these fast-moving markets.

