A wise man said that “Content without Technology is lame; Technology without Content is blind”. Einstein was working his way towards this conclusion, but it was in fact Timo Hannay of Macmillan/Digital Science who came out with this formulation during this week’s ePublishing Innovations Forum at the IET in London (http://www.epublishing-forum.com/). Incisive Media, who also run the Online conference and exhibition at Olympia in December, have been doing this Spring meeting for four years, and I have been their privileged chairman for each. So I know the sea change of the past half decade, I know that change just gets quicker, and I know that Timo is fundamentally right and is one of only a handful who are doing something about it. I also see that “publishing”, if it is useful to retain the term, is almost redefined every time we hold this meeting, and that the players making strides in solutioning (ugly term), collaboration and community seem to be mining the seams that have revenues and margins embedded in them.

The conference contained several beautifully worked case studies. Take Timo as an example. His themes are knowledge discovery, research management and software tools (http://www.digital-science.com/). The ability today to read chemical names, turn them into chemical structures and use those structures to cross-search literature and patent databases is a beautiful expression of what we mean when we say that we have to produce solutions that reduce costs and increase productivity. Tomorrow we will want to take this, along with his ability to track and map research patterns and structures and his investments in experiment and project management systems, and roll them into career-long, compliance-ready Electronic Lab Notebooks (ELNs). Then a few of us will sit down over a beer and reflect that Elsevier sold the ELN market leader, MDL, almost a decade ago. The circularity of markets is only a wonder to those who have been swept full circle several times!
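For the technically minded, the chemistry step is easy to sketch. Below is a minimal illustration in Python: the name-to-structure conversion is a hypothetical name_to_smiles stand-in for a real nomenclature parser such as OPSIN, the “literature database” is a toy in-memory list, and the open-source RDKit toolkit does the substructure matching. None of this, I should stress, is Digital Science’s actual pipeline.

```python
# Sketch: chemical name -> structure -> substructure search over documents.
# Assumes the open-source RDKit toolkit; name_to_smiles is a hypothetical
# stand-in for a real nomenclature parser such as OPSIN.
from rdkit import Chem

def name_to_smiles(name: str) -> str:
    """Hypothetical name-to-structure step (a real system would call a parser)."""
    lookup = {
        "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
        "benzene": "c1ccccc1",
    }
    return lookup[name.lower()]

# Toy "literature database": documents tagged with the structures (as SMILES)
# that text mining has already extracted from them.
documents = [
    {"title": "Salicylate derivatives in analgesia",
     "structures": ["CC(=O)Oc1ccccc1C(=O)O"]},
    {"title": "Aromatic solvents in synthesis",
     "structures": ["c1ccccc1", "Cc1ccccc1"]},
]

def cross_search(chemical_name: str) -> list:
    """Return titles of documents containing the named compound as a substructure."""
    query = Chem.MolFromSmiles(name_to_smiles(chemical_name))
    hits = []
    for doc in documents:
        mols = (Chem.MolFromSmiles(s) for s in doc["structures"])
        if any(m.HasSubstructMatch(query) for m in mols if m is not None):
            hits.append(doc["title"])
    return hits

print(cross_search("benzene"))  # matches the aromatic ring in both documents
```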

Then let’s take David Craig, who came to the microphone to announce that his Thomson Reuters GRC (Governance, Risk and Compliance) division (http://accelus.thomsonreuters.com/) had finalized the acquisition of World-Check the day before (said on the New York grapevine to be a $530m deal), and was now pushing hard towards the content integration and software services needed to flesh out the complete solutioning picture around regulatory compliance in all its phases. He too speaks the language of collaboration, and now appears to prefer the term “community” to “workflow”. The distinction is interesting and not an idle one. He does not want to build content-injected process models for the individual corporate units that severally and separately do compliance. He wants to build corporate engines that unite functions to get results, so that he is not tied to the future fortunes of compliance officers or finance departments or auditors or corporate counsel or tax advisers, but provides structures in which they all participate, share content and create outcomes. And if that argues for a different culture in the fully networked corporation, he also sees content creation and sharing between corporates, professionals and other participants (especially regulators) which allows risk information to be shared rapidly across the network. Again, the high ground is becoming a universal solution which is so widely plugged in that unplugging threatens the health of the participants themselves.

And then take Donal Smith. The CEO of Data Explorers (http://www.dataexplorers.com/) defined what happens to this type of process in the completely satisfying niche. He showed us how certain types of unregulated content must be collected and analysed to keep markets safe from themselves. In this case the content concerns contracts to “borrow” equity against future equity movements – the activity known as “shorting”. Markets must know what proportion of a company’s equity is already committed, so Data Explorers is a venture of necessity, using user-generated content to create indices which allow markets to work efficiently. Its operating principles are ubiquity and non-exclusivity. Process? Collaboration? It’s all here.
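The mechanics here are worth a moment, because the index itself is simple arithmetic over contributed contracts. The toy sketch below, with invented tickers and figures (and emphatically not Data Explorers’ actual methodology), aggregates borrow contracts per company and expresses them as a percentage of shares outstanding.

```python
# Toy "shares on loan" index built from contributed borrow contracts.
# Tickers and figures are invented; this is not Data Explorers' method.
from collections import defaultdict

# Each contributed record: (ticker, shares out on loan under that contract)
contracts = [("ACME", 2_000_000), ("ACME", 1_500_000), ("BOLT", 400_000)]

shares_outstanding = {"ACME": 50_000_000, "BOLT": 10_000_000}

on_loan = defaultdict(int)
for ticker, shares in contracts:
    on_loan[ticker] += shares

for ticker, borrowed in sorted(on_loan.items()):
    pct = 100.0 * borrowed / shares_outstanding[ticker]
    print(f"{ticker}: {pct:.1f}% of equity already committed")
# ACME: 7.0% of equity already committed
# BOLT: 4.0% of equity already committed
```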

I could go on. I loved the energy in the education sector, with Cambridge University Press and Global Grid for Learning applying similar models to the workload of teachers, and Microsoft, in the guise of David Langridge, their education partnerships director, coming from the other direction to position the new Office 365 as the vehicle for content integration in schools. And I am aware that by stopping here I ignore many excellent presentations that followed parallel themes. We did interviews and panels which enabled participants to see these trends at work. We looked at the future of the newspaper with Julian Sambles of the Telegraph and the future of the eBook with Tim Cooper of Harlequin (Mills and Boon). Adriana Lukas, coming from the user side as an advisor to major players like Johnson and Johnson, caused a run on the bar by exploring the powerful virtues of five widely used ad-blockers during the opening of her examination of social media as marketing. Elsewhere we discussed the importance of metadata and even paradata (could be my new word!), and finally Geoff Metzger of Superdu brought us down to earth by revealing marketing technology in a box – how to create an instant web presence (without waiting for the IT department) to promote books and services. Back to earth, and back to books, in a voyage that began with Kate Worlock, for Outsell, defining the global marketplace, its growth, strengths and weaknesses, and some of these key trends. I can now tell you how it feels to introduce one’s own daughter as a keynote speaker (Wonderful!!).

And there was so much more that I must apologize to those whom I have omitted. I wandered away from the IET (the Institution of Engineering and Technology, appropriately enough) no longer wondering why they changed their name from the Institution of Electrical Engineers. It’s the technology, stupid. And now we cannot do without it.

We have not exactly been storming the Winter Palace in St Petersburg this week. There is little that is revolutionary about any conference of librarians, publishers and academics, and the 13th meeting of the Fiesole Collection Development Retreat does not sound like a caucus of anarchists. But then, pause for a moment. We have been 60 or so mixed-discipline people, an offshoot of the famous Charleston librarians’ meetings, using a few days in a brilliant city to cogitate on the future needs of research, and the roles that librarians and publishers may play in attaining them. Anyone afraid of the argument would not come here, but would go instead to specialist meetings of their own ilk.

Youngsuk Chi, surely the most persuasive and diplomatic representative of the publishing sector, keynoted in his role as CEO at Elsevier Science and Technology. So we were able to begin with the schizophrenia of our times: on one side the majestic power of the largest player in the sector, dedicated both to the highest standards of journal quality and to the maintenance of the peer-reviewed standards of science, while on the other investing hugely, and rightly, in solutioning for scientists (ScienceDirect, Scopus, Scirus, SciVerse, SciVal…) without the ability to make those solutions universal (by including all of the science needed to produce “answers”).

I was taught on entry into publishing that STM was only sustainable through the support of the twin pillars of copyright and peer review. This week those two rocked a little in response to the earth tremors shaking scholarship and science. We reviewed Open Access all right, but this now seems a tabby cat rather than the lion whose roar was going to reassert the power of researchers (and librarians) over scholarly collections. The real force which is changing copyright is the emergence of licensing and contract systems in the network which embed ownership but defuse the questions surrounding situational usage. And the real force which is changing peer review is the anxiety in all quarters to discover more and better metrics which demonstrate not just the judgement of peers but the actual usage of scholars, the durability of scholarship, and the impact of an article rather than of the journal in which it appeared.
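The arithmetic behind that last distinction deserves a line or two. A journal-level measure averages citations across everything a journal publishes, so a single heavily cited article can flatter dozens of barely read ones, while an article-level measure keeps each item’s own record. A toy contrast, with invented numbers:

```python
# Toy contrast between a journal-level average and article-level metrics.
# Citation counts are invented for illustration.
citations = {
    "article A": 120,
    "article B": 3,
    "article C": 1,
    "article D": 0,
    "article E": 2,
}

journal_average = sum(citations.values()) / len(citations)
print(f"journal-level average: {journal_average:.1f} citations")  # 25.2

# The article-level view: each item stands on its own record.
for article, count in sorted(citations.items()):
    print(f"{article}: {count} citations")
```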

And it’s clearly game over for Open Access. The repetitive arguments of a decade have lost their freshness, and the wise heads see that a proportion of most publishing in most sectors will be Open Access, much of it controlled by existing publishers like Springer, who showed the intensity of their thinking here. But does it matter if this is 15% of output in History rising to 30% in Physics? It is a mixed economy, and my guess is that the norm will be around 15% across the board, which makes me personally feel very comfortable when I review the EPS prognosis of 2002! A few other ideas are going out with the junk as well – why, for example, did we ever get so excited about the institutional repository?

So where are the Big Ideas now? Two recurrent themes from speakers resonated with me throughout the event. We now move forward to the days of Big Data and Complete Solutions. As I listened to speakers referring to the need to put experimental data findings in places where they were available and searchable, I recalled Timo Hannay, now running Digital Science, and his early work on Signalling Gateway. What if the article is, in some disciplines, not the ultimate record? What if the findings, the analytical tools and the underlying data, with citations added for “referenceability”, form the corpus of knowledge in a particular sector? And what if the requirement is to cross-search all of this content, regardless of format or mark-up, in conjunction with other unstructured data? And to use other software tools to test earlier findings? And what if, in these sectors, no one can pause long enough to write a 10,000-word article with seven pages of text, three photos and a graph?
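What might cross-searching regardless of format look like in miniature? One answer: normalize every research object, whether article, dataset or software tool, into a minimal common record and search the lot together. The sketch below invents its own schema, identifiers and records; any production system would sit on a proper search engine rather than a list scan.

```python
# Minimal sketch: one index across heterogeneous research objects.
# Schema, identifiers and records are invented; a real system would use
# a search engine (Lucene, Solr and the like) rather than a list scan.
records = [
    {"type": "article", "id": "doi:10.1234/abc",
     "text": "kinase inhibition in tumour cells"},
    {"type": "dataset", "id": "doi:10.5555/xyz",
     "text": "raw kinase assay measurements, 96-well plates"},
    {"type": "software", "id": "hdl:999/tool1",
     "text": "curve-fitting tool for kinase assay data"},
]

def cross_search(query: str, types=None) -> list:
    """Return ids of records matching the query, optionally filtered by type."""
    q = query.lower()
    return [r["id"] for r in records
            if q in r["text"] and (types is None or r["type"] in types)]

print(cross_search("kinase"))                     # all three objects
print(cross_search("kinase", types={"dataset"}))  # just the data
```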

And where does this data come from? Well, it is already there. It’s experimental, of course, but it is also observational. It is derived from surveillance and monitoring. It arises in sequencing, in scanning and in imaging. It can be qualitative as well as quantitative, it derives from texts as well as multimedia, and it is held as ontologies and taxonomies as well as in the complex metadata which will describe and relate data items. Go and take a look at the earth sciences platform, www.pangea.de, or at the consortium work at www.datacite.org, to see the semantic web come into its own. And this raises other questions, like who will organize all of this Cloud-related content – librarians, or publishers, or both, or new classes of researchers dedicated to data curation and integration? We learnt that 45% of libraries say that they provide primary data curation, and 90% of publishers say that they provide it, but the anecdotal evidence is that few do it well and most do no more than pay lip service to the requirement.
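DataCite makes the data citation point tangible, since registered datasets get DOIs with machine-readable metadata behind them. The sketch below assumes DataCite’s public REST API and its response shape as I understand them, and the DOI is a placeholder:

```python
# Sketch: fetch the metadata registered for a dataset DOI from DataCite.
# The endpoint and response shape are assumptions based on DataCite's
# public REST API documentation; the DOI below is a placeholder.
import requests

def datacite_metadata(doi: str) -> dict:
    resp = requests.get(f"https://api.datacite.org/dois/{doi}", timeout=10)
    resp.raise_for_status()
    return resp.json()["data"]["attributes"]

meta = datacite_metadata("10.5555/example-dataset")
print(meta.get("titles"))           # e.g. [{"title": "..."}]
print(meta.get("publisher"))        # the data centre that registered it
print(meta.get("publicationYear"))
```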

Of course, some people are doing genuinely new things (John Dove of Credo, with his interlinking reference tools – www.credoreference.com – for undergraduate learning, would be a good example: he also taught us how to do a Pecha Kucha in 6 minutes and 20 slides!). But it is at least observable that the content handlers and the curators are still obsessed by content, while workflow solutions integrate content but are not of themselves content vehicles. My example would be regulatory and ethical compliance in research programmes. The content component will be considerable, but the “solution” which creates the productivity, improves the lab decision-making and reduces the costs of the regulatory burden will not be expressed in terms of articles discovered. Long years ago I was told that most article searching (as much as 70%, it was alleged) was undertaken to “prove” experimental methodology, to validate research procedures and to ensure that methods now being implemented aligned with solutions already demonstrated to have successfully passed health and safety strictures. Yet in our funny misshapen world no specialist research environment seems to exist to search and compare this facet, though services like www.BioRAFT.com are addressing the specific health and safety needs.

Summing up the meeting, we were pointed back to the role of the Web as change agent. “It’s the Web, stupid!” Quite so. Or rather, it’s not really the Web, is it? It’s the internet. We are now beyond the role of the Web as the reference and searching environment, and back down in the basement of the internet, where the communications world between researchers, supported by the ancillary industries derived from library and publishing skills, moves into a new phase of its networked existence. It takes meetings that have equal numbers of academics and librarians and publishers to provide space to think these thoughts. Becky Lenzini and her tireless Charleston colleagues have now delivered a further, 13th, episode in this exercise in the recalibration of expectations, and deserve everyone’s gratitude for doing so. And the sun shone in St Petersburg in the month of the White Nights, which would have made any storming of the Winter Palace a bit obvious anyway.
