May 13
Stroking the Winter Palace
Filed Under Blog, eBook, Industry Analysis, internet, Publishing, Reed Elsevier, Search, semantic web, STM, Workflow
Which is to say, we haven’t exactly been storming it in St Petersburg this week. There is little which is revolutionary in nature about any conference of librarians, publishers and academics, and the 13th meeting of the Fiesole Collection Development Retreat does not sound like a caucus of anarchists. But then, pause for a moment. We have been 60 or so mixed discipline people, an offshoot of the famous Charleston librarians meetings, using a few days in a brilliant city to cogitate on the future needs of research, and the roles that librarians and publishers may play in attaining them. Anyone afraid of the argument would not come here, but go to specialist meetings of their own ilk.
Youngsuk Chi, surely the most persuasive and diplomatic representative of the publishing sector, keynoted in his role as CEO at Elsevier Science and Technology. So we were able to begin with the schizophrenia of our times: on one side the majestic power of the largest player in the sector, dedicated both to the highest standards of journal quality and to the maintenance of the peer-reviewed standards of science, while on the other investing hugely and rightly in solutions for scientists (ScienceDirect, Scopus, Scirus, SciVerse, SciVal…) without the ability to make those solutions universal (by including all of the science needed to produce “answers”).
I was taught on entry into publishing that STM was only sustainable through the support of the twin pillars of copyright and peer review. This week those two rocked a little in response to the earth tremors shaking scholarship and science. We reviewed Open Access all right, but this now seems a tabby cat rather than the lion whose roar was going to reassert the power of researchers (and librarians) over scholarly collections. The real force which is changing copyright is the emergence of licensing and contract systems in the network which embed ownership but defuse the questions surrounding situational usage. And the real force which is changing peer review is the anxiety in all quarters to discover more and better metrics which demonstrate not just the judgement of peers but the actual usage of scholars, the endurability of scholarship, and the impact of an article and not the journal in which it appeared.
And it’s clearly game over for Open Access. The repetitive arguments of a decade have lost their freshness, and the wise heads see that a proportion of most publishing in most sectors will be Open Access, much of it controlled by existing publishers like Springer, who showed the intensity of their thinking here. But does it matter if this is 15% of output in History rising to 30% in Physics? It is a mixed economy, and my guess is that the norm will be around 15% across the board, which makes me personally feel very comfortable when I review the EPS prognosis of 2002! A few other ideas are going out with the junk as well – why did we ever get so excited about the institutional repository, for example?
So where are the Big Ideas now? Two recurrent themes from speakers resonated with me throughout the event. We now move forward to the days of Big Data and Complete Solutions. As I listened to speakers referring to the need to put experimental data findings in places where they were available and searchable, I recalled Timo Hannay, now running Digital Science, and his early work on Signalling Gateway. What if the article is, in some disciplines, not the ultimate record? What if the findings, the analytical tools and the underlying data, with citations added for “referenceability”, form the corpus of knowledge in a particular sector? And what if the requirement is to cross-search all of this content, regardless of format or mark-up, in conjunction with other unstructured data? And use other software tools to test earlier findings? And in these sectors no one can pause long enough to write a 10,000-word article with seven pages of text, three photos and a graph?
And where does this data come from? Well, it is already there. It’s experimental, of course, but it is also observational. It is derived from surveillance and monitoring. It arises in sequencing, in scanning and in imaging. It can be qualitative as well as quantitative, it derives from texts as well as multimedia, and it is held as ontologies and taxonomies as well as in the complex metadata which will describe and relate data items. Go and take a look at the earth sciences platform, www.pangaea.de, or at the consortium work at www.datacite.org in order to see the semantic web come into its own. And this raises other questions, like who will organize all of this Cloud-related content – librarians, or publishers, or both, or new classes of researchers dedicated to data curation and integration? We learnt that 45% of libraries say that they provide primary data curation, and 90% of publishers say that they provide it, but the anecdotal evidence is that few do it well and most do no more than pay lip service to the requirement.
Of course, some people are doing genuinely new things (John Dove of Credo with his interlinking reference tools – www.credoreference.com – for undergraduate learning would be a good example: he also taught us how to do a Pecha Kucha in 6 minutes and 20 slides!). But it is at least observable that the content handlers and the curators are still obsessed by content, while workflow solutions integrate content but are not of themselves content vehicles. My example would be regulatory and ethical compliance in research programmes. The content reference will be considerable, but the “solution” which creates the productivity, improves the lab decision making and reduces the costs of the regulatory burden will not be expressed in terms of articles discovered. Long years ago I was told that most article searching (as much as 70%, it was alleged) was undertaken to “prove” experimental methodology, to validate research procedures and to ensure that methods now being implemented aligned with solutions already demonstrated to have successfully passed health and safety strictures. Yet in our funny misshapen world no specialist research environment seems to exist to search and compare this facet, though services like www.BioRAFT.com are addressing the specific health and safety needs.
Summing up the meeting, we were pointed back to the role of the Web as change agent. “It’s the Web, Stupid!” Quite so. Or rather, it’s not really the Web, is it? It’s the internet. We are now beyond the role of the Web as the reference and searching environment, and back down into the basement of the internet as the communications world between researchers, supported by the ancillary industries derived from library and publishing skills, moves into a new phase of its networked existence. It takes meetings that have equal numbers of academics and librarians and publishers to provide space to think these thoughts. Becky Lenzini and her tireless Charleston colleagues have now delivered a further, 13th, episode in this exercise in recalibration of expectations, and deserve everyone’s gratitude for doing so. And the sun shone in St Petersburg in the month of the White Nights, which would have made any storming of the Winter Palace a bit obvious anyway.
Mar 29
Credit where Credit is Due
Filed Under B2B, Blog, data protection, Financial services, Industry Analysis, internet, privacy, Publishing, Reed Elsevier, Search, Thomson, Uncategorized, Workflow
It sometimes seemed a long way to go to find oneself engrossed in a conversation on credit referencing for small and medium-sized enterprises. However, around 3 pm Hong Kong time last Thursday I heard a conference light up with really vibrant debate, sourced from all around the Asia-Pacific region, on a subject which at once focussed regional attention and yet was symptomatic of the state of ePublishing and Information Solutions in a global networked society as a whole. The event was the Business Information Industry Association of Asia-Pacific, holding one of its sessions of joint user-provider debate within the framework of the newly launched Online Information Asia Pacific. Day 1 of the main conference had addressed wider themes, with an excellent introduction by Stephen Mak, the Hong Kong CIO, Stephen Arnold on Search, and effective work from case studies in Thailand on Web 2.0 in the context of knowledge capture and from Pebbleroad in Singapore on knowledge capture. My own contribution on the Device Wars is attached in the download section of this blog.
Outside of a full conference room were some 90 exhibitors and over two days around 1000 delegates from across the region. I hope that Incisive Media are encouraged and keep plugging away at this. The old Online conference in London is now 33 years old, and there is no doubt in my mind that in the middle of that run it was a critical meeting place for industry and users. Some of the discussions I heard in Hong Kong had exactly the cathartic flavour of those vital days of industry self-discovery. As Steve Goodall of Outsell (also a sponsor) noted in his BIIA regional and global review, this is the most significant growth area in global information markets. Even newspapers sell here!
But Day 2, for me at least, focussed on the BIIA Forum, which I attended as Chairman and where, when the conference concluded, I glowed with pride at the happy accident that had led to my five-year involvement in this organization, in support of my old friend and collaborator, Joachim Bartels, BIIA’s founder and now its chief executive. It was his inspiration to set up regional fora of users and suppliers, a most appropriate one in a region of such huge diversity and different cultural styles. Looking around a room that included users from trading organizations as diverse as Merck, National Semiconductor and Cargill, interested parties like the IFC (World Bank) and the Peoples Bank of China, and vendors of services and solutions ranging from Thomson Reuters and Lexis Nexis to SinoTrust, Veda Advantage, D&B (sponsors, and also participants from several countries) and Standard and Poor’s, one could appreciate the range of likely debate. It was when the voices began and the questions flowed – from Hong Kong itself (including an enthusiastic group from the Chinese University of Hong Kong), from Thailand, from Singapore, from Australasia, from India, from China, from Taiwan, augmented by interested participants from Europe and the USA – that the regional magic and the connectivity to global information market trends took fire.
The issues surfaced innocently enough. In a topic devoted to eliminating information asymmetries it quickly became clear that for many participants business information was becoming increasingly controversial. There were major issues concerning government-held information in the region, symptomatic both of culture and control, and of privacy and data protection legislation. Everyone recognized the role of business information services in creating value, and the utility of those services in creating credit referencing services which enabled the region’s huge and growing trade. Yet there was also an air of discontent: current content was becoming commoditized, and in particular, it was becoming much more difficult to provide reliable and verifiable information about small and medium-sized enterprises. And the problems were not confined to banks and finance houses: clearly identifying SMEs (or even defining them) was a problem for all traders in the market, especially as SMEs are generally seen as the engine of growth in any economic recovery. How well I recall this debate in the European Union in the 1990s, and how frustrating it was that nothing could ever be done at any level to alleviate it. I settled in for an interesting but fruitless discussion.
Which was not how things turned out. Instead real energy was devoted to ways of tackling this. One party, in which I found myself a dissident, sought remediation through re-regulation. Information control was the answer, and this had to be accompanied by better benchmarking to define what information should always be available on differently defined enterprises at different sizes. Enforcement of disclosure was the stumbling block. Meanwhile on the other side, the counter-argument that the internet was an economy unto itself, where every trader left an impression, seemed to me to have growing attraction. The implication here was that increasingly, as we move about the network buying and selling things, we should want to have our efforts noted and scored, so that the favourable or otherwise impression of our activities could be known to everyone else. This would be a competitive activity, and risk management value would migrate to those who were best at mining the network’s information yields.
And it was this that hung in my mind on the long trip back to London. We have now reached the stage in a networked society where the source of all information about participants in that society lies increasingly in the network itself. We now have the tools, in data mining and entity extraction, to locate and interrogate both structured and unstructured content. Increasingly, semantic enquiry, and things like this week’s announcement of MarkLogic’s experiment in this area, give confidence that we are at the application and not the speculative level. Then think of the advances in workflow modelling noted here from the biggest players, like Lexis and Thomson Reuters. I no longer seriously doubt that the answers to most information services development questions are already known, because the content needed to answer them already lies, though under-exposed, in the network. And Asia-Pacific remains undoubtedly the most stimulating part of the world if you want to think about Next Steps.