Jan 14
PPPR: A Black View
Filed Under data protection, eBook, Industry Analysis, internet, Publishing, Reed Elsevier, semantic web, social media, STM, Thomson, Uncategorized, Workflow
A few weeks ago, in “Scraps and Jottings”, I tried to reflect, while talking about the newly-launched journal Cureus, an increasing feeling that both traditional publishers and the mujahideen of the Open Access world (yes, that good Mullah Harnad and his ilk) are being overtaken by events. The real democratization which will change this world is popular peer review. Since the mujahideen got in first and named the routes to Open Access Paradise as Green and Gold, and publishers seem quite happy to work within these definitions, especially if they are Gold, I have no choice but to name the Post Publication Peer Review process as the Black Route to Open Access. You read it here first.
This thought is underlined by the announcement, since I wrote my previous piece, that the Faculty of 1000 (F1000Research) service has emerged from its six-month beta and can now be considered fully launched. Here we have a fully developed service, dedicated to immediate “publication”, inclusive of all data, totally open and unrestricted in access, and enabling thorough and innovative refereeing as soon as the article is available. And the refereeing is open – no secrets of the editorial board here, since all of the reports and commentaries are published in full with the names and affiliations of referees. The F1000Research team report that in the last six months they have covered major research work from very prominent funders – Wellcome, NIH etc – and that they now have 200 leading medical and biological science researchers on their International Advisory panel and more than 1000 experts on the Editorial Board (see http://f1000research.com). And since they have a strategic alliance with figshare, the Macmillan Digital Science company, “publishing” in this instance could be as simple as placing the article in the researcher’s own repository and opening it up within F1000Research. And since other partners include Dryad and biosharing, the data can also be co-located within specialized data availability services. Saves all those long waits – as soon as it is there, with its data as well, the article is ready to be referenced alongside the academic’s next grant application. The fact that all current publishing has been accompanied by the relevant data release (for which read genomes, spreadsheets, videos, images, software, questionnaires etc) indicates that this too is not the barrier that conventional article publishing made it out to be.
Ah, you will say, the problem here is that the article will not get properly into the referencing system, and without a “journal” brand attached to it there will be a tendency to lose it. Well, some months ago Elsevier agreed that Scopus and Embase would carry abstracts of these articles, and, as I write, PubMed has agreed to inclusion once post-publication review has taken place. But then, you will say, these articles will not have the editorial benefits of orthodox journal publishing, or appear in enhanced article formats. Well, nothing prevents a research project or a library licensing Utopia Docs, and nothing inhibits a freelance market of sub-editors selling in services if F1000Research cannot provide them – this is one labour market which is dismally well staffed at present.
Now that F1000Research has reached this point it is hard to see it not moving on and beginning to influence the stake which conventional publishing has already established in conventional Open Access publishing. And F1000 obviously has interesting development plans of its own: its F1000Trials service is already in place to cover this critical part of bio-medical scholarly communication, and, to my great joy, it has launched F1000Posters, covering a hugely neglected area for those trying to navigate and annotate change and track developments. Alongside Mendeley and the trackability of usage, post-publication review seems to me a further vital step towards deep, long-term change in the pattern of making research available. My new year recommendation to heads of STM publishing houses is thus simple: dust off those credit cards, book a table at Pied à Terre, and invite Vitek round for lunch. He has not sold an STM company since BMC, but it looks as if he has done the magic once again.
But now I must end on a sad note. The suicide this week of Aaron Swartz, at the age of 26, is a tragic loss. I understand that he will be known as one of the inventors of RSS – and of Reddit – and he had been inventing and hacking since he was 13. His PACER/RECAP project controversially “liberated” US common law to common use. He was known to suffer from severe depression and it appears that he ended his life in a very depressed state. But here is what Cory Doctorow (http://boingboing.net/2013/01/12/rip-aaron-swartz.html) had to say about what might have been a contributory factor:
“Somewhere in there, Aaron’s recklessness put him right in harm’s way. Aaron snuck into MIT and planted a laptop in a utility closet, used it to download a lot of journal articles (many in the public domain), and then snuck in and retrieved it. This sort of thing is pretty par for the course around MIT, and though Aaron wasn’t an MIT student, he was a fixture in the Cambridge hacker scene, and associated with Harvard, and generally part of that gang, and Aaron hadn’t done anything with the articles (yet), so it seemed likely that it would just fizzle out.
Instead, they threw the book at him. Even though MIT and JSTOR (the journal publisher) backed down, the prosecution kept on. I heard lots of theories: the feds who’d tried unsuccessfully to nail him for the PACER/RECAP stunt had a serious hate-on for him; the feds were chasing down all the Cambridge hackers who had any connection to Bradley Manning in the hopes of turning one of them, and other, less credible theories. A couple of lawyers close to the case told me that they thought Aaron would go to jail.”
Well, one thing we can be quite certain about. Protecting intellectual property or liberating it cannot ever be worth a single human life.
Jan 8
Acquisition + Collaboration = Complete?
Filed Under B2B, Big Data, eBook, Industry Analysis, internet, Reed Elsevier, STM, Uncategorized
Sit down to read this with the mind of a research engineer in the public or the private sector. On the screen in front of you there are links to the foremost research resources that you are likely to use in everyday life. Behind them are other links to a host of services that you may use. Above all, you want to be able to search this corpus of knowledge as an entity, and you want the alerts and intelligence services that you use to reflect updates and developments across the entire waterfront of engineering knowledge. And the data types are pretty different. Some of it is classic data, and may occur in the evidential material that underlies academic research, or in reports and findings on performance or failure of materials. Other information exists as design specifications, or patents, or standards, or as structured academic articles or ebooks. Some exists in index entries and as citations or references. Still more is available online in newspaper files, video archives, blogs, tweets and magazine morgues. Engineering research was never easier, but is still not easy. And few subjects are as fragmented as engineering – or have a more important task than ensuring that knowledge is shared across those fragmentations when necessary, for the sake of progress, and the health and safety of everyone. Here is a classic Big Data argument waiting to be made.
Yet as it came to the Web few areas were more diverse than engineering. Despite the early attempts of Engineering Village (later bought by Elsevier), it was not until Warburg Pincus funded GlobalSpec that real vertical search arrived, and with it the focus on a huge user-contributed library of specifications. This service is now owned by IHS, who are able to align with it their equally vast collections of patents and standards. So is this the starting place for all enquiry, given that GlobalSpec also indexes the content of vastly authoritative sources like IEEE? Well, almost – but the academic articles remain in the locked service environments of journal publishers like Elsevier, the leading player in this field. So we still have to sign up for all those journals wherever they are published? Well, yes, until yesterday, that is, when Elsevier announced the acquisition of Knovel (http://www.elsevier.com/about/press-releases/corporate/elsevier-acquires-knovel,-provider-of-web-based-productivity-application-for-the-engineering-community). Knovel indexes all of the 100 professional and scholarly journal publishers in this sector, including IET. It is a fast-expanding online source which claims to have added 20% more data in the past year. So what do we now need on our engineer’s dashboard?
Well, we certainly need GlobalSpec/IHS, with links to IEEE, and we certainly need Elsevier/Knovel, with links to ScienceDirect, but wouldn’t it be better to have a single access and complete cross-search in a Big Data context? Just a minute, though. Way back in 2006 a really good database visionary called Scott Virkler, then VP Business Development at GlobalSpec, helped to put in place a strategic collaboration with Elsevier, and after that became Elsevier’s VP of search strategy. So are those links still in place? And can you easily cross search all of these files from one place as Scott undoubtedly intended? I ask because it seems to me that consolidation and collaboration is the name of the game, and the game need have no losers. Alexander van Boetzelaer, who runs the corporate markets sector of Elsevier, has a fine record in collaboration. He and his team created GeoFacets, for the oil and gas industry, and IHS was one of their partners in doing that. But in order for collaboration to work partners have to be determined to make it work, creating interfaces with shared ownership, developing ways of exchanging user-derived data, and sharing marketing efforts and knowledge where necessary. There is still a tendency for collaboration to develop a market of two – and then end in a situation which is just one step away from what users really want.
All this takes time, and since it is over a decade since Elsevier invested in Engineering Village we appear to have plenty of that. Knovel was not even founded then, but it now amounts to a very considerable step forward in Elsevier’s further work with corporate markets. It claims 700 corporate customers and will add real muscle to the corporate markets drive at Elsevier, but we need to bear in mind that acquisition is no longer what it once was among the major market players in information. Thankfully we have matured since the 1990s, when acquisition was about corporate ego and machismo when it was not driven by a desire to hoover up all the proprietary content in the sector. Now we know that content is not king, we can buy securely in the search to create more marketing connections while developing premium value-added services designed, whether collaboratively or not, towards the complete satisfaction of the customer service need. And all that Knovel data and all that GlobalSpec data will not do that in separate containers unless they can be combined and intermixed in the user’s workflow. The next chapter here is the next level of service development, and, given the differentiation of their resources and the fragmentation of the market, it seems to me unlikely that either Elsevier or IHS can do this alone in engineering. There was never a better moment, as in many markets, for talking to the apparent, but unreal, competitor.