Sep 27
RPA: Generating Workflow and Automating Profits?
Filed Under Artificial intelligence, B2B, Big Data, Blog, data analytics, Financial services, Industry Analysis, internet, machine learning, RPA, semantic web, Uncategorized, Workflow
RPA: Robotic Process Automation. The new target of the Golden Swarm of software VC investors. Sometimes misleadingly known, in more refined versions, as IPA (Intelligent Process Automation, not warm English beer).
In my view, the central strategic question for anyone who owns, or collects and manages, news and information, educational and professional content, or prices and market data relating to business verticals and commodities is now simply this: when I license data into process automation, what is my expectation of the life of that annuity revenue stream, and how fast do my user connections and my sensitivity to market requirements decay? Over the past five years we have seen an industry predicated on the automation of mundane clerical work take huge strides into high-value workflows. Any doubt on this point can be dispelled by looking at the speed of advance of automated contract construction in the legal services market. The ability to create systems that assemble precedents, check due diligence, create drafts and amend them is as impressive as it is widespread. The fact that many law firms charge as much for signing off on the results as they did for the original work says more about their margins than it does about the process. But that message is the clearest of all: process automation software is expensive, but eventually does wonders for your margins in a world where revenue growth is hard to come by for many.
And, at least initially, RPA systems are greedy eaters of content. Some early players, like Aravo Solutions, became important intermediaries for information companies like Thomson Reuters and Wolters Kluwer, creating custom automation for governance, risk and compliance systems. Their successors, productising the workflow market, have been equally enthusiastic about licensing premium content, but unlike their custom predecessors they have found that, while branded starter content adds value, it matters less over time. If the solution works effectively and reduces headcount, that seems to be enough. And over time, systems can become self-sufficient in content terms, updating information online or finding open data sources to reduce licensing costs.
The ten companies in this sector (which included Century Tech as an example of learning as a workflow) that I started to follow three years ago have matured rapidly. Three have become clear market leaders in the past six months. Automation Anywhere and UiPath in the US, together with Blue Prism in Europe, have begun, from admittedly low starting points, to clock up annualised revenue growth rates of 100-500%+. But a note of caution is needed, and was importantly provided by Dan McCrum, writing in the FT on 13 September (https://ftalphaville.ft.com/2018/09/13/1536811200000/The-improbably-profitable–loss-making-Blue-Prism/). He demonstrated that by writing all of its sales costs (mostly through third parties) to fixed administration costs, Blue Prism was able to claim EBITDA margins close to 100% and score a £1.7 billion valuation on the London AIM market, while revenues were £38m and losses were still building. UiPath (revenues $100m, revenue growth 500%, valuation $1bn) and Automation Anywhere (valuation $1.8bn) follow a similar trajectory.
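McCrum's accounting point can be sketched in a few lines. The numbers below are invented for illustration (only the rough £38m revenue scale comes from the article, and the cost split is a hypothetical, not Blue Prism's actual accounts): the reported margin depends entirely on which costs are counted against revenue.

```python
# Illustrative sketch: reclassifying sales costs as excluded "fixed
# administration" makes a thin margin look close to 100%.
# All figures are invented for illustration, in GBP millions.

def margin_pct(revenue, included_costs):
    """Margin on revenue after deducting only the costs counted in the metric."""
    return 100.0 * (revenue - included_costs) / revenue

revenue = 38.0       # roughly the revenue scale cited
sales_costs = 34.0   # invented: heavy third-party channel costs
other_costs = 2.0    # invented

# Counting sales costs against revenue:
print(margin_pct(revenue, sales_costs + other_costs))   # ~5%
# Writing sales costs into excluded "administration":
print(margin_pct(revenue, other_costs))                 # ~95%
```

The same revenue line supports either a marginal business or a near-100% margin story, which is why the classification choice mattered to the valuation.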
All content markets are looking at a future where machines use more content than people. This makes it more important than ever that information is sourced in ways that can be verified, audited, validated and scored. This is not just an “alternative facts” or “fake news” issue: it is about trust in the probity of the infrastructures we will have to rely upon. Content owners need to be able to sell trust with content to stay in place in the machine age, at least until we know where the trusted machines are kept. In the meantime it will be interesting to see which information, data and analytics companies acquire one of these new software players, or which of these new highly valued players uses the leverage of that valuation to move on a branded and trusted information source.
Jul 23
The End Of The Beginning Of The End?
Filed Under Artificial intelligence, Blog, data analytics, healthcare, Industry Analysis, internet, machine learning, Publishing, Reed Elsevier, semantic web, STM, Workflow
Is this Elsevier’s “music industry” moment? As more news emerges of German academics denied continuing access to journals, while Projekt Deal talks in Germany appear becalmed, there will certainly be anti-commercial publishing opinion in academe that hopes so. The whole debate on German access to Elsevier looks more and more like Britain’s Brexit talks, with one party in each case stating its minimum terms and not seeing any reason to settle for less, while the other reiterates “final positions” without getting any closer to a deal. And Elsevier will be as keenly aware as the poor UK trade negotiators that a false move in the push for a deal with someone who does not need to compromise simply hardens resistance to compromise. Those of us who have relied on the “surely good sense will prevail amongst people of good will on both sides” argument begin to despair of both sets of negotiations.
So what happens in a digitally networked world when parties fail to agree? Those with most skin in the game get hurt first. When the music industry faced the problems of download and disc burning, it wasn’t strict enforcement of copyright that saved it from users who knew what they wanted and had the technology to do it. Instead, music owners and distributors were forced to accept co-operation with the technology players as the price of continued participation in a, for them, smaller but still profitable market. And with that came consolidation, a different sort of investment profile and a new relationship with the only really powerful people in a networked world: the end users.
And in Elsevier’s world those users have never been more powerful. As Joe Esposito rightly suggests in Holly Else’s Nature article (19 July), there is Sci-Hub for a start. But then there is more than that. Social networking has already been widely used to distribute articles. Many academics are acutely aware of who they most want their readers to be, and regularly circulate to them. “Good enough” publishing on pre-print servers proliferates. Institutional and individual reputation management raises its game. It is not that the whole and holy progress of traditional academic publishing comes to a halt; simply that water finds its way round a dam, and then gets used to, and deepens, the new water courses. Do we really need articles? Can we just report the data? Can that, and our discussions about it, be cited? High-level science research already carries huge cost and time pressures around research publication. Elsevier must be anxious in Germany about creating a breakpoint that drives publication in-house.
At this point I always find it useful to ask a silly question. And a favourite is “What would Steve Jobs have done with this problem?” Irrational responses do sometimes win markets. And Jobs after all responded to the levelling off of consumer computer markets by inventing the computer as a hub, to run iPhone, iPad, iPod etc. So, Steve, what do you think?
STEVE JOBS: “Well, I would scrap this Projekt Deal for a start. It’s going nowhere. Just walk away. Tell them you are not interested anymore…
Then I would go to the Federal government and say ‘Can you get the research institutes, the universities and everyone concerned with research funding round a table? We have a plan to increase German research funding by 5-7% per year for five years without it costing the German taxpayer a cent.’
Then I would say to my people at Elsevier: we are the technically best equipped company in the sector. For 25 years we have invested in Science Direct, Scirus, Scopus, SciVal and the rest. We know the future is not in journals or even in content but we find it hard to divorce from the past and embrace the future. So we need a learning experience, to teach us how our next market works. But it comes at a price.
Then I would say to the German government: We want the contract for intelligent services and risk management in German research. We will put all our technologies into this deal. Its scope will be providing your research communities with ways of mapping prior and current work, in Germany and elsewhere; evaluating success or failure in current work; providing intelligent tools to give every researcher full-beam headlights in their niche; showing German research where its major collaborative possibilities and competitive pressures lie; and giving government and institutions unique insight into where quality of outcomes lies, and where current funding is being wasted. We offer you a five-year deal to populate all your systems with our knowledge, and since we are learning, building alongside you, and developing some new things on which you can earn royalties in future, we also offer you a special price. Now can we start negotiating?
Oh, yeah, I almost forgot. We also have a special workflow deal: help us make the smoothest, most hassle-free workflow for uploading quality-assured articles and you will never pay more than $1000 per article in APCs. And, as I always said at the end of presentations, One More Thing… access to ALL journals is completely free to all Elsevier-registered German users for the life of this contract.”
There are of course no instant solutions and no predictability. But RELX investors, industry analysts and anyone trying to get an IPO off the ground will be hoping that someone somewhere will be able to find a breakpoint – here as well as with Brexit.