Jun 17
Data and Analytics are now B2B Central
Filed Under Artificial intelligence, B2B, Big Data, Blog, data analytics, Financial services, Industry Analysis, internet, machine learning, RPA, semantic web, Uncategorized, Workflow
Sometimes you go to a conference that just crackles with the excited atmosphere of a moment that has come. When Houlihan Lokey were putting together their conference on data and analytics, which took place last week, I can well imagine a conversation that went “we need to attract at least 150, so let’s invite 350”. We have all done it. And then comes a day when almost 300 of the invitees turn up, it’s standing room only in front of the coffee urn, and the room pulsates with conversation, networking and commentary. So it was at the Mandarin Oriental in London last Wednesday, and there were other virtues as well. Working in panels of corporate leaders and entrepreneurs, a short conference with short sessions had real insight to offer. There is a lesson there for all of us who still put on three-day events: short and intensive works, though a double track does leave a worry that one might have missed something, as well as an appetite for more.
After a keynote by Phil Snow, the CEO of FactSet, the conference resolved into four panels covering insurance, research and IP, risk and compliance, and lastly a group of founders talking about their companies. And while companies like FactSet now take a fully integrated view of the marriage of content and technology with data and analytics, it is also clear that companies in the sectors covered straggle across the entire spectrum, from a few APIs and data feeds right through to advanced algorithmic experimentation and prototyped machine learning applications. And everywhere we spoke about what AI might mean to the business. But nowhere did we define what exactly it might mean, or demonstrate very tangible, real examples of it in action. And this for me strengthens a prejudice. It is one thing to look back on the algorithms that we have been using for five years and refer to them in publicity as an “AI-driven service”, but quite another to produce decision-making systems capable of acting autonomously and creatively.
Yet the buzz of conversation in the tearoom was all about people wanting to take advantage of the technology breakthroughs and data availability, and wanting to invest in opportunistic new enterprises. This is much better than the other way round, of course: many of us remember the period after the dot-com bust, when the money dried up and investors only wanted to look at historic cash flows. But as the data and analytics revolution presses forward, there have to be satisfying opportunities to create real returns in a measurable timespan. I do not think this will be a problem, but I do think that we have to expect disappointments after the exaggerated wave of expectations around AI and machine learning. And from conferences like this it is becoming clearer and clearer that workflow will remain a key focus. Creating longer and longer strands of work-process robotics, and using intelligent technology to provide decision-making support and then improved decision-making itself, seems the likely path. While RPA (robotic process automation) is making real inroads into clerical process, it is not yet having an impact either on nontrivial decision-making or on the business of bringing wider ranges of knowledge to bear on decisions normally made by that most fallible of qualities, human judgement.
Looking back, there was another element that did not surface at Wednesday’s fascinating event. Feedback is what improves machines and makes the development track accelerate. But as we build more and more feedback loops into these knowledge systems, we learn more and more about the behaviour of customers, and the gaps between how people actually behave and what they say (or we think) they want grow larger. The “exhaust data” resulting from usage does not get much of a mention on these occasions. But if, for example, we looked at the field of scholarly communications and the research and IP markets, I could at least make the argument that content consumption at some point in the future will be the prerogative of machines only. The idea of researchers reading research articles or journals will come to seem bizarre. There will simply be too much content in any one discipline. The most important thing will be for machines to read, digest, understand and map the knowledge base, allowing researchers to position their own work in terms of the workflow of the domain. And one other piece of information will then become vitally important. Researchers will need feedback to know who has downloaded their findings, how they were rated, and whether other scholars’ knowledge maps matched their own. Great contextual data drawn from a wider and wider range of sources is fuelling the revolution in data and analytics. Great analysis of the feedback data coming off these new solutions will drive the direction of travel.
None of this lies at the door of Houlihan Lokey. By providing a place for a heterodox group of investors and entrepreneurs to mingle and talk they do us all a favour, and in the process demonstrate just how hot the data and analytics field is at the present moment.
Sep 27
RPA: Generating Workflow and Automating Profits?
Filed Under Artificial intelligence, B2B, Big Data, Blog, data analytics, Financial services, Industry Analysis, internet, machine learning, RPA, semantic web, Uncategorized, Workflow
RPA: Robotic Process Automation. The new target of the Golden Swarm of software VC investors. Sometimes misleadingly known in more refined versions as IPA (Intelligent Process Automation, not warm English beer).
In my view the central strategic question for anyone who owns, or collects and manages, news and information, educational and professional content, or prices and market data relating to business verticals and commodities, is now simply this: when I license data into process automation, what is my expectation of the life of that annuity revenue stream, and how fast do my user connections and market-requirement sensitivity decay? Over the past five years we have seen an industry predicated on the automation of mundane clerical work take huge strides into high-value workflows. Any doubt around this thought can be dispelled by looking at the speed of advance of automated contract construction in the legal services market. The ability to create systems that assemble precedents, check due diligence, create drafts and amend them is as impressive as it is widespread. The fact that many law firms charge as much for signing off on the results as they did for the original work says more about their margins than it does about the process. But that message is the clearest of all: process automation software is expensive, but eventually does wonders for your margins in a world where revenue growth is hard to come by for many.
And, at least initially, RPA systems are greedy eaters of content. Some early players, like Aravo Solutions, became important middlemen for information companies like Thomson Reuters and Wolters Kluwer in creating custom automation for governance, risk and compliance systems. Their successors, productising the workflow market, have been equally enthusiastic about licensing premium content, but unlike their custom-build predecessors they have found that, while branded starter content has value, it becomes less important over time. If the solution works effectively and reduces headcount, that seems to be enough. And over time, systems can become self-sufficient in terms of content, often updating information online or finding open data sources to diminish licensing costs.
The ten companies in this sector (which included Century Tech as an example of learning as a workflow) that I started to follow three years ago have matured rapidly. Three have become clear market leaders in the past six months: Automation Anywhere and UiPath in the US, together with Blue Prism in Europe, have begun, from admittedly low starting points, to clock up 100-500%+ annualised revenue growth rates. But a note of caution is needed, and was importantly provided by Dan McCrum writing in the FT on 13 September (https://ftalphaville.ft.com/2018/09/13/1536811200000/The-improbably-profitable–loss-making-Blue-Prism/). He demonstrated that by booking all of its sales costs (mostly incurred through third parties) as fixed administration costs, Blue Prism was able to claim close to 100% EBITDA margins and score a £1.7 billion valuation on the London AIM market, while revenues were £38m and losses are still building. UiPath (revenues $100m, revenue growth 500%, valuation $1bn) and Automation Anywhere (valuation $1.8bn) follow a similar trajectory.
All content markets are looking at a future where machines use more content than people. This makes it more important than ever that information is sourced in ways that can be verified, audited, validated and scored. This is not just an “alternative facts” or “fake news” issue: it is about trust in the probity of the infrastructures we will have to rely upon. Content owners need to be able to sell trust with content to stay in place in the machine age, at least until we know where the trusted machines are kept. In the meantime it will be interesting to see which information, data and analytics companies acquire one of these new software players, or which of these new high-value players uses the leverage of that valuation to move on a branded and trusted information source.