A Hitchhiker’s Guide to AI that Actually Works for Business


In this special guest feature, Alex Hoff, Senior VP of Product Management & Marketing at Vendavo, argues that an AI or ML solution of any practical use needs to be an explainable, interpretable white-box model, and that it will be both more usable and more effective if it allows human insight to be combined with artificial intelligence – a centaur, or perhaps a cyborg. Alex brings enterprise software solutions to market that enable B2B corporations to operationalize their commercial excellence strategies. He has been with Vendavo in various roles for over ten years, working with customers in industrial manufacturing, the process industries, consumer product manufacturing, and business services.

For the last 18 years, I have been using, explaining, and now building AI-based software solutions for the business realm, mostly for the use case of price optimization. I have deployed an AI-based solution at my own company and sold its outputs to our skeptical executives; in a later role, I convinced potential customers to buy and deploy our software solutions; and now I get to build these solutions alongside my data scientist coworkers. I’ve even picked up the pieces from previous, failed projects – mostly from my competitors or from well-intentioned “do it yourself” internal projects. I offer here two key learnings from these experiences.

AI Needs to be Explainable

The first is that AI solutions for business need to be understandable by business managers who don’t have advanced degrees in quantitative fields. As the hype around AI and ML matures into more practical perspectives, well-accepted terms have even emerged for this, like “Explainable AI” (XAI) and “white-box models”.

In my first experience with an AI-based solution, I inherited a project that was quietly failing. My predecessors had selected a great solution that leveraged some brilliant data science, crunching terabytes of transaction data through very non-linear, multivariate regression models supported by Bayesian inference. Their mistake was that they proudly rolled out this decision-support system to business managers and executives, many of whom had not asked for a decision-support system, and for whom the fancy words I just used were as Greek as it gets.

When I inherited the project, the algorithms – which were yielding really good results – were being ignored more than 50% of the time. Few people trusted the outputs and recommendations. After a long and arduous campaign of demystification and explaining, I eventually got everyone comfortable with the reality that the fancy-sounding words just meant that some very powerful computers were crunching numbers in ways that mimicked how average business managers thought about pricing, elasticity, revenues, and profits. In the end, we got our acceptance rate up above 90% and could confidently document a 700% ROI, but it took a LOT of explaining. The software didn’t offer explanations; we had to do that ourselves.

I’ve heard many similar stories while picking up after failed projects that took other approaches. The hard truth is that you can have the most powerful and accurate models, but if the business users don’t trust them (which usually requires some level of explanation), then you’ve got a wasted effort and a failed investment; it has even cost some people their jobs.

In most use cases, an AI or ML solution for business needs to include some means of explanation so that the results can be understood, interpreted, and trusted. As one expert in machine learning put it, “While the machine-learning objective might be to reduce error, the real-world purpose is to provide useful information.” Interpretability is almost always an important requirement for an AI or ML solution to be useful and effective in a business setting, and it should be one of the fundamental design principles from the start. This may sound simple, but it gets difficult for quants tempted to build the slightly better model with more predictive power that is also harder to explain to the unwashed masses of everyday business users.
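To make this tangible, here is a minimal sketch in Python of what “explainable by construction” can look like: a model whose every recommendation decomposes into named, business-readable drivers. The driver names and coefficients are hypothetical, purely for illustration, and not any particular product’s model.

# A toy pricing model that is explainable by construction: every
# recommendation is a sum of named drivers. All names and numbers
# below are hypothetical, purely for illustration.
BASE_PRICE = 100.0
coefficients = {
    "volume_discount": -8.0,   # larger orders earn lower prices
    "region_premium": 5.0,     # some regions bear a premium
    "competitor_gap": 3.0,     # headroom above the nearest competitor
}

def explain_recommendation(features: dict[str, float]) -> float:
    """Print a price recommendation as a sum of business-readable drivers."""
    total = BASE_PRICE
    print(f"Starting point      {total:+8.2f}")
    for name, value in features.items():
        contribution = coefficients[name] * value
        total += contribution
        print(f"  {name:<18} {contribution:+8.2f}")
    print(f"Recommended price   {total:8.2f}")
    return total

explain_recommendation({"volume_discount": 1.0,
                        "region_premium": 1.0,
                        "competitor_gap": 0.5})

A business manager can read that breakdown without knowing what a regression coefficient is – and that is the point.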

IA (Intelligence Augmentation) not AI (Artificial Intelligence)

The second key learning is that most AI solutions for real business applications need to be pragmatic blends of artificial and human intelligence – something called intelligence augmentation (IA), not just AI. Too many IT analysts and marketing VPs have hyped the promise of AI and ML and created unrealistic (and naïve) expectations for many would-be users. At one conference, I sat through three presentations by software companies and consultants that started exactly the same way: with the story of IBM’s Deep Blue, and how it defeated the chess grandmaster Garry Kasparov. And this was before “The Queen’s Gambit” suddenly made chess cool. “Here’s the story we’ve been telling ourselves about AI for decades: it’s man versus machine, creators versus their creation, a ball of wrinkly meat versus a smooth block of silicon.” But what few people talk about is that after Kasparov lost to IBM’s AI, he and others developed the concept of combining artificial and human intelligence in tournaments of “Centaur Chess,” where competitors of all types faced off: supercomputers, human grandmasters, and mixed teams of humans and AIs. “Not surprisingly, a Human+AI Centaur beats the solo human. But — amazingly — a Human+AI Centaur also beats the solo computer.”

In the real world of everyday business problems, data is sparse, often historical (backward-looking), lacking the insights that humans may have collected in an analog manner, and in some cases simply lacking observable evidence for the user’s intended strategy (hypothesis). Humans can have critical insights or context that the best model, utilizing the best data available, simply can’t provide. The two together – artificial intelligence AND human intelligence – usually yield not only the most trustworthy results but the best results as well.

One practical example of this principle is what we often refer to as “business rules” – constraints layered into the AI-enabled solution to prevent outcomes that are unacceptable from a business perspective. These need to be as simple to manage as the rules wizard in a desktop office suite – something that can be configured without an advanced degree in operations research or statistics, as in the sketch below.
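Here is a minimal sketch of the idea in Python, with hypothetical guardrails (a margin floor and a maximum step size – not any particular product’s rule set): a couple of plainly named limits that clamp whatever the model recommends.

from dataclasses import dataclass

@dataclass
class BusinessRules:
    """Guardrails a business manager might configure; names are hypothetical."""
    floor_margin_pct: float = 0.15  # never price below cost * (1 + floor)
    max_change_pct: float = 0.10    # never move a price more than 10% at once

def apply_rules(recommended: float, current: float, cost: float,
                rules: BusinessRules) -> float:
    """Clamp the model's recommended price to the configured constraints."""
    # Rule 1: respect the minimum acceptable margin.
    price = max(recommended, cost * (1 + rules.floor_margin_pct))
    # Rule 2: cap the step size so prices don't jump disruptively.
    lo = current * (1 - rules.max_change_pct)
    hi = current * (1 + rules.max_change_pct)
    return min(max(price, lo), hi)

# The model suggests a steep cut; the rules soften it to -10%.
print(apply_rules(recommended=80.0, current=100.0, cost=75.0,
                  rules=BusinessRules()))  # -> 90.0

The important design choice is that the rules live in plain, named settings the business can own, while the optimization stays inside the model.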

Another example is the simple ability to provide overrides and “manual” adjustments to predictions and outcomes. A purist might object, but features like these are key to adoption and success. And you can always design the solution to observe such overrides and use them to train your model to be even better.
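Here is a small sketch of that feedback loop, assuming a simple CSV log; the file name and fields are hypothetical. Each override is recorded alongside the model’s recommendation, the human’s final price, and a reason, so the next retraining run can treat the human’s choice as a label:

import csv
from datetime import datetime, timezone

OVERRIDE_LOG = "price_overrides.csv"  # hypothetical log location

def log_override(product_id: str, recommended: float,
                 final_price: float, reason: str) -> None:
    """Record a manual adjustment so the model can learn from it later."""
    with open(OVERRIDE_LOG, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            product_id, recommended, final_price, reason,
        ])

def load_override_examples(path: str = OVERRIDE_LOG) -> list[dict]:
    """Turn logged overrides into labeled examples for the next retraining run."""
    examples = []
    with open(path, newline="") as f:
        for ts, pid, rec, final, reason in csv.reader(f):
            examples.append({
                "product_id": pid,
                "model_price": float(rec),
                "accepted_price": float(final),  # the human's choice is the label
                "reason": reason,
            })
    return examples

log_override("SKU-1042", recommended=112.50, final_price=105.00,
             reason="key account, contract renewal pending")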

In summary, if you want an AI or ML solution of any practical use, it needs to be an explainable, interpretable white-box model, and it will be both more usable and more effective if it allows human insight and intelligence to be combined with the artificial kind – a centaur, or perhaps a cyborg.


