
Interview: Tom Wilde, CEO of Indico

I recently caught up with Tom Wilde, CEO of Indico, to get his perspective on how AI is revolutionizing the modern enterprise, including AI use cases, successful and failed AI initiatives, unstructured vs. structured content, democratizing AI, and citizen data scientists. Tom brings 25 years of experience in solving the complex problems of digital content to the role of CEO of Indico. Prior to joining Indico, Tom was the Chief Product Officer at Cxense (pronounced "see-sense"), a leading data management provider, and founder of Ramp, an enterprise video content management company. Tom also held senior roles at Fast Search, Miva Systems, and Lycos. He has extensive experience with company building and venture-backed startups and holds an MBA in Entrepreneurial Management from Wharton.

insideBIGDATA: Where are you seeing enterprises gaining the most value out of AI and machine learning? What are the most common use cases?  Is there real ROI there?

Tom Wilde: There are plenty of examples of AI being applied at the point of customer interaction, and those are exciting, but Indico is applying AI more behind the scenes – to automate manual, back-office business processes. So, our view may be a bit different.  We call it intelligent process automation, or IPA for short.

The use cases we see most often involve using AI, or IPA, to automate document-based workflows such as contract analytics, audit planning and reporting, RFP analysis and composition, sales opportunity workflow automation, customer support analysis and automation, appraisal and claims analysis, etc. These use cases are fundamentally different from most applications of AI in that they deal primarily with unstructured content – all the text, documents and images that make up over 80% of the data in most enterprises. Rather than following simple task-based automation, workflows with unstructured data require some form of AI-based decisioning in order to augment or automate the workflow.

The ROI with these use cases is very real: up to 85% faster cycle times, up to a 4X increase in organizational capacity and throughput, and the ability to redeploy valuable resources to higher-value activities for the business.

insideBIGDATA: Where do users succeed and fail in their AI initiatives?  What are some of the most common stumbling blocks?

Tom Wilde: There are definitely a few we see over and over.

  1. Setting out without any real business outcome in mind – Experimentation with AI is important, but it’s also very difficult to do that without a real use case to test it on. Do you have a use case with a specific business problem you are trying to address? If not, it’s important to find one. It does not have to be big, but you need a defined business problem with a defined set of outcomes to experiment on and learn from.
  2. Assuming a common understanding of the business process in question – As I mentioned earlier, we’re focused on automating existing, back-office business processes. In most organizations, you’d expect these to be pretty well-defined, but the surprising reality is that different people in an organization have very different views on how a specific business process works. It’s really important to start with a common understanding of the different steps in a process before you can apply AI to automate it. Don’t make assumptions. The shared understanding is probably not as close as you think.
  3. Being unrealistic about what AI can and can’t do – When it comes to unstructured content, many organizations come into their project with the assumption that AI can tell them what the right answer is inside a large pool of data. That’s a mistake. AI is great at discovering what maps to an already defined desired state; e.g., if this is what compliant contract language looks like, AI can then automate the process of identifying which contracts are compliant, which are not, and then recommend next steps. But, if you can’t define the desired outcome, AI can’t do it for you.
  4. Not having access to the right data – You have to have data that can define your desired state. This may involve having your business SMEs label or annotate examples of what reflects the right answer vs. examples that represent the wrong answer. It does not necessarily have to be a lot of them, but you do need very clear examples of right and wrong to put AI to work against a larger data set. We also suggest that clients look internally for data vs. scraping it from the Internet. It’s generally much easier to work with.
  5. Data science teams going it alone – AI projects can only go so far without people who have expertise in the business outcome or process being improved. Too many AI projects get too far down the road before the business people get pulled in. When this happens, it can be really difficult to get a project back on track. It’s important that both groups understand their essential roles in the process. We encourage clients to task their IT/data science staff to go out and work with the business to find use cases that might benefit from AI and intelligent process automation.
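
To make point 4 concrete, here is a minimal, purely illustrative sketch (not Indico's product, and far simpler than a real model) of what labeling examples of "right" vs. "wrong" buys you: a handful of SME-annotated contract clauses train a toy bag-of-words classifier, which can then score new clauses against the defined desired state. The clause texts, labels, and centroid-scoring approach are all assumptions for the sake of the example.

```python
from collections import Counter
import math

# Toy labeled examples: SMEs annotate clauses as compliant / non-compliant.
labeled = [
    ("payment due within 30 days of invoice date", "compliant"),
    ("liability capped at the total fees paid", "compliant"),
    ("payment due whenever the buyer sees fit", "non-compliant"),
    ("unlimited liability for all indirect damages", "non-compliant"),
]

def vectorize(text):
    # Bag-of-words: token -> count.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Training": build one summed bag-of-words centroid per label.
centroids = {}
for text, label in labeled:
    centroids.setdefault(label, Counter()).update(vectorize(text))

def classify(text):
    # Predict the label whose centroid is most similar to the new clause.
    vec = vectorize(text)
    return max(centroids, key=lambda lbl: cosine(vec, centroids[lbl]))

print(classify("payment due within 45 days of invoice"))  # -> compliant
```

The point of the sketch is the workflow, not the algorithm: a small set of clear right/wrong examples defines the desired state, and the model generalizes that definition to unseen documents.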

insideBIGDATA: Indico talks about unstructured vs. structured content.  What do enterprises need to consider as they apply AI to the different types of content they have?

Tom Wilde: Structured data (think data that resides in tables and spreadsheets with pre-determined values and meaning) is easy to automate because it’s black and white. Tell AI exactly what you need it to do with structured data and it can do it better, faster and cheaper than a human. There is no judgment required, nor does it need to learn and improve with experience. AI solutions like RPA are perfect for these types of use cases.

Unstructured and semi-structured content is fundamentally different. To automate business processes and workflows that involve a lot of this type of data, AI has to be able to make accurate judgments based on the information and context available. This has been a huge stumbling block to date because of the amount of training data needed to create viable learning models, and the inability of existing AI solutions to work effectively with unstructured content.
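
The structured/unstructured distinction can be sketched in a few lines of code (the field names and approval rule here are hypothetical, purely for illustration):

```python
# Structured record: every field has a pre-determined name and meaning,
# so the decision reduces to a fixed, deterministic rule -- no judgment
# required, which is exactly the territory where RPA-style automation shines.
def approve_invoice(record):
    return record["currency"] == "USD" and record["net_days"] <= 45

structured = {"amount": 1200.0, "currency": "USD", "net_days": 30}
print(approve_invoice(structured))  # -> True

# The same information as unstructured text: there is no "currency" or
# "net_days" field to test against. Recovering those values from free text
# is the judgment step that needs a trained model rather than a fixed rule.
unstructured = "Invoice for $1,200, payable in dollars within thirty days."
```

The rule-based half runs today in any RPA tool; the free-text half is where the training-data and model-quality stumbling blocks described above come in.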

insideBIGDATA: We hear a lot about RPA these days; big funding announcements, huge valuations, etc.  How do you distinguish between what companies are doing with RPA vs. AI? How much overlap is there?  How should enterprises distinguish between them?

Tom Wilde: RPA has absolutely been one of the hottest areas of tech in the last two years. Just the other day, UiPath announced that it raised another $568 million. Not to mention Automation Anywhere and Blue Prism. RPA has a simple, easy-to-understand value prop – process automation and cost efficiency – so it’s an easy win for any company looking for productivity gains. And those companies are seeing huge success as a result.

As I mentioned, RPA is great with repetitive, deterministic business processes involving structured data — where there is no judgment involved. But it’s not suited to business processes involving unstructured content, where cognitive ability and context are required. This is where AI solutions like intelligent process automation (IPA) come in. IPA doesn’t replace or compete with RPA. It complements it by handling those workflows that can’t be automated using RPA and plugging unstructured content, in structured form, back into business process flows.

insideBIGDATA: “Democratizing AI” and “Citizen Data Scientists” are two (of many) terms we hear a lot from industry analysts.  Are these realistic goals for enterprises?

Tom Wilde: I think the trend is very real, but don’t expect to turn your line of business folks into data scientists that understand algorithms and data models. Instead, leverage them to help identify the right use cases and to label the right data inputs so that everyone can agree on what represents the desired outcomes in the context of the business goal at hand. This also helps data scientists and line of business professionals set realistic expectations for their initiatives.

There are some capabilities like reducing the amount of data required to train models and making it easier to label data that are important enablers of this trend. But in my view, what will really fuel it is the ability for data scientists to collaborate more closely with the line of business.

