Managing AI Risks: Multiple Stakeholders Need Access to the Right Data and Insights


There is no doubt that AI is exploding across businesses, and not just in the moon shots that make news headlines. Because of the speed and scale at which AI can operate, it is being used in the critical operations and decision making of everyday businesses, on tasks that teams of people would previously have handled.

As AI has often emerged from a purely data science function, the models in production today have frequently lacked the formal business review process that should take place across multiple stakeholders, and they tend to operate in silos across the organization, each built using different technologies. As an example, Deloitte's 2020 State of AI in the Enterprise report notes that one bank carried out an inventory of its models that use advanced or AI-powered algorithms and arrived at a staggering total of 20,000.

With metrics like this in mind, it is not surprising that business leaders are now cognizant not just of the AI technology being used, but of the business risks associated with its use. As per Deloitte's report, these risks are potentially strategic, operational, and ethical. In fact, more than half of AI adopters reported major or extreme concerns about these potential risks. Drilling into the data, we see that (amongst many categories) 62% of respondents have major or extreme concerns about vulnerabilities in their AI, 57% about new and changing regulation, and 53% about ethical issues.

This problem is also recognized in KPMG's report on Trust in AI, which states: "Ultimately, trust in AI and machine learning depends upon accountability of AI results, but few organizations have solid practices in place." The report goes further: "Many organizations also lack the tools and expertise to gain control and a full understanding of algorithms…".

Business process frameworks are an important part of the solution, adding the necessary checks and balances (whether retrospectively or on new projects). However, frameworks alone are not the solution. It is critical that they are informed by the right data from the appropriate AI models, and that they are adopted by the right people.

It is critical, then, that these issues of AI risk are addressed with clear, transparent, and up-to-date data on the operation of each AI model. This brings forth two points:

  1. A clear understanding, backed up with data, is needed of each AI model's operation. Each AI model operates differently, and it is critical to use a data-driven approach to understand the risks associated with the model. This is not just about explainability – yes, that is an important part – but a full view into each model's operation is required. For example, is the model operating fairly, without unwanted bias? Are there weaknesses that cause vulnerabilities? Clear, factual data is required on this.
  2. A multi-stakeholder approach is required. The task of assessing AI model risk is so critical to businesses that stakeholders from data science, legal, risk, compliance, and IT should all have access to data on the AI models' operation. Direct access to the insights should be available to all stakeholders (rather than summaries in slides from colleagues). This ensures the insights are viewed from the perspective of each stakeholder and not lost in translation between teams.
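To make the first point concrete, here is a minimal sketch of the kind of factual, data-driven bias check referred to above. The metric (demographic parity difference) is one common fairness measure amongst many, and the decision data below is entirely made up for illustration; it does not represent any particular platform or methodology.

```python
# Hypothetical sketch: surfacing unwanted bias with a simple
# demographic parity difference over a model's decisions.

def selection_rate(decisions):
    """Fraction of positive (e.g. approved) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in selection rate between any two groups.
    A value near 0 suggests similar treatment across groups."""
    rates = [selection_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Illustrative loan-approval decisions (1 = approved) per group.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approval rate
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approval rate
}

gap = demographic_parity_difference(decisions)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A single number like this is exactly the kind of clear, factual data point that non-technical stakeholders can review directly, without needing it re-interpreted by another team.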

It is also important to note that managing AI risk is not a one-time exercise. Unlike a traditional rules-based system, AI models are continually learning and evolving. Whilst the risk level of a model may recently have been evaluated as low, that assessment can quickly become outdated as new data is introduced. Evaluating the risk of an AI model is a continual process.

Regulation is an important driver for this. In the EU this is already in place to some extent with the GDPR; in the US, various legislation is on the horizon, most notably the proposed Algorithmic Accountability Act. However, whilst regulation will become an enforceable driver for addressing AI risk and accountability, business leaders are already adopting these principles. Whilst I was in discussion with a senior executive of a US firm, they commented to me:

“Whilst we don’t have the equivalent of the GDPR in place here yet, we are following the principles of it now because it’s the right thing to do.”

I think this says it all.

About the Author

Dr Stuart Battersby is Chief Technology Officer of Chatterbox Labs and holds a PhD in Cognitive Science. Chatterbox Labs is an Enterprise AI software company whose AI Model Insights platform delivers Trustworthy and Fair AI.

