Don’t overlook independence in Responsible AI

The arrival of ChatGPT and other large language models (LLMs) has brought AI ethics into mainstream discussion.  This is welcome because it shines a light on a field that has been tackling these issues for some time: Responsible AI.  Responsible AI doesn’t just apply to ChatGPT and LLMs; it applies to any application of AI or machine learning that can affect people in the real world.  For example, AI models may decide whether to approve your loan application, progress you to the next round of job interviews, put you forward as a candidate for preventative healthcare, or predict whether you will reoffend while on parole.

Whilst the field of Responsible AI is gaining traction in the enterprise (in part driven by imminent regulation such as the EU’s AI Act), there are issues with current approaches to implementing it.  Possibly due to limited AI and data literacy across large organizations, the task of Responsible AI is often thrown to the data science teams.  These teams are usually made up of scientists tasked with designing and building effective, accurate AI models (most often using machine learning techniques).

The key point here is that the teams that build the models (and, by association, the technologies they use) should not also be tasked with objectively evaluating those models.

Fields outside of AI have a long and effective history of requiring independence in audits.  As required by the Securities and Exchange Commission (SEC) in the United States, the auditor of a company’s finances must be fully independent from the company in question.  From the SEC: “Ensuring auditor independence is as important as ensuring that revenues and expenses are properly reported and classified.”  

Independence is also a key requirement in the Model Risk Management (MRM) process – a process by which the statistical models developed in financial institutions are independently tested and verified.  The three levels of MRM (Model Development, Model Validation and Internal Audit) should each maintain strict independence from each other.

We should therefore not ignore this valuable history of audit independence when implementing Responsible AI.  In this field, AI models and data must be measured so that aspects such as fairness, disparity, privacy and robustness can be quantified and assessed against an organization’s processes, principles, and frameworks.
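As a rough illustration (not the author’s platform, and using hypothetical column names and toy data), independent measurement of this kind can start with a standard fairness metric computed directly from a model’s saved predictions, entirely outside the code that trained the model:

```python
# Minimal sketch: compute the disparate impact ratio from scored output alone.
# Column names ("prediction", "protected_group") and group labels are hypothetical.
import pandas as pd

def disparate_impact_ratio(df, prediction_col="prediction",
                           group_col="protected_group", privileged="group_a"):
    """Ratio of favourable-outcome rates: unprivileged group vs privileged group."""
    priv_rate = (df.loc[df[group_col] == privileged, prediction_col] == 1).mean()
    unpriv_rate = (df.loc[df[group_col] != privileged, prediction_col] == 1).mean()
    return unpriv_rate / priv_rate

# Toy example: a ratio well below 1.0 suggests the model favours the
# privileged group and warrants further review.
scores = pd.DataFrame({
    "prediction":      [1, 0, 1, 1, 0, 1, 0, 0],
    "protected_group": ["group_a"] * 4 + ["group_b"] * 4,
})
print(disparate_impact_ratio(scores))  # 0.33 for this toy data
```

Because such a metric needs only the scored output and the protected attribute, it can be run by a separate team, with separate tooling, on any model that produces predictions.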

Independence in Responsible AI should apply to both the people carrying out the assessments and the technology that they use to do it.  This is important because:

  • People may be defensive of the models they’ve built.  This is understandable, as they’ve likely invested a lot of time and effort into building them; but for the same reason they cannot objectively evaluate their own work.
  • AI models are often built and trained using custom code written by data scientists.  People make mistakes in every line of work; in this context, those mistakes become errors or bugs in the code.  Good software practice promotes code reuse, so the same (potentially buggy) code is likely to be used to evaluate the models.
  • In the design of an AI model and curation of data, people make assumptions and judgement calls throughout that process (and these are often codified in software).  A thorough independent process must not rely on those assumptions.
  • Automated software tools may build models for a data scientist (these technologies are often called AutoML tools).  They’re sold as quicker, easier and cheaper than building a model manually.  However, if they also provide the technical measurement of the models they’ve just built, they’re simply grading their own homework.
  • An enterprise (or government) organization will likely have many models, not just one.  To govern these models effectively at scale, the quantitative metrics must be comparable between models.  If model build teams invent new metrics they deem appropriate for each of their models, comparing them against corporate standards at scale becomes nearly impossible (a brief sketch of this idea follows this list).
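Continuing the hypothetical sketch above (and reusing its disparate_impact_ratio function), the same standard metric can be applied to every model’s scored output so that results are directly comparable across a portfolio; the file layout and the 0.8 review threshold are illustrative assumptions, not a prescribed standard:

```python
# Hypothetical sketch: apply one shared metric to every model's predictions
# so the results can be compared against a single corporate standard.
import glob
import pandas as pd

REVIEW_THRESHOLD = 0.8  # illustrative threshold; an organization would set its own

results = []
for path in glob.glob("scored_outputs/*.csv"):      # one prediction file per model
    scored = pd.read_csv(path)
    ratio = disparate_impact_ratio(scored)          # same metric for every model
    results.append({"model": path,
                    "disparate_impact": round(ratio, 3),
                    "needs_review": ratio < REVIEW_THRESHOLD})

print(pd.DataFrame(results).sort_values("disparate_impact"))
```

The point is not this particular metric or threshold, but that a shared, independently computed set of metrics makes portfolio-level governance tractable.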

By bringing wider teams and technologies into the Responsible AI process you also benefit from a diverse set of skills and viewpoints.  The task of Responsible AI requires skills in ethics, law, governance, and compliance (to name just a few), and practitioners of these skills need to be armed with independent quantitative metrics that they can rely upon.

As technologies such as ChatGPT raise awareness of the ethical issues associated with AI, more and more leadership executives are becoming cognizant of the unintended consequences of their own AI.  Whilst they are not going to understand the technical detail of their AI, an effective Responsible AI process gives them confidence that the appropriate guardrails are in place.

Whilst the fields of AI and machine learning are fast moving, and teams are just getting to grips with the ethical and regulatory issues associated with them, the principles of effective audits are not new.  As teams design their Responsible AI processes, it is worth taking a moment to look at what is already known.

About the Author

Dr Stuart Battersby is Chief Technology Officer of Chatterbox Labs and holds a PhD in Cognitive Science. Chatterbox Labs is a Responsible AI software company whose AI Model Insights platform independently validates enterprise AI models and data.

