The Need for Transparency and Explainability in EU AI Regulation

AI systems can analyse data and detect patterns at unprecedented scale and speed, and their use cases and sophistication are growing rapidly. We’ve only scratched the surface of AI’s potential benefits for humanity. With responsible development, the technology could continue to drive far-reaching change, from democratising access to scientific data to fighting disease and mitigating climate change.

However, across industries and in the eyes of the public, there is a growing backlash against these emerging technologies. Concerns around data security and potential job losses from automation have become increasingly salient, as evidenced by the recent Hollywood strikes, in which SAG-AFTRA and the Writers Guild protested, in part, the prospect of scripts being written by AI.

The public’s growing mistrust of AI underscores the need for thoughtful regulation. As governments hurry to develop policies governing AI use, they face the immense pressure of enabling innovation that can improve lives while protecting individuals and societies from potential harm. This balancing act requires that regulations be adaptive and developed collaboratively amongst lawmakers, researchers, and technology experts. It also requires that AI developers themselves take responsibility for the transparency, explainability, and fairness of their systems.

The Need for Nuanced Governance 

Recent AI regulatory proposals from the EU and other governments have focused on crucial areas such as limiting the use of AI for mass surveillance and requiring developers to publish the training data behind public chatbots. These are important first steps. However, sweeping, one-size-fits-all AI governance could hinder the progress of socially beneficial AI tailored to different domains. Healthcare AI has profoundly different risks and requirements than AI used in hiring tools or autonomous vehicles. Prescriptive regulations applied uniformly could inadvertently restrict innovation that improves lives.

Governments must draft regulation that accounts for differing domains and use cases. Moreover, regulations should enable appropriate oversight without being so restrictive that they stifle progress. Finding the right balance necessitates input from technologists and researchers, in addition to lawmakers. It will require evolving governance that keeps pace with rapid AI advances. Most importantly, it will need nuanced policies finely tuned for different contexts, not blanket restrictions. With a collaborative, adaptable approach, regulation can provide necessary safeguards while still fostering AI development across sectors. 

The Role of Ethical Technology Development 

While governments work to shape AI regulation, technology companies have an obligation to build AI systems ethically. Responsible AI development starts with transparency: organisations should publish details of the training data and models they use, enabling external scrutiny and giving end users a better understanding of the systems they rely on.

This includes providing citations for any datasets or research papers leveraged to train AI systems. Details on model architectures, hyperparameter configurations, and other technical specifics should also be disclosed where feasible. AI tools must also be explainable – it should be clear how algorithms make decisions that impact people’s lives.  
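
As a rough illustration, such disclosures can be published in a machine-readable form alongside the model itself. The sketch below is a minimal example in Python; the `ModelCard` structure, its field names, and the example values are illustrative assumptions for this article, not any regulatory or industry standard.

```python
"""A minimal, illustrative sketch of a machine-readable model disclosure.

The ModelCard structure and field names are assumptions for illustration
only; they do not reflect any regulatory or industry standard.
"""
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelCard:
    model_name: str
    architecture: str                  # e.g. model family, depth, parameter count
    training_data: list[str]           # citations for datasets used in training
    hyperparameters: dict[str, float]  # key configuration choices
    intended_use: str                  # the domain the system was built for
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    model_name="document-classifier-v2",  # hypothetical system
    architecture="fine-tuned transformer encoder, 12 layers",
    training_data=[
        "Internal support-ticket corpus, 2019-2023 (anonymised)",
        "Public benchmark dataset (citation)",  # placeholder citation
    ],
    hyperparameters={"learning_rate": 3e-5, "epochs": 4.0, "batch_size": 32.0},
    intended_use="Routing customer documents; not for decisions about individuals",
    known_limitations=["Underperforms on non-English text"],
)

# Publishing the card as JSON lets auditors, regulators, and end users
# inspect the same disclosure the developers work from.
print(json.dumps(asdict(card), indent=2))
```

Pairing a disclosure like this with the released model gives everyone, from auditors to end users, a consistent point of reference for what the system was trained on and how it is meant to be used.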

Rigorous testing protocols should be used to probe for failure modes, biases, and other problems before release. Systems must also be monitored after deployment to detect model drift or performance degradation over time. Fostering responsible AI requires ongoing vigilance and commitment to ethical principles throughout the development lifecycle.
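
To make the monitoring point concrete, the sketch below shows one common drift check, the population stability index (PSI), which compares a model input or score distribution at training time against its live distribution. The function, the example data, and the thresholds are illustrative assumptions, not a prescribed method.

```python
"""Illustrative post-deployment drift check using the population stability
index (PSI); the thresholds below are common rules of thumb, not a standard."""
import numpy as np

def population_stability_index(reference: np.ndarray, live: np.ndarray,
                               bins: int = 10) -> float:
    # Bin edges come from the reference (training-time) sample.
    edges = np.percentile(reference, np.linspace(0, 100, bins + 1))
    # Clip live values so out-of-range observations land in the end bins.
    live = np.clip(live, edges[0], edges[-1])

    ref_counts = np.histogram(reference, bins=edges)[0]
    live_counts = np.histogram(live, bins=edges)[0]

    eps = 1e-6  # floor the proportions to avoid log(0) and division by zero
    p_ref = np.maximum(ref_counts / len(reference), eps)
    p_live = np.maximum(live_counts / len(live), eps)

    # PSI = sum over bins of (p_live - p_ref) * ln(p_live / p_ref)
    return float(np.sum((p_live - p_ref) * np.log(p_live / p_ref)))

# Hypothetical example: model scores at validation time vs. in production.
rng = np.random.default_rng(0)
validation_scores = rng.normal(0.0, 1.0, 10_000)
production_scores = rng.normal(0.3, 1.1, 10_000)  # distribution has drifted

psi = population_stability_index(validation_scores, production_scores)
if psi > 0.25:    # above 0.25 is often read as significant drift
    print(f"PSI={psi:.3f}: significant drift, retrain or investigate")
elif psi > 0.10:  # 0.10-0.25 is often read as a moderate shift
    print(f"PSI={psi:.3f}: moderate shift, monitor closely")
else:
    print(f"PSI={psi:.3f}: distribution looks stable")
```

Checks like this run cheaply on a schedule, so a team can catch a shifting input distribution well before it shows up as degraded decisions for end users.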

Establishing ethical principles to guide design choices at every step of development is imperative. Companies leading in AI research should form internal review boards to evaluate their systems, just as medical and academic institutions utilise ethics oversight. Technologists and companies that self-impose standards for transparency, explainability, and fairness will not only avoid punitive future regulation but also help inform policymakers about feasible oversight options.

The Path Forward: Collaboration to Unlock AI’s Potential 

The global debate on AI governance is only getting started. While proposals from the EU signal a shift toward more assertive oversight, we are still in the early stages of determining ideal policies. As this debate continues and policies evolve, it is critical that lawmakers and technologists collaborate closely, recognising their shared goal of steering AI’s rapid development toward a positive impact. 

Artificial intelligence holds immense promise to transform society for the better. Realising that promise in a responsible way will require diligence, cooperation, and unceasing commitment to ethical progress from all involved. By embracing transparency, we can build an AI future that represents the best of human ingenuity.  

The opportunity is within our grasp if we choose collaboration and openness over opacity, and shared benefit over zero-sum competition. Together, we can craft policies and develop technologies that unlock AI’s immense potential while protecting our freedoms and values.

About the Author

Anita Schjøll Abildgaard is CEO and Co-Founder of Iris.ai 
