Regulate the Use Cases—Not AI Itself


Several months back, AI leaders expressed caution about federal government involvement in the technology’s development and deployment. However, a major survey found that nearly 4 in 10 Americans were more concerned than excited about the rise of generative AI, pushing those leaders to think hard about the checks and balances required to build and maintain trust. The shift was underscored when OpenAI’s Sam Altman and Google’s Sundar Pichai testified before Congress, asking for the very technology they helped create to be regulated.

Now, governments must decide the best course of action. European lawmakers, for example, recently drafted a law that would regulate the development of AI products, including ChatGPT, and how companies use datasets to train large language models (LLMs). However, there could be a much more effective approach to regulating AI without slowing innovation.

Considering specific use cases

At first glance, creating parameters and regulatory frameworks for AI development is an attractive solution: regulating the technology during the research and development phase seems like a direct way to make these tools safer. However, doing so would significantly slow the pace of innovation and progress the industry has seen take off in recent months. So, what if we regulated specific use cases instead?

By approaching regulation through the lens of use cases, licensing the business applications of AI models rather than requiring licenses to create the models themselves, requirements can be tailored to how the technology is used and in which industry. This approach allows developers to fine-tune their technical work without the looming pressure of shifting regulatory standards. It also levels the playing field, allowing smaller companies to break through without navigating layers of red tape.

Another factor to consider is that every industry has unique challenges to solve, with some requiring a deeper level of oversight than others. Regulating use cases allows governments to make these important distinctions. Take healthcare as an example. Proponents of generative AI have long discussed how the technology can revolutionize the healthcare sector, and Google Cloud’s recent partnership with the Mayo Clinic highlights what’s at stake. Under the new deal, Mayo Clinic medical professionals will use an enterprise search function that lets workers interpret data such as a patient’s medical history with a simple query, even when the data is stored across multiple locations and formats.
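To make the idea concrete, here is a minimal sketch of what “one query over data stored in multiple locations and formats” might look like in code. The record stores, fields, and keyword matching below are hypothetical and illustrative only; they do not reflect Mayo Clinic’s data or Google Cloud’s actual API.

```python
from dataclasses import dataclass

@dataclass
class Record:
    source: str       # e.g. "EHR", "lab_results", "imaging_notes" (hypothetical)
    patient_id: str
    text: str         # free-text content already normalized to plain text

def search_patient_history(query, patient_id, stores):
    """Naive keyword match across every store. A production system would use
    embeddings, ranking, and access controls, but the flow is the same:
    one query, many heterogeneous sources, one combined result list."""
    terms = query.lower().split()
    hits = []
    for store in stores:
        for rec in store:
            if rec.patient_id == patient_id and any(t in rec.text.lower() for t in terms):
                hits.append(rec)
    return hits

# Toy usage: two "systems" holding different slices of the same patient's history.
ehr = [Record("EHR", "p-001", "History of type 2 diabetes, on metformin since 2019")]
labs = [Record("lab_results", "p-001", "HbA1c 7.2 percent on most recent draw")]
print(search_patient_history("diabetes medication history", "p-001", [ehr, labs]))
```

The point of the sketch is the shape of the workflow, not the matching logic: a clinician asks one question, and the search layer does the work of spanning systems that were never designed to be queried together.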

The two organizations are excited about how the tool will reduce burnout and free up time that can be used to better care for patients. Yet the risks of this approach cannot be overlooked. Generative AI is prone to ‘hallucinations’, confidently producing false or unsupported output, a risk that grows when humans have no oversight of the process. This is especially dangerous in healthcare, where patients rely on physicians to make quick, accurate decisions when treating illnesses, making human involvement throughout the process essential.

Questionable instances have already begun to appear. An oncology nurse at UC Davis Medical Center was alerted by an AI system that a patient had sepsis, but, drawing on her many years of experience in the field, she was confident the flag was wrong. Hospital rules nevertheless forced her to follow protocol and draw blood from the patient, which risked exposing him to infection and drove up his hospital bill. In the end, the algorithm was wrong, and the patient was not septic.
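For illustration, here is a minimal sketch of the kind of human-in-the-loop gate that use-case regulation could require before an AI alert triggers an invasive step. The alert structure, confidence threshold, and override rule are hypothetical assumptions, not UC Davis’s actual protocol or any vendor’s API.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    condition: str           # e.g. "sepsis"
    model_confidence: float  # 0.0 - 1.0, as reported by the model
    patient_id: str

def protocol_should_run(alert, clinician_concurs, auto_threshold=0.95):
    """Hypothetical policy: treat an alert as automatically actionable only at
    very high model confidence; below that, the clinician's judgment decides
    whether an invasive step (e.g. a blood draw) actually happens."""
    if alert.model_confidence >= auto_threshold:
        return True               # still logged and auditable, but actionable
    return clinician_concurs      # human override governs everything else

alert = Alert(condition="sepsis", model_confidence=0.62, patient_id="p-001")
print(protocol_should_run(alert, clinician_concurs=False))  # False: no forced blood draw
```

Under a rule like this, the nurse’s judgment would have stopped the blood draw, while the alert itself would still be recorded for review.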

This is not to say that AI doesn’t have a place in healthcare, but rather to highlight the risks of failing to regulate the specific use cases that require additional human involvement.

By regulating specific commercial use cases, governments can show the public that they take potential risks seriously and are dedicated to ensuring the safe implementation of AI. It is also a strong first step toward opening a dialogue, keeping humans in the loop, and making society more comfortable with using AI to improve how we work and live.

About the Author

CF Su, VP of ML, Hyperscience. CF brings over 15 years of R&D experience in the tech industry. He has led engineering teams at fast-paced start-ups as well as at large Internet companies. His expertise spans search ranking, content classification, online advertising, and data analytics. Most recently, CF was the Head of Machine Learning at Quora, where his teams developed ML applications for recommendation systems, content understanding, and text classification. Before that, he held technical leadership positions at Polyvore (acquired by Yahoo), Shanda Innovations America, and Yahoo Search, and was a senior researcher at the Fujitsu Lab of America. To date, CF’s industry contributions include 14 U.S. patents and more than 20 technical papers.
