Big Tech is Likely to Set AI Policy in the U.S. We Can’t Let That Happen


Innovation is key to success in any area of tech, but for artificial intelligence, innovation is more than key – it’s essential. The world of AI is moving quickly, and many nations – especially China and Europe – are in a head-to-head competition with the US for leadership in this area. The winners of this competition will see huge advances in many areas – manufacturing, education, medicine, and much more – while the left-behinds will end up dependent on the good graces of the leading nations for the technology they need to move forward.

But new rules issued by the White House could stifle that innovation, including the innovation coming from small and mid-size companies. On October 30, 2023, the White House issued an “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which seeks to develop policy on a wide range of issues relating to AI. Many would agree that we do need rules to ensure that AI serves us safely and securely. But the EO, which calls for government agencies to make recommendations on AI policy, makes it likely that no AI companies other than the industry leaders – near-oligopolies like Microsoft, IBM, Amazon, Alphabet (Google), and a handful of others – will have input on those recommendations. With AI such a powerful technology, and one so important to the future, it’s natural that governments would want to get involved, and the US has done just that. But the path the President has proposed is likely to stifle, if not outright halt, AI innovation.

Pursuing important goals in the wrong way

A 110-page behemoth of a document, the EO seeks to ensure, among other things, that AI is “safe and secure,” that it “promotes responsible innovation, competition, and collaboration,” that AI development “supports American workers,” that “Americans’ privacy and civil liberties be protected,” and that AI is dedicated to “advancing equity and civil rights.” The EO calls for a series of committees and position papers, to be released in the coming months, that will shape policy – and, crucially, limitations – on what can, or should, be developed by AI researchers and companies.

Those certainly sound like desirable goals, and they come in response to valid concerns voiced both inside and outside the AI community. No one wants AI models that can generate fake video and images indiscernible from the real thing, which would make it impossible to trust anything we see. Mass unemployment caused by the new technologies would be undesirable for society and would likely lead to social unrest – bad for rich and poor alike. And racially or ethnically imbalanced data-gathering mechanisms that skew databases would, of course, produce skewed results in AI models – besides exposing the operators of those systems to a world of lawsuits. It’s in the interest not just of the government, but of the private sector as well, to ensure that AI is used responsibly and properly.

A larger, more diverse range of experts should shape policy

At issue is the way the EO seeks to set policy, relying solely on top government officials and the largest tech firms. The Order calls for reports to be developed based on research and findings by dozens of bureaucrats and politicians – from the Secretary of State to the Assistant to the President and Director of the Gender Policy Council to “the heads of such other agencies, independent regulatory agencies, and executive offices” that the White House could recruit at any time. It is on the basis of these reports that the government will set AI policy. And the likelihood is that officials will draw much of the information for these reports, and base their policy recommendations, on the work of top experts who already work for the leading firms, while ignoring or excluding smaller and mid-size firms – which are often the true engines of AI innovation.

While the Secretary of the Treasury, for example, is likely to know a great deal about money supply, interest rate impacts, and foreign currency fluctuations, they are less likely to have in-depth knowledge about the mechanics of AI – how machine learning would affect economic policy, how database models utilizing baskets of currencies are built, and so on. That information will have to come from experts – and officials will likely seek it out from the largest, most entrenched corporations that are already deeply enmeshed in AI.

There’s nothing wrong with that in itself, but we can’t ignore the innovative ideas and approaches found throughout the tech industry, not just at the giants. The EO needs provisions to ensure that these smaller companies are part of the conversation and that their ideas are taken into consideration when policy is developed. Such companies, according to many studies, including several by the World Economic Forum, are “catalysts for economic growth both globally and locally,” adding significant value to national GDPs.

Many of the technologies being developed by the tech giants, in fact, are not the fruits of their own research but the result of acquisitions of smaller companies that invented and developed products, technologies, and even whole sectors of the tech economy. Startup Mobileye, for example, essentially invented the camera- and sensor-based alert systems, now almost standard in new cars, that warn drivers to take action to avert an accident. And that’s just one example among the hundreds of such companies acquired by Alphabet, Apple, Microsoft, and other tech giants.

Driving Creative Innovation is Key

It’s input from small and mid-sized companies that we need in order to get a full picture of how AI will be used – and of what AI policy should be. Relying on the AI tech oligopolies for policy guidance is almost a recipe for failure: as a company gets bigger, red tape and bureaucracy almost inevitably get in the way, and innovative ideas fall by the wayside. Allowing the oligopolies exclusive control over policy recommendations will simply reinforce their leadership, handing them a regulatory competitive advantage rather than stimulating real competition and innovation – fostering a climate that is exactly the opposite of the innovative environment we need to stay ahead in this game. The fact that proposals will have to be vetted by dozens of bureaucrats doesn’t help, either.

If the White House feels a need to impose these rules on the AI industry, it has a responsibility to ensure that all voices – not just those of industry leaders – are heard. Failure to do that could result in policies that ignore, or outright ban, important areas where research needs to take place – areas that our competitors will not hesitate to explore and exploit. If we want to remain ahead of them, we can’t afford to stifle innovation – and we need to ensure that the voices of startups, those engines of innovation, are included in policy recommendations.

About the Author

Dr. Anna Becker is CEO and cofounder of Endotech.io, where she leads the AI/ML teams. Her deep-learning algorithms have managed nearly a billion dollars in assets under management (AuM) and have been deployed to manage institutional money for more than a decade. Following her Ph.D. in Artificial Intelligence from the Technion, Dr. Becker founded and sold several AI companies in the FinTech space, including Strategy Runner.
