AI Ethics and The New Digital Divide


General sentiment toward AI has been trending steadily negative over the last couple of years. More and more stories in the regular news cycle depict AI companies as bad actors. And, what's more concerning, algorithms themselves are starting to be perceived as evil invisible hands that shape our lives in negative ways.

Despite the hype that followed the renaissance of AI with the advent of Deep Learning in the early 2010s, it's not the singularity that people fear nowadays, but the effects that AI has on their day-to-day lives as well as on big-picture events like Brexit or the 2016 US election.

This negative public perception is already being shaped by politicians into laws designed, in theory, to protect the general public against these negative effects on society. Since mid-2018, the industry has had to deal with the draconian provisions of the GDPR that affect AI specifically.

On the positive side, AI practitioners are starting to realize that we need to take ethical positions on the projects we get involved with. Otherwise, we risk public perception of our industry skewing further and further negative.

The entire AI field needs to engage in serious conversations about the ethics of the products we create, or we will face the consequences. Another AI winter is entirely possible, but this time it wouldn't be triggered by our over-promises; it would be triggered by society's perception of us and our creations.

The digital divide is also, in my opinion, exacerbating this problem. I'm not referring to the divide between those who have access to computing devices and information and those who do not, but to the divide between those who can create and understand AI applications and those who can't.

In the past five years, the bar to accessing powerful AI architectures has been lowered dramatically by projects like Keras. But what good is access to these tools if only a handful of corporations can actually feed them the large quantities of data on which these architectures really shine?
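To illustrate just how low that bar has become, here is a minimal sketch, not taken from the article, of a small classifier defined in Keras. The layer sizes and input shape are illustrative assumptions; the point is how few lines separate anyone from a working deep learning model, provided they have data to feed it.

```python
# Minimal Keras sketch: a small image classifier in a handful of lines.
# Shapes and sizes are illustrative (e.g., 28x28 grayscale, 10 classes).
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Flatten(input_shape=(28, 28)),    # flatten each image to a vector
    layers.Dense(128, activation="relu"),    # one hidden layer
    layers.Dense(10, activation="softmax"),  # probabilities over 10 classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training is a single call once you actually have the data:
# model.fit(x_train, y_train, epochs=5)
```

The architecture is trivial to write; the training data is the scarce resource, which is precisely the asymmetry discussed next.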

Corporations like Google and Facebook collect essentially all the data. Smaller companies have to make do with the small data sets we can create in-house and the few publicly available data sets that allow commercial use, or pay thousands of dollars in the very few data markets that exist.

I know that the following opinion will not be a popular one, but I firmly believe that academia is doing a disservice to society by not fully open-sourcing the models and data sets it creates. Many publicly funded projects release data sets intended for research purposes only, despite the fact that their funding came from regular taxpayers as well as taxes paid by businesses.

Governments could help level the playing field by requiring publicly funded academic research to fully open-source its results. This would allow that research to flow back to small businesses, giving them the ability to compete with the industry incumbents. At the same time, governments would need to strongly encourage businesses, if not explicitly require them, to open-source the models they build using publicly funded research, creating a value cycle that ultimately serves the public interest.

The AI industry needs to engage with the general public if it wants to regain the public's trust, and to do so at a level that benefits everybody, allows small companies to bring value to the table, and doesn't lock up the how-to knowledge and AI's most important commodity: data.

We need to make the general public a participant in the AI creation cycle. Not the milk cow it currently is, but an agent that, through governance, can verify how its ethics align with the projects the industry creates. In other words, an agent with the power to steer the wheel and correct course by empowering companies whose values, products and business practices better align with the general public's interest.

We need to make AI cool once more, but this time, let’s make it cool for and with the people.

About the Author

Paulo Malvar is Chief Computational Scientist at Codeq LLC. Paulo's main research field is Machine Learning applied to Computational Linguistics, especially in the areas of Named Entity Recognition, Opinion Mining, Topic Detection, Emotion Classification, Speech Act Classification and Text Summarization. He also has extensive experience working on Machine Translation (both SMT and RBMT). Paulo has a Master's degree in Computational Linguistics from San Diego State University. Check out Codeq's AI-powered view of your inbox that you won't be able to quit – Courier.
