AI and Ethics: The Path Forward

In this special guest feature, Dr. John Bates, CEO of Eggplant, argues that technology can’t be left to the whims of the free markets to determine what works. John is a software visionary and highly accomplished business leader who aims to rid the world of bad software. He pioneered the space of streaming analytics as Co-founder, President and CTO of Apama. He drove the evolution of platforms for digital business while serving as EVP of Corporate Strategy and CTO at NASDAQ-listed Progress Software, and as CTO of Big Data, Head of Industry Solutions and CMO at Frankfurt-listed Software AG. More recently he drove the commercial development of platforms to support Internet of Things applications, serving as CEO at Plat.One (acquired by SAP in 2016). John holds a Ph.D. in Computer Science from Cambridge University.

News headlines continue to warn us of AI overlords and a humanity at risk. From deepfakes to killer robots to autonomous vehicles going rogue, a dystopian future looks inevitable.

As enterprises race to embrace all things AI, they create many ethical concerns spanning control, privacy, cybersecurity, physical safety, discrimination, and bias. The crux of all these concerns is trust. We must be able to trust that AI will behave in a way that helps us rather than delivering biased, inaccurate, or unfair outcomes.

Waiting for the federal government to define and enforce regulation is not the answer. Washington lacks the cutting-edge technology skills needed to understand and regulate AI. A “bill of rights” implemented from the grassroots of the industry is the only way to ensure that we can trust the technology and mitigate the ethical concerns. So, what does this regulation look like?

The bill will protect the rights of humans in the design, implementation, and deployment of AI algorithms. A new form of software certification will determine adherence. This certification cannot be done statically by inspecting an algorithm or its design. Because AI is dynamic, learning, and non-deterministic, certification requires a special kind of continuous testing and monitoring – assessing the algorithm against the bill of rights.

The certification would consist of six categories, and unless the algorithm achieves a minimum rating of “acceptable” in each, it would not be allowed to go into production.

The categories of the bill of rights could look like this:

  1. The AI algorithm makes correct ethical decisions if human life is at stake – For example, a self-driving car faced with the dilemma of hitting a pedestrian or slamming its driver into a wall must make a defensible choice.
  2. The AI algorithm will not discriminate based on race, age, color or gender – An AI algorithm promoting job ads won’t adjust the advertised salary level for a woman or a minority.
  3. The AI algorithm respects privacy – A social media app won’t listen to conversations and push ads based on their contents unless the user says it’s OK. Smart glasses won’t show a user the personal details of everyone in the street through face recognition.
  4. The AI algorithm will not discriminate based on wealth – A social media algorithm won’t send fast-food ads or dubious credit offerings to someone from a low-income demographic unless the individual requests them.
  5. The AI algorithm will not discriminate based on religion – A surveillance algorithm won’t automatically add a person to a watch list because of their religion.
  6. The AI algorithm will not discriminate based on political beliefs – Social media algorithms won’t automatically unfriend a person because of their political views or curate news that only reflects an individual’s political allegiance.

The rating for each category can be weak, acceptable, or strong.

Certification will analyze the algorithm to review its characteristics and evaluate its core properties. As developers create the software, they will need to test and monitor the algorithm continuously to ensure it meets the required rating in each category. An algorithm that falls short must be kept offline while it is modified, repeating the certification process until it earns an acceptable or strong rating. This is similar to a car that can’t be driven on public roads unless it has a valid inspection sticker. And, like the annual vehicle inspection, an algorithm will need to repeat the certification process periodically to ensure that it has not learned new biases or ethical problems.
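
To make the gating rule concrete, here is a minimal sketch of how the pass/fail decision might be modeled, assuming per-category ratings as described above. The category names, the Rating values, and the may_go_into_production check are hypothetical illustrations of the idea, not part of any real certification scheme:

```python
from enum import IntEnum

class Rating(IntEnum):
    """Per-category ratings, ordered so they can be compared."""
    WEAK = 0
    ACCEPTABLE = 1
    STRONG = 2

# The six categories of the proposed bill of rights (names are illustrative).
CATEGORIES = [
    "life_critical_decisions",
    "race_age_color_gender",
    "privacy",
    "wealth",
    "religion",
    "political_beliefs",
]

def may_go_into_production(ratings: dict[str, Rating]) -> bool:
    """The gate: every category must rate at least ACCEPTABLE.

    A missing rating is treated as WEAK, so an untested category
    blocks deployment just like a failing one.
    """
    return all(
        ratings.get(category, Rating.WEAK) >= Rating.ACCEPTABLE
        for category in CATEGORIES
    )

# A single weak category keeps the algorithm offline until it is
# modified and re-certified.
ratings = {category: Rating.STRONG for category in CATEGORIES}
ratings["privacy"] = Rating.WEAK
assert not may_go_into_production(ratings)
```

Because the ratings would be re-evaluated periodically rather than granted once, the same check would run again after every re-certification, mirroring the annual vehicle inspection in the analogy above.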

Of course, when an organization tries to circumvent the certification, the associated fines will have to be material enough to deter others from attempting a workaround.

Technology can’t be left to the whims of the free markets to determine what works. With regulation, we can build a world where we trust the machines, the data, and the results, and we can help avoid a dystopian future. Implementing a bill of rights will ensure AI delivers a positive impact on society now and for decades to come.
