AI and the Emerging Crisis of Trust


In this guest article, Doug Bordonaro, Chief Data Evangelist at ThoughtSpot, explores the role trust plays in the adoption of AI in our homes, our jobs and beyond.


Earlier this month, a newspaper in Ohio invited its Facebook followers to read the Declaration of Independence, which it posted in 12 bite-sized chunks in the days leading up to July 4. The first nine snippets posted fine, but the 10th was held up after Facebook flagged the post as “hate speech.” Apparently, the company’s algorithms didn’t appreciate Thomas Jefferson’s use of the term “Indian Savages.”

It’s a small incident, but it highlights a larger point about the use of artificial intelligence and machine learning. Beyond filtering content, these technologies are making their way into all aspects of life, from self-driving cars to medical diagnoses and even prison sentencing. It doesn’t matter how well the technology works on paper: if people don’t have confidence that AI is trustworthy and effective, it will not be able to flourish.

The issue boils down to trust, and it goes beyond the technology itself. If people are to accept AI in their homes, their jobs and other areas of life, they also need to trust the businesses that develop and commercialize artificial intelligence to do the right thing, and here too there are challenges. Last month, Google faced down a minor revolt by thousands of employees over a contract with the U.S. military that provides its AI for use in drone warfare. Microsoft and Amazon have faced similar worker uprisings over the use of their facial recognition technologies by law enforcement.

We humans are generally skeptical of things we don’t understand, and AI is no exception. Presented with a list of popular AI services, 41.5 percent of respondents in a recent survey could not cite a single example of AI that they trust. Self-driving cars also incite wariness: Just 20 percent of people said they would feel safe in a self-driving car, even though computers are less likely to make errors than people.

The industry needs to address these challenges if we’re all to enjoy the value and benefits that AI can bring. To do so, it helps to start by looking at the ways trust intersects with AI and then consider ways to address each.

Trust in businesses. Consumers need confidence that early adopters of AI, notably technology giants like Google and Facebook, will apply AI in ways that benefit the greater good. People don’t inherently trust corporations — a recent Salesforce study found that 54 percent of customers don’t believe companies have their best interests at heart. Businesses need to earn that trust by applying AI wisely and judiciously. That means not making clumsy mistakes like telling a family their teen daughter is pregnant before she’s broken the news herself.

Trust in third parties. Consumers also need confidence that a company’s partners will use AI appropriately. AI and machine learning require massive amounts of data to function; the more data, and the greater the variety of that data, the more nuanced and novel the use cases these systems can support. While many businesses share personal data with third parties for marketing and other purposes, incidents like the Cambridge Analytica fiasco create a backlash that makes people less willing to entrust their data to businesses. Failing to build trust in both the companies that collect data and those that eventually use it will dramatically hinder AI’s long-term potential.

Trust in people. For all its potential to automate tasks and make smarter decisions, AI is programmed and controlled by humans. People build the models and write the algorithms that allow AI to do its work. Consumers must feel confident these decisions are being made by professionals who have their users’ interests at heart. AI can also be a powerful tool for criminals, and developers of the technology need to be accountable for how it is used.

Trust in the technology. AI’s “black box problem” makes people skeptical of results because, very often, no one really knows how they were arrived at. This opens the technology to charges of bias in important areas like criminal sentencing. The black box problem can also inhibit adoption of AI tools in business. Employees are asked to devote time and resources to recommendations made by machines, and they won’t do so unless they have confidence in those recommendations.

These challenges aren’t stifling AI’s development significantly today, but they will if they go unaddressed. Issues of public trust may also determine which businesses succeed with AI and which do not. We need to nip this issue in the bud. It does no good to blame consumers and employees for not understanding AI or being skeptical of its applications. The industry has a duty to itself and the public to build confidence in AI if the technology is to fulfill its promise. Here are some ways it can achieve this:

Standards and Principles. To address its employee uprising, Google published a list of AI principles that included a pledge to use the technology only in ways that are “socially beneficial.” Rather than every business drafting its own list, the industry should agree on a set of standards and principles to guide its use of artificial intelligence. The nonprofit OpenAI consortium is already addressing concerns about AI safety; it should broaden that mandate to encompass public trust in AI.

Transparent usage. GDPR disclosures may help build trust among consumers about how their data will be used, and companies should be equally transparent about how they use AI and machine learning. Consumers should be able to answer questions like: What data of mine is being captured for use in an AI or ML system? What applications is the company running on my data? When am I interacting with an AI or ML system? If a system fails, we need to be candid not only about what caused the problem, but also about how it will be addressed in the future.

Technology Tracking. AI can be a powerful tool for criminal use, including for fraud, deception and even nation-state attacks. Developers need to be accountable for the technologies they release into the market, which means there needs to be a way to trace an artificial intelligence technology back to its origin. This could be achieved through a system of digital watermarking, for example.

Employee engagement. To build trust among employees, communication is key. At town hall meetings, businesses should encourage employees to ask questions and air concerns about how AI is being used at the company. Employees should also be free to voice concerns privately, with the company’s chief data officer, for example. An open dialogue should help avoid flare-ups like the one Google faced.

Trust in technology builds over time. If tools like Siri and Alexa deliver on their promise to make life better and more convenient, consumers will become more trusting. At work, employees will gain confidence in AI programs when they see positive results from the decisions those programs inform. But we can’t take acceptance of artificial intelligence for granted. Every Tesla accident, every clumsily censored post, every questionable business decision will erode trust before it has a chance to build. If we don’t want an AI rebellion on our hands, we need to focus on user acceptance as much as on the technology itself.

Doug Bordonaro is Chief Data Evangelist at ThoughtSpot
