Artificial intelligence (AI) solutions are facing increased scrutiny for their aptitude for amplifying both good and bad decisions and, more specifically, for their propensity to expose and heighten existing societal biases and inequalities. It is only right, then, that discussions of ethics are taking center stage as AI adoption increases.

In lockstep with ethics comes the topic of trust. Ethics are the guiding rules for the decisions we make and actions we take. These rules of conduct reflect our core beliefs about what is right and fair. Trust, on the other hand, reflects our belief that another person — or company — is reliable, has integrity and will behave in the manner we expect. Ethics and trust are discrete, but often mutually reinforcing, concepts.

So is an ethical AI solution inherently trustworthy?

Context as a trust determinant

Certainly, unethical systems create mistrust. It does not follow, however, that an ethical system will be categorically trusted. To further complicate things, not trusting a system doesn’t mean it won’t get used.

The capabilities that underpin AI solutions – machine learning, deep learning, computer vision, and natural language processing – are not inherently ethical or unethical, trustworthy or untrustworthy. It is the context in which they are applied that matters.

For example, OpenAI’s recently released GPT-3 text generator can be used to pen anything from social commentary to recipes. The specter of AI algorithms generating propaganda raises immediate concerns. The scale at which an AI pundit can be deployed to spread disinformation, or simply to influence the opinions of human readers who may not realize the content’s origin, makes this application both unethical and unworthy of trust. This is true even if (and this is a big if) the AI pundit manages not to fall prey to and adopt the racist, sexist, and other untoward perspectives rife on social media today.
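To make the scale concern concrete, consider how little code such a deployment requires. The sketch below (in Python, using the legacy OpenAI completions client; the prompt, topic list, and model choice are illustrative assumptions, not details drawn from any real campaign) shows a loop that could churn out opinion pieces indefinitely:

    # Minimal sketch: mass-producing "punditry" with GPT-3.
    # Assumes the legacy (pre-1.0) openai Python package; the prompt,
    # topics, and model name below are hypothetical illustrations.
    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: supplied by the operator

    topics = ["the economy", "the election", "public health"]

    for topic in topics:
        response = openai.Completion.create(
            engine="davinci",  # a GPT-3 base model
            prompt=f"Write a persuasive opinion column about {topic}:\n",
            max_tokens=250,
        )
        print(response.choices[0].text)

The point is not the specific calls but the economics: once written, such a loop runs at machine speed and near-zero marginal cost, which is precisely what makes the AI pundit scenario worrying.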

At the other end of the spectrum, I suspect the enterprising cook behind this AI experiment, which resulted in a watermelon cookie, wasn’t overly concerned about the ethical implications of a machine-generated recipe, but likely entered the kitchen with a healthy skepticism all the same. Trust, in this case, comes after verification.

Consumer trust is intentional

Several years ago, SAS (where I’m an advisor) asked survey participants to rate their level of comfort with AI in various applications, from health care to retail. No information was provided about how the AI would be trained or how well it was expected to perform. Interestingly, respondents indicated they trusted AI to perform robotic surgery more than they trusted AI to check their credit. The results initially seemed counterintuitive. After all, surgery is a life-or-death matter.

However, it is not just the proposed application but the perceived intent that influences trust. In medical applications there is an implicit belief (hope?) that all involved are motivated to preserve life. With credit or insurance, it’s understood that the process is as much about weeding people out as welcoming them in. From the consumer’s perspective, the potential and incentive for the solution to create a negative outcome is pivotal. An AI application that disproportionately denies minorities favorable credit terms is unethical and untrustworthy. But even a perfectly unbiased application that dispenses unfavorable credit terms equally will garner suspicion, ethical or not.

Similarly, an AI algorithm that determines the disposition of aging non-perishable inventory is unlikely to ring any ethical alarms. But will the store manager follow the algorithm’s recommendations? The answer lies in how closely the system’s outcomes align with the human’s objectives. What happens when the AI application recommends an action (e.g., throw stock away) at odds with the employee’s incentive (e.g., maximize sales, even at a discount)? In this case, trust requires more than just ethical AI; it also requires adjusting the manager’s compensation plan, among other things.

Distinguishing ethics from trust

Ultimately, ethics can determine whether a given AI solution sees the light of day. Trust will determine its adoption and realized value.

All that said, people are strangely willing to extend trust with relatively little incentive to do so. This is true even when the risks run higher than a gelatinous watermelon cookie. But regardless of the stakes, trust, once lost, is hard to regain. No more trying a recipe without first seeing positive reviews, preferably from someone whose taste buds you trust. Not to mention that disappointed chefs will tell the people who trust them not to trust you, sometimes in the news. This is why I won’t be trying any AI-authored recipes anytime soon.

Watermelon cookies aside, what are the stakes for organizations looking to adopt AI? According to a 2019 Capgemini study, a vast majority of consumers, employees, and citizens want more transparency when a service is powered by AI (75%) and want to know whether AI is treating them fairly (73%). They will share positive experiences (61%), be more loyal (59%), and purchase more (55%) from companies they trust to operate AI ethically and fairly. On the flip side, 34% will stop interacting with a company they view as untrustworthy. Couple this with a May 2020 study in which fewer than a third (30%) of respondents said they were comfortable with businesses using AI to interact with them at all, and the stakes are clear. Leaders must build AI systems – and companies – that are both trustworthy and trusted. There is more to that than an ethics checklist. Successful companies will have a strategy to achieve both.

Kimberly Nevala is AI Strategic Advisor at SAS, where her role encompasses market and industry research, content development, and providing counsel to F500 SAS customers and prospects.
