Building trust with AI: the ultimate barrier for human-machine interactions

September 14, 2017


How can machines prove their trustworthiness to their human counterparts? An important question for AI development, and one that spurs unnerving speculation about an AI-dominated future.

In the 1968 movie 2001: A Space Odyssey, the highly advanced AI character HAL 9000 is disconnected by Dave, the human co-protagonist, after several life-threatening malfunctions. As it unsuccessfully tries to convince Dave not to disconnect it, HAL utters: “I know everything hasn’t been quite right with me, but I can assure you now, very confidently, that it’s going to be all right again.”

Brett Israelsen, PhD student at the University of Colorado Boulder

It is no coincidence that HAL’s plea forms the unorthodox partial title of a recently submitted scientific paper in which its author, Brett Israelsen, a computer science PhD student at the University of Colorado Boulder, explores how to make AI more trustworthy, and thus better at reassuring humans that it is doing what it is supposed to do.

We want to trust AI—it makes our lives so much easier

When using a machine, we are trusting it. We trust that our key presses will be transformed into an email that reaches the intended recipient; we trust the microwave to heat the leftovers. In these tasks humans remain in control—even when the task itself is performed by a machine.

With AI, machines are becoming capable of increasingly intelligent behavior. Siri and Cortana, for example, two AI assistants in our everyday lives, can interact with a user, learn the user’s preferences, and reason about what assistance the user needs.

But to fully develop these capabilities, to entrust Artificially Intelligent Agents (AIAs) with more of our sensitive data and with the most delicate assignments, humans must be able to have confidence in AI.

“The interaction between humans and AIAs will become more ‘natural’, as the AIA will be attempting to communicate fundamentally to a human’s trust. It doesn’t matter what language you speak or what kind of user you are—all human decisions are influenced by trust”, said Israelsen.

But what does “trust” mean in this context?

An AIA is untrustworthy if it misuses, abuses, or ignores our signs of trust in it: relying on it for a task, believing it will act in our best interest, even becoming vulnerable as we depend on its results. And this is a two-way street: an untrustworthy AIA forces us to try odd workarounds that could compromise results, to abuse the machine out of frustration—even to stop using AI altogether.

Trust is not just about cramming fail-safe emergency solutions into AIAs—those are necessary too, but akin to having a shed full of AK-47s in a neighborhood you describe as “very safe”. AIAs that we can trust are those that recognize, prioritize, and respond to our signs of trust, because they will do what we need, do it well, and notify us when things are not going according to plan.

Sure enough, there is no point in having a super-intelligent digital butler you cannot trust. But how can we make AI fulfill these criteria?

To trust, we need assurances

Thankfully, we are not yet close to an era where our lives are constantly at the mercy of an AIA. We are at that sweet spot where we can create a framework that will force all future AIAs to provide assurances.

The humongous scale and speed of AI’s data processing make it hard for humans to follow its fast-paced algorithmic inner “thoughts”. An assurance is the technical term for a machine explaining what it is doing and how the process is going—essentially becoming accountable for its actions.

“Trust is built upon accountability”, points out an IBM report on the topic. Voilà, we have a solution to the problem: whether it is a virtual assistant on your smartphone or a Big Brother-like overseer, an AIA should be subject to accountability like any other employee with access to potentially sensitive data and from whom certain results are expected.

In other words, by implementing assurances, scientists can make AIAs explain themselves to humans every time, consequently making AI trustworthy for humans and avoiding all the misuse and abuse mentioned earlier: “As assurances are implemented in AIAs, it should become easier and more natural [for humans] to use them correctly. That is the fundamental purpose of assurances”, explained Israelsen.
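What might that look like in practice? Below is a minimal sketch in Python (not from Israelsen’s paper; the Assurance and AssuredAgent names and the route-picking task are purely illustrative) of an agent that attaches an assurance to every result instead of returning a bare answer: its confidence, its rationale, and whether it believes things are on track.

```python
from dataclasses import dataclass
from typing import Any

# Hypothetical illustration: the agent never returns a bare answer the
# user must take on faith; every result carries an "assurance".

@dataclass
class Assurance:
    confidence: float   # the agent's self-assessed chance of success, 0..1
    rationale: str      # human-readable account of what it did and why
    on_track: bool      # False means "things are not going according to plan"

@dataclass
class AssuredResult:
    value: Any            # the actual answer
    assurance: Assurance  # the explanation that travels with it

class AssuredAgent:
    """Toy agent that recommends a route and says how sure it is."""

    def recommend_route(self, delays_by_route: dict) -> AssuredResult:
        if not delays_by_route:
            # Degraded input: still answer, but lower confidence and warn.
            return AssuredResult(
                value="main road",
                assurance=Assurance(
                    confidence=0.4,
                    rationale="No live traffic data; falling back to the default route.",
                    on_track=False,
                ),
            )
        best = min(delays_by_route, key=delays_by_route.get)
        return AssuredResult(
            value=best,
            assurance=Assurance(
                confidence=0.9,
                rationale=f"Chose '{best}': lowest reported delay "
                          f"({delays_by_route[best]} min).",
                on_track=True,
            ),
        )

agent = AssuredAgent()
result = agent.recommend_route({"highway": 12, "back roads": 7})
print(result.value)                # back roads
print(result.assurance.rationale)  # Chose 'back roads': lowest reported delay (7 min).
if not result.assurance.on_track:
    print("The agent itself is flagging trouble.")
```

The point of the design is that the assurance travels with the result: a user, or a supervising system, can always ask not just “what did you decide?” but “how sure are you, and why?”.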

Assurances already come in many forms, but as with most things AI, this is a nascent field. Future developments lie at the crossroads of psychology, computer science, data visualization, and other fields. From stats to sentences to diagrams, there are many ways of reassuring users that things are going well.

Trustworthiness can even come as part of the design—for example, we tend to believe shiny new things will work better, faster, and more reliably—and can play to our own personal biases, which could lead to personalized assurances in the future.

Cooking with the wrong ingredients is dangerous

Data is to AI as ingredients are to a cook: using salt instead of sugar would not end well. Likewise, feeding inadequate data into an AIA would yield unpredictable, incompetent—or worse—results. With assurances in place, “AIAs should also be more difficult to use incorrectly—unless you’re trying”, says Israelsen. Or unless someone else tries.
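To illustrate that idea (again a hypothetical sketch, not code from the paper), an assured agent might sanity-check its ingredients before cooking, refusing bad input with a human-readable explanation rather than silently computing on it:

```python
# Hypothetical sketch of "harder to use incorrectly": before acting, the
# agent validates its input data and refuses, with an explanation, when
# the ingredients look inadequate.

def check_ingredients(readings, expected_range=(0.0, 100.0)):
    """Raise ValueError with a readable explanation if inputs look inadequate."""
    if len(readings) < 3:
        raise ValueError("Too few data points to produce a reliable result.")
    lo, hi = expected_range
    outliers = [r for r in readings if not lo <= r <= hi]
    if outliers:
        raise ValueError(
            f"Values {outliers} fall outside the expected range "
            f"{expected_range}; refusing to guess rather than mislead you."
        )

check_ingredients([36.5, 37.0, 36.8])        # plausible readings: passes
try:
    check_ingredients([36.5, 370.0, 36.8])   # salt instead of sugar: rejected
except ValueError as err:
    print(err)
```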

Nonetheless, a more structured implementation of assurances could open the door to highly sensitive applications with reduced human interaction, boosting AI development in fields such as medicine and intelligent architecture.

“I believe AIAs with assurances will be more widely accepted in society, and will be equipped to ride through some of the typical waves of uncertainty, worry—and sometimes panic—that surround new technologies”, said Israelsen, concluding that “it is left to us and other researchers to move beyond and begin formally making assurances a reality.”
