Artificial intelligence promises to make decisions better and faster than humans can — even smart humans. AI’s superiority is clear when the choice is “Which road should I take home?” or “How should I organize distribution chains?” But in life-or-death situations, can AI deliver?
I’m a social psychologist who studies technology, but when I was in college, I worked for a geophysical surveying company. We looked for natural gas in the frozen forests of northern Canada. Most sites were remote and very cold. Many could be reached only by helicopter.
One winter afternoon a pilot at one of those sites radioed with bad news: A storm had moved in, making visibility poor and flying dangerous. My crew chief, Ian, had to make a difficult decision: Should he risk our lives by flying in the storm or by staying overnight in the frigid wilderness with no food or shelter? He chose to stay overnight. Although we faced freezing temperatures, I had full faith in Ian’s decision. He had worked for years as a wilderness firefighter, and he knew about survival. I literally trusted him with my life.
If my company had been using AI, Ian might not have been making decisions that night. A computer program could have weighed the weather conditions, the cost of losing the crew, the cost of losing the helicopter, and many other factors. That intelligent machine might have come to the same conclusion Ian did — that stranding us overnight was the best possible choice — but would I have trusted that decision? Would I have felt safe?
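To make that comparison concrete, here is a minimal sketch of the kind of expected-cost calculation such a program might run. Every probability and dollar figure below is invented purely for illustration; a real system would weigh far more factors.

```python
# Illustrative only: a toy expected-cost comparison of the kind an
# automated dispatcher might run. Every probability and dollar
# figure below is invented.

def expected_cost(outcomes):
    """Sum probability-weighted costs over the possible outcomes."""
    return sum(prob * cost for prob, cost in outcomes)

# Option 1: fly out through the storm.
fly = expected_cost([
    (0.05, 10_000_000),  # assumed crash risk: crew and helicopter lost
    (0.95, 0),           # safe flight home
])

# Option 2: stay overnight with no food or shelter.
stay = expected_cost([
    (0.02, 5_000_000),   # assumed risk of serious harm from exposure
    (0.98, 20_000),      # a miserable but survivable night of lost work
])

print(f"fly: {fly:,.0f}  stay: {stay:,.0f}")
print("recommendation:", "stay" if stay < fly else "fly")
```

With these made-up numbers, the machine reaches Ian's conclusion. Whether I would have believed it is another matter.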
My work since suggests that I would not have trusted AI with my life. And that lack of trust raises serious roadblocks for the full implementation of AI in the workforce, even when no lives are at stake.
My research examines how people understand other minds — human minds, animal minds, and computer minds — and reveals that their contents are more ambiguous than we often think. We can never directly experience the thoughts and feelings of others, and so we’re left to make our best guesses about questions such as: Does your baby love you as much as you love him? When your boss smiles, is she actually happy? Does your dog feel embarrassed when you catch it doing something naughty?
Although biological minds can be hard to understand, the nature of computer minds is even more opaque. When Deep Blue beat Garry Kasparov at chess, did it want to win, or was it just programmed to do so? When Google alerts us to the best route home after work, does it really understand what it means to commute? When Netflix recommends a movie we might like, does it care about our enjoyment?
People who try to perceive the minds of AI see them as profoundly one-sided — capable of powerful thought but totally incapable of feeling. It's a fairly accurate perception of current technology: neither Google nor Netflix can fall in love or enjoy the taste of chocolate. But what truly limits AI — or at least its role in the workforce — is that people believe robots will never feel.
In part, it’s that inability to feel that makes people regard AI as untrustworthy. This is incredibly important for the deployment of AI. Will employees trust something that views them in purely functional terms — as workers with certain skill sets — rather than as individuals with hopes and concerns?
Trusting team members requires at least three things: mutual concern, a shared sense of vulnerability, and faith in competence. Mutual concern — knowing that your teammates care about your well-being — is perhaps the most basic element of trust. When a platoon leader risks being shot by going behind enemy lines to rescue one of his soldiers, he is not making the optimal decision from a functional perspective. However, the very fact that — unlike an AI system — he will choose this "irrational" course of action makes everyone in the platoon trust him more, which leads to better overall team performance.
In everyday situations, where careers and promotions are at stake, we still want to know that supervisors and coworkers see us as people rather than as variables in a giant optimization problem. We want to be something more than a row in an inventory spreadsheet. But that’s all AI understands us to be.
We mistrust AI not only because it seems to lack emotional intelligence but also because it lacks vulnerability. If humans mess up in a job, they can be fired, lose a bonus, or even die. But in an AI workplace, if an expert decision-making system wrongly recommends one course of action over another, the computer suffers no consequences. AI systems are gambling only with the fates of others, never with their own.
The third impediment to trust is actually AI's strength: its superhuman ability to calculate and predict. We are quick to trust AI's competence after seeing firsthand how it can compute enormous sums in seconds or forecast the movement of stocks. Unfortunately, this can work against AI, which performs well only under narrow conditions. When it is pushed to operate outside its limits — when a whole family uses the same Netflix account, or when Google is asked to predict the outcome of a relationship — disappointment is inevitable.
I recently spoke with someone in the Office of Naval Research, part of the U.S. Department of Defense, who outlined how technologically inexperienced sailors operate AI systems. First, they approach AI with a sense of awe, expecting it to complete every job perfectly. But if a system makes mistakes that seem — from the point of view of humans — obviously stupid, the sailors stop using it altogether, even in the structured situations in which AI would actually excel. To build trust, AI needs to communicate its confidence or, even better, express its fear of failure.
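What might "communicating confidence" look like in practice? One simple pattern is for the system to report a probability alongside each recommendation and to defer to a human operator when that probability falls below a threshold. The sketch below is hypothetical and describes no actual Navy system; the names, numbers, and threshold are all invented.

```python
# Hypothetical sketch: an AI recommendation that carries its own
# confidence estimate and defers to a human below a threshold.
# No real system's interface is being described here.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # the course of action the system suggests
    confidence: float  # the system's own probability estimate, 0.0 to 1.0

def present(rec: Recommendation, threshold: float = 0.8) -> str:
    """Render a recommendation, admitting uncertainty when warranted."""
    if rec.confidence >= threshold:
        return f"Recommend: {rec.action} (confidence {rec.confidence:.0%})"
    # Below the threshold, say so plainly instead of bluffing.
    return (f"Unsure ({rec.confidence:.0%}) about '{rec.action}'. "
            "Deferring to operator judgment.")

print(present(Recommendation("hold course", 0.93)))
print(present(Recommendation("reroute north", 0.55)))
```

A system that admits "I am only 55 percent sure" invites exactly the calibrated trust the sailors lacked: users learn when to rely on it and when to take over.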
No one can dispute that AI is leaping ahead in sophistication, but our ability to trust it is lagging behind. This is important because in many industries success requires deep and implicit trust within teams. On oil rigs and in army platoons, trusting your teammates can be a matter of life and death. In less dangerous businesses, trust can be the difference between succeeding and failing to close a deal or finish a project. We trust other people not because they are incredibly smart — like AI — but because they have emotional connections, specifically with us.
That doesn’t mean AI isn’t useful. Quite the contrary. It represents a deconstructed mind, a focused intelligence groomed for maximum performance. In so many ways, it is unlike the well-rounded human mind, which can comprehend language, solve problems, and understand others’ feelings all at the same time.
If I were working at that surveying job in northern Canada today, I still might not trust a computer to save my life in the forest, but I would trust AI to screen the weather and decide against our even venturing forth that morning. I'm glad I had a human crew chief, but I wish a computer had prevented our being stranded in the first place.
Kurt Gray is an associate professor of psychology and neuroscience at the University of North Carolina, Chapel Hill. He received his PhD from Harvard University. Gray studies mind perception, moral judgment, social dynamics, and creativity and is an award-winning researcher and teacher. He is a coauthor (with Daniel Wegner) of The Mind Club: Who Thinks, What Feels, and Why It Matters.