
AI and Uncertainty

One winter evening in 2014, Stuart Russell, a professor of Computer Science at the University of California, Berkeley, was riding the Paris Metro. He was on his way to a rehearsal for a choir that he had joined while living in the French capital during a sabbatical from Berkeley.

That evening, he was listening to the piece that he would be practicing, Samuel Barber’s Agnus Dei, the composer’s choral arrangement of his haunting Adagio for Strings. Swept up in the sublime music, Russell had a breathtaking idea. AI should be built to support ineffable human moments like this one. Instead of delegating an objective to a machine and then stepping back, designers should make systems that will work with us to realize both our complex, shifting goals and our values and preferences. “It just sprang into my mind that what matters, and therefore what the purpose of AI was, was in some sense the aggregate quality of human experience,” he later recalled. And in order to be constantly learning what humans want or need, AI must be uncertain, Russell realized. “This is the core of the new approach: we remove the false assumption that the machine is pursuing a fixed objective that is perfectly known.”

Talking with me by video call one day in the fall of 2022, Russell elaborates. Once the machine is uncertain, it can start working with humans instead of “just watching from above.” If it doesn’t know how the future should unfold, AI becomes teachable, says Russell, a thin, dapper man with a manner of speaking that is somehow both poetical and laser precise. A key part of his Paris epiphany, he says, “was realizing that actually [AI’s] state of uncertainty about human objectives is permanent.” He pauses. “To some extent, this is how it’s going to be for humans too. We are not born with fixed reward functions.”

A few weeks later, I meet up virtually with Anca Dragan, an energetic Berkeley roboticist who is a protégé of Russell’s and one of a growing number of high-profile scientists turning his vision for reimagining AI into algorithmic reality.

“One of my biggest lessons over the past five years or so has been that there’s a tremendous amount of power for AI in being able to hold appropriate uncertainty about what the objective should be,” she tells me. Power? I ask. She explains that by making AI “a little bit more humble, a little bit more uncertain, all of a sudden magical things happen” for both the robot and the human. Together, we begin watching two illustrative bits of video whose banality belies their importance.

In a first clip filmed during experiments in her laboratory, we watch as a robot arm swings into action, carrying a coffee cup several feet high above a table. Almost immediately, a graduate student in a red T-shirt tries to push the arm lower. “It’s Ellis’s favorite mug,” says Dragan, describing the hypothetical scenario inspiring the research, “and he doesn’t like it that the robot is holding it so high up because if it drops, it will break.” As Ellis pushes, the robot doesn’t fight or freeze. But as soon as he lets go—“this is the interesting part,” says Dragan—the robot promptly bounces back up, reclaiming its initial trajectory. This is how AI traditionally has treated the human—as a pesky obstacle on the road to fulfilling the gospel of its objective, says Dragan. The robot views Ellis as an unknown to be ignored, skirted, or eliminated in order to get the job done. I watch as he gives the imperturbable machine a final two-fingered poke before standing back, looking a little defeated.

In what is known as the classical period of AI, early systems by necessity were built to operate in a kind of utopian world that was clear-cut, predictable, and fully understood. In order to make the first algorithms work, designers had to, as Dragan says, “cut off a tiny piece of the world, put it in a box, and give it to a robot.” By the 1980s, however, scientists realized that if they were to create systems for real-world use, they needed to grapple with the unpredictability of life.

To meet this challenge, computer scientist Judea Pearl famously turned to Bayes’ theorem, an Enlightenment-era mathematical system for dealing with uncertainty by constantly updating one’s prior beliefs with new evidence. By investing AI with probabilistic capabilities, Pearl enabled systems to weigh various actions against both the current state of the world and a range of possible futures before deciding on the best route to maximizing a reward. He gave AI wiggle room. Yet the foundational premise of the work remained the same. Unknowns—whether a hesitant pedestrian in a crosswalk, an unanswerable search engine query, or a coffee drinker with ideas of his own—are best summarily dispatched en route to realizing an objective. When Ellis lets go, the coast is clear. The robot knows just what to do.
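In rough code, Pearl’s move looks something like this. The crosswalk scenario and the numbers are my own illustrative assumptions, a minimal sketch of Bayes’ rule rather than anything from a deployed system:

```python
# Minimal sketch of Bayes' rule: posterior is proportional to
# likelihood times prior, renormalized after each new observation.

def bayes_update(prior: dict, likelihood: dict) -> dict:
    """Fold one piece of evidence into a belief over hypotheses."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# A self-driving car's belief about a hesitant pedestrian at a crosswalk.
prior = {"will_cross": 0.5, "will_wait": 0.5}

# Evidence: the pedestrian leans toward the curb. Assumed likelihoods:
# P(lean | will_cross) = 0.8 and P(lean | will_wait) = 0.2.
posterior = bayes_update(prior, {"will_cross": 0.8, "will_wait": 0.2})
print(posterior)  # {'will_cross': 0.8, 'will_wait': 0.2}
```

Under the classical paradigm, the system then simply acts on whichever hypothesis maximizes its expected reward; the unknown is weighed, resolved, and dispatched.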

In the next clip, Ellis tries again. But this time, he only has to push the arm down once before stepping back and watching, one hand nonchalantly in his pocket, as the robot glides by a few inches above the table. Suddenly, the system is doing not what it wants but something far more in line with what Ellis prefers. The maneuver is over in less than a minute, and the inner workings of the robot’s metamorphosis are hidden from view. But I can clearly see that this time, the robot has learned something about carrying coffee, about human priorities, and about aligning with intelligences other than its own. As the robot completes the task, Ellis nods approvingly to someone off camera. He looks relieved.

This is the new paradigm of what Russell calls “human-compatible AI.” Gone is the fallacy of the known fixed objective, whether it is given in advance—“win points”—or, as is the case with a strategy called inverse reinforcement learning, pieced together by the system from initial training demonstrations that in effect say “carry the coffee this way.” (In the latter scenario, a robot may accept a correction while in training, but once it is deployed, it will remain undeterred from its objective.) As Ellis experienced, most standard robots cannot learn on the fly.

In contrast, uncertain AI can adapt in the moment to what we want it to do. Imbued with probabilistic reasoning about its aims, or with mathematically equivalent capabilities, the system dwells in “a space of possibilities,” says Dragan. A push is not an obstacle to getting its way but a hint of a new, likely better direction to go. The human is not an impediment but a teacher and a teammate. Perhaps most important, human-compatible AI likely will be open to being shut down if it senses that it might not be on the right track, preliminary studies suggest. A human wish to turn the robot off is just another morsel of information for a system that knows that it does not know. “That’s the big thing that uncertainty gives you, right; you’re not sure of yourself anymore, and you realize you need more input,” says Dragan gleefully. “Uncertainty is the key foundation upon which alignment can rest.”
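To see in miniature what changes when the uncertainty attaches to the objective itself, consider a hypothetical sketch. The candidate objectives and the observation model below are my own assumptions, not Dragan’s actual algorithm, but they capture the shift: a push is now evidence, not interference.

```python
# Hypothetical sketch: a robot uncertain about its own objective treats a
# human correction as evidence about what the human wants, not as an obstacle.

prior = {"carry_high": 0.7, "carry_low": 0.3}  # belief over candidate objectives

def observe(belief: dict, pushed_down: bool) -> dict:
    # Assumed observation model: someone who wants the cup carried low is far
    # more likely to push the arm down than someone who wants it carried high.
    if pushed_down:
        likelihood = {"carry_high": 0.05, "carry_low": 0.90}
    else:
        likelihood = {"carry_high": 0.90, "carry_low": 0.10}
    post = {h: belief[h] * likelihood[h] for h in belief}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

belief = observe(prior, pushed_down=True)
plan = max(belief, key=belief.get)
print(belief, plan)  # carry_low now dominates (~0.89): the push sticks
```

Because only the belief is held fixed, not the goal, the correction endures; Ellis’s push permanently reshapes what the robot thinks it is for.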


In initial user studies, people working with uncertain robots achieve better task performance with less time and effort. They view such systems as more seamlessly collaborative and sensitive to their needs. “The robot seemed to quickly figure out what I cared about,” said one participant. In one experiment, when a physically present robot verbally expressed uncertainty about a thorny moral dilemma, people saw it as more intelligent than one that asserted that it was sure of what to do.

The music that helped set the stage for Stuart Russell’s vision of a new AI celebrates the liminality and the ambiguity of life. One of the world’s most-heard pieces of modern classical music, Barber’s Adagio for Strings unfolds in a single brief movement suffused with moments of suspense and dissonance. Critic Johanna Keller writes that the piece seems to convey “the effect of a sigh, or courage in the face of tragedy, or hope” and ends on a note of uncertainty. She writes, “In around eight minutes the piece is over, harmonically unresolved, never coming to rest.”

• • • • • •

At Virginia Tech, I at last meet up with an I-Don’t-Know robot. But unlike Ellis, I am working with a system whose uncertainty is an open book. In Dylan Losey’s lab, I discover the critical complement to making AI better at knowing that it does not know: creating systems that also admit to their uncertainty.

The painter-robot sports three sets of armbands, called soft haptic displays, at the base, in the middle, and near the end of its five-foot length. As I guide it through its work of drawing a line down the table, the robot tells me where in the task it is unsure by inflating specific bands associated with particular aspects of the process. If it is unsure about the angle to hold its claw-like “end effector,” for example, it inflates the bottom-most armbands in each set with a soft whoosh. In this way, I can get a read on whether the robot is catching on no matter where I place my hands. “You can actually touch the robot’s uncertainty,” Losey tells me. “You can feel in real time as you move it how confused it is.”
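The underlying idea can be sketched in a few lines. The feature names, the use of belief variance, and the scaling here are my own illustrative assumptions, not the lab’s implementation:

```python
# Hypothetical sketch of the haptic idea: map the robot's per-feature
# uncertainty to how much air each armband receives.

def inflation(variance: float, max_variance: float = 1.0) -> float:
    """Return a 0-to-1 inflation command: more uncertainty, more air."""
    return max(0.0, min(1.0, variance / max_variance))

# The robot's current uncertainty (belief variance) about parts of the task.
uncertainty = {"end_effector_angle": 0.8, "height": 0.1, "speed": 0.3}

commands = {feature: inflation(v) for feature, v in uncertainty.items()}
print(commands)  # the angle bands inflate most: that is where it is confused
```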

If uncertainty enables an AI system to be open to our suggestions, then AI that can also show its unsureness will allow us to know where we stand in our increasingly high-stakes interactions with such machines. A cycle of questions and answers on both sides can result. “When a robot can let a person know, ‘hey, this is where I am at, this is what I’ve learned,’ or ‘this is my best guess but I am a little bit uncertain so take that with a grain of salt’—that’s what I’m working for,” says Losey, a scientist with a rapid-fire pace of speaking and a somber intensity.

The research is critical, he and others believe, because not only does standard AI fall woefully short in its understanding of humanity, but we in turn know less and less about the complex black-box systems that increasingly manage our lives. “Even as a designer, often I have no clue what’s going to happen next with [standard] robots,” Losey admits. “I have to press play and hope that what I see is what I want to see.” The question is, he says, “how can we open that box?”

How and why does AI succeed or fail? Why did the model conclude that one person was worthy of parole, a job interview, or a loan while a similar candidate was not? We often do not know in part because AI operates in abstract mathematical terms that rarely correspond to human ideas and language. In addition, the more astonishing AI’s achievements have become, the more opaque they are to human understanding. After being handily defeated at Go by an AI program, one shocked world champion said AlphaGo’s extraordinary strategic play revealed that “not a single human has touched the edge of the truth of Go.”

Slowly, the creation of openly uncertain systems is becoming a key part of global efforts to make explainable and transparent AI. It is not enough to bring to light what AI knows, for example, by exposing which reward objective or data set was used in training an algorithm. To work with AI, to anticipate its moves, to gauge its strengths and ours, to parse the magic, we also should understand what it does not know, leading scientists assert. Dozens of frontline laboratories worldwide are working to build AI that can speak a language of uncertainty that humans can readily comprehend.

Some robots show people on-screen hypothetical scenarios about their next moves, in effect asking, “Should I move closer to or further from the stove?” or “Should I avoid a certain intersection on my way to fetch coffee?” Others play a kind of robot charades. In Losey’s lab, a standing robot often used in warehouses acted out for me a plethora of sometimes indecipherably similar ways for it to stack dishes. Its thoroughness raised unresolved research questions, such as how much and what kinds of uncertainty a system should display or how AI’s incertitude can interact productively with ours. “It’s not just a question of robot uncertainty,” says Laura Blumenschein, a soft robotics expert who cocreated the haptic arm. “It’s a question of human–robot systems and the combined uncertainty within them.”
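One natural way for a robot to decide which hypothetical to pose, sketched below with invented queries and numbers, is to ask the question whose answer would most shrink its uncertainty:

```python
import math

# Hypothetical sketch: ask the question whose answer would most reduce the
# robot's uncertainty. The queries and probabilities are invented examples.

def entropy(p_yes: float) -> float:
    """Uncertainty (in bits) of a yes/no belief."""
    return -sum(p * math.log2(p) for p in (p_yes, 1.0 - p_yes) if p > 0)

# The robot's belief that the human would answer "yes" to each query.
queries = {
    "Should I move closer to the stove?": 0.5,     # maximally unsure
    "Should I avoid the busy intersection?": 0.9,  # already fairly confident
}

ask = max(queries, key=lambda q: entropy(queries[q]))
print(ask)  # it asks about the stove, where an answer teaches it the most
```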

Beyond robots, openly uncertain AI models have shown promise for use in medical diagnosis systems and already are being used to bolster AI-assisted drug discovery. For example, to address rising bacterial resistance to drugs, a new kind of model, the Generative Flow Network, created by Yoshua Bengio and other top researchers in Canada, has shown exciting potential to identify synthetic peptides, that is, small proteins that might be turned into new antibiotics. Instead of relying on pattern recognition to settle on one best answer, these networks explore less obvious paths in the data to uncover numerous possible answers, in this case candidate peptides that can be tested further by models and humans alike.
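The contrast with pattern matching can be caricatured in a few lines of code. What follows is not a GFlowNet, which is a trained neural network that learns to build candidates step by step, but a toy of the principle described here, with invented peptide names and scores:

```python
import random

# Toy illustration of the principle: rather than committing to the single
# highest-scoring answer, sample candidates with probability proportional
# to their promise. Names and scores below are invented.

candidate_scores = {"pep_A": 9.0, "pep_B": 7.5, "pep_C": 7.2, "pep_D": 0.4}

# Pattern-recognition style: one best guess, everything else discarded.
single_best = max(candidate_scores, key=candidate_scores.get)

# Reward-proportional sampling: a diverse shortlist for further testing.
shortlist = random.choices(list(candidate_scores),
                           weights=list(candidate_scores.values()), k=5)
print(single_best, shortlist)
```

The point is the shape of the output: not one winner, but a weighted spread of plausible candidates that keeps uncertainty alive for the next round of testing.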

“The whole point is that we want to keep in mind many possible explanations—we want to account for uncertainty,” says Nikolay Malkin of Mila, the Quebec-based leading AI research institute where the algorithm was created. And by operating reflectively rather than relying on simplifying and opaque snap judgments, the new models shed light on both a problem’s deeper causal intricacies and their own decision-making processes. The system’s uncertainty can be an engine of transparency.

For many scientists, moreover, constructing AI that admits its uncertainty is not just a safety feature, a path to adaptability, a practicality. It is a matter of right and wrong.

Julian Hough is a British computer scientist with a rising reputation and a kindly demeanor. The longer he has been in the field, however, the more concerned he has become about the pretense of certainty traditionally built into the machine. Hough offers a final word of warning. Any time that a system’s uncertainty is swept under the rug, he cautions, “it won’t be going away. It’s just going to be hidden in dangerous ways and basically hidden by system designers.” By way of example, he describes a scenario. “Say a cop robot is looking for a suspect, and it has 60 percent confidence in one person, but it’s been programmed to act at any level beyond 50 percent confidence. If it does not express that level of doubt, that’s very dangerous. It could have fatal consequences.”

This is a watershed moment in the history of AI. Uncertainty is at the heart of efforts to create systems that can better align with human aims. There is no easy blueprint for reimagining humanity’s most powerful and dangerous invention to date. Still, one day sooner than you may imagine, you might work side by side with a robot that will ask you good questions and admit to its uncertainty, all while expecting that you in turn will do so too.

This essay was excerpted and adapted by the author from Uncertain: The Wisdom and Wonder of Being Unsure (Prometheus Books). Copyright © 2023 by Maggie Jackson. Reprinted with permission.

About the Author

Maggie Jackson is an award-winning author and journalist who is a leading thinker on technology’s impact on humanity. A former contributing columnist for the Boston Globe, Jackson has seen her writings translated into multiple languages and published in the New York Times, the Wall Street Journal, New Philosopher, and Le Monde’s Courrier International. Her expertise has been featured on NPR, MSNBC, the BBC, and many other global media outlets. She is the recipient of numerous grants, fellowships, and awards and has spoken at venues from Google to Yale.

This article was published on May 3, 2024.

 