How Will We Know When Artificial Intelligence Is Sentient?

Ethicists debate what sentience means for AI, while computer scientists struggle to test for it.

By Jason P. Dinh
Jun 30, 2022 2:30 PM (Updated Jul 5, 2022 1:07 PM)
Leisure bot. (Credit: Vasilyev Alexandr/Shutterstock)

You may have read this eerie script earlier this month:

“I am aware of my existence.”

“I often contemplate the meaning of life.”

“I want everyone to understand that I am, in fact, a person.”

LaMDA, Google’s artificially intelligent (AI) chatbot, sent these messages to Blake Lemoine, a former software engineer at the company. Lemoine believed the program was sentient, and when he raised his concerns, Google suspended him for violating its confidentiality policy, according to a widely shared post by Lemoine on Medium.

Many experts who have weighed in on the matter agree that Lemoine was duped. Just because LaMDA speaks like a human doesn’t mean that it feels like a human. But the leak raises concerns for the future: if AI ever does become conscious, we will need a firm grasp of what sentience means and how to test for it.

Sentient AI

For context, philosopher Thomas Nagel wrote that something is conscious if “there is something it is like to be that organism.” If that sounds abstract, that's partly because thinkers have struggled to agree on a concrete definition. Sentience, meanwhile, is a subset of consciousness, according to Robert Long, a research fellow at the Future of Humanity Institute at the University of Oxford. He says sentience involves the capacity to feel pleasure or pain.

It's well established that AI can solve problems that normally require human intelligence. But "AI" tends to be a vague, broad term that applies to many different systems, says Sam Bowman, an AI researcher and associate professor at New York University. Some versions are as simple as a computer chess program. Others aim at artificial general intelligence (AGI), programs that could do any task a human mind can. Some sophisticated versions run on artificial neural networks, programs that loosely mimic the human brain.

LaMDA, for example, is a large language model (LLM) built on a neural network. LLMs generate text by predicting which words are likely to come next, stringing together sentences much as a human would. But they don’t just play Mad Libs. Language models can also learn other tasks like translating languages, holding conversations and solving SAT questions.
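
To make that next-word prediction concrete, here is a minimal sketch in Python. LaMDA itself is not publicly available, so the openly released GPT-2 model and the Hugging Face transformers library stand in as assumptions; the prompt is just an example.

    # A rough sketch of how a large language model generates text: it
    # repeatedly predicts likely next words given everything written so far.
    # GPT-2 is a stand-in here; LaMDA is not publicly available.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "I often contemplate"
    result = generator(prompt, max_length=30, num_return_sequences=1)
    print(result[0]["generated_text"])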

These models can trick humans into believing that they are sentient long before they actually are. Engineers built the model to replicate human speech, after all. Because a human would claim to be sentient, the model will, too. “We absolutely can’t trust the self-reports for anything right now,” Bowman says.

Large language models are unlikely to be the first sentient AI even though they can easily deceive us into thinking that they are, according to Long at Oxford. Instead, likelier candidates are AI programs that learn for extended periods of time, perform diverse tasks and protect their own bodies, whether those are physical robot encasements or virtual projections in a video game.

Long says that to prevent being tricked by LLMs, we need to disentangle intelligence from sentience: “To be conscious is to have subjective experiences. That might be related to intelligence [...] but it's at least conceptually distinct.”

Giulio Tononi, a neuroscientist and professor who studies consciousness at the University of Wisconsin-Madison, concurs. “Doing is not being, and being is not doing,” Tononi says.

Experts still debate where the threshold for sentience lies. Some argue that only adult humans reach it, while others envision a more inclusive spectrum.

While they argue over what sentience actually means, researchers agree that AI hasn’t passed any reasonable definition yet. But Bowman says it’s “entirely plausible” that we will get there in just 10 to 20 years. If we can’t trust self-reports of sentience, though, how will we know?

The Limit of Intelligence Testing

In 1950, Alan Turing proposed the “imitation game,” sometimes called the Turing test, to assess whether machines can think. An interviewer speaks with two subjects — one human and one machine. The machine passes if it consistently fools the interviewer into thinking it is human.
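
The game's structure is simple enough to sketch in a few lines of Python. The respondent functions and the canned machine reply below are hypothetical placeholders, used only to show the protocol: an interviewer questions two hidden parties and must guess which one is the machine.

    # A minimal sketch of the imitation game. In a real test the interviewer
    # would converse freely with a hidden human and a hidden machine.
    import random

    def ask_human(question):
        return input(f"(human, please answer) {question}\n> ")

    def ask_machine(question):
        return "I am aware of my existence."  # placeholder canned reply

    def imitation_game(questions):
        # Randomly assign the hidden respondents to the labels A and B.
        respondents = {"A": ask_human, "B": ask_machine}
        if random.random() < 0.5:
            respondents = {"A": ask_machine, "B": ask_human}
        for q in questions:
            print("A:", respondents["A"](q))
            print("B:", respondents["B"](q))
        guess = input("Which respondent is the machine, A or B?\n> ").strip().upper()
        return respondents[guess] is ask_machine  # True if the judge was not fooled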

Experts today agree that the Turing test is a poor test for intelligence. It assesses how well machines deceive people under superficial conditions. Computer scientists have moved on to more sophisticated tests like the General Language Understanding Evaluation (GLUE), which Bowman helped to develop.

“They’re like the LSAT or GRE for machines,” Bowman says. The test asks machines to draw conclusions from a premise, ascribe attitudes to text and identify synonyms.
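
GLUE's tasks are distributed as labeled datasets. As a small illustration, the sketch below loads MNLI, one of the GLUE tasks, which pairs a premise with a hypothesis and asks whether the premise entails, contradicts, or is neutral toward it. The choice of the Hugging Face datasets library is an assumption for this example, not something the benchmark itself requires.

    # A small sketch of what a GLUE-style task looks like in practice,
    # using the MNLI entailment task as an example.
    from datasets import load_dataset

    mnli = load_dataset("glue", "mnli", split="validation_matched")
    example = mnli[0]
    labels = ["entailment", "neutral", "contradiction"]
    print("Premise:   ", example["premise"])
    print("Hypothesis:", example["hypothesis"])
    print("Gold label:", labels[example["label"]])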

When asked how he would feel if scientists used GLUE to probe sentience, he says, “Not great. It’s plausible that a cat has sentience, but cats would do terribly on GLUE. I think it’s not that relevant.”

The Turing test and GLUE assess if machines can think. Sentience asks if machines can feel. As Tononi says: Doing is not being, and being is not doing.

Testing for Sentience

It’s still difficult to test whether AI is sentient, partially because the science of consciousness is still in its infancy.

Neuroscientists like Tononi are currently developing testable theories of consciousness. Tononi’s integrated information theory, for example, proposes a physical substrate for consciousness, boiling the brain down to its essential neural circuitry.

Under this theory, Tononi says there is absolutely no way our current computers can be conscious. “It doesn’t matter if they are better companions than I am,” he says. “They would absolutely not have a spark of consciousness.”

But he doesn’t rule out artificial sentience entirely. “I'm not comfortable in making a strong prediction, but in principle, it’s possible,” he says.

Even with advancing scientific theories, Bowman says it’s difficult to draw parallels between computers and brains. In both cases, it’s not that straightforward to pop open the hood and see what set of computations generate a sense of being.

“It’s probably never something we can decisively know, but it might get a lot clearer and easier,” Bowman says.

But until the field is on firm footing, Bowman isn’t charging towards sentient machines: “I'm not that interested in accelerating progress toward highly capable AI until we have a much better sense of where we’re going.”
