Artificial Brains May Pose a Startling Ethical Dilemma

Lab-grown artificial brain models could open new doors for neurological research. But what if they become conscious?

By Cody Cottier
Published Feb 28, 2023 4:00 PM | Updated Mar 3, 2023 2:35 PM
Lab-grown brain organoids can help researchers explore early stages of human brain development. (Credit: UC San Diego Health Sciences)


Four decades ago, philosopher Hilary Putnam described a now-famous and frightening thought experiment: a “brain in a vat,” snatched from its human cranium by a mad scientist who then stimulates its nerve endings to create the illusion that nothing has changed. The disembodied consciousness lives on in a state that seems straight out of The Matrix, seeing and feeling a world that doesn’t exist.

Though the idea was pure science fiction in 1981, it’s not so far-fetched today. Over the past decade, neuroscientists have begun using stem cell cultures to grow artificial brains, called brain organoids — handy alternatives that sidestep the practical and ethical challenges of studying the real thing.  

The Rise of Artificial Brains

As these models improve (they’re currently pea-sized simplifications), they could lead to breakthroughs in the diagnosis and treatment of neurological disease. Organoids have already enhanced our understanding of conditions like autism and schizophrenia, as well as Zika virus infection, and they hold the potential to illuminate many others. Yet they also raise unsettling questions.


Read More: These Tiny 'Brains' Could Help Demystify the Human Mind


At the heart of organoid research lies a catch-22: The proxies must resemble actual brains to yield insights that could improve human lives; but the better the resemblance — that is, the closer they come to consciousness — the harder it is to justify using them for our selfish purposes.

“If it looks like a human brain and acts like a human brain,” writes Stanford law professor Henry Greely in an article published in The American Journal of Bioethics in 2020, “at what point do we have to treat it like a human brain — or a human being?” 

Artificial Brains and the Problem of Consciousness 

In preparation, scientists, bioethicists and philosophers are grappling with a surreal set of conundrums: Once we’ve brought these strange entities into existence, what moral consideration do we owe them? How do we balance the potential harm to organoids with their immense benefit to humans? How can we even know whether we’re causing them harm? 

A big part of the problem is that we can’t answer that last question — there isn’t a clear way to tell if an organoid is suffering. Humans and animals use their bodies to communicate distress, but a blob of neurons has no means of connecting with the outside world.

Greely, who specializes in biomedical ethics, puts it grimly in his 2020 article: “In a vat, no one can hear you scream.” 


Read More: Do Insects Have Feelings and Consciousness?


Although researchers have identified neural correlates of consciousness — brain activity that marks conscious experience — there’s no guarantee those correlates will be the same in organoids as in humans. Nita Farahany, a Duke law professor, and 16 colleagues explained the difficulty in a 2018 Nature paper.

“Without knowing more about what consciousness is and what building blocks it requires,” they write, “it might be hard to know what signals to look for in an experimental brain model.” 

Based on some experiments, artificial brains seem to be at the fringe of awareness already. In 2017, a team of scientists from Harvard and MIT recorded brain activity while shining light upon an organoid’s photosensitive cells, showing that it could respond to sensory stimuli. But such experiments don’t — indeed, cannot — prove an organoid has any inner experience corresponding to the behavior we observe. (Another eerie thought experiment, the “philosophical zombie,” highlights the fact that even our belief in the consciousness of other humans is ultimately a matter of faith.) 

Seeing Through an Organoid's Eyes

Last year, a group of Japanese and Canadian philosophers, writing in the journal Neuroethics, bypassed that thorny issue altogether. According to their “precautionary principle,” we should err on the safe side and simply assume that organoids are conscious. That takes us beyond the intractable question of whether they have consciousness, at which point we can consider what kind of consciousness they might have — a tough question in its own right, but perhaps more fruitful than the first. 

The researchers argue that how we ought to treat organoids depends on what they are able to experience. In particular, it hinges on “valence,” the feeling that something is pleasant or painful. Consciousness alone doesn’t demand moral status; it’s possible that certain forms of it don’t come equipped with suffering. No harm, no foul. 


Read More: How Will We Know When Artificial Intelligence Is Sentient?


But the more similar an organism is to us, the philosophers suggest, the more likely its experience resembles ours. So where do organoids fit on the spectrum? It’s hard to say. They share important aspects of our neural structure and development, though for now the most advanced are basically tiny bits of particular brain regions, lacking the vast web of interconnections from which human sentience arises. Most organoids comprise just 3 million cells to our 100 billion, and without blood vessels to provide oxygen and nutrients, they can’t mature much more. 

That said, scientists can already link independently grown organoids, each representing a different brain region, to allow electrical communication between them. As these “assembloids” and their isolated components become more sophisticated, it’s possible they could have some sort of “primitive” experience. For example, photosensitive organoids like the one mentioned above might dimly sense a flash of light, then a return to darkness, even if the event raises no thoughts or feelings. 

It only gets more outlandish, and more relevant to the ethical dilemma. If an artificial model of the visual cortex can generate visual experience, it stands to reason that a model of the limbic system (which plays a key role in emotional experience) could feel primitive emotions. Ascending the ladder of consciousness, maybe a model of the brain networks underlying self-reflection could be aware of itself as a distinct being. 

An Ethical Framework for Artificial Brains

The philosophers are quick to point out that this is all speculation. Even if organoids do eventually develop into full-fledged brains, we know so little about the origin of consciousness that we can’t be sure neural tissue will function the same in such an alien context. And while the simplest solution would be to suspend this research until we have a better grasp on the mind’s workings, a moratorium could have its own cost. 

In fact, some observers argue it would be unethical not to continue organoid work. Farahany, in a podcast accompanying her Nature study, says that “this is our best hope for being able to alleviate a tremendous amount of human suffering that’s caused by neurological and psychiatric disorders.” By establishing proper guidelines, she contends, we can address organoid welfare without sacrificing medical gains.  

In 2021, Oxford philosopher Julian Savulescu and Monash University philosopher Julian Koplin proposed such a framework, based to some extent on existing protocols for animal research. Among other things, they suggest creating no more organoids than necessary, making them only as complex as necessary to meet the goals of the research, and using them only when the expected benefits warrant potential harm. 


Read More: AI and the Human Brain: How Similar Are They?


The past few years have brought several more high-profile efforts to untangle the ethical implications of organoid research. In 2021 the National Academies of Sciences, Engineering, and Medicine published a book-length report on the subject, and the National Institutes of Health funds a similar ongoing project through its BRAIN Initiative. 

The uncertainty surrounding brain organoids touches some of the most profound questions we can ask — what does it mean to be conscious, to be alive, to be human? These are old, ever-pressing concerns. But in light of this “onrushing ethical dilemma,” as it has been called, they appear especially urgent.

Even if the prospect of organoid consciousness is remote, Farahany says, “the mere fact that it is remote, rather than impossible, creates a need for us to have the conversation now.” 
