
Should Cerebral Organoids be Used for Research if they Have the Capacity for Consciousness?

Published online by Cambridge University Press:  27 October 2021

Henry T. “Hank” Greely*
Affiliation:
School of Law, Stanford University, Stanford, California, USA Department of Genetics, Stanford University, Stanford, California, USA
Karola V. Kreitmair*
Affiliation:
Department of Medical History and Bioethics, School of Medicine and Public Health, University of Wisconsin–Madison, Madison, Wisconsin, USA
*Corresponding authors. Emails: hgreely@stanford.edu; kreitmair@wisc.edu

Type: The Great Debates
Copyright: © The Author(s), 2021. Published by Cambridge University Press

HG (opening affirmative presentation): The capacity for consciousness of human neural organoids is an interesting issue. I have been involved in discussions about it and have written a little on it in the last 3 or 4 years. I was on a National Academy of Sciences committee that on April 8 released a report entitled The Emerging Field of Human Neural Organoids, Transplants, and Chimeras: Science, Ethics, and Governance.1 I am also on the National Institutes of Health BRAIN Initiative Neuroethics Working Group, which has talked about these issues. The views expressed in this debate are mine alone and should not be viewed as reflecting the views of either the NAS Committee or the NIH BRAIN Initiative’s Neuroethics Working Group.

I want to talk about three things. First, I want to explain what neural organoids are, and, more importantly, what they are not. Second, I want to talk about their capacity for consciousness. Third, I want to try to answer the debate question. Spoiler alert: to say that they should be used is a little too strong, but I do not think, based on current information, it makes sense to ban their use for research purposes.

So, what are organoids? They are little balls of cells, spheres, originally referred to as spherical organoids or spherical constructs. They are about two millimeters in diameter, so think of either a large black peppercorn or a very small pea. They are made up of human brain cells, typically derived from induced pluripotent stem cells (iPSCs), although they can also be made from human embryonic stem cells as well as some later fetal stem cell types. The most common way to make a human neural organoid today involves four steps: 1) taking a few cells from my skin; 2) turning those into iPSCs by injecting them with particular proteins that give them the ability to become a variety of different cell types; 3) encouraging the iPSCs to become brain cell types; and then 4) putting them into a petri dish with a three-dimensional framework, the appropriate nutrients, and the appropriate factors, and letting them grow. Madeline Lancaster, currently at the University of Cambridge, was the first researcher to do this successfully. At the time, around 2013, she was working with Jürgen Knoblich. I do not believe any of those original organoids are still “alive,” but some human neural organoids have now been “alive” for 4 or 5 years.

The organoids do not get very large, and the reason for that is the simple one of lack of supply. At this point, the organoids do not have vasculature, any blood vessels. So, they have no blood to bring them oxygen and sugar or to take away carbon dioxide and other wastes. Each cell needs to be near enough to the culture medium to get that kind of support. This limits how large they can grow, because if you get too many cells packed together, the ones on the inside are not going to be able to get to those essential supplies. So, they often take the form of hollow spheres—the cells in the middle of the sphere die off for lack of oxygen and sugar.

Most of the time, the organoids are made out of neurons, but not always. Some people have made organoids that include some of the glial cells, the so-called support staff of the brain (a very bad term that understates the importance of both glial cells and support staff). These glial cells can include astrocytes, oligodendrocytes, and microglia, but for the most part, the organoids are made up of neurons. And each organoid has 1 million, 2 million, or maybe 4 million neurons.

The normal adult human brain has somewhere in the vicinity of 85-95 billion neurons. So, the organoid has just over one hundred-thousandth (1/100,000) the number of neurons of the human brain. It is small, but on the other hand, some of these organoids do have more neurons than bumblebees or cockroaches, animals that have perception, organized behavior, and apparently purposeful activity. But they are still substantially smaller than the brains of mice. A mouse brain has about 80 million neurons. There are these limits as to how large single organoids can grow based on the physical problem of access to oxygen and glucose.
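A quick back-of-the-envelope check, assuming the low-end figure of one million neurons per organoid and a round 90 billion neurons for the human brain:

$$\frac{1 \times 10^{6}}{9 \times 10^{10}} = \frac{1}{90{,}000},$$

which is indeed just over one hundred-thousandth.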

People are working on that problem, using at least two different approaches. Some try to make the organoids grow their own blood vessels, or give them synthetic blood vessels, so that blood will be able to move oxygen and glucose in and carbon dioxide out. Others are cleverly stringing together independent organoids, connecting the hollow spheres into a larger structure. The current term for that is an assembloid. Think of it as a strand of pearls. Each organoid is a pearl, but they are all on the same necklace, and they are all connected to (and perhaps communicating with) each other through the string.

What makes organoids so interesting for research is that they self-organize. Some newspapers and occasionally some scientists call them “mini brains.” That is a terrible term. They do not organize themselves like human brains. There is no cortex with two hemispheres and each hemisphere with four lobes. They do not have a cerebellum or a hippocampus, but they do have cells differentiating in different ways, some in ways reminiscent of some parts of human brains.

So, you may get an organoid to organize itself in a layered manner that looks like certain parts of the cerebral cortex, but does not look like an entire cerebral cortex. It is not a miniature cerebral cortex—there is no miniature brain in that dish. There is a hollow sphere of neurons that, over time, differentiate into different neuronal types and organize themselves differently. That is a big part of what makes them fascinating; another part is that they live and, to some extent, function.

Consider this experiment with human neural organoids, performed by Stanford’s Sergiu Pasca. One might take cells from someone who has a severe case of autism as a result of a genetic variation, a deletion of part of chromosome 22 at the q11.2 region, as well as from someone who does not have autism and does not have that deletion. Take those skin cells, turn them into iPSCs, turn those into neural cells, turn those into organoids, and watch how they develop to see if there are differences, in development or function, between the neurons in the two organoids.

One last point on organoids. It may be that no man is an island, but an organoid is an island. It is isolated and in the equivalent of a sensory deprivation chamber. It receives no inputs; it generates no outputs. But that, too, can change. Paola Arlotta at Harvard has made organoids that are attached to retinal cells. If a researcher shines a light on the retinal cells, the organoid’s electrical response, its firing patterns, change. They are different when the light is on and when the light is off—an input channel. Pasca at Stanford has made organoids that are connected to muscle tissue. If you stimulate the organoid in some ways, the muscle tissue will contract. If you stop that stimulation, it will relax—an output channel.

As far as I know, this is where organoid science seems to be today. Remember, though, that the whole field is less than a decade old and researchers are working constantly to make organoids more complicated.

So much for the first part of this talk. Now to the second, the capacity of organoids for consciousness. The debate question asks “Should cerebral organoids be used for research if they have the capacity for consciousness?” That is a difficult test to apply because (a) we do not know what consciousness is and (b) we do not know what capacity means in this context. Consciousness is one of those slippery words that, at least in English, gets used in many different ways with many different meanings and many different synonyms. People talk about sentience, self-awareness, self-consciousness, perception, directed activity, and cognitive processing, among many more. Consciousness is a mushy ball of meanings. (Philosophers could be very helpful here in clarifying the terminology and the uses of these particular terms.)

For some people, consciousness means what all of us are doing right now. We are aware of who we are, what we are, and what we are listening to (or, in my case, at this point, re-writing). The debate listeners made decisions about whether to listen to me, to check their email, or to have a cup of coffee. That is a strong version of consciousness. But consciousness might also be a cockroach deciding that, as soon as a light turns on, it needs to scuttle under something to hide, or perhaps even a plant turning toward the sunlight.

This is a problem, not primarily with the debate question, but with the whole field. We do not really have a good sense of what we mean by consciousness. And different people are probably using it to mean different things.

But perhaps more importantly, we do not know what a capacity for consciousness would be. In one sense, “capacity” is a volume measure—the amount of fluid a container will hold is its capacity. In the context of our question, “capacity” is used more in the sense of the possibility, the potential, or the capability for consciousness. Assume we could say that some behaviors were good evidence of consciousness, perhaps by a shortcut through a version of the Turing test: if it acts like it is conscious, we might say it is conscious. But an organoid is not going to answer our questions, at least not now, in a way that allows even the Turing test.

The problem has a semantic aspect to it in terms of fuzzy definitions, but even if we get a nice, clear, crisp definition, there is an operational problem in assessing organoids. It is very hard to know what capacity would mean and how you would judge whether something had capacity. However you define consciousness, you may be able to say whether something has that consciousness or not, depending on whether its behavior meets whatever test you have for consciousness. But how will we know whether it has the capacity for thus-far-unused behaviors that would indicate consciousness? That is going to be really hard to figure out, in part because we just do not know that much about consciousness or even how to distinguish a conscious brain from an unconscious one.

Happily, I do not have to worry about that in the third part of this presentation, my answer to the debate question: “If organoids have the capacity for consciousness should they be used for research?” Unless one wants to overturn a huge amount of existing biological research, the answer has to be “yes,” organoids can be used for research, because we use things for research all the time that not only have the capacity for consciousness but that are actually conscious.

Mice and rats are, by anything other than the self-aware human self-consciousness standard, clearly conscious. They might even be conscious by the human standard; it is really hard to know. We do research on them. We do research on monkeys, which are even closer to human consciousness, and we do research on some of the great apes. Most of the great apes we no longer use in any research, at least intrusive research, but there is one great ape we do research on all the time—ourselves. And we are conscious; I have been a research participant. I hope that I have the capacity for consciousness; I think I have consciousness. I can be a research subject.

Now, if we thought that organoids had reached the kind of size and complexity where they might be able to perceive things like pain, might somehow have a sense of identity, then I think they should be treated in research the way we treat non-human animals. Whether they should be treated as mice or as monkeys—both animals used in research but with different levels of treatment and concern—would depend on the organoids. But, if we thought human neural organoids had reached those levels, research with them should have some oversight, oversight that, at least in the United States, does not currently exist.

If somehow an organoid were to become something that we thought had the degree of consciousness, however we define it, that is equivalent to what we think of in humans as enough to establish “personhood,” then it should get a higher level of regulatory protection, which would mean it could be used in research but only with its informed consent. (“Only” is a little strong there, since there are a few exceptions to informed consent even with humans.)

So, unless we want to say that almost all of our in vivo biological research is illegitimate, we have to be able to say that even if organoids had the capacity for consciousness, they should be used in research just as we do use non-human, and human, animals with that capacity. I do not see any reason why organoids should receive better treatment. I will now turn this conversation over to my friend Karola who will explain to you why I am deeply wrong.

KK (opening negative presentation): It is a pleasure to be here talking with Hank about this topic. I will try to convince you that indeed cerebral organoids should not be used for research if they have the capacity for consciousness. I want to specify that by research I mean the sort of invasive research that neuroscience performs. But we can quibble about whether I need that specification.

My argument takes the following form. I will try to convince you that entities that have the capacity for “Conscious State X,” which I will define, have high moral status. Then I will try to convince you that we cannot rule out that cerebral organoids that have the capacity for consciousness, which I will call “COCCs,” have the capacity for Conscious State X. From this, we can conclude that we cannot rule out that COCCs have high moral status. Next, I will argue that entities for which we cannot rule out a high moral status should not be subject to invasive neuroscience research, and thus I will conclude that COCCs should not be subject to invasive neuroscience research. This is my overall argument structure. I will return to this later.

First, I have to talk a bit about consciousness. Hank put out the challenge that we really do not have a good understanding of what we mean by consciousness. I would say that philosophers do actually have a good understanding of what we mean by consciousness, it is just that they do not always explain it in terms that nonphilosophers are willing to engage with; so, I agree that there is a need for interdisciplinary communication on this front.

Since my job is to argue against the proposition and since the proposition concerns cerebral organoids that have the capacity for consciousness, I am going to assume that the cerebral organoids in question have this capacity for consciousness. I am not going to argue about whether cerebral organoids could develop the capacity for consciousness because that is beyond the debate question. Although, I may return to this in my consideration of objections later.

So, what do we mean by consciousness? Well, conscious experience involves a certain “what-it-is-like-ness.” There is a certain experiential state. There might be something like the taste of a strawberry or the sensation of a breeze. It does not actually need to be an externally stimulated sensation; it can be an internal sense of discomfort, for example. Conscious experience is essentially subjective experience. I am having conscious experiences that none of you—nor anyone in this world except for me—can access. This is not because you cannot look into my head. You could put me in a functional magnetic resonance imaging (fMRI) scanner and even have a screen that lets you read out the thoughts that I am having, perhaps as a description such as “Karola is seeing blue”—but you would still not be able to access my conscious experience. That is because there is what we can think of as this object/subject chasm.

There is an explanatory gap, well known in the philosophy of mind, between the way I can access experiences as a subject and the objective third person means of investigation that science has available. Things like fMRI scanners, verbal reports, or measurements, these are all wonderful tools of science. But we cannot use these to access subjective states of others. (Or of ourselves in fact.) The only way we know anything about subjective states of others is through inference.

Just now I see Hank looking a little perplexed, a little bit frowny, and he is probably thinking “What is she talking about?” or “Oh, I’ve already heard her talk about this before.” I can infer his internal sensations from his facial expression, based on knowing that when I have that sort of facial expression, I have that sort of internal sensation. We even gain insight into the conscious states of nonhuman animals through these inferential means. For instance, in research, when we see nonhuman animals trigger the release of an analgesic after they have been injured, we infer “This animal is experiencing pain.” We infer from behavioral cues what the internal states of a nonhuman animal might be. For human subjects, we tend to just ask them if they are conscious, and they say “Yes, I’m experiencing pain and it’s at seven.” That is a verbal report of a phenomenal experience.

What can we say about consciousness and cerebral organoids? As Hank pointed out, this is a very bizarre question and a very hard one to answer. Clearly, we cannot use the kind of behavioral methods we use for nonhuman animals because, as Hank pointed out, organoids really are an island; they do not have the behavioral outputs; they are not about to tell us “Yes, I’m experiencing hunger right now.” And even if they had some behavioral outputs—as Hank suggested, there are some organoids being connected up with muscle tissues that might have some output by contracting or relaxing—we do not know what that means. We do not have a dictionary that tells us that when an organoid contracts, it means it is experiencing “X.” That sort of dictionary does not exist. So, while I can use inference to go from my own behavioral cues and internal states to what your behavioral cues tell me about your internal states, we do not have that kind of process for organoids.

I am trying to convince you that there is a profound lack of knowledge about what the subjective states of a cerebral organoid might be. Again, we are assuming that these organoids have subjective states (or at least have the capacity for subjective states). What those states are like, however, is inaccessible to us through our inferential means. To be clear, there are theories regarding the neural correlates of consciousness, that is, the conditions of a brain sufficient to give rise to consciousness. This is an exciting field of neuroscientific research, and there are a number of candidates as to what is sufficient for conscious experience. Maybe it has to do with slow-wave activity in the posterior hot zone of the cortex, or perhaps just midbrain regions being activated, or a kind of global cortical activation. However, these are not theories about the content of conscious experience; they are theories about what is required for consciousness of any kind to be instantiated. They do not tell us what the subjective experience is like for the subject.

Why is it so important to know what that consciousness is like? Hank rightly pointed out that there are many possible types of conscious states. As he noted, there can be a simple sort of basic sentience of light or dark, warm or cold, or we can have more full-blown qualitative conscious experiences such as seeing colors, experiencing sound, feelings of discomfort, or pleasurable experiences. We can have states of conscious experience in which we have the ability to make computations; we can maybe think things like “This thing is not like that thing.” We can have self-consciousness; and self-consciousness itself has a range of degrees of complexity. Self-consciousness comes on a spectrum: there is everything from a simple kind of recognition of a distinction between the self and the other, to a kind of full-blown narrative self-consciousness that gives us the perception of a self that persists through time—the same me that is sitting here now, was sitting here a couple of days ago, and will be sitting here in a couple of days. (That is right, I never leave this seat.)

I want to argue that because of this subject/object chasm, because of the explanatory gap, we do not know what kind of consciousness is instantiated in a conscious cerebral organoid. We do not know that it is not the kind that admits of self-consciousness. It might be something like that of an individual in a minimally conscious state, who is also deaf and blind, but for whom there is a self there. For all we know, cerebral organoids that experience consciousness, not the ones we have today but ones that may exist in the future, could be experiencing this level of consciousness. And this is what I call “Conscious State X.” It is not the full-blown, unimpaired, neurotypical, aware and awake adult human consciousness that we are experiencing right now; but it is something approaching such a state, with some features of self-consciousness inherent in it. And because of the subject/object chasm, we cannot rule out that organoids have it.

I have shown what I needed for one of my premises, namely that we cannot rule out that cerebral organoids have this Conscious State X. Next, I will show that, if something has Conscious State X, it has High Moral Status. High Moral Status is not full moral status, because it is missing our narrative, full-fledged self-consciousness; but it seems that we do tend to have a commonsense view that the more complex a conscious state an entity possesses, the more moral status we think the entity has. I think that Hank alluded to this with the cockroach and chimpanzee example. We have fewer pangs about stepping on an ant as opposed to killing a whale, and I think, in part, that is because we think that a whale has more sophisticated conscious experiences and therefore has more moral status. What I am suggesting is that if an entity has this Conscious State X, or the capacity for Conscious State X, defined as this sort of almost full-blown, on-the-road-to-full conscious state, then it has High Moral Status.

This takes me to my subconclusion that we cannot rule out that cerebral organoids that have the capacity for consciousness, COCCs, have this High Moral Status. If you are with me so far, you agree that we cannot rule out that COCCs have High Moral Status. Finally, I need to convince you that entities that have High Moral Status ought not to be used for invasive neuroscience research. I will use one of Hank’s insights from his recent paper in the American Journal of Bioethics. He argues that the problem with brain surrogates is that when you make them too good, they run into the same problem that keeps you from doing invasive brain research on the best models of human brains, that is, living human brains. We would never permit invasive and potentially destructive brain research on a fully conscious human brain. If neuroscientific research is invasive and potentially destructive, we do not admit any leeway here. If we are going to cut open a brain, we want to make sure that the person has died before we dissect the brain. We do not do this with living brains, we do this with dead brains, and that is because we really care about the moral status that the living entities have. In fact, we take a very precautionary approach to such brains. We do not say “Oh, if we accidentally get one wrong, that is not a big deal.” No, we err on the side of caution and we err on the side of not violating this moral status—anything short of that seems akin to the kind of Nazi doctor experiments in which horrible, invasive, and destructive research was committed on individuals with full moral status.

I want to argue that the same precautionary approach ought to be applied to entities with High Moral Status and not just to entities with full moral status. The reasons we care about entities with full moral status are the same reasons we should care about entities with High Moral Status: they have this capacity for self-conscious experiencing of a world, and this might allow for things like the ability to dread, the ability to fear, the ability to despair. Again, these are all features of Conscious State X, which I have tried to show we cannot rule out for COCCs. Therefore, we should err on the side of caution; we should maintain the precautionary principle when it comes to these entities and not permit invasive neuroscience research into entities that have High Moral Status. This brings me to my final conclusion, which is that cerebral organoids that have the capacity for consciousness should not be subject to invasive neuroscience research.

I want to counter one of Hank’s objections. He said, “Look, if we do not allow research into cerebral organoids that have the capacity for consciousness, then we have to do away with all sorts of in vivo research using nonhuman animals.” I think that is not necessarily true. We only have to do away with this kind of research on entities that have the capacity for Conscious State X. We cannot rule out that this state is instantiated in cerebral organoids; but we can likely rule it out in a whole range of nonhuman animals.

One exception might be nonhuman primates. There has been a development in nonhuman animal research of trying to move away from nonhuman primates, precisely because, I think, there is a recognition that nonhuman primates do have something like what I am calling Conscious State X, and therefore that it is not ethical to perform invasive neuroscience research on them. Although there is still such research going on, I think it is a historical artifact and will be phased out, because people increasingly recognize that this kind of research is inappropriate given the High Moral Status of such entities. And the same thing goes for cerebral organoids. If we cannot rule out that they have this High Moral Status, then we should not be conducting this kind of research on them. If my argument is correct, this does not mean that we have to rule out all nonhuman animal research, merely the research conducted on the small sliver of nonhuman animals that do in fact have this sophisticated form of consciousness.

Rebuttals

HG Rebuttal: It is nice to be reminded that dealing with philosophers is not quite the same as dealing with real human beings, but close. In an argument with a philosopher, the philosopher is always able to construct some specific little, narrow, probably impossible example—certainly never seen so far in nature—and argue from that. Although I will say that you have lost one aspect of the philosopher’s presentation style. Normally, when I hear philosophers present, they are reading word by word from a text, one that has already been handed out to everyone in the audience. As a lawyer with an interest in presentation styles, I applaud you for not doing that. I will also note that you seem to be a philosopher who has been infected by bioscience. Your acronym “COCCs” sounds very biosciencey to me.

We can debate on the philosophical territory that Karola has staked out. I will note that the kind of cerebral organoids she is postulating is a tiny subset of the cerebral organoids that are out there now or potentially out there, a subset that may be forever empty; it certainly is empty today. And they are a subset of what people would normally think of as “organoids with the capacity for consciousness,” because she is not talking about consciousness broadly; she is talking about one specific form of consciousness, a consciousness that is human, or human-like, or close to human, or vaguely human, or otherwise partaking of the special nature of human consciousness.

I will happily take that as a concession, that all other sorts of organoids with “lesser” levels of consciousness can be experimented upon in the real world of policy and ethics. That would be a big win, because it includes all the organoids that we have now, all the organoids we are likely to have in the near future, and maybe all the organoids we will ever have.

However, narrowing down to what Karola wants to focus on, organoids with sort-of-human-like levels of consciousness, she says, I think accurately, that we cannot disprove that there could be an organoid like that. And, of course, proving a negative is always difficult. We cannot disprove that my iPhone has that level of consciousness. My phone is pretty complicated, especially when it is hooked up to the internet. I am willing to accept that my big coffee mug does not have that level of consciousness, although I cannot really prove it. There is nothing about it that would lead us to think it might have it, whereas there is something at least provocative about organoids and that level of consciousness. But my iPhone and my Macintosh computer have “behaviors” that evoke human consciousness even more. There could be a variety of things that we treat in ways inconsistent with the human-level treatment that Karola wants to use as the standard here for neural organoids that might have the “capacity for consciousness,” in the sense that she is using “consciousness.”

I will still take Karola on, even on the very narrowly defined hypothetical organoid she is talking about. We do perform invasive experiments on entities like that; we do invasive experiments with humans in ways that Karola said we would never imagine doing in the brain of a healthy human person.

Part of that depends on your definition of “invasive.” Sticking instruments into a brain is what I think most people would consider invasive. But, of course, unless you are invading somebody’s brain—unless you are causing changes to the brain’s reactions—you are not doing anything to the bearer of that brain. I am invading your brains right now. Assuming you are actually listening to me, I am invading your brain. I am causing your brain to change, causing neurons to fire, causing synapses to strengthen or weaken. If you remember tomorrow that I talked today, it will be because I caused physical changes to your brain. That is invasive in a sense, and we do that all the time.

But, beyond that, we also stick things into people’s brains for research purposes. Because of the risks, those are people whose brains are already open and accessible to the invasive device because of some medical intervention for their health. The medical benefit is the justification for having their brains accessible. For example, some epilepsy patients are hospitalized and have parts of their skulls removed in an effort to determine the site of their epileptic focus. While they wait, perhaps for weeks, for a seizure to occur that will allow their epilepsy to be “mapped,” they will often volunteer for research. This research often involves inserting a network of electrodes into their neocortexes. The patients are then watched for weeks, and the electrodes will be activated, not only for recording but also for stimulation that lets the researchers see what happens.

That has no direct benefit to the research subject and, like everything, has some risks. Josef Parvizi, a Stanford neurologist (and friend), tells me he has not had anybody who has had a bad experience during this kind of research, but he has had people who have had weird experiences. When, for example, a particular electrode in a particular location was activated in two patients, both of them said something like “That was very strange, I just felt like I am about to confront a real challenge at some point in the future, but I am going to overcome it.” This research came with risks. Perhaps instead the stimulation would have caused a terrible nightmare or headache. Neither of the patients was harmed, but they might have been. We do similar research with people about to undergo neurosurgery to implant electrodes for deep brain stimulation.

What is different from organoid research? We get the patients’ consent. So, let us get consent from the organoid. But wait, Karola is going to say “No, my organoids could be conscious, but they cannot be conscious enough to consent.” Or, “They have enough behaviors so that I know they have this human-like consciousness, but they cannot communicate sufficiently for consent.” If Karola defines her hypothetical organoid that narrowly, I guess I give up—for that particular example. Maybe we should not do research with an organoid that leads us to think it has human levels of consciousness, but with which we cannot communicate and from which we cannot get consent. If you give me the rest of the organoid universe, somewhere between 99.99999% and 100% of all human neural organoids, I am willing to concede that.

I do not think that means I am conceding the debate overall. But, let me note that the idea of something that has no inward perception and no outward expression having something like human consciousness seems hard to imagine. If it has inputs or can have some sort of output, researchers should be able to establish some kind of communication with it. They might try methods similar to those used in a famous experiment with a patient who appeared to be in a persistent vegetative state. He was asked to think about playing tennis if the answer to a question was “Yes” and to think about walking through his house if the answer was “No”; functional magnetic resonance imaging detected the distinct patterns of localized neuronal firing. At least one patient answered five questions correctly that way. So if the organoid can perceive anything, then it seems to me you may well be able to get it to communicate. Whether you can communicate well enough to get its “informed consent” may be highly unlikely, but of course, I think Karola’s hypothetical is highly unlikely as well. And if she can be highly unlikely, I think I should be able to be highly unlikely. In such a case, I say OK: if you have an organoid with human-like consciousness that might be human in its moral status, treat it as a human. But even that does not mean never do research with it. And I will note that, as far as creating an organoid that might have human-like moral status goes, there is no reason to think we are there now and there is very little reason to think we will get there within my lifetime—if ever.

KK Rebuttal: Thank you for already conceding such large components of your stance; but I think my argument is a little more ambitious than you are realizing—that is, I am claiming something harder to defend than what you are conceding. So, I just want to be clear that I do not think my job is quite done. I am not just claiming that one ought not to perform invasive neuroscience research on cerebral organoids that have the capacity for Conscious State X. Rather, I am claiming that since we cannot rule out that all cerebral organoids that have consciousness or have the capacity for consciousness, that is, all COCCs, have Conscious State X, we ought not to do invasive neuroscience research on any such entities. I am claiming something pretty strong: that we ought not to do invasive neuroscience research on any cerebral organoids that have, or have the capacity for, consciousness.

I agree with you that today we very likely do not have cerebral organoids that are at this state. But then you pulled up your phone and said, “Look, how are you going to rule out that my iPhone has consciousness; and, if you cannot rule out that it has Conscious State X, we have to apply all these protections to it.”

That is not quite right, because the way we know anything about what kind of cerebral organoid might have any kind of conscious state is by looking toward the neural correlates of consciousness. I argue that these neural states can give us insight into whether or not consciousness is present; but, as I argued, they cannot give us sufficient insight into what the content of such consciousness is. Your phone, as new as it may be, does not have neural correlates of anything, let alone consciousness. I do think that my view requires that we have some way of identifying which cerebral organoids in general have a capacity for consciousness, but that is implicit in the debate question, “Should Cerebral Organoids be Used for Research if They Have the Capacity for Consciousness?” The question itself assumes that we have some class of cerebral organoids that have the capacity for consciousness.

Now you might think, “Hey, if we can delineate this class, if we can say what kind of cerebral organoids have the capacity for consciousness, then we also know what the content of that consciousness is.” Maybe you think that it is crazy to say “Oh, we can tell which organoids have consciousness, but not which ones have Conscious State X”; but I do not think that is correct. The neural correlates of consciousness (take, for instance, the posterior hot zone hypothesis) are a way of identifying whether or not a particular brain is instantiating consciousness. They do not tell us the content of that conscious state. They are merely a way of differentiating between the presence or absence of conscious experience. The way we gain insight into the content of a conscious experience is, for instance, by putting a conscious human into an fMRI scanner, showing them a picture of a house or letting the person think of a house, and then seeing what the brain does when they are thinking of a house. We know they are thinking of a house because they tell us, “I am thinking of a house.”

That is a way to know about content in conscious humans from whom we can elicit verbal reports, but it is not a way to know about content in conscious cerebral organoids that cannot provide verbal reports. Not today anyway, and arguably not ever, unless we manage to build a decipherable behavioral output organ on top of the cerebral organoid. Therefore, I do think that there is this concern that we can know something about the presence of consciousness, but not about the content of consciousness.

I have some further points I want to get through briefly. Hank made the claim, regarding “invasive research,” that he is “invading” our brains right now; well, I think this is a pretty far-fetched definition of “invasive research.” I do not think, from a commonsense perspective, that we think of listening to music as being an invasive intervention in your brain. I am concerned in this debate with the commonsense notion of “invasive,” where implements are inserted into the brain. Hank also stated, “Well, look, we do invasive research on humans all the time, people who have their brains open because of some neuroscientific or neurological procedure, and we can do research on them.” I grant that point, and I agree that I have to limit my claim in the following way: we ought not to do invasive neuroscientific research that is not within the context of a medical procedure. The kinds of research interventions Hank delineated are in the context of a medical procedure or have some benefit for the patient, and I am willing to specify that the kinds of research that I am suggesting should not be permitted in COCCs are those that are not within the context of a medical procedure or that are not beneficial to the cerebral organoid in question.

Hank also brought up consent, and he stated that, by the time we get to the kinds of organoids that are able to have these sorts of sophisticated cognitive conscious states, they will likely be able to consent, so we really do not have to worry about it anymore. I do not think that is true. Hank cites the fMRI studies with individuals who were believed to be in a persistent vegetative state but then appeared to be in a minimally conscious state because they were able to modulate their brains in ways that responded to questions. First, as he rightly noted, only some of these individuals were able to do this; and it is certainly possible that there are conscious states present in the brains of individuals in a minimally conscious state that evade the capture of an fMRI. Second, in order to receive informed consent for participating in a research study, we have to make sure that the individual has the capacity to give that consent. How would we do a capacity assessment? Doing a capacity assessment on an individual in a minimally conscious state, who can answer “yes” by thinking about tennis and “no” by thinking about walking through the rooms of a house, seems entirely insufficient. The sort of robustness we require for the capacity to consent to invasive neuroscience research cannot be assessed through these very simple Yes/No questions. I thus reject the claim that we would be able to sufficiently assess the capacity of a cerebral organoid to make decisions regarding invasive neuroscience research in a scenario where we are using neuroscientific means to ask Yes/No questions.

With that I believe that I have convincingly argued my case against Hank.

Best Argument of Opposing Side

HG: I am trying to figure out how to do this without sounding snarky. I think Karola’s best argument is that if you define the debate question to be about a currently non-existing, “no reason to think it may ever exist,” class of things that have human-like consciousness and are deserving of moral status, but without the ability to consent, and give her that narrow entity, then she wins. And no, I guess I cannot argue with that. I do not think it is likely or plausible; but of course, I am just a humble country lawyer, and philosophers argue about things like this all the time.

In terms of the realities of what we should be doing with respect to research and cerebral organoids, I do not think it is a very relevant issue; it is not going to be relevant any time soon and may never be relevant. But, I think, Karola, on that one, you’ve got me.

KK: I think Hank’s best points are (1) the argument from the utilitarian value of this research and (2) the concerns regarding the consistency of my position. Undoubtedly there is a lot of value in cerebral organoid research, and we have not talked about how that should factor into the equation of ethical decisionmaking regarding invasive neuroscience research in cerebral organoids, specifically what sorts of moral obligations are generated by the suffering that this kind of research could alleviate. I began talking about this at the beginning, but it has not been brought up again, and I think that may be a very strong argument for the permissibility of this kind of research. With respect to the second point, I do think it is hard for me to thread the needle between not permitting research where there is a possibility that cerebral organoids have this special type of conscious state (Conscious State X), while at the same time allowing that invasive neuroscience research in most nonhuman animals is permissible. There are responses to both of these challenges, and I have provided some, but these are serious challenges.

References

Note

1. Committee on Ethical, Legal, and Regulatory Issues Associated with Neural Chimeras and Organoids. The Emerging Field of Human Neural Organoids, Transplants, and Chimeras: Science, Ethics, and Governance. Washington, DC: National Academies Press; 2021.