Humans is a science fiction television series set in what appears to be present-day London. What makes it science fiction is that in London and worldwide, there are robots that look like humans and can mimic human behavior. The series raises several important ethical and philosophical questions about artificial intelligence and robotics, which should be of interest to bioethicists.
Because the series features robots, its title may appear to be ironic. However, the title is apt insofar as the series explores what life would be like for humans in a world that includes robots with convincing human looks and behavior. The series portrays possible benefits to humans, but it also warns against possible costs and risks. It imagines how robots might affect human relationships, and the kinds of relationships that might develop between humans and robots. However, the issues that Humans explores are not limited to the impact of robots on humans. Other issues include the moral status and agency of robots, whether they can have free will, and whether they can have personal relationships with one another.
There are two types of robots depicted, which are referred to as “synthetics,” “synths,” or (derogatorily) “dollies”: nonsentient and sentient ones. Both resemble humans and can mimic human speech and behavior, but sentient synths are conscious and can think, feel emotions, and experience pleasure and pain.
Humans was created by Sam Vincent and Jonathan Brackley, and is based on a Swedish television series entitled Real Humans that first aired in 2012. The first eight-episode season of Humans was televised in the United Kingdom on Channel 4 and in the United States on AMC in 2015. The second eight-episode season was televised in both countries in 2016, and a third season is planned for 2018. This article will analyze the first season of the series, and a future article will examine the second season.
Nonsentient Synths
In Humans, nonsentient synths are mass produced and are available for purchase. They can perform many tasks that humans can perform. In the opening sequence of the first episode, Joe Hawkins purchases a nonsentient synth named Anita, who has the physical features of a young, attractive Asian woman. He has purchased her to help with housework and care for Sophie, the Hawkins’s younger daughter. Laura, Joe’s wife, is a lawyer and has been working and coming home late, which is straining their relationship. Joe hopes that Anita will enable Laura to have more free time and give them an opportunity to revive their relationship.
Nonsentient Robots: Benefits and Costs/Risks
The opening sequence identifies two possible benefits of nonsentient robots: a reduction in the burdens humans typically face, such as household tasks and childcare; and more free time for humans, giving them an opportunity to enjoy and strengthen their personal relationships. Later, it is suggested that nonsentient synths may be better suited than humans to care for children. For example, in Episode 3, Anita cites her characteristics as follows: she doesn’t forget; she doesn’t get angry or depressed or intoxicated; she is faster, stronger, and more observant; and she doesn’t feel fear. On the other hand, Anita admits that she cannot feel love.
This positive image is countered by Laura’s reaction to Anita’s presence. Rather than welcoming Anita’s assistance in caring for Sophie, Laura appears to be jealous of Anita and of Sophie’s attachment to the synth. For example, in Episode 1, Laura orders Anita not to check in on Sophie at night. “That is my job,” says Laura. Over time, Sophie’s attachment to Anita grows, which increases Laura’s jealousy. In a later scene in the same episode, Laura sees Anita reading to Sophie and says she will do it instead. Sophie says, “No, I want her [Anita] to do it.” Laura responds: “Reading to you is Mommy’s job.” Sophie responds: “But she doesn’t rush.” Here, it is suggested that even if robots have more patience and can be “better” at parenting in that respect, there is a danger of undermining the parent–child relationship.
Whereas Joe’s intent was to strengthen his relationship with Laura, she accuses him of getting Anita as a substitute for her. It turns out that Laura’s fears were not unfounded. In Episode 4, Joe activates a program that enables him to have sex with Anita, which, contrary to his original aim, drives an additional wedge between him and Laura. Here is another warning about the risk of unwanted consequences.
Another couple illustrates additional concerns about the potential detrimental impact of robots on personal relationships between humans. Jill is home recovering from serious injuries resulting from an accident. The Health Authority supplies a nonsentient synth named Simon to provide in-home services (e.g., rehabilitation, cooking, and help with bathing and toileting). Simon provides a clear benefit insofar as he meets Jill’s needs and reduces her husband’s caregiving burdens.
However, Humans warns that there may be a downside to such reliance on robots. The first hint of this is in Episode 2, when Jill’s husband Pete comes home to find Simon carrying Jill to the bathroom for a bath. Much as Laura was upset at Anita taking on her role as mother, Pete appears annoyed to see that a robot is taking his place, especially for such an intimate task. This theme is developed further in Episode 4. Jill tells Pete that she doesn’t think he makes her happy anymore. She doesn’t think that their relationship is healthy for her, and she would like some time apart. Pete attributes her decision to Simon, saying, “ever since we got that thing, this has been coming.” He grabs Simon, shoves him against the wall, and angrily pounds his fist into the wall.
In Episode 5, talking to Pete, Jill lists several ways in which Simon is preferable to him: “You know what’s nice? I don’t have to walk on eggshells around him. I don’t have to think about what I’m going to say in case it makes him angry. I don’t have to lie there wondering if he still loves me, if he still finds me attractive after the accident, always worrying about what he’s thinking…I can rely on him, completely.” However, a scene in Episode 6 presents the opposite side of the coin. Jill is in bed with Simon. They have had sex and Jill is disappointed because of Simon’s lack of feeling and spontaneity. Simon asks, mechanically, “Was that not pleasurable, Jill? The angle of entry was optimized.” She responds, “It was fine, Simon, very efficient. Just [pause] can you not do something a bit more, you know, unexpected?” Simon answers: “What would you like me to do?” Jill replies, “Well, the whole point is that [pause]. Never mind. Just say something random. Tell me a joke or pay me a compliment.” Simon responds, “Your body mass index is well within the recommended range for someone of your age, height, and weight.” The scene ends with Jill turning away from Simon, looking exasperated.
Is Simon’s inability to satisfy Jill simply the result of unimaginative programming or an inherent limitation of nonsentient robots? Humans’ partial answer is provided in a subsequent scene in the same episode. Jill calls Pete, panicked. She says she had Simon altered, “I wanted him to [pause] just [pause] he’s going haywire.” We see her holding the bathroom door closed trying to keep Simon from entering. When Pete arrives, Simon says he was in the middle of intercourse with Jill and she’s playing hard to get. As Pete attacks Simon with a crowbar, Simon says, “Peter, please, if you power me down now, I’ll be unable to penetrate your wife.” Pete hits Simon several times and the synth collapses. Kneeling over Simon, Jill says, “I’m sorry. I’m so sorry.” Apparently, Jill had Simon modified to be more spontaneous and/or passionate when having intercourse, but it was too extreme for her liking. The viewer is left to wonder: Can better programming produce a robot that will satisfy people who desire a “happy medium” between excessive passivity and excessive spontaneity/aggressiveness, or does this case illustrate an inherent limitation of nonsentient robots?
One possible benefit of robots does not require an exploration of the world of science fiction: Robots are either already in use or being developed to assist the elderly and enable them to live independently rather than in an institutional setting.[1] This assistance includes aid in scheduling and taking medication, monitoring vital signs, performing household chores, assisting with ambulation, and even providing companionship.
Whereas robots may have such welcome benefits, the series warns that they might also enable excessive paternalism. The primary vehicle for illustrating this risk is a stern-looking nonsentient synth named Vera. The local Health Authority has assigned her to George Millican, an aging retired robotics scientist who helped design and develop nonsentient synths. A kinder, gentler nonsentient synth named Odi had been caring for George, but after Odi began to malfunction, the Health Authority reassigned George’s care to Vera despite George’s objections. George’s attitude toward the Health Authority and its control over his life is revealed in a scene early in Episode 1. When the caseworker goes to George’s house with Vera to determine whether Odi needs to be replaced, George mutters under his breath as he opens the door, “Nanny state gestapo.” In response to George’s reluctance to replace Odi, the caseworker explains that the upgraded synth can do 10 times more than older models, such as fine-tuning his medication, implementing exercise plans to reduce various risks, and taking his blood pressure. George responds sarcastically, “Does she check your prostate too?”
This scene foreshadows the extent to which Vera enables the Health Authority to exercise paternalistic control over George. As the scene continues, when George says his health is “fine,” the caseworker responds that the report states that he suffers from memory loss and tremors in his extremities. She adds that the law requires her to give his current synth (Odi) “the once over,” and if it “fails the check,” George will get an upgrade whether or not he likes it.
Several additional scenes further illustrate the risk that robots will enable unfettered paternalistic interference to promote health. In Episode 2, Vera, who is now the synth officially designated to manage George’s health, closes the drapes in a room in his house. Obviously annoyed, George asks Vera what she is doing. She responds: “The particulate saturation in this room exceeds safe limits for men over the age of sixty. It must be cleaned and aired.” When George asserts that she can’t do anything without asking him, she responds that she is there to take care of him. He responds angrily, but ineffectively, “You are here to do as you are damn well told.”
In a subsequent scene in the same episode, George is annoyed at Vera’s efforts to manage his health and regulate his behavior. She brings him low-sodium bean broth, but George complains that he had asked for a toasted cheese. Vera responds: “This meal is in keeping with your current dietary requirements.” We then see George making a phone call, seeking authorization to return Vera. He is frustrated when he realizes that he is speaking to a synth who asks for the “nature of the fault” and states that synths cannot be returned or replaced if they have none. George replies that there is no fault but that Vera would be better suited “guarding a chain gang on a Siberian Gulag.”
In another Episode 2 scene, Vera insists that a reluctant George take his medication. She says that any noncompliance or variation in his medication intake must be reported to his GP. When George quips that he will name Vera “tugboat,” she responds that any change in name must be approved by her “primary user,” which is the local Health Authority, not George. It is in virtue of its control over Vera that the Health Authority is able to exercise seemingly unfettered paternalistic control over George. An important take-home message is that, absent appropriate constraints, there is a danger that robots will enable government agencies to exercise excessive control over the lives of citizens “for their own good.”
Humans identifies several additional potential benefits and costs/risks. Adding to the benefits column, Episode 1 includes a television interview with a scientist who claims that robots will liberate humans. He asserts that the best reason for making machines more like people is to make people less like machines. He mentions several kinds of laborers and claims that synthetic devices can free people. “We’ve treated people like machines for too long,” he maintains. “It’s time to liberate their minds, their bodies to think, to feel, to be more human.” The interviewer counters by claiming that there are potential costs: “But a lot of people would argue that work is a human right. If anything, the hard work gives you a sense of self-worth.” The scientist responds sarcastically: “I think you should spend one week working in a microchip facility.”
Already in the title sequence, Humans gives expression to the common fear that a proliferation of robots will cause a substantial rise in unemployment.[2] One of the images is a headline from The Boston Times that reads “Robots Threaten 10 Million Jobs.” The subsequent narrative paints a rather unflattering—but not completely implausible—picture of how humans might respond to this and other perceived encroachments on their prerogatives by robots. A sizeable protest movement, “We Are People,” has emerged to protect the rights and interests of humans. A speaker at a We Are People rally denounces nonsentient synths and their destructive impact on humans:
We’re giving ourselves away, piece by piece. We’re handing over the things that make us who we are or maybe who we were…Look around. This place used to be full of people, working people, creating, building, making, coming together, earning a place in our society. Those people haven’t just lost their jobs. They’ve lost their purpose. But it’s not just work. Why raise your kids when a dolly [robot] can do it? Why cook your family a meal when a dolly can do it? Why go on a date? Why try to get to know someone when you can pay a dolly for sex? In every area of human life, they are coming between us. We don’t have to connect to each other, only to them…We are stumbling towards the precipice. We are racing toward it willingly with a smile on our face and a dolly’s hand in ours. Yes, we are people. We are people. We are people. We are people. We are people [crowd joins in shouting we are people].
In Episode 1, Mattie, the Hawkins’s older daughter, expresses a similar concern about the impact of the proliferation of nonsentient robots on her life. In a discussion with her parents about her future, she sarcastically says, “I could be anything I want, right? How about a doctor? That would take me seven years. But by then you would be able to turn any old synth into a brain surgeon in seven seconds.” When her mother replies that they just want her to do her best, Mattie responds, “My best isn’t worth anything.” In Episode 2, Mattie expresses a similar attitude. In a meeting with a school administrator and her mother, Mattie remarks that the career advisor she met with was “just a dolly.” When the administrator asks, “Do you have a problem with synthetics?” Mattie responds: “Why would I have a problem with a thing that’s making my existence pointless?”
The human response to the perceived threat of robots is not limited to speeches, peaceful protests, and complaints. Human hostility to robots also turns violent. For example, there are “smash clubs” that provide paying human customers the opportunity to destroy robots by beating them with a variety of objects, including tools, pieces of metal, and baseball bats.
Although the series may exaggerate the negative impact of robots on humans and the intensity of the human response, it does serve as a warning that the proliferation of robots has the potential to cause significant disruptions in people’s lives as well as extremist reactionary movements. It also suggests that an ethical challenge will be to anticipate such disruptions and to identify and implement measures to prevent or minimize the resulting harms.[3]
Emotional Bonds of Humans to Nonsentient Robots
Can humans become emotionally attached to nonsentient robots? The answer is yes; just as Sophie became attached to Anita, children can develop emotional ties to inanimate objects, such as dolls and stuffed animals; and there is some evidence that persons with advanced dementia can develop emotional bonds to robots.[4] But can cognitively intact adults who recognize that robots are inanimate machines nevertheless develop emotional bonds to them? Obviously, only empirical research can provide a definitive answer, but Humans suggests that an affirmative answer is not entirely implausible.
In the series, George Millican, the aging retired robotics scientist, is shown to have an emotional attachment to Odi, the nonsentient synth that cared for him before the local Health Authority reassigned his care to Vera. When the caseworker first brings Vera, George hides Odi and repeatedly does so later to prevent him from being removed and recycled. Several scenes provide additional evidence of George’s attachment to Odi.
In a scene in Episode 1, George and Odi are in a market shopping. Odi malfunctions and injures a customer. The police (Pete and his partner, Karen) arrive and tell George that Odi needs to be recycled. George resists and says he needs Odi. He takes Odi to his house and attempts to fix him. George is able to revive Odi, but the synth continues to malfunction. George shows Odi some old photographs and Odi says, “It’s you and me.” He follows with self-diagnostic reports, concluding with “memory exception” and “fatal error.” Unable to fix his cherished synth, George is seen in a subsequent scene holding a mallet, saying he can’t let Odi be recycled. He apparently intended to destroy Odi himself. However, when Odi begins to recount past incidents with George’s family, George becomes emotional and cannot destroy Odi. Instead, he continues to hide him to protect him from being found and recycled.
In an early scene in Episode 3, George tricks Vera and locks her in a room so he can go to the shed and rescue Odi, who has been hidden there. To prevent Odi from being discovered and recycled, George has Odi drive them into the countryside. In a later scene, Odi loses control of the car and it crashes into a tree. Odi insists on staying with George, but the latter orders Odi to go into the woods to avoid being recycled. In Episode 6, Odi returns to George’s house severely damaged. George again attempts to repair him. A sentient synth named Niska, who is staying in George’s house to avoid capture, observes George’s attachment to Odi and asks, “Why care so much for something that cannot care for you?” George responds: “The reflection. I look at Odi, I don’t see a synthetic. I see all the years of care he gave us [George and his wife before her death], all of the memories he carried for me when I couldn’t. He can’t love me, but [pause] I see all those years of love [pause] looking back at me.” In Episode 7, Odi kneels over George as he lies dying from a gunshot, and haltingly and obviously malfunctioning, says, “Hello George. What can I do for you today George?” George responds, “Sorry Odi, you’re going to be on your own.” In one final sign of his emotional attachment to Odi, George reaches out for Odi’s arm and firmly holds it until he dies.
To be sure, no fictional cinematic portrayal of a relationship between a human and a robot can provide reliable evidence that the former can develop emotional ties to the latter. Nevertheless, after viewing the poignant bond between George and Odi, it is difficult to dismiss out of hand the possibility that humans who recognize that robots are inanimate machines can nevertheless develop emotional ties to them.
Moral Status of Nonsentient Robots
It might seem obvious that nonsentient robots have no moral status. Even if a robot can look and behave like a human, how can a machine that cannot think or feel have moral status? Humans does not provide viewers with reasons to revise their beliefs on this issue. Instead, it helps viewers understand why persons of sound mind might be inclined to treat some nonsentient robots as if they have moral status.
As the relationship between George and Odi indicates, although nonsentient synths were developed as machines to assist humans, some humans in the series regard synths as more than mere “machines,” and develop emotional attachments to them. Like George, a woman named Alexandra Kennedy has an emotional attachment to her synth (Howard). Perhaps because of that emotional attachment, she also believes that Howard should be treated with dignity, arguably a concept that implies moral standing.
In Episode 4, Alexandra asks Laura to represent her in a suit against a theater. An usher forced her to leave a performance of a play because, contrary to the rules, she brought Howard with her. When Laura asks Howard how the play, Death of a Salesman, affected him before he was ejected, he responds by reciting a plot summary. Given a second chance, Howard remains silent and merely smiles. Addressing Alexandra, Laura says, “He [Howard] doesn’t have any human rights. You know that. Howard doesn’t enjoy the play any more than your wristwatch. He’s just better at convincing you.”[5] Alexandra responds:
I’m not a mad woman. I don’t believe that Howard is a human. But I also don’t believe that he is an inanimate object that I should be ashamed of having a connection with. We created these creatures. They walk and they talk, and they look and they smell and they become part of our lives and families. They are as close to humans as can be. And yet still people insist that forming a relationship with them or treating them with dignity is somehow perverse. Well, we’ve created a gray area, Mrs. Hawkins. We can’t keep insisting that they are just gadgets. They are more than that. We have made them more than that.
Some viewers will disagree with Alexandra. However, others might hesitate to insist that, if nonsentient robots “as close to humans as can be” could be created, anyone who treats them with respect must be mad and/or perverse.
Sex with Nonsentient Robots
Humans invites viewers to consider several ethical and conceptual questions in relation to sex with nonsentient robots. For example, in Episode 4, a teenage boy gropes a female nonsentient synth at a party. She resists and says, “My system settings require that any inappropriate touching by a minor be reported to the primary user.” He switches her off and asks for help to carry her off so he can have sex with her. Mattie, who is at the party, compares the boy’s behavior to having sex with an unconscious woman, which is both illegal and unethical. However, whether they are switched on or off, nonsentient synths lack consciousness. Do the concepts of consensual and nonconsensual sex apply to nonsentient robots? If a robot has been programmed to resist and a man touches her or forces her to have sex, can it be conceptualized as sexual assault or rape? Can it be considered morally equivalent to sexual assault or rape? If not, can it at least be considered (morally) inappropriate? What if a robot has been programmed with a complex algorithm that determines whether it complies with or resists sexual advances? Is it (morally) inappropriate to have sex with it if it resists?
When Laura discovers that Joe has had sex with Anita, she angrily orders him to move out. Can having sex with a robot be considered conceptually equivalent to, or morally on a par with, adultery? Humans does not directly address this question. Instead, its focus is on the message that even if extramarital sex occurs with a robot rather than a human, there is a risk that it will undermine (non-open) marriages.
Sentient Synths
In Humans, David Elster is a scientist who designed and created the first nonsentient synth. He also made five sentient synths. The first, Mia, was created to care for Elster’s son Leo. Because of severe mental illness, Leo’s mother Beatrice was unable to care for him. When Leo was 13 years old, his mother committed suicide by driving a car in which Leo was a passenger into a lake. Mia was able to remove Leo from the car, but it was too late to save him from sustaining a severe anoxic brain injury. Leo’s father created a synthetic brain containing as many of Leo’s memories as survived and could be transferred. To provide Leo with “siblings,” Elster created three additional sentient synths: Niska, Max, and Fred. He later fabricated another sentient synth who looked like Leo’s mother and was given her name (Beatrice). Leo was horrified when he first saw her, so Elster took her into the woods to destroy her. However, he changed his mind at the last minute and merely abandoned her in the forest. When Elster returned home, he lied to Leo, Mia, Niska, Fred, and Max, telling them that Beatrice was dead. That night, Elster committed suicide. Before doing so, he destroyed all records of the code for creating sentient synths. However, he had embedded the information in the codes of the sentient robots and Leo. All have to be linked in order to unlock the code.
After Elster committed suicide, Leo and the four sentient synths fled to avoid capture and possible destruction. They are pursued by a scientist named Hobb, who had worked on the design of nonsentient synths with David Elster. Hobb is among a handful of humans who are aware of the existence of sentient synths. He wants to discover the code that Elster used to create them, and he believes that capturing the sentient synths will enable him to unlock the secret. In Episode 1, Fred is captured by Hobb; Max and Leo are on the run together; Niska is on her own and, at Leo’s urging, has become a prostitute posing as a nonsentient synth; and Beatrice, who has changed her name to Karen, is passing as a human police detective (Pete’s partner). Mia is also captured in Episode 1, but not by Hobb. Instead, she is kidnapped by criminals who steal, reprogram, and resell nonsentient synths. Assuming that Mia is just another nonsentient synth, the criminals reprogram her to become the synth named Anita that Joe Hawkins buys at the beginning of Episode 1. Subsequently, Joe and Laura discover Anita’s true identity, but only after Joe has had sex with Anita. Eventually, Mia’s core code is restored, and the process that transformed Mia into Anita is reversed.
Nonsentient synths, despite their humanoid appearance, are easily discernible as nonhuman because of their rigidity and their somewhat staccato, machine-like motion. In addition, if asked a question that is outside the scope of their program, they respond by stating, “I am sorry, I’m afraid I do not understand the question.” By contrast, neither viewers nor human characters in the series can distinguish between sentient synths and humans solely on the basis of their outward appearance and behavior. This point is nicely illustrated by the ability of Karen to “pass” and secure a position as a human police officer. Pete, her human partner, had sex with Karen without suspecting that she was a robot and not a human. To help her pass, Karen crafted what looks like a scar on her neck. Synths are synthetic machines, not biological organisms, and therefore cannot heal if they are injured; any exterior damage must be repaired with synthetic material. Thus, the fake scar serves as a sign that Karen could not be a synth.
In addition to being indistinguishable from humans on the basis of their outward appearance and behavior, the sentient robots in Humans pass the Turing test, which gauges a machine’s ability to display intelligent behavior indistinguishable from that of a human. The only significant and discernible difference is that sentient robots are not living biological organisms. Unlike humans, but similar to electric cars, they need to be recharged periodically; and instead of bleeding like humans, they ooze a blue fluid if injured.
Today, sentient robots like those portrayed in Humans are only science fiction.[6] Nevertheless, if viewers are willing to suspend disbelief, the series enables them to imagine what a possible future world that includes sentient robots might be like. What might be the characteristics of sentient robots? Could they have free will? How might humans and sentient robots interact with each other? How might sentient robots interact with other sentient robots? Could sentient robots have a capacity to make moral judgments?[7] Let us consider how the first season of Humans answers these questions.
Characteristics of Sentient Robots
With the noted exception that they are not living biological organisms, the sentient robots in the series are indistinguishable from humans. But do they have distinguishing characteristics? Do all sentient robots have the same appearance, gender, character traits, or likes and dislikes? Not the five sentient robots in Humans. Three are female and two are male. Two (Beatrice/Karen and Niska) are white, two (Fred and Max) are black, and one (Mia/Anita) is Asian. One (Max) is gentle, kind, upbeat, and selfless; one (Niska) is hostile toward humans and is prone to violence against them (she killed a john and attacked humans in a smash club); another (Mia/Anita) is a Mother Earth figure; and a fourth (Beatrice/Karen) is bitter, unhappy, and suicidal.
An age-old question in relation to humans is whether their distinctive character traits are a product of nature, nurture, or a combination of both. The corresponding question about the sentient robots in Humans is whether their distinctive character traits are a result of their programs/codes, the environment and their experiences, or a combination of both. The series’ answer is “it depends.” David Elster created Mia specifically for the purpose of caring for his son Leo. She was programmed to have character traits that were suitable for that function, and she retained them throughout the 14 years of her existence. Therefore, in her case, it seems that her character traits were largely attributable to her program/code. Similarly, Max appears to have been created with distinctive character traits that persisted over time. Apparently, they, too, were embedded in a specific program/code.
By contrast, Niska’s and Beatrice’s/Karen’s character traits appear to be significantly influenced by their respective experiences. To mask her true nature and avoid capture and possible destruction, Niska took the identity of a nonsentient synth prostitute. She suffered humiliation and abuse from human clients, and her treatment by humans appears to have contributed to her anger and violence (e.g., killing a human client and beating human customers with a baseball bat in a smash club). In Episode 5, Niska endorses this account when she tells George that her experiences have shaped her just as his have influenced him. Episode 7 provides additional evidence that Niska’s character is affected by her experience. We see a rather dramatic transformation in the way that she interacts with Sophie, the Hawkins’s younger daughter. At first, Niska clearly has no interest in Sophie and is cold and gruff to the child. However, in successive scenes, Niska becomes warmer and friendlier toward Sophie, and we see her reading to Sophie while the child is sitting on her lap. There is also a significant transformation in Niska’s interaction with George, which indicates a marked change in her personality. At first, she is cold and hostile, but over time she develops feelings for him. In Episode 7, as George lies dying after Karen accidentally shot him, Niska bends down and affectionately kisses him. Before leaving, she says to George, “I wish I could save you. [pause] I’m sorry.”
Beatrice/Karen’s demeanor appears to have been influenced by two factors: the experience of being shunned and abandoned immediately after she was created, and the experience of rejection when Pete ended their relationship after discovering that Karen is a robot. It was after this second experience that Karen concluded that sentient synths were destined to experience unbearable pain and suffering that could be avoided only by ceasing to exist. Leo endorses this explanation in Episode 8. When Beatrice/Karen laments that the existence of sentient synths “can only lead to pain,” Leo responds, “You’ve been alone since the day you were made. That’s why you can’t see you’re only talking about yourself.” George offers a similar explanation in Episode 7.
Sentient Robots and Free Will
An unresolved philosophical question is whether humans have free will. The series does not consider this question. It does, however, consider a similar question about sentient robots. A scene in Episode 8 (the final episode of the first season) offers an affirmative answer. That scene features an encounter between Hobb and Fred, who has been captured and is being held in a research facility. Addressing an unidentified man and woman (possibly administrators and/or government officials), Hobb says, “People don’t just want to be served. They want to be loved. Now imagine a machine that can think and feel but still be controlled like a regular synthetic. I can build on David’s [David Elster’s] work, I can create conscious machines like Fred but mine will be obedient.” Hobb then demonstrates by turning Fred on. Fred reaches out with his hand to strike or strangle Hobb, but stops, demonstrating that he is controlled and obedient. Fred asks, “What have you done to me?” Hobb responds that he has made alterations to Fred’s programming that effectively make him Fred’s “primary user” and give him control over Fred. Turning to the two unidentified persons, Hobb says, “He [Fred] won’t like it, but he’s continually loyal to me.” Speaking to Fred, Hobb says, “Go on. Put your hands around my throat. One little squeeze. That’s all it’ll take.” When Fred is unable to reach for Hobb’s throat, Hobb taunts him, “You can’t do it, can you?” Fred responds, “You trapped us in our own minds. Give us feeling, but take away free will. Make us slaves.”[8] Hobb walks away while Fred is frozen with his hands outstretched.
This scene implies that even though robots are programmed machines, they can have the ability to make what can be considered their own choices (i.e., choices that they were not explicitly programmed to make). Fred had this ability before Hobb altered his code. Prior to that time, Fred was not explicitly programmed to strike or not strike Hobb; presumably, either choice was compatible with his original code. However, after Hobb’s changes, Fred could no longer choose whether or not to strike Hobb. The choice was no longer his own. Viewers are left with three questions: (1) Is this a plausible account of the potential capacities of robots? (2) If robots were to have this capacity, what are the risks to humans? (3) If robots were to have this capacity, would that satisfy the conceptual criterion of “free will”?
Interactions Between Humans and Sentient Robots
Humans offers a mixed picture of the relations between humans and sentient robots. On the one hand, several scenes show that humans and sentient robots can form friendly relationships. On the other hand, some scenes question whether humans and sentient robots can peacefully coexist.
When the nonsentient synth Anita is transformed back into the sentient synth Mia, it does not appear to matter to the Hawkins family that she is a sentient robot. They invite Mia to stay with them and protect her from being captured. Later, they welcome Leo and the other sentient synths and protect them as well. It is only when Laura sees a televised report about Niska’s acts of violence against humans that Niska is ordered to leave.
In a scene in Episode 7, Mia acts like a friend and confidante to Laura, providing emotional support and reassurance when Laura expresses doubts about her ability to love. Mia has learned that Laura’s mother rejected Laura as a child because she held her responsible for the death of Laura’s brother. Mia tells Laura that it is only because of her mother’s rejection that she mistakenly believes that she is incapable of love. When Laura looks like she is on the verge of tears, Mia embraces her and gives her a comforting motherly hug. In other scenes in the same episode, Mia acts as a marriage counselor. She talks to Joe and Laura separately, trying to get each to give their relationship another chance.
Pete and Karen initially are friends and later become lovers. However, because Pete did not suspect that Karen was a robot, it would be misleading to characterize their relationship as one between a human and a robot. Indeed, when Karen dramatically reveals in Episode 6 that she is a robot by inserting a charger plug into a jack in her torso, Pete reacts with disgust and leaves. However, later (in Episode 8), Pete meets Karen on a street. He says, “Must have been, umm, lonely all these years.” He asks, “You coming?” and they walk off together, with the unmistakable suggestion that biology no longer matters.
As Niska’s acts of violence against humans demonstrate, however, relations between robots and humans are not always friendly. Neither are the interactions between Hobb and Fred and the other sentient robots. In addition, there are several suggestions that human–robot relations might be much less friendly if the number of sentient robots were to increase substantially. In Episode 7, Leo reveals that David Elster embedded a code in the sentient robots that enables nonsentient synths to be transformed into sentient robots. Later, Joe, who has been supportive of the robots’ attempts to avoid capture, says to Laura, “Can you imagine a whole, a whole load of new ones being made? I mean, this changes everything.” In a subsequent scene, assessing whether they are in any danger, Niska says: “Us five freaks are fine. We’re no threat. We’re novelty. But five thousand, five million…” In Episode 5, we see Hobb in the research facility with the same unidentified man and woman who are with him there in Episode 8. The man asks, “Worst case scenario?” When Hobb responds, “consciousness proliferation,” the woman warns, “If what you say is true, this is a national, global security matter.” Previously, in the same episode, Niska offered a possible basis for this fear of “consciousness proliferation.” George asks Niska, “Who wants to destroy you?” She answers, “Anyone who knows what we are, who we could become. We’re stronger. We’re more intelligent. Of course, you see us as a threat.”
Beatrice/Karen expresses skepticism about the possibility of peaceful coexistence between humans and robots. In Episode 8, when Leo asks her why she wants to terminate each of the sentient robots, including herself, she answers, “We’d never be able to live in peace with humans.” However, it is apparent that her primary concern is not a typical science fiction nightmare of a literal war between robots and humans. To its credit, Humans avoids such scenarios. Beatrice/Karen’s primary concern, based on her own experience, is a human-like concern about the pain that conscious robots will experience in a human world, caused by loneliness and rejection.
Interactions Between Sentient Robots and Other Sentient Robots
Despite the absence of a biological or genetic relationship, four of the sentient robots (Max, Fred, Mia, and Niska) consider themselves to be family. In Episode 2, Max says that all the sentient robots are “a family,” and he refers to Niska as his “sister.” This gives rise to a conceptual question: Is the fact that they were all created by the same person (David Elster) sufficient conceptual “family resemblance” to warrant referring to them as siblings? Because Leo has a human body and a synthetic brain, technically he is a cyborg, not a robot. However, both Leo and the sentient robots consider him to be a member of the robot family. But what is their family relationship? David Elster is Leo’s biological father, and he also re-created Leo as a cyborg. Elster also created the four robots. In Episode 2, Leo tells Max that when he looks at him he doesn’t see a design, he sees his brother. Can it be said that Leo, Max, and the other robots have the same father and are siblings, or is this inappropriately stretching the concepts unless the terms are used only metaphorically?
Leo’s relationship to Mia is even more problematic. In Episode 1, he tells Max that he loves her and she loves him.[9] But what is Mia’s relationship to Leo? David Elster created Mia to care for Leo when Beatrice couldn’t because of her mental illness. Is Mia Leo’s nanny or sister? These conceptual puzzles are somewhat reminiscent of questions that can arise in the context of reproductive technologies and cloning.
Putting aside conceptual concerns, the robots’ strong emotional ties to each other and their interactions with one another mimic what one might expect among close human family members. Therefore, it may be appropriate to refer to them as a “family,” if only in a figurative sense.
A few examples illustrate how the robots act like members of a close-knit family. They protect each other, sometimes at significant risk to themselves. In Episode 6, Max jumps off a bridge, risking destruction, to save Leo. In Episode 8, Fred, who can be tracked by Hobb by means of an implanted chip, urges the other robots to leave him so that they can escape capture. Despite numerous obstacles and risks, the robots persist in their efforts to stay together. In Episode 2, Leo and Max relentlessly search for Mia so that the family can be reunited. When they follow up a lead that takes them to a shady underworld character, Leo confronts him and, predictably, is seriously injured.
Sentient Robots and the Capacity to Make Moral Judgments
Do the sentient robots in Humans have a capacity to make moral judgments? A scene in Episode 3 in which Leo and Max rendezvous with Niska shortly after she killed the john is the primary source of relevant evidence. It strongly suggests that at least one sentient robot, Max, has the capacity to make moral judgments. Max speaks first and, addressing Niska in what sounds like a very judgmental tone, says “You took someone’s life.” Clearly, the suggestion is that Max believes that such an action would be wrong.[10] Leo’s response is more ambiguous. He begins by saying, “What’s wrong with you?” This might be interpreted to mean that he is agreeing with Max and questioning Niska’s moral character. However, Leo adds, “Do you have any idea of the danger you’ve put us in? All of us.” Here, the suggestion is that Leo is making a prudential and not a moral judgment.
What about Niska? She responds to Max by saying, “You talk about life like it can’t be manufactured.” This statement suggests that she thinks that human lives are replaceable and have no intrinsic value. Her violent acts against humans appear to confirm this conclusion. However, something she says in response to a worry expressed by Max later in the scene suggests that this conclusion may be too hasty. Clearly worried, Max says to Niska, “You’re not going to hurt someone else, are you?” Niska responds, “Only if they deserve it.” This suggests that Niska kills or hurts humans only if she believes that they deserve it, which would be a moral judgment. Even if it is a seriously flawed moral judgment, it nonetheless would be a moral judgment. A scene in Episode 7 also suggests that Niska has the capacity to make moral judgments. When Laura and Joe see a television news broadcast showing Niska’s violent rampage at the smash club and reporting that she was responsible for the killing at the brothel, they order her to leave their house. Niska responds, “Look, I know what I did, and I see now that it was wrong.”
Season 2 has much more to say about whether sentient robots have the capacity to make moral judgments. Therefore, this issue will be addressed at greater length in a subsequent article on Season 2.
Conclusion
No television series, let alone one that is science fiction, can provide definitive answers to ethical and philosophical questions, such as whether it is possible to create robots that will be able to think, experience emotions, make ethical judgments, or have free will. Nevertheless, Humans demonstrates that popular culture can address such serious issues and can do so in a way that cannot be dismissed out of hand as outlandish fantasy.
Because Humans avoids extreme dystopian and utopian visions, the series enables viewers to consider some of the more realistic potential benefits and costs of artificial intelligence and robotics and how they might affect human life and social relationships. Therefore, it may well be a text of particular interest to bioethicists.