This commentary addresses the issue of Human-Like Machines (HLMs), which Lake et al. would like to do more than “object recognition” and play “video games, and board games” (abstract). They would like a machine “to learn or think like a person” (sect. 1, para. 3). I argue that people do vastly more than this: they interact, communicate, share, and collaborate; they use their learning and thinking to “behave”; they experience complex emotions. I believe that these authors have a far too limited sense of what “human-like” behavior is. The kinds of behavior I have in mind include (but are certainly not limited to) these:
1. Drive with a friend in a stick shift car from LA to Vancouver, and on to Banff…
2. Where, using a fly he or she tied, with a fly rod he or she made, he or she should be able to catch a trout which…
3. He or she should be able to clean, cook, and share with a friend.
4. He or she should have a clear gender identity, clearly recognizing what gender he or she is, and understanding the differences between self and other genders. (Let's decide our HLM was manufactured to be, and identifies as, “male.”)
5. He should be able to fall in love, get married, and reproduce. He might wish to vote; he should be able to pay taxes. I'm not certain if he could be a citizen.
6. He should be able to read Hop on Pop to his 4-year-old, helping her to get the idea of reading. He should be able to read it to her 200 times. He should be able to read and understand Foucault, Sahlins, Hinton, le Carré, Erdrich, Munro, and authors like them. He should enjoy reading. He should be able to write a book, like Hop on Pop, or like Wilder's The Foundations of Mathematics.
7. He should be able to have irreconcilable differences with his spouse, get divorced, get depressed, get psychological counseling, get better, fall in love again, remarry, and enjoy his grandchildren. He should be able to detect by scent that the baby needs to have her diaper changed. Recent research indicates that the human nose can discriminate more than one trillion odors (Bushdid et al. 2014). Our HLM should at least recognize a million or so. He should be able to change a diaper and to comfort and calm a crying child. And make mac and cheese.
8. He should be able to go to college, get a B.A. in Anthropology, then a Ph.D., get an academic job, and succeed in teaching the complexities of kinship systems to 60 undergraduates.
9. He should be able to learn to play creditable tennis, squash, baseball, or soccer, and enjoy it into his seventies. He should be able to get a joke. (Two chemists go into a bar. The first says, “I'll have an H2O.” The second says, “I'll have an H2O too.” The second guy dies.) He should be able both to age and to die.
10. He should be able to know the differences between Scotch and Bourbon, and to develop a preference for one or the other, and enjoy it occasionally. Same for wine.
I'm human, and I can do, or have done, all those things (except die), which is precisely why I think this is a fool's errand. I think it is a terrible idea to develop robots that are like humans. There are 7 billion humans on earth already. Why do we need fake humans when we have so many real ones? The robots we have now are (primarily) extremely useful single-function machines that can weld a car together in minutes, 300 a day, and never feel like, well, a robot, or a rivethead (Hamper 2008).
Even this sort of robot can cause lots of problems: substantial unemployment in industry can be attributed to robots, which tend to increase productivity and reduce the need for workers (Baily & Bosworth 2014). If that's what single-purpose (welding) robots can do, imagine what an HLM could do. If you think it might not be a serious problem, read Philip K. Dick's novel, Do Androids Dream of Electric Sheep? (Dick 1968), or better yet, watch Ridley Scott's film Blade Runner (Scott 2007), based on Dick's novel. The key issue in this film is that HLMs are indistinguishable from ordinary humans and are legally allowed to exist only as slaves. They don't like it. Big trouble ensues. (Re number 6, above, our HLM should probably not enjoy Philip K. Dick or Blade Runner.)
What kinds of things should machines be able to do? Jobs inimical to the human condition. Imagine an assistant fireman that could run into a burning building and save the 4-year-old reading Dr. Seuss. Work is underway on robotic devices, referred to as exoskeletons, that can help people with profound spinal cord injuries walk again (Brenner 2016). But this is reasonable only if the device helps the patient go where he wants to go, not where the robot wants to go. There is also work underway on robotic birds, or ornithopters, among them the “Nano Hummingbird” and the “SmartBird.” Both fly with flapping wings (Mackenzie 2012). The utility of these creatures is arguable; most of what they can do could probably be done with a $100 quad-copter drone. (Our HLM should be able to fly a quad-copter drone. I can.)
Google recently reported significant improvements in language translation as a result of the adoption of a neural-network approach (Lewis-Kraus 2016; Turovsky 2016). Many users report dramatic improvements in translations. (My own experience has been less positive.) This is a classic single-purpose “robot” that can help translators, but no one ought to rely on it alone.
In summary, it seems that even with the development of large neural-network-style models, we are far from anything in Blade Runner. It will be a long time before we can have an HLM that can both display a patellar reflex and move the pieces in a chess game. And that, I think, is a very good thing.