Within a modern framework of promising yet still inadequate state-of-the-art artificial intelligence, Lake et al. construct an optimistic, ambitious plan for innovating truer, more representative neural network-inspired machine emulations of human consciousness and cognition, elusive pinnacle goals of many cognitive, semiotic, and cybernetic scientists (Cardon 2006; Clark 2012; 2014; 2015; Kaipa et al. 2010; McShea 2013). Their machine learning-based agenda, possibly requiring future generations of pioneering hybrid neuromorphic computing architectures and other technologies to be fully attained (Lande 1998; Indiveri & Liu 2015; Schuller & Stevens 2015), relies on implementing sets of data- and theory-established "core ingredients" typical of natural human intelligence and development (cf. Bengio 2016; Meltzoff et al. 2009; Thomaz & Cakmak 2013; Weigmann 2006). Such core ingredients, including (1) intuitive causal physics and psychology, (2) compositionality and learning-to-learn, and (3) fast, efficient, real-time gradient-descent deep learning and thinking, will certainly endow contemporary machines with greater human-like cognitive qualities.
But in their efforts to create a standard of human-like machine learning and thinking, Lake et al. awkwardly, and perhaps ironically, erect barriers to realizing ideal human simulation by ignoring what is also very human -- variations in cognitive-emotional neural network structure and function capable of giving rise to non-normative (or unique) personalities and, therefore, dynamic expression of human intelligences and identities (Clark 2012; 2015; in press-a; in press-b; in press-c). Moreover, this same, somewhat counterintuitive, problem in the authors' otherwise rational approach dangerously leaves unaddressed the major ethical and security issues of "free-willed" personified artificial sentient agents, often popularized by fantasists and futurists alike (Bostrom 2014; Briegel 2012; Davies 2016; Fung 2015).
Classic interpretations of perfect humanness arising from the fallibility of humans (e.g., Clark 2012; Nisbett & Ross 1980; Parker & McKinney 1999; Wolfram 2002) appreciably impact the technical feasibility and socio-cultural significance of building and deploying human-emulating personified machines under both nonsocial and social constraints. Humans, like all sentient biological entities, fall within a fuzzy organizational and operational template that bounds the emergence of phylogenic, ontogenic, and sociogenic individuality (cf. Fogel & Fogel 1995; Romanes 1884). Extreme selected variations in individuality, embodied here by modifiable personality and its link to mind, can greatly elevate or diminish human expression, depending on the pressures of situational contexts. Examples may include the presence or absence of the resoluteness, daring, agile deliberation, creativity, and meticulousness essential to achieving matchless, unconventional artistic and scientific accomplishments. They may also include the presence or absence of empathy, morality, or ethics in response to severe human plight and need. Regardless, to completely simulate the range of human intelligence, particularly the solitary-to-sociable and selfish-to-selfless tendencies critical for now-nascent social-like human-machine and machine-machine interactions, scientists and technologists must account for, and better understand, personality trait formation and development in autonomous artificial technologies (Cardon 2006; Clark 2012; 2015; Kaipa et al. 2010; McShea 2013).
These kinds of undertakings will help yield desirable insights into the evolution of technology-augmented human nature and, perhaps more importantly, will inform best practices when establishing advisable failsafe contingencies against unwanted serendipitous or designed human-like machine behavior.
Notably, besides their described usefulness for modeling intended artificial cognitive faculties, Lake et al.'s core ingredients provide systematic concepts and guidelines necessary to begin approximating human-like machine personalities, and to probe the genuine ethological, ecological, and evolutionary consequences of those personalities for both humans and machines. However, similar reported strategies for machine architectures, algorithms, and performance demonstrate only marginal success when used as protocols to reach nearer cognitive-emotional humanness in trending social robot archetypes (Arbib & Fellous 2004; Asada 2015; Berdahl 2010; Di & Wu 2015; Han et al. 2013; Hiolle et al. 2014; Kaipa et al. 2010; McShea 2013; Read et al. 2010; Thomaz & Cakmak 2013; Wallach et al. 2010; Youyou et al. 2015), emphasizing a serious need for improved adaptive quasi-model-free/-based neural nets, trainable distributed cognition-emotion mapping, and artificial personality trait parameterization. The best findings from such work, although far from final reduction to practice, arguably involve the appearance of crude or primitive machine personalities and identities from socially learned intra-/interpersonal relationships possessing cognitive-emotional valences. Valence direction and magnitude often depend on the learner machine's disposition toward response priming/contagion, social facilitation, incentive motivation, and local/stimulus enhancement of observable demonstrator behavior (i.e., human, cohort-machine, and learner-machine behavior).
The resulting self-/world discovery of the learner machine, analogous to healthy/diseased or normal/abnormal human phenomena acquired during early formative (neo)Piagetian cognitive-emotional periods (cf. Nisbett & Ross 1980; Parker & McKinney 1999; Zentall 2013), reciprocally shapes the potential humanness of reflexive/reflective machine actions through labile, interval-delimited, self-organizing traits consistent with natural human personalities, including, but not restricted to, conscientiousness, openness, emotional stability, agreeableness, and extraversion/introversion.
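To make the idea of artificial personality trait parameterization more concrete, the labile, bounded traits described above could be sketched as a clamped trait vector that modulates how strongly a learner machine's stored cognitive-emotional valences shift toward an observed demonstrator's valences. This is only a minimal illustrative sketch under stated assumptions; the names (TraitProfile, valence_update) and the openness/stability weighting rule are hypothetical, not a model reported in the cited literature.

```python
# Hypothetical sketch: Big Five-style traits as a bounded vector that
# modulates valence-weighted social learning. All names and update rules
# are illustrative assumptions, not an implementation from the article.
from dataclasses import dataclass, field

TRAITS = ("openness", "conscientiousness", "extraversion",
          "agreeableness", "emotional_stability")

@dataclass
class TraitProfile:
    # Each trait starts at a neutral midpoint of a fuzzy [0, 1] template.
    values: dict = field(default_factory=lambda: {t: 0.5 for t in TRAITS})

    def clamp(self):
        # Keep traits within the bounded "organizational and operational
        # template" the commentary describes.
        for t in TRAITS:
            self.values[t] = min(1.0, max(0.0, self.values[t]))

def valence_update(profile: TraitProfile,
                   prior_valence: float,
                   observed_valence: float) -> float:
    """Shift a stored cognitive-emotional valence toward an observed
    demonstrator valence, at a rate scaled up by openness (receptivity
    to social input) and damped by emotional stability (resistance to
    response contagion). The weighting is an assumed toy rule."""
    rate = profile.values["openness"] * (1.0 - 0.5 * profile.values["emotional_stability"])
    return prior_valence + rate * (observed_valence - prior_valence)

profile = TraitProfile()
profile.values["openness"] = 0.8           # highly receptive learner
profile.values["emotional_stability"] = 0.4  # somewhat contagion-prone
profile.clamp()
v = valence_update(profile, prior_valence=0.0, observed_valence=1.0)
```

On this toy rule, a more open, less emotionally stable learner converges faster toward demonstrator valences, loosely mirroring the disposition-dependent valence direction and magnitude noted above.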
Even simplistic artificial cognitive-emotional profiles and personalities thus effect varying control over the acquisition and lean of machine domain-general/-specific knowledge, the perception and expression of flat or excessive machine affect, and the rationality and use of inferential machine attitudes/opinions/beliefs (Arbib & Fellous 2004; Asada 2015; Berdahl 2010; Cardon 2006; Davies 2016; Di & Wu 2015; Han et al. 2013; Hiolle et al. 2014; Kaipa et al. 2010; McShea 2013; Read et al. 2010; Wallach et al. 2010; Youyou et al. 2015). And, by favoring certain artificial personality traits, such as openness, a learner machine's active and passive pedagogical experiences may be radically directed by the quality of teacher-student rapport (e.g., Thomaz & Cakmak 2013), enabling opportunities for superior nurturing and growth of distinctive, well-adjusted, thoughtful machine behavior while, in part, restricting harmful rogue machine behavior caused by impoverished learning environments and predictable pathological Gödel-type incompleteness/inconsistency for axiomatic neuropsychological systems (cf. Clark & Hassert 2013). These more-or-less philosophical considerations, along with the merits of Lake et al.'s core ingredients for emerging artificial non-normative (or unique) personalities, will bear increasing technical and sociocultural relevance as the Human Brain Project, the Blue Brain Project, and related connectome missions drive imminent neuromorphic hardware research and development toward precise mimicry of configurable/computational soft-matter variations in human nervous systems (cf. Calimera et al. 2013).