
The humanness of artificial non-normative personalities

Published online by Cambridge University Press: 10 November 2017

Kevin B. Clark*
Affiliation:
Research and Development Service, Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, CA 90073; California NanoSystems Institute, University of California at Los Angeles, Los Angeles, CA 90095; Extreme Science and Engineering Discovery Environment (XSEDE), National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign, Urbana, IL 61801; Biological Collaborative Research Environment (BioCoRE), Theoretical and Computational Biophysics Group, NIH Center for Macromolecular Modeling and Bioinformatics, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL 61801. kbclarkphd@yahoo.com; www.linkedin.com/pub/kevin-clark/58/67/19a

Abstract

Technoscientific ambitions for perfecting human-like machines, by advancing state-of-the-art neuromorphic architectures and cognitive computing, may end in ironic regret without pondering the humanness of fallible artificial non-normative personalities. Self-organizing artificial personalities individualize machine performance and identity through fuzzy conscientiousness, emotionality, extraversion/introversion, and other traits, rendering insights into technology-assisted human evolution, robot ethology/pedagogy, and best practices against unwanted autonomous machine behavior.

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2017

Within a modern framework of promising, yet still inadequate, state-of-the-art artificial intelligence, Lake et al. construct an optimistic, ambitious plan for innovating truer neural network-inspired machine emulations of human consciousness and cognition, elusive pinnacle goals of many cognitive, semiotic, and cybernetic scientists (Cardon 2006; Clark 2012; 2014; 2015; Kaipa et al. 2010; McShea 2013). Their machine learning-based agenda, possibly requiring future generations of pioneering hybrid neuromorphic computing architectures and other technologies to be fully attained (Lande 1998; Indiveri & Liu 2015; Schuller & Stevens 2015), relies on implementing sets of data- and theory-established "core ingredients" typical of natural human intelligence and development (cf. Bengio 2016; Meltzoff et al. 2009; Thomaz & Cakmak 2013; Weigmann 2006). Such core ingredients, including (1) intuitive causal physics and psychology, (2) compositionality and learning-to-learn, and (3) fast, efficient, real-time gradient-descent deep learning and thinking, will certainly endow contemporary state-of-the-art machines with greater human-like cognitive qualities. But, in their effort to create a standard of human-like machine learning and thinking, Lake et al. awkwardly, and perhaps ironically, erect barriers to realizing ideal human simulation by ignoring what is also very human: variations in cognitive-emotional neural network structure and function capable of giving rise to non-normative (or unique) personalities and, therefore, dynamic expression of human intelligences and identities (Clark 2012; 2015; in press-a; in press-b; in press-c). Moreover, this same, somewhat counterintuitive problem in the authors' otherwise rational approach dangerously leaves unaddressed the major ethical and security issues of "free-willed" personified artificial sentient agents, often popularized by fantasists and futurists alike (Bostrom 2014; Briegel 2012; Davies 2016; Fung 2015).
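To make the third core ingredient concrete, and to preview this commentary's worry that individual variation is left out of the standard, the following minimal Python sketch runs plain stochastic gradient descent on the same toy data under two different settings. The learning rates, seeds, and data are illustrative assumptions, not Lake et al.'s implementation; the point is only that one shared learning rule plus individual "dispositions" already yields divergent learners.

```python
import numpy as np

def sgd_fit(x, y, lr, seed, steps=500):
    """Plain stochastic gradient descent on a 1-D linear model.
    The learning rate stands in, loosely, for an individual 'trait'."""
    rng = np.random.default_rng(seed)
    w, b = rng.normal(), rng.normal()      # individualized initialization
    for _ in range(steps):
        i = rng.integers(len(x))           # draw one sample at a time
        err = (w * x[i] + b) - y[i]        # prediction error
        w -= lr * err * x[i]               # gradient step on the weight
        b -= lr * err                      # gradient step on the bias
    return w, b

# Toy data: y = 2x + 0.5 plus noise (an assumption for illustration).
x = np.linspace(-1.0, 1.0, 100)
y = 2.0 * x + 0.5 + np.random.default_rng(0).normal(0.0, 0.1, 100)

# Same rule, same data, different "dispositions" (rate and start point)
# yield visibly different trajectories and end states.
for lr, seed in [(0.01, 1), (0.3, 2)]:
    print(sgd_fit(x, y, lr, seed))
```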

Classic interpretations of perfect humanness arising from the fallibility of humans (e.g., Clark 2012; Nisbett & Ross 1980; Parker & McKinney 1999; Wolfram 2002) appreciably impact the technical feasibility and socio-cultural significance of building and deploying human-emulating personified machines under both nonsocial and social constraints. Humans, like all sentient biological entities, fall within a fuzzy organizational and operational template that bounds emergence of phylogenic, ontogenic, and sociogenic individuality (cf. Fogel & Fogel 1995; Romanes 1884). Extreme selected variations in individuality, embodied here by modifiable personality and its link to mind, can greatly elevate or diminish human expression, depending on pressures of situational contexts. Examples may include the presence or absence of resoluteness, daring, agile deliberation, creativity, and meticulousness essential to achieving matchless, unconventional artistic and scientific accomplishments. They may also include, among further examples, the presence or absence of empathy, morality, or ethics in response to severe human plight and need. Regardless, to completely simulate the range of human intelligence, particularly the solitary-to-sociable and selfish-to-selfless tendencies critical for now-nascent social-like human-machine and machine-machine interactions, scientists and technologists must account for, and better understand, personality trait formation and development in autonomous artificial technologies (Cardon 2006; Clark 2012; 2015; Kaipa et al. 2010; McShea 2013). These kinds of undertakings will help yield desirable insights into the evolution of technology-augmented human nature and, perhaps more importantly, will inform best practices for establishing advisable failsafe contingencies against unwanted serendipitous or designed human-like machine behavior.
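One hedged way to picture the "fuzzy organizational and operational template" described above is a bounded trait vector sampled per agent. In the sketch below, the trait labels follow the Big Five-style traits discussed later in this commentary, but the numeric ranges, sampling scheme, and drift rule are illustrative assumptions rather than an established model.

```python
import numpy as np

# Hypothetical Big Five-style trait labels for an artificial agent.
TRAITS = ("conscientiousness", "openness", "emotional_stability",
          "agreeableness", "extraversion")

class ArtificialPersonality:
    """Traits sampled from a shared population template (a mean and a
    spread), then clipped to fuzzy bounds. All numbers are assumptions."""

    def __init__(self, seed, mean=0.5, spread=0.15, bounds=(0.0, 1.0)):
        rng = np.random.default_rng(seed)
        raw = rng.normal(mean, spread, len(TRAITS))
        self.traits = dict(zip(TRAITS, np.clip(raw, *bounds)))

    def drift(self, valence, rate=0.01):
        """Slowly reshape every trait after a valenced experience,
        never leaving the template's bounds."""
        for name in self.traits:
            self.traits[name] = float(np.clip(
                self.traits[name] + rate * valence, 0.0, 1.0))

# Two "individuals" drawn from the same template differ from the start
# and diverge further with experience.
a, b = ArtificialPersonality(seed=1), ArtificialPersonality(seed=2)
a.drift(valence=+1.0)
b.drift(valence=-1.0)
print(a.traits["extraversion"], b.traits["extraversion"])
```

The design choice worth noting is that individuality here is bounded, not unconstrained: the template admits non-normative variation while keeping every agent inside a common organizational envelope.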

Notably, besides their described usefulness for modeling intended artificial cognitive faculties, Lake et al.'s core ingredients provide systematic concepts and guidelines necessary to begin approximating human-like machine personalities, and to probe genuine ethological, ecological, and evolutionary consequences of those personalities for both humans and machines. However, similar reported strategies for machine architectures, algorithms, and performance demonstrate only marginal success when used as protocols to reach nearer cognitive-emotional humanness in trending social robot archetypes (Arbib & Fellous 2004; Asada 2015; Berdahl 2010; Di & Wu 2015; Han et al. 2013; Hiolle et al. 2014; Kaipa et al. 2010; McShea 2013; Read et al. 2010; Thomaz & Cakmak 2013; Wallach et al. 2010; Youyou et al. 2015), emphasizing a serious need for improved adaptive quasi-model-free/-based neural nets, trainable distributed cognition-emotion mapping, and artificial personality trait parameterization. The best findings from such work, although far from final reduction-to-practice, arguably involve the appearance of crude or primitive machine personalities and identities from socially learned intra-/interpersonal relationships possessing cognitive-emotional valences. Valence direction and magnitude often depend on the learner machine's disposition toward response priming/contagion, social facilitation, incentive motivation, and local/stimulus enhancement of observable demonstrator behavior (i.e., human, cohort-machine, and learner-machine behavior). The resulting self-/world discovery of the learner machine, analogous to healthy/diseased or normal/abnormal human phenomena acquired during early formative (neo)Piagetian cognitive-emotional periods (cf. Nisbett & Ross 1980; Parker & McKinney 1999; Zentall 2013), reciprocally shapes the potential humanness of reflexive/reflective machine actions through labile, interval-delimited, self-organizing traits consistent with natural human personalities, including, but not restricted to, conscientiousness, openness, emotional stability, agreeableness, and extraversion/introversion.
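A minimal sketch of how the valence-weighted social-learning dispositions named above might combine is given below. The four channel names follow the paragraph, but the averaging rule, the value ranges, and the example sensitivities are assumptions for illustration only, not a published model from the cited work.

```python
import numpy as np

# The four social-learning channels named in the paragraph above.
CHANNELS = ("response_priming", "social_facilitation",
            "incentive_motivation", "stimulus_enhancement")

def imitation_weight(disposition, valence):
    """Combine a learner machine's channel sensitivities (each in [0, 1])
    with the signed cognitive-emotional valence it assigns to observed
    demonstrator behavior (in [-1, 1]). Valence magnitude sets how
    strongly, and its sign whether, the behavior is copied or avoided."""
    gain = float(np.mean([disposition[c] for c in CHANNELS]))
    return gain * valence

learner = {"response_priming": 0.8, "social_facilitation": 0.6,
           "incentive_motivation": 0.9, "stimulus_enhancement": 0.4}
print(imitation_weight(learner, valence=+0.7))  # positive: imitate
print(imitation_weight(learner, valence=-0.7))  # negative: avoid
```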

Even simplistic artificial cognitive-emotional profiles and personalities thus effect varying control over acquisition and lean of machine domain-general/-specific knowledge, perception and expression of flat or excessive machine affect, and rationality and use of inferential machine attitudes/opinions/beliefs (Arbib & Fellous 2004; Asada 2015; Berdahl 2010; Cardon 2006; Davies 2016; Di & Wu 2015; Han et al. 2013; Hiolle et al. 2014; Kaipa et al. 2010; McShea 2013; Read et al. 2010; Wallach et al. 2010; Youyou et al. 2015). And, by favoring certain artificial personality traits, such as openness, a learner machine's active and passive pedagogical experiences may be radically directed by the quality of teacher-student rapport (e.g., Thomaz & Cakmak 2013), enabling opportunities for superior nurturing and growth of distinctive, well-adjusted, thoughtful machine behavior while, in part, restricting harmful rogue machine behavior caused by impoverished learning environments and predictable pathological Gödel-type incompleteness/inconsistency for axiomatic neuropsychological systems (cf. Clark & Hassert 2013). These more-or-less philosophical considerations, along with the merits of Lake et al.'s core ingredients for emerging artificial non-normative (or unique) personalities, will bear increasing technical and sociocultural relevance as the Human Brain Project, the Blue Brain Project, and related connectome missions drive imminent neuromorphic hardware research and development toward precise mimicry of configurable/computational soft-matter variations in human nervous systems (cf. Calimera et al. 2013).
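The openness-and-rapport gating of pedagogy suggested earlier in this paragraph can be caricatured in a few lines. The multiplicative gate, its constants, and the example values below are illustrative assumptions, not a mechanism reported in the cited literature.

```python
def effective_learning_rate(base_lr, openness, rapport):
    """Gate a learner machine's update size by its openness trait and by
    teacher-student rapport (both scalars in [0, 1]). Poor rapport halves
    the openness-scaled rate; perfect rapport leaves it intact."""
    return base_lr * openness * (0.5 + 0.5 * rapport)

# A closed, poorly mentored agent barely updates; an open, well-mentored
# one learns at nearly its base rate.
print(effective_learning_rate(0.1, openness=0.2, rapport=0.1))  # 0.011
print(effective_learning_rate(0.1, openness=0.9, rapport=0.9))  # 0.0855
```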

References

Arbib, M. A. & Fellous, J. M. (2004) Emotions: From brain to robot. Trends in Cognitive Sciences 8(12):554–61.
Asada, M. (2015) Development of artificial empathy. Neuroscience Research 90:41–50.
Bengio, Y. (2016) Machines who learn. Scientific American 314(6):46–51.
Berdahl, C. H. (2010) A neural network model of Borderline Personality Disorder. Neural Networks 23(2):177–88.
Bostrom, N. (2014) Superintelligence: Paths, dangers, strategies. Oxford University Press. ISBN 978-0199678112.
Briegel, H. J. (2012) On creative machines and the physical origins of freedom. Scientific Reports 2:522.
Calimera, A., Macii, E. & Poncino, M. (2013) The human brain project and neuromorphic computing. Functional Neurology 28(3):191–96.
Cardon, A. (2006) Artificial consciousness, artificial emotions, and autonomous robots. Cognitive Processing 7(4):245–67.
Clark, K. B. (2012) A statistical mechanics definition of insight. In: Computational intelligence, ed. Floares, A. G., pp. 139–62. Nova Science. ISBN 978-1-62081-901-2.
Clark, K. B. (2014) Basis for a neuronal version of Grover's quantum algorithm. Frontiers in Molecular Neuroscience 7:29.
Clark, K. B. (2015) Insight and analysis problem solving in microbes to machines. Progress in Biophysics and Molecular Biology 119:183–93.
Clark, K. B. (in press-a) Classical and quantum Hebbian learning in modeled cognitive processing. Frontiers in Psychology.
Clark, K. B. (in press-b) Neural field continuum limits and the partitioning of cognitive-emotional brain networks. Molecular and Cellular Neuroscience.
Clark, K. B. (in press-c) Psychometric "Turing test" of general intelligences in social robots. Information Sciences.
Clark, K. B. & Hassert, D. L. (2013) Undecidability and opacity of metacognition in animals and humans. Frontiers in Psychology 4:171.
Davies, J. (2016) Program good ethics into artificial intelligence. Nature 538(7625). Available at: http://www.nature.com/news/program-good-ethics-into-artificial-intelligence-1.20821.
Di, G. Q. & Wu, S. X. (2015) Emotion recognition from sound stimuli based on back-propagation neural networks and electroencephalograms. Journal of the Acoustical Society of America 138(2):994–1002.
Fogel, D. B. & Fogel, L. J. (1995) Evolution and computational intelligence. IEEE Transactions on Neural Networks 4:1938–41.
Fung, P. (2015) Robots with heart. Scientific American 313(5):60–63.
Han, M. J., Lin, C. H. & Song, K. T. (2013) Robotic emotional expression generation based on mood transition and personality model. IEEE Transactions on Cybernetics 43(4):1290–303.
Hiolle, A., Lewis, M. & Cañamero, L. (2014) Arousal regulation and affective adaptation to human responsiveness by a robot that explores and learns a novel environment. Frontiers in Neurorobotics 8:17.
Indiveri, G. & Liu, S.-C. (2015) Memory and information processing in neuromorphic systems. Proceedings of the IEEE 103(8):1379–97.
Kaipa, K. N., Bongard, J. C. & Meltzoff, A. N. (2010) Self discovery enables robot social cognition: Are you my teacher? Neural Networks 23(8–9):1113–24.
Lande, T. S., ed. (1998) Neuromorphic systems engineering: Neural networks in silicon. Kluwer International Series in Engineering and Computer Science, vol. 447. Kluwer Academic. ISBN 978-0-7923-8158-7.
McShea, D. W. (2013) Machine wanting. Studies in History and Philosophy of Biological and Biomedical Sciences 44(4, pt. B):679–87.
Meltzoff, A. N., Kuhl, P. K., Movellan, J. & Sejnowski, T. J. (2009) Foundations for a new science of learning. Science 325(5938):284–88.
Nisbett, R. E. & Ross, L. (1980) Human inference: Strategies and shortcomings of social judgment. Prentice-Hall. ISBN 0-13-445073-6.
Parker, S. T. & McKinney, M. L. (1999) Origins of intelligence: The evolution of cognitive development in monkeys, apes and humans. Johns Hopkins University Press. ISBN 0-8018-6012-1.
Read, S. J., Monroe, B. M., Brownstein, A. L., Yang, Y., Chopra, G. & Miller, L. C. (2010) A neural network model of the structure and dynamics of human personality. Psychological Review 117(1):61–92.
Romanes, G. J. (1884) Animal intelligence. Appleton.
Schuller, I. K. & Stevens, R. (2015) Neuromorphic computing: From materials to architectures. Report of a roundtable convened to consider neuromorphic computing basic research needs. Office of Science, U.S. Department of Energy.
Thomaz, A. L. & Cakmak, M. (2013) Active social learning in humans and robots. In: Social learning theory: Phylogenetic considerations across animal, plant, and microbial taxa, ed. Clark, K. B., pp. 113–28. Nova Science. ISBN 978-1-62618-268-4.
Wallach, W., Franklin, S. & Allen, C. (2010) A conceptual and computational model of moral decision making in human and artificial agents. Topics in Cognitive Science 2:454–85.
Weigmann, K. (2006) Robots emulating children. EMBO Reports 7(5):474–76.
Wolfram, S. (2002) A new kind of science. Wolfram Media. ISBN 1-57955-008-8.
Youyou, W., Kosinski, M. & Stillwell, D. (2015) Computer-based personality judgments are more accurate than those made by humans. Proceedings of the National Academy of Sciences of the United States of America 112(4):1036–40.
Zentall, T. R. (2013) Observational learning in animals. In: Social learning theory: Phylogenetic considerations across animal, plant, and microbial taxa, ed. Clark, K. B., pp. 3–33. Nova Science. ISBN 978-1-62618-268-4.