
Human-like machines: Transparency and comprehensibility

Published online by Cambridge University Press:  10 November 2017

Piotr M. Patrzyk
Affiliation:
Faculty of Business and Economics, University of Lausanne, Quartier UNIL-Dorigny, Internef, CH-1015 Lausanne, Switzerland. piotr.patrzyk@unil.ch
Daniela Link
Affiliation:
Faculty of Business and Economics, University of Lausanne, Quartier UNIL-Dorigny, Internef, CH-1015 Lausanne, Switzerland. daniela.link@unil.ch
Julian N. Marewski
Affiliation:
Faculty of Business and Economics, University of Lausanne, Quartier UNIL-Dorigny, Internef, CH-1015 Lausanne, Switzerland. julian.marewski@unil.ch

Abstract

Artificial intelligence algorithms seek inspiration from human cognitive systems in areas where humans outperform machines. But on what level should algorithms try to approximate human cognition? We argue that human-like machines should be designed to make decisions in transparent and comprehensible ways, which can be achieved by accurately mirroring human cognitive processes.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2017 

How to build human-like machines? We agree with the authors' assertion that “reverse engineering human intelligence can usefully inform artificial intelligence and machine learning” (sect. 1.1, para. 3), and in this commentary we offer some suggestions concerning the direction of future developments. Specifically, we posit that human-like machines should not only be built to match humans in performance, but also to be able to make decisions that are both transparent and comprehensible to humans.

First, we argue that human-like machines need to decide and act in transparent ways, such that humans can readily understand how their decisions are made (see Arnold & Scheutz 2016; Indurkhya & Misztal-Radecka 2016; Mittelstadt et al. 2016). The behavior of artificial agents should be predictable, and people interacting with them ought to be able to intuitively grasp why those machines decide and act the way they do (Malle & Scheutz 2014). This poses a unique challenge for designing algorithms.

In current neural networks, there is typically no intuitive explanation for why a network reached a particular decision given its inputs (Burrell 2016). Such networks represent statistical pattern recognition approaches that lack the ability to capture agent-specific information. Lake et al. acknowledge this problem and call for structured cognitive representations, which are required for classifying social situations. Specifically, the authors' proposal of an “intuitive psychology” is grounded in the naïve utility calculus framework (Jara-Ettinger et al. 2016). According to this argument, algorithms should attempt to build a causal understanding of observed situations by creating representations of agents who seek rewards and avoid costs in a rational way.
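To make this kind of inference concrete, the following sketch in Python shows how an observer might invert a cost-benefit model of rational action to infer an agent's goal. It is a minimal illustration only, under our own assumptions: the function name, the softmax choice rule, and the hypothetical goals, rewards, and costs are ours for illustration and are not taken from Lake et al. or the naïve utility calculus papers.

import math

def infer_goal(observed_action, candidate_goals, rewards, costs, beta=2.0):
    # Returns P(goal | observed action), assuming the agent softly
    # maximizes utility = reward - cost (softmax choice rule).
    posteriors = {}
    for goal in candidate_goals:
        # Utility of each available action if the agent held this goal.
        utilities = {a: rewards[goal].get(a, 0.0) - costs[a] for a in costs}
        # Higher-utility actions are assumed to be chosen more often.
        z = sum(math.exp(beta * u) for u in utilities.values())
        posteriors[goal] = math.exp(beta * utilities[observed_action]) / z
    total = sum(posteriors.values())  # uniform prior over candidate goals
    return {goal: p / total for goal, p in posteriors.items()}

# Hypothetical scene: an agent takes a costly detour to reach an apple.
costs = {"short_path_to_orange": 1.0, "long_path_to_apple": 3.0}
rewards = {"wants_apple": {"long_path_to_apple": 5.0},
           "wants_orange": {"short_path_to_orange": 5.0}}
print(infer_goal("long_path_to_apple", ["wants_apple", "wants_orange"], rewards, costs))
# The costly action is best explained by the goal "wants_apple".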

Putting aside extreme examples (e.g., killer robots and autonomous vehicles), let us look at the more ordinary artificial intelligence task of scene understanding. Cost-benefit–based inferences about situations such as the one depicted in the leftmost picture in Figure 6 of Lake et al. will likely conclude that one agent has a desire to kill the other, and that he or she assigns a higher value to the state of the other being dead than alive. Although we do not argue this is incorrect, a human-like classification of such a scene would rather reach the conclusion that the scene depicts either a legal execution or a murder. Which alternative is returned depends on the viewer's inferences about agent-specific characteristics. Making such inferences requires going beyond the attribution of simple goals – one needs to make assumptions about the roles and obligations of different agents. In the discussed example, although both a sheriff and a contract killer would have the same goal of ending another person's life, the difference in their identities would change the human interpretation in a significant way.

We welcome the applicability of naïve utility calculus for inferring simple information concerning agent-specific variables, such as goals and competence level. At the same time, however, we point out some caveats inherent to this approach. Humans interacting with the system will likely expect a justification of why it has picked one interpretation rather than another, and algorithm designers might want to take this into consideration.

This leads us to our second point. Models of cognition can come in at least two flavors: (1) as-if models, which only aspire to achieve human-like performance on a specific task (e.g., classifying images), and (2) process models, which seek both to achieve human-like performance and to accurately reproduce the cognitive operations humans actually perform (e.g., classifying images by combining pieces of information in the way humans do). We believe that the task of creating human-like machines ought to be grounded in existing process models of cognition. Indeed, investigating human information processing is helpful for ensuring that generated decisions are comprehensible (i.e., that they follow human reasoning patterns).

Why is it important that machine decision mechanisms, in addition to being transparent, actually mirror human cognitive processes in a comprehensible way? In the social world, people often judge agents not only according to the agents' final decisions, but also according to the process by which they have arrived at them (e.g., Hoffman et al. 2015). It has been argued that the process of human decision making does not typically involve rational utility maximization (e.g., Hertwig & Herzog 2009). This, in turn, influences how we expect other people to make decisions (Bennis et al. 2010). To the extent that one cares about the social applications of algorithms and their interactions with people, considerations about the transparency and comprehensibility of decisions become critical.

Although as-if models relying on cost-benefit analysis might be reasonably transparent and comprehensible, for example, when problems are simple and do not involve moral considerations, this might not always be the case. Algorithm designers need to ensure that the underlying process will be acceptable to the human observer. What research can be drawn upon to help build transparent and comprehensible mechanisms?

We argue that one source of inspiration might be the research on fast-and-frugal heuristics (Gigerenzer & Gaissmaier 2011). Simple strategies such as fast-and-frugal trees (e.g., Hafenbrädl et al. 2016) might be well suited to providing justifications for decisions made in social situations. Heuristics not only are meant to capture ecologically rational human decision mechanisms (see Todd & Gigerenzer 2007), but also are transparent and comprehensible (see Gigerenzer 2001). Indeed, these heuristics possess a clear structure composed of simple if-then rules specifying (1) how information is searched within the search space, (2) when information search is stopped, and (3) how the final decision is made based upon the information acquired (Gigerenzer & Gaissmaier 2011).
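To illustrate what such a transparent structure could look like in code, here is a minimal sketch of a fast-and-frugal tree, in Python, for the execution-versus-murder scene discussed above. The cues, their ordering, and the exits are hypothetical choices of ours for illustration; they are not taken from the cited heuristics literature.

def classify_scene(cues: dict) -> str:
    # Search rule: inspect one cue at a time, in a fixed order.
    # Stopping rule: every cue has an exit that can end search immediately.
    # Decision rule: the exit taken determines the classification.
    if not cues.get("agent_has_legal_authority", False):
        return "murder"            # first exit: no legal role, stop and decide
    if not cues.get("lawful_procedure_followed", False):
        return "murder"            # second exit: no lawful procedure, stop and decide
    return "legal execution"       # final exit: all cue checks passed

# Usage: the sequence of cue checks doubles as a human-readable justification.
print(classify_scene({"agent_has_legal_authority": True,
                      "lawful_procedure_followed": False}))   # -> murder

Because each step is an explicit if-then rule, the path taken through the tree can be read back to a human observer as the reason for the decision, which is the kind of transparency and comprehensibility we argue for.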

These simple decision rules have been used to model and aid human decisions in numerous tasks with possible moral implications, for example, in medical diagnosis (Hafenbrädl et al. 2016) or in the classification of oncoming traffic at military checkpoints as hostile or friendly (Keller & Katsikopoulos 2016). We propose that the same heuristic principles might be useful for engineering autonomous agents that behave in a human-like way.

ACKNOWLEDGMENTS

D.L. and J.N.M. acknowledge the support received from the Swiss National Science Foundation (Grants 144413 and 146702).

References

Arnold, T. & Scheutz, M. (2016) Against the moral Turing test: Accountable design and the moral reasoning of autonomous systems. Ethics and Information Technology 18(2):103–15. doi:10.1007/s10676-016-9389-x.
Bennis, W. M., Medin, D. L. & Bartels, D. M. (2010) The costs and benefits of calculation and moral rules. Perspectives on Psychological Science 5(2):187–202. doi:10.1177/1745691610362354.
Burrell, J. (2016) How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society 3(1):1–12. doi:10.1177/2053951715622512.
Gigerenzer, G. (2001) The adaptive toolbox. In: Bounded rationality: The adaptive toolbox, ed. Gigerenzer, G. & Selten, R., pp. 37–50. MIT Press.
Gigerenzer, G. & Gaissmaier, W. (2011) Heuristic decision making. Annual Review of Psychology 62:451–82. doi:10.1146/annurev-psych-120709-145346.
Hafenbrädl, S., Waeger, D., Marewski, J. N. & Gigerenzer, G. (2016) Applied decision making with fast-and-frugal heuristics. Journal of Applied Research in Memory and Cognition 5(2):215–31. doi:10.1016/j.jarmac.2016.04.011.
Hertwig, R. & Herzog, S. M. (2009) Fast and frugal heuristics: Tools of social rationality. Social Cognition 27(5):661–98. doi:10.1521/soco.2009.27.5.661.
Hoffman, M., Yoeli, E. & Nowak, M. A. (2015) Cooperate without looking: Why we care what people think and not just what they do. Proceedings of the National Academy of Sciences of the United States of America 112(6):1727–32. doi:10.1073/pnas.1417904112.
Indurkhya, B. & Misztal-Radecka, J. (2016) Incorporating human dimension in autonomous decision-making on moral and ethical issues. In: Proceedings of the AAAI Spring Symposium: Ethical and Moral Considerations in Non-human Agents, Palo Alto, CA, ed. Indurkhya, B. & Stojanov, G. AAAI Press.
Jara-Ettinger, J., Gweon, H., Schulz, L. E. & Tenenbaum, J. B. (2016) The naïve utility calculus: Computational principles underlying commonsense psychology. Trends in Cognitive Sciences 20(8):589–604. doi:10.1016/j.tics.2016.05.011.
Keller, N. & Katsikopoulos, K. V. (2016) On the role of psychological heuristics in operational research; and a demonstration in military stability operations. European Journal of Operational Research 249(3):1063–73. doi:10.1016/j.ejor.2015.07.023.
Malle, B. F. & Scheutz, M. (2014) Moral competence in social robots. In: Proceedings of the 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering. IEEE. doi:10.1109/ETHICS.2014.6893446.
Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S. & Floridi, L. (2016) The ethics of algorithms: Mapping the debate. Big Data & Society 3(2):1–21. doi:10.1177/2053951716679679.
Todd, P. M. & Gigerenzer, G. (2007) Environments that make us smart: Ecological rationality. Current Directions in Psychological Science 16(3):167–71. doi:10.1111/j.1467-8721.2007.00497.x.