
Optimality is both elusive and necessary

Published online by Cambridge University Press:  10 January 2019

Joachim Meyer*
Affiliation:
Department of Industrial Engineering, Tel Aviv University, Tel Aviv 6997801, Israel. jmeyer@tau.ac.il

Abstract

Optimality of any decision, including perceptual decisions, depends on the criteria used to evaluate outcomes and on the assumptions about available alternatives and information. In research settings, these are often difficult to define, and therefore, claims about optimality are equivocal. However, optimality is important in applied settings when evaluating, for example, the detection of abnormalities in medical images.

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2018

A long history of research supports the notion that human decisions may not be optimal. Many of the early studies dealt with probability learning and “probability matching”: decision makers tend to choose each alternative with a frequency that matches its payoff probability (e.g., when an alternative yields a positive payoff 70% of the time, participants tend to choose it on 70% of the trials). The optimal strategy in such experiments, however, is always to choose the alternative with the higher payoff probability. Referring to probability matching, Kenneth Arrow (1958, p. 14) stated 60 years ago that “the remarkable thing about this is that the asymptotic behavior of the individual, even after an indefinitely large amount of learning, is not the optimal behavior.”
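The gap between matching and maximizing in a stationary environment can be illustrated with a short simulation (a sketch only; the 70% payoff probability follows the example above, and the function name is illustrative):

```python
import random

random.seed(0)  # fixed seed for a reproducible sketch

def mean_reward(p_choose_a, p_payoff_a=0.7, trials=10_000):
    """Simulate a binary choice task: option A pays off with probability
    p_payoff_a, option B with probability 1 - p_payoff_a. The decision
    maker chooses A on a proportion p_choose_a of trials."""
    wins = 0
    for _ in range(trials):
        if random.random() < p_choose_a:              # chose A
            wins += random.random() < p_payoff_a
        else:                                         # chose B
            wins += random.random() < 1 - p_payoff_a
    return wins / trials

matching = mean_reward(p_choose_a=0.7)    # probability matching
maximizing = mean_reward(p_choose_a=1.0)  # always choose the better option
# Matching earns about 0.7*0.7 + 0.3*0.3 = 0.58 per trial;
# maximizing earns about 0.70 per trial.
```

In a stationary environment with no competition, maximizing strictly dominates matching, which is exactly the sense in which the early studies called matching non-optimal.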

However, probability matching may actually be optimal. This is the case when it is used in an environment that can change and in which there is competition for resources (Gallistel 2005), a description that probably characterizes the vast majority of environments outside the experimental psychology lab. Therefore, probability matching may be the optimal strategy in most settings, and using it in a lab experiment does not imply that human decisions are non-optimal. Hence, we need to define the criteria according to which a decision is evaluated: a decision may be optimal under some assumptions and non-optimal under others.

The optimality of decisions depends not only on the assumptions on which evaluations are based. Optimality is also always judged from a particular point of view. Perceptual decisions may seem non-optimal when evaluated from a “god's eye view,” knowing the true probabilities of events. However, as Rahnev & Denison (R&D) point out, prior expectations affect judgments. If a person believes certain events are more likely than others and bases her decisions on this belief, the decisions may very well be optimal, considering the information that is available when the decision is made.

Similarly, the decision maker's experience in the experiment determines likelihood estimates. In binary classification experiments, estimates of event probabilities are based on binomially distributed counts of observations. As long as the number of observations is relatively small, assessments of the probabilities of events may differ greatly from the “true” probabilities. Furthermore, even if the person observed a large number of events, the assessed likelihood depends on the memory for the events (the relative number of true and false positive and true and false negative classifications). Not all of these events may be equally salient in memory, leading to possibly biased likelihood estimates. An optimal response to biased likelihood estimates will seem non-optimal.

Hence, the notion of optimality is not very informative. Instead, an attempt should be made to model the cognitive processes that lead to the perceptual decisions, in line with R&D's suggestions. Such models should consider the context in which decisions are made, the person's prior expectations, and the events the person encounters. They should also take into account the properties of the memories these events leave. The “optimality” criterion can serve as a benchmark against which decisions are compared, but unless we believe people have some supernatural capacity for clairvoyance, we should not expect them to reach it.

However, in some conditions, optimality is critical, and the criteria for optimality are relatively clearly defined. This is often the case outside the laboratory, as when clinicians make decisions regarding the existence of malignancies in medical images. The optimality of decisions in such tasks depends on the costs and benefits of different outcomes and the likelihood of malignancies in a population. If such decisions deviate systematically from optimality, steps can be taken to reduce the discrepancies (e.g., provide training, change procedures). Alternatively, it may be possible to provide human decision makers with aids that help in the detection process (based on image analyses, etc.). If such aids are available, the question of whether people assign optimal weights to the information from these aids is of major importance. In fact, it turns out that people often assign insufficient weight to valid aids and over-rely on their own perceptions (as, e.g., in Meyer et al. 2014). If people clearly deviate from optimality, and the perceptual decisions can be made without involving humans (e.g., by employing some computer vision and artificial intelligence [AI] mechanisms), then perhaps we should not include people in these tasks. Hence, the optimality of human decisions can be a factor in the design and evaluation of human-computer systems.
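In such applied settings, the benchmark is well defined. In signal detection terms, the expected-value-maximizing likelihood-ratio criterion is β = [P(noise)/P(signal)] × [(value of correct rejection + cost of false alarm) / (value of hit + cost of miss)]. A sketch with purely illustrative payoffs for a screening task (all numbers here are assumptions, not estimates from any study):

```python
def optimal_beta(p_signal, v_hit, c_miss, v_cr, c_fa):
    """Likelihood-ratio criterion that maximizes expected value in a
    binary signal detection task. Costs are given as positive magnitudes."""
    p_noise = 1 - p_signal
    return (p_noise / p_signal) * ((v_cr + c_fa) / (v_hit + c_miss))

# A rare abnormality (1% prevalence) with misses far costlier than
# false alarms: rarity still pushes the optimal criterion above 1,
# i.e., toward responding "normal."
beta = optimal_beta(p_signal=0.01, v_hit=100, c_miss=100, v_cr=1, c_fa=5)
# beta = (0.99 / 0.01) * (6 / 200) = 2.97
```

Because prevalence and payoffs are explicit here, systematic deviations from this criterion can be measured, which is what makes optimality a usable standard in such settings.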

To conclude, in a research context one may aim to predict human decisions from a detailed understanding of the evolving situation in which the decisions are made, without committing oneself to the elusive notion of optimality. In applied settings, it is important to analyze performance and to compare it to optimality criteria when evaluating a system that is used to achieve some goal. These two statements do not contradict each other. To achieve both goals, we should develop models of the task, of the way the human performs the task, and of the implications this task performance may have. These models can help us to understand human decisions, whether these are optimal or not. The models can also serve to predict the overall performance of a system in which humans use technology to perform some task involving perceptual decisions.

References

Arrow, K. J. (1958) Utilities, attitudes, choices: A review note. Econometrica 26:1–23.
Gallistel, C. R. (2005) Deconstructing the law of effect. Games and Economic Behavior 52(2):410–23.
Meyer, J., Wiczorek, R. & Günzler, T. (2014) Measures of reliance and compliance in aided visual scanning. Human Factors 56(5):840–49.