
Descending Marr's levels: Standard observers are no panacea

Published online by Cambridge University Press:  10 January 2019

Carlos Zednik
Affiliation:
Otto-von-Guericke-Universität Magdeburg, D-39016 Magdeburg, Germany. carlos.zednik@ovgu.de https://sites.google.com/site/czednik/
Frank Jäkel
Affiliation:
Technische Universität Darmstadt, Centre for Cognitive Science, D-64283 Darmstadt, Germany. jaekel@psychologie.tu-darmstadt.de

Abstract

According to Marr, explanations of perceptual behavior should address multiple levels of analysis. Rahnev & Denison (R&D) are perhaps overly dismissive of optimality considerations at the computational level. Also, an exclusive reliance on standard observer models may cause neglect of many other plausible hypotheses at the algorithmic level. Therefore, as far as explanation goes, standard observer modeling is no panacea.

Type: Open Peer Commentary
Copyright © Cambridge University Press 2018

Rahnev & Denison (R&D) argue that “we should abandon any emphasis on optimality or suboptimality and return to building a science of perception that attempts to account for all types of behavior” (sect. 1, para. 4). We agree that the current fixation on optimality is unhealthy. At the same time, however, we question whether standard observers are really sufficient to “account for” perceptual behavior. Because they cut across different tasks, they may provide some much-needed unification (Colombo & Hartmann 2017). Nevertheless, they are by themselves unlikely to constitute full-fledged explanations. Following Marr (1982), explanations of perceptual behavior should answer questions at three distinct levels of analysis. Alas, it is not clear how standard observers help descend Marr's levels from the computational level to the algorithmic and implementational levels.

At the computational level, investigators ask “what” a perceptual system is doing and “why.” The popularity of ideal observers (Swets et al. 1961) stems in part from answering both of these questions. Because ideal observer models are tweaked to fit behavioral data, they provide mathematical descriptions of “what” a perceptual system is doing. They also answer questions about “why”: A perceptual system behaves as it does because that behavior is optimal for the task (Bechtel & Shagrir 2015).
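
To make this concrete, the following is a minimal sketch of an ideal observer for an equal-variance Gaussian detection task. All numerical values (sensitivity, prior, trial count) are illustrative assumptions, not drawn from the target article; the point is only that the optimal decision rule follows directly from the task description, which is what licenses the “why” answer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Equal-variance Gaussian detection task (illustrative values):
# noise trials ~ N(0, 1), signal trials ~ N(d_prime, 1).
d_prime, p_signal, n_trials = 1.5, 0.5, 100_000

is_signal = rng.random(n_trials) < p_signal
x = rng.normal(is_signal * d_prime, 1.0)  # one noisy observation per trial

# The ideal observer responds "signal" whenever the posterior favors it,
# i.e. when the log-likelihood ratio exceeds log((1 - p) / p).
llr = d_prime * x - d_prime**2 / 2        # log N(x; d', 1) - log N(x; 0, 1)
respond_signal = llr > np.log((1 - p_signal) / p_signal)

accuracy = np.mean(respond_signal == is_signal)
```

With equal priors the rule reduces to a criterion at d′/2; changing the prior or the payoffs moves the criterion in a principled way. In this sense the model simultaneously describes “what” is computed (a likelihood-ratio comparison) and “why” (no other rule does better on this task).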

Like ideal observers, standard observers address “what” questions at the computational level by fitting behavioral data. Unlike ideal observer models, however, which are often criticized for failing to address questions below the computational level (Jones & Love 2011), R&D's standard observer models also address “how” questions at the algorithmic level. Many (but not all) of the hypotheses in Table 1 of the target article emphasize algorithmic-level features such as capacity limitations, imprecisions, ignorance, or the inability to employ complex decision rules. These algorithmic-level aspects are easily accommodated once optimality is given up. In other words, R&D trade in the ability to answer questions about “why” for an improved ability to answer questions about “how.”

We applaud this shift in emphasis from “why” to “how.” However, we feel that (a) “why” questions should not be dismissed quite so quickly, and that (b) properly answering “how” questions may require taking into account hypotheses that are unlikely to be considered within the standard observer approach.

Regarding (a), R&D's dismissive attitude toward optimality is understandable insofar as the explanatory value of “why” questions remains unclear (Danks 2008; but cf. Shagrir 2010). Nevertheless, such questions can still have pragmatic import; considering what a system is supposed to be doing may lead to an improved understanding of what it is actually doing. R&D admit as much in section 4.2 but do not go far enough. Many historical attempts to uncover mechanisms in biology and neuroscience begin by specifying these mechanisms’ roles in the containing environment: The heart is viewed as a pump for the circulatory system (Bechtel 2009), and dopamine is known to contribute to the regulation of emotions (Craver 2013). In this vein, Swets et al. (1961, p. 311) argue that ideal observers should be used not only to describe optimal behavior, but also as a “convenient base from which to explore the complex operations of a real organism.” In line with this view, we believe that perceptual scientists may productively tweak an ideal observer's optimal solution so as to eventually arrive at an organism's actual solution (see also Zednik & Jäkel 2016). Hence, although we agree that it is a mistake to rely too heavily on the unclear explanatory value of optimality considerations, we believe that it would be a mistake to dismiss these considerations altogether.
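
The “convenient base” strategy can be sketched in code: start from an ideal observer for an equal-variance Gaussian detection task and add psychologically motivated departures as free parameters to be fit to data. The particular departures chosen here (late internal noise and a conservative criterion shift) and all numerical values are illustrative assumptions on our part, not claims about any specific organism.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ideal-observer baseline: noise ~ N(0, 1), signal ~ N(d_prime, 1),
# equal priors, optimal criterion at d_prime / 2 (illustrative values).
d_prime, n_trials = 1.5, 100_000
is_signal = rng.random(n_trials) < 0.5
x = rng.normal(is_signal * d_prime, 1.0)

def proportion_correct(internal_noise, criterion_shift):
    """Accuracy of the baseline observer degraded by two tweaks:
    late internal noise on the decision variable, and a shift of the
    criterion away from its optimal location."""
    decision_var = x + rng.normal(0.0, internal_noise, n_trials)
    respond_signal = decision_var > d_prime / 2 + criterion_shift
    return np.mean(respond_signal == is_signal)

ideal = proportion_correct(0.0, 0.0)    # the optimal solution
tweaked = proportion_correct(0.8, 0.4)  # one candidate "actual" solution
```

Fitting `internal_noise` and `criterion_shift` to behavioral data would turn the normative model into a descriptive one, while the ideal observer continues to serve as the base from which the organism's solution is explored.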

Regarding (b), more should be said about the transition from “what” and “why” questions at the computational level to “how” questions at the algorithmic level. We have previously argued that Marr's hierarchy can be descended by applying heuristic strategies to identify candidate hypotheses at lower levels of analysis (Zednik & Jäkel 2014; 2016). Many of the hypotheses summarized in Table 1 result from the “push-down” and “plausible-algorithms” heuristics: Whereas the former involves hypothesizing that an ideal observer's computational-level structure reflects an algorithmic-level description of the underlying mechanism, the latter involves adapting this description according to established psychological principles about, for example, capacity limitations. Additionally, R&D's plea for standard observers that can unify models across different tasks attaches great importance to what we have called the “unification” heuristic. Many other useful heuristics are not considered in the target article, however. In particular, some of the most promising recent work is driven by the “tools-to-theories” heuristic (cf. Gigerenzer 1991), in which algorithms developed in, for example, machine learning and Bayesian statistics are co-opted as algorithmic-level hypotheses for explaining how real organisms approximate (or fail to approximate) ideal observers. For example, Sanborn et al. (2010) suggest that particle filters ‒ a class of algorithms for approximating Bayesian inference ‒ accurately describe the algorithms that humans deploy to learn categories. Interestingly, these algorithms approximate priors and posteriors through samples and thereby suggest very different components and processes than the original ideal observers. Hence, whereas developing standard observers may be one viable way of addressing “how” questions at the algorithmic level, other approaches may lead to different answers that also merit consideration.
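
For readers unfamiliar with the technique, the following is a minimal bootstrap particle filter on a toy tracking problem (a slowly drifting quantity observed with Gaussian noise). It is not a reconstruction of Sanborn et al.'s category-learning models, and all parameter values are illustrative; it only shows the general idea that makes particle filters attractive as algorithmic-level hypotheses: the posterior is represented by a finite set of samples that are propagated, weighted by the likelihood of each new observation, and resampled.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: a random-walk state (process sd q) observed with
# Gaussian noise (sd r). Illustrative parameter values throughout.
q, r, T, n_particles = 0.1, 1.0, 200, 2_000
true_state = np.cumsum(rng.normal(0.0, q, T))
obs = true_state + rng.normal(0.0, r, T)

particles = np.zeros(n_particles)  # initial hypotheses about the state
estimates = []
for y in obs:
    particles = particles + rng.normal(0.0, q, n_particles)  # propagate
    w = np.exp(-0.5 * ((y - particles) / r) ** 2)            # likelihood weights
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)          # resample
    particles = particles[idx]
    estimates.append(particles.mean())  # posterior-mean estimate from samples

rmse = np.sqrt(np.mean((np.array(estimates) - true_state) ** 2))
```

The sample-based posterior tracks the state far better than the raw observations do, yet the components this algorithm posits (a finite pool of hypotheses, stochastic resampling) look nothing like the closed-form computations of the original ideal observers ‒ which is precisely why such hypotheses can diverge from those generated within the standard observer approach.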

In summary, although we agree that perceptual scientists should in fact shift from questions about “what” and “why” to questions about “how,” we warn against thinking of the standard observer framework as a panacea. For one, “why” questions may continue to play an important role in the process of scientific discovery at the computational level and should not be dismissed prematurely. For another, although standard observers may be one promising way to answer “how” questions at the algorithmic level, other approaches might yield diverging and even incompatible answers. Finally, very little has yet been said about “where” questions at the implementational level (Stüttgen et al. 2011; Zednik 2017). Therefore, although standard observer models may play an important role in explanations of perceptual behavior, until we have satisfactory explanations on all three of Marr's levels, we should be patient and let different research strategies run their course.

References

Bechtel, W. (2009) Looking down, around, and up: Mechanistic explanation in psychology. Philosophical Psychology 22:543–64.
Bechtel, W. & Shagrir, O. (2015) The non-redundant contributions of Marr's three levels of analysis for explaining information-processing mechanisms. Topics in Cognitive Science 7(2):312–22.
Colombo, M. & Hartmann, S. (2017) Bayesian cognitive science, unification, and explanation. British Journal for the Philosophy of Science 68(2):451–84.
Craver, C. F. (2013) Functions and mechanisms: A perspectivalist view. In: Functions: Selection and mechanisms, ed. Huneman, P., pp. 133–58. Springer.
Danks, D. (2008) Rational analyses, instrumentalism, and implementations. In: The probabilistic mind: Prospects for rational models of cognition, ed. Chater, N. & Oaksford, M., pp. 59–75. Oxford University Press.
Gigerenzer, G. (1991) From tools to theories: A heuristic of discovery in cognitive psychology. Psychological Review 98(2):254–67.
Jones, M. & Love, B. C. (2011) Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences 34(4):169–88. Available at: http://www.journals.cambridge.org/abstract_S0140525X10003134.
Marr, D. (1982) Vision: A computational investigation into the human representation and processing of visual information. W. H. Freeman.
Sanborn, A. N., Griffiths, T. L. & Navarro, D. J. (2010) Rational approximations to rational models: Alternative algorithms for category learning. Psychological Review 117(4):1144–67.
Shagrir, O. (2010) Marr on computational-level theories. Philosophy of Science 77(4):477–500.
Stüttgen, M. O., Schwarz, C. & Jäkel, F. (2011) Mapping spikes to sensations. Frontiers in Neuroscience 5:125.
Swets, J. A., Tanner, W. P. & Birdsall, T. G. (1961) Decision processes in perception. Psychological Review 68(5):301–40. Available at: http://www.ncbi.nlm.nih.gov/pubmed/13774292.
Zednik, C. (2017) Mechanisms in cognitive science. In: The Routledge handbook of mechanisms and mechanical philosophy, ed. Glennan, S. & Illari, P., pp. 389–400. Routledge.
Zednik, C. & Jäkel, F. (2014) How does Bayesian reverse-engineering work? In: Proceedings of the 36th Annual Conference of the Cognitive Science Society, ed. Bello, P., Guarini, M., McShane, M. & Scassellati, B., pp. 666–71. Cognitive Science Society.
Zednik, C. & Jäkel, F. (2016) Bayesian reverse-engineering considered as a research strategy for cognitive science. Synthese 193:3951–85.