Bermúdez (Reference Bermúdez2020; target article) makes persuasive arguments against taking extensionality (“irrelevance of an agent's representation of options to choices among them”) to be an inviolable part of rationality – and highlights the “rational” use of different representations of “the same” option space to describe and prescribe (human) action. Specific frames are useful for selecting among equilibria in competitive games. The existence of an equilibrium does not entail knowledge of how one gets there, let alone how one gets there quickly. Can we proceed beyond an acceptance of reasonable violations of extensionality to study the degree to which and probability that any particular frame is useful in a context?
Think of a frame F as a representation of a state of affairs that generates a decisive reason R for undertaking a particular action A in a specific context – a reason that functions as a cause for taking the action, in the counterfactual sense: If R were not true or valid, then action A would not be undertaken. An intuitive way to measure a frame's usefulness is to ask: How quickly does it produce such a reason? – or – How much thinking is required to generate a reason R for acting from a frame F that structures the representation of the facts relevant to a situation? The computational complexity (Cook, Reference Cook1971; Papadimitriou, Reference Papadimitriou1994) of calculating R from F can operationalize the “cost of rationalizable action” for a frame by measuring the dependence of the number of operations required to get from F to R on the number of variables picked out by F as relevant. Also, how much closer to R do we get with every operation that proceeds from F as a starting point – how much information per unit of calculation do we get as we think our way from F to R?
Informational gain and computational cost. Take the frame (F) to be “competitive game with perfect information, N players, and M strategies each”: ‘being a strategy in the (right) Nash equilibrium NE of strategies most likely to be selected by all N − 1 others’ is the decisive reason (R) the frame can supply – upon due calculation of NE. How difficult is it to get from F to R? In the worst case, the number of operations required to compute a Nash equilibrium grows exponentially in the number of players and strategies (Daskalakis, Reference Daskalakis2008). The cost may be acceptable for the (infrequently encountered) 2-player, 2-strategy games used to teach and talk about theoretical games – games in which all agents maximize their own utilities, know the strategies and payoffs of all other players, know that all others have the same knowledge of everyone's utility–strategy profiles, are capable of the calculations required to get to (the right) equilibrium, and are rational. But in situations involving many players and many strategies (more frequently encountered in practice) that are not susceptible to a shortcut, the cost of getting from F to R may be prohibitive or crippling.
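The exponential cost of getting from F to R can be made concrete with a minimal sketch (an illustration, not taken from the target article): even the naive search for pure-strategy equilibria must visit every one of the M^N strategy profiles the frame makes relevant. The function name and payoff encoding below are illustrative.

```python
import itertools

def pure_nash_equilibria(n_players, n_strategies, payoff):
    """Brute-force search for pure-strategy Nash equilibria.

    payoff(profile, i) returns player i's payoff under `profile`, a
    tuple giving each player's chosen strategy. The search enumerates
    all n_strategies ** n_players profiles - the source of the
    exponential cost of getting from frame F to reason R.
    """
    equilibria = []
    for profile in itertools.product(range(n_strategies), repeat=n_players):
        # A profile is an equilibrium if no player gains by deviating.
        if all(
            payoff(profile, i) >= payoff(profile[:i] + (s,) + profile[i + 1:], i)
            for i in range(n_players)
            for s in range(n_strategies)
        ):
            equilibria.append(profile)
    return equilibria

# A 2-player, 2-strategy coordination game: matching pays 1, mismatching 0.
coord = lambda profile, i: 1 if profile[0] == profile[1] else 0
print(pure_nash_equilibria(2, 2, coord))  # → [(0, 0), (1, 1)]
```

For the 2-player, 2-strategy case only 4 profiles are checked; at 6 players with 5 strategies each the same loop already visits 15,625 profiles, which is the sense in which the frame's cost scales with the variables it picks out as relevant.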
The benefit of “thinking one step further” down the path from F to R may not be monotonically increasing in the number of calculative steps performed (Moldoveanu, Reference Moldoveanu2009). It depends on the frames used by other interactants and on the logical depth to which they reason. When the computational capacity or computational cost–benefit profiles of other players are such that they do not think their way(s) to any equilibrium, the use of frames that address the computational prowess of other players via “cognitive hierarchies” (“savvy strategists,” “neophytes”) can help create more accurate representations (Ho, Camerer, & Chong, Reference Ho, Camerer and Chong2004).
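A level-k sketch of such a cognitive hierarchy (an illustrative simplification of the Ho, Camerer, & Chong model, with assumed names and a p-beauty-contest example) shows how the depth of others' reasoning bounds the value of one's own extra calculative steps.

```python
def level_k_action(k, best_response, level0_action):
    """Level-k reasoning: a level-0 player takes a default action; a
    level-k player best-responds to a level-(k-1) player's action."""
    action = level0_action
    for _ in range(k):
        action = best_response(action)
    return action

# p-beauty contest with p = 2/3: best response to a guess a is 2a/3.
br = lambda a: round(2 * a / 3)
print([level_k_action(k, br, 50) for k in range(4)])  # → [50, 33, 22, 15]
```

If most other players stop at level 1 or 2, reasoning to level 10 (let alone to the equilibrium guess of 0) buys no further accuracy – the extra steps from F toward R yield no additional information about what others will actually do.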
“I–we” re-framing of games featuring interactions among joint and individual payoffs (Bacharach, Reference Bacharach2006) is a way of simplifying the process of generating a reason for acting in a certain way from a frame for representing a social situation by helping one select from a number of equilibria via priming a particular response. To act consistently, one would have to assume the prime works in the same way for the others, so, at least some interactive reasoning is required. But “game models” of interactive decisions may be replaced by “social frames” (Fiske, Reference Fiske1992): In an “authority ranking” frame in which A sees her and B's actions as confirming or disconfirming A's (respectively, B's) power, rank, or prestige in the eyes of N observers watching the interaction, a(n easily justifiable) option – for which A has a decisive reason predicated on the use of the frame – is to take the action most likely to “put B in his place.” This may be to act so as to upend the rules of the game or to walk away from it.
Efficiency can trump representational accuracy in individual deliberations. Take the case of an individual deciding among outcomes that must fulfill many objectives (Keeney & Raiffa, Reference Keeney and Raiffa1993). If emotions truly influence decisions in ways that vary with at least six different attributes of an option (uncertainty, pleasantness, attentional activity, anticipated effort, controllability by the self of, and responsibility of others for, the outcome) (Lerner, Li, Valdesolo, & Kassam, Reference Lerner, Li, Valdesolo and Kassam2015), then that supplies an example of a six-dimensional objective function. Given the computational complexity of multi-objective optimization, the use of context-adaptive frames that prime a small subset of the attributes to induce the quick generation of a reason makes sense, given time constraints: One tries to avoid being “caught in mid-thought” by the hazardous flow of life (Moldoveanu, Reference Moldoveanu2011).
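One way to picture the saving (a toy sketch under assumed attribute names and weights, not a model from the cited work) is as a weighted evaluation over the six attributes, where a primed frame simply drops all but a small subset of the weights.

```python
def score(option, weights):
    """Weighted evaluation of an option: only the attributes named in
    `weights` are consulted, so a primed frame that keeps two of the
    six weights does a third of the evaluative work per option."""
    return sum(w * option[attr] for attr, w in weights.items())

ATTRS = ["uncertainty", "pleasantness", "attention",
         "effort", "controllability", "responsibility"]

full_frame = {a: 1.0 for a in ATTRS}                   # all six attributes
primed_frame = {"pleasantness": 1.0, "effort": -1.0}   # primed subset of two

option = dict(zip(ATTRS, [0.2, 0.9, 0.5, 0.4, 0.7, 0.1]))
print(score(option, primed_frame))  # consults only two of six attributes
```

The per-option saving compounds: comparing many options across many attribute trade-offs is where multi-objective optimization becomes expensive, so shrinking the attribute set shrinks the whole search.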
Making the computational complexity- and informational gain-aware selection of frames to match contexts that require decisive, quick action part of a “rationality toolbox” challenges more of what we thought we knew about rational choice than just the extensionality of its representations. It is not reasonable to require a rational agent to know all of the logical consequences of what she knows; but it is also not reasonable to excuse her from drawing the first-order consequences of a representation of a situation (“The exam is on Tuesday” & “Today is Tuesday” → “The exam is today”). Where, then, along the “depth of reasoning” dimension, do we demarcate between “rational” and “irrational”?
Financial support
This research was funded by the Desautels Centre for Integrative Thinking, Rotman School of Management, University of Toronto.
Conflict of interest
None.