There are myriad enticing issues raised by Gilead et al. I will focus on a theme that emerges in different guises throughout their treatment. This theme is the structure of implicit generative models that the brain uses to furnish predictions of its sensorium. The nature of generative models is especially important from the perspective of active inference, a corollary of the free energy principle (Friston 2013), where many interesting aspects of generative models boil down to their factorial structure.
In what follows, I try to explain why generative models are so central to representation in active (Bayesian) inference as planning (Attias 2003; Baker & Tenenbaum 2014; Friston et al. 2011). I then consider the factorial nature of these models, which endows them with deep (hierarchical) structure; from the concrete to the abstract – and, crucially, from the past to the future (Friston et al. 2017d; Russek et al. 2017). Underwriting this treatment is an enactive aspect of representational processing; namely, the notion that inference about the causes of our sensations is the easy problem: the hard part is inferring the best way to gather those sensations (Davison & Murray 2002; Ferro et al. 2010; MacKay 1992).
Gilead et al. refer often to the formalism of active inference. I think this is perfectly appropriate, because a formal treatment of representational structure is, in its essence, a treatment of the generative models that underwrite inference. Technically, a generative model is just a probability distribution over some causes and their consequences. In the setting of the embodied brain, the causes are states of the world “out there” that are hidden behind our sensations. These sensations are the consequences. Inverting a generative model refers to the inverse mapping from (sensory) consequences to their (worldly) causes. These causes are the abstracta and concreta that constitute different kinds of representations in Gilead et al. The generative model is important because most of the heavy lifting – in terms of understanding structure–function relationships in the brain – rests on its form. In other words, if one knows the generative model, model inversion can be cast in terms of the Bayesian brain hypothesis (in a normative sense) (Doya 2007; Knill & Pouget 2004) or combined with standard inversion schemes to generate neuronal process theories about computational brain architectures and neuronal message passing (Friston et al. 2017c).
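To make this concrete (the notation here is mine, not the target article's), a generative model over hidden causes s and sensory consequences o, together with its Bayesian inversion, can be written as:

\[
p(o, s) = p(o \mid s)\, p(s), \qquad p(s \mid o) = \frac{p(o \mid s)\, p(s)}{\int p(o \mid s')\, p(s')\, \mathrm{d}s'}
\]

The joint density p(o, s) is the generative model: a likelihood specifying how causes generate sensations, and a prior over those causes. Model inversion is the mapping from consequences back to a posterior belief over causes; the abstracta and concreta above correspond to different kinds of hidden cause s.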
These theories are usually cast in terms of belief-updating via a gradient descent on variational free energy. There are several schemes that fall under this class, all of which have been used as biologically plausible process theories for perceptual inference. Crucially, exactly the same quantity is optimised by action; thereby providing a formal account of the action–perception cycle (Fuster 2004). Particular instances include predictive coding (Rao & Ballard 1999) and variational message passing, for generative models based upon continuous and discrete states, respectively. These process theories constitute a field in cognitive neuroscience that has become known as predictive processing (Clark 2013; Seth 2014). So, what are the most important aspects of a generative model?
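As a sketch of the quantity in question (again in generic notation, rather than any particular paper's), the variational free energy for approximate posterior beliefs q(s), given observations o, reads:

\[
F[q] = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right] = D_{\mathrm{KL}}\!\left[q(s)\,\big\|\,p(s \mid o)\right] - \ln p(o)
\]

Perception then corresponds to a gradient descent of the sufficient statistics of q(s) on F, while action changes o so as to reduce the same quantity; this shared objective is what licenses the formal account of the action–perception cycle above. Predictive coding and variational message passing can be read as particular gradient (or fixed point) schemes for continuous and discrete state spaces, respectively.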
One aspect has already been mentioned; namely, the distinction between continuous and discrete models. However, a feature that is common to both is their factorial structure. In fact, from a technical perspective, the way in which we factorise our (non-propositional) posterior beliefs about hidden causes (i.e., how we come to represent things “out there”) is known in physics and machine learning as a mean field approximation. Key examples emerge throughout Gilead et al. The first is a factorisation over the levels of a deep (hierarchical) generative model. Typically, the lowest levels – which generate sensory data – are concrete and modality-bound. As one ascends the hierarchy, the states of the world represented become more abstract and inclusive.
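In its simplest form (a generic mean field factorisation, not the target article's notation), the approximate posterior factorises over distinct hidden factors, and each factor is updated with reference only to the expected values of the others:

\[
q(s) = \prod_{i} q(s_i), \qquad \ln q(s_i) = \mathbb{E}_{q(s_{\setminus i})}\!\left[\ln p(o, s)\right] + \text{const.}
\]

It is this conditional separability of the factors, whether they index hierarchical levels or distinct kinds of hidden cause, that licenses talk of relatively autonomous representations at different levels of abstraction.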
Another important aspect of factorisation is a carving of putative hidden states of the world within any hierarchical level. My favourite example is the factorisation into “what” and “where” (Ungerleider & Haxby 1994). In short, knowing what something is does not tell you where it is, and vice versa. This (conditional) independence is manifest beautifully in the functional anatomy of the dorsal and ventral streams in the brain. This sort of factorisation emerges frequently in Gilead et al. One intriguing example is the notion of predicators; namely, representations that behave like functions. An interesting question here is whether one needs to treat relationships in a way that is fundamentally different from objects. For example, how is a representation of “what” formally distinct from a representation of “where” when generating visual input?
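One way to write this down (a sketch, with factor labels of my own choosing) is to let the likelihood depend jointly on two conditionally independent hidden factors:

\[
p(o \mid s) = p\!\left(o \mid s^{\text{what}}, s^{\text{where}}\right), \qquad q\!\left(s^{\text{what}}, s^{\text{where}}\right) = q\!\left(s^{\text{what}}\right) q\!\left(s^{\text{where}}\right)
\]

The two factors are independent in the (approximate) posterior, yet both are required, jointly, to generate a visual prediction; this is one way of cashing out the idea of representations that behave like functions, without treating relationships as a fundamentally different kind of thing from objects.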
The final factorisation is over time. This theme emerges in modality-specific features, objects, and relationships – which rest upon the notion of object permanence. This sort of permanence has to be written into a generative model of a capricious world. The theme reappears in terms of spatiotemporal contiguity in the treatment of multimodal features. Indeed, the premise of Gilead et al. rests upon integrating influential theories in the “predictive brain camp” with “prospection (or future oriented mental time travel).” This is a big move, because it entails generative models of dynamics, narratives, or trajectories – with representations of the past and future. In turn, this enables the representation of states that have not yet been realised. These states undergird “simulation of future events” (intro., para. 4) and a sense of agency. In other words, the notion of a model that can “generate a representation that models the specific problem at hand” (sect. 3.1, para. 3) is exactly a generative model of the future, with “my” action as a latent state that has to be inferred. This is important from the point of view of active inference, because it suggests that much of our inference is not about states of affairs “out there” but more about “what would happen if I did that” (Schmidhuber 2006). This is nicely summarised as follows (ibid., p. 23):
The functionality of a simulation stems from the fact that the person running the simulation self-projects into it, that is, becomes an agent in the simulated situation.
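A worked (and deliberately generic) way to formalise this self-projection is to equip the generative model with trajectories, and with policies π (sequences of my own actions) as latent variables:

\[
p(o_{1:T}, s_{1:T}, \pi) = p(\pi)\, p(s_1) \prod_{\tau=1}^{T} p(o_\tau \mid s_\tau) \prod_{\tau=1}^{T-1} p(s_{\tau+1} \mid s_\tau, \pi)
\]

Under a model of this form, inferring π just is asking “what would happen if I did that,” because each policy entails its own predicted future states and observations; mental travel then corresponds to evaluating these counterfactual futures before any of them has been realised. This is broadly along the lines of the discrete-state active inference formulations cited above.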
Gilead et al. then offer a compelling conclusion about mental travel (sect. 4, para. 2):
Representational structures … form the bridges that allow us to traverse uncertainty. In light of this … the link between abstraction and mental travel is fundamental to any consideration of these constructs; there is no mental travel without abstraction, and there is no need for abstraction but to support mental travel.
I would add:
and there is no need for mental travel but to support inference about what I should do next.