Brette's treatment of information theory does not do justice to the role of the receiver in Shannon's (Reference Shannon1948) theory. The three elements of communication essential to understanding the role of codes in neural communication are not “correspondence, representation, causality”; they are source, signal, and receiver. The receiver must have a set of possible messages it may receive about some state of the world (the source) by way of a signal. It must also have a probability distribution over the set of possible messages, because Shannon's formula makes information a property of probability distributions. Absent a distribution, there is no measure of information. Shannon's theory establishes the conceptual foundation for a scientific understanding of the flow of information from the world into and within brains (Gallistel, Reference Gallistelin press).
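The point that information is a property of a probability distribution over the receiver's set of possible messages can be made concrete with a few lines of Python (a toy sketch for illustration, not anything in Shannon's paper): the same four possible messages yield different amounts of information under different distributions, and absent any distribution the entropy formula cannot even be applied.

```python
import math

def entropy(probabilities):
    """Shannon entropy, in bits, of a distribution over possible messages."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A receiver with four equiprobable messages: 2 bits per message.
uniform = [0.25, 0.25, 0.25, 0.25]

# The same four messages under a skewed distribution carry less information.
skewed = [0.7, 0.1, 0.1, 0.1]

print(entropy(uniform))  # 2.0
print(entropy(skewed))   # ~1.36
```

Note that the message set alone fixes nothing; only once a distribution is specified does a definite quantity of information exist.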
Brette fails to ask the first question that must be asked when applying Shannon's theory to understanding world-brain communication: What determines the set of possible messages? Not the world; its role is passive. Brain structure determines the sets of possible messages; they are all and only the sets of messages its highly differentiated structures enable it to receive. Sensory transducers and the signal processing machinery that extracts information about distal stimuli (things out there in the world) from the signals generated in those transducers by proximal stimuli (the stimuli that act directly on the transducers) determine the messages a brain can receive.
Color vision provides an illustrative example. The set of messages the human brain receives about the reflectance spectra of surfaces is determined by the distinguishable locations in a neural vector space with three bipolar dimensions. Neither the dimensionality of the vector space nor the bipolarity of the vectors that encode color is a property of spectra. The brain imposes this encoding when it creates the three types of cones in the retina and the multiple stages of signal processing that map from cone photon catches to the encodings that mediate color percepts. In setting up three and only three cone types, brain epigenesis establishes the dimensionality of the space. In setting up the circuitry for subtracting the signal from one cone type from the signal of another cone type, it establishes the bipolarity of the representational structure. The bipolarity creates a distinctive feature of color perception, the mutual exclusivity of certain color pairs (red-green and yellow-blue, for example).
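The opponent structure just described can be sketched in code. The channel definitions below are a textbook simplification, assumed here for illustration rather than taken from the commentary: one achromatic sum and two bipolar differences of cone signals.

```python
def opponent_code(L, M, S):
    """Map cone photon catches (L, M, S) to a three-dimensional bipolar code.

    A standard textbook opponent-channel scheme (an illustrative assumption,
    not the author's exact model of the retinal circuitry).
    """
    luminance   = L + M          # achromatic dimension
    red_green   = L - M          # bipolar: positive reddish, negative greenish
    yellow_blue = (L + M) - S    # bipolar: positive yellowish, negative bluish
    return (luminance, red_green, yellow_blue)

# Because red_green is a single signed quantity, no stimulus can drive it
# positive and negative at once: reddish and greenish are mutually exclusive.
print(opponent_code(80, 40, 10))  # (120, 40, 110)
```

The mutual exclusivity of red-green and yellow-blue falls directly out of the subtraction: one number cannot have two signs.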
Shannon's source-coding theorem enables us to understand why the brain imposes this structure on the color messages it receives: An efficient code must reflect the source statistics. In the case of color, the source statistics are the statistics of the reflectance spectra of surfaces in the natural world. In principle (and in a lab equipped with monochromators), the intensity of light at any wavelength is independent of its intensity at any other, but in the world, reflectance spectra have massive redundancies, because the intensity at one wavelength is highly predictive of the intensities at neighboring wavelengths. This redundancy greatly reduces the available information. The brain's way of encoding color captures a large part of the information available from the reflectance profiles of surfaces in the natural world (Boker Reference Boker1997; Maloney Reference Maloney, Masufeld and Heyer2003). Vague talk about the “dynamic, circular, distributed” nature of brain processes does not deliver this kind of insight.
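How redundancy reduces available information can be shown with a toy source of two neighboring "wavelength bins," each bright or dim (the probabilities below are hypothetical numbers chosen for illustration): when the intensities are correlated, as in natural reflectance spectra, the joint distribution carries fewer bits than the independent case.

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Joint distributions over (bright, bright), (bright, dim), (dim, bright), (dim, dim).
# Independent, equiprobable intensities: joint entropy is the full 2 bits.
independent = [0.25, 0.25, 0.25, 0.25]

# Correlated intensities (neighboring bins agree 90% of the time): redundant.
correlated = [0.45, 0.05, 0.05, 0.45]

print(entropy(independent))  # 2.0 bits
print(entropy(correlated))   # ~1.47 bits
```

An efficient code exploits exactly this gap, which is what the brain's low-dimensional color encoding appears to do.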
A computing machine like the brain has four material foundations: its signals, which transmit information from place to place within the machine; the symbols in its memory, which transmit information from the past into the future; the machinery for executing signal processing operations; and the machinery for executing operations on symbols (Gallistel & King Reference Gallistel and King2010). The machinery for processing the signals and operating on the symbols cannot be designed – if one is building the machine – or understood – if one is reverse engineering it – until one has decided on, or come to know, the code or codes by which the information will be, or is, represented in the signals and the symbols. The foundational role of the code is clear to computer engineers and to those who know the history of molecular biology (Judson Reference Judson1980). Its importance to neuroscience is well illustrated by the Rieke et al. (Reference Rieke, Warland, van Stevenick R. and Bialek1997) book, Spikes: Exploring the Neural Code, which Brette cites, but otherwise ignores.
Much of the information conveyed by neural signals and stored in neural memory is quantitative. The physically realized representatives of quantities in a computing machine (e.g., bit patterns) are what computer scientists understand by numbers. The brain performs arithmetic operations on the signals and symbols, which is one good reason for conceptualizing brain function in computational terms.
An example of neural arithmetic is the time-compensated sun compass. Animals learn and store in memory the sun's azimuth as a function of the time of day (Dyer & Dickinson Reference Dyer and Dickinson1996; von Frisch Reference von Frisch1967; von Frisch & Lindauer Reference von Frisch and Lindauer1954). They can then steer by the sun while flying a compass bearing to a food source (whose location may have been obtained the previous day by following the dance of another forager [Menzel et al. Reference Menzel, Kirbach, Hass, Fischer, Fuchs, Koblofsky, Lehmann, Reiter, Meyer, Nguyen, Jones, Norton and Greggers2011]). To enable that behavior, their brain must subtract the current solar azimuth from the desired compass course to obtain the current solar bearing of the source, the angle at which they must hold the sun's image while flying to their destination.
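The subtraction just described is ordinary modular arithmetic on angles. A minimal sketch (the function name and sign convention are assumptions made for illustration):

```python
def solar_bearing(desired_course_deg, solar_azimuth_deg):
    """Angle at which to hold the sun's image to fly the desired compass course.

    Bearing = desired course - current solar azimuth, wrapped into (-180, 180]
    so that negative values mean the sun is held to the left.
    """
    diff = (desired_course_deg - solar_azimuth_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

# Hypothetical numbers: to fly a course of 90 deg (east) when the sun's
# azimuth is 135 deg, hold the sun 45 deg to the left.
print(solar_bearing(90.0, 135.0))  # -45.0
```

Because the azimuth changes through the day, the animal must recompute this difference continuously from its stored azimuth-by-time-of-day function.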
Understanding the neural machinery that performs angular subtraction requires understanding how the brain encodes angular quantity (Gallistel Reference Gallistel2018). Proponents of dynamical systems theory do not seem prepared to consider how the brain does the arithmetic required in navigating. This reluctance frees them from the need to ponder how it encodes quantities (Gallistel Reference Gallistel2017; Reference Gallistel2018).
An understanding of the brain's codes is as essential to neuroscience as an understanding of the genetic code is to biology. Shannon's theory is now, and is likely to remain, the foundation of that understanding (cf. Qian & Zhang Reference Qian and Zhang2019; Stevens Reference Stevens2018).