
Crash Testing an Engineering Framework in Neuroscience: Does the Idea of Robustness Break Down?

Published online by Cambridge University Press:  01 January 2022


Abstract

In this article, I discuss the concept of robustness in neuroscience. Various mechanisms for making systems robust have been discussed across biology and neuroscience (e.g., redundancy and fail-safes). Many of these notions originate from engineering. I argue that concepts borrowed from engineering aid neuroscientists in (1) operationalizing robustness, (2) formulating hypotheses about mechanisms for robustness, and (3) quantifying robustness. Furthermore, I argue that the significant disanalogies between brains and engineered artifacts raise important questions about the applicability of the engineering framework. I argue that the use of such concepts should be understood as a kind of simplifying idealization.

Type
Cognitive Sciences
Copyright
Copyright © The Philosophy of Science Association

The brain is a physical device that performs specific functions; therefore, its design must obey general principles of engineering.

(Sterling and Laughlin 2015, xv)

1. Introduction

In this article, I discuss a cluster of issues around the understanding of robustness in neuroscience. The systems biologist Hiroaki Kitano (2004, 826) defines robustness as “a property that allows a system to maintain its functions against internal and external perturbations.” According to this definition, in order to determine whether or not a system is robust, one must specify its function and the kinds of perturbation it faces. Empirically determinable questions then follow about how exactly the system achieves its robustness. Various means for making systems robust have been discussed across biology and neuroscience: copy redundancy, fail-safes, degeneracy, modularity, passive reserve, active compensation, plasticity, decoupling, and feedback (see fig. 1). It is obvious, but still worth emphasizing, that most of these notions originate from engineering.

Figure 1. Engineering framework for robustness. A set of terms originating from engineering and control theory that are applied to biological systems to explain how they achieve robust performance.

In section 2 of this article, I argue that the framework of concepts borrowed from engineering aids neuroscientists in (1) operationalizing robustness by specifying functions of the system and determining possible sources of perturbation, (2) formulating hypotheses about means for the system to achieve robustness, and (3) showing how robustness may be precisely quantified. This will be shown with examples of neuroscientific research that aims to measure robustness in a retinal circuit (Sterling and Freed 2007) and in the motor cortex (Svoboda 2015), as well as to develop models of homeostatic control (Davis 2006; O’Leary et al. 2014).

In section 3, I argue that the use of the engineering framework in neuroscience gets stretched, perhaps to the breaking point, when applied to systems where (1) no principled distinction exists between processes for robustness and processes that continually maintain the life of the cell, (2) perturbations are a regular occurrence rather than anomalous events, and (3) one should not conceive of the system as seeking to maintain a steady state. I will argue that the limitations of the engineering notions are put into stark relief when one examines neural systems through the lens of the process approach to biology (Dupré 2012). The engineering perspective, to the extent that it treats biological systems as prespecified objects with fixed functions, misses many of the features that make robust biological systems fascinating and that are highlighted by the process view.

In section 4, I will consider if it is necessary to reengineer the concepts of robustness to be more in line with the dynamicism of biological systems or, alternatively, if we should accept the engineering perspective as it is, as one among many idealizing and simplifying heuristics for understanding complex systems like the brain.

2. Putting the Engineering Framework to Use

The robustness of the brain is one of its many extraordinary attributes. By this I mean that brains can undergo moderately severe external perturbations while still maintaining approximately normal function. Obviously, robustness has its limits, and the brain’s characteristic patterns of resilience and fragility are an important target of research (Sporns 2010, chap. 10). In order to investigate robustness, it is necessary first to specify what sorts of perturbations the system is robust to and then to quantify how robust it actually is. Explanations of robustness can be developed by testing hypotheses concerning the exact mechanisms by which robust performance is achieved. The engineering framework can be put to effective use in each of these processes.

For example, Sterling and Freed (2007) pose the question of how robust the retinal circuit is. They define robustness as the factor by which intrinsic capacity exceeds normal demand, which is the engineer’s notion of margin of safety (563). The idea can be illustrated through their comparison with bridge design. An engineer designing a road bridge will consider both the anticipated normal demand (e.g., commuter traffic) as well as the unusual demands that might occasionally be placed on the bridge (e.g., the passage of a 30-ton military vehicle). The unusual demand can be thought of as a “perturbation” in Kitano’s terms. A robust design will ensure that the system does not break when pushed beyond normal conditions. For a bridge, this can be achieved with passive reserve (using thicker steel than is needed under normal conditions) and redundancy (including additional beams so that there are backup structures if any parts are compromised).

Sterling and Freed take the bridge case to be analogous to the retinal circuit. Normal demand, for the retina, is the intensity of illumination that the eye will encounter under naturalistic stimulation conditions. The safety factor is calculated by experimental determination of the maximum illumination level under which neurons in the retina can maintain their ability to signal to downstream neurons. Sterling and Freed (2007, 570) report that

across successive stages in this neural circuit, safety factors are on the order of 2–10. Thus, they resemble those in other tissues and systems. Their similarity across stages also accords with the principle of symmorphosis—that efficient design matches capacities across stages that are functionally coupled.

Sterling and Freed’s explanation of robustness depends on the notion of passive reserve. For photoreceptor neurons, this is calculated as the number of vesicles of neurotransmitter available in their synapse for continuous signaling at high rates without restocking of the vesicles (565–66). In arriving at their conclusion about retinal safety margins, they argue that there are at least twice as many vesicles as needed under normal stimulation conditions. In this case, we see that a design approach borrowed from civil engineering plays a clear and striking role in these neuroscientists’ definition, operationalization, and explanation of robustness in the retina.
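Sterling and Freed’s safety factor is just a ratio, which makes it easy to sketch. The following minimal Python illustration uses hypothetical numbers (a vesicle count and demand level invented for the example, not their measured data):

```python
def safety_factor(capacity: float, normal_demand: float) -> float:
    """Engineer's margin of safety: the factor by which intrinsic
    capacity exceeds normal demand."""
    if normal_demand <= 0:
        raise ValueError("normal demand must be positive")
    return capacity / normal_demand

# Hypothetical synapse: 400 releasable vesicles on hand, where sustained
# signaling under normal stimulation would consume 200 before restocking.
print(safety_factor(capacity=400, normal_demand=200))  # 2.0
```

A safety factor of 2 would sit at the bottom of the 2–10 range that Sterling and Freed report across stages of the retinal circuit.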

Another example comes from Davis’s (2006) review of work on homeostatic regulation in the nervous system.Footnote 1 As he writes,

Homeostatic control systems are best understood in engineering theory, where they are routinely implemented in systems such as aircraft flight control. Recently, biological signaling systems have been analyzed with the tools of engineering theory. (314)

Accordingly, homeostatic control systems have a number of “required features”: (1) a set point that defines the target output of the system; (2) feedback; (3) precision in resetting the output back to the set point, following a perturbation; and, normally, (4) sensors that measure the difference between the actual output and the set point (309). Thus, control theory offers neuroscientists clear and experimentally testable criteria for determining whether a system undergoes homeostatic regulation, by looking for these required features (e.g., the existence of a set point) in a system. The operating conditions of homeostatic regulation and the biophysical mechanisms of feedback, sensors, and so on, are also open to experimental investigation. Reported examples of properties under homeostatic control are muscle excitation at the neuromuscular junction (309) and bursting properties of invertebrate neurons (311). More recently, O’Leary et al. (2014, 818) argue that ion channel expression in their simplified model of invertebrate neurons can be understood as an implementation of integral control, a standard control-theoretic architecture.
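The four “required features” correspond to the elements of a standard integral-control loop: a set point, a sensor that reads the error, feedback of the accumulated error, and precise resetting after a perturbation. Here is a minimal discrete-time sketch; the gain, set point, and disturbance values are illustrative, not drawn from any of the cited models:

```python
def integral_control(set_point: float, disturbance: float,
                     gain: float = 0.2, steps: int = 200) -> float:
    """Discrete-time integral controller: the accumulated error is fed
    back, so a constant disturbance is rejected and the output settles
    precisely at the set point (zero steady-state error)."""
    output = 0.0
    integral = 0.0
    for _ in range(steps):
        error = set_point - output              # sensor reads the deviation
        integral += error                       # accumulate (integrate) it
        output = gain * integral + disturbance  # feedback plus perturbation
    return output

# Despite a constant disturbance, the output returns to the set point.
print(round(integral_control(set_point=1.0, disturbance=0.5), 3))  # 1.0
```

The zero steady-state error of integral control is what makes it a natural reading of feature (3), precision in resetting the output.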

3. Crash Testing the Framework

Before considering the question of whether the engineering framework becomes structurally unsound when applied to some kinds of neural systems, I would like to draw our attention to some of its features. The basic ideas are clearly illustrated in Sterling and Freed’s (2007) example of the bridge. When one considers the robustness of an engineered artifact like a bridge, it is presupposed that the system is built up from component parts in such a way as to achieve a specific function. The robustness of the bridge is conceptually distinct from its other designed features or functions, and it can trade off against some of them. For example, the more robust the bridge is to the passage of the occasional heavy vehicle, the more expensive it will be to build, because it will require more steel (563). Moreover, the perturbations against which the system is robust are thought of as atypical events, also conceptually distinct from the normal operations of the system.

There is also the tendency to think of robustness as allowing the system, following a perturbation, to return to its initial stable state. Some experiments specifically involve the operationalization of the robustness of a system as the reversion to a prior state. For example, reporting on an experiment in which mouse premotor cortex in one hemisphere was inhibited using optogenetics during the preparation period for the animal’s movement, Svoboda (2015) writes that “this preparatory activity is remarkably robust to large-scale unilateral optogenetic perturbations: detailed dynamics that drive specific future movements are quickly and selectively restored by the network.”Footnote 2 This notion of robustness as the ability of the system to revert to a prior functional state is similar to the idea of homeostasis as the ability of a system to stabilize some quantity in spite of external changes.

Eve Marder’s laboratory has carried out a long-term investigation into the ability of neurons to maintain stable electrophysiological properties despite continual turnover of the ion channels embedded in the cell membrane that are responsible for its electrical excitability. This research project is one of the central examples of the study of robustness in neural systems. Marder and her collaborators make ample use of the engineering framework when reviewing other results and reporting their findings. For example, O’Leary et al. (2013, E2645) write,

Both theoretical and experimental studies suggest that maintaining stable intrinsic excitability is accomplished via homeostatic, negative feedback processes that use intracellular Ca2+ concentrations as a sensor of activity and then alters the synthesis, insertion, and degradation of membrane conductances to achieve a target activity level.

What is striking about the characterization of electrophysiological stability in the face of ion channel turnover as a kind of robustness in the face of a perturbation is the fact that the turnover is just part of the normal physiology of the cell (e.g., E2651). There is no functional and stable state of the cell in which this turnover does not occur—a fact that these authors also highlight.Footnote 3 This brings our attention to some strains in the application of the engineering framework to this biological system.

In the basic engineering characterization of robustness, sketched above, perturbations are different from the normal circumstances in which the system is expected to operate. “Perturbation” carries the everyday connotation of an event that throws the system off balance and is deleterious to its normal functioning. We cannot think of the events of ion channel turnover as perturbations in this sense; they are business as usual for the cell.

Furthermore, it is not in the nature of the system to seek to return to a prior, stable arrangement of its parts. A crucial property of the nervous system is its plasticity: the tendency for its component parts and the connections linking them to be continually sculpted by experience. The homeostatic mechanisms that Marder and colleagues investigate need to be understood as maintaining specific properties (such as a cell’s Ca2+ concentration) at a certain point, not (and these researchers do not claim otherwise) as some generalized operation for achieving system-wide internal stability (see sec. 4.4).

In the basic engineering conception of robustness, there is a clear conceptual distinction between the features of a system that allow it to carry out its intended function and those that make the system robust (even if in reality one individual feature can serve both purposes). In the case of the neuron that has continual ion channel turnover and no definite stable state to return to following these “perturbations,” it is not clear that we can make this distinction. A more natural way to think about this and other biological systems is as ones, unlike engineered artifacts, “designed” to keep changing and “designed” to maintain functional stability in the midst of this constant change.Footnote 4

The tensions and strains associated with the application of the basic engineering framework to biological systems can be felt more sharply if we appeal to a process metaphysics of biological “things” (Dupré 2012). According to this view, organisms are not substances but processes—items whose existence depends on certain changes taking place. This highlights the fact that the life of organisms depends on a continual turnover of their component parts and that the system as a whole, while living, persists longer than its parts. Yet, features and functions of the organism remain relatively stable. For example, memories can endure for decades, even though the neurons that form them have undergone material change. This stability must be achieved—somehow. And so processes for robustness are not cleanly distinct from the general maintenance processes that keep the organism alive.

The processual nature of neurons is nicely described by Marder and Goaillard (2012, 563):

Each neuron is constantly rebuilding itself from its constituent proteins, using all of the molecular and biochemical machinery of the cell. (See also n. 3, above.)

We can contrast this with the substance metaphysics that we usually assume when thinking about engineered artifacts. A bridge or an airplane is what it is because of the parts that compose it. Its existence does not depend on the occurrence of any process. This is not to deny that an expert on the theory of matter might well argue that the steel of the bridge maintains its integrity because of some fundamental processes. The point is that when characterizing the robustness of the bridge or the airplane, we would not resort to such sophistication. Rather, we think of the bridge as a substance and not a process—a steel structure that, in order to maintain its function in the face of perturbation, must resist rather than effect the swapping around of its component parts.

4. Examining Reasons to Reengineer

Now that we have noted these disanalogies between biological organisms and engineered things, we ought to worry that the framework borrowed from engineering is misleading when thinking about robustness in the brain and other biological systems. Is it time to reengineer our conceptual tools for thinking about robustness to make them more suitable for characterizing living things? In this section, I consider four possible answers to this question.

4.1. No. The Terms in the Engineering Framework Are Just Words That Are Used to Facilitate Communication of the Neuroscientific Results

One potential response to the concerns raised in the previous section is that they stem from a superficial fixation on the vocabulary neuroscientists use when writing about their research.Footnote 5 Just because the authors discussed above have employed certain words first introduced by engineers, it does not follow that their understanding of neurophysiology is distorted by comparisons with engineering. For example, I mentioned that the word “perturbation” has a negative connotation that makes it seem inappropriate when describing nonpathological and frequent events like ion channel turnover. It could well be that in the context of this research, the term takes on a different meaning—for example, as any event that the system cannot directly control, such as changes in protein configuration owing to thermal noise.Footnote 6

I believe that this response is warranted by what we know of the methodology of some of the investigations discussed above, but not all of them. In the case of Sterling and Freed (2007), I was careful to show that the engineering conceptions directly shaped how the two neuroscientists operationalized and quantified robustness and how they identified mechanisms by which robustness is achieved. There is no indication that they used terms such as “safety factor” to mean something radically different in the context of neuroscience.

A very explicit statement of the aim of applying engineering principles directly to the understanding of the premotor cortex comes from Svoboda (2015):

Preparatory activity is distributed in a redundant manner across weakly coupled modules. These are the same principles used to build robustness into engineered control systems. Our studies therefore provide an example of consilience between neuroscience and engineering.

Thus, the convergence between the neurophysiological and the engineering perspectives on the mouse motor planning system is taken to be an important result of Svoboda’s study. This echoes Sterling and Laughlin’s (2015, xiii–xv) proposal that inquiring into how engineering principles are implemented in neural systems, and thereby attempting to reverse-engineer the brain, leads to insights not otherwise available through routine data collection.

4.2. No. The Inadequacies You Point out with the Engineering Framework Are Based on a Caricature of Mechanical Engineering, Not the Actual Complex Discipline

My characterization of the engineering framework assumes that mechanical engineering (the design of bridges, airplanes, and the like) is paradigmatic of the engineering approach in general.Footnote 7 But of course there are many different kinds of engineering, from mechanical to electronic to communications to chemical. It could well be that the mismatch between understanding the robustness of a highly dynamic entity like the brain and the rather static conception of robust objects that falls out of the basic engineering framework is just an artifact of focusing narrowly on the kind of engineering that is actually furthest from neuroscience.

It would take me beyond the scope of this short article (and well beyond my own knowledge of the subject) to sketch out the various possible frameworks associated with each field of engineering and to see which conception of robustness is most suitable for biology. However, what I will say is that there is evidence in the studies discussed above that neuroscientists themselves do sometimes draw on the mechanically based caricature. This is particularly true of Sterling and Freed (2007). In contrast, when Davis (2006) and O’Leary et al. (2014) make direct appeals to engineering, they refer specifically to models in control theory.Footnote 8 This still invites questions about whether the paradigm examples of controlled systems (e.g., a car driven on cruise control, a Watt governor, or an airplane flown on autopilot) are dynamic enough to capture the processual nature of the nervous system.

4.3. Yes. The Brain Is So Different from an Engineered Artifact That the Framework Is Misleading and Inappropriate

In sections 4.1 and 4.2, I discussed two reasons for thinking that we should not be concerned about any radical disanalogy between robustness in biological and engineered systems. While I agree that these are important points to keep in mind, I do not think that they defuse the fundamental concern that when neuroscientists borrow engineers’ terms in order to study robustness, they risk mischaracterizing the brain as more like an engineered artifact than it actually is. Is the appropriate conclusion, then, that a neural circuit is so different from a bridge or an airplane that the engineering framework is simply misleading and should be discarded?

One way to make this strong negative case is to consider a historical example in which reasoning by analogy with engineered systems was misleading. One case in point comes from von Békésy, a physicist and communications engineer who turned his attention to inhibition in the nervous system. In his book Sensory Inhibition, he notes that there are feedback loops everywhere in the nervous system, and he asks how it is that the system manages to avoid ending up in a dysfunctional oscillatory state (1967, 25). It seems that von Békésy was importing his understanding of systems containing feedback from engineering, where oscillations are normally problematic and efforts must be made to dampen them. Thus, he inferred that oscillations in the nervous system would also be nonfunctional or dysfunctional. These days, neuroscientists seek to understand how oscillations in a healthy brain (i.e., its characteristic patterns of endogenous activity) are actually responsible for cognitive functions and how these oscillations differ from the ones associated with pathologies such as epilepsy and Parkinson’s disease.Footnote 9
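Von Békésy’s worry can be reproduced in a toy difference equation: negative feedback applied to a delayed measurement either damps a perturbation or drives oscillations, depending on gain and delay. The parameters below are arbitrary illustrations of the two regimes, not a model of any neural circuit:

```python
def delayed_feedback(gain: float, delay: int, steps: int = 60) -> list:
    """Negative feedback acting on a measurement taken `delay` steps ago.
    Low gain with short delay damps the initial perturbation; higher gain
    with longer delay produces oscillations -- the failure mode an
    engineer of von Bekesy's era would work to suppress."""
    history = [1.0] * (delay + 1)  # start from a perturbed state
    for _ in range(steps):
        history.append(history[-1] - gain * history[-1 - delay])
    return history

damped = delayed_feedback(gain=0.2, delay=1)       # decays smoothly to 0
oscillating = delayed_feedback(gain=1.5, delay=2)  # swings above and below 0
```

The lesson of the oscillation literature cited above is that the biological system need not sit in the “damped” regime to be functioning well.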

The cautionary tale just told gives some concrete indications of how the imposition of the engineering framework on neural systems can lead to assumptions that in retrospect appear misguided. But it would be too hasty to infer from this example that current work on robustness in neuroscience is of dubious standing whenever it appeals to the concepts of engineering. A more general argument is the following: the brain is not like a bridge (or a computer, or an airplane on autopilot); therefore, whenever neuroscientists appeal to terms borrowed from the analysis of such systems, they risk saying things that are simply false, because they fail to notice relevant disanalogies. This lays all the skeptical cards on the table. In the last part of the article, I attempt to mitigate these worries.

4.4. No. Use of the Engineering Framework Should Be Thought of as a Simplifying Strategy

The neuroscientist Steven Rose (2012, 61) writes that “one of the most common but misleading terms in the biology student’s lexicon is homeostasis,” that is,

[the] concept of the stability of the body’s internal environment. But such stability is achieved by dynamic responses; stasis is death, and homeodynamics needs to replace homeostasis as the relevant concept.Footnote 10

This seems to capture the problem that was first noted in section 3, that we should not be misled by the engineering framework into thinking of neural systems as seeking to maintain an initial stable state. But we also noted that the neuroscientists employing control-theoretic models of homeostatic mechanisms are not thinking of their systems as seeking stability in this very general way. Instead, they are modeling the stability of a specific variable—in the case of O’Leary et al. (2014), the concentration of Ca2+—and investigating the mechanisms by which it is controlled. To this end, it is reasonable to interpret the system as an integral controller (O’Leary et al. 2014, 818).Footnote 11 Thus it is still useful to talk about homeostasis with respect to Ca2+ concentration even while regarding the system as a whole as, in reality, a “homeodynamic” one.

I think of neuroscientists whose investigation of robustness in the brain is scaffolded by the engineering framework as providing idealized mechanistic explanations. Their explanatory target is, for example, the process by which overall neuronal activity level is controlled via regulation of ion channel gene transcription through a Ca2+-sensitive feedback loop. This is standard fodder for mechanistic explanation. At the same time, the framework of engineering—in this case, the schematic of the integral controller—serves to direct attention to specific parts and processes in the extremely complex cellular machinery and to interpret them in control-theoretic terms (sensors, feedback loops, etc.) while bracketing other aspects not immediately relevant to the explanation of robustness.

Bechtel (2015, 92) has presented the case that

mechanisms are [to be] viewed not as entities in the world, but as posits in mechanistic explanations that provide idealized accounts of what is in the world.

His example is the idealization (understood as “falsehood”) that scientists introduce by putting boundaries around putative mechanisms, boundaries that do not exist in nature. In the cases explored in this article, the idealization comes in through the analogical reasoning of treating a neuronal system as if it were an engineered artifact.Footnote 12 This, like the positing of boundaries, is a useful way to simplify the explanandum. It enables neuroscientists to bracket some of the known facts about the brain’s messy, Heraclitean nature. But it means, perhaps, that there is a stark difference between the brain viewed sub specie aeternitatis (what some neuroscientists call the “ground truth” of the brain) and viewed sub specie machinae (in the guise of a machine).

Footnotes

I am greatly indebted to Timothy O’Leary, Nancy Nersessian, and Peter Sterling for their feedback on this work. I would also like to thank the participants in a fall 2015 workshop on robustness in neuroscience for discussion of the ideas behind this article and the audience at a spring 2016 reengineering biology conference for their questions and comments on it. Both events were hosted by the Center for Philosophy of Science at the University of Pittsburgh. I am also grateful to the audience members at the 2016 Philosophy of Science Association meeting for a lively discussion.

1. Note that Davis (2006, 308) makes a conceptual distinction between robust properties and properties under homeostatic control: “In general, robustness describes a system with a reproducible output, whereas homeostasis refers to a system with a constant output.” I will ignore this difference for present purposes, since homeostatic systems conform to Kitano’s general definition of robust systems.

3. “Neurons in the brains of long-lived animals must maintain reliable function over the animal’s lifetime while all of their ion channels and receptors are replaced in the membrane over hours, days, or weeks. Consequently, ongoing turnover of ion channels of various types must occur without compromising the essential excitability properties of the neuron” (O’Leary et al. 2013, E2645).

4. This blurring of the lines between mechanisms for robustness and mechanisms for life is highlighted by Edelman and Gally (2001, 13763) in their discussion of the difference between redundancy and degeneracy in biological systems: “The term redundancy somewhat misleadingly suggests a property selected exclusively during evolution, either for excess capacity or for fail-safe security. We take the contrary position that degeneracy is not a property simply selected by evolution, but rather is a prerequisite for and an inescapable product of the process of natural selection itself.” They also discuss another disanalogy between engineered and biological systems—the applicability of “design” talk.

5. A response along the lines articulated in the section heading was suggested to me by Timothy O’Leary, in conversation.

6. I thank Timothy O’Leary for suggesting this gloss of a perturbation as any event that the system cannot directly control.

7. The concern articulated in the section heading was raised by Arnon Levy and Timothy O’Leary.

8. See also Zhang and Chase (2015) on the physical control system perspective on brain-computer interfaces for motor rehabilitation.

9. For a scientific overview, see Buzsáki (2006). For discussion of philosophical implications, see Bechtel and Abrahamsen (2013). See also Knuuttila and Loettgers (2013, 160) on a parallel difference across engineering and cell biology, in which oscillations were found to have unexpected functional roles in cell physiology.

10. Compare Sterling (2012) on the concept of allostasis—stability through change, with an emphasis on predictive regulation. Day (2005) and O’Leary and Wyllie (2011), in contrast, argue that the concept of homeostasis easily accommodates these dynamic and predictive aspects and that the term “allostasis” is therefore superfluous. It is an interesting question (but beyond the scope of this article) whether the narrow or wide definition of homeostasis is currently more prevalent among biologists and neuroscientists.

11. Note that O’Leary et al.’s (2014) study of homeostasis is via a model of a neuron. But the model is realistic enough that it is expected to shed light on actual biophysical mechanisms.

12. The connection between the use of analogy and idealization in modeling is flagged by Hesse (1953) but remains underexplored in more recent philosophy of science.

References

Bechtel, W. 2015. “Can Mechanistic Explanation Be Reconciled with Scale-Free Constitution and Dynamics?” Studies in History and Philosophy of Science C 53:84–93.
Bechtel, W., and Abrahamsen, A. 2013. “Thinking Dynamically about Biological Mechanisms: Networks of Coupled Oscillators.” Foundations of Science 18:707–23.
Buzsáki, G. 2006. Rhythms of the Brain. Oxford: Oxford University Press.
Davis, G. W. 2006. “Homeostatic Control of Neural Activity: From Phenomenology to Molecular Design.” Annual Review of Neuroscience 29:307–23.
Day, T. A. 2005. “Defining Stress as a Prelude to Mapping Its Neurocircuitry: No Help from Allostasis.” Progress in Neuro-Psychopharmacology and Biological Psychiatry 29:1195–1200.
Dupré, J. 2012. Processes of Life. Oxford: Oxford University Press.
Edelman, G. M., and Gally, J. A. 2001. “Degeneracy and Complexity in Biological Systems.” Proceedings of the National Academy of Sciences 98 (24): 13763–68.
Hesse, M. B. 1953. “Models in Physics.” British Journal for the Philosophy of Science 4 (15): 198–214.
Kitano, H. 2004. “Biological Robustness.” Nature Reviews Genetics 5:826–37.
Knuuttila, T., and Loettgers, A. 2013. “Basic Science through Engineering? Synthetic Modeling and the Idea of Biology-Inspired Engineering.” Studies in History and Philosophy of Science C 48:158–69.
Li, N., Daie, K., Svoboda, K., and Druckmann, S. 2016. “Robust Neuronal Dynamics in Premotor Cortex during Motor Planning.” Nature 532:459–64.
Marder, E., and Goaillard, J.-M. 2012. “Variability, Compensation, and Homeostasis in Neuron and Network Function.” Nature Reviews Neuroscience 7:563–74.
O’Leary, T., Williams, A. H., Caplan, J. C., and Marder, E. 2013. “Correlations in Ion Channel Expression Emerge from Homeostatic Tuning Rules.” Proceedings of the National Academy of Sciences 110 (28): E2645–E2654.
O’Leary, T., Williams, A. H., Franci, A., and Marder, E. 2014. “Cell Types, Network Homeostasis, and Pathological Compensation from a Biologically Plausible Ion Channel Expression Model.” Neuron 82 (4): 809–21.
O’Leary, T., and Wyllie, D. J. A. 2011. “Neuronal Homeostasis: Time for a Change?” Journal of Physiology 589 (20): 4811–26.
Rose, S. 2012. “The Need for a Critical Neuroscience.” In Critical Neuroscience: A Handbook of the Social and Cultural Contexts of Neuroscience, ed. S. Choudhury and J. Slaby. Hoboken, NJ: Wiley-Blackwell.
Sporns, O. 2010. Networks of the Brain. Cambridge, MA: MIT Press.
Sterling, P. 2012. “Allostasis: A Model of Predictive Regulation.” Physiology and Behavior 106:5–15.
Sterling, P., and Freed, M. 2007. “How Robust Is a Neural Circuit?” Visual Neuroscience 24:563–71.
Sterling, P., and Laughlin, S. B. 2015. Principles of Neural Design. Cambridge, MA: MIT Press.
Svoboda, K. 2015. Abstract of “Probing Frontal Cortical Networks during Motor Planning.” Seminar, Center for the Neural Basis of Cognition, November 10. http://www.braininstitute.pitt.edu/event/probing-frontal-cortical-networks-during-motor-planning.
von Békésy, G. 1967. Sensory Inhibition. Princeton, NJ: Princeton University Press.
Zhang, Y., and Chase, S. M. 2015. “Recasting Brain-Machine Interface Design from a Physical Control System Perspective.” Journal of Computational Neuroscience 39:107–18.