Reuse, working, function
Merriam-Webster's Collegiate Dictionary, 11th edition, gives the definition of reuse as: “to use again, especially in a different way or after reclaiming or reprocessing.” Thus, for example, the well-known evolutionary sequence from jaw bones of reptiles to the ossicles of mammalian ears may be taken as an instance of an acoustic reuse of the manducatory reptilian jaw bones after an (extensive and) exaptive “reprocessing.” Is this the use of “reuse” (no pun intended) in the target article?
Notice that, in the above example, reuse completely obliterates original use. On the contrary, the overwhelming connotation of the term one gleans from an overview of the target article is: “new use or uses, without losing the original function.” In the article's Note 5, Anderson clarifies the meaning of working: “brain regions have fixed low-level functions (‘workings’) that are put to many high-level ‘uses’.”
“Function” or “functionalities” occur in contexts in which it is difficult to separate their meanings from working, or cortical bias, except on the basis of the granularity of the neural circuits considered. “Working” is used for local circuits; “function,” for overall cortical, cognitive behavior.
Drawing on numerous excerpts of the article, we summarize the gist of the reuse idea in the massive redeployment hypothesis (MRH), and stress the timescale aspect, as follows: The brain – at least, but not exclusively, in sensorimotor tasks – obtains its enormously diversified functional capabilities by rearranging in different ways (i.e., putting to different uses) local, probably small, neural circuits endowed with essentially fixed mini-functionalities, identified as “workings,” and does so on a timescale comparable with the reactivity timescale of the organism.
There is one exception where reuse seems to originate in the circuit itself – as contrasted with the empirical rejection of the idea that “small neural regions [are] locally polyfunctional” (sect. 1.1, para. 5) – and not in the putting together of circuits: “in at least some cases, circuit reuse is arranged such that different data – both information pertaining to different targets, as well as information about the same targets but at different levels of abstraction – can be fed without translation to the same circuits and still produce useful outputs” (sect. 6.2, para. 9; emphasis Anderson's).
A surprising disconnection occurs, though, with respect to timescales. Indeed, Anderson states: “Massive redeployment is a theory about the evolutionary emergence of the functional organization of the brain” (sect. 6.4, para. 1). But the actual reuse of neural circuits must occur at the timescale of the organism's intercourse with the environment, as we stressed above. An “evolutionary emergence” does not, by itself, explain how the mechanism of reuse is deployed in real time.
Synaptic plasticity is of no help here, both because it operates on a timescale slower than the reactivity timescale and because it alters the synaptic structure of the neural tissue, so that the previous function is lost. Indeed, plasticity is very aptly distinguished, in the target article's Abstract, from reuse and, therefore, from learning.
Need for programming
The conundrum implicit in the MRH, succinctly stated in the quote we chose for our title, is as follows: Evolutionary or exaptive processes have determined a structure of synaptic connections, which must be considered fixed over current reactivity timescales, and which brings about all possible useful “arrangements” of local circuits, giving rise to the multiplicity of cognitive functions. But how can a fixed structure select, at reactivity time, among the specific pre-wired arrangements? How can a specific routing of connections be selectively enabled at reactivity time if the connections are fixed?
The answer is: by programming. Anderson almost says so: “I have used the metaphor of component reuse in software engineering” (sect. 6.4, para. 3; our emphasis) – but then he argues against taking the metaphor literally.
Fixed-weight programmable networks
We propose a model that allows real-time programmability in fixed-weight networks, thus solving the conundrum. The model is realized in the environment of Continuous Time Recurrent Neural Networks (CTRNNs). CTRNNs are well-known, neurobiologically plausible modeling tools, as attested, for instance, by Dunn et al. (2004). The architecture we developed sustains a programming capability usually associated only with algorithmic, symbolic systems. By means of this architecture one can design either local circuits or networks of local circuits capable of exhibiting on-the-fly qualitative changes of behavior (function), caused and controlled by auxiliary (programming) inputs, without changing either the connectivity or the weights associated with the connections.
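For reference, in one widely used formulation (Beer's), the dynamics of a CTRNN with fixed weights w_ji read:

```latex
\tau_i \frac{dy_i}{dt} = -y_i + \sum_j w_{ji}\,\sigma(y_j + \theta_j) + I_i
```

where y_i is the state of neuron i, tau_i its time constant, theta_j a bias, sigma the logistic function, and I_i an external input. The idea sketched next amounts to replacing each fixed product w_ji * sigma(y_j + theta_j) with the output of an interpreting sub-network driven by a programming input.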
The main idea underlying this approach is as follows: The post-synaptic input to biological neurons is usually modeled in artificial neural networks – and it is so in CTRNNs – as a sum of products between pre-synaptic signals originating from other neurons and the weights associated with the synapses. The behavior of a network is thus grounded in sums of products between pre-synaptic signals and weights. In the proposed architecture, we “pull out” the multiplication operation by using auxiliary (interpreting) CTRNN sub-networks that compute the product of the output of the pre-synaptic neuron and the synaptic weight. In this way, one obtains a Programmable Neural Network (PNN) architecture with two kinds of input lines: programming input lines, fed to the interpreting CTRNN sub-networks, in addition to standard data input lines. As a consequence, a PNN changes, on the fly, the mapping (working/function) it performs on standard input data, on the basis of what is being fed into its programming input lines. Notice that a PNN is strictly fixed-weight. More importantly, notice that the two kinds of input signals differ only contextually: if signals are fed to the appropriate lines, they will be interpreted as code, but – as in programming practice – they have the nature of data and, as such, can be processed or originated by other parts of a complex network.
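A minimal sketch of this idea follows, in Python with NumPy. It is our own illustration, not the published implementation: it idealizes the interpreting sub-network as an exact product (in the actual PNN architecture it is a fixed-weight CTRNN that only approximates multiplication) and uses a simple Euler discretization; all names are ours.

```python
import numpy as np

def sigma(x):
    """Logistic activation."""
    return 1.0 / (1.0 + np.exp(-x))

def interpreting_multiplier(pre, prog):
    """Stand-in for an interpreting CTRNN sub-network: it receives a
    pre-synaptic signal on a data line and a 'weight' on a programming
    line, and returns their product (exactly here; approximately in a
    real PNN)."""
    return pre * prog

def pnn_euler_step(y, prog, I, tau, theta, dt=0.05):
    """One Euler step of a programmable CTRNN.

    The n x n array `prog` of programming inputs plays the role that a
    fixed weight matrix plays in a standard CTRNN: the post-synaptic
    input to neuron i is the sum over j of interpreting-multiplier
    outputs for (sigma(y_j + theta_j), prog[i, j]). The network itself
    holds no modifiable weights."""
    pre = sigma(y + theta)                              # pre-synaptic signals
    post = interpreting_multiplier(pre[None, :], prog).sum(axis=1)
    return y + dt * (-y + post + I) / tau
```

Feeding a different `prog` array changes the mapping that the same fixed network computes on its data lines; this is the precise sense in which the programming inputs are “code.”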
The proposed solution
By using PNNs, one can develop an artificial neural network composed of fixed, that is, non-programmable, local neural circuits which can be rearranged in different ways at “run-time” by programmable, and still fixed-weight, routing networks. The local circuits will thus be reused in different arrangements, giving rise to different overall functions and cognitive tasks. But PNNs are also hypothetical models for fully programmable local networks, thus suggesting an answer to the “exception” we mentioned above: there, we take “without translation” to mean that those data are fed to the programming inputs – an enticing possibility.
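To make the run-time rearrangement concrete, here is a toy, self-contained illustration of routing (again our own construction, not the authors' implementation): two fixed local circuits and a fixed multiplicative routing stage whose programming lines g select which circuit drives the read-out. In a full PNN the gating products would themselves be computed by interpreting sub-networks.

```python
def circuit_a(x):
    """Fixed local circuit 1: a smooth AND-like map."""
    return x[0] * x[1]

def circuit_b(x):
    """Fixed local circuit 2: a smooth OR-like map."""
    return x[0] + x[1] - x[0] * x[1]

def routed_output(x, g):
    """Fixed-weight routing: the programming inputs g gate which local
    circuit reaches the read-out; nothing in the network changes except
    the values on the programming lines."""
    return g[0] * circuit_a(x) + g[1] * circuit_b(x)

x = (0.9, 0.2)
print(routed_output(x, g=(1.0, 0.0)))   # behaves as circuit A: ~0.18
print(routed_output(x, g=(0.0, 1.0)))   # behaves as circuit B: ~0.92
```

The overall function changes at reactivity time, while connectivity and weights stay fixed; only the signals on the programming lines differ between the two calls.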
Bibliographical notice
The seminal motivations for programming neural networks, in a more general setting than that of reuse, are expounded in Tamburrini and Trautteur (2007) and Garzillo and Trautteur (2009); some toy realizations were presented in Donnarumma et al. (2007), and a full implementation of the concept is reported in Donnarumma (2010).