How and over what timescales does neural reuse actually occur?

Published online by Cambridge University Press: 22 October 2010

Francesco Donnarumma
Affiliation:
Dipartimento di Scienze Fisiche, Università di Napoli Federico II, Complesso Universitario Monte Sant'Angelo, I-80126 Napoli, Italy. donnarumma@na.infn.it, prevete@na.infn.it, trau@na.infn.it; http://vinelab.na.infn.it
Roberto Prevete
Affiliation:
Dipartimento di Scienze Fisiche, Università di Napoli Federico II, Complesso Universitario Monte Sant'Angelo, I-80126 Napoli, Italy. donnarumma@na.infn.it, prevete@na.infn.it, trau@na.infn.it; http://vinelab.na.infn.it
Giuseppe Trautteur
Affiliation:
Dipartimento di Scienze Fisiche, Università di Napoli Federico II, Complesso Universitario Monte Sant'Angelo, I-80126 Napoli, Italy. donnarumma@na.infn.it, prevete@na.infn.it, trau@na.infn.it; http://vinelab.na.infn.it

Abstract

We isolate some critical aspects of the notion of reuse in Anderson's massive redeployment hypothesis (MRH). We notice that the hypothesis leaves open how local neural circuits are actually rearranged at a timescale comparable with the reactivity timescale of the organism. We propose the concept of a programmable neural network as a solution.

Type: Open Peer Commentary
Copyright: © Cambridge University Press 2010

Reuse, working, function

Merriam-Webster's Collegiate Dictionary, 11th edition, gives the definition of reuse as: “to use again, especially in a different way or after reclaiming or reprocessing.” Thus, for example, the well-known evolutionary sequence from jaw bones of reptiles to the ossicles of mammalian ears may be taken as an instance of an acoustic reuse of the manducatory reptilian jaw bones after an (extensive and) exaptive “reprocessing.” Is this the use of “reuse” (no pun intended) in the target article?

Notice that, in the above example, reuse completely obliterates the original use. On the contrary, the overwhelming connotation of the term one gleans from the target article as a whole is: “new use or uses, without losing the original function.” In the article's Note 5, Anderson clarifies the meaning of working: “brain regions have fixed low-level functions (‘workings’) that are put to many high-level ‘uses’.”

“Function” or “functionalities” occur in contexts in which it is difficult to separate their meaning from “working,” or from cortical bias, except on the basis of the granularity of the neural circuits considered: “working” is used for local circuits; “function,” for overall cortical, cognitive behavior.

Drawing on numerous excerpts of the article, we summarize the gist of the reuse idea in the massive redeployment hypothesis (MRH), and stress the timescale aspect, as follows: The brain – at least, but not exclusively, in sensorimotor tasks – obtains its enormously diversified functional capabilities by rearranging in different ways (i.e., putting to different uses) local, probably small, neural circuits endowed with essentially fixed mini-functionalities, identified as “workings,” and does so on a timescale comparable with the reactivity timescale of the organism.

There is one exception where reuse seems to originate in the circuit itself – as contrasted with the empirical rejection of the idea that “small neural regions [are] locally polyfunctional” (sect. 1.1, para. 5) – and not in the putting together of circuits: “in at least some cases, circuit reuse is arranged such that different data – both information pertaining to different targets, as well as information about the same targets but at different levels of abstraction – can be fed without translation to the same circuits and still produce useful outputs” (sect. 6.2, para. 9; emphasis Anderson's).

A surprising disconnection occurs, though, with respect to timescales. Indeed, Anderson states: “Massive redeployment is a theory about the evolutionary emergence of the functional organization of the brain” (sect. 6.4, para. 1). But the actual reuse of neural circuits must occur at the timescale of the organism's intercourse with the environment, as we stressed above; “evolutionary emergence” by itself does not explain how the mechanism of reuse is deployed in real time.

Synaptic plasticity is of no use here, both because it operates on a timescale slower than the reactivity timescale and because it alters the synaptic structure of the neural tissue, so that the previous function is lost. Indeed, in the target article's Abstract, plasticity is very aptly distinguished from reuse and, therefore, from learning.

Need for programming

The conundrum implicit in the MRH, succinctly stated in the question we chose as our title, is as follows: Evolutionary or exaptive processes have determined a structure of synaptic connections, which must be considered fixed over current reactivity timescales, and which brings about all possible useful “arrangements” of local circuits giving rise to the multiplicity of cognitive functions. But how can a fixed structure deploy, at reactivity time, selectivity over the specific pre-wired arrangements? How can a specific routing of connections be selectively enabled at reactivity time, if the connections are fixed?

The answer is: by programming. Anderson almost says so: “I have used the metaphor of component reuse in software engineering” (sect. 6.4, para. 3; our emphasis) – but he then argues against taking the metaphor literally.

Fixed-weight programmable networks

We propose a model that allows real-time programmability in fixed-weight networks, thus solving the conundrum. The model is realized in the environment of Continuous Time Recurrent Neural Networks (CTRNNs). CTRNNs are well-known, neurobiologically plausible modeling tools – as attested, for instance, by Dunn et al. (2004). The architecture we developed sustains a programming capability usually associated only with algorithmic, symbolic systems. By means of this architecture one can design either local circuits or networks of local circuits capable of exhibiting on-the-fly qualitative changes of behavior (function), caused and controlled by auxiliary (programming) inputs, without changing either the connectivity or the weights associated with the connections.
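
For reference, in a standard formulation (of the general kind used in the CTRNN literature cited above) the dynamics of the i-th CTRNN neuron read

\tau_i \, \dot{y}_i = -y_i + \sum_j w_{ij} \, \sigma(y_j + \theta_j) + I_i,

where $y_i$ is the state of neuron $i$, $\tau_i$ its time constant, $w_{ij}$ the fixed synaptic weight from neuron $j$ to neuron $i$, $\sigma$ a sigmoidal activation, $\theta_j$ a bias, and $I_i$ an external input. The point exploited below is that the fixed weights $w_{ij}$ enter the dynamics only through the products $w_{ij}\,\sigma(\cdot)$.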

The main idea underlying this approach is as follows. The post-synaptic input to biological neurons is usually modeled in artificial neural networks – and it is so in CTRNNs – as a sum of products between pre-synaptic signals originating from other neurons and the weights associated with the synapses. So the behavior of a network is grounded in sums of products between pre-synaptic signals and weights. In the proposed architecture, we “pull out” the multiplication operation by using auxiliary (interpreting) CTRNN sub-networks that compute the product of the output of the pre-synaptic neuron and the synaptic weight. In this way, one obtains a Programmable Neural Network (PNN) architecture with two kinds of input lines: programming input lines, fed to the interpreting CTRNN sub-networks, in addition to standard data input lines. As a consequence, a PNN changes on the fly the mapping (working/function) it performs on standard input data, on the basis of what is being fed into its programming input lines. Notice that a PNN is strictly fixed-weight. More importantly, notice that the two kinds of input signals differ only on a contextual basis: if signals are fed to the appropriate lines, they will be interpreted as code, but – as in programming practice – they have the nature of data and, as such, can be processed or originated by other parts of a complex network.
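
A minimal sketch of the mechanism may help. The following toy code is our illustration only: the names, the Euler discretization, and the direct multiplication are ours, and in the actual PNN the products are computed by fixed-weight interpreting CTRNN sub-networks rather than written out explicitly. It contrasts a standard fixed-weight CTRNN step with a “programmable” step in which the role of the weight matrix is played by signals on programming input lines:

```python
import numpy as np

def sigma(x):
    """Logistic activation."""
    return 1.0 / (1.0 + np.exp(-x))

def ctrnn_step(y, W, I, tau, dt=0.01):
    """One Euler step of a standard fixed-weight CTRNN:
    tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j) + I_i."""
    return y + (dt / tau) * (-y + W @ sigma(y) + I)

def pnn_step(y, P, I, tau, dt=0.01):
    """Schematic PNN step: the signals P on the programming input
    lines take over the role of the weight matrix, so the products
    P_ij * sigma(y_j) replace W_ij * sigma(y_j) above. (In the real
    architecture this multiplication is itself computed by
    fixed-weight interpreting CTRNN sub-networks.)"""
    return y + (dt / tau) * (-y + P @ sigma(y) + I)

# Same data input I, no weight changes, two different "programs":
I = np.array([0.5, 0.0, 0.0])   # standard data input lines
tau = np.ones(3)
P_identity = np.eye(3)          # program 1: route each channel to itself
P_swap = np.eye(3)[[1, 0, 2]]   # program 2: swap channels 0 and 1

for P in (P_identity, P_swap):
    y = np.zeros(3)
    for _ in range(1000):
        y = pnn_step(y, P, I, tau)
    print(sigma(y))             # the two programs yield different mappings
```

Feeding different signals to the programming lines thus changes the computed mapping on the fly, while nothing about the connectivity is altered; and since the programming signals are just signals, they can themselves be produced by other sub-networks.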

The proposed solution

By using PNNs, one can develop an artificial neural network composed of fixed, that is, non-programmable, local neural circuits which can be rearranged in different ways at “run-time” by programmable, yet still fixed-weight, routing networks. The local circuits are thus reused in different arrangements, giving rise to different overall functions and cognitive tasks. But PNNs are also hypothetical models of fully programmable local networks, thus suggesting an answer to the “exception” we mentioned above: there we take “without translation” to mean that those data are fed to the programming inputs – an enticing possibility.
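
To make the routing picture concrete, here is an equally hypothetical toy, in which explicit control flow stands in for what, in a genuine PNN, would be a fixed-weight routing network driven by programming inputs. Two fixed local circuits are composed in different orders depending on a one-bit program, so the same circuits serve two different overall functions:

```python
import math

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))

def local_f(x):
    """Fixed local circuit 1: a single sigmoidal unit with fixed
    weight and bias (its 'working' never changes)."""
    return sigma(2.0 * x - 1.0)

def local_g(x):
    """Fixed local circuit 2: a fixed attenuating unit."""
    return 0.5 * x

def routed(x, program_bit):
    """Toy router: the program selects the arrangement of the same
    fixed circuits. In a PNN this selection would itself be realized
    by fixed-weight interpreting sub-networks, not by an if-branch."""
    return local_f(local_g(x)) if program_bit else local_g(local_f(x))

print(routed(0.3, 0), routed(0.3, 1))  # same circuits, two functions
```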

Bibliographical notice

The seminal motivations for programming neural networks, in a more general setting than that of reuse, are expounded in Tamburrini and Trautteur (2007) and Garzillo and Trautteur (2009); some toy realizations were presented in Donnarumma et al. (2007), and a full implementation of the concept is reported in Donnarumma (2010).

References

Donnarumma, F. (2010) A model for programmability and virtuality in dynamical neural networks. Doctoral dissertation in Scienze Computazionali ed Informatiche (Computational and Information Sciences), Dipartimento di Matematica e Applicazioni “R. Caccioppoli,” Università di Napoli Federico II. Available at: http://people.na.infn.it/~donnarumma/files/donnarumma09model.pdf
Donnarumma, F., Prevete, R. & Trautteur, G. (2007) Virtuality in neural dynamical systems. Poster presented at the International Conference on Morphological Computation, ECLT, Venice, Italy, March 26–28, 2007. Available at: http://vinelab.na.infn.it/research/pubs/donnarumma07virtuality.pdf
Dunn, N. A., Lockery, S. R., Pierce-Shimomura, J. T. & Conery, J. S. (2004) A neural network model of chemotaxis predicts functions of synaptic connections in the nematode Caenorhabditis elegans. Journal of Computational Neuroscience 17(2):137–47.
Garzillo, C. & Trautteur, G. (2009) Computational virtuality in biological systems. Theoretical Computer Science 410:323–31.
Tamburrini, G. & Trautteur, G. (2007) A note on discreteness and virtuality in analog computing. Theoretical Computer Science 371:106–14.