
A neural-symbolic perspective on analogy

Published online by Cambridge University Press:  29 July 2008

Rafael V. Borges
Affiliation:
Department of Computing, City University London, Northampton Square, London EC1V 0HB, United Kingdom; Institute of Informatics, Federal University of Rio Grande do Sul, Porto Alegre, RS 91501-970, Brazil. Rafael.Borges.1@soi.city.ac.uk http://www.soi.city.ac.uk/~Rafael.Borges.1
Artur S. d'Avila Garcez
Affiliation:
Department of Computing, City University London, Northampton Square, London EC1V 0HB, United Kingdom. aag@soi.city.ac.uk http://www.soi.city.ac.uk/~aag
Luis C. Lamb
Affiliation:
Institute of Informatics, Federal University of Rio Grande do Sul, Porto Alegre, RS 91501-970, Brazil. LuisLamb@acm.org http://www.inf.ufrgs.br/~lamb

Abstract

The target article criticises neural-symbolic systems as inadequate for analogical reasoning and proposes a model of analogy as transformation (i.e., learning). We accept the importance of learning, but we argue that reasoning and learning do not conflict: integrated, they model analogy much more adequately. From this perspective, modern neural-symbolic systems become the natural candidates for modelling analogy.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2008

The target article identifies two different stages as important for analogical reasoning: the learning of transformations (or relations) between objects, and the application of the acquired knowledge. The importance of learning, that is, of building a knowledge base, is highlighted in the article, as analogical performance improves as more expertise is acquired. As regards the application of knowledge in analogy, the process can be divided into two steps: (1) the recognition of the context, exemplified by the search for the most appropriate relation to be considered, and (2) the further reasoning over a different object, which may be seen either as a search for the most relevant target or as the application of a transformation (as advocated in the article).

Among other works, the authors mention Shastri and Ajjanagadde (1993), an important reference for research on the integration of neural and symbolic approaches to artificial intelligence. Since then, research on neural-symbolic systems has evolved considerably, with a strong focus on the integration of expressive reasoning and robust learning into effective computational systems. Neural-symbolic systems can now be considered an alternative to traditional intelligent systems, enhancing the main features of trainable neural networks with symbolic representation and reasoning. Among the different neural-symbolic systems, we consider those that translate symbolic knowledge into the initial architecture of a neural network, such as Connectionist Inductive Learning and Logic Programming (CILP) (d'Avila Garcez et al. 2002). In CILP, knowledge is represented by a propositional logic program, and the translation sets up a three-layer feed-forward neural network in which input and output neurons represent propositional variables and hidden neurons represent clauses (rules) over those variables, so that the network is semantically equivalent to the original logic program.
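
To make the translation concrete, the following is a minimal sketch, in Python, of the idea behind a CILP-style translation. It is our own illustrative simplification: it uses step (threshold) units rather than CILP's bipolar semi-linear activations and principled weight calculation, and a single forward pass corresponds to one application of the program's immediate-consequence operator.

```python
# Minimal sketch (not the actual CILP algorithm): each clause
# "head if b1 and ... and bn" becomes one hidden unit computing the
# conjunction (AND) of its body atoms; each output atom fires as the
# disjunction (OR) of the hidden units for clauses sharing that head.

def build_network(clauses):
    """clauses: list of (head, [body_atoms]) pairs over propositional atoms.
    Returns a function mapping a set of active input atoms to the set of
    output atoms activated by one forward pass."""
    def forward(active_inputs):
        # Hidden layer: one unit per clause; a unit fires iff every body
        # atom of its clause is active in the input layer.
        hidden = [all(b in active_inputs for b in body) for _, body in clauses]
        # Output layer: an atom fires iff at least one of its clauses fired.
        return {head for (head, _), fired in zip(clauses, hidden) if fired}
    return forward
```

Feeding the output activations back into the input layer and iterating from an empty interpretation approximates the program's fixed-point semantics, which is roughly the sense in which the CILP network is equivalent to the original program.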

In light of these developments, we can propose a neural-symbolic approach to model the procedures involved in the experiments shown in the target article. Instead of focusing on the dichotomy between relations and transformations, we prefer to analyse each stage of the described analogical procedure as subsuming aspects of both learning and reasoning. Regarding the stage of building the knowledge base, neural-symbolic systems would cater for empirical learning with the possibility of integrating background symbolic knowledge about the domain, which can improve performance, as shown in Towell and Shavlik (1994) and d'Avila Garcez et al. (2002).

As for the application of analogical reasoning, in an "A:B::C:D"-like example, we can consider two steps. The first is the recognition of the relation (or transformation) between A and B. Considering, again, a neural-symbolic representation of the possible functions, the system should be able to receive two objects as input, recognise the relation R between them, and keep this information in a short-term memory for immediate use. This kind of mapping with the use of short-term memory can be found in neural-symbolic systems such as Sequential Connectionist Temporal Logic (SCTL) (Lamb et al. 2007), an extension of CILP catering for the representation of temporal logic programs. Once the relation has been identified, the second step can be seen as a simple inference, as described above.

We can show the adequacy of neural-symbolic systems for representing this kind of reasoning by considering the same example used in the target article. The initial learning step should build the knowledge represented by a set of clauses, in which two different atoms (propositional variables) represent each relation: Recognise_R, which holds when a relation R is recognised between two instances, and Apply_R, which denotes the application of R to an input instance. The set of clauses for the example is given below (a code sketch of this clause base follows the list):

  • Cut_Apple if Whole_Apple and Apply_Cut;

  • Recognise_Cut if Whole_Apple and Cut_Apple;

  • Cut_Cheese if Whole_Cheese and Apply_Cut;

  • Recognise_Cut if Whole_Cheese and Cut_Cheese.

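As a concrete illustration, this clause base could be encoded as follows. The sketch works at the symbolic level (one application of the immediate-consequence operator per call), and the atom names simply mirror the list above.

```python
# The four clauses above encoded as (head, body) pairs; atom names mirror the list.
CLAUSES = [
    ("Cut_Apple",     ["Whole_Apple",  "Apply_Cut"]),
    ("Recognise_Cut", ["Whole_Apple",  "Cut_Apple"]),
    ("Cut_Cheese",    ["Whole_Cheese", "Apply_Cut"]),
    ("Recognise_Cut", ["Whole_Cheese", "Cut_Cheese"]),
]

def consequences(facts):
    """One application of the immediate-consequence operator: derive every
    head whose body atoms are all contained in the given set of facts."""
    return {head for head, body in CLAUSES if all(b in facts for b in body)}

# Seeing a whole apple together with a cut apple yields recognition of the relation.
assert consequences({"Whole_Apple", "Cut_Apple"}) == {"Recognise_Cut"}
# A whole cheese together with the instruction to apply Cut yields the cut cheese.
assert consequences({"Whole_Cheese", "Apply_Cut"}) == {"Cut_Cheese"}
```
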
Also, to apply this knowledge in the (A:B::C:D) case, the agent should have some knowledge about the task itself, which can be represented in temporal propositional logic by inserting a clause "(next) Apply_R if Recognise_R" for each relation "R", where "(next)" is the operator referring to the next time point. Therefore, if the system recognises a relation "R" at time point t, it applies that relation when reasoning over the next item, presented at time point t+1. Figure 1 shows an example of a network based on such a program. After the network is built, there is a first step in which information about objects "A" and "B" activates an output neuron representing the relation between them (Recognise_R). In the second step, information about "C" is given as input, together with the relation obtained at the previous time point (e.g., propagated to a context unit [Apply_R] through a recurrent link, as in Elman [1990]). With "C" and "R", the network is capable of inferring "D" according to the stored knowledge.
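
A minimal sketch of this two-step procedure is given below, again at the symbolic level rather than with actual network weights. The recurrent link is simulated by copying each recognised relation into the next time point's inputs as the corresponding Apply_R atom, which is our simplified reading of the clause "(next) Apply_R if Recognise_R".

```python
# Two-step A:B::C:D procedure with an Elman-style context unit, at the
# symbolic level. Relations recognised at time t are fed back as Apply_R
# inputs at time t+1, simulating the clause "(next) Apply_R if Recognise_R".

CLAUSES = [  # same clause base as in the previous sketch
    ("Cut_Apple",     ["Whole_Apple",  "Apply_Cut"]),
    ("Recognise_Cut", ["Whole_Apple",  "Cut_Apple"]),
    ("Cut_Cheese",    ["Whole_Cheese", "Apply_Cut"]),
    ("Recognise_Cut", ["Whole_Cheese", "Cut_Cheese"]),
]
NEXT = {"Recognise_Cut": "Apply_Cut"}  # one temporal clause per relation R

def step(external_inputs, context):
    """One time point: combine the presented objects with the recurrent
    context, derive the heads of all satisfied clauses, and return both
    the derived atoms and the context carried to the next time point."""
    facts = set(external_inputs) | set(context)
    derived = {head for head, body in CLAUSES if all(b in facts for b in body)}
    next_context = {NEXT[a] for a in derived if a in NEXT}
    return derived, next_context

# Time t: the pair A:B (whole apple, cut apple) is presented.
derived, context = step({"Whole_Apple", "Cut_Apple"}, set())
print(derived)   # {'Recognise_Cut'} -- the Cut relation is recognised

# Time t+1: only C (whole cheese) is presented; Apply_Cut comes from the context.
derived, _ = step({"Whole_Cheese"}, context)
print(derived)   # {'Cut_Cheese'}    -- D is inferred
```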

Figure 1. Recurrent network for analogical reasoning.

A simple example like the one above shows that, with a slight change in perspective, a neural-symbolic system can perform the same analogical reasoning proposed in the target article. The benefits are twofold: such a system allows the use of symbolic background knowledge, which is important for modelling cognitive tasks such as language, and it explicitly integrates robust learning and relational reasoning abilities in the same system, as part of what we consider a more appropriate (or complete) modelling of analogy.

Finally, we corroborate the target article's idea that analogy should be seen as an umbrella covering different aspects of cognition, serving also as a way of dealing with the absence of explicitly represented knowledge, even in symbolic settings. Cases such as the use of language illustrate the array of possibilities in the development of models of cognitive behaviour. Sound deductive inference and inference by induction, analogy, and even discovery all have a role to play in this new logic landscape. This constitutes a challenge for different research areas, and in particular for computer science, as suggested by Valiant (2003), according to whom the modelling and integration of different cognitive abilities, such as reasoning and learning, is a great challenge for computer science in this century.

ACKNOWLEDGMENTS

Rafael Borges is funded by City University London, UK. Luis Lamb is partly supported by CNPq, Brazil.

References

D'Avila Garcez, A., Broda, K. & Gabbay, D. (2002) Neural-symbolic learning systems: Foundations and applications. Springer-Verlag.
Elman, J. L. (1990) Finding structure in time. Cognitive Science 14(2):179–211.
Lamb, L., Borges, R. V. & d'Avila Garcez, A. (2007) A connectionist cognitive model for temporal synchronisation and learning. In: Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, pp. 827–32. AAAI Press.
Shastri, L. & Ajjanagadde, V. (1993) From simple associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences 16:417–51.
Towell, G. & Shavlik, J. (1994) Knowledge-based artificial neural networks. Artificial Intelligence 70(1–2):119–65.
Valiant, L. (2003) Three problems in computer science. Journal of the ACM 50(1):96–99.