
Reciprocity between second-person neuroscience and cognitive robotics

Published online by Cambridge University Press:  25 July 2013

Peter Ford Dominey*
Affiliation:
INSERM U846, Integrative Neuroscience and Robotics, INSERM Stem Cell and Brain Research Institute, 69675 Bron cedex, France. peter.dominey@inserm.fr http://www.sbri.fr/members/peter-ford-dominey.html

Abstract

Just as there is "dark matter" in the neuroscience of individuals engaged in dynamic interactions, similar dark matter is present in the domain of interaction between humans and cognitive robots. Progress in second-person neuroscience will contribute to the development of robotic cognitive systems, and such developed robotic systems will in turn be used to test the validity of the underlying theories.

Type: Open Peer Commentary

Copyright © Cambridge University Press 2013

The second-person neuroscience framework presented by Schilbach et al. is of particular interest for researchers in the domain of cognitive robotics for two reasons that constitute a strong reciprocity between these domains. First, the “dark matter” in second-person neuroscience is also present in the domain of second-person robotics, and it is likely that advances in the neuroscience of second-person systems will contribute to robotic cognitive systems. Second, as these robot cognitive systems become increasingly advanced, they will become increasingly useful as tools in the pursuit of second-person neuroscience.

As second-person neuroscience matures, it will be able to characterize the human cognitive system so as to inform the implementation of artificial embodied systems capable of second-person cognition. Certain robot cognitive systems have primitive notions of "self" and "other" that can be revealed in mechanisms for shared planning (Dominey & Warneken 2011; Lallée et al. 2012). We have developed robot systems that can learn shared plans specifying the coordinated actions of self and other, and can use these plans flexibly, including by reversing roles. However, the underlying mechanisms are impoverished: the robot cognitive system remains somewhat spectatorial, coupled to the other only through the succession of alternating actions in the shared plan. It is closer to a high-functioning individual with autism who prefers to follow an explicit, fixed plan rather than adapt in real time to open-ended social interaction.
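The shared-plan representation described above can be sketched as a simple data structure. The following is a hypothetical, minimal sketch (the names `Step` and `reverse_roles` are illustrative, not the published implementation): a plan is an ordered list of steps, each assigned to the abstract role "self" or "other", and role reversal amounts to swapping the two roles.

```python
from dataclasses import dataclass

SELF, OTHER = "self", "other"

@dataclass
class Step:
    agent: str   # the abstract role: "self" or "other"
    action: str  # e.g. "hold box open", "retrieve toy"

def reverse_roles(plan):
    """Swap the self/other roles so the robot can take the other's part."""
    swap = {SELF: OTHER, OTHER: SELF}
    return [Step(swap[s.agent], s.action) for s in plan]

# A learned shared plan: the robot holds the box open, the partner retrieves.
plan = [Step(SELF, "hold box open"), Step(OTHER, "retrieve toy")]

# After role reversal, the robot retrieves while the partner holds the box.
reversed_plan = reverse_roles(plan)
```

Because the roles are symbolic, the same plan supports either side of the cooperation, which is the sense in which "self" and "other" are revealed in shared planning.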

In their target article, Schilbach et al. outline a program for second-person neuroscience from which we can extract the following principles for second-person robot cognition:

1. The system should be motivated to socially engage. This is related to the intrinsic motivation to share intentions described in Tomasello (2009), cited in the target article.

2. This engagement is driven by the perception of social affordances.

3. Social affordances derive from the interaction dynamics of the system that is constituted by the two agents.

4. This requires that the system can perceive the other as a subject, which is more than recognizing the other as an agent. Instead, the other agent must be recognized as affording social interaction. Schilbach et al. stress that social interaction involves contingency between self and other. The detection of contingency is present in the earliest stages of infancy and plays a central role in the infant's elaboration of its ecological self and its interpersonal or social self.

5. Once the system has perceived these affordances, it must engage. Crucially, what is the nature of the mechanism for engagement? Rochat (2010) importantly notes that in this context, Merleau-Ponty (1967) breaks with a "self"/"other" distinction, and suggests instead that one's own body perception and representation is fundamentally inseparable from, and melded with, the perception and representation of others, in a process that he refers to as mutual alienation. "In the perception of others, my body and the body of others are coupled, as if performing and acting in concert" (Merleau-Ponty 1967, p. 24, translated from the French in Rochat 2010, p. 741).

6. Finally, the system must have a notion of what Schilbach et al. refer to as emotional engagement. We can consider that this is related to the intrinsic motivation to share experiences (Tomasello 2009), partially overlapping with point (1). Interestingly, the implementation of this motivation can be based on the lower-level detection of contingencies, creating a form of internal reward for their detection. This would apply both for self-contingencies (motivating exploration) and for mutual self-other contingencies, extending motivated exploration and engagement into the social domain.

As a first step toward second-person robotics, responding to the six requirements defined above, we can provide the cognitive system with mechanisms for contingency (correlation) detection. These can first be applied to the self-contingencies that arise during development, for example, when limb motion leads to perfectly contingent vision and proprioception, allowing the system to begin to construct a self-model (Rochat & Striano 2001). This capacity can then be gracefully extended to provide the basis for detecting self-object and self-other contingencies. Again, Schilbach et al. stress that social engagement and interaction rely crucially on contingent responses. Thus, as the system detects contingent responses in the other (e.g., gaze following, imitation of actions), it will construct the Merleau-Ponty mutual relation, in which contingent relations and co-regulations form the basis for social games and norms in an extended self-other shared body schema. Such a robotic cognitive system could then be demonstrated to display child-like responses in the establishment of social interaction games, including negative emotional responses when the rules of the games are violated.
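A minimal sketch of such a contingency (correlation) detector, under the assumption that motor commands and sensory feedback are available as time series: contingency is scored as the peak lagged correlation between the two traces, and an internal reward is generated when contingency is detected. All function names and thresholds here are illustrative assumptions, not a published implementation.

```python
import numpy as np

def contingency_score(motor, sensory, max_lag=5):
    """Score contingency as the peak normalized lagged correlation between
    a motor command trace and a sensory feedback trace. Perfectly contingent
    feedback (self-motion both seen and felt) scores near 1."""
    motor = (motor - motor.mean()) / (motor.std() + 1e-9)
    sensory = (sensory - sensory.mean()) / (sensory.std() + 1e-9)
    n = len(motor)
    best = 0.0
    for lag in range(max_lag + 1):
        r = float(np.dot(motor[:n - lag], sensory[lag:])) / (n - lag)
        best = max(best, r)
    return best

def intrinsic_reward(motor, sensory, threshold=0.5):
    """Reward the detection of contingency, motivating further engagement."""
    score = contingency_score(motor, sensory)
    return score if score > threshold else 0.0

rng = np.random.default_rng(0)
cmd = rng.standard_normal(200)                     # simulated motor commands
proprio = np.roll(cmd, 2) + 0.1 * rng.standard_normal(200)  # lagged, noisy echo
unrelated = rng.standard_normal(200)               # feedback from elsewhere
```

The same detector applies unchanged to self-contingencies (motor command versus proprioception) and to self-other contingencies (own action versus the partner's contingent response), which is the graceful extension the paragraph describes.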

Such robotic systems will be used to validate second-person theories when tested with naïve subjects, and thereby contribute to shining light on the dark matter of second-person neuroscience. A principal advantage of robots in second-person neuroscience is that their behavior can be parametrically controlled within and between subjects. Whereas it is difficult to force experimental confederates to behave in controlled ways (e.g., "move your eyes but not your head"), robots can produce such behavior reliably. In this context, we have recently studied human behavior in a second-person context during human-robot interaction (Boucher et al. 2012). The goal was to determine whether, during cooperative physical interaction, humans would exploit the gaze of their robot cooperator in order to reach more quickly to an object whose location was indicated by both the robot's speech and its gaze. We established that in ecological human–human interactions, naïve subjects indeed exploited gaze cues from their cooperation partner. We then implemented a human-based gaze heuristic for the oculomotor system of the robot, and demonstrated that naïve subjects benefit from robot gaze in the same way they do from human gaze (Boucher et al. 2012). Interestingly, the experiments also revealed that aspects of eye and head coordination in human gaze are less deterministic than we initially considered, and hence pointed toward a refinement of the characterization of human gaze in cooperative physical interaction.
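The gaze heuristic is described above only at a high level. As an illustration of the general idea, a gaze-cueing policy can be as simple as orienting the eyes toward the target some fixed lead time before naming it, so the partner can exploit the early spatial cue. All timings, names, and event formats below are hypothetical, not the published parameters.

```python
# Hypothetical sketch of a gaze-cueing policy for a cooperative robot:
# the robot looks at the target before naming it, giving the human
# partner an early spatial cue to exploit.

GAZE_LEAD_MS = 800  # assumed head-start of gaze over speech (illustrative)

def cue_target(target_name, speak_at_ms):
    """Return a schedule of (time_ms, event) pairs for one referring act."""
    return [
        (speak_at_ms - GAZE_LEAD_MS, f"gaze:{target_name}"),
        (speak_at_ms, f"say:'take the {target_name}'"),
    ]

# Gaze shifts to the target 800 ms before the verbal instruction.
schedule = cue_target("red block", speak_at_ms=2000)
```

Because every timing parameter is explicit, such a policy can be varied parametrically within and between subjects, which is exactly the experimental control over confederate behavior that the paragraph identifies as the advantage of robots.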

Thus, the development of second-person neuroscience as proposed by Schilbach et al. also helps to lay the groundwork for a reciprocal interaction between neuroscience and robotics, where neuroscience helps to define the system, and robotics provides a novel tool for implementation and investigation of the emerging hypotheses.

References

Boucher, J. D., Pattacini, U., Lelong, A., Bailly, G., Elisei, F., Fagel, S., Dominey, P. F. & Ventre-Dominey, J. (2012) I reach faster when I see you look: Gaze effects in human–human and human–robot face-to-face cooperation. Frontiers in Neurorobotics 6:3.
Dominey, P. F. & Warneken, F. (2011) The basis of shared intentions in human and robot cognition. New Ideas in Psychology 29(3):260–74.
Lallée, S., Pattacini, U., Boucher, J. D., Lemaignan, S., Lenz, A., Melhuish, C., Natale, L., Skachek, S., Hamann, K., Steinwender, J., Sisbot, E. A., Metta, G., Alami, R., Warnier, M., Guitton, J., Warneken, F. & Dominey, P. F. (2012) Towards a platform-independent cooperative human–robot interaction system: III. An architecture for learning and executing actions and shared plans. IEEE Transactions on Autonomous Mental Development 4(3):239–53.
Merleau-Ponty, M. (1967) Les relations avec autrui chez l'enfant [The child's relations with others]. Les cours de Sorbonne. Centre de Documentation Universitaire.
Rochat, P. (2010) The innate sense of the body develops to become a public affair by 2–3 years. Neuropsychologia 48:738–45.
Rochat, P. & Striano, T. (2001) Perceived self in infancy. Infant Behavior and Development 23:513–30.
Tomasello, M. (2009) Why we cooperate. MIT Press.