
Parity still isn't a generalisation problem

Published online by Cambridge University Press: 01 April 1998

R. I. Damper
Affiliation:
Cognitive Sciences Centre and Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, England. rid@ecs.soton.ac.uk, http://isis.ecs.soton.ac.uk/

Abstract


Clark & Thornton take issue with my claim that parity is not a generalisation problem, and that nothing can be inferred about back-propagation in particular, or learning in general, from failures of parity generalisation. They advance arguments to support their contention that generalisation is a relevant issue. In this continuing commentary, I examine generalisation more closely in order to refute these arguments. Different learning algorithms will have different patterns of failure: back-propagation has no special status in this respect. This is not to deny that a particular algorithm might fortuitously happen to produce the “intended” function in an (oxymoronic) parity-generalisation task.
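The abstract's central point can be illustrated numerically. Below is a minimal sketch (my own construction, not code from the article): a small multilayer perceptron is trained by vanilla back-propagation on 4-bit parity with four input patterns held out, then tested on those held-out patterns. Because flipping any single input bit flips the parity, the training set places no logical constraint on the held-out labels, so whatever the network predicts for them is fortuitous rather than "generalisation". The architecture, learning rate, and choice of held-out patterns are all arbitrary assumptions for the demonstration.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# All 16 four-bit patterns and their parity labels.
X = np.array(list(itertools.product([0, 1], repeat=4)), dtype=float)
y = (X.sum(axis=1) % 2).reshape(-1, 1)

# Hold out four patterns (arbitrary choice); train on the remaining twelve.
held_out = [1, 6, 11, 12]
train = [i for i in range(16) if i not in held_out]
Xtr, ytr = X[train], y[train]
Xte, yte = X[held_out], y[held_out]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer of 8 sigmoid units, trained by plain gradient
# descent on squared error, i.e. textbook back-propagation.
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5

for epoch in range(20000):
    h = sigmoid(Xtr @ W1 + b1)                # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - ytr) * out * (1 - out)     # backward pass
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out);  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (Xtr.T @ d_h);  b1 -= lr * d_h.sum(axis=0)

def predict(X):
    return (sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5).astype(float)

print("training accuracy:", (predict(Xtr) == ytr).mean())   # typically 1.0
print("held-out accuracy:", (predict(Xte) == yte).mean())   # arbitrary
```

Rerunning with different random seeds or a different held-out set changes the held-out score, which is the point: the training data are equally consistent with every possible labelling of the unseen patterns, so no algorithm's success or failure there says anything about its power to generalise.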

Type
Continuing Commentary
Copyright
© 1998 Cambridge University Press