A Symbolic-Connectionist Model of Relation Discovery
Leonidas A. A. Doumas ([email protected])
John E. Hummel ([email protected])
Department of Psychology, University of California, Los Angeles
405 Hilgard Ave., Los Angeles, CA 90095-1563

Abstract

Relational reasoning is central to human cognition. Numerous computational models address the component processes of relational reasoning; however, these models require the modeler to hand-code the vocabulary of relations on which the model operates, and the acquisition of relational concepts remains poorly understood. We present a theory of relation discovery instantiated in a symbolic-connectionist model, which learns structured representations of attributes and relations from unstructured distributed representations of objects by a process of comparison, and subsequently refines these representations through a process of mapping-based schema induction.

Keywords: relations; learning; neural network; symbolic processing; structured representations

Relational Reasoning

Virtually every conscious thought you have expresses a relation. From the mundane, like "I'm late for work," to the sublime, like Cantor's proof that the cardinal number of the real numbers is greater than that of the integers, we are constantly representing and reasoning with relations. Relational thinking is so commonplace that it is easy to take for granted, but the ability to form and manipulate relational representations appears late in human development (Gentner & Rattermann, 1991; Smith, 1989) and is a late evolutionary development that appears to distinguish human cognition from that of other animals (Holyoak & Thagard, 1995; Thompson & Oden, 2000). An important theme that has emerged from the study of relational thinking, both empirical and theoretical, is that the kinds of problems a person (or model) can solve depend critically on what the person (or model) can and does represent.

Relations are Hard to Learn

However, little empirical work, and almost no theoretical work, has addressed the problem of how we acquire relational concepts. Models based on relational representations (e.g., Falkenhainer, Forbus, & Gentner, 1989; Hummel & Holyoak, 1997, 2003) have made important strides in elucidating the nature of relational thought. However, these models are all granted a vocabulary of relational representations by the modeler; they do not learn the relations they need for themselves. Although they address our capacity to manipulate relational representations, they do not address the question of where these representations come from in the first place.

Learning relational concepts is difficult for two reasons. The first begins with the very definition of a relation: a relation is a property that holds over a collection of arguments; it is never observable in a single object, so relation learning is vastly underconstrained by the examples from which relations are learned. Take, for example, the relation same-shape (x, y). When universally quantified, it takes any shape as input, and therefore its truth value (i.e., whether x and y are the same shape) is completely uncorrelated with the specific visual features of any shape (or any pair of shapes, for that matter). As a result, it cannot be learned from the simple covariation of visual features.

The second difficulty stems from the properties of relational representations, which are structure sensitive and semantically rich (Hummel & Holyoak, 1997). In a relational expression, the meaning of the individual relational roles and their fillers is invariant with their arrangement in the expression (i.e., they are independent), but the meaning of the expression as a whole is a function of both the elements that compose the expression and their arrangement (i.e., the bindings of fillers to relational roles). Consider the statements chase (Bill, Joe) and chase (Joe, Bill).
We can appreciate that they mean different things (even though they are composed of the same elements) because we can appreciate that the bindings of objects to relational roles are reversed in the two statements. We can also appreciate that the individual elements chase, Joe, and Bill mean the same things in both statements (despite the fact that they appear in different compositions). Additionally, we can cast these elements in novel configurations, for example generalizing the chase relation to novel arguments (e.g., chase (spoon, sprocket)). Thus, a relational concept must be represented independently of the examples from which it is learned, must be able to take arguments from both within and outside the set on which it was learned (i.e., we must be able to extrapolate the relation to novel values), and must specify the bindings of its arguments to its relational roles explicitly. Relational representations also explicitly specify the semantic content of objects and relational roles (e.g., the lover and beloved roles of love (x, y) or the liker and liked roles of like (x, y)): we know what it means to be a lover, and that knowledge is part of our representation of the relation itself. Consequently, it is easy to appreciate that the patient (i.e., killed) role of murder (x, y) is like the patient role of manslaughter (x, y), even though the agent roles differ (i.e.,
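The three requirements above (explicit role bindings, independence of roles and fillers, and extrapolation to novel arguments) can be made concrete with a minimal sketch. This is purely illustrative, not the authors' model; the Proposition class and the role names "chaser" and "chased" are hypothetical.

```python
# Hypothetical sketch: a proposition with explicit role-filler bindings.
# Role and filler symbols keep their meaning across propositions
# (independence), while the meaning of a proposition as a whole depends
# on how fillers are bound to roles (structure sensitivity).
from dataclasses import dataclass

@dataclass(frozen=True)
class Proposition:
    relation: str
    bindings: tuple  # ordered (role, filler) pairs

chase_1 = Proposition("chase", (("chaser", "Bill"), ("chased", "Joe")))
chase_2 = Proposition("chase", (("chaser", "Joe"), ("chased", "Bill")))

# Same elements, different bindings: the two statements differ in meaning.
print(chase_1 == chase_2)  # False

# The same relation symbol extrapolates to arguments outside the set on
# which it was learned.
novel = Proposition("chase", (("chaser", "spoon"), ("chased", "sprocket")))
print(novel.relation == chase_1.relation)  # True
```

Because the bindings are represented explicitly rather than implicitly in a feature vector, reversing them changes the proposition without changing the meaning of any individual element.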
Publication Year: 2005