Grounding As Learning

Gregory M. Kobele, Jason Riggle, Travis Collier, Yoosook Lee, Ying Lin, Yuan Yao, Charles Taylor, Edward P. Stabler
University of California, Los Angeles
http://taylor0.biology.ucla.edu/al/

1 Grounding

Communication among agents requires (among many other things) that each agent be able to identify the semantic values of the generators of the language. This is the "grounding" problem: how do agents with different cognitive and perceptual experiences successfully converge on common (or at least sufficiently similar) meanings for the language? There are many linguistic studies of how human learners do this, and also studies of how this could be achieved in robotic contexts (e.g., Steels, 1996; Kirby, 1999). These studies provide insight, but few of them characterize the problem precisely: in what range of environments can which range of languages be properly grounded by distributed agents? This paper takes a first step toward bringing the tools of formal language theory to bear on this problem. First, these tools readily reveal a number of grounding problems that are simply unsolvable under reasonable assumptions about the available evidence, as well as some problems that can be solved. Second, they provide a framework for exploring more sophisticated grounding strategies (Stabler et al., 2003). We explore here some preliminary ideas about how hypotheses about syntactic structure can interact fruitfully with hypotheses about grounding, yielding a new perspective on the emergence of recursion in language. Simpler grounding methods look for some kind of correlation between the mere occurrence of particular basic generators and semantic elements, but richer hypotheses about relations among the generators themselves can provide valuable additional constraints on the problem.
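The correlation-based strategy mentioned above can be sketched concretely. The following is an illustrative sketch, not an implementation from the paper: a learner observes utterances paired with the semantic elements present in the situation, and takes a word's candidate meanings to be the intersection of the situations in which that word has occurred. All names and the toy data are hypothetical.

```python
# Cross-situational grounding sketch (hypothetical example, not the
# paper's algorithm): a word's candidate meanings are the intersection
# of the semantic elements present whenever the word was used.

def ground(observations):
    """observations: list of (words, meanings) pairs, each a set."""
    candidates = {}
    for words, meanings in observations:
        for w in words:
            if w in candidates:
                candidates[w] &= meanings   # keep only meanings seen every time
            else:
                candidates[w] = set(meanings)
    return candidates

# Toy data: each situation pairs an utterance with the objects/properties present.
obs = [
    ({"red", "ball"}, {"RED", "BALL"}),
    ({"red", "cube"}, {"RED", "CUBE"}),
    ({"blue", "ball"}, {"BLUE", "BALL"}),
]
print(ground(obs)["red"])   # candidate meanings narrow across situations
```

Repeated exposure in varied situations shrinks each word's candidate set; this is the "mere occurrence" correlation that the richer, syntax-aware strategies discussed in the paper go beyond.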
2 Learning Grounding

A first useful perspective on learning can be gained from the "identification in the limit" paradigm (Gold, 1967), a framework that is useful for identifying learning problems that are solvable (perfectly) when one makes very generous assumptions about the data potentially available to the learner. In this framework, the learner is successively presented with positive examples of a language, making a (possibly new) hypothesis after each example. Each possible order of presentation of every sentence of the language (repetitions allowed) is called a text (for that language). (Formally, a text is an infinite sequence t ∈ L^∞ such that for every s ∈ L, there is some i such that t_i = s.) The learner learns the language if on each text there is a point after which the learner's hypothesis never changes, and the hypothesis is correct.
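The convergence criterion can be illustrated with a classic learner: guess the first language in a fixed enumeration that is consistent with the examples seen so far. The hypothesis class below is a hypothetical toy example, not one from the paper.

```python
# Identification-in-the-limit sketch: learning by enumeration over a
# (hypothetical) nested hypothesis class. On any text for one of these
# languages, the learner's guess eventually stabilizes on a correct one.

LANGUAGES = [          # enumeration, smallest first
    ("L1", {"a"}),
    ("L2", {"a", "b"}),
    ("L3", {"a", "b", "c"}),
]

def learn(text):
    """Yield the learner's hypothesis after each example in the text."""
    seen = set()
    for s in text:
        seen.add(s)
        # Guess the first language containing every example seen so far.
        for name, lang in LANGUAGES:
            if seen <= lang:
                yield name
                break

hypotheses = list(learn(["a", "b", "a", "b", "b"]))  # a finite prefix of a text for L2
print(hypotheses)  # the guess changes at most finitely often, then stays correct
```

After the example "b" rules out L1, the learner guesses L2 and never changes again, which is exactly Gold's success criterion restricted to this finite prefix.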
Publication Year: 2003
Publication Date: 2003-05-05