Concept Learning and Modal Reasoning

Charles Kemp, Faye Han & Alan Jern
Department of Psychology
Carnegie Mellon University

Abstract

Philosophers and linguists have suggested that the meaning of a concept can be represented by a rule or function that picks out examples of the concept across all possible worlds. We turn this idea into a computational model of concept learning, and demonstrate that this model helps to account for two aspects of human learning. Our first experiment explores how humans learn relational concepts such as "taller" that are defined with respect to a context set. Our second experiment explores modal inferences, or inferences about whether states of affairs are possible or impossible. Our model accounts for the results of both experiments, and suggests that possible worlds semantics can help to explain how humans learn and use concepts.

Keywords: concepts and categories; modal reasoning; possible worlds semantics

Figure 1: One actual world (solid rectangle) and three possible worlds (dashed rectangles). Each world contains between two and four objects, and the black triangles indicate which objects are instances of concept C, defined by the rule: x is a C ↔ x is identical to some other object. The geometric objects used for this illustration and for our experiments are based on stimuli developed by Kemp and Jern (2009).

Knowledge about concepts and categories must support many kinds of operations. Consider simple relational concepts such as "taller" or "heavier." A learner who has acquired these concepts should be able to use them for classification: given a pair of objects, she should be able to pick out the member of the pair that is taller than the other. The learner may also be able to solve the problem of generation: for example, she may be able to draw a pair of objects where one is taller than the other. The learner may even be able to use these concepts for modal reasoning, or reasoning about possibility and necessity.
She may recognize, for example, that no possible pair (x, y) can satisfy the requirement that x is taller than y and that y is taller than x, but that it is possible for x to be taller than y and y to be heavier than x. The three problems just introduced demand increasingly more from the learner: classification requires that she supply one or more category labels, generation requires that she generate one or more instances of a concept, and modal reasoning requires that she make an inference about all possible instances of a concept, including many that have never been observed. This paper describes a formal model of concept learning that helps to explain how people solve all three of these problems, although we focus here on classification and modal reasoning.

Our model relies on possible worlds semantics, an approach that is often discussed by philosophers and linguists (Kripke, 1963; Lewis, 1973) but has received less attention in the psychological literature. The worlds we consider are much simpler than those typically discussed in the philosophical literature, and we focus on problems where each world includes a handful of objects that vary along a small number of dimensions. Figure 1 shows an example where the world under consideration is represented as a solid rectangle, and where three possible worlds are shown as dashed rectangles. We explore the idea that a concept corresponds to a rule represented in a compositional language of thought—for example, the rule in Figure 1 picks out duplicate objects. Given this setup, we explore how concepts can be learned from observing a small number of worlds, and how these concepts can be used to decide whether a statement is possible (true in some possible worlds) or necessary (true in all possible worlds).
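The inferences above can be made concrete with a small sketch. This is not the authors' model; it simply illustrates, for a toy attribute space of our own invention (heights and weights drawn from {1, 2, 3}), how a rule-based concept can be evaluated within a world, and how a statement counts as possible if it holds in some enumerated world and necessary if it holds in all of them:

```python
# Illustrative sketch only: possible-worlds checks over a toy attribute space.
from itertools import product

HEIGHTS = WEIGHTS = [1, 2, 3]  # hypothetical discrete attribute values

# Every possible world containing a pair of objects (height, weight).
worlds = [((hx, wx), (hy, wy))
          for hx, wx, hy, wy in product(HEIGHTS, WEIGHTS, HEIGHTS, WEIGHTS)]

def taller(a, b):
    return a[0] > b[0]   # a is taller than b

def heavier(a, b):
    return a[1] > b[1]   # a is heavier than b

def possible(statement):
    # True in some possible world.
    return any(statement(x, y) for x, y in worlds)

def necessary(statement):
    # True in all possible worlds.
    return all(statement(x, y) for x, y in worlds)

# The Figure 1 rule: object i is a C iff it is identical to
# some other object in its world.
def is_C(i, world):
    return any(j != i and world[j] == world[i] for j in range(len(world)))

# No pair can satisfy "x taller than y and y taller than x".
print(possible(lambda x, y: taller(x, y) and taller(y, x)))   # False

# But "x taller than y and y heavier than x" holds in some world.
print(possible(lambda x, y: taller(x, y) and heavier(y, x)))  # True
```

Under this scheme the impossibility of mutual tallness is equivalent to the necessity of its negation, which the same enumeration verifies.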
Our approach builds on previous accounts of concept learning, and is related most closely to previous rule-based models that rely on logic as a representation language (Nosofsky, Palmeri, & McKinley, 1994; Feldman, 2000; Goodman, Tenenbaum, Feldman, & Griffiths, 2008; Kemp & Jern, 2009). Most of these models, however, do not allow the category label of an object to depend on its context, or the world to which it belongs. For example, models of Boolean concept learning (Feldman, 2000) cannot capture relational concepts such as "duplicate," since Boolean logic cannot express rules that rely on comparisons between objects. Previous accounts of relational categorization and analogical reasoning (Gentner, 1983; Doumas, Hummel, & Sandhofer, 2008) often work with richer representation languages, and can therefore capture the idea that the category label of an object may depend on its role within the world (or configuration) to which it belongs. These accounts, however, are limited in another respect. In most cases they are able to compare two or more worlds that are provided as input, but they cannot generate new worlds, or account for inferences that require computations over the space of all possible worlds. In particular, we believe that previous psychological models will not account for the modal inferences that we explore. Although previous accounts of concept learning have not
Publication Year: 2011