Title: Exploring the influence of particle filter parameters on order effects in causal learning
Abstract:

Joshua T. Abbott ([email protected]), Thomas L. Griffiths (tom [email protected])
Department of Psychology, University of California at Berkeley, Berkeley, CA 94720 USA

The order in which people observe data has an effect on their subsequent judgments and inferences. While Bayesian models of cognition have had some success in predicting human inferences, most of these models do not produce order effects, being unaffected by the order in which data are observed. Recent work has explored approximations to Bayesian inference that make the underlying computations tractable and also produce order effects in a way that seems consistent with human behavior. One of the most popular approximations of this kind is a sequential Monte Carlo method known as a particle filter. However, there has not been a systematic investigation of how the parameters of a particle filter influence its predictions, or what kinds of order effects (such as primacy or recency effects) these models can produce. In this paper, we use a simple causal learning task as the basis for an investigation of these issues. Both primacy and recency effects are seen in this task, and we demonstrate that both kinds of effects can result from different settings of the parameters of a particle filter.

Keywords: particle filters; order effects; causal learning; rational process models

Introduction

How do people make such rapid inferences from the constrained data available in the world, and with limited cognitive resources? Previous research has provided a great deal of evidence that human inductive inference can be successfully analyzed as Bayesian inference, using rational models of cognition (Anderson, 1990; Oaksford & Chater, 1998; Griffiths, Chater, Kemp, Perfors, & Tenenbaum, 2010). Rational models answer questions at Marr's (1982) computational level of analysis, explaining why humans behave as they do, whereas traditional models from cognitive psychology tend to analyze cognition at Marr's level of algorithm and representation, focusing instead on how cognitive processes support these behaviors. Although Bayesian models have become quite popular in recent years, it remains unclear what psychological mechanisms could be responsible for carrying out these computations. Of particular concern is that the amount of computation required in these models becomes intractable in real-world scenarios with many variables, yet people make rather accurate inferences effortlessly in their everyday lives. Are people implicitly approximating these probabilistic computations?

Monte Carlo methods have become a primary candidate for connecting the computational and algorithmic levels of analysis (Sanborn, Griffiths, & Navarro, 2006; Levy, Reali, & Griffiths, 2009; Shi, Feldman, & Griffiths, 2008). The basic principle underlying Monte Carlo methods is to approximate a probability distribution using only a finite set of samples from that distribution. Recent work has focused on two Monte Carlo methods in particular: importance sampling and particle filtering. Importance sampling draws samples from a known proposal distribution and weights these samples to correct for the difference from the desired target distribution. Particle filters are a sequential Monte Carlo method that applies importance sampling recursively. When approximating Bayesian inference, the posterior distribution is represented using a set of discrete samples, known as particles, that are updated over time as more data are observed. These methods can be shown to be formally related to existing psychological process models such as exemplar models (Shi et al., 2008), and can be used to explain behavioral data inconsistent with standard Bayesian models in categorization (Sanborn et al., 2006), sentence parsing (Levy et al., 2009), and classical conditioning experiments (Daw & Courville, 2008). However, there has not previously been a systematic investigation of how the parameters of these Monte Carlo methods affect the predictions they make.

In this paper we explore how the parameters of particle filters affect the predictions they make about order effects, using a simple causal learning task to provide a context for this exploration. It is a common finding that the order in which people receive information has an effect on their subsequent judgments and inferences (Dennis & Ahn, 2001; Collins & Shanks, 2002). This poses a problem for rational models based on Bayesian inference, as the process of updating hypotheses in these models is typically invariant to the order in which the data are presented. Previous work has shown that particle filters can produce order effects similar to those seen in human learners (e.g., Sanborn et al., 2006). However, this work has focused on primacy effects, in which initial observations have an overly strong influence on people's conclusions. In other settings, people produce recency effects, being more influenced by more recent observations. Causal learning tasks can result in both primacy and recency effects, with surprisingly subtle differences in the task leading to one or the other (Dennis & Ahn, 2001; Collins & Shanks, 2002). Causal learning thus provides an ideal domain in which to examine how the parameters of particle filters influence their predictions, and what kinds of order effects these models can produce.

The plan of the paper is as follows. In the next section we discuss previous empirical and theoretical work on human causal learning, showing the different kinds of order effects that have been observed and presenting the Bayesian framework we will be working in. We then formally introduce particle filters, followed by our investigation of how varying certain particle filter parameters affects the order effects these models produce.
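To make the idea concrete, the recursive importance-sampling scheme described above can be sketched as a minimal bootstrap particle filter that estimates a Bernoulli parameter from sequential 0/1 observations. This is an illustrative sketch, not the paper's own implementation: the uniform prior, the Bernoulli likelihood, the particle count, and the resample-every-step scheme are all assumptions made for the example.

```python
import random

def particle_filter(data, n_particles=100, seed=0):
    """Minimal bootstrap particle filter estimating a Bernoulli
    parameter theta from a sequence of 0/1 observations."""
    rng = random.Random(seed)
    # Initialize particles as draws from a uniform prior over theta.
    particles = [rng.random() for _ in range(n_particles)]
    for x in data:
        # Importance weights: likelihood of the new observation
        # under each particle's hypothesized theta.
        weights = [th if x == 1 else (1.0 - th) for th in particles]
        # Resample particles in proportion to their weights
        # (random.choices normalizes relative weights internally).
        particles = rng.choices(particles, weights=weights, k=n_particles)
    # Posterior mean estimate of theta.
    return sum(particles) / n_particles

# With many particles this tracks the Bayesian posterior mean; with
# few particles, early observations can prune hypotheses permanently,
# producing primacy-like order effects.
estimate = particle_filter([1, 1, 0, 1, 0, 1, 1, 1], n_particles=1000)
```

Because resampling discards low-weight particles, a small `n_particles` makes the filter's final estimate depend on which hypotheses survived the earliest observations, which is one route to the order effects discussed in the paper.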
Publication Year: 2011