Abstract: A fragile inference is not worth taking seriously. All scientific disciplines routinely subject their inferences to studies of fragility. Why should economics be different? It hasn't been different up to now. Nor do I think it ever will be, notwithstanding the comments of Michael McAleer, Adrian Pagan, and Paul Volker (1985). Decentralized studies of fragility are common whenever an inference matters enough to attract careful scrutiny. When Isaac Ehrlich (1975) claims to have demonstrated that capital punishment deters murders, he elicits a great outpouring of papers that show how the result depends on which variables are included (B. Forst, 1977), which observations are included (A. Blumstein et al., 1978), how simultaneity problems are dealt with (P. Passell, 1975), etcetera, etcetera.

These disorganized studies of fragility are inefficient, haphazard, and confusing. What we need instead are organized sensitivity analyses. We must insist that all empirical studies offer convincing evidence of inferential sturdiness. We need to be shown that minor changes in the list of variables do not alter fundamentally the conclusions, nor does a slight reweighting of observations, nor correction for dependence among observations, etcetera, etcetera. I have proposed a form of organized sensitivity analysis that I call global sensitivity analysis, in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful. But when an incredibly narrow set of assumptions is required to produce a usefully narrow set of conclusions, inferences from the given data set are reported to be too fragile to be believed.

In dramatic conflict with real data analyses, theoretical econometricians behave as if a given data set admitted a unique inference. This priesthood takes as its self-appointed task the uncovering of the elaborate method by which the unique inference can be squeezed from a data set. Indeed, this is the reaction of McAleer et al., who offer a method of squeezing Thomas Cooley and Stephen LeRoy's (1981) data set. They propose to deal with ambiguity by charting one ad hoc route through the thicket of possible models. Complicated ad hoc searches like the one they suggest have no support in statistical decision theory, and virtually none in classical sampling theory. What is to be made of a procedure that sets scores of parameters to zero if they are not statistically significant at arbitrarily chosen levels of significance? And what inferences are allowable after a model passes a battery of specification error tests that are sometimes more numerous than even the set of observations? This recommendation of McAleer et al. merits the retort: There are two things you are better off not seeing in the making: sausages and econometric estimates, to which they might reply: It must be right, I've been doing it since my youth.
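The global sensitivity analysis described above can be illustrated with a minimal extreme-bounds-style sketch, not taken from the article: the "neighborhood of assumptions" is every subset of a few doubtful control variables, and the "interval of inferences" is the range of estimates for the coefficient of interest across those specifications. The variable names and simulated data below are hypothetical.

```python
# Illustrative sketch of an extreme-bounds-style sensitivity check,
# in the spirit of Leamer's global sensitivity analysis.
# The data and variable names are simulated and hypothetical.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 200
# x is the focus regressor; z[:, 0..2] are doubtful controls.
x = rng.normal(size=n)
z = rng.normal(size=(n, 3))
y = 1.0 + 0.5 * x + 0.3 * z[:, 0] + rng.normal(size=n)

def coef_on_x(controls):
    """OLS of y on a constant, x, and a chosen subset of the doubtful
    controls; return the estimated coefficient on x."""
    cols = [np.ones(n), x] + [z[:, j] for j in controls]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Neighborhood of assumptions: all 8 subsets of the doubtful controls.
estimates = [
    coef_on_x(subset)
    for r in range(4)
    for subset in itertools.combinations(range(3), r)
]
print(f"interval of inferences for the coefficient on x: "
      f"[{min(estimates):.3f}, {max(estimates):.3f}]")
```

If the reported interval is narrow across this credible range of specifications, the inference would be judged sturdy; if it is wide, the inference would be reported as fragile.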
Publication Year: 1985
Publication Date: 1985-01-01
Language: en
Type: article
Access and Citation
Cited By Count: 510