Title: Optimal Range for the iid Test Based on Integration Across the Correlation Integral
Abstract: This paper builds on Kočenda (2001) and extends it in three ways. First, new intervals of the proximity parameter ϵ (over which the correlation integral is calculated) are specified. For these ϵ-ranges, new critical values for various lengths of the data sets are introduced, and Monte Carlo studies show that within the new ϵ-ranges the test is even more powerful than within the original ϵ-range. The range that maximizes the power of the test is suggested as the optimal range. Second, an extensive comparison with the existing results of the controlled competition of Barnett et al. (1997) is provided, along with broad power tests on various nonlinear and chaotic data. Test performance on real (exchange rate) data is reported as well. The results of the comparison strongly favor our robust procedure and confirm the ability of the test to detect nonlinear dependencies, as well as its function as a specification test. Finally, new user-friendly and fast software is introduced.
Keywords: Chaos; Correlation integral; High-frequency economic and financial data; Monte Carlo; Nonlinear dynamics; Power tests; Single-blind competition
JEL Classification: C14; C15; C52; C87; F31; G12
Acknowledgments: We would like to thank William Barnett, Tim Bollerslev, Jan Hanousek, Blake LeBaron, Elvezio Ronchetti, Jan Kmenta, Jan Ámos Víšek, Petr Zemčík, and the editor (Esfandiar Maasoumi) for helpful comments. We are grateful to an anonymous referee whose comments and suggestions helped to improve the paper considerably. We also benefited from several presentations. We thank Marian Baranec, Ivo Burger, Eva Cermáková, Petr Sklenár, and the CERGE-EI Computer Department for their assistance in performing the simulations in this paper. The usual disclaimer applies.
Notes:
1. We cite other well-known tests later in Section 4.1.
2. Recent advances in the research of chaos allow researchers to control chaotically behaving systems in various fields of physics, biology, chemistry, and medicine. Effective control of chaos in economics, though, does not seem to be more realistic than discovering Shangri-La.
3. Some guidance can be found, for example, in Dechert (1994), Brock et al. (1996), de Lima (1992), and Hsieh and LeBaron (1988).
4. It is worth noting that an important reason to develop the BDS test was originally that point estimates of the correlation dimension were very unstable across values of ϵ.
5. As β_m is, in fact, an OLS estimate of the slope coefficient, by econometric tradition it should be labeled β̂_m. For the sake of notational simplicity, we decided to omit the hat.
6. By simulation it was found that such a number lies in the interval between 40 and 50. To be on the safe side, the value of the correlation integral was constrained to be 50. The cutoff value for C_m(ϵ) must be chosen before slope coefficient estimates are computed. C_m(ϵ) = 50 resulted from simulations that were compared with various trajectories resulting from the analysis conducted on different time series.
7. See Hsieh (1991) for details.
8. A compound random number generator based on the idea of Collings (1987) and constructed from 17 generators described by Fishman and Moore (1982) was employed to generate iid data.
9. The issue of different ϵ-ranges is also discussed in Belaire-Franch (2003), who argues that although the power of Kočenda's test can exceed that of the BDS test, more than one ϵ-range should be used. The two additional ranges used in his study were constructed only as an additive extension of the original range, without any theoretical or empirical argument given to support the choice.
10. Monte Carlo simulations are used instead of distribution theory because the test is nonparametric.
11. 5000 replications are used for sensitivity analysis among selected subranges (see Section 4.3).
12. The described data-generating strategy was chosen for two reasons. First, an ICG effectively eliminates repetitiveness in the data caused by the limitations of computer hardware. Second, other methods, such as hypothetically obtaining white noise residuals by estimating a generating process (i.e., AR, ARCH, GARCH, etc.), may possess some unaccounted-for structural form that would bias the critical values in a Monte Carlo simulation. The issues of how the asymptotic distribution of the test statistics might be affected by the estimation process are discussed by de Lima (1998).
[Table notes: m denotes an embedding dimension. Based on 20,000 replications.]
13. We acknowledge that Brock et al. (1993) performed similar tests, but these were done mainly on the BDS test and as such they are less suitable as a point of reference for our further purpose.
14. For exhaustive details on the models and data generation, as well as discussion of the particular processes, see the original paper of Barnett et al. (1997).
15. The web address of the data is http://econwpa.wustl.edu/eprints/data/papers/9510/9510001.abs.
[Figure legend: (a) 1%, (b) 2%, (c) 5%, (d) 10%.]
16. The following summary of the competition results comes from Section 9.1 (Overview) of Barnett et al. (1997). The Hinich bispectrum test was correct in three of the five cases and failed in two of the cases with the small sample. With the large sample, the test was correct in three of the five cases, failed in one case, and was ambiguous in one case. The associated Gaussianity test is a test of a necessary, not a sufficient, condition for Gaussianity and hence can reject but not accept.
Judging the test on its rejections of Gaussianity, the small sample results produced only two rejections, and both were correct rejections. With the large sample, the test produced four rejections, and all four were valid rejections. With the small sample, the BDS test was correct in two cases of five and ambiguous in the other three. With the large sample, the test was correct in all five cases. The NEGM test was correct in all five small sample cases and all five large sample cases. In the small sample cases, White's test was correct in four of the five cases and failed in the remaining case. In the large sample cases, White's test again was correct in four of the five cases and failed in one case. Kaplan's test was correct in all five cases, both with the small and with the large samples.
17. Because the Feigenbaum process is deterministic, we replicated only the other four processes, 1000 times each. Since the competition performed by Barnett et al. (1997) understandably does not contain power tests of the participating tests, we do not offer any comparison in this respect.
[Table notes: The entries are rejection rates in %, computed at the 5% level.]
18. This finding is in line with results reported by Hsieh and LeBaron (1988), who found that the type I error of the BDS test is large when the sample size is small.
[Table notes: The entries are differences between critical values of two different ranges; the reference range is (0.60σ–1.90σ), and the second range is one of (0.25σ–1.00σ), (0.50σ–1.50σ), or (0.25σ–2.00σ); for each interval the differences are computed for the 2.5% quantile (first row) and the 97.5% quantile (second row).]
19. This is in line with the findings of Brock et al. (1993) and Kanzler (1999) with respect to the BDS test: as the embedding dimension m increases, the BDS distribution moves away from its asymptotic distribution, the standard normal.
The lower the dimension, the better the small-sample properties, whatever the sample size and the size of ϵ.
20. Ronchetti and Trojani (2003) used data generated from the contaminated normal distribution CN(ϵ, K²), given by the distribution function F(x) = (1 − ϵ)Φ(x) + ϵΦ(x/K), x ∈ R, where Φ(x) is the cumulative distribution function of a standard normal random variable.
21. This information is conveniently provided when employing our new software.
22. Kugler and Lenz (1990) found that the described correction successfully removed nonlinearity from the Swiss franc and the Deutsche mark. However, the BDS test did not allow rejection of the null hypothesis for the French franc (specifically at levels of N = 4 and 5) or the Japanese yen (specifically at levels of N = 3, 4, and 5).
23. The Japanese yen was dropped from the replication because of data inconsistency.
24. In Brock et al. (1993), the BDS test finds no evidence of nonlinearity in the standardized residuals of the CHF, some nonlinearity (at dimensions 8, 9, and 10) for the DEM, and strong nonlinearity for the CAD and GBP; Belaire-Franch (2003) concurs with this result. Kočenda (2001) found that the DEM and GBP show the presence of nonlinearity at the 1% significance level no matter what embedding dimension is considered. The CAD and CHF show some presence of nonlinearity at various significance levels depending on the embedding dimension m.
25. In Kugler and Lenz (1993), results of the BDS test revealed no indication of dependence in the fitted residuals of any currency. Kočenda (2001) confirmed the findings of independence for five of the ten currencies (CAD, BEF, FRF, NLG, and CHF) and detected nonlinear dependencies in the fitted residuals for the rest of the supposedly independent currencies (AUD, DEM, ITL, ESP, and JPY).
Belaire-Franch (2003) did not analyze these data.
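The correlation integral and the slope coefficient β_m discussed in notes 4–6 can be illustrated with a short sketch. This is our own minimal reconstruction, not the paper's software: the function name, the ϵ-grid, and the toy iid series are assumptions, and the minimum-count rule of note 6 is simplified to merely dropping empty cells.

```python
import numpy as np

def slope_beta(x, m, eps_lo, eps_hi, n_eps=20):
    """OLS slope of log C_m(eps) on log eps over [eps_lo, eps_hi].
    Hypothetical helper, not the paper's implementation."""
    n = len(x) - m + 1
    # Embed the series into m-dimensional delay vectors (m-histories).
    emb = np.column_stack([x[i:i + n] for i in range(m)])
    # Pairwise sup-norm distances between all m-histories.
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=2)
    dd = d[np.triu_indices(n, k=1)]
    # C_m(eps) = fraction of pairs within eps, over a grid in the range.
    eps_grid = np.linspace(eps_lo, eps_hi, n_eps)
    c = np.array([np.mean(dd < e) for e in eps_grid])
    # Note 6 imposes a minimum pair count; here we only drop empty cells.
    keep = c > 0
    return np.polyfit(np.log(eps_grid[keep]), np.log(c[keep]), 1)[0]

rng = np.random.default_rng(0)
x = rng.standard_normal(500)
s = x.std()
# Slope over the original range (0.60*sigma - 1.90*sigma), m = 2.
beta2 = slope_beta(x, m=2, eps_lo=0.6 * s, eps_hi=1.9 * s)
```

For iid data, C_m(ϵ) ≈ C_1(ϵ)^m, so the fitted slope grows roughly proportionally with the embedding dimension; the test statistic is then built from such β_m estimates.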
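Note 10's point, that critical values come from Monte Carlo simulation rather than distribution theory, can be sketched generically. The helper below is hypothetical: it uses NumPy's generator rather than the compound generator of note 8, far fewer replications than the paper's 20,000, and a studentized mean as a stand-in for the actual test statistic.

```python
import numpy as np

def mc_critical_values(stat_fn, n_obs, n_rep=2000, q=(0.025, 0.975), seed=1):
    """Tabulate small-sample critical values of a statistic on iid
    N(0,1) data by Monte Carlo (illustrative sketch only)."""
    rng = np.random.default_rng(seed)
    # Compute the statistic on each replicated iid sample and take
    # empirical quantiles as the two-sided critical values.
    stats = np.array([stat_fn(rng.standard_normal(n_obs))
                      for _ in range(n_rep)])
    return np.quantile(stats, q)

# Toy statistic: the studentized mean, asymptotically N(0, 1), so the
# simulated critical values should land near -1.96 and 1.96.
t_stat = lambda x: x.mean() / (x.std(ddof=1) / np.sqrt(len(x)))
cv_lo, cv_hi = mc_critical_values(t_stat, n_obs=500)
```

The same loop, run once per sample length and ϵ-range with the actual test statistic plugged in for `stat_fn`, yields tables of range- and length-specific critical values like those the paper reports.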
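The contaminated normal CN(ϵ, K²) of note 20 is straightforward to sample from, since F is a two-component mixture. A minimal sketch, with function name and parameter defaults our own assumptions:

```python
import numpy as np

def contaminated_normal(n, eps=0.05, k=3.0, seed=0):
    """Draw n values from CN(eps, K^2): with probability 1 - eps a
    N(0, 1) draw, with probability eps a N(0, K^2) draw, matching
    F(x) = (1 - eps) * Phi(x) + eps * Phi(x / K)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    # Scale a random eps-fraction of the draws by K to contaminate them.
    z[rng.random(n) < eps] *= k
    return z

x = contaminated_normal(10_000)
# Population variance is (1 - eps) + eps * K^2 = 0.95 + 0.45 = 1.4,
# so the sample variance should be close to 1.4.
```

Such contaminated samples are a standard way to probe how a test behaves under mild departures from normality, which is how Ronchetti and Trojani (2003) use them.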
Publication Year: 2005
Publication Date: 2005-07-01
Language: en
Type: article
Indexed In: Crossref
Cited By Count: 22