Title: Asymptotic and Bootstrap Confidence Intervals for the Process Capability Index Cpy Based on Lindley Distributed Quality Characteristic
Abstract: Process capability indices (PCIs) have been widely applied in measuring product potential and performance. A PCI is of great significance to quality control engineers, as it quantifies the relation between the actual performance of a process and the preset specifications of the product. Among the plethora of suggested PCIs, most were developed for normally distributed processes. In this article, we consider the generalized process capability index Cpy suggested by Maiti, Saha, and Nanda (2010, Journal of Quality Technology and Quantitative Management, 7(3), 279–300, doi:10.1080/16843703.2010.11673233), which can be used for normal and non-normal, continuous as well as discrete random variables. The objective of this article is twofold. First, we obtain the maximum likelihood estimator (MLE) and the minimum variance unbiased estimator (MVUE) of the PCI Cpy for Lindley distributed quality characteristics. Second, we compare the asymptotic confidence interval (ACI) with four bootstrap confidence intervals (BCIs), namely the standard bootstrap (s-boot), percentile bootstrap (p-boot), Student's t bootstrap (t-boot), and bias-corrected accelerated bootstrap (BCa-boot), for Cpy based on the maximum likelihood method of estimation. Monte Carlo simulations have been carried out to compare the performance of the MLEs and MVUEs, and to investigate the average widths, coverage probabilities, and relative coverages of the ACI and BCIs of Cpy. Two real data sets have been analyzed for illustrative purposes.
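The estimators described in the abstract can be sketched numerically. The following is a minimal illustration, not the authors' implementation: it assumes the yield-based form of the index, Cpy = (F(U) − F(L)) / p0 with default desired yield p0 = 0.9973, uses the well-known closed-form MLE of the Lindley parameter θ, and builds a percentile bootstrap (p-boot) interval; the specification limits L, U and the simulated data are purely illustrative.

```python
import numpy as np

def lindley_mle(x):
    # Closed-form MLE of the Lindley parameter theta from the sample mean:
    # theta_hat = (-(xbar - 1) + sqrt((xbar - 1)^2 + 8*xbar)) / (2*xbar)
    xbar = np.mean(x)
    return (-(xbar - 1.0) + np.sqrt((xbar - 1.0) ** 2 + 8.0 * xbar)) / (2.0 * xbar)

def lindley_cdf(x, theta):
    # Lindley CDF: F(x) = 1 - (1 + theta*x/(1 + theta)) * exp(-theta*x), x >= 0
    return 1.0 - (1.0 + theta * x / (1.0 + theta)) * np.exp(-theta * x)

def cpy_hat(x, L, U, p0=0.9973):
    # Plug-in (ML-based) estimate of Cpy = (F(U) - F(L)) / p0  [assumed form]
    theta = lindley_mle(x)
    return (lindley_cdf(U, theta) - lindley_cdf(L, theta)) / p0

def lindley_rvs(theta, size, rng):
    # Lindley(theta) is a mixture: Exp(theta) with prob theta/(1+theta),
    # Gamma(2, 1/theta) with the complementary probability.
    mix = rng.random(size) < theta / (1.0 + theta)
    expo = rng.exponential(1.0 / theta, size)
    gam = rng.gamma(2.0, 1.0 / theta, size)
    return np.where(mix, expo, gam)

def pboot_ci(x, L, U, B=2000, alpha=0.05, seed=0):
    # Percentile bootstrap (p-boot): resample, re-estimate, take empirical quantiles
    rng = np.random.default_rng(seed)
    reps = np.empty(B)
    for b in range(B):
        xb = rng.choice(x, size=len(x), replace=True)
        reps[b] = cpy_hat(xb, L, U)
    return np.quantile(reps, [alpha / 2.0, 1.0 - alpha / 2.0])
```

For example, `x = lindley_rvs(2.0, 100, np.random.default_rng(1))` followed by `cpy_hat(x, 0.05, 3.0)` and `pboot_ci(x, 0.05, 3.0)` gives a point estimate and a 95% percentile interval. The s-boot, t-boot, and BCa-boot intervals compared in the article differ only in how the same bootstrap replicates are turned into interval endpoints.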
Publication Year: 2019
Publication Date: 2019-04-01
Language: en
Type: article
Indexed In: ['crossref']
Cited By Count: 8