Bootstrap Methods And Their Application
Chapter 1 investigates the weighted bootstrap of statistical functions and identifies general regularity conditions under which the generalized bootstrap may be used. Chapter 2 discusses the practical choice of the weights and the differences between these random weighting methods in the regular cases investigated in Chapter 1. Chapter 3 examines some non-regular cases which require a drastic modification of the bootstrap. Chapters 4-6 contain proofs.
This is a review of bootstrap methods, concentrating on basic ideas and applications rather than theoretical considerations. It begins with an exposition of the bootstrap estimate of standard error for one-sample situations. Several examples, some involving quite complicated statistical procedures, are given. The bootstrap is then extended to other measures of statistical accuracy such as bias and prediction error, and to complicated data structures such as time series, censored data, and regression models. Several more examples are presented illustrating these ideas. The last third of the paper deals mainly with bootstrap confidence intervals.
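The one-sample bootstrap estimate of standard error described above can be sketched in a few lines. The function name and example statistic below are illustrative, not from the original paper: resample the data with replacement, recompute the statistic on each resample, and take the standard deviation of the replicates.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_se(data, statistic, n_boot=2000, rng=rng):
    """Bootstrap estimate of the standard error of `statistic`
    for a one-sample problem: resample the data with replacement,
    recompute the statistic, and take the standard deviation of
    the bootstrap replicates."""
    n = len(data)
    replicates = np.empty(n_boot)
    for b in range(n_boot):
        sample = rng.choice(data, size=n, replace=True)
        replicates[b] = statistic(sample)
    return replicates.std(ddof=1)

# Example: standard error of the sample median, a statistic with no
# simple closed-form standard error formula.
x = rng.normal(loc=0.0, scale=1.0, size=50)
se_median = bootstrap_se(x, np.median)
```

The same function works unchanged for arbitrarily complicated statistics, which is the practical appeal emphasized in the review.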
A final set of relations of interest links the PIT-trap to classical resampling methods for models with iid errors. Consider first the model yij = μij + ϵij, where the μij are fixed and the random errors ϵij are parameterized by their standard deviation σ only, linear regression being an important special case. For such models, raw residuals are monotonically related to PIT-residuals: the PIT-residual is the error distribution function evaluated at the standardized raw residual, a monotone transformation. This monotonicity implies that bootstrapping PIT-residuals is equivalent to the standard residual resampling approach in which raw residuals are bootstrapped [2], provided errors are assumed iid. Now consider the situation where we wish to test the null hypothesis that all observations are iid. In this case, by a similar argument, the PIT-trap reduces to resampling the yi with replacement. Further, if PIT-residuals are resampled without replacement, the procedure reduces to the usual permutation test [3]. Hence many classical resampling methods can be understood as special cases of the PIT-trap, the key innovation of the PIT-trap being its ability to extend these well-known resampling methods to parametric models in which errors are no longer iid.
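The equivalence claimed above can be checked numerically. The sketch below assumes Gaussian errors with known σ (so the PIT is the standard normal CDF); the function names and the bisection inverse are illustrative conveniences, not part of the PIT-trap methodology itself. Because the CDF is monotone, resampling PIT-residuals and inverting the transform reproduces exactly the resampled raw residuals.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def phi_cdf(z):
    # Standard normal CDF: the PIT under the Gaussian-error assumption.
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def phi_inv(u, lo=-10.0, hi=10.0):
    # Crude bisection inverse of the normal CDF; adequate for a demo.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if phi_cdf(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy model y = mu + eps with iid N(0, sigma^2) errors.
n, sigma = 30, 2.0
mu = np.linspace(0.0, 5.0, n)
y = mu + rng.normal(0.0, sigma, n)
raw = y - mu                                         # raw residuals
pit = np.array([phi_cdf(r / sigma) for r in raw])    # PIT-residuals

# One bootstrap resample of indices. Monotonicity of the PIT means
# resampling PIT-residuals and inverting gives exactly the resampled
# raw residuals, so both schemes generate the same bootstrap sample y*.
idx = rng.choice(n, size=n, replace=True)
y_star_raw = mu + raw[idx]
y_star_pit = mu + sigma * np.array([phi_inv(u) for u in pit[idx]])
```

Drawing `idx` without replacement instead would give the permutation-test special case mentioned in the text.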
We used the same testing procedure here as in the practical application described previously: fitting negative binomial distributions to each species, then constructing a score statistic that estimates correlation between variables using a ridge-regularized correlation matrix [10]. We compared results when the significance of this statistic was assessed using Pearson residual resampling, the PIT-trap, and the parametric bootstrap, assuming either an unstructured correlation matrix or, incorrectly, an exchangeable correlation structure. The latter choice probes the robustness of the parametric bootstrap to misspecification of the correlation structure.
Microeconomic data often have within-cluster dependence, which affects standard error estimation and inference. When the number of clusters is small, asymptotic tests can be severely oversized. In the instrumental variables model, the potential presence of weak instruments further complicates hypothesis testing. We use wild bootstrap methods to improve inference in two empirical applications with these characteristics. Building from estimating equations and residual bootstraps, we identify variants robust to the presence of weak instruments and a small number of clusters. They reduce absolute size bias significantly and demonstrate that the wild bootstrap should join the standard toolkit in IV and cluster-dependent models.
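A basic wild cluster bootstrap for an OLS t-statistic can be sketched as follows. This is a simplified illustration, not the authors' IV-robust procedure: it uses per-cluster Rademacher weights and a plain cluster-robust variance estimator, and omits refinements such as imposing the null when resampling and small-sample corrections.

```python
import numpy as np

rng = np.random.default_rng(3)

def wild_cluster_bootstrap_t(y, X, cluster, n_boot=999, rng=rng):
    """Wild cluster bootstrap p-value for the t-statistic of the last
    coefficient in an OLS regression (illustrative sketch only)."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    clusters = np.unique(cluster)

    def cluster_se(res):
        # Cluster-robust "sandwich" standard error, no df correction.
        meat = np.zeros((k, k))
        for g in clusters:
            s = X[cluster == g].T @ res[cluster == g]
            meat += np.outer(s, s)
        V = XtX_inv @ meat @ XtX_inv
        return np.sqrt(V[-1, -1])

    t_obs = beta[-1] / cluster_se(resid)
    t_boot = np.empty(n_boot)
    for b in range(n_boot):
        # Rademacher weights drawn per CLUSTER, not per observation,
        # so within-cluster dependence is preserved in each resample.
        w = rng.choice([-1.0, 1.0], size=len(clusters))
        e_star = resid * w[np.searchsorted(clusters, cluster)]
        y_star = X @ beta + e_star
        b_star = XtX_inv @ X.T @ y_star
        t_boot[b] = (b_star[-1] - beta[-1]) / cluster_se(y_star - X @ b_star)
    # Symmetric bootstrap p-value with the "+1" correction.
    return (1 + np.sum(np.abs(t_boot) >= abs(t_obs))) / (1 + n_boot)

# Toy data: 10 clusters of 20 observations, cluster-level noise, and a
# regressor with no true effect, so the p-value should not concentrate at 0.
G, m = 10, 20
cluster = np.repeat(np.arange(G), m)
u = np.repeat(rng.normal(size=G), m) + rng.normal(size=G * m)
x = rng.normal(size=G * m)
X = np.column_stack([np.ones(G * m), x])
y = 1.0 + 0.0 * x + u
p = wild_cluster_bootstrap_t(y, X, cluster, n_boot=199)
```

With few clusters, comparing `t_obs` to this bootstrap distribution rather than to asymptotic critical values is exactly the kind of correction the abstract advocates.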