Can We Trust the Bootstrap in High-dimensions? The Case of Linear Models

Noureddine El Karoui, Elizabeth Purdom; 19(5):1−66, 2018.

Abstract

We consider the performance of the bootstrap in high dimensions for the setting of linear regression, where $p < n$ but $p/n$ is not close to zero. We consider ordinary least squares as well as robust regression methods and adopt a minimalist performance requirement: can the bootstrap give us good confidence intervals for a single coordinate of $\beta$ (where $\beta$ is the true regression vector)? We show through a mix of numerical and theoretical work that the bootstrap is fraught with problems. Both of the most commonly used methods of bootstrapping for regression, the residual bootstrap and the pairs bootstrap, give very poor inference on $\beta$ as the ratio $p/n$ grows. We find that the residual bootstrap tends to give anti-conservative estimates (inflated Type I error), while the pairs bootstrap gives very conservative estimates (severe loss of power) as $p/n$ grows. We also show that the jackknife resampling technique for estimating the variance of $\hat{\beta}$ severely overestimates the variance in high dimensions. Based on our theoretical results, we propose alternative procedures that yield dimensionality-adaptive and robust bootstrap methods.
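For readers unfamiliar with the two resampling schemes compared in the abstract, the following is a minimal illustrative sketch (not the authors' code) of percentile confidence intervals for a single OLS coordinate from the residual bootstrap and the pairs bootstrap in a simulated regime where $p/n$ is not small. All variable names, the sample sizes, and the Gaussian design are assumptions chosen for illustration only.

```python
# Minimal sketch: residual vs. pairs bootstrap percentile CIs for one OLS
# coordinate when p/n is not small. Illustrative only; not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n, p, B = 500, 250, 500          # p/n = 0.5, far from the classical regime (assumed values)
X = rng.standard_normal((n, p))
beta = np.zeros(p)               # true beta; coordinate of interest is beta[0] = 0
y = X @ beta + rng.standard_normal(n)

def ols(X, y):
    # Ordinary least-squares fit via least-squares solver
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_hat = ols(X, y)
resid = y - X @ beta_hat

def residual_boot():
    # Resample residuals, regenerate responses from the fitted model, refit
    y_star = X @ beta_hat + rng.choice(resid, size=n, replace=True)
    return ols(X, y_star)[0]

def pairs_boot():
    # Resample (x_i, y_i) pairs with replacement, refit
    idx = rng.integers(0, n, size=n)
    return ols(X[idx], y[idx])[0]

res = np.array([residual_boot() for _ in range(B)])
prs = np.array([pairs_boot() for _ in range(B)])

# Percentile intervals for beta[0]; in this regime the paper reports that
# residual-bootstrap intervals tend to be too short (anti-conservative) and
# pairs-bootstrap intervals too wide (conservative).
print("residual bootstrap 95% CI:", np.percentile(res, [2.5, 97.5]))
print("pairs    bootstrap 95% CI:", np.percentile(prs, [2.5, 97.5]))
```

Comparing the widths of the two intervals against the classical OLS interval for $\beta_1$ gives a quick empirical sense of the dimension effects the paper analyzes.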

© JMLR 2018.