The hardness of conditional independence testing and the generalised covariance measure

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

The hardness of conditional independence testing and the generalised covariance measure. / Shah, Rajen D.; Peters, Jonas.

In: Annals of Statistics, Vol. 48, No. 3, 2020, pp. 1514-1538.

Research output: Contribution to journal › Journal article › Research › peer-review

Harvard

Shah, RD & Peters, J 2020, 'The hardness of conditional independence testing and the generalised covariance measure', Annals of Statistics, vol. 48, no. 3, pp. 1514-1538. https://doi.org/10.1214/19-AOS1857

APA

Shah, R. D., & Peters, J. (2020). The hardness of conditional independence testing and the generalised covariance measure. Annals of Statistics, 48(3), 1514-1538. https://doi.org/10.1214/19-AOS1857

Vancouver

Shah RD, Peters J. The hardness of conditional independence testing and the generalised covariance measure. Annals of Statistics. 2020;48(3):1514-1538. https://doi.org/10.1214/19-AOS1857

Author

Shah, Rajen D. ; Peters, Jonas. / The hardness of conditional independence testing and the generalised covariance measure. In: Annals of Statistics. 2020 ; Vol. 48, No. 3. pp. 1514-1538.

Bibtex

@article{2c36de620e16443b9d0993349e646841,
title = "The hardness of conditional independence testing and the generalised covariance measure",
abstract = "It is a common saying that testing for conditional independence, that is, testing whether two random vectors X and Y are independent given Z, is a hard statistical problem if Z is a continuous random variable (or vector). In this paper, we prove that conditional independence is indeed a particularly difficult hypothesis to test for. Valid statistical tests are required to have a size that is smaller than a pre-defined significance level, and different tests usually have power against a different class of alternatives. We prove that a valid test for conditional independence does not have power against any alternative. Given the nonexistence of a uniformly valid conditional independence test, we argue that tests must be designed so that their suitability for a particular problem may be judged easily. To address this need, we propose, in the case where X and Y are univariate, to nonlinearly regress X on Z and Y on Z, and then compute a test statistic based on the sample covariance between the residuals, which we call the generalised covariance measure (GCM). We prove that the validity of this form of test relies almost entirely on the weak requirement that the regression procedures are able to estimate the conditional means of X given Z and of Y given Z at a slow rate. We extend the methodology to handle settings where X and Y may be multivariate or even high dimensional. While our general procedure can be tailored to the setting at hand by combining it with any regression technique, we develop the theoretical guarantees for kernel ridge regression. A simulation study shows that the test based on the GCM is competitive with state-of-the-art conditional independence tests. Code is available as the R package GeneralisedCovarianceMeasure on CRAN.",
keywords = "Conditional independence, Hypothesis testing, Kernel ridge regression, Testability, Wild bootstrap",
author = "Shah, {Rajen D.} and Jonas Peters",
year = "2020",
doi = "10.1214/19-AOS1857",
language = "English",
volume = "48",
pages = "1514--1538",
journal = "Annals of Statistics",
issn = "0090-5364",
publisher = "Institute of Mathematical Statistics",
number = "3",
}

RIS

TY - JOUR

T1 - The hardness of conditional independence testing and the generalised covariance measure

AU - Shah, Rajen D.

AU - Peters, Jonas

PY - 2020

Y1 - 2020

N2 - It is a common saying that testing for conditional independence, that is, testing whether two random vectors X and Y are independent given Z, is a hard statistical problem if Z is a continuous random variable (or vector). In this paper, we prove that conditional independence is indeed a particularly difficult hypothesis to test for. Valid statistical tests are required to have a size that is smaller than a pre-defined significance level, and different tests usually have power against a different class of alternatives. We prove that a valid test for conditional independence does not have power against any alternative. Given the nonexistence of a uniformly valid conditional independence test, we argue that tests must be designed so that their suitability for a particular problem may be judged easily. To address this need, we propose, in the case where X and Y are univariate, to nonlinearly regress X on Z and Y on Z, and then compute a test statistic based on the sample covariance between the residuals, which we call the generalised covariance measure (GCM). We prove that the validity of this form of test relies almost entirely on the weak requirement that the regression procedures are able to estimate the conditional means of X given Z and of Y given Z at a slow rate. We extend the methodology to handle settings where X and Y may be multivariate or even high dimensional. While our general procedure can be tailored to the setting at hand by combining it with any regression technique, we develop the theoretical guarantees for kernel ridge regression. A simulation study shows that the test based on the GCM is competitive with state-of-the-art conditional independence tests. Code is available as the R package GeneralisedCovarianceMeasure on CRAN.

AB - It is a common saying that testing for conditional independence, that is, testing whether two random vectors X and Y are independent given Z, is a hard statistical problem if Z is a continuous random variable (or vector). In this paper, we prove that conditional independence is indeed a particularly difficult hypothesis to test for. Valid statistical tests are required to have a size that is smaller than a pre-defined significance level, and different tests usually have power against a different class of alternatives. We prove that a valid test for conditional independence does not have power against any alternative. Given the nonexistence of a uniformly valid conditional independence test, we argue that tests must be designed so that their suitability for a particular problem may be judged easily. To address this need, we propose, in the case where X and Y are univariate, to nonlinearly regress X on Z and Y on Z, and then compute a test statistic based on the sample covariance between the residuals, which we call the generalised covariance measure (GCM). We prove that the validity of this form of test relies almost entirely on the weak requirement that the regression procedures are able to estimate the conditional means of X given Z and of Y given Z at a slow rate. We extend the methodology to handle settings where X and Y may be multivariate or even high dimensional. While our general procedure can be tailored to the setting at hand by combining it with any regression technique, we develop the theoretical guarantees for kernel ridge regression. A simulation study shows that the test based on the GCM is competitive with state-of-the-art conditional independence tests. Code is available as the R package GeneralisedCovarianceMeasure on CRAN.

KW - Conditional independence

KW - Hypothesis testing

KW - Kernel ridge regression

KW - Testability

KW - Wild bootstrap

UR - http://www.scopus.com/inward/record.url?scp=85090443787&partnerID=8YFLogxK

U2 - 10.1214/19-AOS1857

DO - 10.1214/19-AOS1857

M3 - Journal article

AN - SCOPUS:85090443787

VL - 48

SP - 1514

EP - 1538

JO - Annals of Statistics

JF - Annals of Statistics

SN - 0090-5364

IS - 3

ER -
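
Example

The abstract describes the GCM construction for univariate X and Y: regress X on Z and Y on Z, then normalise the sample covariance of the residual products and compare it to a standard normal limit. The following is a minimal R sketch of that idea only; the regression method (smoothing splines) and the simulated data are illustrative assumptions, not the authors' reference implementation, which is the GeneralisedCovarianceMeasure package on CRAN.

# Minimal sketch of the GCM idea for univariate X and Y.
# Smoothing splines stand in for "any regression technique"; see the
# GeneralisedCovarianceMeasure package on CRAN for the authors' implementation.
set.seed(1)
n <- 500
z <- runif(n)
x <- sin(2 * pi * z) + 0.3 * rnorm(n)  # X depends on Z only
y <- cos(2 * pi * z) + 0.3 * rnorm(n)  # Y depends on Z only, so H0 holds

fx <- smooth.spline(z, x)              # estimate conditional mean of X given Z
fy <- smooth.spline(z, y)              # estimate conditional mean of Y given Z
rx <- x - predict(fx, z)$y             # residuals of X on Z
ry <- y - predict(fy, z)$y             # residuals of Y on Z

r    <- rx * ry                        # products of residuals
Tn   <- sqrt(n) * mean(r) / sd(r)      # normalised sample covariance of residuals
pval <- 2 * pnorm(-abs(Tn))            # approximately N(0, 1) under H0
c(statistic = Tn, p.value = pval)

Under the null above (X and Y depend on Z but not on each other), Tn should look like a standard normal draw, so the p-value is approximately uniform; making y depend on x directly would push |Tn| up and the p-value down.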
