Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game

Publication: Contribution to book/anthology/report › Article in proceedings › Research › Peer-reviewed

Standard

Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game. / Reisach, Alexander G.; Seiler, Christof; Weichwald, Sebastian.

Advances in Neural Information Processing Systems 34 (NeurIPS). NeurIPS Proceedings, 2021. pp. 1-13.


Harvard

Reisach, AG, Seiler, C & Weichwald, S 2021, Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game. in Advances in Neural Information Processing Systems 34 (NeurIPS). NeurIPS Proceedings, pp. 1-13, 35th Conference on Neural Information Processing Systems (NeurIPS 2021), Virtual, 06/12/2021. <https://proceedings.neurips.cc/paper/2021/file/e987eff4a7c7b7e580d659feb6f60c1a-Paper.pdf>

APA

Reisach, A. G., Seiler, C., & Weichwald, S. (2021). Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game. In Advances in Neural Information Processing Systems 34 (NeurIPS) (pp. 1-13). NeurIPS Proceedings. https://proceedings.neurips.cc/paper/2021/file/e987eff4a7c7b7e580d659feb6f60c1a-Paper.pdf

Vancouver

Reisach AG, Seiler C, Weichwald S. Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game. In Advances in Neural Information Processing Systems 34 (NeurIPS). NeurIPS Proceedings. 2021. p. 1-13

Author

Reisach, Alexander G.; Seiler, Christof; Weichwald, Sebastian. / Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game. Advances in Neural Information Processing Systems 34 (NeurIPS). NeurIPS Proceedings, 2021. pp. 1-13

Bibtex

@inproceedings{3eca881c946e40e687fd78801ad36f70,
title = "Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game",
abstract = "Simulated DAG models may exhibit properties that, perhaps inadvertently, render their structure identifiable and unexpectedly affect structure learning algorithms. Here, we show that marginal variance tends to increase along the causal order for generically sampled additive noise models. We introduce varsortability as a measure of the agreement between the order of increasing marginal variance and the causal order. For commonly sampled graphs and model parameters, we show that the remarkable performance of some continuous structure learning algorithms can be explained by high varsortability and matched by a simple baseline method. Yet, this performance may not transfer to real-world data where varsortability may be moderate or dependent on the choice of measurement scales. On standardized data, the same algorithms fail to identify the ground-truth DAG or its Markov equivalence class. While standardization removes the pattern in marginal variance, we show that data generating processes that incur high varsortability also leave a distinct covariance pattern that may be exploited even after standardization. Our findings challenge the significance of generic benchmarks with independently drawn parameters. The code is available at https://github.com/Scriddie/Varsortability.",
author = "Reisach, {Alexander G.} and Christof Seiler and Sebastian Weichwald",
year = "2021",
language = "English",
pages = "1--13",
booktitle = "Advances in Neural Information Processing Systems 34 (NeurIPS)",
publisher = "NeurIPS Proceedings",
note = "35th Conference on Neural Information Processing Systems (NeurIPS 2021) ; Conference date: 06-12-2021 Through 14-12-2021",

}

RIS

TY - GEN

T1 - Beware of the Simulated DAG! Causal Discovery Benchmarks May Be Easy to Game

AU - Reisach, Alexander G.

AU - Seiler, Christof

AU - Weichwald, Sebastian

PY - 2021

Y1 - 2021

N2 - Simulated DAG models may exhibit properties that, perhaps inadvertently, render their structure identifiable and unexpectedly affect structure learning algorithms. Here, we show that marginal variance tends to increase along the causal order for generically sampled additive noise models. We introduce varsortability as a measure of the agreement between the order of increasing marginal variance and the causal order. For commonly sampled graphs and model parameters, we show that the remarkable performance of some continuous structure learning algorithms can be explained by high varsortability and matched by a simple baseline method. Yet, this performance may not transfer to real-world data where varsortability may be moderate or dependent on the choice of measurement scales. On standardized data, the same algorithms fail to identify the ground-truth DAG or its Markov equivalence class. While standardization removes the pattern in marginal variance, we show that data generating processes that incur high varsortability also leave a distinct covariance pattern that may be exploited even after standardization. Our findings challenge the significance of generic benchmarks with independently drawn parameters. The code is available at https://github.com/Scriddie/Varsortability.

AB - Simulated DAG models may exhibit properties that, perhaps inadvertently, render their structure identifiable and unexpectedly affect structure learning algorithms. Here, we show that marginal variance tends to increase along the causal order for generically sampled additive noise models. We introduce varsortability as a measure of the agreement between the order of increasing marginal variance and the causal order. For commonly sampled graphs and model parameters, we show that the remarkable performance of some continuous structure learning algorithms can be explained by high varsortability and matched by a simple baseline method. Yet, this performance may not transfer to real-world data where varsortability may be moderate or dependent on the choice of measurement scales. On standardized data, the same algorithms fail to identify the ground-truth DAG or its Markov equivalence class. While standardization removes the pattern in marginal variance, we show that data generating processes that incur high varsortability also leave a distinct covariance pattern that may be exploited even after standardization. Our findings challenge the significance of generic benchmarks with independently drawn parameters. The code is available at https://github.com/Scriddie/Varsortability.

M3 - Article in proceedings

SP - 1

EP - 13

BT - Advances in Neural Information Processing Systems 34 (NeurIPS)

PB - NeurIPS Proceedings

T2 - 35th Conference on Neural Information Processing Systems (NeurIPS 2021)

Y2 - 6 December 2021 through 14 December 2021

ER -
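The abstract defines varsortability as the agreement between the order of increasing marginal variance and the causal order. A minimal pure-Python sketch of that idea is below: it counts, over all directed paths in the ground-truth DAG, the fraction whose start node has strictly lower marginal variance than its end node (ties count one half). This is an illustrative reimplementation from the abstract's description, not the authors' reference code; see the linked GitHub repository for the official implementation.

```python
import random
from statistics import pvariance

def varsortability(X, A, tol=1e-9):
    """Fraction of directed paths whose start node has lower marginal
    variance than its end node.

    X: list of samples, each a sequence of d values (one per node).
    A: d x d adjacency matrix (list of lists), A[i][j] = 1 for edge i -> j.
    """
    d = len(A)
    var = [pvariance(col) for col in zip(*X)]  # marginal variance per node

    def matmul(P, Q):  # plain matrix product; (A^k)[i][j] > 0 iff a length-k path i -> j exists
        return [[sum(P[i][k] * Q[k][j] for k in range(d)) for j in range(d)]
                for i in range(d)]

    Ak = [row[:] for row in A]
    num = den = 0.0
    for _ in range(d - 1):  # paths of length 1 .. d-1
        for i in range(d):
            for j in range(d):
                if Ak[i][j] > 0:
                    den += 1
                    diff = var[j] - var[i]
                    if diff > tol:
                        num += 1.0       # variance increases along the path
                    elif abs(diff) <= tol:
                        num += 0.5       # tie
        Ak = matmul(Ak, A)
    return num / den if den else float("nan")

# Example: chain X1 -> X2 -> X3 with unit edge weights and additive noise;
# marginal variance grows along the causal order, so varsortability is 1.
random.seed(0)
n = 2000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [a + random.gauss(0, 1) for a in x1]
x3 = [b + random.gauss(0, 1) for b in x2]
X = list(zip(x1, x2, x3))
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
print(varsortability(X, A))
```

On such a generically sampled chain the measure is 1.0, matching the abstract's observation that marginal variance tends to increase along the causal order; on standardized real-world data it would typically be lower.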
