Local Independence Testing for Point Processes

Research output: Contribution to journal › Journal article › Research › peer-review

Standard

Local Independence Testing for Point Processes. / Thams, Nikolaj; Hansen, Niels Richard.

In: IEEE Transactions on Neural Networks and Learning Systems, Vol. 35, No. 4, 2024, p. 4902-4910.

Harvard

Thams, N & Hansen, NR 2024, 'Local Independence Testing for Point Processes', IEEE Transactions on Neural Networks and Learning Systems, vol. 35, no. 4, pp. 4902-4910. https://doi.org/10.1109/TNNLS.2023.3335265

APA

Thams, N., & Hansen, N. R. (2024). Local Independence Testing for Point Processes. IEEE Transactions on Neural Networks and Learning Systems, 35(4), 4902-4910. https://doi.org/10.1109/TNNLS.2023.3335265

Vancouver

Thams N, Hansen NR. Local Independence Testing for Point Processes. IEEE Transactions on Neural Networks and Learning Systems. 2024;35(4):4902-4910. https://doi.org/10.1109/TNNLS.2023.3335265

Author

Thams, Nikolaj ; Hansen, Niels Richard. / Local Independence Testing for Point Processes. In: IEEE Transactions on Neural Networks and Learning Systems. 2024 ; Vol. 35, No. 4. pp. 4902-4910.

Bibtex

@article{6c66b86413b74335b829583ca3f568f2,
title = "Local Independence Testing for Point Processes",
abstract = "Constraint-based causal structure learning for point processes requires empirical tests of local independence. Existing tests require strong model assumptions, e.g., that the true data-generating model is a Hawkes process with no latent confounders. Even when restricting attention to Hawkes processes, latent confounders are a major technical difficulty because a marginalized process will generally not be a Hawkes process itself. We introduce an expansion similar to Volterra expansions as a tool to represent marginalized intensities. Our main theoretical result is that such expansions can approximate the true marginalized intensity arbitrarily well. Based on this, we propose a test of local independence and investigate its properties in real and simulated data.",
keywords = "Causal discovery, Data models, Heuristic algorithms, Kernel, Learning systems, local independence, Mathematical models, Neurons, neuroscience, point processes, Testing",
author = "Nikolaj Thams and Hansen, {Niels Richard}",
note = "Publisher Copyright: IEEE",
year = "2024",
doi = "10.1109/TNNLS.2023.3335265",
language = "English",
volume = "35",
pages = "4902--4910",
journal = "IEEE Transactions on Neural Networks and Learning Systems",
issn = "2162-237X",
publisher = "Institute of Electrical and Electronics Engineers",
number = "4",

}

RIS

TY - JOUR

T1 - Local Independence Testing for Point Processes

AU - Thams, Nikolaj

AU - Hansen, Niels Richard

N1 - Publisher Copyright: IEEE

PY - 2024

Y1 - 2024

N2 - Constraint-based causal structure learning for point processes requires empirical tests of local independence. Existing tests require strong model assumptions, e.g., that the true data-generating model is a Hawkes process with no latent confounders. Even when restricting attention to Hawkes processes, latent confounders are a major technical difficulty because a marginalized process will generally not be a Hawkes process itself. We introduce an expansion similar to Volterra expansions as a tool to represent marginalized intensities. Our main theoretical result is that such expansions can approximate the true marginalized intensity arbitrarily well. Based on this, we propose a test of local independence and investigate its properties in real and simulated data.

AB - Constraint-based causal structure learning for point processes requires empirical tests of local independence. Existing tests require strong model assumptions, e.g., that the true data-generating model is a Hawkes process with no latent confounders. Even when restricting attention to Hawkes processes, latent confounders are a major technical difficulty because a marginalized process will generally not be a Hawkes process itself. We introduce an expansion similar to Volterra expansions as a tool to represent marginalized intensities. Our main theoretical result is that such expansions can approximate the true marginalized intensity arbitrarily well. Based on this, we propose a test of local independence and investigate its properties in real and simulated data.

KW - Causal discovery

KW - Data models

KW - Heuristic algorithms

KW - Kernel

KW - Learning systems

KW - local independence

KW - Mathematical models

KW - Neurons

KW - neuroscience

KW - point processes

KW - Testing

U2 - 10.1109/TNNLS.2023.3335265

DO - 10.1109/TNNLS.2023.3335265

M3 - Journal article

C2 - 38109252

AN - SCOPUS:85181554691

VL - 35

SP - 4902

EP - 4910

JO - IEEE Transactions on Neural Networks and Learning Systems

JF - IEEE Transactions on Neural Networks and Learning Systems

SN - 2162-237X

IS - 4

ER -
