Compositional Abstraction Error and a Category of Causal Models

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Interventional causal models describe several joint distributions over some variables used to describe a system, one for each intervention setting. They provide a formal recipe for how to move between the different joint distributions and make predictions about the variables upon intervening on the system. Yet, it is difficult to formalise how we may change the underlying variables used to describe the system, say moving from fine-grained to coarse-grained variables. Here, we argue that compositionality is a desideratum for such model transformations and the associated errors: When abstracting a reference model M iteratively, first obtaining M′ and then further simplifying that to obtain M″, we expect the composite transformation from M to M″ to exist and its error to be bounded by the errors incurred by each individual transformation step. Category theory, the study of mathematical objects via compositional transformations between them, offers a natural language to develop our framework for model transformations and abstractions. We introduce a category of finite interventional causal models and, leveraging the theory of enriched categories, prove the desired compositionality properties for our framework.
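The compositionality desideratum described in the abstract can be summarised as a short inequality. The LaTeX sketch below is illustrative only: the symbols α, β and e, and the additive form of the bound, are assumptions suggested by the abstract's description rather than notation taken from the paper.

    % Illustrative sketch only: the symbols \alpha, \beta and e, and the additive
    % form of the bound, are assumptions based on the abstract, not the paper's notation.
    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    Two abstraction steps $\alpha \colon M \to M'$ and $\beta \colon M' \to M''$
    compose to an abstraction $\beta \circ \alpha \colon M \to M''$, and the
    compositionality desideratum asks that its error be controlled by the errors
    of the individual steps, e.g.\ additively:
    \[
      e(\beta \circ \alpha) \;\le\; e(\alpha) + e(\beta).
    \]
    \end{document}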

Original language: English
Title of host publication: Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
Publisher: PMLR
Publication date: 2021
Pages: 1013-1023
Publication status: Published - 2021
Event: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021 - Virtual, Online
Duration: 27 Jul 2021 - 30 Jul 2021

Conference

Conference: 37th Conference on Uncertainty in Artificial Intelligence, UAI 2021
City: Virtual, Online
Period: 27/07/2021 - 30/07/2021
Series: Proceedings of Machine Learning Research
Volume: 161
ISSN: 1938-7228
