Discussion of Anne’s OCIS Talk: “What are we discovering? Two perspectives on interpretable evaluation of causal discovery algorithms”
Vanessa Didelez
Leibniz Institute for Prevention Research and Epidemiology – BIPS
Faculty of Mathematics and Computer Science, University of Bremen, Germany
OCIS – January 2025
Causal DAGs?
Quoting Dominik Janzing (keynote lecture, UAI, 2024): “All DAGs are wrong, but some are useful”
“causal discovery… not only are the results often wrong; even worse, we rarely know whether they are wrong; and even worse, we rarely understand what ‘wrong’ means”
Validation in Causal Inference
Ultimate validation: Carry out the relevant intervention(s) and check if your causal claims hold up
► Choose evaluation to match purpose of analysis
Validation in Causal Inference
In simulations: what is the baseline?
Real-world ground truth very rarely available
Random Guessing
Random Guessing
Random guess (proportion true)
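To make the “proportion true” baseline concrete, here is a small sketch of my own (not from the talk): if we guess each possible directed edge at random, the expected precision is simply the proportion of node pairs that are truly linked. The function name and the toy graph are illustrative assumptions.

```python
import itertools
import random

def random_guess_precision(true_edges, n_nodes, n_trials=2000, seed=0):
    """Monte Carlo estimate of the precision achieved by random guessing:
    guess each possible directed edge independently with probability 0.5."""
    rng = random.Random(seed)
    pairs = list(itertools.permutations(range(n_nodes), 2))
    true_set = set(true_edges)
    precisions = []
    for _ in range(n_trials):
        guessed = [e for e in pairs if rng.random() < 0.5]
        if guessed:  # skip the (vanishingly rare) empty guess
            precisions.append(sum(e in true_set for e in guessed) / len(guessed))
    return sum(precisions) / len(precisions)

# A sparse 5-node chain DAG: 4 true edges out of 20 ordered pairs,
# so random guessing attains precision near 4/20 = 0.2.
truth = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(random_guess_precision(truth, 5))  # close to 0.2
```

Any discovery algorithm should clear this baseline; the point of the slide is that the baseline depends only on graph density, not on anything causal.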
Random Guessing
Anne: “can be viewed as negative control concept”
I disagree:
► These are two different things
What is Random Guessing?
Caution
Evaluation against some form of random guessing:
► Still only about “statistical” model fit…
… not about the causal nature
► “Causality” needs evaluation under (something like) interventions
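A toy sketch of my own to illustrate why interventions discriminate where observational fit does not: in a two-variable linear SCM, the graphs X → Y and Y → X can fit the observational data equally well, but they predict different effects of do(X = 1). The SCM and its coefficients are illustrative assumptions, not from the talk.

```python
import random

def sample_truth(n, do_x=None, seed=0):
    """Sample from the true SCM X -> Y (with Y = 2*X + noise),
    optionally under the intervention do(X = do_x)."""
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = do_x if do_x is not None else rng.gauss(0, 1)
        y = 2 * x + rng.gauss(0, 1)
        data.append((x, y))
    return data

obs = sample_truth(10_000)
intv = sample_truth(10_000, do_x=1.0, seed=1)

mean_y_obs = sum(y for _, y in obs) / len(obs)
mean_y_intv = sum(y for _, y in intv) / len(intv)

# The graph X -> Y predicts E[Y | do(X=1)] = 2; the reversed graph
# Y -> X predicts no change, i.e. E[Y | do(X=1)] = E[Y] (about 0).
print(round(mean_y_obs, 1), round(mean_y_intv, 1))  # ~0.0 and ~2.0
```

Only data generated under the intervention separates the two candidate graphs — which is exactly the sense in which “causality” needs evaluation under (something like) interventions.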
Expert- & Data-Driven DAGs
Expert Constructed DAGs
In my experience, expert knowledge often does not come in the form of individual directed edges
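One common form such knowledge does take is temporal tiers: variables in a later tier cannot cause variables in an earlier one. A minimal sketch, with hypothetical variables and tier assignments of my own choosing:

```python
# Expert knowledge as temporal tiers rather than individual edges:
# a variable in a later tier cannot cause one in an earlier tier.
tiers = {"age": 0, "education": 1, "income": 2, "health": 2}

def allowed(cause, effect):
    """An edge cause -> effect is admissible iff it does not point
    from a later tier back to an earlier one."""
    return tiers[cause] <= tiers[effect]

print(allowed("age", "income"))   # True
print(allowed("income", "age"))   # False
```

A single tier assignment like this constrains many edges at once, which is why tier-style knowledge is often easier to elicit from experts than edge-by-edge judgments.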
Combining Experts & Data for DAGs
Various proposals exist in the literature
Important challenge:
Thanks for the nice paper!
And thanks for your attention!
Vanessa Didelez
didelez@leibniz-bips.de
Contact
www.leibniz-bips.de/en
Leibniz Institute for Prevention Research and Epidemiology – BIPS
Achterstraße 30
D-28359 Bremen