Asking “why” a specific event, like a person’s illness or a plant’s extinction, happened is a core part of being human. From the time we are children, we seek explanations for events, both to satisfy our curiosity about the world and to understand how to behave within it. Causal explanations are vital to many fields, including medicine (for diagnosis), law (to determine responsibility), and science (to understand biological and environmental processes).
Despite the importance of explanation for all of these purposes, most research focuses on type-level causes (e.g., aggregating data to find the causes of a disease), and no current approach can reliably explain everyday events. Computational methods remain mainly theoretical, psychological studies use artificial setups disconnected from everyday life, and philosophical theories do not consider how people actually reason about causality. Computer science and philosophy have lacked extensive validation of token causality frameworks against human judgment, leading to methods that conflict with human intuitions.
In this project, we aim to advance causal explanation by uniting philosophy, computer science, and psychology. We bridge these areas to develop a rigorous approach to identifying token causes that is consistent with, and inspired by, human behavior. We will: 1) conduct extensive human-subjects studies to inform and validate our theories and algorithms, 2) develop a new theory of token causality and an algorithm for identifying token causes, 3) develop an interactive system for causal explanation, and 4) organize an interdisciplinary conference on causal explanation.
Our work will address a core question of human reasoning and lead to a novel approach to token causality, grounded in philosophy and psychology. The methods and insights developed in the project will support practical applications in medicine, law, environmental science, and other areas, and will lay the foundation for a new understanding of blame and moral responsibility.