Causal scientific explanations from machine learning
Abstract
Machine learning is increasingly used in scientific contexts, from the recent breakthroughs of AlphaFold2 in protein structure prediction to the use of ML in parametrizations for large climate and astronomy models. Yet it is unclear whether we can obtain scientific explanations from such models. I argue that when machine learning is used to conduct causal inference, we can give a new positive answer to this question. However, these are purpose-built models, and technical results show that standard machine learning models cannot be used for the same type of causal inference. Instead, there is a pathway to causal explanations from predictive ML models through new explainability techniques; specifically, new methods to extract structural equation models from such ML models. The extracted models are likely to suffer from issues, though: they will often fail to account for confounders and colliders, and may deliver outright incorrect causal graphs due to the tendency of ML models to violate physical laws such as the conservation of energy. Extracted graphs are therefore a starting point for new explanations, but predictive accuracy is no guarantee of good explanations.
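To illustrate the confounding worry, here is a minimal sketch (my own, not taken from the paper), assuming Python with numpy and scikit-learn: an unobserved confounder Z drives both X and Y, a predictive model trained on X alone fits Y well, and a structural equation naively extracted from that model (X causes Y) gets the causal story wrong despite the high predictive accuracy.

    # Hypothetical toy example: Z confounds X and Y; X has no causal effect on Y.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 10_000
    z = rng.normal(size=n)                  # unobserved confounder
    x = z + 0.1 * rng.normal(size=n)        # X is caused by Z; X does not cause Y
    y = 2.0 * z + 0.1 * rng.normal(size=n)  # Y is caused by Z alone

    # A purely predictive model sees only X and still fits Y well, since X proxies Z.
    model = LinearRegression().fit(x.reshape(-1, 1), y)
    print("R^2:", model.score(x.reshape(-1, 1), y))  # ~0.98
    print("coefficient on X:", model.coef_[0])       # ~2.0, yet the true causal effect is 0

Reading the fitted coefficient off as a structural equation, y = 2x + noise, would invert the actual causal structure; this is one sense in which predictive accuracy is no guarantee of a good explanation.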