We define a counterfactual explanation as a set of features that is weakly causal and irreducible. Weakly causal means that removing the set of features from the instance causes the model decision to change.3 Irreducible means that removing any proper subset of the explanation would not change the model decision. The importance of an explanation being weakly causal is straightforward: the decision would have been different if not for the presence of this set of features. The “weakness” comes from the fact that not all of the features in the set may actually be necessary. The irreducibility condition serves to avoid including superfluous features. More formally, consider an instance $x$ consisting of features $X$, $x = \{x_1, x_2, \ldots, x_m\}$, for which the decision-making system $f: X \rightarrow \{1, 2, \ldots, k\}$ gives decision $d$. Then, a set of features $E \subseteq x$ is a counterfactual explanation for $f(x) = d$ if and only if:

1. removing $E$ changes the decision: $f(x \setminus E) \neq d$ (weakly causal); and
2. removing any proper subset $E' \subsetneq E$ does not change the decision: $f(x \setminus E') = d$ (irreducible).
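To make the definition concrete, the short Python sketch below checks the two conditions for a given model and instance. It is illustrative only, not part of the original formulation: it assumes that “removing” a set of features means resetting their values to a fixed baseline, and the toy model, feature names, and baseline values are hypothetical.

from itertools import combinations

def remove(instance, features, baseline):
    # "Remove" features by resetting them to an assumed baseline value.
    modified = dict(instance)
    for f in features:
        modified[f] = baseline[f]
    return modified

def is_counterfactual_explanation(model, instance, features, baseline):
    # Condition 1 (weakly causal): removing the full set changes the decision.
    # Condition 2 (irreducible): removing any proper subset leaves the decision unchanged.
    original = model(instance)
    if model(remove(instance, features, baseline)) == original:
        return False
    for size in range(1, len(features)):
        for subset in combinations(features, size):
            if model(remove(instance, subset, baseline)) != original:
                return False
    return True

# Hypothetical toy model: reject when income is low and debt is high.
def toy_model(x):
    return "reject" if x["income"] < 40 and x["debt"] > 10 else "accept"

applicant = {"income": 30, "debt": 15, "age": 45}
baseline = {"income": 60, "debt": 0, "age": 45}  # assumed "removal" values

print(is_counterfactual_explanation(toy_model, applicant, ["income"], baseline))          # True: income alone flips the decision
print(is_counterfactual_explanation(toy_model, applicant, ["income", "debt"], baseline))  # False: reducible, income alone suffices

With this (assumed) baseline, {income} and {debt} are each counterfactual explanations on their own, while their union is not, because it is reducible.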