Some researchers (Martens and Provost, 2014; Wachter, Mittelstadt and Russell, 2017) have proposed to explain models from a causal perspective: when the question we seek to answer is “why did the model-based system make a specific decision?”, we ask specifically, “which data inputs caused the system to make its decision?” This approach is advantageous because it standardizes the form that an explanation can take; it does not require all features to be part of the explanation, and the explanations can be separated from the specifics of the model. The main contribution of this paper is a multi-faceted generalization of this perspective that provides explanations for data-driven decisions rather than model predictions. Our framework for explanations (i) can address features with arbitrary data types, (ii) is model-agnostic, (iii) scales to thousands of features, and (iv) can take into account the potential cost of changing features as a result of the explanation. We present the framework and an associated explanation-finding algorithm. Then, to showcase situations in which counterfactual explanations explain data-driven decisions better than feature-importance weights, we apply the algorithm to data on credit-investment decisions.
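To make the counterfactual perspective concrete, the following is an illustrative sketch (not the paper's algorithm): a greedy search that removes data inputs from an instance until a binary classifier's positive decision flips, returning the removed inputs as the causal explanation. The scoring function and feature vector here are hypothetical placeholders; any model's scoring function could be substituted, which is what makes the approach model-agnostic.

```python
def counterfactual_explanation(score, x, threshold=0.5):
    """Greedily zero out features until the decision flips.

    Returns the list of removed feature indices, read as: "the decision
    was positive because of these inputs; without them it would not be."
    Returns None if no removal set flips the decision.
    """
    x = list(x)
    removed = []
    present = [i for i, v in enumerate(x) if v]
    while score(x) >= threshold and present:
        # Pick the present feature whose removal lowers the score most.
        best_i, best_s = None, None
        for i in present:
            trial = list(x)
            trial[i] = 0
            s = score(trial)
            if best_s is None or s < best_s:
                best_i, best_s = i, s
        x[best_i] = 0
        present.remove(best_i)
        removed.append(best_i)
    return removed if score(x) < threshold else None

# Hypothetical linear scorer, purely for demonstration.
weights = [0.4, 0.3, 0.2, -0.1]
score = lambda x: sum(w * v for w, v in zip(weights, x))

explanation = counterfactual_explanation(score, [1, 1, 1, 1])
# → [0]: removing feature 0 alone drops the score below the threshold.
```

Note that, unlike a full feature-importance vector, the explanation here names only the small set of inputs that actually caused the decision, and it never inspects the model's internals.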