However, explaining decisions may not be straightforward even for small symbolic models. For example, suppose the model is a binary decision tree that takes two inputs and predicts 1 whenever either input is 1. Then, by examining the root-to-leaf path taken for a given instance, one may reach the wrong conclusion that (for one of the leaves predicting 1) one of the inputs needs to be different from 1 for the tree to predict 1, even though the prediction would be unchanged if that input were 1.
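To make this concrete, the following minimal sketch (our own illustration; the function and variable names are hypothetical) hand-codes such a tree, which tests the first input at the root, and returns the conditions along the path taken:

```python
# A hand-coded binary decision tree computing OR(x1, x2):
# the root tests x1; the branch where x1 != 1 then tests x2.
def tree_predict_with_path(x1, x2):
    """Return (prediction, path), where path lists the conditions tested."""
    if x1 == 1:
        return 1, ["x1 == 1"]
    elif x2 == 1:
        return 1, ["x1 != 1", "x2 == 1"]
    else:
        return 0, ["x1 != 1", "x2 != 1"]

# For the instance (x1=0, x2=1), the path contains the condition
# "x1 != 1", suggesting x1 must differ from 1 for the prediction 1:
pred, path = tree_predict_with_path(0, 1)
print(pred, path)  # 1 ['x1 != 1', 'x2 == 1']

# Yet flipping x1 to 1 leaves the prediction unchanged, so the
# condition "x1 != 1" is not actually necessary for predicting 1.
print(tree_predict_with_path(1, 1)[0])  # still 1
```

The sketch illustrates the pitfall: the path conditions are sufficient for the prediction, but reading them as necessary conditions yields the wrong conclusion above.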