Explainable Artificial Intelligence (XAI) aims to make the outputs and decisions of artificial intelligence comprehensible to humans. One key characteristic of effective human explanations is their selectivity. While humans usually have no trouble intuitively deciding what information to include in an explanation and what to leave out, AI systems lack this skill, which makes selectivity a challenge for XAI.

We attempted to achieve human-like selectivity in machine-generated explanations of plans in the Blocks World domain by applying machine learning to human explanations of plans in the same domain. We performed top-down induction of a binary decision tree from these human explanations, yielding a predictive model that, given a specific action within a plan solving a specific task, predicts the type of explanation a human would most likely give for that action.
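As an illustration of the approach, the sketch below performs a minimal top-down induction of a binary decision tree from labelled examples. The feature names (e.g. `clears_block`) and explanation labels are hypothetical stand-ins, not the actual attributes or explanation types used in our model; the split criterion here is a simple misclassification count rather than whichever purity measure the real induction used.

```python
from collections import Counter

# Hypothetical training data: binary features of a plan step, mapped to the
# explanation type a human gave for it. Names are illustrative only.
EXAMPLES = [
    ({"clears_block": 1, "achieves_goal": 0}, "enables-later-move"),
    ({"clears_block": 1, "achieves_goal": 0}, "enables-later-move"),
    ({"clears_block": 0, "achieves_goal": 1}, "directly-achieves-goal"),
    ({"clears_block": 0, "achieves_goal": 1}, "directly-achieves-goal"),
    ({"clears_block": 0, "achieves_goal": 0}, "no-explanation-needed"),
]

def majority(examples):
    """Most common explanation label among the examples."""
    return Counter(label for _, label in examples).most_common(1)[0][0]

def induce(examples, features):
    """Top-down induction: greedily pick the binary split that leaves
    the fewest misclassified examples, then recurse on both branches."""
    labels = {label for _, label in examples}
    if len(labels) == 1 or not features:
        return majority(examples)  # leaf: predicted explanation type

    def errors(f):
        err = 0
        for value in (0, 1):
            part = [e for e in examples if e[0][f] == value]
            if part:
                maj = majority(part)
                err += sum(1 for _, lab in part if lab != maj)
        return err

    best = min(features, key=errors)
    yes = [e for e in examples if e[0][best] == 1]
    no = [e for e in examples if e[0][best] == 0]
    if not yes or not no:
        return majority(examples)  # split was uninformative
    rest = [f for f in features if f != best]
    return (best, induce(yes, rest), induce(no, rest))

def predict(tree, feats):
    """Walk the tree from the root to a leaf label."""
    while isinstance(tree, tuple):
        feature, yes_branch, no_branch = tree
        tree = yes_branch if feats[feature] == 1 else no_branch
    return tree

tree = induce(EXAMPLES, ["clears_block", "achieves_goal"])
print(predict(tree, {"clears_block": 0, "achieves_goal": 1}))
# → directly-achieves-goal
```

Given a new (action, plan, task) triple encoded as such features, the induced tree selects one explanation type, which is then rendered as the explanation text.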

This website demonstrates the application of the induced decision tree to generate explanations of solutions to user-defined planning tasks. Define a task by specifying the starting state and the goal state, and the planner will find the optimal plan and explain it.
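To make "specifying the starting state and the goal state" concrete, here is one possible way such a Blocks World task could be written down, with each state given as a list of stacks from bottom to top. This is purely illustrative; the input format actually used by the site may differ.

```python
# Illustrative Blocks World task (not the site's actual input format).
# Each state is a list of stacks; each stack lists blocks bottom-to-top.
start = [["A", "B"], ["C"]]   # B on A; C alone on the table
goal = [["C", "B", "A"]]      # A on B, B on C, C on the table

def solved(state, goal):
    """True if the state contains exactly the goal's stacks,
    regardless of the order in which the stacks are listed."""
    return sorted(state) == sorted(goal)

print(solved(start, goal))  # → False: the planner still has work to do
print(solved(goal, goal))   # → True
```

The planner then searches for the shortest sequence of moves transforming `start` into a state where `solved` holds, and each move in that sequence gets an explanation of the type predicted by the decision tree.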