Explainable Planning

In human-aware planning systems, when the planning agent recommends a plan (e.g., a route from A to B) to a human user, the user may not understand why the recommended plan is good, for example compared to an alternative plan the user has in mind. In such scenarios, the agent needs to explain its plan, giving the user the information necessary to understand the plan's properties (e.g., optimality or feasibility).

We approach this problem from a knowledge representation and reasoning (KR) perspective: we represent the mental models of both the planning agent and the human user as sets of logical facts and rules. Within this framework, we adapt and generalize classical KR notions (e.g., entailment, hitting sets, model counting) to compute explanations.
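As a rough illustration of this style of reasoning (not the group's actual formalism), one can model the agent's and user's knowledge as sets of propositional facts plus shared Horn rules, and compute an explanation as a minimal set of agent facts which, when added to the user's model, makes it entail that the recommended plan is best. All fact names, rules, and the brute-force search below are illustrative assumptions:

```python
from itertools import combinations

def entails(facts, rules, goal):
    """Forward chaining over Horn rules: does the model entail `goal`?"""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return goal in derived

def minimal_explanation(agent_facts, user_facts, rules, goal):
    """Smallest set of agent facts to add to the user's model so it entails `goal`.

    Brute-force search over subsets, smallest first (a toy stand-in for the
    hitting-set-style computations mentioned above)."""
    missing = sorted(agent_facts - user_facts)
    for k in range(len(missing) + 1):
        for subset in combinations(missing, k):
            if entails(user_facts | set(subset), rules, goal):
                return set(subset)
    return None  # even the full agent model does not entail the goal

# Toy route example: the user does not know the bridge on route B is closed.
rules = [
    (frozenset({"bridge_closed"}), "route_B_infeasible"),
    (frozenset({"route_B_infeasible"}), "route_A_is_best"),
]
agent_facts = {"bridge_closed"}
user_facts = set()

print(minimal_explanation(agent_facts, user_facts, rules, "route_A_is_best"))
# -> {'bridge_closed'}: telling the user this one fact is enough.
```

In this toy setting the explanation is exactly the knowledge gap between the two mental models that matters for the plan's optimality; everything the user already believes, or that is irrelevant to the goal, is omitted.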

Group Members and Collaborators
Representative Publications