Many real-world planning problems arise in environments with incomplete information, or where the same action can lead to different outcomes. Examples include planning for retirement, where the future state of the economy is uncertain, and planning in logistics, where the travel time between two cities is uncertain due to potential congestion.
A Markov decision process (MDP) is a popular framework for modeling decision making in these kinds of problems, where an agent needs to plan a sequence of actions that maximizes its chances of reaching its goal. A partially observable MDP (POMDP) is an extension in which the agent's environment is only partially observable, and a decentralized (PO)MDP is an extension in which a team of agents needs to collectively plan their joint actions.
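To make the MDP framework concrete, here is a minimal sketch of value iteration on a toy two-state problem. The states, actions, transition probabilities, and rewards are invented for illustration; this is not a model from any of the papers below.

```python
# Toy MDP: transitions[state][action] = list of (probability, next_state, reward).
# "go" reaches the goal with probability 0.8; "wait" is a safe no-op.
transitions = {
    "s0": {
        "go":   [(0.8, "goal", 10.0), (0.2, "s0", -1.0)],
        "wait": [(1.0, "s0", -0.1)],
    },
    "goal": {},  # absorbing goal state: no actions available
}

def value_iteration(transitions, gamma=0.95, eps=1e-9):
    """Compute optimal state values V(s) by repeated Bellman backups."""
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:  # terminal state keeps value 0
                continue
            # Best expected return over all actions available in s
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration(transitions)
```

An optimal policy then simply picks, in each state, the action whose backup attains `V(s)`; POMDP solvers generalize this idea to distributions over states (beliefs).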
We are currently pursuing three orthogonal subprojects within this area:
- We proposed the risk-sensitive MDP and POMDP models, where the objective is to find a policy that maximizes the probability of reaching a goal state. Such a model and its accompanying algorithms are useful in high-risk applications, where a single failure can be catastrophic.
- We are investigating the topic of goal recognition design, where the objective is to find a modification of the underlying planning problem such that the goal of an agent in that environment can be recognized as early as possible.
- We aim to bridge MDPs and POMDPs with distributed constraint optimization problems (DCOPs). DCOPs are well-suited for modeling single-shot distributed coordination problems such as distributed task/resource allocation problems. Our goal is to enrich DCOPs to handle dynamic problems with uncertainty using MDP and POMDP concepts. See the DCOP project page for more information.
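For intuition on the DCOP formulation mentioned above, the sketch below brute-forces a hypothetical two-variable DCOP. The variables, domains, and utility table are made up for illustration; actual DCOP algorithms (e.g., DPOP or Max-Sum) distribute this search across agents rather than enumerating centrally.

```python
from itertools import product

# Hypothetical DCOP: each agent controls one binary variable,
# and one binary constraint assigns a utility to each joint assignment.
variables = ["x1", "x2"]
domain = [0, 1]
utility = {
    (0, 0): 5, (0, 1): 2,
    (1, 0): 3, (1, 1): 8,
}

def solve_dcop():
    """Exhaustively search joint assignments for maximum total utility."""
    best_assign, best_util = None, float("-inf")
    for assign in product(domain, repeat=len(variables)):
        u = utility[assign]
        if u > best_util:
            best_assign, best_util = assign, u
    return dict(zip(variables, best_assign)), best_util

assignment, total = solve_dcop()
```

The dynamic, uncertain variants studied in our proactive dynamic DCOP work replace this single static utility table with utilities that evolve over time, which is where MDP-style lookahead becomes useful.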
- Khoi D. Hoang, Washington University in St. Louis
- Ping Hou, Uber
- Ferdinando Fioretto, Georgia Institute of Technology
- Alvitta Ottley, Washington University in St. Louis
- Christabel Wayllace, Washington University in St. Louis
- Tran Cao Son, New Mexico State University
- Pradeep Varakantham, Singapore Management University
- Makoto Yokoo, Kyushu University
- Roie Zivan, Ben Gurion University
CAREER: Decentralized Constraint-based Optimization for Multi-Agent Planning and Coordination.
National Science Foundation (2016 – 2021).
Integrated Computational and Cognitive Workflows for Improved Security and Usability.
Boeing (2019).
BSF: 2014012: Robust Solutions for Distributed Constraint Optimization Problems.
National Science Foundation (2015 – 2019).
- Christabel Wayllace, Sarah Keren, Avigdor Gal, Erez Karpas, William Yeoh, and Shlomo Zilberstein. “Accounting for Partial Observability in Stochastic Goal Recognition Design: Messing with the Marauder’s Map.” In Proceedings of the European Conference on Artificial Intelligence (ECAI), to appear, 2020.
- Christabel Wayllace, Ping Hou, and William Yeoh. “New Metrics and Algorithms for Stochastic Goal Recognition Design Problems.” In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 4455-4462, 2017.
- Khoi D. Hoang, Ping Hou, Ferdinando Fioretto, William Yeoh, Roie Zivan, and Makoto Yokoo. “Infinite-Horizon Proactive Dynamic DCOPs.” In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 212-220, 2017.
- Christabel Wayllace, Ping Hou, William Yeoh, and Tran Cao Son. “Goal Recognition Design with Stochastic Agent Action Outcomes.” In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 3279-3285, 2016.
- Pritee Agrawal, Pradeep Varakantham, and William Yeoh. “Scalable Greedy Algorithms for Task/Resource Constrained Multi-Agent Stochastic Planning.” In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), pages 10-16, 2016.
- Khoi D. Hoang, Ferdinando Fioretto, Ping Hou, Makoto Yokoo, William Yeoh, and Roie Zivan. “Proactive Dynamic Distributed Constraint Optimization.” In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 597-605, 2016.
- Tiep Le, Ferdinando Fioretto, William Yeoh, Tran Cao Son, and Enrico Pontelli. “ER-DCOPs: A Framework for Distributed Constraint Optimization with Uncertainty in Constraint Utilities.” In Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pages 606-614, 2016.
- Ping Hou, William Yeoh, and Pradeep Varakantham. “Solving Risk-Sensitive POMDPs With and Without Cost Observations.” In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), pages 3138-3144, 2016.