We propose Stable Yet Memory Bounded Open-Loop (SYMBOL) planning, a general memory bounded approach to partially observable open-loop planning. SYMBOL maintains an adaptive stack of Thompson Sampling bandits, whose size is bounded by the planning horizon and can be automatically adapted according to the underlying domain without any prior domain knowledge beyond a generative model. We empirically test SYMBOL in four large POMDP benchmark problems to demonstrate its effectiveness and robustness w.r.t. the choice of hyperparameters and evaluate its adaptive memory consumption. We also compare its performance with other open-loop planning algorithms and POMCP.
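For readers who want a concrete picture of the idea, the sketch below illustrates an adaptive stack of Thompson Sampling bandits for open-loop planning with a generative model. It is a minimal illustration, not the paper's reference implementation: the Gaussian-posterior bandit, the `gen_model(state, action) -> (next_state, reward, done)` interface, and the simplification of updating every bandit on the sampled plan with the full discounted return are assumptions made for brevity; the exact SYMBOL bandit updates and stack-adaptation rules are given in the paper.

```python
# Hedged sketch of an adaptive Thompson Sampling stack for open-loop planning.
# All names (GaussianBandit, symbol_plan, gen_model) are illustrative assumptions,
# not the authors' implementation.
import math
import random


class GaussianBandit:
    """One Thompson Sampling bandit over a discrete action set,
    using a simple Gaussian posterior per action (Welford running stats)."""

    def __init__(self, n_actions):
        self.counts = [0] * n_actions
        self.means = [0.0] * n_actions
        self.m2 = [0.0] * n_actions  # running sum of squared deviations

    def sample_action(self):
        # Draw a value estimate per action from its posterior and pick the maximizer.
        best, best_val = 0, -math.inf
        for a, n in enumerate(self.counts):
            if n < 2:
                return a  # force a little initial exploration per action
            var = self.m2[a] / (n - 1)
            draw = random.gauss(self.means[a], math.sqrt(var / n))
            if draw > best_val:
                best, best_val = a, draw
        return best

    def update(self, a, ret):
        # Welford update of the running mean/variance with the observed return.
        self.counts[a] += 1
        d = ret - self.means[a]
        self.means[a] += d / self.counts[a]
        self.m2[a] += d * (ret - self.means[a])


def symbol_plan(gen_model, initial_state, n_actions, horizon, budget, gamma=0.95):
    """Open-loop planning: the bandit at depth d proposes the d-th action of the plan.
    The stack grows lazily during simulation, so its size stays bounded by the horizon."""
    stack = [GaussianBandit(n_actions)]
    for _ in range(budget):
        state, ret, discount = initial_state, 0.0, 1.0
        chosen = []
        for d in range(horizon):
            if d == len(stack):            # grow the stack only when a deeper step is reached
                stack.append(GaussianBandit(n_actions))
            a = stack[d].sample_action()
            chosen.append(a)
            state, reward, done = gen_model(state, a)
            ret += discount * reward
            discount *= gamma
            if done:
                break
        # Simplification: every bandit on the sampled plan is updated with the
        # full discounted return of the rollout.
        for d, a in enumerate(chosen):
            stack[d].update(a, ret)
    root = stack[0]
    return max(range(n_actions), key=lambda a: root.means[a])
```

As a usage note, `gen_model` stands in for any black-box simulator of the POMDP; the planner never builds a tree over histories, which is what keeps its memory footprint bounded by the number of bandits on the stack.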
@inproceedings{phanIJCAI19,
  author    = "Thomy Phan and Thomas Gabor and Robert Müller and Christoph Roch and Claudia Linnhoff-Popien",
  title     = "Adaptive Thompson Sampling Stacks for Memory Bounded Open-Loop Planning",
  year      = "2019",
  abstract  = "We propose Stable Yet Memory Bounded Open-Loop (SYMBOL) planning, a general memory bounded approach to partially observable open-loop planning. SYMBOL maintains an adaptive stack of Thompson Sampling bandits, whose size is bounded by the planning horizon and can be automatically adapted according to the underlying domain without any prior domain knowledge beyond a generative model. We empirically test SYMBOL in four large POMDP benchmark problems to demonstrate its effectiveness and robustness w.r.t. the choice of hyperparameters and evaluate its adaptive memory consumption. We also compare its performance with other open-loop planning algorithms and POMCP.",
  url       = "https://www.ijcai.org/proceedings/2019/0778",
  eprint    = "https://thomyphan.github.io/files/2019-ijcai-1.pdf",
  publisher = "International Joint Conferences on Artificial Intelligence Organization",
  booktitle = "Proceedings of the 28th International Joint Conference on Artificial Intelligence",
  pages     = "5607--5613",
  doi       = "10.24963/ijcai.2019/778"
}
Related Articles
- T. Phan et al., “Counterfactual Online Learning for Open-Loop Monte-Carlo Planning”, AAAI 2025
- T. Phan et al., “Adaptive Anytime Multi-Agent Path Finding using Bandit-Based Large Neighborhood Search”, AAAI 2024
- T. Phan et al., “A Distributed Policy Iteration Scheme for Cooperative Multi-Agent Policy Approximation”, ALA 2020
- T. Phan et al., “Distributed Policy Iteration for Scalable Approximation of Cooperative Multi-Agent Policies”, AAMAS 2019
- T. Phan et al., “Memory Bounded Open-Loop Planning in Large POMDPs Using Thompson Sampling”, AAAI 2019
- T. Gabor et al., “Subgoal-Based Temporal Abstraction in Monte-Carlo Tree Search”, IJCAI 2019
- T. Gabor et al., “Preparing for the Unexpected: Diversity Improves Planning Resilience in Evolutionary Algorithms”, ICAC 2018
- T. Phan, “Emergence and Resilience in Multi-Agent Reinforcement Learning”, PhD Thesis
Relevant Research Areas