@inproceedings{altmannAAMAS24,
author = "Philipp Altmann and Adelina Bärligea and Jonas Stein and Michael Kölle and Thomas Gabor and Thomy Phan and Claudia Linnhoff-Popien",
title = "Quantum Circuit Design: A Reinforcement Learning Challenge",
year = "2024",
abstract = "Quantum computing (QC) in the current NISQ era is still limited. To gain early insights and advantages, hybrid applications are widely considered to mitigate those shortcomings. Hybrid quantum machine learning (QML) comprises both the application of QC to improve machine learning (ML) and the application of ML to improve QC architectures. This work considers the latter, focusing on leveraging reinforcement learning (RL) to improve current QC approaches. We therefore introduce various generic challenges arising from quantum architecture search and quantum circuit optimization that RL algorithms need to solve to provide benefits for more complex applications and combinations thereof. Building upon these challenges, we propose a concrete framework, formalized as a Markov decision process, to enable learning policies capable of controlling a universal set of quantum gates. Furthermore, we provide benchmark results to assess the shortcomings and strengths of current state-of-the-art algorithms.",
url = "https://arxiv.org/pdf/2312.11337.pdf",
eprint = "2312.11337",
archiveprefix = "arXiv",
location = "Auckland, New Zealand",
booktitle = "Extended Abstracts of the 23rd International Conference on Autonomous Agents and MultiAgent Systems",
pages = "2123--2125",
doi = "10.5555/3635637.3663081"
}
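To make the abstract's core construct concrete, the sketch below illustrates the *kind* of Markov decision process described there: states are partially built circuits, actions append a gate from a small universal set, and the reward is the fidelity of the prepared state with a target state. This is a hypothetical toy illustration only, assuming a 2-qubit system, the gate set {H, T, CNOT}, and a Bell-state target; it is not the environment defined in the paper.

```python
import numpy as np

# Single-qubit gates and the 2-qubit CNOT (control qubit 0, target qubit 1),
# with basis ordering |00>, |01>, |10>, |11>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def gate_matrix(name, qubit=None):
    """Lift a gate to the full 2-qubit Hilbert space via Kronecker products."""
    if name == "CNOT":
        return CNOT
    g = {"H": H, "T": T}[name]
    return np.kron(g, I) if qubit == 0 else np.kron(I, g)

class CircuitEnv:
    """Toy episodic MDP: build a circuit one gate at a time."""
    def __init__(self, target, max_depth=6):
        self.target = target          # target state vector
        self.max_depth = max_depth    # episode length limit
        self.reset()

    def reset(self):
        self.state = np.zeros(4, dtype=complex)
        self.state[0] = 1.0           # start in |00>
        self.depth = 0
        return self.state

    def step(self, action):           # action = (gate name, qubit index)
        self.state = gate_matrix(*action) @ self.state
        self.depth += 1
        # Reward: fidelity |<target|state>|^2 of the prepared state.
        fidelity = abs(np.vdot(self.target, self.state)) ** 2
        done = self.depth >= self.max_depth or fidelity > 0.99
        return self.state, fidelity, done

# Preparing the Bell state (|00> + |11>)/sqrt(2) takes two actions:
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
env = CircuitEnv(bell)
env.step(("H", 0))
_, fidelity, done = env.step(("CNOT", None))   # fidelity reaches 1.0
```

A policy trained on this MDP would map the current state vector (or a circuit encoding) to the next gate choice; the paper benchmarks state-of-the-art RL algorithms on tasks of this general shape.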