Reinforcement Learning for Semi-Autonomous Approximate Quantum Eigensolver

PROJECT OVERVIEW

The characterization of an operator by its eigenvectors and eigenvalues allows us to know its action on any quantum state. Here, we propose a protocol to obtain an approximation of the eigenvectors of an arbitrary Hermitian quantum operator. The protocol is based on measurement and feedback processes, which are the defining ingredients of a reinforcement learning protocol. Our proposal involves two systems: a black box, called the environment, and a quantum state, called the agent. The role of the environment is to transform any quantum state by the unitary matrix $\hat{U}_E = e^{-i \tau \hat{O}_E}$, where $\hat{O}_E$ is a Hermitian operator and $\tau$ is a real parameter. The agent is a quantum state that adapts toward an eigenvector of $\hat{O}_E$ through repeated interactions with the environment, feedback processes, and semi-random rotations. With this proposal, we obtain approximations of the eigenvectors of a random single-qubit operator with average fidelity above 90% in fewer than 10 iterations, surpassing 98% in fewer than 300 iterations. Moreover, in the two-qubit case, the four eigenvectors are obtained with fidelities above 89% in 8000 iterations for a random operator, and with fidelities of 99% for an operator whose eigenvectors are the Bell states. This protocol can be useful for implementing semi-autonomous quantum devices that must be capable of extracting information and making decisions with minimal resources and without human intervention.
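
To illustrate the flavor of such a measurement-and-feedback loop, the minimal NumPy sketch below simulates a single agent state adapting under repeated applications of $\hat{U}_E = e^{-i \tau \hat{O}_E}$. It is not the paper's exact algorithm: the reward rule (shrinking or enlarging the amplitude of the semi-random rotations based on whether the evolved state is measured as unchanged), the rotation scheme, and all numerical parameters are illustrative assumptions chosen only to mimic the structure described in the overview.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(dim):
    """Random Hermitian operator; plays the role of the unknown black-box O_E."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2

def unitary_from(o_e, tau=1.0):
    """U_E = exp(-i * tau * O_E), built from the eigendecomposition of O_E."""
    w, v = np.linalg.eigh(o_e)
    return v @ np.diag(np.exp(-1j * tau * w)) @ v.conj().T

def semi_random_rotation(dim, amplitude):
    """Small semi-random unitary exp(-i * amplitude * H) with H a normalized random Hermitian."""
    h = random_hermitian(dim)
    return unitary_from(h / np.linalg.norm(h), tau=amplitude)

def run_protocol(o_e, iterations=300, tau=1.0):
    """Adapt an agent state toward an eigenvector of O_E (illustrative feedback rule)."""
    dim = o_e.shape[0]
    u_e = unitary_from(o_e, tau)
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0                      # initial agent state |0>
    amplitude = np.pi / 2             # exploration range of the random rotations (assumed)
    for _ in range(iterations):
        evolved = u_e @ psi
        # Probability that a projective measurement finds the evolved state unchanged;
        # eigenvectors of O_E are invariant (up to a phase) under U_E.
        p_same = abs(np.vdot(psi, evolved)) ** 2
        if rng.random() < p_same:
            amplitude *= 0.9          # reward: shrink exploration
        else:
            psi = semi_random_rotation(dim, amplitude) @ psi   # punish: semi-random update
            amplitude = min(np.pi / 2, amplitude / 0.9)
        psi /= np.linalg.norm(psi)
    return psi

o_e = random_hermitian(2)
psi = run_protocol(o_e)
_, eigvecs = np.linalg.eigh(o_e)
fidelities = np.abs(eigvecs.conj().T @ psi) ** 2
print("best fidelity with an eigenvector of O_E:", fidelities.max())
```

Running the sketch with different seeds typically converges to one of the two eigenvectors with high fidelity, mirroring the single-qubit behavior reported above; the actual protocol's feedback and reward schedule are detailed in the article.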

DETAILS
  • Research Type Article
  • Research Year 2020
  • Journal Name Machine Learning: Science and Technology
  • Authors F. Albarrán-Arriagada, J. C. Retamal, E. Solano, and L. Lamata
  • DOI 10.1088/2632-2153/ab43b4