Optimising Adaptive Resource Generation in Near-Term Quantum Networks

A Markov Decision Process Model to Produce an Optimal Resource Generation Policy

Bachelor Thesis (2024)
Author(s)

B. Goranov (TU Delft - Electrical Engineering, Mathematics and Computer Science)

Contributor(s)

G.S. Vardoyan – Mentor (TU Delft - Quantum Computer Science)

B.J. Davies – Mentor (TU Delft - QID/Wehner Group)

R. Hai – Graduation committee member (TU Delft - Web Information Systems)

Faculty
Electrical Engineering, Mathematics and Computer Science
Publication Year
2024
Language
English
Graduation Date
23-06-2024
Awarding Institution
Delft University of Technology
Project
CSE3000 Research Project
Programme
Computer Science and Engineering
Reuse Rights

Other than for strictly personal use, it is not permitted to download, forward or distribute the text or part of it, without the consent of the author(s) and/or copyright holder(s), unless the work is under an open content license such as Creative Commons.

Abstract

A quantum network allows us to connect quantum information processors to achieve capabilities that are not possible with classical computation alone. Quantum network protocols typically require several entangled states to be available simultaneously. Previous work analysed an entanglement generation process in which, at each time step, an entangled state is generated with success probability p. Here, we consider a more flexible, adaptive entangled-state generation process. At each time step, the process chooses a protocol (p_i, F_i) from a finite set of entanglement generation protocols. An entangled state is generated successfully with probability p_i, and its fidelity F_i quantifies how close the state is to an ideal Bell state. Each stored state is subject to depolarising noise in the quantum memory; because of this noise, a state is discarded after t_i time steps, once it is no longer useful to the application. We model the process as a Markov decision process and derive a policy π that generates n entangled states with minimal expected time E_π[τ]. We analyse the improvement the optimal policy of our adaptive entanglement generation process offers over the previously studied static process, and conclude that this improvement becomes more significant as the required number of links in memory increases.
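The abstract's model can be illustrated with a small value-iteration sketch. This is not the thesis's implementation: the protocol parameters (p_i, t_i), the goal of n = 2 links, and the state encoding (a sorted tuple of (age, cutoff) pairs for links currently in memory) are all hypothetical choices made here to show the shape of the Markov decision process. Each time step costs 1, stored links age and expire at their cutoff, and the chosen protocol adds a fresh link with probability p_i.

```python
import itertools

# Hypothetical protocol parameters, not values from the thesis:
# protocol i succeeds with probability p_i and yields a link that stays
# useful for t_i further time steps before memory noise forces its discard.
PROTOCOLS = [(0.5, 4), (0.8, 2)]  # (p_i, t_i)
N = 2                             # links required simultaneously

def age(links):
    """Advance every stored link one time step, discarding expired ones."""
    return tuple(sorted((a + 1, t) for a, t in links if a + 1 <= t))

def step(links, proto, success):
    """One transition: age the memory, then add a fresh link on success."""
    aged = age(links)
    if not success:
        return aged
    _, t = PROTOCOLS[proto]
    return tuple(sorted(aged + ((0, t),)))

def expected_times(eps=1e-9):
    """Value iteration for the minimal expected completion time E_pi[tau]."""
    # Non-terminal states: up to N-1 stored links, each an (age, cutoff) pair.
    kinds = [(a, t) for _, t in PROTOCOLS for a in range(t + 1)]
    states = {tuple(sorted(c))
              for k in range(N)
              for c in itertools.combinations_with_replacement(kinds, k)}
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            # Bellman update: pay 1 time step, then branch on success/failure.
            best = min(1 + p * V.get(step(s, i, True), 0.0)  # terminal: V = 0
                         + (1 - p) * V[step(s, i, False)]
                       for i, (p, _) in enumerate(PROTOCOLS))
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = expected_times()
print(f"Minimal expected time from the empty state: {V[():]!r}" if False else
      f"Minimal expected time from the empty state: {V[()]:.3f}")
```

The minimisation over protocols inside the Bellman update is what makes the process adaptive: the best choice depends on the ages of the links already in memory, whereas the static process studied previously fixes one protocol throughout.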

Files

Rp_final_paper.pdf
(pdf | 0.587 MB)
License info not available