Simulation experiment data for training Quantum Neural Networks (QNNs) using entangled datasets.
The experiments test the validity of the lower bounds on the expected risk after training a QNN that are given by the extensions of the Quantum No-Free-Lunch theorem presented in the related publication. The QNNs are trained with (i) samples of varying Schmidt rank, (ii) orthogonal samples of fixed Schmidt rank, and (iii) linearly dependent samples of fixed Schmidt rank.
The dataset contains raw experiment data (directory "raw_data"), analyzed mean risks and errors (directory "plot_data") and the resulting plots (directory "plots").
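Since all three experiments are parameterized by the Schmidt rank of the training samples, the following minimal numpy sketch (illustrative only, not part of the dataset or analysis code) shows how the Schmidt rank of a pure bipartite state is obtained from the singular values of its reshaped coefficient matrix:

```python
import numpy as np

def schmidt_rank(state, dim_a, dim_b, tol=1e-10):
    """Schmidt rank of a pure bipartite state in C^{dim_a} (x) C^{dim_b}.

    The Schmidt coefficients are the singular values of the coefficient
    matrix obtained by reshaping the state vector; the rank counts the
    coefficients above a numerical tolerance.
    """
    coeffs = np.asarray(state, dtype=complex).reshape(dim_a, dim_b)
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > tol))

# A product state has Schmidt rank 1; a Bell state has Schmidt rank 2.
product = np.kron([1, 0], [1, 0])
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
```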
Experiments:
The experiments train QNNs on a simulator using various compositions of training samples and record the risk after training, from which average risks are computed.
- Experiment 1: Trains QNNs using entangled training samples of varying Schmidt rank. The average Schmidt rank and the number of training samples are controlled.
Raw data: average_rank_results.zip
Computed average risks: avg_rank_risks.npy
Computed average losses: avg_rank_losses.npy
Plotted average risks: avg_rank_experiments.pdf
Plotted average losses: avg_rank_losses.pdf
- Experiment 2: Trains QNNs using entangled orthogonal training samples. The number of training samples t is controlled and the Schmidt rank r is fixed such that d = r*t, where d is the dimension of the Hilbert space.
Raw data: orthogonal_results.zip
Computed average risks: orthogonal_exp_points.npy
Plotted average risks: orthogonal_experiments.pdf
- Experiment 3: Trains QNNs using entangled linearly dependent training samples. The number of training samples t is controlled and the Schmidt rank r is fixed such that d = r*t, where d is the dimension of the Hilbert space.
Raw data: not_linearly_independent_results.zip
Computed average risks: nlihx_exp_points.npy
Plotted average risks: nlihx_experiments.pdf
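The computed mean risks from the "plot_data" directory can be loaded with numpy. The file names below are those listed for the three experiments; the layout of each array (e.g., how risks are indexed by Schmidt rank and number of training samples) is an assumption and should be checked against the analysis code:

```python
import numpy as np

def load_mean_risks(plot_data_dir="plot_data"):
    """Load the precomputed mean-risk arrays for the three experiments.

    File names are taken from this dataset's description; the shape and
    indexing of each array are not documented here and must be verified
    against the accompanying analysis code.
    """
    return {
        "avg_rank": np.load(f"{plot_data_dir}/avg_rank_risks.npy"),
        "orthogonal": np.load(f"{plot_data_dir}/orthogonal_exp_points.npy"),
        "nlihx": np.load(f"{plot_data_dir}/nlihx_exp_points.npy"),
    }
```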
Additionally, this repository contains the reproduction data for Figure 1 (phases_in_orthogonal_training.zip). This archive contains the training data, the target unitary, and the resulting hypothesis unitary for orthogonal training samples of (i) high risk and (ii) low risk.
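The contents of the Figure 1 archive can be inspected with Python's standard zipfile module. Since the internal file names are not documented above, this sketch only enumerates them:

```python
import zipfile

def list_reproduction_data(path="phases_in_orthogonal_training.zip"):
    """List the files bundled for Figure 1 (training data, target unitary,
    and hypothesis unitaries for the high- and low-risk cases).

    The internal file names are not documented in this description, so
    this helper simply enumerates the archive contents.
    """
    with zipfile.ZipFile(path) as archive:
        return archive.namelist()
```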
For the code to reproduce and analyze the experiments, see the Code repository.
(2023-04-26)