Add expectation value plot to GH benchmarks pipeline #147

Open
jordandsullivan opened this issue Dec 16, 2024 · 0 comments · May be fixed by #168
jordandsullivan commented Dec 16, 2024

Problem:

In the GitHub Actions workflow ucc-benchmarks, we currently run the compile time and gate count benchmarks and then run a couple of Python scripts to generate benchmark plots:

# Run the benchmarks in the Docker container
- name: Run benchmarks
  run: |
    docker run --rm \
      -v "/home/runner/work/ucc/ucc:/ucc" \
      ucc-benchmark bash -c "
        source /venv/bin/activate && \
        ./benchmarks/scripts/run_benchmarks.sh 8 && \
        python ./benchmarks/scripts/plot_avg_benchmarks_over_time.py && \
        python ./benchmarks/scripts/plot_latest_benchmarks.py
      "

run_benchmarks.sh is a shell script that runs different combinations of benchmark circuits and compilers in parallel by passing the QASM file and compiler name as command-line arguments to the Python script benchmark_script.py.

We want to add to this set of commands a script that runs the expectation value benchmark for a range of different benchmark circuits. Adding these benchmarks to the pipeline will likely require several issues.

Proposed solution:

Here are the steps as I see them:

0. Add a circuit_name column to expval_benchmark.py results data
Currently benchmarks/scripts/expval_benchmark.py does not store the circuit_name, since it was only being run on one QASM file. First, we'll need to save an additional column in the expectation value results for circuit_name (and possibly other quantities of interest). I would recommend taking a look at the way benchmark_script.py uses the save_results function as a guide here; a rough sketch of the idea follows below.
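For illustration only, here is a minimal sketch of what recording the circuit name alongside each result could look like. The helper name, column names, and CSV-via-pandas storage are assumptions; the real implementation should mirror whatever schema save_results in benchmark_script.py already uses.

import os
from datetime import datetime

import pandas as pd


def save_expval_results(results_folder, circuit_name, compiler_alias,
                        ideal_expval, observed_expval):
    """Append one expectation-value result row, including the circuit name."""
    row = {
        "circuit_name": circuit_name,  # the new column proposed in this step
        "compiler": compiler_alias,
        "ideal_expval": ideal_expval,
        "observed_expval": observed_expval,
        "timestamp": datetime.now().isoformat(),
    }
    os.makedirs(results_folder, exist_ok=True)
    out_file = os.path.join(results_folder, "expval_results.csv")
    pd.DataFrame([row]).to_csv(
        out_file,
        mode="a",
        header=not os.path.exists(out_file),  # write the header only once
        index=False,
    )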

1. Generating the benchmark circuits to be run
For our gate count and compile time benchmarks, we run the following circuits, located in ucc/benchmarks/qasm_circuits:

QASM_FILES=(
    "benchpress/qaoa_barabasi_albert_N100_3reps_basis_rz_rx_ry_cx.qasm"
    "benchpress/qv_N100_12345_basis_rz_rx_ry_cx.qasm"
    "benchpress/qft_N100_basis_rz_rx_ry_cx.qasm"
    "benchpress/square_heisenberg_N100_basis_rz_rx_ry_cx.qasm"
    "ucc/prep_select_N25_ghz_basis_rz_rx_ry_h_cx.qasm"
    "ucc/qcnn_N100_7layers_basis_rz_rx_ry_h_cx.qasm"
)

where N is the number of qubits. Notably, we typically cannot simulate the execution of circuits with O(100) qubits, nor likely run them on current hardware without losing all performance to noise. So we need a way to convert these large circuits into smaller, feasible instances, say ~N=10 (a hedged example of generating one is sketched below).
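As a purely illustrative sketch of what a ~10-qubit instance could look like, the snippet below builds a small QFT circuit in the same rz/rx/ry/cx basis and writes it to QASM. The use of Qiskit's QFT and qasm2 utilities, and the output filename, are assumptions here; this is not how the benchpress files were originally produced.

# Sketch: generate a small, simulable QFT instance in the rz/rx/ry/cx basis
# (assumes a recent Qiskit is installed).
from qiskit import transpile, qasm2
from qiskit.circuit.library import QFT

num_qubits = 10
circuit = transpile(
    QFT(num_qubits),
    basis_gates=["rz", "rx", "ry", "cx"],
    optimization_level=0,  # leave the circuit unoptimized so compilers have work to do
)

with open(f"qft_N{num_qubits}_basis_rz_rx_ry_cx.qasm", "w") as f:
    qasm2.dump(circuit, f)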

You may also note that some of these QASM files are in a benchpress/ directory and others in ucc/. The files in the UCC folder (Prepare & Select and QCNN) were generated by us in the course of developing UCC. These can be generated using the generate_qasm.py script by modifying the number of qubits:

# ### Prepare & Select
num_qubits = 25
target_state = "1" * num_qubits
circuit = cirq_prep_select(num_qubits, target_state=target_state)
filename = f"prep_select_N{num_qubits}_ghz"
write_qasm(
    circuit,
    circuit_name=filename,
    # basis_gates=['rz', 'rx', 'ry', 'h', 'cx']
)

Most of the QASM circuits in our UCC benchmarks, however, were taken from the benchpress library of QASM files. Benchpress provides a wide range of qubit counts for each of its benchmark circuits, so we may be able to copy the N10 (i.e., 10-qubit) versions of the above benchpress QASM files to get ones that are simulable for our expectation value benchmark.

2. Modify the expval_benchmark script so you can run it in parallel using a shell script
Currently the expval_benchmark.py script has hard-coded values for the benchmark circuit file and loops through each compiler in Python.
To make the best use of the parallelism available to us, we want to be able to pass these parameters as command-line arguments, e.g.

# Get the QASM file, compiler, and results folder passed as command-line arguments
qasm_file = sys.argv[1]
compiler_alias = sys.argv[2]
results_folder = sys.argv[3] # New argument for results folder
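In other words, the per-compiler loop moves out of the Python script and into the shell script, with one (circuit, compiler) pair per invocation. A rough structural sketch is below; run_expval_benchmark is a placeholder name, not an existing function in the script.

import sys


def run_expval_benchmark(qasm_file, compiler_alias, results_folder):
    """Compile one circuit with one compiler, simulate it, and save the
    expectation-value results (including circuit_name, per step 0)."""
    ...  # existing compile/simulate/save logic goes here


if __name__ == "__main__":
    # One (circuit, compiler) pair per process, so the shell script can
    # launch many of these in parallel.
    qasm_file = sys.argv[1]
    compiler_alias = sys.argv[2]
    results_folder = sys.argv[3]
    run_expval_benchmark(qasm_file, compiler_alias, results_folder)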

3. Add the expval_benchmark.py script to the shell script.
If we want to stick with just a single run_benchmarks.sh script that gets executed by the runner, we can add it here in the list of commands that will be executed in parallel:

# Prepare the list of commands to run in parallel
commands=()
for qasm_file in "${QASM_FILES[@]}"; do
    for compiler in "${COMPILERS[@]}"; do
        # Combine the common folder path with the QASM file
        full_qasm_file="${QASM_FOLDER}${qasm_file}"
        # Build the command, passing the results folder as an argument
        command="python3 $(dirname "$0")/benchmark_script.py \"$full_qasm_file\" \"$compiler\" \"$RESULTS_FOLDER\""
        commands+=("$command")
    done
done

where we would loop through benchmark types like [gates_and_runtime, expval], each of which would run the corresponding Python script (benchmarks/scripts/benchmark_script.py and benchmarks/scripts/expval_benchmark.py, respectively). But we would need to make sure the correct benchmark Python script gets paired with the right QASM files (i.e., the N=10 qubit files get passed to the expval_benchmark.py script); one way to express that pairing is sketched below.
Alternatively, you could create your own shell script that works analogously to run_benchmarks.sh. This might be easier to keep separate.
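Purely as an illustration of the pairing logic (written in Python for readability; the shell version would be analogous), something like the mapping below could keep each benchmark type tied to its script and its QASM files. The N10 filenames, compiler aliases, and results folder here are assumptions, not values taken from the repo.

# Sketch: pair each benchmark type with the script that runs it and the
# QASM files it should receive, then emit one command per combination.
LARGE_QASM_FILES = [
    "benchpress/qft_N100_basis_rz_rx_ry_cx.qasm",
    "benchpress/qv_N100_12345_basis_rz_rx_ry_cx.qasm",
    # ... remaining N100 benchmarks ...
]

BENCHMARKS = {
    "gates_and_runtime": {
        "script": "benchmarks/scripts/benchmark_script.py",
        "qasm_files": LARGE_QASM_FILES,
    },
    "expval": {
        "script": "benchmarks/scripts/expval_benchmark.py",
        # Assumes the small instances follow the same naming with N10.
        "qasm_files": [f.replace("N100", "N10") for f in LARGE_QASM_FILES],
    },
}

COMPILERS = ["ucc", "qiskit", "cirq", "pytket"]  # assumed compiler aliases
RESULTS_FOLDER = "results/"  # assumed

for benchmark, spec in BENCHMARKS.items():
    for qasm_file in spec["qasm_files"]:
        for compiler in COMPILERS:
            print(f'python3 {spec["script"]} "{qasm_file}" "{compiler}" "{RESULTS_FOLDER}"')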

4. Plot the expectation value performance.
This part is analogous to

python ./benchmarks/scripts/plot_avg_benchmarks_over_time.py && \
python ./benchmarks/scripts/plot_latest_benchmarks.py

Just create plots that show the average performance (something like relative error compared to the ideal expectation value) over all benchmarks, as well as the individual performance for each benchmark; a rough sketch of one such plot is below.
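Here is a minimal sketch of one such plot, assuming the results file and column names from the step 0 sketch (circuit_name, compiler, ideal_expval, observed_expval); the actual columns should match whatever expval_benchmark.py ends up writing.

# Sketch: relative error vs. the ideal expectation value, per circuit and compiler.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.read_csv("results/expval_results.csv")  # path is an assumption
df["relative_error"] = (
    (df["observed_expval"] - df["ideal_expval"]).abs() / df["ideal_expval"].abs()
)

# One group of bars per circuit, one bar per compiler.
pivot = df.pivot_table(index="circuit_name", columns="compiler", values="relative_error")
ax = pivot.plot.bar(rot=45, figsize=(10, 5))
ax.set_ylabel("Relative error vs. ideal expectation value")
ax.set_title("Expectation value benchmark (latest run)")
plt.tight_layout()
plt.savefig("expval_latest.png")

# Average performance over all benchmarks, per compiler.
print(pivot.mean(axis=0))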

Hopefully that's everything. Happy hunting!

@jordandsullivan jordandsullivan added feature New feature or request infrastructure Non-quantum things to improve the robustness of our package, e.g. CI/CD labels Dec 17, 2024
@jordandsullivan jordandsullivan added this to the 0.4.0 milestone Jan 2, 2025
@Misty-W Misty-W linked a pull request Jan 14, 2025 that will close this issue