This repository has been archived by the owner on Oct 11, 2024. It is now read-only.

robertgshaw2-neuralmagic triggered nightly on refs/heads/expand-lm-eval-testing #23


Manually triggered: June 23, 2024 18:59
Status: Cancelled
Total duration: 2h 13m 33s
Artifacts: 16

nm-nightly.yml

on: workflow_dispatch
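Since the run was triggered manually, `nm-nightly.yml` must declare a `workflow_dispatch` trigger, and the job names suggest a Python-version matrix. A minimal sketch of what such a workflow could look like; the job name, runner, and steps here are illustrative assumptions, not the contents of the actual file:

```yaml
name: nm nightly
on:
  workflow_dispatch:  # enables manual "Run workflow" triggers like this run

jobs:
  test:
    strategy:
      matrix:
        # matches the PYTHON-3-8 through PYTHON-3-11 jobs in this run
        python-version: ["3.8", "3.9", "3.10", "3.11"]
    runs-on: ubuntu-latest  # assumption; the real jobs run on GCP k8s L4 runners
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: echo "run tests for Python ${{ matrix.python-version }}"
```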
Job  Duration
PYTHON-3-10 / BENCHMARK / BENCHMARK  17m 16s
PYTHON-3-10 / TEST-SOLO / TEST  1h 21m
PYTHON-3-10 / LM-EVAL-SOLO / LM-EVAL  7m 8s
PYTHON-3-10 / LM-EVAL-MULTI / LM-EVAL  6m 57s
PYTHON-3-11 / BENCHMARK / BENCHMARK  17m 19s
PYTHON-3-11 / TEST-SOLO / TEST  1h 12m
PYTHON-3-11 / LM-EVAL-SOLO / LM-EVAL  8m 0s
PYTHON-3-11 / LM-EVAL-MULTI / LM-EVAL  8m 15s
PYTHON-3-8 / BENCHMARK / BENCHMARK  18m 50s
PYTHON-3-8 / TEST-SOLO / TEST  1h 0m
PYTHON-3-8 / LM-EVAL-SOLO / LM-EVAL  7m 10s
PYTHON-3-8 / LM-EVAL-MULTI / LM-EVAL  7m 9s
PYTHON-3-9 / BENCHMARK / BENCHMARK  15m 30s
PYTHON-3-9 / TEST-SOLO / TEST  1h 15m
PYTHON-3-9 / LM-EVAL-SOLO / LM-EVAL  7m 45s
PYTHON-3-9 / LM-EVAL-MULTI / LM-EVAL  7m 31s
PYTHON-3-10 / BENCHMARK / BENCHMARK_REPORT  29s
PYTHON-3-10 / UPLOAD / PUBLISH
PYTHON-3-11 / BENCHMARK / BENCHMARK_REPORT  32s
PYTHON-3-11 / UPLOAD / PUBLISH
PYTHON-3-8 / BENCHMARK / BENCHMARK_REPORT  30s
PYTHON-3-8 / UPLOAD / PUBLISH
PYTHON-3-9 / BENCHMARK / BENCHMARK_REPORT  31s
PYTHON-3-9 / UPLOAD / PUBLISH

Annotations

4 errors
PYTHON-3-8 / TEST-SOLO / TEST
The run was canceled by @robertgshaw2-neuralmagic.
PYTHON-3-11 / TEST-SOLO / TEST
The run was canceled by @robertgshaw2-neuralmagic.
PYTHON-3-10 / TEST-SOLO / TEST
The run was canceled by @robertgshaw2-neuralmagic.
PYTHON-3-9 / TEST-SOLO / TEST
The run was canceled by @robertgshaw2-neuralmagic.

Artifacts

Produced during runtime
Name  Size  Status
3.10.12-nm-vllm-nightly-0.5.0.20240623.tar.gz  660 KB  Expired
3.11.4-nm-vllm-nightly-0.5.0.20240623.tar.gz  660 KB  Expired
3.8.17-nm-vllm-nightly-0.5.0.20240623.tar.gz  659 KB  Expired
3.9.17-nm-vllm-nightly-0.5.0.20240623.tar.gz  660 KB  Expired
9635737837-gcp-k8s-l4-solo-3.10.12  1.11 KB  Expired
9635737837-gcp-k8s-l4-solo-3.11.4  1.11 KB  Expired
9635737837-gcp-k8s-l4-solo-3.8.17  1.12 KB  Expired
9635737837-gcp-k8s-l4-solo-3.9.17  1.11 KB  Expired
gh_action_benchmark_jsons-9635737837-gcp-k8s-l4-solo-3.10.12  1 KB  Expired
gh_action_benchmark_jsons-9635737837-gcp-k8s-l4-solo-3.11.4  1 KB  Expired
gh_action_benchmark_jsons-9635737837-gcp-k8s-l4-solo-3.8.17  1 KB  Expired
gh_action_benchmark_jsons-9635737837-gcp-k8s-l4-solo-3.9.17  1 KB  Expired
nm_vllm_nightly-0.5.0.20240623-cp310-cp310-manylinux_2_17_x86_64.whl  165 MB  Expired
nm_vllm_nightly-0.5.0.20240623-cp311-cp311-manylinux_2_17_x86_64.whl  165 MB  Expired
nm_vllm_nightly-0.5.0.20240623-cp38-cp38-manylinux_2_17_x86_64.whl  165 MB  Expired
nm_vllm_nightly-0.5.0.20240623-cp39-cp39-manylinux_2_17_x86_64.whl  165 MB  Expired