
Support FP8 Quantization and Inference Run on Intel Gaudi (HPU) using INC (Intel Neural Compressor) #190

Triggered via pull request: January 21, 2025, 18:41
Status: Success
Total duration: 14s

Workflow: pre-commit.yml
on: pull_request
Job: pre-commit (8s)
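For reference, a minimal GitHub Actions pre-commit workflow of this shape typically looks like the sketch below. This is an assumed illustration of the common pattern (using the pre-commit/action step), not the repository's actual pre-commit.yml.

```yaml
# Hypothetical sketch of a pre-commit workflow; the real pre-commit.yml may differ.
name: pre-commit

on: pull_request

jobs:
  pre-commit:
    runs-on: ubuntu-latest  # see the runner-image annotation below
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # pre-commit/action runs all hooks defined in .pre-commit-config.yaml
      - uses: pre-commit/action@v3.0.1
```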

Annotations

1 warning
pre-commit: ubuntu-latest pipelines will use ubuntu-24.04 soon. For more details, see https://github.com/actions/runner-images/issues/10636
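If the ubuntu-24.04 rollover matters for this workflow, one option is to pin the runner label explicitly instead of relying on the ubuntu-latest alias. A minimal sketch, assuming a job layout like the one above:

```yaml
jobs:
  pre-commit:
    # Hypothetical change: pin the runner image so the ubuntu-latest alias rollover
    # described in actions/runner-images#10636 has no effect on this job.
    runs-on: ubuntu-24.04
```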