
[Performance] Model inference 50x slower when using ORT_PARALLEL execution mode #23303

Closed
IDEA-V opened this issue Jan 9, 2025 · 1 comment
Labels: performance (issues related to performance regressions)

IDEA-V commented Jan 9, 2025

Describe the issue

I am running a Swin Transformer backbone with the onnxruntime Python API. Inference latency is normal in sequential execution mode, but after switching the execution mode to ORT_PARALLEL, inference is far slower than before.

The profile shows that no operations actually run in parallel. Instead, operations are scattered across different threads, with large amounts of idle time inserted between them.
[Image: profiling trace showing operations spread across threads with idle gaps]

Does anyone know what is causing this problem?

To reproduce

import time

import onnxruntime as ort
from mmengine.config import Config
from mmengine.runner import Runner

model_path = "backbone.onnx"

# Create the session with the parallel execution mode that triggers the slowdown.
sess_options = ort.SessionOptions()
sess_options.execution_mode = ort.ExecutionMode.ORT_PARALLEL
session = ort.InferenceSession(model_path, sess_options, providers=["CUDAExecutionProvider"])

# Build an MMEngine runner to reuse its validation dataloader and preprocessor.
cfg = Config.fromfile("maskrcnn.py")
runner = Runner.from_cfg(cfg)
output_name = session.get_outputs()[0].name

data_iter = iter(runner.val_dataloader)
latencies = []
for i in range(200):
    print(i, 200)  # progress: iteration / total
    batch = next(data_iter)
    data = runner.model.data_preprocessor(batch, False)
    start = time.perf_counter()
    outputs = session.run([output_name], {'input': data['inputs'].cpu().numpy()})
    latencies.append(time.perf_counter() - start)
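
For reference, a per-op trace like the one shown above can be captured with ONNX Runtime's built-in profiler; a minimal sketch, reusing the session options from the repro:

# Enable ONNX Runtime's built-in profiler before creating the session.
sess_options.enable_profiling = True
session = ort.InferenceSession(model_path, sess_options, providers=["CUDAExecutionProvider"])

# ... run the inference loop above ...

# Stop profiling; the returned Chrome-trace JSON file can be opened
# in chrome://tracing or Perfetto to inspect per-op timing and threads.
profile_path = session.end_profiling()
print(profile_path)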

Urgency

No response

Platform

Linux

OS Version

Red Hat Enterprise Linux release 8.10

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.17.0

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 11.8

Model File

No response

Is this a quantized model?

No

IDEA-V added the performance label Jan 9, 2025
snnn (Member) commented Jan 9, 2025

The ORT_PARALLEL feature is rarely used and does not always produce a performance benefit.
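
(ORT_PARALLEL schedules independent graph branches concurrently on the inter-op thread pool, so it mainly helps models with many parallel branches; a largely sequential graph gains nothing and pays the cross-thread scheduling overhead. For most models the default sequential mode, which parallelizes within each operator, is the faster configuration. A minimal sketch, with an illustrative thread count:)

import onnxruntime as ort

sess_options = ort.SessionOptions()
# Default mode: run nodes one at a time, parallelizing inside each operator.
sess_options.execution_mode = ort.ExecutionMode.ORT_SEQUENTIAL
sess_options.intra_op_num_threads = 8  # illustrative; tune to the number of physical cores
session = ort.InferenceSession("backbone.onnx", sess_options,
                               providers=["CUDAExecutionProvider"])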

snnn closed this as not planned Jan 9, 2025