Replies: 1 comment
-
We don't have much control over how NNAPI executes the model. Please see #10692 (comment)
-
I followed the example code to run a model with the NNAPI EP, but I can't find any example of running a model on the GPU or NPU. How can I select a specific device (CPU, GPU, or NPU)? Please give me some tips! Thanks a lot!
Ort::Env onnx_env = Ort::Env{ORT_LOGGING_LEVEL_ERROR, "Default"};
Ort::SessionOptions so;
so.SetGraphOptimizationLevel(GraphOptimizationLevel::ORT_ENABLE_ALL);
// Disable NNAPI's CPU fallback so NNAPI prefers an accelerator.
uint32_t nnapi_flags = 0;
nnapi_flags |= NNAPI_FLAG_CPU_DISABLED;
//nnapi_flags |= NNAPI_FLAG_USE_NCHW;
Ort::ThrowOnError(OrtSessionOptionsAppendExecutionProvider_Nnapi(so, nnapi_flags));
// Create the session from an in-memory model buffer.
Ort::Session session(onnx_env, model_content.data(), fileSize, so);