Hello,
This commit adds proof-of-concept support for the Intel NPU inference accelerator [1][2][3]
that is present in Meteor Lake and Lunar Lake CPUs.
I emphasize that this is a PoC implementation, meant to show that the NPU device implementation can reuse the GPU codebase.
Both are asynchronous coprocessors that expose similar statistics (utilization, VRAM/RAM memory usage, clocks, power stats).
For now, the NPU device is added into the GPU device vector, and the `is_npu_device` flag is used to generate distinct GPU/NPU strings at the drawing stage.

To make a complete implementation, there are a couple of ways I see, and I am not sure which is appropriate: keeping `is_npu_device` in `supported_functions` to get the device string in drawing. I think that the second option seems to be the most optimal, and I would ask for some guidance.
Footnotes

1. NPU explained: https://intel.github.io/intel-npu-acceleration-library/npu.html
2. Linux driver sources path: `root/drivers/accel/ivpu/`
3. NPU UMD sources: https://github.com/intel/linux-npu-driver