Hardware Support for Intel GPUs

Goal
Arbor should support Intel GPUs, because Arbor uses 64-bit floating point (FP64) as its main precision and Intel GPUs currently offer the best performance per price for FP64 computation.
Rationale
Unlike NVIDIA, which caps consumer FP64 throughput at 1:64 of FP32, or AMD, which caps it at 1:32, Intel caps it at only 1:8. That means the Intel Arc A770 has a theoretical FP64 throughput of 2.458 TFLOPS, at a price point of $329 right now (https://www.techpowerup.com/gpu-specs/arc-a770.c3914).
Considering that most GPUs with strong FP64 are professional grade (i.e. pricey), and many of the affordable ones have long been out of production, supporting Intel GPUs would be supporting the community.
And since the Intel B580 takes the same route with a 1:8 ratio, the support would be a long-term benefit.
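The cited figures are easy to sanity-check. A minimal sketch of the arithmetic, assuming the Arc A770's 4096 shading units and 2.4 GHz boost clock from the TechPowerUp page linked above, and the usual convention of 2 FLOPs (one fused multiply-add) per shader per cycle:

```python
# Back-of-the-envelope check of the FP64 figures cited above.
# Assumed specs (4096 shaders, 2.4 GHz boost) come from the TechPowerUp
# page linked in the rationale; the ratios are the vendors' consumer FP64 caps.

def peak_fp64_tflops(shaders: int, boost_ghz: float, fp64_ratio: int) -> float:
    """Theoretical peak FP64 throughput in TFLOPS.

    Peak FP32 assumes one fused multiply-add (2 FLOPs) per shader per
    cycle; peak FP64 is then the FP32 peak divided by the vendor's cap.
    """
    fp32_tflops = 2 * shaders * boost_ghz / 1000  # shaders * GHz gives GFLOPS; /1000 -> TFLOPS
    return fp32_tflops / fp64_ratio

# Arc A770 with Intel's 1:8 cap reproduces the 2.458 TFLOPS figure:
print(round(peak_fp64_tflops(4096, 2.4, 8), 3))   # -> 2.458

# The same card hypothetically capped like consumer NVIDIA (1:64)
# or AMD (1:32) parts would manage far less:
print(round(peak_fp64_tflops(4096, 2.4, 64), 3))  # -> 0.307
print(round(peak_fp64_tflops(4096, 2.4, 32), 3))  # -> 0.614
```

The comparison shows why the cap ratio, not raw FP32 throughput, dominates the FP64 price/performance argument.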
Scope
Just like ROCm and CUDA, provide experimental support for Intel across every feature, probably through OpenCL or oneAPI.
Implementation
That topic is too advanced for me, but since you already support ROCm, it would probably be no problem for you; it might even be a fun small-scale challenge.
We have discussed this topic on and off for a long time. For now, nobody on our team has access to these cards, and the current and upcoming generations of the HPC machines we work with deploy NVIDIA GPUs. Thus, until that situation changes, this is on our agenda, but not a priority. And without stable access to the proper hardware, we cannot really start work on it.
Okay, very understandable.
My other sane option was AMD, but you said you'll mainly use NVIDIA GPUs. So is there any important detail I should know about using Arbor's experimental ROCm support? Or should I just go all NVIDIA?
Thanks.
Yes, for now NVIDIA is the backend that is most used and best supported. As you might gather from our profiles, most of us work at HPC centres, and as such we follow their procurement trends. That doesn't mean alternatives fall by the wayside, but it does guide our focus.