Hello, thank you for the great tool!
I was wondering whether it's better to combine this tool with something like cpuset, to reserve the CPUs that the benchmark and hyperfine run on.
Would this make a difference? I understand that hyperfine will catch outliers stemming from, e.g., competing programs, but could these be avoided in the first place by reserving the CPUs?
Thank you for your time.
This is a great question. I don't have a definitive answer. I have experimented with hyperfine + cpuset in the past and I believe it may have helped reduce outliers, but I can't provide definitive proof.
Note that context switches are not the only source of noise. There's also I/O, which often plays a huge role when benchmarking CLI apps, but that obviously depends a lot on your use case. cpuset will not help with this.
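For the I/O part, hyperfine itself has options that often matter more than CPU pinning. A minimal sketch (the benchmarked command `my_cli_app` is a placeholder, and the cache-dropping line is Linux-only and requires root):

```shell
# Hot-cache benchmark: a few warmup runs populate the filesystem cache
# before any timed run starts.
hyperfine --warmup 3 'my_cli_app'

# Cold-cache benchmark: drop the page cache before every timed run
# (Linux-only; writing to drop_caches requires root).
hyperfine --prepare 'sync; echo 3 | sudo tee /proc/sys/vm/drop_caches' \
    'my_cli_app'
```

Which of the two is appropriate depends on whether the real-world workload runs with warm or cold caches.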
OK, so my understanding is that if someone sees notable outliers or variability in their results, they could consider running hyperfine under cpuset, keeping in mind that this only helps if the variability is due to context switches.
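For reference, a rough sketch of the approach being discussed. CPU numbers are examples, `my_benchmark` is a placeholder, `cset` (from the cpuset package) needs root, and the exact flags may differ between versions; `taskset` from util-linux is a lighter alternative that pins hyperfine to specific CPUs but does not evict other tasks from them:

```shell
# Shield CPUs 2-3 from the rest of the system (moves other tasks, and
# movable kernel threads, off them), then run hyperfine inside the shield.
sudo cset shield --cpu 2,3 --kthread=on
sudo cset shield --exec -- hyperfine 'my_benchmark'
sudo cset shield --reset   # tear the shield down afterwards

# Lighter-weight alternative: pin hyperfine (and its children) to CPUs 2-3.
# Other processes may still be scheduled there, so this is weaker than a shield.
taskset -c 2,3 hyperfine 'my_benchmark'
```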