Suppose that I am trying to optimize a tool and I want a precise view of the performance impact of a given change. I can use hyperfine to compare a run of the tool with a run of a modified version, and it automatically determines the number of iterations to perform based on the duration of the first run. However, I may or may not be happy with that number of iterations, depending on how the comparison turns out. If it is already clear after 3 runs of each that the change improves or degrades performance a lot, I can just stop. If it is not clear but I am still interested in a precise measurement, I will want to run more, say 50, so that the statistical variations average out and give a more precise estimate of the mean.

Before discovering hyperfine, I was using a hand-made tool that runs the processes alternately (related to #21) and prints the standard error of the mean live (N.B. not the standard deviation, cf. https://en.wikipedia.org/wiki/Standard_error; I think the "σ" value hyperfine prints is the standard deviation?), allowing me to stop when I am happy with the significance of the comparison.
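To be concrete, the quantity I would like to see updated live is the standard error of the mean, which, unlike the standard deviation, shrinks as runs accumulate:

```
SEM = σ / √n
```

where σ is the sample standard deviation of the run times and n is the number of runs so far, so watching the SEM tells me directly how much another batch of runs would tighten the estimate.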
This feature request is a superset of #21: in addition to alternating the runs, it would mean printing an error estimate while benchmarking (not only the mean estimate, as hyperfine currently does), and supporting a mode where benchmarking never stops until interrupted with Ctrl-C (I guess this would only differ from `-r 1000000` by not printing something like "run x out of 1000000").
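To illustrate the behaviour I have in mind, here is a minimal, self-contained sketch in plain Rust (not based on hyperfine's internals; the command names are placeholders): it alternates the two commands, maintains a running mean and standard error via Welford's online algorithm, and keeps going until interrupted with Ctrl-C.

```rust
use std::process::{Command, Stdio};
use std::time::Instant;

/// Running statistics (Welford's online algorithm).
struct Stats {
    n: u64,
    mean: f64,
    m2: f64, // sum of squared deviations from the current mean
}

impl Stats {
    fn new() -> Self {
        Stats { n: 0, mean: 0.0, m2: 0.0 }
    }

    fn push(&mut self, x: f64) {
        self.n += 1;
        let delta = x - self.mean;
        self.mean += delta / self.n as f64;
        self.m2 += delta * (x - self.mean);
    }

    /// Standard error of the mean: σ / √n (needs at least 2 samples).
    fn sem(&self) -> f64 {
        if self.n < 2 {
            return f64::NAN;
        }
        let variance = self.m2 / (self.n - 1) as f64;
        (variance / self.n as f64).sqrt()
    }
}

/// Time one execution of a shell command, in seconds (output suppressed).
fn time_once(cmd: &str) -> f64 {
    let start = Instant::now();
    Command::new("sh")
        .arg("-c")
        .arg(cmd)
        .stdout(Stdio::null())
        .stderr(Stdio::null())
        .status()
        .expect("failed to run command");
    start.elapsed().as_secs_f64()
}

fn main() {
    // Placeholder commands; substitute the two variants being compared.
    let commands = ["./tool-before", "./tool-after"];
    let mut stats = [Stats::new(), Stats::new()];

    loop {
        // Alternate the runs (cf. #21) so that slow drift in machine load
        // affects both commands roughly equally.
        for (i, cmd) in commands.iter().enumerate() {
            let t = time_once(cmd);
            stats[i].push(t);
            println!(
                "{}: mean {:.4} s ± {:.4} s (SEM, n = {})",
                cmd,
                stats[i].mean,
                stats[i].sem(),
                stats[i].n
            );
        }
        // No stopping condition: interrupt with Ctrl-C once the two
        // intervals are clearly separated (or clearly overlapping).
    }
}
```

The same loop could of course grow a convergence criterion (e.g. stop once the SEM falls below some fraction of the mean) instead of relying only on Ctrl-C.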
Thanks for this tool, and thanks for your interest.
Thank you for your request. This sounds like a very useful feature. Any help in designing the details of this (CLI, implementation idea, …) would be very much appreciated.