There is a performance test for CompoundGenerator.prepare that checks prepare runs in a "reasonable" time when given a "unified" set of generators whose sizes multiply to 200 million points.
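For reference, here is a minimal sketch of what such a test might look like, assuming a scanpointgenerator-style API of `LineGenerator(name, units, start, stop, num)` and `CompoundGenerator(generators, excluders, mutators)`; the actual generator sizes and time limit in the real test may differ:

```python
import time
import unittest

# Assumed import; the real module layout may differ.
from scanpointgenerator import CompoundGenerator, LineGenerator


class PrepareSpeedTest(unittest.TestCase):
    def test_prepare_200_million_points(self):
        # Three nested line scans: 1000 * 1000 * 200 = 200 million points
        gens = [
            LineGenerator("x", "mm", 0.0, 1.0, 1000),
            LineGenerator("y", "mm", 0.0, 1.0, 1000),
            LineGenerator("z", "mm", 0.0, 1.0, 200),
        ]
        gen = CompoundGenerator(gens, [], [])
        start = time.time()
        gen.prepare()
        elapsed = time.time() - start
        # Illustrative limit only; the real test's bound is what's under discussion.
        self.assertLess(elapsed, 10.0)
```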
It's useful to keep an eye on the run time of this test and to keep the time limit quite tight: if the startup of CompoundGenerator is too slow for large scans, it undermines the effort made in #22, and it's easy to regress (for instance, commit 3de9d66 in #24 added ~2 seconds to this test, a 30% increase in run time).
But the run times on Travis are far too inconsistent for a tight bound, and builds are prone to the odd failure, particularly at peak times.
Do we want to keep the test or remove it? Do we want to relax the time limit? Or do we want to do some weird hack for Travis, say applying the tighter limit on a workstation but not on Travis VMs?
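A low-effort sketch of the Travis-specific option: Travis CI sets `TRAVIS=true` in its build environment, so the limit could be chosen at run time. The numbers here are placeholders, not the real test's bounds:

```python
import os

# Travis CI exports TRAVIS=true in every build, so this distinguishes
# CI runs from workstation runs without any extra configuration.
ON_TRAVIS = os.environ.get("TRAVIS") == "true"

# Placeholder limits: tight on a workstation, looser on shared Travis VMs.
TIME_LIMIT = 30.0 if ON_TRAVIS else 10.0

# ...then in the test:
# self.assertLess(elapsed, TIME_LIMIT)
```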
Would taking an average of a few attempts give a more consistent value? Or would multiple runs in quick succession all show similarly long run times? That might allow us to put a tighter bound on it.
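A sketch of what repeating the measurement might look like with `timeit.repeat`; `make_generator` is a hypothetical factory that builds a fresh 200-million-point compound generator each run (if prepare short-circuits on an already-prepared instance, a new instance is needed per timing):

```python
import timeit


def run_prepare():
    # make_generator is a hypothetical helper that constructs a fresh
    # CompoundGenerator each call, so every timed run does the full work.
    gen = make_generator()
    gen.prepare()


# Time three independent runs; number=1 because each run is already slow.
times = timeit.repeat(run_prepare, number=1, repeat=3)

# The minimum is usually less noisy than the mean on a loaded CI machine,
# since transient load can only make a run slower, never faster.
best = min(times)
```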
The conversion from np.bool to np.int8 also negatively impacted this test (adding another couple of seconds); that conversion was necessary to use the project in Jython.
The latest release of Dawn (with updates to scisoftpy) allows np.bool to be used, but it performs worse.