CompoundGenerator performance test #39

Open · c-mita opened this issue Mar 17, 2017 · 3 comments
@c-mita (Contributor) commented Mar 17, 2017

There is a performance test for CompoundGenerator.prepare that checks prepare runs in "reasonable" time when given a "unified" set of generators whose sizes multiply to 200 million points.
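For reference, the test is roughly of the following shape (a minimal sketch, assuming the scanpointgenerator API of the time, i.e. LineGenerator(name, units, start, stop, num) and CompoundGenerator(generators, excluders, mutators); the generator sizes and the time limit shown are illustrative, not the suite's exact values):

```python
import time
import unittest

from scanpointgenerator import CompoundGenerator, LineGenerator


class PrepareSpeedTest(unittest.TestCase):
    def test_prepare_200_million_points(self):
        # 1000 * 1000 * 200 = 200 million points in total
        gens = [
            LineGenerator("x", "mm", 0.0, 1.0, 1000),
            LineGenerator("y", "mm", 0.0, 1.0, 1000),
            LineGenerator("z", "mm", 0.0, 1.0, 200),
        ]
        gen = CompoundGenerator(gens, [], [])
        start = time.time()
        gen.prepare()
        elapsed = time.time() - start
        # illustrative bound; the real test's limit is tighter
        self.assertLess(elapsed, 10.0)
```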

It's useful to keep an eye on the run time of this test and to keep the time limit quite tight: if the startup for CompoundGenerator is too slow for large scans, it undermines the effort made in #22, and it's easy to regress (for instance, commit 3de9d66 in #24 added ~2 seconds to this test, a 30% increase in run time).

But the run times on Travis are far too inconsistent for a tight bound, and builds are prone to the odd failure, particularly at peak times.

Do we want to keep the test or remove it? Do we want to relax the time limit? Or do we want some weird Travis-specific hack, say applying the tighter limit on a workstation but not on Travis VMs?

@GDYendell (Contributor) commented
Would taking an average of a few attempts give a more consistent value? Or would multiple runs in quick succession lead to similarly long run times? That might let us put a tighter bound on the test.
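A sketch of that idea (timed_prepare and make_generator are hypothetical names; a fresh generator is built for each run in case prepare() caches its result):

```python
import timeit


def timed_prepare(make_generator, repeats=5):
    """Time CompoundGenerator.prepare over several fresh instances."""
    times = []
    for _ in range(repeats):
        gen = make_generator()  # fresh instance per run
        times.append(timeit.timeit(gen.prepare, number=1))
    # min is usually steadier on a shared CI VM, since noise only
    # ever adds time; mean is the literal "average" suggested above
    return min(times), sum(times) / len(times)
```

Bounding the minimum rather than the mean would probably tolerate Travis noise better, since a loaded VM can only slow runs down.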

@c-mita (Contributor, Author) commented Mar 17, 2017

I'm wondering if reducing the number of concurrent builds would help - the Python tests used to be quite reliable until I added the Jython one.

Averaging multiple runs might help.

@c-mita (Contributor, Author) commented Mar 22, 2017

The conversion from np.bool to np.int8 also negatively impacted this test (adding another couple of seconds). This conversion was necessary for use of the project in Jython.

The latest release of Dawn (with updates to the scisoftpy layer) allows use of np.bool, but that path performs worse.

So all paths are bad.
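For illustration, a hypothetical micro-benchmark of the dtype trade-off (CompoundGenerator's real mask handling is more involved, and np.bool here means the old NumPy alias for bool, spelled np.bool_ below):

```python
import timeit

import numpy as np

n = 10 * 1000 * 1000
points = np.random.rand(n)

for dtype in (np.bool_, np.int8):
    mask = np.zeros(n, dtype=dtype)
    # combine an existing mask with a fresh comparison result,
    # converted to the mask's dtype, as an excluder might
    t = timeit.timeit(
        lambda: mask & (points > 0.5).astype(dtype), number=10
    )
    print(dtype.__name__, t)
```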
