Random fail of test_mpi_coarsen_2d #51
Comments
Fixed.
Unfortunately, this test failed once again; here is the Travis log.
Unfortunately we are experiencing this quite a few times; here is another failure, and there have been further ones. Note, none of those failures occur systematically; they are more or less random, as there are a number of builds that passed all tests with the same config options.
Correction: test_mpi_adapt_3d seems to fail systematically iff ENABLE_OPENMP=FALSE and passes otherwise. The other failing tests (as stated above) are experienced randomly.
Can you confirm you have that the right way around? Can you reproduce this by hand using a simple bash loop? Run it with -v so we have an actual error message to consider. Gerard (mobile)
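For instance, something along these lines (a sketch only; whether the test is driven through ctest, the iteration count, and the build directory name are assumptions):

```bash
# Re-run the flaky test in a loop until it fails, keeping verbose output.
cd build
for i in $(seq 1 100); do
    ctest -V -R test_mpi_coarsen_2d || { echo "failed on run $i"; break; }
done
```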
Good call. I'll check the other two tests tomorrow.
I see it segfaults inside generate_location() in Smoothing. I bet if you compile the code with debug support you will get an assertion failure - the one related to "tol > -DBL_EPSILON" or something like that. This is a problem I faced months ago and I haven't been able to fix it so far. It is not a multi-threading issue; the problem occurs even with one thread. Actually, if you want to make test_mpi_adapt_3d fail systematically, just run it with OMP_NUM_THREADS=1. Multi-threading changes the order of operations, making the test pass sometimes and fail others.
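For reference, a minimal sketch of that reproduction path (the build layout, binary location and MPI process count are assumptions, not taken from this thread):

```bash
# Build with debug support so the assertion fires instead of a plain segfault.
mkdir -p build-debug && cd build-debug
cmake -DCMAKE_BUILD_TYPE=Debug ..
make

# Pin OpenMP to one thread to fix the order of operations; the failure
# should then be systematic rather than intermittent.
OMP_NUM_THREADS=1 mpirun -np 2 ./bin/test_mpi_adapt_3d
```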
I may have a fix for this in my branch for boundary coarsening. I found that if highly anisotropic elements were generated, the metric may have a non-positive determinant due to roundoff. Frank - can you perform the same test with my branch? Cheers
Here we go: based on 500 runs, the failures are the same as above.
Following the merge of cmake-enable-mpi-option into master, the test test_mpi_coarsen_2d was failing on Travis (travis build), with ENABLE_VTK=TRUE and ENABLE_MPI not set, thus it was built with MPI support. The same build (from the same commit/merge), but with ENABLE_MPI=TRUE and ENABLE_VTK not set, thus with VTK support, passed all the tests (travis build).
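For clarity, the two configurations being compared would look roughly like this (a sketch; the reading that an unset option defaults to ON follows the description above and is not verified here):

```bash
# Failing configuration: VTK forced on, ENABLE_MPI left unset (MPI support by default).
cmake -DENABLE_VTK=TRUE ..

# Passing configuration: MPI forced on, ENABLE_VTK left unset (VTK support by default).
cmake -DENABLE_MPI=TRUE ..
```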