Now that we have successfully moved from Travis to GitHub Actions (#155), another important step will be to revise the UnitTests themselves.
One main issue is that there are UnitTests that are not checked by the CI at all due to the long runtime of the current tests (these are the main CrossSection tests EPair, Brems, Photonuclear and MuPair). I recently tried to run them manually and found that one of them (Brems) is actually failing on the current master. Furthermore, I noticed that the Python scripts that generate the files for the reproducibility tests (/tests/gen_testfiles_scripts) are not fully working yet.
Rather than doing some quick and dirty fixes, I think it would be more appropriate to take some time to actually revise the UnitTests (maybe a "focus week" in the foreseeable future).
I would like to gather some ideas about what could and should be improved about the UnitTests. My first thoughts:
Not all CrossSection tests are run automatically in CI (see above). At the very least, we should run a limited amount of testing in the CI to avoid such situations. One idea: run all integration-based tests in the CI (they take only a few minutes compared to the interpolation tests) and run at least one interpolation test for the default cross sections.
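One way this split could be realized without maintaining separate test binaries is an environment switch evaluated inside the tests. This is only a sketch: the variable name `PROPOSAL_FAST_TESTS` and the `GTEST_SKIP()`-based skipping are assumptions, not something that exists in the repository today.

```cpp
#include <cstdlib>
#include <string>

#include "gtest/gtest.h"

// Hypothetical switch: the GitHub Actions workflow could export
// PROPOSAL_FAST_TESTS=1 so that long-running interpolation tests are skipped
// in the default CI run but still executed locally or in a scheduled full run.
static bool OnlyFastTests() {
    const char* flag = std::getenv("PROPOSAL_FAST_TESTS");
    return flag != nullptr && std::string(flag) == "1";
}

TEST(Brems, InterpolatedDEdx) {
    if (OnlyFastTests())
        GTEST_SKIP() << "interpolation tests excluded from the reduced CI run";
    // ... build the interpolated cross section and run the actual checks ...
}
```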
For the CrossSection UnitTests, we conduct reproducibility tests (which is generally OK). However, the interpolation tests are currently also reproducibility tests (i.e. we compare the interpolated results to the interpolated results from previous versions saved in txt files). They could easily be turned into proper UnitTests by comparing the interpolated values to the integrated values. This way, we could also check the interpolation over a much larger phase space to find possible interpolation issues.
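A rough sketch of what such a check could look like. The factory functions `make_brems_integral()` / `make_brems_interpolant()` and the method name `CalculatedEdx()` are placeholders for whatever the test fixtures actually provide; only the structure of the comparison is the point here.

```cpp
#include <cmath>

#include "gtest/gtest.h"

// Placeholder helpers: in the real test these would be the existing fixtures
// that construct the Brems cross section with and without interpolation.
TEST(Brems, InterpolationAgreesWithIntegration) {
    auto integral = make_brems_integral();       // numerically integrated cross section
    auto interpolant = make_brems_interpolant(); // interpolated tables

    // Scan a wide energy range instead of the few points stored in txt files.
    for (double energy = 1e3; energy <= 1e12; energy *= 10) {
        double dedx_exact = integral->CalculatedEdx(energy);
        double dedx_interp = interpolant->CalculatedEdx(energy);
        // Agreement within a relative tolerance replaces the file comparison.
        EXPECT_NEAR(dedx_interp, dedx_exact, 1e-3 * std::abs(dedx_exact));
    }
}
```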
In addition to the reproducibility tests, there should be many more physics tests. Ideas include sanity checks (e.g. "does the LPM effect actually suppress the differential cross section?") or comparing results to theoretical predictions (e.g. dEdx values).
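As an illustration of the LPM sanity check, here is a minimal sketch; `make_brems_param(bool lpm)` and `DifferentialCrossSection(energy, v)` are hypothetical names standing in for however the bremsstrahlung parametrization is actually built with and without the LPM correction.

```cpp
#include "gtest/gtest.h"

// Sanity check: the LPM effect must never enhance dSigma/dv.
TEST(Brems, LpmSuppressesDifferentialCrossSection) {
    auto with_lpm = make_brems_param(true);     // placeholder factory, LPM enabled
    auto without_lpm = make_brems_param(false); // placeholder factory, LPM disabled

    const double energy = 1e9; // high energy, where LPM suppression is relevant
    for (double v = 1e-3; v < 1.0; v *= 2.0) {
        EXPECT_LE(with_lpm->DifferentialCrossSection(energy, v),
                  without_lpm->DifferentialCrossSection(energy, v));
    }
}
```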
Please add your comments and further ideas.