Feature gyroaverages #187
Conversation
…uding test in the test_scripts folder. Currently this version runs but produces nonsensical results for some choices of the input parameters, suggesting a bug. Boundary conditions are not treated, meaning that the gyroaverage will not be accurate within a gyroradius of the boundary of the domain.
…gyroaverage line integral exits the domain. This change makes it easier to understand if the core numerical method is correct. Comment out print statements. Numerical experiments suggest that, to have the correct behaviour when averaging over high-k fluctuations, the number of points in the gyrophase grid should be very large, ~ 100 - 1000. Note that this does not add cost at runtime, only in initialisation. Without the diagnostic print statements, the initialisation only takes a few seconds for reasonable z, r, vperp resolutions, even with 1000 gyrophase points. Physically, we should expect that the number of points in gyrophase should increase with increasing maximum wavenumber to provide an effective averaging operation.
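For readers unfamiliar with the operation, a minimal sketch of a gyrophase-grid average is below. The names and arguments are illustrative only; the actual implementation applies a precomputed interpolation matrix rather than evaluating a callable field.

```julia
# Illustrative sketch only: average a field phi(z, r) over gyrophase for a
# particle with guiding centre (z0, r0) and gyroradius rho, using a uniform
# gyrophase grid with ngyro points. A callable phi_func is assumed here for
# simplicity; the real code interpolates a gridded field instead.
function gyroaverage_point(phi_func, z0, r0, rho, ngyro)
    gphase = range(0, 2π, length=ngyro+1)[1:end-1]  # uniform gyrophase grid
    avg = 0.0
    for γ in gphase
        # particle position a gyroradius away from the guiding centre
        z = z0 + rho * cos(γ)
        r = r0 + rho * sin(γ)
        avg += phi_func(z, r)
    end
    return avg / ngyro
end
```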
…a gyroaverage. Thanks to S. Newton for spotting the error.
…iodic function with a single wavenumber in each of the z and r dimensions. The test confirms that for reasonable resolutions (see the example call of gyroaverage_test() in test_scripts/gyroaverage_test.jl) the errors in computing the gyroaverage can be as small as 10^-8. Note that non-monotonic behaviour of the gyroaveraged phi with kperp rho is to be expected in this test, since besselj0(krho) is not monotonically decreasing but decreases with oscillatory behaviour.
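The besselj0(krho) expectation can be checked independently: averaging a single Fourier mode over gyrophase gives J0(kperp rho) times the mode. A small standalone check (assuming SpecialFunctions.jl for besselj0; this is not the test in test_scripts) is:

```julia
using SpecialFunctions  # for besselj0

# Gyroaveraging a single Fourier mode exp(i k⊥·x) over gyrophase gives
# besselj0(k⊥ ρ) times the mode, which oscillates (rather than decaying
# monotonically) as k⊥ ρ increases.
function mode_gyroaverage(kperp, rho; ngyro=1000)
    gphase = range(0, 2π, length=ngyro+1)[1:end-1]
    # real part of (1/2π) ∮ exp(i k⊥ ρ cos γ) dγ
    return sum(cos(kperp * rho * cos(γ)) for γ in gphase) / ngyro
end

kperp, rho = 2.0, 1.5
println(mode_gyroaverage(kperp, rho) ≈ besselj0(kperp * rho))  # true to round-off
```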
…cies. The gyroaverage of the fields is easily accomplished by adding new gfields which contain the necessary gyroaverages. Adding the gyroaverage to the velocity integration is substantially more complicated due to the many disparate locations where the pdf is integrated to form a moment. Further changes are required to test what is changed in this commit, and further modifications to the calculation of the velocity integrals are required.
…moments are computed in a single function. This permits the use of a gyroaveraged distribution function in a single location in the code, reducing the number of instances where we need to perform the gyroaverage.
…y the matrix to the pdf. Note that these functions require further changes to make use of the distributed- and shared-memory MPI. The example test inputs give errors of order 10^-7.
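Schematically, applying the gyromatrix to a field for a single (vperp, species) slice looks like the sketch below; the storage layout and names are guesses for illustration, not the code in this commit.

```julia
# Schematic only: apply a precomputed gyroaverage matrix to a field defined on
# an (nz, nr) spatial grid. The matrix couples every spatial point to every
# other, so it is stored here as a dense 4-index array.
function gyroaverage_field!(gphi, gyromatrix, phi, nz, nr)
    for ir in 1:nr, iz in 1:nz
        s = 0.0
        for irp in 1:nr, izp in 1:nz
            s += gyromatrix[izp, irp, iz, ir] * phi[izp, irp]
        end
        gphi[iz, ir] = s
    end
    return gphi
end
```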
…atch certain errors when choosing ielement by reducing the zero value, and by linking the two if statements for the internal and boundary cases.
…nctions in the init_gyroaverage_operators() function. Correct some minor bugs. The code now time advances with gyrokinetic_ions = true.
…ct variable gyrokinetic_ions.
…age functions to use shared-memory MPI. The initialisation of the gyromatrix is done in serial.
… spatially periodic boundary conditions.
periodic_box_low_res.zip
…e gyroaverage operator. Introduce indexing arrays that record which elements of the gyromatrix are nonzero and sum only over these indices, in an attempt to benefit from the sparsity of the gyromatrix for rhostar << 1 and large problem sizes with many elements. The gyromatrix is expected to be sparse only if nelement is large and rhostar is small. The matrix may be sparse for the smallest vperp but dense for the maximum vperp if rho(vperp) becomes comparable to the box size. Confirmation of a speedup requires further testing to confirm that the Julia compiler behaves well for this implementation.
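The sparsity idea reads roughly as in the following sketch, where only the precomputed nonzero indices are visited in the inner sum; names and layout are illustrative only, not the committed code.

```julia
# Sketch: for each target point, precompute the list of (izp, irp) indices at
# which the gyromatrix is nonzero and sum only over those entries, skipping
# the zero entries that dominate when rhostar is small and nelement is large.
function gyroaverage_field_sparse!(gphi, gyromatrix, nonzero_indices, phi, nz, nr)
    for ir in 1:nr, iz in 1:nz
        s = 0.0
        for (izp, irp) in nonzero_indices[iz, ir]
            s += gyromatrix[izp, irp, iz, ir] * phi[izp, irp]
        end
        gphi[iz, ir] = s
    end
    return gphi
end
```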
… they can be accommodated without introducing shared-memory arrays of novel types. This has the cost of introducing arrays that store undefined (and unused) values, but avoids significant development in the shared-memory framework. Testing is still required to demonstrate that the gyroaverage operator is sped up by the combined changes represented by this and the last commit.
Marking this PR as ready for review, as shared-memory MPI is now working; I am looking for feedback before implementing distributed-memory MPI support. @johnomotani @mabarnes Edit: Some tests still seem to be failing on CI...
Commit cd85aa7 passes all tests apart from the parallel tests on macOS, which mysteriously time out after seeming to finish with the tests reportedly passing (although a different number of tests seems to be carried out compared to the Ubuntu case).
The macOS parallel tests do this fairly frequently. Often just re-running the jobs lets them pass, so I've not investigated further. The macOS parallel tests were being slow, so I cut them down to 1 test run (4 process, no …
I haven't checked the actual gyroaverage functionality or tests, but let's merge this to get the updates to the moment-update functions into master. The gyroaverage feature is still a prototype, so we should open an issue to track the remaining todos.
This issue can act as the tracker: #186
It looks like some of the parallel tests might have been broken; I am rerunning the tests, as I don't understand how that can be.
It looks like parallel tests sometimes (often?) fail and sometimes pass, which is a bit suspicious. It would be good to add a 'debug check' that uses the gyroaverage - I'll have a look at that now.
The FFTW plan is only used once here, so it does not need to be fast, and using FFTW.ESTIMATE ensures the weights will be identical on all processes.
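For reference, a minimal illustration of the planner-flag point (not the code under review): FFTW.ESTIMATE chooses a plan without runtime timing experiments, so it is deterministic across processes, whereas FFTW.MEASURE may select different algorithms on different machines or runs, giving weights that differ at round-off level.

```julia
using FFTW

# One-off transform: a deterministic, un-tuned plan is fine and guarantees
# every MPI process builds exactly the same plan.
x = zeros(ComplexF64, 64)
plan = plan_fft(x; flags=FFTW.ESTIMATE)
```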
Ah, the PR tests (in parallel) were failing because of changes from #203 (which had been merged into …
This was (should have been?) removed in #187, but probably got reintroduced in a bad merge.
To do list: