1D interpolation for gauss_legendre #214
Conversation
Means each discretization only needs to implement `single_element_interpolate!()`.
Add a flag to the `coordinate` struct so that we do not need to test by coordinate name.
Removes some code duplication, and we might want other Lagrange-polynomial-related functions at some point in the future.
This speeds up some tests by about 2x.
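For reference, here is a minimal sketch (not the moment_kinetics implementation) of the Lagrange-polynomial evaluation that a `single_element_interpolate!()` method can be built on:

```julia
# Minimal sketch of Lagrange interpolation on a single element (illustrative only).

# Evaluate the j'th Lagrange basis polynomial, defined by `nodes`, at `x`:
#   l_j(x) = prod_{k != j} (x - x_k) / (x_j - x_k)
function lagrange_basis(x, nodes, j)
    result = one(x)
    for k in eachindex(nodes)
        k == j && continue
        result *= (x - nodes[k]) / (nodes[j] - nodes[k])
    end
    return result
end

# Interpolate values `f`, given at `nodes` within one element, to the point `x`.
lagrange_interpolate(x, nodes, f) =
    sum(f[j] * lagrange_basis(x, nodes, j) for j in eachindex(nodes))
```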
Removes re-calculation of per-element matrices, which was inefficient. Should be more efficient than per-element operations followed by element boundary reconciliation.
Only works when not using distributed MPI.
Weak-form second derivatives do not support distributed-memory coordinates, so they now raise an error when used with a distributed-memory coordinate.
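A rough sketch of the 'global matrix' approach for the weak-form second derivative (the names `K_global` and `mass_lu` are illustrative, not the actual fields in the code): assemble the mass and stiffness matrices once over all elements, so that each call is just a matrix-vector product plus a mass-matrix solve, with no per-element work or element-boundary reconciliation.

```julia
using LinearAlgebra, SparseArrays

struct WeakFormSecondDerivative
    mass_lu                                 # factorisation of the global mass matrix M
    K_global::SparseMatrixCSC{Float64,Int}  # assembled global weak-form matrix K
end

function second_derivative_weak!(d2f, wf::WeakFormSecondDerivative, f)
    # Weak form: M * d2f = K * f, with M and K assembled once over all elements
    d2f .= wf.mass_lu \ (wf.K_global * f)
    return d2f
end

# Usage (assembly of M and K omitted):
#   wf = WeakFormSecondDerivative(lu(M), K)
#   second_derivative_weak!(d2f, wf, f)
```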
Without seeing the other changes yet, I wonder if you would be happy to make this commit 8819a5a into a separate (completely uncontroversial) PR? I would approve immediately as I should have done this refactor when I introduced the gyroaverage feature.
I am happy with the changes in commit 60babb1.
Before going ahead to understand the rest of the changes, I think I need to understand better the interpolation routine `interpolate_to_grid_1d!` (formerly specific to Chebyshev grids, 399dfdf).
Whilst the logic must be the same as one of the routines that I have written (e.g. `function interpolate_2D_vspace!(pdf_out,pdf_in,vpa,vperp,scalefac)`) …
As a general comment, it looks like this function has some quite specific assumptions that are not true in the general case:
1. the interpolation is done element-wise in the new grid (that would only be true in the Chebyshev case, when FFTs are used?), and
2. we cannot use the polynomials to extrapolate (untrue on the Radau elements, where we have to extrapolate between [0, xmin], because the origin is not a point on the grid).

Is there a reason that I am missing for keeping this general form of the interpolation routine, besides needing to support the existing Chebyshev methods? I think we would need far fewer lines of code if we instead used Lagrange interpolation (no FFTs) for both Gauss-Legendre and Chebyshev grids, which would have the advantage that no tests are required for whether an element is Radau or Lobatto.
For a coordinate with a Radau first element, the region between coord=0 and coord=coord.grid[1] is actually within the first element, and there will be no points to the left of the first element to interpolate to, because the coordinate range is 0 < coord < infinity. Radau elements therefore need special handling, so that there is no Maxwellian-like extrapolation to the left of the first element.
Doing the interpolation element-by-element (in the original grid) makes sense whether we use Lagrange polynomials or FFTs-with-Chebyshev - the lookups, e.g. for grid points within the element, only need to be done once per element, rather than once per interpolated point. It's true that Radau elements were not correctly handled; I've just fixed that. I've also just noticed that when using gauss-legendre elements in a moment-kinetic run, a lot of time (about half the run time) is spent in […]

Edited to add: testing for a Radau first element will always be required, because they need to handle the range between 0 and […]
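To make the element-by-element structure concrete, here is a rough, simplified sketch (hypothetical argument layout, not the actual `interpolate_to_grid_1d!` signature; it reuses the `lagrange_interpolate` helper sketched above and assumes `newgrid` is sorted and lies within the coordinate range):

```julia
function interpolate_to_grid_1d_sketch(newgrid, f, grid, element_boundaries,
                                       radau_first_element::Bool)
    result = similar(newgrid)
    nelement = length(element_boundaries) - 1
    k = firstindex(newgrid)
    for ielement in 1:nelement
        # A Radau first element covers (0, element_boundaries[2]] even though
        # grid[1] > 0, so new points in (0, grid[1]) are interpolated, not
        # extrapolated.
        lower = (ielement == 1 && radau_first_element) ? 0.0 : element_boundaries[ielement]
        upper = element_boundaries[ielement + 1]
        # Look up this element's grid points once per element...
        inds = findall(x -> lower <= x <= upper, grid)
        nodes = grid[inds]
        fe = f[inds]
        # ...then consume all new-grid points that fall within this element.
        while k <= lastindex(newgrid) && newgrid[k] <= upper
            result[k] = lagrange_interpolate(newgrid[k], nodes, fe)
            k += 1
        end
    end
    return result
end
```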
Enables the `interpolate_1d!()` function when using `gausslegendre_pseudospectral`, using Lagrange polynomial evaluation.

Also optimises `second_derivative!()` when using `gausslegendre_pseudospectral`, by removing the re-evaluation of per-element matrices and instead using the 'global' `K_matrix` and `L_matrix`. Adds partial handling of periodic dimensions to the weak-form matrices in `gauss_legendre` - without that, this update would have broken the second-derivative test in `calculus_tests.jl`. The handling is only partial because it does not support distributed-MPI. It does not seem worth trying to work out how to support distributed-MPI with weak-form derivatives at the moment, because that would require a distributed-MPI matrix solver. The closest options seem to be MUMPS.jl and PETSc.jl - both interfaces to external packages that do support MPI - but the Julia interfaces only partially support MPI and, as far as I can see, do not yet support what we would need. The only use case for distributed-MPI at the moment would be the second derivative in parallel thermal conduction for 'Braginskii electrons', which will run well enough with only shared-memory parallelism for now.

As this PR adds `gausslegendre_pseudospectral` to the interpolation tests, the time taken to set up a `gausslegendre_pseudospectral` discretization became more annoying, so I've extended the `init_YY=false` option to skip calculation of a few more matrices that are only used for the collision operator, and renamed it `collision_operator_dim`, since not all of those matrices are now 'YY' matrices.
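Purely as an illustration of how the renamed option is meant to work (the real setup function and field names in moment_kinetics differ), the idea is to gate construction of the collision-operator-only matrices:

```julia
# Illustrative sketch only: the expensive matrices used only by the collision
# operator are built only when `collision_operator_dim=true`.
struct GaussLegendreLikeInfo
    derivative_matrices::Dict{Symbol,Matrix{Float64}}
    collision_matrices::Union{Nothing,Dict{Symbol,Matrix{Float64}}}
end

function setup_discretization_sketch(ngrid::Int; collision_operator_dim::Bool=true)
    # Matrices needed for derivatives/interpolation are always built
    derivative_matrices = Dict(:K => zeros(ngrid, ngrid), :L => zeros(ngrid, ngrid))
    # 'YY'-type matrices only when this dimension is used by the collision operator
    if collision_operator_dim
        collision_matrices = Dict(:YY0 => zeros(ngrid, ngrid))
    else
        collision_matrices = nothing
    end
    return GaussLegendreLikeInfo(derivative_matrices, collision_matrices)
end
```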