
1D interpolation for gauss_legendre #214

Merged — 11 commits from gausslegendre-1d-interpolation into master, Jun 4, 2024

Conversation

johnomotani (Collaborator) commented:
Enables the interpolate_1d!() function for gausslegendre_pseudospectral discretizations, using Lagrange polynomial evaluation.
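
For context, evaluating a Lagrange polynomial through the nodes of a single element amounts to something like the following sketch (illustrative only; `lagrange_eval` is a hypothetical name, not the actual `single_element_interpolate!()` implementation):

```julia
# Sketch: evaluate the Lagrange interpolating polynomial passing through
# (nodes[i], values[i]) at a point x. Not the moment_kinetics implementation.
function lagrange_eval(x, nodes::AbstractVector, values::AbstractVector)
    result = zero(promote_type(typeof(x), eltype(values)))
    for i in eachindex(nodes)
        # Basis polynomial l_i(x) = prod_{j != i} (x - nodes[j]) / (nodes[i] - nodes[j])
        li = one(x)
        for j in eachindex(nodes)
            j == i && continue
            li *= (x - nodes[j]) / (nodes[i] - nodes[j])
        end
        result += values[i] * li
    end
    return result
end

# Example: the quadratic through (0, 1), (0.5, 1.25), (1, 2) is x^2 + 1,
# so this evaluates to 1.09.
lagrange_eval(0.3, [0.0, 0.5, 1.0], [1.0, 1.25, 2.0])
```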

Also optimises second_derivative!() when using gausslegendre_pseudospectral, by removing the re-evaluation of per-element matrices and instead using the 'global' K_matrix and L_matrix. Adds partial handling of periodic dimensions to the weak-form matrices in gauss_legendre; without that, this update would have broken the second-derivative test in calculus_tests.jl. The handling is only partial because it does not support distributed-MPI.

It does not seem worth trying to work out how to support distributed-MPI with weak-form derivatives at the moment, because that would require a distributed-MPI matrix solver. The closest options seem to be MUMPS.jl and PETSc.jl, both interfaces to external packages that do support MPI, but the Julia interfaces only partially support MPI and, as far as I can see, do not yet support what we would need. The only use-case for distributed-MPI at the moment would be the second derivative in parallel thermal conduction for 'Braginskii electrons', which will run well enough with only shared-memory parallelism for now.

As this PR adds gausslegendre_pseudospectral to the interpolation tests, the time taken to set up a gausslegendre_pseudospectral discretization became more annoying, so I have extended the init_YY=false option to skip the calculation of a few more matrices that are only used for the collision operator. The option is also renamed to collision_operator_dim, since not all of the matrices it controls are 'YY' matrices any more.

Commit notes:

- Means each discretization only needs to implement `single_element_interpolate!()`.
- Adds a flag to the `coordinate` struct so that we do not need to test by coordinate name.
- Removes some code duplication; we might want other Lagrange-polynomial-related functions at some point in the future.
- Removes re-calculation of per-element matrices, which was inefficient.
- Should be more efficient than per-element operations followed by element-boundary reconciliation. Only works when not using distributed MPI (a sketch of the global-matrix approach follows this list).
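
Below is a minimal sketch of the global-matrix approach the commits describe, assuming a pre-assembled global stiffness matrix and a mass matrix factorised once at setup (the names and toy matrices are illustrative, not the moment_kinetics internals):

```julia
using LinearAlgebra, SparseArrays

# Sketch: weak-form second derivative via global matrices, solving
# M * d2f = K * f with a mass-matrix factorisation computed once at setup,
# instead of rebuilding per-element matrices on every call.
function weak_second_derivative(f::AbstractVector, K::AbstractMatrix, M_factorisation)
    rhs = K * f               # weak-form right-hand side
    return M_factorisation \ rhs
end

# Setup (done once): factorise the global mass matrix.
M = sparse(SymTridiagonal(fill(4.0, 5), fill(1.0, 4)))   # toy mass matrix
K = sparse(SymTridiagonal(fill(-2.0, 5), fill(1.0, 4)))  # toy stiffness matrix
M_lu = lu(M)

d2f = weak_second_derivative(ones(5), K, M_lu)
```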
@johnomotani johnomotani added the enhancement New feature or request label May 10, 2024
@johnomotani johnomotani requested a review from mrhardman May 10, 2024 16:01
Weak-form second derivatives do not support distributed-memory coordinates, and now raise an error when used with one.
mrhardman (Collaborator) commented:
Without seeing the other changes yet, I wonder if you would be happy to split commit 8819a5a into a separate (completely uncontroversial) PR? I would approve it immediately, as I should have done this refactor when I introduced the gyroaverage feature.

mrhardman (Collaborator) commented:

I am happy with the changes in commit 60babb1.

mrhardman (Collaborator) commented:

Before going ahead to understand the rest of the changes, I think I need to better understand the interpolation routine interpolate_to_grid_1d! (formerly specific to Chebyshev grids, 399dfdf).

Whilst the logic must be the same as one of the routines that I have written (e.g. `interpolate_2D_vspace!(pdf_out, pdf_in, vpa, vperp, scalefac)`), with an enquiry function to find which element the position x_interp sits within, and then the interpolation on that element for the desired point x_interp, I cannot directly see the correspondence. Can you explain in a few words (or maybe we can have a meeting) what kstart is and why you use that instead of the element index?

As a general comment, it looks like this function has some quite specific assumptions that are not true in the general case: 1) that the interpolation is done element-wise in the new grid (that would only be true in the Chebyshev case, when FFTs are used?); and 2) that we cannot use the polynomials to extrapolate (untrue on the Radau elements, where we have to extrapolate between [0, xmin], because the origin is not a point on the grid). Is there a reason that I am missing for keeping this general form of the interpolation routine, besides needing to support the existing Chebyshev methods? I think the number of lines of code would be much smaller if we instead used Lagrange interpolation (no FFTs) for both Gauss-Legendre and Chebyshev grids, which would have the advantage that no tests are required for whether an element is Radau or Lobatto.

For a coordinate with a Radau first element, the region between coord=0 and coord=coord.grid[1] is actually within the first element, and there will be no points to interpolate to on the left of the first element, because the coordinate range is 0 < coord < infinity. Radau elements therefore need special handling, so that there is no Maxwellian-like extrapolation to the left of the first element.
johnomotani (Collaborator, Author) commented May 20, 2024:

The interpolate_to_grid_1d!() function uses the fact that both the original coordinate grid and newgrid are sorted (i.e. monotonically increasing values) to speed up the search. kstart is a set of indices that chunk newgrid into sections that each lie within a single element of the original grid.

Doing the interpolation element-by-element (in the original grid) makes sense whether we use Lagrange polynomials or FFTs-with-Chebyshev - the lookups, e.g. for grid points within the element, only need to be done once per element, rather than once per interpolated point.
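
A minimal sketch of the kstart idea (illustrative names, not the actual code): since both grids are sorted, a binary search per element boundary is enough to chunk newgrid:

```julia
# Sketch: for sorted newgrid and sorted element boundaries, kstart[i] is the
# first index of newgrid at or above boundary i, so the points to interpolate
# within element i are newgrid[kstart[i]:kstart[i+1]-1].
function find_kstart(newgrid::AbstractVector, element_boundaries::AbstractVector)
    return [searchsortedfirst(newgrid, b) for b in element_boundaries]
end

# Example with two elements, [0, 1] and [1, 2]:
kstart = find_kstart([0.1, 0.5, 1.2, 1.9], [0.0, 1.0, 2.0])
# kstart == [1, 3, 5]: element 1 covers newgrid[1:2], element 2 covers newgrid[3:4]
```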

It's true that Radau elements were not correctly handled. I've just fixed that.

I've just noticed that when using gauss-legendre elements in a moment-kinetic run, a lot of time (about half the run time) is spent in the interpolate_to_grid_1d!() function, so I'm going to have a go at optimising it a bit more. The optimisation will probably need more per-element lookups, so the current structure of the function does seem like the right thing to me.

Edited to add: testing for a Radau first element will always be required, because we have to handle whether the range between 0 and coord.grid[1] is part of the grid (Radau element) or whether extrapolation to the left of coord.grid[1] might be needed (Lobatto element).
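
Schematically, the distinction looks like this (a hedged sketch with hypothetical names, reusing `lagrange_eval` from the sketch above; the extrapolation branch is just a placeholder, not the scheme's actual rule):

```julia
# Sketch: handling a target point x below the first grid point. For a Radau
# first element, (0, grid[1]) is interior to the element, so its polynomial
# is valid there; for a Lobatto first element the point is outside the grid.
function value_left_of_first_point(x, nodes, values, radau_first_element::Bool)
    if radau_first_element
        # Interior to the first element: ordinary Lagrange evaluation.
        return lagrange_eval(x, nodes, values)
    else
        # Outside a Lobatto grid: some extrapolation rule is needed; a
        # Maxwellian-like decay is one option (placeholder shown here).
        return values[1] * exp(-(nodes[1] - x)^2)
    end
end
```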

@johnomotani johnomotani merged commit 869d52e into master Jun 4, 2024
14 of 16 checks passed
@johnomotani johnomotani deleted the gausslegendre-1d-interpolation branch June 4, 2024 15:25