
Initial functionality for partially-implicit 'IMEX' timestepping #219

Merged
merged 75 commits into master from imex on Jul 30, 2024

Conversation

johnomotani
Collaborator

Runge-Kutta timestepping schemes that can handle terms split into an 'implicit' group and an 'explicit' group are known as 'IMEX' schemes. A couple of examples are introduced in this PR, from Kennedy & Carpenter 2003 and Kennedy & Carpenter 2019.
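For orientation, a minimal sketch of the forward-/backward-Euler building block that such schemes combine, for `du/dt = f_ex(u) + f_im(u)`. The names `f_ex`, `f_im` and `backward_euler_solve` are hypothetical stand-ins, not the moment_kinetics API:

```julia
# First-order IMEX-Euler step: explicit terms advanced with
# forward-Euler, implicit terms with backward-Euler.
function imex_euler_step(u, dt, f_ex, f_im, backward_euler_solve)
    # Forward-Euler (explicit) contribution
    u_star = u .+ dt .* f_ex(u)
    # Backward-Euler (implicit) contribution: solve the nonlinear system
    #   u_new = u_star + dt * f_im(u_new)
    # e.g. with a Newton-Krylov solver as described below.
    return backward_euler_solve(u_star, dt, f_im)
end
```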

The IMEX schemes are built up from linear combinations of forward-Euler (explicit) and backward-Euler (implicit) steps. To implement the backward-Euler steps (added in this PR), we add a 'Jacobian-free Newton-Krylov' iterative solver (e.g. Knoll & Keyes 2004).
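A JFNK solver never forms the Jacobian explicitly; it only needs matrix-vector products `J*v`, which can be approximated by a finite difference of the residual. The following is an illustrative sketch only, assuming the LinearMaps.jl and IterativeSolvers.jl packages, and is not the moment_kinetics implementation (which, per the commits below, uses custom norms and perturbation scaling):

```julia
using LinearAlgebra, LinearMaps, IterativeSolvers

# Solve F(x) = 0 by Newton iteration, with each linear solve done by
# GMRES using matrix-free Jacobian-vector products.
function jfnk_solve(F, x0; tol=1.0e-8, maxits=20)
    x = copy(x0)
    for _ in 1:maxits
        Fx = F(x)
        norm(Fx) < tol && return x
        # Matrix-free J*v ≈ (F(x + h*v) - F(x)) / h
        h = sqrt(eps()) * max(norm(x), 1.0)
        Jv = LinearMap(v -> (F(x .+ h .* v) .- Fx) ./ h, length(x))
        # Solve J*dx = -F(x) with GMRES, then take the Newton step
        dx = gmres(Jv, -Fx)
        x .+= dx
    end
    return x
end

# Example: solve x.^2 .- 2 == 0 component-wise
jfnk_solve(x -> x .^ 2 .- 2.0, [1.0, 1.5])
```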

Current examples of choices for the 'implicit terms' are: i) just ion 'vpa advection' in the ion kinetic equation; ii) all terms in the ion kinetic equation (excluding the ion moment equations in the moment-kinetic case). These choices turn out not to be very useful: the CFL limits for different terms are not far apart (in a 1D1V test case including wall boundary conditions, the two smallest are for 'ion vpa advection' and 'neutral vz advection'), so the implicitness being added does not allow a large increase in timestep (about a factor of 2x to 4x in the best case). Unfortunately, the IMEX RK scheme cannot exceed the CFL limit by as large a factor as our 'standard' 4-stage, 3rd-order explicit scheme, so we do not end up with a longer timestep than the best explicit scheme we have.

However, the motivation for introducing this implicit functionality is to apply it to the electron equation (once that is added...). The code in this PR provides a sufficient framework for that, so I propose to merge now as-is. Useful future extensions for the ion/neutral equations would be trying different combinations of implicit terms and experimenting with preconditioners. The option to apply a function implementing a preconditioner is included, but no preconditioners are provided in this PR yet.

Some documentation is needed. I'd like to merge this PR anyway, as there are structural changes to the timestepping. If people are OK with that, I'll open an issue to flag that documentation is needed (when I get time...).

This was (should have been?) removed in #187, but probably got
reintroduced in a bad merge.
The gyroaveraged electric fields are not saved, so cannot be easily used
in post-processing. For now, just copy `Er` into `gEr` where it is
needed.
@johnomotani added the enhancement label on May 28, 2024
@mrhardman
Collaborator

If you think this PR is mission-critical for the electron implementation then I am happy for you to go ahead once the CI tests pass and you have checked that one of the MMS tests still runs.

Question: If the ion kinetic equation is supported, does this mean that the Fokker-Planck implementation is also now IMEX with this PR? A good way to justify these changes in a report would be to use the IMEX collision operator and demonstrate that much longer timesteps are possible with a fine velocity resolution. I have been concerned about this problem since seeing how small a timestep was required for the fast-ion-electron collisions in the slowing down relaxation problems that we considered recently.

Question: how hard is it to extend the IMEX stepper to the moment equations? One motivation for making the moment equations implicit would be to allow (in the distant future) for electromagnetic fields, for which other finite-element full-F/full-solution codes (like JOREK) seem to use a fully implicit solver.

In general documentation and more clarity is needed in the timestepping area of the code, so I am happy for documentation to be delayed until you are happy with the functionality.

@johnomotani force-pushed the imex branch 2 times, most recently from bfefa9c to 3b4c77f on June 6, 2024
Note this update changes the indexing of which factor caused the
timestep limit, so timestep limits from older output files will no
longer be labelled correctly by the postprocessing tools.
This may work as long as the timestep is above `dt_minimum` so that it
can actually be decreased.
Bug meant `t_params.dt[]` on the step up to an output was being made
slightly too large.
Need to handle external source coefficients that may not be saved in
non-moment-kinetic cases.

Also need to pass `gEz` instead of `Ez`.
This is useful as the function may be called in implicit timestepping
functions without first checking whether the run is moment-kinetic.
@johnomotani force-pushed the imex branch 2 times, most recently from 71767c1 to e7d9409 on June 11, 2024
Lets us see if ion and neutral limits overlap.
When not using adaptive timestepping, go back to checking the step
counter against `nwrite_moments` and `nwrite_dfns`. This is slightly
more complicated, as adaptive and non-adaptive schemes have different
ways of determining whether to write output, but will make it easier to
add a debug mode where adaptive timestep schemes write output after a
fixed number of steps rather than after a fixed simulation time.
...rather than after a fixed simulation time. May be useful for
debugging.
This can be useful for nonlinear solvers used for implicit parts of the
timestep.
...using solve_for() (or a hacked version of it...) from Symbolics.jl to
extract rk_coefs from equations defining y[i] in terms of Butcher tables
'a' and 'b', or in terms of rk_coefs.
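An illustrative example of the `Symbolics.solve_for` pattern the commit above describes: writing the same update in two forms and solving symbolically for one set of coefficients in terms of the other. The variable names and equations here are hypothetical, not the actual rk_coefs derivation:

```julia
using Symbolics

@variables rk1 rk2 a21 b1 b2
# Two expressions for the same stage update, equated term by term
# (illustrative only):
eqs = [rk1 + a21 * rk2 ~ b1, rk2 ~ b2]
Symbolics.solve_for(eqs, [rk1, rk2])  # -> [b1 - a21*b2, b2]
```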
Both distributed_norm() and distributed_dot() need to use rtol and atol.
Newton iteration and GMRES iteration now use the same norm. To use a
'Euclidean norm' (the standard thing for GMRES), pass rtol=0, atol=1.
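A tolerance-weighted norm of the kind described might look like the sketch below (common in adaptive ODE solvers); with `rtol=0, atol=1` it reduces to an RMS ('Euclidean') norm. This is illustrative only; the real `distributed_norm()` also reduces across MPI-distributed blocks:

```julia
# Each residual element is scaled by (rtol*|x_i| + atol) before
# taking a root-mean-square.
function weighted_norm(residual, x; rtol, atol)
    total = 0.0
    for (r, xi) in zip(residual, x)
        total += (r / (rtol * abs(xi) + atol))^2
    end
    return sqrt(total / length(residual))
end
```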
The compiler is not able to fully resolve the previous complicated generator expressions into efficient code, so write simpler versions for fixed numbers of arguments. Also add a specialised function for the calculation of delta_x from V.y.
Allows operator splitting into implicit and explicit parts.
If the number of nonlinear iterations was large on the previous step
(greater than half of the maximum allowed value for some call to
`newton_solve!()` during the step) then it is likely that the nonlinear
solve will fail on the next step if the timestep is allowed to increase,
so prevent any timestep increase in this case (the timestep is allowed
to decrease, if the other conditions caused that).
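A sketch of that heuristic, with illustrative names: `max_its_used` would be the largest iteration count from any `newton_solve!()` call during the previous step, and `max_its_allowed` the solver's iteration limit:

```julia
# Cap the next timestep at its current value if nonlinear solves are
# struggling; decreases requested by other checks are still permitted.
function limit_dt_increase(dt_next, dt_current, max_its_used, max_its_allowed)
    if max_its_used > max_its_allowed ÷ 2
        return min(dt_next, dt_current)
    end
    return dt_next
end
```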
... as long as there is no vpa diffusion active.
The preconditioner does not seem to work at the moment, but
unpreconditioned advance does. Needs more work in future if we want to
use this option.
A test run using implicit vpa advection fails with `epsilon=1.0e-8`, but runs with `epsilon=1.0e-6`, so increase the hard-coded value to 1.0e-6. Maybe at some point this should be a user-settable parameter.
Decreasing the step size scale factor below 1.0e-2 rarely results in a
decreasing residual if one has not already been found, so set 1.0e-2 as
the minimum to avoid wasted residual calculations.
...these were only introduced for debugging.
The arguments had not been fully updated when ion and neutral ionization
collisions were split into separate functions.
This will allow us to apply boundary conditions and constraints to the
low-order solution, which is easier than applying boundary conditions to
the error.
...before calculating the timestep error estimate. This ensures that we do not get spurious large errors at grid points that are actually set by the boundary conditions.
MINPACK.jl is broken on macOS (possibly just on ARM?), so skip the
nonlinear solver tests on macOS.
Ensures implicitly advanced quantities always get updated even when
timestep is very small. Do allow continuing without doing an iteration
if the residual is very small, to avoid creating NaNs.
Now that `distributed_norm()`, etc. include error tolerances, vectors normalised to '1.0' are actually very small, so instead of doing `x + epsilon * v` for a small `epsilon`, we should do `x + Jv_scale_factor * v` for a large-ish `Jv_scale_factor`. Otherwise `x + epsilon * v` would be so close to `x` that rounding errors would be large relative to the estimate of `J.v`, which would prevent the Newton solver from converging.
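The scaling issue above, as a sketch: when the entries of `v` are tiny in absolute terms (unit norm under the tolerance-weighted norm), a large-ish scale factor keeps `x + s*v` distinguishable from `x` while `s*v` remains a small perturbation, so the finite difference is still a good estimate of `J.v`. Names and the default value are illustrative, not the actual code:

```julia
# Matrix-free J*v with a large-ish perturbation scale factor.
function approx_Jv(F, x, v; Jv_scale_factor=1.0e3)
    s = Jv_scale_factor
    return (F(x .+ s .* v) .- F(x)) ./ s
end
```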
@johnomotani johnomotani merged commit 352297f into master Jul 30, 2024
16 of 17 checks passed
@johnomotani johnomotani deleted the imex branch July 30, 2024 21:32