Cross species collisions #200
Conversation
… another, assuming that the physical meaning of the coordinate values change, but not the normalised values themselves. We assume that the pdf goes to zero by the end of the "natural" normalised grid, and thus can extrapolate the pdf with zeros into the region beyond which data exists.
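The zero-extrapolation assumption described above can be sketched in 1D (a hedged Python illustration, not the project's code; `interp_with_zero_extrapolation` is an invented name):

```python
import numpy as np

def interp_with_zero_extrapolation(x_out, x_in, f_in):
    """Interpolate f_in (sampled on x_in) onto x_out, returning 0.0 for any
    output point beyond the ends of x_in, where the pdf is assumed to vanish.
    Hypothetical 1D analogue of the behaviour described above."""
    return np.interp(x_out, x_in, f_in, left=0.0, right=0.0)

# points outside [0, 2] are filled with zeros rather than extrapolated
x_in = np.array([0.0, 1.0, 2.0])
f_in = np.array([0.0, 1.0, 0.0])
out = interp_with_zero_extrapolation(np.array([-1.0, 0.5, 3.0]), x_in, f_in)
# out -> [0.0, 0.5, 0.0]
```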
…rectly to reference timescale.
… or beam ion collisions with the existing self collision operator and a Maxwellian (slow) ion and electron species. The special function is used because i) electrons are not available in the branch, and ii) we do not wish the slow ions or the electrons to evolve. Noting this, we can construct the function so that only two matrix assembly steps are carried out, and only a single set of elliptic solves is needed (for the self collision operator). This means that this test operator is only a factor of two slower than the existing self-collision operator. Input parameters for the unevolved species are hard-coded, but should later be made input parameters in the fokker planck input TOML namelist. Changes towards this are made, but currently they are incomplete.
@mrhardman I can't really tell (from what I know so far) how much editing elsewhere in the code it would take to change the
This is the kind of thing that I would like to do. It would not be too much effort, but the changes might touch several files. These changes are not directly related to my cross-species collision operator, so they could go in a separate PR focussing on changing the existing input options. @johnomotani do you agree with the idea? Should I make a separate PR for refactoring the existing inputs?
The input restructuring sounds like a good idea, and I'm very in favour of doing that kind of preparatory change in a small PR that we can merge early to avoid conflicts later.
Now in progress in #202. You may object to some of the input renaming, but I hope that we can find a way where we are all happy : )
…c beam initial conditions.
…erator, using the linearity of the collision operator in Fsp to sum the Rosenbluth potential contributions into a single array that can be passed to the assembly routine once. The collision operator should now have the correct O(1) collision frequencies for the cross collisions in normalised units.
…d testability of formulation. We envisage that there should be two operators - the self collision operator already implemented and a cross collision operator that sums over all the Rosenbluth potentials of other species in the system. This is because self collisions should include the numerical conserving terms for n, u and T whereas cross collisions can only be forced to conserve n.
…lisions on fixed Maxwellian background distribution functions.
…s with slowing down problems.
…at the code still only handles a single ion species, so normalisation with respect to charge number may have to be correctly stated later if variable charge numbers are added.
… to model injection of particles around specific vpa0, vperp0, with a Gaussian shape function. In the "beam-with-losses" option, a sink is added to permit steady state solutions in a 0D2V simulation.
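A hedged sketch of the kind of Gaussian shape function described above (names, defaults, and normalisation here are invented for illustration, not the code's actual input parameters):

```python
import numpy as np

def beam_source(vpa, vperp, vpa0=1.0, vperp0=0.5, width=0.25, amplitude=1.0):
    """Hypothetical Gaussian shape function for particle injection centred
    on (vpa0, vperp0) in velocity space."""
    return amplitude * np.exp(-((vpa - vpa0)**2 + (vperp - vperp0)**2) / width**2)
```

In the "beam-with-losses" option a sink term would then balance this source, so a 0D2V simulation can reach a steady state instead of accumulating density indefinitely.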
…he fokker-planck collision operator only. This adds Zi to the fokker planck input options. In the future, Zi could be inputted differently, but it will still be needed by the collision operator routines for calculating the correct collision frequency prefactor for each of the self and inter-species collision terms.
FYI, the test EDIT: This likely has to do with a poor choice for the default Zi for collisions.
Edit: sorry, you've fixed it already!
I'm now happy for this to be merged (or to have a longer discussion about the content), as I have all the features that I need for the short term.
Looks good to me. @mrhardman just one comment you could look at if you have any time to look again at the interpolation, but we could also just copy it to an issue and leave it for another day.
```julia
function interpolate_2D_vspace!(pdf_out, pdf_in, vpa, vperp, scalefac)

    begin_anyv_vperp_vpa_region()
    # loop over points in the output interpolated dataset
    @loop_vperp ivperp begin
        vperp_val = vperp.grid[ivperp]*scalefac
        # get element for interpolation data
        iel_vperp = ielement_loopup(vperp_val, vperp)
        if iel_vperp < 1 # vperp_val outside of range of vperp.grid
            @loop_vpa ivpa begin
                pdf_out[ivpa,ivperp] = 0.0
            end
            continue
        else
            # get nodes for interpolation
            ivperpmin, ivperpmax = vperp.igrid_full[1,iel_vperp], vperp.igrid_full[vperp.ngrid,iel_vperp]
            vperp_nodes = vperp.grid[ivperpmin:ivperpmax]
            #print("vperp: ",iel_vperp, " ", vperp_nodes," ",vperp_val)
        end
        @loop_vpa ivpa begin
            vpa_val = vpa.grid[ivpa]*scalefac
            # get element for interpolation data
            iel_vpa = ielement_loopup(vpa_val, vpa)
            if iel_vpa < 1 # vpa_val outside of range of vpa.grid
                pdf_out[ivpa,ivperp] = 0.0
                continue
            else
                # get nodes for interpolation
                ivpamin, ivpamax = vpa.igrid_full[1,iel_vpa], vpa.igrid_full[vpa.ngrid,iel_vpa]
                vpa_nodes = vpa.grid[ivpamin:ivpamax]
                #print("vpa: ", iel_vpa, " ", vpa_nodes," ",vpa_val)

                # do the interpolation
                pdf_out[ivpa,ivperp] = 0.0
                for ivperpgrid in 1:vperp.ngrid
                    # index for referencing pdf_in on original grid
                    ivperpp = vperp.igrid_full[ivperpgrid,iel_vperp]
                    # interpolating polynomial value at ivperpp
                    vperppoly = lagrange_poly(ivperpgrid, vperp_nodes, vperp_val)
                    for ivpagrid in 1:vpa.ngrid
                        # index for referencing pdf_in on original grid
                        ivpap = vpa.igrid_full[ivpagrid,iel_vpa]
                        # interpolating polynomial value at ivpap
                        vpapoly = lagrange_poly(ivpagrid, vpa_nodes, vpa_val)
                        pdf_out[ivpa,ivperp] += vpapoly*vperppoly*pdf_in[ivpap,ivperpp]
                    end
                end
            end
        end
    end
    return nothing
end
```
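The tensor-product evaluation at the heart of the routine can be illustrated with a minimal single-element sketch in Python (all names here are invented for illustration; the real routine works element-by-element via `vpa.igrid_full`/`vperp.igrid_full`):

```python
import numpy as np

def lagrange_poly(j, nodes, x):
    """Value at x of the Lagrange basis polynomial that equals 1 at nodes[j]
    and 0 at every other node (same role as lagrange_poly in the Julia code)."""
    p = 1.0
    for k in range(len(nodes)):
        if k != j:
            p *= (x - nodes[k]) / (nodes[j] - nodes[k])
    return p

def interpolate_2d(pdf_in, vpa_nodes, vperp_nodes, vpa_val, vperp_val):
    """Tensor-product Lagrange interpolation of pdf_in[j, i] (values on
    vpa_nodes[j] x vperp_nodes[i]) at the point (vpa_val, vperp_val)."""
    result = 0.0
    for i in range(len(vperp_nodes)):
        vperppoly = lagrange_poly(i, vperp_nodes, vperp_val)
        for j in range(len(vpa_nodes)):
            vpapoly = lagrange_poly(j, vpa_nodes, vpa_val)
            result += vpapoly * vperppoly * pdf_in[j, i]
    return result

# sanity check: the interpolant reproduces a polynomial of matching degree
vpa_nodes = np.array([-1.0, 0.0, 1.0])
vperp_nodes = np.array([0.0, 0.5, 1.0])
pdf_in = np.array([[x**2 + y for y in vperp_nodes] for x in vpa_nodes])
value = interpolate_2d(pdf_in, vpa_nodes, vperp_nodes, 0.3, 0.25)
# value = 0.3**2 + 0.25 = 0.34 to round-off
```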
If this interpolation is a significant part of the run-time - so worth optimising - it might be worth thinking about the following.
I think (although I haven't checked carefully, so there could well be mistakes in the following!) two 1D interpolations should be significantly cheaper than a 2D interpolation, and are probably exactly equivalent (apart from round-off errors). The cost of one interpolation is something like `nout * npoly`, where `nout` is the number of grid points of the output and `npoly` is the number of coefficients in the interpolating polynomial (from the input) that have to be evaluated.
So the cost for two 1D interpolations is:

nvperp * (interpolation in vpa) + nvpa * (interpolation in vperp)
= nvperp * nvpa * vpa_ngrid + nvpa * nvperp * vperp_ngrid

while the cost for the 2D interpolation is

nvperp * nvpa * (vpa_ngrid * vperp_ngrid)

so the 2D cost is larger than that of two 1D interpolations by a factor of `vpa_ngrid * vperp_ngrid / (vpa_ngrid + vperp_ngrid)`, which for `vpa_ngrid = vperp_ngrid = 5` would be a factor of 2.5, or 4.5 for `vpa_ngrid = vperp_ngrid = 9`.
Doing the operation in terms of 1D interpolations would also make me happy because it would mean that you could implement `interpolate_to_grid_1d!` for `gauss_legendre`, and it would simplify this function (although you do need a funny-sized intermediate buffer array, which might be a bit of a pain...) and also enable using `gauss_legendre` with the interpolation everywhere else in moment_kinetics 😃
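The operation counts in the comment above can be tallied with a throwaway sketch (grid sizes are arbitrary placeholders):

```python
def cost_2d(nvpa, nvperp, vpa_ngrid, vperp_ngrid):
    # one full tensor-product evaluation per output point
    return nvpa * nvperp * vpa_ngrid * vperp_ngrid

def cost_two_1d(nvpa, nvperp, vpa_ngrid, vperp_ngrid):
    # interpolate in vpa for every vperp row, then in vperp for every vpa column
    return nvperp * nvpa * vpa_ngrid + nvpa * nvperp * vperp_ngrid

ratio5 = cost_2d(64, 32, 5, 5) / cost_two_1d(64, 32, 5, 5)  # -> 2.5
ratio9 = cost_2d(64, 32, 9, 9) / cost_two_1d(64, 32, 9, 9)  # -> 4.5
```

The ratio is independent of the output grid sizes, matching the quoted factor `vpa_ngrid * vperp_ngrid / (vpa_ngrid + vperp_ngrid)`.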
Currently the 2D interpolation is not used for any run-time process. This functionality was added to prepare for the Rosenbluth potential solve for when s' species have arbitrary non-Maxwellian distributions. However, I did not finish that feature yet because I did not have a definite normalisation scheme for the multi-species moment kinetics. If you can provide a definite set of normalisations, I can finish this feature.
I do not think that two 1D interpolations are equivalent to a single 2D interpolation unless the function is exactly represented by a polynomial (and the sampling is of the right order). The error will be of the order of the discretisation error. Perhaps you imagine that the operations are all machine-precision accurate? I do not think that is realistic for the functions that we are trying to solve for here, with the affordable resolutions available.
The reason that I went for the 2D interpolation is that we are using the outer product representation

F(vpa, vperp) ≈ Σ_{i,j} F_{ij} l_i(vpa) l_j(vperp),

where the 2D interpolation is obtained by sending (vpa, vperp) → (scalefac * vpa, scalefac * vperp). If we use 2 1D interpolations, I think you are arguing for first obtaining the set of points F(scalefac * vpa_k, vperp_j) by interpolating in vpa alone, and then interpolating those values in vperp.
@mrhardman, yes what you describe is what I was suggesting.
The '2D interpolation' is, like you said, evaluating the 2D polynomial interpolating function, defined on the original grid, at some new points, in particular at the scaled points (scalefac * vpa, scalefac * vperp).
Anyway if the interpolating is only done during initialisation, this is not important. I guess I imagine that eventually, at least for a moment-kinetic case, we would need to interpolate between the grids for different species at each time step, and then the performance of the interpolation might become more of an issue.
It sounds like there is a premise to test there -- we can do the check on the two different methods in a further PR (or reopen this branch & PR) and see what the accuracy and performance is like. For now this interpolation routine is only used in the testing module. The interpolation routine in the runtime collision operator would have to be used every time step regardless of whether we were moment kinetic or not (we have to keep moving Fi -> the grid of Fe and vice versa).
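One version of the check discussed above, for the simplest case of a single-element tensor-product Lagrange interpolant (a sketch only; whether the agreement carries over across element boundaries and realistic resolutions is exactly what would need testing in that further PR):

```python
import numpy as np

def lagrange_poly(j, nodes, x):
    """Lagrange basis polynomial equal to 1 at nodes[j], 0 at other nodes."""
    p = 1.0
    for k in range(len(nodes)):
        if k != j:
            p *= (x - nodes[k]) / (nodes[j] - nodes[k])
    return p

rng = np.random.default_rng(0)
vpa = np.linspace(-1.0, 1.0, 5)
vperp = np.linspace(0.0, 1.0, 5)
f = rng.standard_normal((5, 5))   # f[j, i]: values on vpa[j] x vperp[i]
x, y = 0.37, 0.61                 # target point

# direct evaluation of the 2D tensor-product interpolant
direct = sum(lagrange_poly(j, vpa, x) * lagrange_poly(i, vperp, y) * f[j, i]
             for j in range(5) for i in range(5))

# two sequential 1D interpolations: in vpa first, then in vperp
row = [sum(lagrange_poly(j, vpa, x) * f[j, i] for j in range(5)) for i in range(5)]
sequential = sum(lagrange_poly(i, vperp, y) * row[i] for i in range(5))

difference = abs(direct - sequential)  # round-off level for this single element
```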
We can ignore the failing macOS parallel tests here - the cause is some change in the OpenMPI setup, which I've worked around in #206, so those CI tests should be fixed when we merge.
…ial condition and physical parameters for alpha ion species.
Addition of a test collision operator that consists of the self collision operator plus cross-species collisions with two fixed Maxwellian species, representing slow ions and electrons. Example input files are added. The distribution functions shown in the attached .gif files show the relaxation of the distribution function towards cold ash. Comments on the outstanding tasks below would be especially appreciated, as these have to do with the input parameters for collision operators generally.
(@LucasMontoya4 FYI do not feel pressured to read the code in detail at this point, but ideas for whether or not you are happy to restructure the collisions input struct would be very useful).
Plots of the simulated (fast) ion distribution function: