F10.7 driver issue #101
Comments
From Adam: 2024-10-06 20:00:00 265.0 — So the 277 -> 225 gets rejected for being an absolute jump of >35; the 225 is deleted, then 220, 216, etc. for the same reason, and we're left with a static value of 277 throughout the forecast period. The WRS parser initializes the same way, which gives that initial 277 value for five minutes I think, and then subsequent updates didn't grab the 6th and 7th as a part of writing new data to the driver file, so that jump limit isn't enforced.
From Adam: The issue isn't with the data. The data matches GFZ. The issue is with the F10 jump limit of 35 that we vetted with the most recent round of upgrades.
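A minimal sketch of the jump-limit behavior Adam describes (the 35-unit threshold is from the email; the function name and single-pass structure are illustrative assumptions, not the operational script):

```python
# Hypothetical sketch: reject the *later* value of any adjacent pair whose
# absolute difference exceeds the jump limit, keeping the earlier value.
JUMP_LIMIT = 35.0

def apply_jump_limit(values):
    """Single pass: drop the second value of an offending pair."""
    kept = []
    for v in values:
        if kept and abs(v - kept[-1]) > JUMP_LIMIT:
            continue  # reject the later value as an absolute jump > 35
        kept.append(v)
    return kept

# A stale 277 followed by correct values near 220: each correct value is
# rejected as a >35 jump from the retained 277, leaving a static 277.
print(apply_jump_limit([277.0, 225.0, 220.0, 216.0, 214.0]))  # -> [277.0]
```

This reproduces the failure mode in the quote: once the bad high value is first in the series, every subsequent correct value is the "second value in the pair" and gets deleted.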
Apologies if this is overly verbose, but I am opting to dump absolutely everything I know on this topic into this email. I understand the forecast office to always use the observed 20UT value (put into the database at 21UT and available to us in an xml file on WCOSS2 just after 21:15UT) unless they need to do a flare adjustment. There's not much value in focusing on the specific dates; the reason why values from a week ago are included before interpolation is just a matter of operational redundancy.

The script determines which dates it is trying to find values for, and then parses the F10 and Kp values into a lookup table for every entry in every xml file available in that span, overwriting the values from the older files with values from newer files. So for 10/14 00Z cycles, it has to search 10/12, 10/13, and 10/14 in order, because e.g. MSIS in IPE needs Kpa values from up to 36h prior to initialization. An xml file issued on 10/12 would include values back to 10/9, since the files include the 3 days prior, the current day, and the 3 days after. There's an additional two-day buffer built in to protect against operational issues we've experienced in the past, or edge cases on timing at the start of a UT day.

The difference between WFS and WRS that produces the constant jumping behavior (WFS runs with artificially high F10; WRS initializes from that, runs forward with correct F10 so density drops over time, then reinitializes high six hours later from a bunk-F10 WFS run) would only express itself once the initially high, jump-limited value drops out of the WRS backward search. The WRS process is to write the first IPE timestep's worth of the driver file normally (thus five minutes of that initial incorrect value) and then allow a wrapper script to append to the driver files (and write lockfiles that allow the nowcast to progress).
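The newest-file-wins lookup table described above can be sketched as follows (file contents and key names here are illustrative assumptions, not the operational script's format):

```python
# Hypothetical sketch: merge per-file entries oldest-to-newest so that a
# newer XML file's values overwrite an older file's values for the same date.
def build_lookup(files_oldest_first):
    """files_oldest_first: list of dicts mapping date -> (F10, Kp)."""
    table = {}
    for entries in files_oldest_first:
        table.update(entries)  # newer file wins on duplicate dates
    return table

# e.g. a file issued 10/12 and a file issued 10/13, each spanning 10/10:
file_1012 = {"2024-10-09": (230.0, 3.0), "2024-10-10": (228.0, 2.7)}
file_1013 = {"2024-10-10": (226.0, 2.3), "2024-10-11": (224.0, 2.0)}
table = build_lookup([file_1012, file_1013])
print(table["2024-10-10"])  # the newer file's value: (226.0, 2.3)
```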
When the wrapper does the backward search described above, it's no longer initializing from 36h back, so it stops seeing that older value above the jump limit ~36h sooner. As a result, there was approximately a five-day stretch where both WFS and WRS F10 were static (and wrong) at 277, a ~36h stretch where WFS was wrong and WRS was right, and then things returned to normal. Even if we were stricter about which dates the script put into its internal database prior to interpolating and writing the values out, this issue would only have expressed and resolved itself sooner.

I made the choice to preferentially delete the second value in the pair, rather than the higher value, for two reasons. First, we would have to make multiple (n-1, I think) passes at the jump-limit algorithm if the higher value were wiped, rather than a single pass. Second, I'm not entirely sure how the interpolation/extrapolation algorithm would behave if there's no prior value to interpolate from. If there's no axis-wise right value for interpolation, it just keeps the prior value. If the left value is missing, I think the script would fail and default to the minimum value of 75, but I'm not entirely sure of that.

It's unfortunate, I guess, that the code did exactly what it was intended to do. If I remember correctly, we tested this against an entire solar cycle of F10 values and identified no issues resembling this. I can look into a different approach that preferentially removes the high value to see how that would work through the same verification period, and I am open to other ideas as well.

Adam
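The alternative Adam mentions, deleting the higher value of an offending pair, could be sketched like this (illustrative only; removing a value creates new adjacencies, so the scan must repeat, up to n-1 passes in the worst case, which is the cost he notes):

```python
# Hypothetical sketch of the "drop the higher value" variant. Unlike the
# single-pass version, this re-scans after each deletion until no adjacent
# pair exceeds the limit.
JUMP_LIMIT = 35.0

def apply_jump_limit_drop_higher(values):
    vals = list(values)
    changed = True
    while changed:
        changed = False
        for i in range(len(vals) - 1):
            if abs(vals[i + 1] - vals[i]) > JUMP_LIMIT:
                # remove whichever value of the offending pair is higher
                vals.pop(i if vals[i] > vals[i + 1] else i + 1)
                changed = True
                break  # indices shifted; restart the scan
    return vals

# The stale 277 is now the value removed, so the correct values survive:
print(apply_jump_limit_drop_higher([277.0, 225.0, 220.0, 216.0]))
# -> [225.0, 220.0, 216.0]
```

Note this variant can delete the leading value, which is exactly the case Adam flags as risky for the downstream interpolation (no left value to interpolate from).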
There were significant jumps in the WRS runs shown in the neutral density Testbed plots. The issue is associated with the way we handle jumps in F10.7 values. Keeping the records of these email exchanges here.