Features/operational functionality #212
base: main
Conversation
Fixed a bug where find_positions unintentionally returned a tuple. The original implementation didn't work as expected because the tuple comma has lower precedence than a conditional (if-else) expression. Returning different output signatures from the same function is dangerous, so find_positions is updated to always return positions.
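A minimal sketch of the precedence pitfall described above (the function bodies are hypothetical, not the actual pypromice code): because the comma binds more loosely than a conditional expression, `return a, b if cond else c` parses as `(a, (b if cond else c))` and always produces a tuple.

```python
def find_positions_buggy(text, char):
    """Intended to return (first, last) when found, else None."""
    first = text.find(char)
    last = text.rfind(char)
    # BUG: parses as (first, (last if first != -1 else None)),
    # so a tuple is returned even when the character is absent.
    return first, last if first != -1 else None

def find_positions(text, char):
    """Fixed: always return a list of positions (consistent signature)."""
    return [i for i, c in enumerate(text) if c == char]
```

With the buggy version, `find_positions_buggy("xyz", "a")` returns the tuple `(-1, None)` instead of `None`, which is the kind of signature inconsistency the fix removes.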
Updated get_l0tx to take explicit parameters
* Renamed config path
* Cleaned up white space
Updated get_bufr to take parameters instead of parsing sys.argv
* Use logging instead of print
Updated get_l3 to take parameters instead of parsing sys.argv
Updated join_l3 to take parameters instead of parsing sys.argv
* Cleaned up white space
* Use logger instead of print
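The refactor pattern above can be sketched as follows (the function name and signature are illustrative assumptions, not the actual pypromice API): the core logic takes explicit parameters and logs via the logging module, while a thin wrapper handles sys.argv.

```python
import logging
import sys

logger = logging.getLogger(__name__)

# Before (hypothetical): arguments pulled implicitly from sys.argv and
# reported with print, making the function hard to test or import.
def get_l3_old():
    input_path = sys.argv[1]
    print(f"processing {input_path}")

# After: explicit parameters and a logger; the function can now be
# imported and unit-tested without a command line.
def get_l3(input_path: str, output_dir: str) -> str:
    logger.info("processing %s -> %s", input_path, output_dir)
    return f"{output_dir}/l3_output"

def main():
    # Thin CLI wrapper keeps argv parsing out of the core function.
    get_l3(sys.argv[1], sys.argv[2])
```

This separation is what makes the later point about CLI scripts interfering with unit testing easier to address: tests call `get_l3` directly.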
Added AWSOperational class for managing states and local configuration for AWS operational functionalities
* Implemented azure, aws, and glacio01 operational functionalities as functions
Added functionality for managing operational states and selecting stids to update
* Added operational pipeline steps
* Added io functionality for operational status
* Infer initial aws content states from the l0 git repository
* Implemented file-system-based tx state query
Added utility functions for executing steps in parallel
* Added class for mapping kwargs parameters
* Implemented monadic-styled context handling to keep stid references
* Support for limiting the number of parallel processes
Added modules implementing subscript functionalities from the aws-operational-processing repository
* fileshare: handling file export and the subfolder structure used for our thredds fileshare
* Sync files to aws-l3 and aws-l3-flat
* git_repository_utils: helper functions for managing data git repositories
* bufr_upload: ftp functionalities for uploading bufr files
Other
* Updated setup.py to add variables.csv to package_data to ensure the data file is included in the distributed package
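For the setup.py change mentioned above, a hypothetical fragment of what the package_data entry could look like (the exact package path and file location are assumptions):

```python
# Hypothetical setup.py fragment: declare the CSV as package data so it
# ships inside built wheels and sdists rather than being silently dropped.
package_data = {
    "pypromice": ["resources/variables.csv"],
}
# This dict would be passed to setuptools.setup(..., package_data=package_data).
```

Without such an entry, setuptools only packages Python modules by default, and the data file would be missing after `pip install`.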
configurations/maclu_laptop.toml (outdated)
Maybe let's include an example file here, rather than your actual set-up. I don't think we should have the [aws] info publicly here, for example.
That's a good point about the aws and also the DMI info.
I'm planning to remove all paths and data related to my local environment. It was just a lazy way for me to make an example.
eccodes is not a hard dependency for public users because:
- It can be trickier to install, and it slows down the package installation (especially for conda)
- It's only used for the bufr file generation, so not useful to most users.
So maybe let's take eccodes out here, and say that it is an optional dependency?
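One common way to express "optional dependency" in setuptools, sketched here as a hypothetical fragment (the extra's name "bufr" is an assumption, not an agreed decision from this thread):

```python
# Hypothetical: expose eccodes as an optional extra instead of a hard
# dependency, so most users never install it.
extras_require = {
    "bufr": ["eccodes"],
}
# This dict would be passed to setuptools.setup(..., extras_require=extras_require);
# users needing BUFR generation would then run:
#   pip install pypromice[bufr]
```

This keeps the default install light while still documenting what BUFR users need.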
Okay, thanks for the info.
I think it is a good idea to include it as an optional dependency.
I like this in general, I just would like to add documentation and unit tests so that it makes sense to others.
We could probably rename this from glacio01 to dev, to better reflect that this workflow is for developmental purposes that do not get uploaded to our operational portal.
Good idea to avoid the server-specific name.
I basically see these scripts as examples of how AWSOperational can be used. I imagine individual users can implement custom scripts using selected methods to solve their processing problems, for example reprocessing some specific stations locally and publishing them.
Perfect!
Not sure about these CLI scripts now. They are mucking up the GitHub unit testing for some reason.
Generally looks good, but I would like to make some changes myself. Should I make changes directly to this branch, or make a branch from here and then open a PR @ladsmund?
I guess it will be easiest if you make your own branch and either open a PR or just tell me to look at your branch.
Force-pushed from df9047b to 3b7f922.
Adding the operational processing functionality from aws-operational-processing to pypromice.
The main script has been completely reimplemented to:
TODO: