feat(api, robot-server): Add source key to the data_files database to support generated CSV files from the plate reader + other plate reader fixes. #16603
Conversation
…er is holding somewhere on the deck.
…rrentState endpoint.
…al to handle plate reader lid.
…show the remote-controlled takeover modal when we are automatically moving the plate reader lid
…ialized before calling close_lid.
… we can denote uploaded and generated csv data files.
Looks solid overall to me; the migrations work seems to have sensible coverage as well. I had a question/proposal below regarding the current scope of uploaded/generated sources, but other than that this should be good.
@@ -5,7 +5,7 @@
from ..errors import StorageLimitReachedError


-MAXIMUM_CSV_FILE_LIMIT = 40
+MAXIMUM_CSV_FILE_LIMIT = 400
This should be fine, but just as an aside we're using this same limit within an actual protocol run as well so someone could theoretically do 400 file writes during a single protocol. Not necessarily a problem, just means they'll use up all their write limit in one go.
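As a concrete illustration, here is a minimal, hypothetical sketch of what a per-source limit check could look like. `MAXIMUM_CSV_FILE_LIMIT`, `StorageLimitReachedError`, and `DataFileSource` appear in this PR; the store method shape and the error wording are assumptions.

```python
# Hypothetical sketch of the per-source limit check (imports of this PR's own
# MAXIMUM_CSV_FILE_LIMIT, StorageLimitReachedError, and DataFileSource are
# omitted; the store interface and message are assumptions).
def check_data_file_limit(data_files_store, source) -> None:
    existing_files = data_files_store.get_usage_info(source)
    if len(existing_files) >= MAXIMUM_CSV_FILE_LIMIT:
        raise StorageLimitReachedError(
            f"Not allowed to store more than {MAXIMUM_CSV_FILE_LIMIT} data files"
            f" with source '{source.value}'."
        )
```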
data_file_usage_info = self._data_files_store.get_usage_info(
    DataFileSource.UPLOADED
)
Very nice, makes sense to separate our deletion methodology like this.
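To make that concrete, a hypothetical sketch of what source-scoped deletion could look like; every attribute and helper name below is an assumption rather than this PR's actual code.

```python
# Hypothetical sketch: only user-uploaded files are considered when making
# room for a new upload, so protocol-generated CSVs are never evicted to
# satisfy an upload. Attribute and method names here are assumptions, and
# imports of the PR's own DataFileSource and MAXIMUM_CSV_FILE_LIMIT are omitted.
def make_room_for_new_upload(data_files_store, deleter) -> None:
    uploaded_usage = data_files_store.get_usage_info(DataFileSource.UPLOADED)
    if len(uploaded_usage) < MAXIMUM_CSV_FILE_LIMIT:
        return  # still under the uploaded-file limit, nothing to delete
    # Oldest-first eviction of uploads not referenced by any run or analysis.
    deletable = [
        info.file_id for info in uploaded_usage if not info.used_by_run_or_analysis
    ]
    if deletable:
        deleter.delete(deletable[0])
```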
"""The source this data file is from.""" | ||
|
||
UPLOADED = "uploaded" | ||
GENERATED = "generated" |
Generated should be fine in general, but looking forward, do we want to categorize these as something like "generated_csv" or, more specifically, "plate_reader_csv"? Would there be a reason in the future to want to tell the difference between types of generated/protocol output files via the file source field?
I think that is a possibility. I wouldn't mind either way. If you keep it like this for v8.2, changing it to the other way is an easy and cheap migration to do later, if we need to.
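For a sense of how small both migrations are, here is a hypothetical SQLAlchemy sketch, not the actual robot-server migration code: the v7-style step adds and backfills the `source` column, and the later rename would be a single UPDATE. Only `source`, `data_files`, and the enum values come from this PR; the function names, the backfill choice, and the "plate_reader_csv" rename are assumptions.

```python
# Hypothetical sketch of both data migrations; names and defaults are assumptions.
import sqlalchemy


def add_source_column(engine: sqlalchemy.engine.Engine) -> None:
    """v7-style step: add the `source` column and backfill existing rows."""
    with engine.begin() as connection:
        connection.execute(
            sqlalchemy.text("ALTER TABLE data_files ADD COLUMN source VARCHAR")
        )
        # Files that existed before this migration were all user uploads.
        connection.execute(
            sqlalchemy.text("UPDATE data_files SET source = 'uploaded'")
        )


def rename_generated_source(engine: sqlalchemy.engine.Engine) -> None:
    """The later, cheap migration if we ever want a more specific value."""
    with engine.begin() as connection:
        connection.execute(
            sqlalchemy.text(
                "UPDATE data_files SET source = 'plate_reader_csv' "
                "WHERE source = 'generated'"
            )
        )
```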
Reviewed the SQL bits, which all look good to me. Thanks!
I'll say though that taking a few steps back, I don't see a fundamental reason why generated and uploaded files couldn't be in one pool with a large upper limit, like in the hundreds or thousands. And I think we should try to steer ourselves towards that direction—one good fast persistence strategy—instead of adding increasingly granular "kinds" of things (quick transfer runs+protocols instead of normal ones, generated files instead of uploaded ones, ...) for the sake of detouring around existing limits.
"""The source this data file is from.""" | ||
|
||
UPLOADED = "uploaded" | ||
GENERATED = "generated" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think that is a possibility. I wouldn't mind either way. If you keep it like this for v8.2, changing it to the other way is an easy and cheap migration to do later, if we need to.
While I do agree with not adding different kinds of objects, in this case the
Overview
The Plate Reader generates absorbance readings that can optionally be saved as a CSV file on the robot with the `read(export_filename='filename.csv')` PAPI command. This generated data needs to be stored on the robot so the client can download it later for post-processing or re-upload it as an RTP CSV for a different protocol. Unlike RTP CSVs, which are uploaded by a user from the client, generated readings are produced by the robot. Because of this, we need to keep track of the source these data files originate from so we can limit their count independently of RTP CSVs. A typical absorbance protocol makes about 40 readings per run, so we could hit our maximum number of data files in one run. This pull request adds a `source` key to the `data_files` table to keep track of the origin of the data files, changes to support the new functionality, and other fixes relating to the plate reader.

Closes: PLAT-574, PLAT-575
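For reference, a rough sketch of the protocol flow this supports. `close_lid`, `initialize`, `read(export_filename=...)`, and the `CannotPerformModuleAction` behavior come from this PR; the module load name, deck slot, API level, mode, and wavelength are assumptions.

```python
from opentrons import protocol_api

metadata = {"protocolName": "Plate reader CSV export sketch"}
requirements = {"robotType": "Flex", "apiLevel": "2.21"}


def run(protocol: protocol_api.ProtocolContext) -> None:
    # Load name, slot, mode, and wavelength below are illustrative assumptions.
    reader = protocol.load_module("absorbanceReaderV1", "D3")

    # The lid must be closed before initializing; with this PR, calling
    # initialize() first fails analysis with CannotPerformModuleAction.
    reader.close_lid()
    reader.initialize("single", [450])

    # ... open the lid, move the sample plate onto the reader, close the lid ...

    # Each read can optionally be exported as a CSV on the robot; these files
    # land in data_files with source="generated" rather than "uploaded".
    reader.read(export_filename="results.csv")
```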
Test Plan and Hands on Testing
- Run a protocol that doesn't call `close_lid` before `initialize` and make sure we raise `CannotPerformModuleAction`.
- Verify that we add the `source` column to the `data_files` table in the database in the v7 migration.

Changelog
- Raise a `CannotPerformModuleAction` analysis error if `initialize` is called before `close_lid`.
- Add a `source` column to the `data_files` table in the database to keep track of the origin of the data file.
- Use the `source` key to determine the number of generated files stored on the bot and prevent file creation past the limit.

Review requests
Risk assessment