@srmnitc brought up the issue that the data container is not reloaded when the job is submitted to the queuing system.
He introduced a fix in pyiron/pyiron_atomistics#1339.

Still, the general issue remains: while `job.status` is updated by requesting the status from the database, this does not apply to other objects like the `DataContainer` attached to `job.output`. Previously this was not an issue, because `job.output` accessed the HDF5 file directly; now that the `DataContainer` is used for `job.output`, it becomes more complicated.
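To make that concrete, here is a rough sketch of the workflow in question (the job type, queue name and project path below are placeholders for illustration, not specific pyiron classes or settings):

```python
from pyiron_base import Project

pr = Project("stale_output_demo")        # example project path
job = pr.create.job.SomeJob("demo_job")  # placeholder for any job whose output is a DataContainer
job.server.queue = "my_queue"            # placeholder queue name: run via the queuing system, not locally
job.run()

# ... later, after the queue has finished the job ...
job.status      # looked up in the database, so it does switch to "finished"
job.output      # still the DataContainer cached before submission, i.e. stale
job.from_hdf()  # re-reading the HDF5 file is what currently refreshes job.output
```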
I had toyed on Wednesday with a change that adds a callback to the JobStatus object. So when a GenericJob creates its JobStatus object, it would also add a callback to it that would call from_hdf when the job status becomes finished. A user could then in the notebook do job.refresh_status or just Project.wait_for_jobs and it would regenerate output objects once the job status is updated. I can create a PR later.
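A minimal, self-contained sketch of that callback idea, with invented class and method names rather than pyiron's actual implementation:

```python
from typing import Callable, Optional


class CallbackJobStatus:
    """Toy stand-in for pyiron's JobStatus that fires a callback on completion."""

    def __init__(self, callback: Optional[Callable[[], None]] = None):
        self._value = "initialized"
        self._callback = callback

    @property
    def value(self) -> str:
        return self._value

    @value.setter
    def value(self, new_value: str) -> None:
        old_value, self._value = self._value, new_value
        # Fire the callback only on the transition into "finished".
        if new_value == "finished" and old_value != "finished" and self._callback is not None:
            self._callback()


class ToyJob:
    """Toy job that reloads its output once its status switches to finished."""

    def __init__(self):
        self.output_reloaded = False
        # The job registers the reload hook when it creates its status object,
        # mirroring the idea that GenericJob would register from_hdf as the callback.
        self.status = CallbackJobStatus(callback=self._reload_output)

    def _reload_output(self) -> None:
        # In pyiron this would call from_hdf() to re-read job.output from the HDF5 file.
        self.output_reloaded = True

    def refresh_status(self) -> None:
        # Stand-in for querying the database; pretend the queue reported the job as done.
        self.status.value = "finished"


job = ToyJob()
job.refresh_status()
print(job.status.value, job.output_reloaded)  # finished True
```

With something along these lines, job.refresh_status or Project.wait_for_jobs would only need to update the status value, and the output reload would happen automatically once the job is finished.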