Is it possible, in a multiple GPU scenario, to have each available GPU doing a separate trial? So far it seems that using multi_gpu_model is not accelerating our computer vision deep learning model (U-net / Mask RCNN), so having each trial running on a separate GPU could provide us with great speedups, but I've found no information on the matter.
Thank you.
This is something we would have to raise in hyperopt itself. It's not a simple matter, but it is very interesting; it certainly doesn't just happen out of the box.
The simplest path to making this work would be to treat the GPU identifier as a custom hyperparameter that always returns the next value from itertools.cycle(GPU_IDS). From there you'd use mongoworker and make sure there are never more than len(GPU_IDS) concurrent workers.
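A minimal sketch of the round-robin idea, with hyperopt itself left out for brevity: `GPU_IDS`, `assign_gpu`, and `run_trial` are hypothetical names, and the real objective would train the model after pinning the process to one GPU via `CUDA_VISIBLE_DEVICES`.

```python
import itertools
import os

GPU_IDS = [0, 1]  # hypothetical list of available GPU device ids
gpu_cycle = itertools.cycle(GPU_IDS)

def assign_gpu():
    """Return the next GPU id in round-robin order."""
    return next(gpu_cycle)

def run_trial(params):
    # Pin this trial to a single GPU before any CUDA work starts.
    gpu = assign_gpu()
    os.environ["CUDA_VISIBLE_DEVICES"] = str(gpu)
    # ... build and train the model with `params` here ...
    return gpu  # returned only so the round-robin behaviour is visible

# Four trials cycle through the two GPUs.
assignments = [run_trial({}) for _ in range(4)]
print(assignments)  # [0, 1, 0, 1]
```

With mongoworker-style parallelism, capping the worker count at `len(GPU_IDS)` keeps each GPU serving at most one trial at a time.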