Hi, I'd like to use Grounded SAM to find the 'main' object in an image, to assist with automated cropping, i.e. to detect the foreground and then generate the coordinates of a single bounding box to use for the crop.
Is there a way to interpret the detected objects returned by Grounded SAM so that I can automate this process and make a best guess at the 'main' foreground object? Or, alternatively, could all detected objects be combined to define the foreground?
A single foreground object can be detected with other models such as InSPyReNet, but I already have Grounded SAM implemented, so I would like to use it if possible.
Thanks!
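For context, here is the kind of heuristic I have in mind, as a minimal sketch. It assumes the detections come back as `(x1, y1, x2, y2)` boxes with confidence scores (the function names and the `min_score` threshold are illustrative, not part of the Grounded SAM API): either pick the single box with the largest score-weighted area, or take the union of all confident boxes as a combined foreground region.

```python
def box_area(box):
    """Area of an (x1, y1, x2, y2) box; clamps degenerate boxes to 0."""
    x1, y1, x2, y2 = box
    return max(0.0, x2 - x1) * max(0.0, y2 - y1)

def main_box(boxes, scores, min_score=0.3):
    """Best guess at the 'main' object: largest score-weighted area."""
    kept = [(b, s) for b, s in zip(boxes, scores) if s >= min_score]
    if not kept:
        return None
    return max(kept, key=lambda bs: box_area(bs[0]) * bs[1])[0]

def union_box(boxes, scores, min_score=0.3):
    """Alternative: merge all confident detections into one crop box."""
    kept = [b for b, s in zip(boxes, scores) if s >= min_score]
    if not kept:
        return None
    return (min(b[0] for b in kept), min(b[1] for b in kept),
            max(b[2] for b in kept), max(b[3] for b in kept))

# Example with dummy detections: a large confident box and a small one.
boxes = [(10, 10, 200, 200), (50, 50, 60, 60)]
scores = [0.8, 0.9]
print(main_box(boxes, scores))   # the large box wins on score * area
print(union_box(boxes, scores))  # union of both confident boxes
```

Something along these lines would work for simple scenes, but I'm unsure how robust it is when the background also produces detections, hence the question.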