
docs: Clarify descriptions for mask_labels in Mask2Former #35514

Open
canmike wants to merge 6 commits into main
Conversation


@canmike commented Jan 5, 2025

This PR improves the documentation for `class_labels` in the Mask2Former model, clarifying the description so that the expected shape of the `class_labels` parameter is stated correctly.

@canmike (Author) commented Jan 5, 2025

@stevhliu, could you kindly review this PR when you get a chance? Thank you!

@qubvel (Member) left a comment


Thanks for the update! Can you update MaskFormer and OneFormer accordingly for consistency?

@@ -2393,8 +2393,7 @@ def forward(
         mask_labels (`List[torch.Tensor]`, *optional*):
             List of mask labels of shape `(num_labels, height, width)` to be fed to a model
         class_labels (`List[torch.LongTensor]`, *optional*):
-            list of target class labels of shape `(num_labels, height, width)` to be fed to a model. They identify the
-            labels of `mask_labels`, e.g. the label of `mask_labels[i][j]` if `class_labels[i][j]`.
+            List of class labels of shape `(labels)` to be fed to a model.

Suggested change
-            List of class labels of shape `(labels)` to be fed to a model.
+            List of class labels of shape `(num_labels,)` to be fed to a model.
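
For context, here is a minimal sketch of how the two arguments relate and how they are fed to the model. The concrete values (`num_labels=10`, the image size, the per-image instance counts, and the randomly initialized config) are illustrative assumptions, not taken from this PR:

```python
import torch
from transformers import Mask2FormerConfig, Mask2FormerForUniversalSegmentation

# Documented shapes, per image i in the batch (illustrative sketch):
#   mask_labels[i]  -> (num_labels_i, height, width)  one binary mask per annotated instance
#   class_labels[i] -> (num_labels_i,)                one class id per mask,
#                      so class_labels[i][j] is the label of mask_labels[i][j]
height, width = 128, 128

mask_labels = [
    torch.randint(0, 2, (3, height, width)).float(),  # image 0: 3 instances
    torch.randint(0, 2, (5, height, width)).float(),  # image 1: 5 instances
]
class_labels = [
    torch.randint(0, 10, (3,)),  # image 0: class ids for the 3 masks
    torch.randint(0, 10, (5,)),  # image 1: class ids for the 5 masks
]

# Randomly initialized model, only to show how the labels are passed to forward().
model = Mask2FormerForUniversalSegmentation(Mask2FormerConfig(num_labels=10))
pixel_values = torch.rand(2, 3, height, width)

outputs = model(pixel_values=pixel_values, mask_labels=mask_labels, class_labels=class_labels)
print(outputs.loss)  # training loss computed from the mask/class targets
```

Each image can carry a different number of annotated instances, which is why both arguments are Python lists of per-image tensors rather than stacked batch tensors.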
