
RuntimeError with SegFormer and multilabel FocalLoss #1163


Closed
simonreise opened this issue May 27, 2025 · 1 comment · Fixed by #1174

@simonreise (Contributor)

When I train SMP SegFormer with a multilabel FocalLoss, this RuntimeError appears:

     58 if self.mode in {BINARY_MODE, MULTILABEL_MODE}:
     59     y_true = y_true.view(-1)
---> 60     y_pred = y_pred.view(-1)
     62     if self.ignore_index is not None:
     63         # Filter predictions with ignore label from loss computation
     64         not_ignored = y_true != self.ignore_index

RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
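The failure mode can be reproduced outside SMP. A minimal sketch, assuming only PyTorch (tensor names and shapes here are illustrative, not taken from the library):

```python
import torch

# A permuted tensor is non-contiguous: its strides no longer describe a
# flat row-major layout, so .view(-1) raises the same RuntimeError, while
# .reshape(-1) falls back to a copy and succeeds.
y_pred = torch.randn(2, 3, 4).permute(0, 2, 1)  # non-contiguous layout
assert not y_pred.is_contiguous()

try:
    y_pred.view(-1)
except RuntimeError as e:
    print("view failed:", e)

flat = y_pred.reshape(-1)  # succeeds: copies because the input is non-contiguous
print(flat.shape)  # torch.Size([24])
```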

It does not appear in multiclass mode.

I checked my y_pred and y_true shapes, and both are:

torch.Size([16, 14, 256, 256])
torch.Size([16, 14, 256, 256])

After changing

 loss = self.loss(inputs, labels)

to

 loss = self.loss(inputs.contiguous(), labels.contiguous())

the error disappeared.
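The workaround can be verified in isolation; a sketch, again assuming only PyTorch:

```python
import torch

# .contiguous() materializes the data in a standard row-major layout
# (a no-op if the tensor is already contiguous), after which .view(-1)
# is valid.
y = torch.randn(2, 3, 4).permute(0, 2, 1)  # non-contiguous
flat = y.contiguous().view(-1)             # no RuntimeError after the copy
print(flat.shape)  # torch.Size([24])
```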

Maybe it is related to #998, where .contiguous() was removed from the SegFormer decoder.

Since the mode-handling logic is usually the same in every loss, this error may appear in other losses as well, and in binary mode too.

Maybe replacing view with reshape would be a good fix? Or would it slow computation down, like .contiguous() did?

@qubvel (Collaborator) commented Jun 4, 2025

Hey @simonreise, thanks for reporting!
I suppose we can replace it with reshape - it should not be a big deal in terms of slowing down.

Let me know if you have bandwidth to submit the fix 🤗
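The "reshape should not slow things down" claim can be sanity-checked directly: for contiguous input (the common case), torch's reshape returns a view sharing the same storage, so no copy occurs. A sketch:

```python
import torch

# For contiguous input, reshape shares storage with the original (no copy);
# a copy is made only when the input is non-contiguous, i.e. exactly the
# case where view would have failed anyway.
x = torch.randn(16, 14, 256, 256)           # contiguous, like typical logits
flat = x.reshape(-1)
print(flat.data_ptr() == x.data_ptr())      # True: same underlying storage

nc = x.permute(0, 2, 3, 1)                  # non-contiguous view of x
flat_nc = nc.reshape(-1)
print(flat_nc.data_ptr() == nc.data_ptr())  # False: reshape had to copy
```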
