
Commit 26403fa

revert torch.compile flex attn for now

1 parent fb5e157

File tree

1 file changed: +3 −1 lines

  • src/fairseq2/models/transformer/_sdpa/_flex.py

src/fairseq2/models/transformer/_sdpa/_flex.py

Lines changed: 3 additions & 1 deletion
@@ -26,7 +26,9 @@
 
 MaskFunction: TypeAlias = Callable[[Tensor, Tensor, Tensor, Tensor], Tensor]
 
-flex_attention = torch.compile(flex_attention, dynamic=False)
+# TODO: Hitting some torch.compile issues with this enabled for different builds.
+# Commenting out for now until we can investigate.
+# flex_attention = torch.compile(flex_attention, dynamic=False)
 
 
 @final
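
For context, here is a minimal, self-contained sketch of the pattern the reverted line enabled: wrapping torch.nn.attention.flex_attention in torch.compile once at module scope. This is not fairseq2 code; it assumes a recent PyTorch (2.5+) where flex_attention is available, and the causal_mask helper and tensor shapes are illustrative. The mask function takes four index tensors and returns a boolean tensor, matching the MaskFunction alias in the diff above.

import torch
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

# This module-level compile is what the commit disables; with it reverted,
# flex_attention runs eagerly (slower, but avoids the build-specific
# torch.compile issues mentioned in the TODO).
# flex_attention = torch.compile(flex_attention, dynamic=False)

def causal_mask(b, h, q_idx, kv_idx):
    # Matches the MaskFunction alias: four index tensors in, boolean tensor out.
    return q_idx >= kv_idx

B, H, S, D = 2, 4, 128, 64  # batch, heads, sequence length, head dim (illustrative)
q = torch.randn(B, H, S, D)
k = torch.randn(B, H, S, D)
v = torch.randn(B, H, S, D)

# Precompute a block-sparse mask from the mask function; device can be "cuda".
block_mask = create_block_mask(causal_mask, B=B, H=H, Q_LEN=S, KV_LEN=S, device="cpu")
out = flex_attention(q, k, v, block_mask=block_mask)

The compiled wrapper is normally where flex_attention gets its fused-kernel performance; calling it eagerly, as the revert does, trades speed for stability until the compile issues are investigated.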
