
Why isn't SDPA supported in cuda within candle? #2725

Open
Murad-Awad opened this issue Jan 18, 2025 · 3 comments
@Murad-Awad

Hi all, forgive my ignorance, but my understanding is that SDPA is CUDA-compatible, yet I can't find an implementation of it in candle_nn. I want to run some transformer models (mainly Whisper) on T4 GPUs (which don't support FlashAttention 2) and want to extract more performance. Ideally, I'd like something similar to the Python transformers library, where SDPA is used by default when available.
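
For context, without a fused kernel the SDPA path in candle today is just the unfused composition of matmuls and a softmax. A minimal sketch, assuming `q`/`k`/`v` are laid out as `(batch, num_heads, seq_len, head_dim)` and any mask is an additive bias (names and layout here are illustrative, not a candle API):

```rust
use candle_core::{Result, Tensor, D};
use candle_nn::ops::softmax_last_dim;

/// Unfused scaled dot-product attention: softmax(q k^T / sqrt(d)) v.
/// Assumes q/k/v are (batch, num_heads, seq_len, head_dim) and `mask`,
/// if present, is an additive bias broadcastable to the attention scores.
fn sdpa_unfused(q: &Tensor, k: &Tensor, v: &Tensor, mask: Option<&Tensor>) -> Result<Tensor> {
    let head_dim = q.dim(D::Minus1)?;
    let scale = 1.0 / (head_dim as f64).sqrt();
    // Attention scores: q @ k^T, scaled by 1/sqrt(head_dim).
    let scores = (q.matmul(&k.t()?)? * scale)?;
    let scores = match mask {
        Some(m) => scores.broadcast_add(m)?,
        None => scores,
    };
    // Softmax over keys, then weight the values.
    softmax_last_dim(&scores)?.matmul(v)
}
```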

@Murad-Awad
Author

@LaurentMazare any thoughts here?

@EricLBuehler
Member

@Murad-Awad our SDPA implementation is currently specialized for Metal, and only covers the decode phase where there is no masking.

For CUDA, the equivalent would most likely be to use FlashAttention (the Candle ecosystem has V1, V2, and V3).
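
For reference, the fused path on CUDA goes through the candle-flash-attn crate rather than candle_nn. A rough sketch, assuming half-precision tensors shaped `(batch, seq_len, num_heads, head_dim)` on an Ampere-or-newer GPU (which is exactly why it doesn't help on a T4); the layout and dtype constraints are my reading of the crate, so double-check against its docs:

```rust
use candle_core::{Result, Tensor};

/// Fused attention via the candle-flash-attn (FlashAttention v2) kernel.
/// Assumes q/k/v are f16/bf16 tensors shaped (batch, seq_len, num_heads, head_dim)
/// on a CUDA device with compute capability >= 8.0 (Ampere), so not a T4.
fn attention_fused(
    q: &Tensor,
    k: &Tensor,
    v: &Tensor,
    head_dim: usize,
    causal: bool,
) -> Result<Tensor> {
    let softmax_scale = 1.0 / (head_dim as f32).sqrt();
    candle_flash_attn::flash_attn(q, k, v, softmax_scale, causal)
}
```

In candle-transformers, models that support this typically gate it behind a `flash-attn` cargo feature and fall back to the unfused path otherwise.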

@Murad-Awad
Author

I see @EricLBuehler; this is related then (saw that you reacted to it too): #2726. Do you know whether (and why) FlashAttention v1 would be supported within this repo? From what I can tell, the flash-attention module only covers FlashAttention v2, which is incompatible with T4s. I saw https://github.com/huggingface/candle-flash-attn-v1, but I'm confused about how to use it with candle-transformers/kernels.
