Compile llama.cpp with CUDA 12 #13403
Unanswered
artyomboyko asked this question in Q&A
Replies: 2 comments
-
Those are CUDA compute capabilities, basically device architectures, not CUDA versions. See https://developer.nvidia.com/cuda-gpus
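Since those values are compute capabilities rather than CUDA toolkit versions, a configure line covering both card families might look like the sketch below. It assumes the cards in question are GeForce RTX 40-series (Ada, compute capability 8.9) and RTX 50-series (Blackwell, compute capability 12.0, which requires a recent CUDA 12.x toolkit); check the NVIDIA table linked above for the exact model, since e.g. a Quadro RTX 5000 has a different compute capability.

```shell
# Sketch: configure llama.cpp for two GPU generations at once.
# CMAKE_CUDA_ARCHITECTURES takes compute capabilities with the dot removed,
# so 8.9 -> 89 and 12.0 -> 120 (it is NOT the CUDA toolkit version).
cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="89;120"
cmake --build build --config Release
```

The resulting binary embeds code for both architectures, so the same build runs on either card.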
-
@0cc4m OK. How do I set this parameter correctly so that RTX4000 and RTX5000 can be used?
-
I am building llama.cpp with CUDA 12 support (RTX5000). How can I add support for RTX4000 and RTX5000 using

cmake -B build -DGGML_CUDA=ON -DCMAKE_CUDA_ARCHITECTURES="86;89"

? The range of acceptable values for the CMAKE_CUDA_ARCHITECTURES parameter is not specified in the documentation. Should I specify CUDA 12 as 120 or as 12?