Commit 1dd3f47: Remove references to FORCE_CMAKE
Parent: 6e89775

File tree: 4 files changed (+12, -13 lines)

Makefile (5 additions, 5 deletions)

```diff
@@ -13,19 +13,19 @@ build:
 	python3 -m pip install -e .
 
 build.cuda:
-	CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 python3 -m pip install -e .
+	CMAKE_ARGS="-DLLAMA_CUBLAS=on" python3 -m pip install -e .
 
 build.opencl:
-	CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 python3 -m pip install -e .
+	CMAKE_ARGS="-DLLAMA_CLBLAST=on" python3 -m pip install -e .
 
 build.openblas:
-	CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 python3 -m pip install -e .
+	CMAKE_ARGS="-DLLAMA_CLBLAST=on" python3 -m pip install -e .
 
 build.blis:
-	CMAKE_ARGS="-DLLAMA_OPENBLAS=on -DLLAMA_OPENBLAS_VENDOR=blis" FORCE_CMAKE=1 python3 -m pip install -e .
+	CMAKE_ARGS="-DLLAMA_OPENBLAS=on -DLLAMA_OPENBLAS_VENDOR=blis" python3 -m pip install -e .
 
 build.metal:
-	CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 python3 -m pip install -e .
+	CMAKE_ARGS="-DLLAMA_METAL=on" python3 -m pip install -e .
 
 build.sdist:
 	python3 -m build --sdist
```
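All of the targets above follow one pattern: the desired backend flags are passed through the `CMAKE_ARGS` environment variable, which the build backend splits into individual CMake arguments. A minimal sketch of that parsing step, assuming shell-style quoting rules (this is an illustration using `shlex`, not llama-cpp-python's actual build code):

```python
import os
import shlex


def cmake_flags_from_env(env=None):
    """Split the CMAKE_ARGS environment variable into individual
    CMake arguments, respecting shell-style quoting."""
    if env is None:
        env = os.environ
    return shlex.split(env.get("CMAKE_ARGS", ""))


# Example: the flags set by the `build.blis` target above.
flags = cmake_flags_from_env(
    {"CMAKE_ARGS": "-DLLAMA_OPENBLAS=on -DLLAMA_OPENBLAS_VENDOR=blis"}
)
print(flags)  # ['-DLLAMA_OPENBLAS=on', '-DLLAMA_OPENBLAS_VENDOR=blis']
```

Because the variable is consumed this way, no separate `FORCE_CMAKE=1` switch is needed to route the flags to CMake.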

README.md (5 additions, 6 deletions)

````diff
@@ -48,36 +48,35 @@ Otherwise, while installing it will build the llama.cpp x86 version which will b
 ### Installation with Hardware Acceleration
 
 `llama.cpp` supports multiple BLAS backends for faster processing.
-Use the `FORCE_CMAKE=1` environment variable to force the use of `cmake` and install the pip package for the desired BLAS backend.
 
 To install with OpenBLAS, set the `LLAMA_BLAS and LLAMA_BLAS_VENDOR` environment variables before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS" pip install llama-cpp-python
 ```
 
 To install with cuBLAS, set the `LLAMA_CUBLAS=1` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
 ```
 
 To install with CLBlast, set the `LLAMA_CLBLAST=1` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_CLBLAST=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_CLBLAST=on" pip install llama-cpp-python
 ```
 
 To install with Metal (MPS), set the `LLAMA_METAL=on` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_METAL=on" pip install llama-cpp-python
 ```
 
 To install with hipBLAS / ROCm support for AMD cards, set the `LLAMA_HIPBLAS=on` environment variable before installing:
 
 ```bash
-CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+CMAKE_ARGS="-DLLAMA_HIPBLAS=on" pip install llama-cpp-python
 ```
 
 #### Windows remarks
````
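Every install command in this diff uses the same POSIX shell idiom: a `VAR=value` prefix sets the variable only in the environment of the single command that follows. A quick illustration of that scoping (nothing is actually built here):

```shell
# The VAR=value prefix exports CMAKE_ARGS only to the command it precedes;
# the variable is not visible to later commands in the same shell session.
CMAKE_ARGS="-DLLAMA_CUBLAS=on" sh -c 'echo "inside: $CMAKE_ARGS"'
echo "after: ${CMAKE_ARGS:-unset}"
# prints:
#   inside: -DLLAMA_CUBLAS=on
#   after: unset
```

This is why the README can recommend a one-liner per backend rather than asking readers to `export` build flags globally.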

docker/cuda_simple/Dockerfile (1 addition, 1 deletion)

```diff
@@ -21,7 +21,7 @@ ENV LLAMA_CUBLAS=1
 RUN python3 -m pip install --upgrade pip pytest cmake scikit-build setuptools fastapi uvicorn sse-starlette pydantic-settings
 
 # Install llama-cpp-python (build with cuda)
-RUN CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python
+RUN CMAKE_ARGS="-DLLAMA_CUBLAS=on" pip install llama-cpp-python
 
 # Run the server
 CMD python3 -m llama_cpp.server
```

docs/install/macos.md (1 addition, 1 deletion)

````diff
@@ -30,7 +30,7 @@ conda activate llama
 *(you needed xcode installed in order pip to build/compile the C++ code)*
 ```
 pip uninstall llama-cpp-python -y
-CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
+CMAKE_ARGS="-DLLAMA_METAL=on" pip install -U llama-cpp-python --no-cache-dir
 pip install 'llama-cpp-python[server]'
 
 # you should now have llama-cpp-python v0.1.62 or higher installed
````
