This repository was archived by the owner on Oct 25, 2024. It is now read-only.

Commit d40be7f

Update docker (#1453)
1 parent d182dc4 commit d40be7f

File tree

3 files changed: +6, −3 lines


examples/huggingface/pytorch/text-generation/quantization/README.md

Lines changed: 3 additions & 0 deletions
````diff
@@ -142,8 +142,11 @@ Intel-extension-for-pytorch dependencies are in oneapi package, before install i
 ### Create Environment
 Pytorch and Intel-extension-for-pytorch version for intel GPU > 2.1 are required, python version requests equal or higher than 3.9 due to [text evaluation library](https://github.com/EleutherAI/lm-evaluation-harness/tree/master) limitation, the dependent packages are listed in requirements_GPU.txt, we recommend create environment as the following steps. For Intel-exension-for-pytorch, we should install from source code now, and Intel-extension-for-pytorch will add weight-only quantization in the next version.
 
+>**Note**: please install transformers==4.35.2.
+
 ```bash
 pip install -r requirements_GPU.txt
+pip install transformers==4.35.2
 source /opt/intel/oneapi/setvars.sh
 git clone https://github.com/intel/intel-extension-for-pytorch.git ipex-gpu
 cd ipex-gpu
````
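The README hunk above notes that Python 3.9 or newer is required by the text evaluation library. A minimal pre-flight check before running the pinned installs could look like this (this guard is an illustrative sketch, not part of the commit):

```shell
# Guard: the README states Python >= 3.9 is required by the
# text-evaluation library; fail fast before installing requirements.
pyver=$(python3 -c 'import sys; print("ok" if sys.version_info >= (3, 9) else "too old")')
echo "Python version check: $pyver"
```

If the check prints `ok`, proceed with `pip install -r requirements_GPU.txt` and the explicit `pip install transformers==4.35.2` shown in the diff.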

examples/huggingface/pytorch/text-generation/quantization/requirements_GPU.txt

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,7 +5,7 @@ protobuf
 sentencepiece != 0.1.92
 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
 torch==2.1.0a0
-transformers==4.35.2
+transformers
 optimum-intel
 bitsandbytes #baichuan
 transformers_stream_generator
```

intel_extension_for_transformers/neural_chat/docker/Dockerfile

Lines changed: 2 additions & 2 deletions
```diff
@@ -57,7 +57,7 @@ RUN if [ "$REPO_PATH" == "" ]; then rm -rf intel-extension-for-transformers/* &&
 WORKDIR /intel-extension-for-transformers
 
 RUN pip install oneccl_bind_pt --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/cpu/us/ && \
-    cd /intel-extension-for-transformers && pip install -r requirements.txt && \
+    cd /intel-extension-for-transformers && pip install schema==0.7.5 numpy==1.26.4 && \
     python setup.py install && \
     cd ./intel_extension_for_transformers/neural_chat/examples/finetuning/instruction && pip install -r requirements.txt && \
     cd /intel-extension-for-transformers/intel_extension_for_transformers/neural_chat && pip install -r requirements_cpu.txt && \
@@ -105,7 +105,7 @@ WORKDIR /intel-extension-for-transformers
 RUN cd /intel-extension-for-transformers && \
     sed -i '/find-links https:\/\/download.pytorch.org\/whl\/torch_stable.html/d' requirements.txt && \
     sed -i '/^torch/d;/^intel-extension-for-pytorch/d' requirements.txt && \
-    pip install -r requirements.txt && \
+    pip install schema==0.7.5 numpy==1.26.4 && \
     python setup.py install
 
 RUN git clone https://github.com/huggingface/optimum-habana.git && \
```
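The Dockerfile hunks keep the two `sed` filters and replace the blanket `pip install -r requirements.txt` with explicit `schema`/`numpy` pins. The filtering step can be reproduced in isolation; the file path and contents below are illustrative, not taken from the repository:

```shell
# Sketch: reproduce the requirements.txt filtering the Dockerfile performs
# before installing pinned packages (sample file, for demonstration only).
cat > /tmp/requirements_demo.txt <<'EOF'
--find-links https://download.pytorch.org/whl/torch_stable.html
torch==2.1.0
intel-extension-for-pytorch==2.1.0
transformers
EOF

# Drop the find-links line, then any line starting with torch or
# intel-extension-for-pytorch, mirroring the two sed calls in the Dockerfile.
sed -i '/find-links https:\/\/download.pytorch.org\/whl\/torch_stable.html/d' /tmp/requirements_demo.txt
sed -i '/^torch/d;/^intel-extension-for-pytorch/d' /tmp/requirements_demo.txt

cat /tmp/requirements_demo.txt   # only "transformers" remains
```

Note that `/^torch/d` anchors at line start, so `transformers` (which merely contains "t") is untouched while both `torch==…` and `intel-extension-for-pytorch==…` pins are removed before `setup.py install` runs.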
