Commit fb26b62

[doc] Format change (#1735)
* format change for doc generation
* remove old library name in error info
* remove model zoo info
1 parent a8182e9

2 files changed: +28 −38 lines

docs/tutorials/examples.md (−4)

@@ -592,7 +592,3 @@ with torch.no_grad():
 ## C++
 
 Intel® Extension for PyTorch\* provides its C++ dynamic library to allow users to implement custom DPC++ kernels to run on the XPU device. Refer to the [DPC++ extension](./features/DPC++_Extension.md) for the details.
-
-## Model Zoo
-
-Use cases that are already optimized by Intel engineers are available at [Model Zoo for Intel® Architecture](https://github.com/IntelAI/models). You can get performance benefits out-of-box by simply running scripts in the Model Zoo.

docs/tutorials/releases.md (+28 −34)

@@ -13,7 +13,7 @@ This release provides the following features:
 - Auto Mixed Precision (AMP)
   - support of AMP with BFloat16 and Float16 optimization of GPU operators
 - Channels Last
-  - support of channels_last (NHWC) memory format for most key GPU operators
+  - support of channels\_last (NHWC) memory format for most key GPU operators
 - DPC++ Extension
   - mechanism to create PyTorch\* operators with custom DPC++ kernels running on the XPU device
 - Optimized Fusion
@@ -34,7 +34,7 @@ This release supports the following fusion patterns in PyTorch\* JIT mode:
 - Linear + Sigmoid
 - Linear + Div(scalar)
 - Linear + GeLu
-- Linear + GeLu_
+- Linear + GeLu\_
 - T + Addmm
 - T + Addmm + ReLu
 - T + Addmm + Sigmoid
@@ -52,8 +52,8 @@ This release supports the following fusion patterns in PyTorch\* JIT mode:
 - Dequantize + PixelShuffle + Quantize
 - Mul + Add
 - Add + ReLU
-- Conv2D + Leaky_relu
-- Conv2D + Leaky_relu_
+- Conv2D + Leaky\_relu
+- Conv2D + Leaky\_relu\_
 - Conv2D + Sigmoid
 - Conv2D + Dequantize
 - Softplus + Tanh
@@ -64,61 +64,55 @@ This release supports the following fusion patterns in PyTorch\* JIT mode:
 
 ### Known Issues
 
-#### [CRITICAL ERROR] Kernel 'XXX' removed due to usage of FP64 instructions unsupported by the targeted hardware
+- [CRITICAL ERROR] Kernel 'XXX' removed due to usage of FP64 instructions unsupported by the targeted hardware
 
-FP64 is not natively supported by the [Intel® Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html) platform. If you run any AI workload on that platform and receive this error message, it means a kernel requiring FP64 instructions is removed and not executed, hence the accuracy of the whole workload is wrong.
+FP64 is not natively supported by the [Intel® Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html) platform. If you run any AI workload on that platform and receive this error message, it means a kernel requiring FP64 instructions is removed and not executed, hence the accuracy of the whole workload is wrong.
 
-#### symbol undefined caused by \_GLIBCXX_USE_CXX11_ABI
+- symbol undefined caused by \_GLIBCXX\_USE\_CXX11\_ABI
 
-Error info: <br>
-
-```bash
-File "/root/.local/lib/python3.9/site-packages/ipex/__init__.py", line 4, in <module>
-from . import _C
-ImportError: /root/.local/lib/python3.9/site-packages/ipex/lib/libipex_gpu_core.so: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
-```
-
-DPC++ does not support \_GLIBCXX_USE_CXX11_ABI=0, Intel® Extension for PyTorch\* is always compiled with \_GLIBCXX_USE_CXX11_ABI=1. This symbol undefined issue appears when PyTorch\* is compiled with \_GLIBCXX_USE_CXX11_ABI=0. Update PyTorch\* CMAKE file to set \_GLIBCXX_USE_CXX11_ABI=1 and compile PyTorch\* with particular compiler which supports \_GLIBCXX_USE_CXX11_ABI=1. We recommend to use gcc version 9.4.0 on ubuntu 20.04. <br>
-
-#### Can't find oneMKL library when build Intel® Extension for PyTorch\* without oneMKL
+```bash
+ImportError: undefined symbol: _ZNK5torch8autograd4Node4nameB5cxx11Ev
+```
+
+DPC++ does not support \_GLIBCXX\_USE\_CXX11\_ABI=0, Intel® Extension for PyTorch\* is always compiled with \_GLIBCXX\_USE\_CXX11\_ABI=1. This symbol undefined issue appears when PyTorch\* is compiled with \_GLIBCXX\_USE\_CXX11\_ABI=0. Update PyTorch\* CMAKE file to set \_GLIBCXX\_USE\_CXX11\_ABI=1 and compile PyTorch\* with particular compiler which supports \_GLIBCXX\_USE\_CXX11\_ABI=1. We recommend to use gcc version 9.4.0 on ubuntu 20.04.
 
-Error info: <br>
+- Can't find oneMKL library when build Intel® Extension for PyTorch\* without oneMKL
 
 ```bash
-/usr/bin/ld: cannot find -lmkl_sycl <br>
-/usr/bin/ld: cannot find -lmkl_intel_ilp64 <br>
-/usr/bin/ld: cannot find -lmkl_core <br>
-/usr/bin/ld: cannot find -lmkl_tbb_thread <br>
-dpcpp: error: linker command failed with exit code 1 (use -v to see invocation) <br>
+/usr/bin/ld: cannot find -lmkl_sycl
+/usr/bin/ld: cannot find -lmkl_intel_ilp64
+/usr/bin/ld: cannot find -lmkl_core
+/usr/bin/ld: cannot find -lmkl_tbb_thread
+dpcpp: error: linker command failed with exit code 1 (use -v to see invocation)
 ```
-
+
 When PyTorch\* is built with oneMKL library and Intel® Extension for PyTorch\* is built without oneMKL library, this linker issue may occur. Resolve it by setting:
-
+
 ```bash
 export USE_ONEMKL=OFF
 export MKL_DPCPP_ROOT=${PATH_To_Your_oneMKL}/__release_lnx/mkl
 ```
-
+
 Then clean build Intel® Extension for PyTorch\*.
 
-#### undefined symbol: mkl_lapack_dspevd. Intel MKL FATAL ERROR: cannot load libmkl_vml_avx512.so.2 or libmkl_vml_def.so.2
+- undefined symbol: mkl\_lapack\_dspevd. Intel MKL FATAL ERROR: cannot load libmkl\_vml\_avx512.so.2 or libmkl\_vml\_def.so.2
 
 This issue may occur when Intel® Extension for PyTorch\* is built with oneMKL library and PyTorch\* is not build with any MKL library. The oneMKL kernel may run into CPU backend incorrectly and trigger this issue. Resolve it by installing MKL library from conda:
-
+
 ```bash
 conda install mkl
 conda install mkl-include
 ```
-
+
 then clean build PyTorch\*.
 
-#### OSError: libmkl_intel_lp64.so.1: cannot open shared object file: No such file or directory
+- OSError: libmkl\_intel\_lp64.so.1: cannot open shared object file: No such file or directory
 
 Wrong MKL library is used when multiple MKL libraries exist in system. Preload oneMKL by:
-
+
 ```bash
 export LD_PRELOAD=${MKL_DPCPP_ROOT}/lib/intel64/libmkl_intel_lp64.so.1:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_intel_ilp64.so.1:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_sequential.so.1:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_core.so.1:${MKL_DPCPP_ROOT}/lib/intel64/libmkl_sycl.so.1
 ```
-
-If you continue seeing similar issues for other shared object files, add the corresponding files under ${MKL_DPCPP_ROOT}/lib/intel64/ by `LD_PRELOAD`. Note that the suffix of the libraries may change (e.g. from .1 to .2), if more than one oneMKL library is installed on the system.
+
+If you continue seeing similar issues for other shared object files, add the corresponding files under ${MKL\_DPCPP\_ROOT}/lib/intel64/ by `LD_PRELOAD`. Note that the suffix of the libraries may change (e.g. from .1 to .2), if more than one oneMKL library is installed on the system.
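Aside: the `_GLIBCXX_USE_CXX11_ABI` mismatch described in the known issues can be recognized directly from the mangled symbol in the error message. A minimal sketch (the `B5cxx11` substring is the libstdc++ dual-ABI tag; `sym` is the symbol quoted in the diff above):

```python
# The "B5cxx11" tag inside an Itanium-mangled name marks a std::string
# parameter/return built with _GLIBCXX_USE_CXX11_ABI=1, so its presence in the
# unresolved symbol means the extension expects the new C++ ABI from PyTorch.
sym = "_ZNK5torch8autograd4Node4nameB5cxx11Ev"
expects_cxx11_abi = "B5cxx11" in sym
print(expects_cxx11_abi)  # True
```

On a machine where PyTorch\* is installed, `torch.compiled_with_cxx11_abi()` reports which ABI the local build actually uses, which tells you whether the two sides match.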

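Aside: the `LD_PRELOAD` workaround in the diff above strings five library paths together, which is easy to mistype. A sketch that assembles the same value programmatically (the fallback path `/opt/intel/oneapi/mkl/latest` is an illustrative assumption, not from the source; the library names come from the workaround itself):

```python
import os

# Build the colon-separated LD_PRELOAD list from MKL_DPCPP_ROOT.
root = os.environ.get("MKL_DPCPP_ROOT", "/opt/intel/oneapi/mkl/latest")
libs = [
    "libmkl_intel_lp64.so.1",
    "libmkl_intel_ilp64.so.1",
    "libmkl_sequential.so.1",
    "libmkl_core.so.1",
    "libmkl_sycl.so.1",
]
preload = ":".join(f"{root}/lib/intel64/{lib}" for lib in libs)

# Note: setting it here only affects processes spawned after this point;
# the dynamic linker reads LD_PRELOAD at process startup.
os.environ["LD_PRELOAD"] = preload
print(preload.count(":"))  # 4 separators for 5 libraries
```

If the `.so.1` suffixes on your system differ (the doc notes they may be `.2`), adjust the `libs` list accordingly.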