**docs/tutorials/examples.md**
## C++
Intel® Extension for PyTorch\* provides its C++ dynamic library to allow users to implement custom DPC++ kernels to run on the XPU device. Refer to the [DPC++ extension](./features/DPC++_Extension.md) for the details.
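As a sketch of how such a custom DPC++ kernel might be packaged, the `setup.py` below follows the build-helper style of the DPC++ extension tutorial. The module name `lltm_xpu` and the source file names are hypothetical, and the helper imports (`DPCPPExtension`, `DpcppBuildExtension`) should be checked against your installed version:

```python
# Hypothetical setup.py for building a custom DPC++ kernel as a PyTorch*
# extension targeting the XPU device. Module and file names are placeholders.
from setuptools import setup
from intel_extension_for_pytorch.xpu.cpp_extension import (
    DPCPPExtension,
    DpcppBuildExtension,
)

setup(
    name="lltm_xpu",
    ext_modules=[
        # lltm_xpu.cpp / lltm_xpu_kernel.cpp are your own C++/DPC++ sources.
        DPCPPExtension("lltm_xpu", ["lltm_xpu.cpp", "lltm_xpu_kernel.cpp"]),
    ],
    # DpcppBuildExtension invokes the DPC++ compiler instead of the host one.
    cmdclass={"build_ext": DpcppBuildExtension},
)
```

After `python setup.py install`, the compiled operator can be imported and called from Python like any other extension module.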
## Model Zoo
Use cases that are already optimized by Intel engineers are available at [Model Zoo for Intel® Architecture](https://github.com/IntelAI/models). You can get performance benefits out of the box by simply running the scripts in the Model Zoo.
**docs/tutorials/releases.md**

This release provides the following features:
- Auto Mixed Precision (AMP)
  - support of AMP with BFloat16 and Float16 optimization of GPU operators
- Channels Last
  - support of channels\_last (NHWC) memory format for most key GPU operators
- DPC++ Extension
  - mechanism to create PyTorch\* operators with custom DPC++ kernels running on the XPU device
- Optimized Fusion
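The channels\_last support listed above builds on the stock PyTorch\* memory-format API. A minimal sketch, run on CPU here; the same call applies unchanged to tensors on the `xpu` device:

```python
import torch

# A 4-D tensor in the default contiguous (NCHW) layout.
x = torch.randn(2, 3, 4, 5)

# Convert the underlying storage to channels_last (NHWC); the logical
# shape stays (2, 3, 4, 5), only the strides change.
y = x.to(memory_format=torch.channels_last)

print(y.shape)                                             # torch.Size([2, 3, 4, 5])
print(y.is_contiguous(memory_format=torch.channels_last))  # True
```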

This release supports the following fusion patterns in PyTorch\* JIT mode:
- Linear + Sigmoid
- Linear + Div(scalar)
- Linear + GeLu
- Linear + GeLu\_
- T + Addmm
- T + Addmm + ReLu
- T + Addmm + Sigmoid
- Dequantize + PixelShuffle + Quantize
- Mul + Add
- Add + ReLU
- Conv2D + Leaky\_relu
- Conv2D + Leaky\_relu\_
- Conv2D + Sigmoid
- Conv2D + Dequantize
- Softplus + Tanh
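For illustration, here is a module matching the "Linear + Sigmoid" pattern from the list above. When such a scripted module runs on the XPU device, the two ops are candidates for fusion into one kernel; this sketch uses stock PyTorch\* and executes on CPU, where it still runs, just without XPU fusion:

```python
import torch

class LinearSigmoid(torch.nn.Module):
    """Linear followed by Sigmoid -- the 'Linear + Sigmoid' fusion pattern."""

    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(8, 4)

    def forward(self, x):
        return torch.sigmoid(self.fc(x))

# Script the module so the JIT sees the Linear + Sigmoid graph.
m = torch.jit.script(LinearSigmoid().eval())
with torch.no_grad():
    out = m(torch.randn(2, 8))

print(out.shape)  # torch.Size([2, 4])
```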
### Known Issues
- [CRITICAL ERROR] Kernel 'XXX' removed due to usage of FP64 instructions unsupported by the targeted hardware

  FP64 is not natively supported by the [Intel® Data Center GPU Flex Series](https://www.intel.com/content/www/us/en/products/docs/discrete-gpus/data-center-gpu/flex-series/overview.html) platform. If you run an AI workload on that platform and see this error message, it means a kernel requiring FP64 instructions was removed and not executed, so the accuracy of the whole workload is wrong.
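A common workaround is to make sure no float64 tensors reach a device without native FP64 support. A minimal sketch in plain PyTorch\*, shown on CPU; moving the tensor to the `xpu` device afterwards is assumed:

```python
import torch

# float64 data (e.g. converted from NumPy) would require FP64 kernels.
x = torch.randn(4, 4, dtype=torch.float64)

# Downcast to float32 before dispatching to hardware without native FP64.
if x.dtype == torch.float64:
    x = x.float()

print(x.dtype)  # torch.float32
```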
- symbol undefined caused by \_GLIBCXX\_USE\_CXX11\_ABI

  Error info:

  ```bash
  File "/root/.local/lib/python3.9/site-packages/ipex/__init__.py", line 4, in <module>
  ```

  DPC++ does not support \_GLIBCXX\_USE\_CXX11\_ABI=0; Intel® Extension for PyTorch\* is always compiled with \_GLIBCXX\_USE\_CXX11\_ABI=1. This undefined-symbol issue appears when PyTorch\* is compiled with \_GLIBCXX\_USE\_CXX11\_ABI=0. Update the PyTorch\* CMake files to set \_GLIBCXX\_USE\_CXX11\_ABI=1 and compile PyTorch\* with a compiler that supports \_GLIBCXX\_USE\_CXX11\_ABI=1. We recommend gcc 9.4.0 on Ubuntu 20.04.
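You can check which C++ ABI your installed PyTorch\* build uses before compiling, via the stock PyTorch\* helper `torch.compiled_with_cxx11_abi()`:

```python
import torch

# True means PyTorch* was built with _GLIBCXX_USE_CXX11_ABI=1,
# which is what Intel® Extension for PyTorch* requires.
print(torch.compiled_with_cxx11_abi())
```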
- Can't find oneMKL library when building Intel® Extension for PyTorch\* without oneMKL

  Error info:

  ```bash
  /usr/bin/ld: cannot find -lmkl_sycl
  /usr/bin/ld: cannot find -lmkl_intel_ilp64
  /usr/bin/ld: cannot find -lmkl_core
  /usr/bin/ld: cannot find -lmkl_tbb_thread
  dpcpp: error: linker command failed with exit code 1 (use -v to see invocation)
  ```

  This issue may occur when Intel® Extension for PyTorch\* is built with the oneMKL library and PyTorch\* is not built with any MKL library. The oneMKL kernels may incorrectly run into the CPU backend and trigger this issue. Resolve it by installing the MKL library from conda:
  ```bash
  conda install mkl
  conda install mkl-include
  ```
  then do a clean build of PyTorch\*.
- OSError: libmkl\_intel\_lp64.so.1: cannot open shared object file: No such file or directory

  The wrong MKL library is used when multiple MKL libraries exist in the system. Preload oneMKL via `LD_PRELOAD`.

  If you continue seeing similar issues for other shared object files, add the corresponding files under ${MKL\_DPCPP\_ROOT}/lib/intel64/ via `LD_PRELOAD`. Note that the suffix of the libraries may change (e.g. from .1 to .2) if more than one oneMKL library is installed on the system.
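To diagnose which shared object the dynamic loader actually fails on, you can try to dlopen the library directly from Python (standard library only; the library name is taken from the error message above):

```python
import ctypes

def try_load(libname):
    """Attempt to dlopen a shared library; return None on success,
    or the loader's error message on failure."""
    try:
        ctypes.CDLL(libname)
        return None
    except OSError as exc:
        return str(exc)

# A None result means the loader can resolve the library; otherwise the
# message shows why (e.g. "cannot open shared object file").
print(try_load("libmkl_intel_lp64.so.1"))
```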