Intel® Neural Compressor
===========================
<h3>An open-source Python library supporting popular model compression techniques on all mainstream deep learning frameworks (TensorFlow, PyTorch, and ONNX Runtime)</h3>
* Intel Core Ultra Processors (Meteor Lake, Lunar Lake)
#### Intel® Neural Compressor supports GPUs built on Intel's Xe architecture:
* Intel Data Center GPU Flex Series (Arctic Sound-M)
* Intel Data Center GPU Max Series (Ponte Vecchio)
* Intel® Arc™ B-Series Graphics (Battlemage)
#### Intel® Neural Compressor quantized ONNX models support multiple hardware vendors through ONNX Runtime:
* Intel CPU, AMD/ARM CPU, and NVIDIA GPU. Please refer to the validated model [list](./validated_model_list.md#validated-onnx-qdq-int8-models-on-multiple-hardware-through-onnx-runtime).
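A quantized ONNX model is a plain ONNX file, so the only per-vendor decision at inference time is which ONNX Runtime execution provider to request. The helper below is a hypothetical sketch (not part of Neural Compressor or ONNX Runtime) of that mapping; the provider names themselves are real ONNX Runtime identifiers.

```python
# Hypothetical helper (illustration only): choose the ONNX Runtime execution
# provider for a quantized (QDQ INT8) model based on the target hardware vendor.
def pick_execution_provider(vendor: str) -> str:
    providers = {
        "intel-cpu": "CPUExecutionProvider",    # default CPU kernels
        "amd-cpu": "CPUExecutionProvider",
        "arm-cpu": "CPUExecutionProvider",
        "nvidia-gpu": "CUDAExecutionProvider",  # requires onnxruntime-gpu
    }
    try:
        return providers[vendor.lower()]
    except KeyError as exc:
        raise ValueError(f"unsupported vendor: {vendor}") from exc


# A session could then be created with the chosen provider, e.g.:
#   import onnxruntime as ort
#   sess = ort.InferenceSession(
#       "model_int8.onnx",
#       providers=[pick_execution_provider("intel-cpu")],
#   )
```

Because the same quantized file runs under any of these providers, deployment targets can be switched without re-quantizing the model.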
### Validated Software Environment
* OS version: CentOS 8.4, Ubuntu 24.04, MacOS Ventura 13.5, Windows 11
* Python version: 3.10, 3.11, 3.12
<table class="docutils">
<thead>