While doing some research, I tried optimizing a number of models with opset=7, 8, 9. These models are in the ONNX v3 format (ir_version 3), but after optimization, although the graph structure remains the same, the format is implicitly upgraded to ONNX v4. This affects model correctness to a small but non-negligible extent: the top-5 and top-10 labels of classification models change, while the top-1 label remains accurate. This does not happen for models with higher opsets (10 and above).
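For reference, a minimal way to observe the implicit upgrade (a sketch, assuming the standalone onnxoptimizer package and its default passes; the filename below is a placeholder for whichever opset-7/8/9 model from the ONNX Hub is used):

```python
# Minimal sketch, assuming the standalone onnxoptimizer package and an
# opset-7 classification model downloaded from the ONNX Hub (the filename
# below is a placeholder).
import onnx
import onnxoptimizer

original = onnx.load("resnet50-caffe2-v1-7.onnx")
print("ir_version before:", original.ir_version)    # 3 for the opset 7/8/9 models

optimized = onnxoptimizer.optimize(original)         # default optimization passes
print("ir_version after: ", optimized.ir_version)    # silently bumped to 4

onnx.checker.check_model(optimized)                  # still passes the checker
onnx.save(optimized, "resnet50-caffe2-v1-7.opt.onnx")
```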
For instance, using the ILSVRC2017 validation dataset (5.5k images) with classification models obtained from the ONNX Hub: (1) ResNet50-caffe2 showed a ~13% difference between the original and optimized models for the top-10 labels and ~2% for the top-5 labels; (2) ShuffleNet showed 0.5-1% differences for the same cases. The optimization also slightly affected the F1 score of the bounding boxes produced by object detection models (e.g., YOLO v2 had a 0.2% difference). Other models showed even smaller differences.
I emphasize and report this because the change happens implicitly, no indication is given, and the differences are difficult to notice. Nevertheless, accuracy is affected, even if only slightly. I highly recommend that at least a warning be raised, if the behaviour is not patched altogether. The opsets mentioned above are used by a large variety of models in the official ONNX Hub and are expected to be backwards compatible with the current ONNX version.
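As a rough idea of the kind of warning I have in mind (just an illustration wrapped around the optimize() call, not a proposal for the actual implementation):

```python
# Sketch of the suggested warning (not the optimizer's actual behaviour):
# flag any implicit IR version change introduced by optimization.
import warnings
import onnx
import onnxoptimizer

def optimize_with_warning(model: onnx.ModelProto) -> onnx.ModelProto:
    optimized = onnxoptimizer.optimize(model)
    if optimized.ir_version != model.ir_version:
        warnings.warn(
            "optimization implicitly upgraded the IR version from "
            f"{model.ir_version} to {optimized.ir_version}; "
            "outputs may differ slightly from the original model"
        )
    return optimized
```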