Open
Labels
Module:Quantization (Issues related to Quantization), triaged (Issue has been triaged by maintainers)
Description
Hey guys! I ran into a problem while trying to convert a quantized ONNX INT8 model into a TensorRT engine file, and I don't know how to solve it. I have already searched for similar issues and questions. Here is my error information:
Here is my yolov8n_int8 ONNX graph, and below is the code used to export the ONNX model:
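The original export script is not reproduced here. For comparison, a minimal sketch of static INT8 quantization with onnxruntime that produces a QDQ-format model (the kind of explicitly quantized graph TensorRT can parse) is shown below; the file names, the input name "images", and the random calibration reader are assumptions for illustration, not the poster's actual code.

```python
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader, QuantFormat, QuantType, quantize_static,
)

class RandomCalibReader(CalibrationDataReader):
    """Feeds a few random NCHW tensors just to exercise the pipeline;
    real calibration should use representative images."""
    def __init__(self, n=8):
        self.samples = [
            {"images": np.random.rand(1, 3, 640, 640).astype(np.float32)}
            for _ in range(n)
        ]

    def get_next(self):
        return self.samples.pop(0) if self.samples else None

quantize_static(
    "yolov8n.onnx",                   # FP32 model (assumed file name)
    "yolov8n_int8.onnx",              # quantized output
    RandomCalibReader(),
    quant_format=QuantFormat.QDQ,     # Q/DQ nodes, the format TensorRT expects
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
    per_channel=True,
)
```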

Environment
Python version: 3.11.9
TensorRT version: 10.6.0
ONNX Runtime version: 1.20.0
GPU: RTX 3090
Maybe there's something wrong with the way I quantize? Anyway, thanks for your reply!
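For reference, a minimal sketch of building the engine from the quantized model with the TensorRT 10 Python API is below; it roughly mirrors `trtexec --onnx=yolov8n_int8.onnx --int8 --saveEngine=yolov8n_int8.engine` and prints the parser errors, which usually point at the offending node. File names are assumptions taken from the description above, and this is not claimed to be the failing script.

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(0)          # explicit batch (TensorRT 10 default)
parser = trt.OnnxParser(network, logger)

with open("yolov8n_int8.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Print why the ONNX graph was rejected before giving up.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("ONNX parse failed")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)        # honor the Q/DQ scales already in the graph

serialized = builder.build_serialized_network(network, config)
with open("yolov8n_int8.engine", "wb") as f:
    f.write(serialized)
```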