
How to convert a quantized ONNX int8 model to an engine? #4542

@Akiy071

Description


Hey guys! I ran into a problem when trying to convert a quantized ONNX int8 model into an engine file, and I don't know how to solve it. I have already searched for similar issues and questions. Here is my error information:

[Image: error message]

Here is my yolov8n_int8 ONNX graph, followed by the code used to export the ONNX model:

[Image: yolov8n_int8 ONNX graph]

[Image: ONNX export code]
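For reference, a minimal sketch of how an int8 ONNX model is typically produced with onnxruntime static quantization in QDQ format, which is the format TensorRT's ONNX parser generally expects (QOperator-style models tend to fail to parse). The file names, input name/shape, and the calibration reader below are hypothetical placeholders, not taken from the original post:

```python
import numpy as np
from onnxruntime.quantization import (
    CalibrationDataReader,
    QuantFormat,
    QuantType,
    quantize_static,
)

class RandomCalibrationReader(CalibrationDataReader):
    """Feeds a few calibration batches to the quantizer.
    Real calibration should use representative images, not random data."""

    def __init__(self, input_name="images", shape=(1, 3, 640, 640), num_batches=8):
        self._data = iter(
            [{input_name: np.random.rand(*shape).astype(np.float32)}
             for _ in range(num_batches)]
        )

    def get_next(self):
        return next(self._data, None)

quantize_static(
    model_input="yolov8n.onnx",        # float32 model exported from the framework
    model_output="yolov8n_int8.onnx",  # quantized model to feed to TensorRT
    calibration_data_reader=RandomCalibrationReader(),
    quant_format=QuantFormat.QDQ,      # insert QuantizeLinear/DequantizeLinear nodes
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
    per_channel=True,
)
```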

Environment

Python version: 3.11.9
TensorRT version: 10.6.0
onnxruntime version: 1.20.0
GPU: RTX 3090

Maybe there's something wrong with the way I quantize the model? Anyway, thanks for your reply!
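For completeness, a hedged sketch of building the engine from the quantized ONNX file with the TensorRT Python API; the file names are placeholders. Parsing the model this way also prints the parser errors, which is usually where unsupported quantized ops show up:

```python
import tensorrt as trt

# Roughly equivalent to:
#   trtexec --onnx=yolov8n_int8.onnx --saveEngine=yolov8n_int8.engine --int8
logger = trt.Logger(trt.Logger.VERBOSE)
builder = trt.Builder(logger)
network = builder.create_network(0)  # explicit batch is the default in TensorRT 10
parser = trt.OnnxParser(network, logger)

with open("yolov8n_int8.onnx", "rb") as f:
    if not parser.parse(f.read()):
        # Print every parser error before giving up.
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise SystemExit("Failed to parse the ONNX model")

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.INT8)  # honor the Q/DQ scales baked into the model

engine_bytes = builder.build_serialized_network(network, config)
with open("yolov8n_int8.engine", "wb") as f:
    f.write(engine_bytes)
```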

Metadata

Labels

Module:Quantization (Issues related to Quantization), triaged (Issue has been triaged by maintainers)
