
[QNN EP] Initialization errors - unsupported op BatchNormalization #22819

Open
uros-ms opened this issue Nov 13, 2024 · 2 comments
Labels
ep:QNN issues related to QNN execution provider

Comments


uros-ms commented Nov 13, 2024

When I try to run inference using the QNN EP, I get the initialization errors below, which seem to originate from an unsupported BatchNormalization operation. According to the QNN docs and the ONNX Runtime QNN EP docs, this operation is supported. The model is quantized to int8 precision for all ops.

The model is trained in TensorFlow 2 and converted to ONNX using tf2onnx; the unsupported op corresponds to tf.keras.layers.BatchNormalization().

Framework ONNX 1.19.0
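
For reference, a minimal sketch of how the session is created with the QNN EP (the model path and the HTP backend path here are placeholders, not the exact values from my setup):

```python
import onnxruntime as ort

so = ort.SessionOptions()

session = ort.InferenceSession(
    "model_v23_int8.onnx",                               # placeholder path to the quantized model
    sess_options=so,
    providers=["QNNExecutionProvider"],
    provider_options=[{"backend_path": "QnnHtp.dll"}],   # HTP backend; adjust for the target platform
)
```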

Unsupported nodes in QNN EP: { name: StatefulPartitionedCall/model_v23_1/dec06/batch_normalization_96/FusedBatchNormV3, type: BatchNormalization }{ name: StatefulPartitionedCall/model_v23_1/dec05/batch_normalization_95/FusedBatchNormV3, type: BatchNormalization }{ name: StatefulPartitionedCall/model_v23_1/dec04/batch_normalization_94/FusedBatchNormV3, type: BatchNormalization }{ name: StatefulPartitionedCall/model_v23_1/dec03/batch_normalization_93/FusedBatchNormV3, type: BatchNormalization }{ name: StatefulPartitionedCall/model_v23_1/dec02/batch_normalization_92/FusedBatchNormV3, type: BatchNormalization }{ name: StatefulPartitionedCall/model_v23_1/dec01/batch_normalization_91/FusedBatchNormV3, type: BatchNormalization }{ name: StatefulPartitionedCall/model_v23_1/hdec06/batch_normalization_103/FusedBatchNormV3, type: BatchNormalization }{ name: StatefulPartitionedCall/model_v23_1/hdec05/batch_normalization_102/FusedBatchNormV3, type: BatchNormalization }{ name: StatefulPartitionedCall/m
Starting stage: Graph Preparation Initializing
Completed stage: Graph Preparation Initializing (163 us)
Starting stage: Graph Transformations and Optimizations
Completed stage: Graph Transformations and Optimizations (105315 us)
Starting stage: Graph Sequencing for Target
Completed stage: Graph Sequencing for Target (11229 us)
Starting stage: VTCM Allocation
Completed stage: VTCM Allocation (9276 us)
Starting stage: Parallelization Optimization
Completed stage: Parallelization Optimization (2400 us)
Starting stage: Finalizing Graph Sequence
Completed stage: Finalizing Graph Sequence (3659 us)
Starting stage: Completion
Completed stage: Completion (177 us)
Starting stage: Graph Preparation Initializing
Completed stage: Graph Preparation Initializing (80 us)
Starting stage: Graph Transformations and Optimizations
Completed stage: Graph Transformations and Optimizations (3869 us)
Starting stage: Graph Sequencing for Target
Completed stage: Graph Sequencing for Target (734 us)
Starting stage: VTCM Allocation
Completed stage: VTCM Allocation (221 us)
Starting stage: Parallelization Optimization
Completed stage: Parallelization Optimization (45 us)
Starting stage: Finalizing Graph Sequence
Completed stage: Finalizing Graph Sequence (129 us)
Starting stage: Completion
Completed stage: Completion (10 us)
Starting stage: Graph Preparation Initializing
Completed stage: Graph Preparation Initializing (69 us)
Starting stage: Graph Transformations and Optimizations
Completed stage: Graph Transformations and Optimizations (3580 us)
Starting stage: Graph Sequencing for Target
Completed stage: Graph Sequencing for Target (773 us)
Starting stage: VTCM Allocation
Completed stage: VTCM Allocation (289 us)
Starting stage: Parallelization Optimization
Completed stage: Parallelization Optimization (55 us)
Starting stage: Finalizing Graph Sequence
Completed stage: Finalizing Graph Sequence (139 us)
Starting stage: Completion
Completed stage: Completion (9 us)
Starting stage: Graph Preparation Initializing
Completed stage: Graph Preparation Initializing (71 us)
Starting stage: Graph Transformations and Optimizations
Completed stage: Graph Transformations and Optimizations (3314 us)
Starting stage: Graph Sequencing for Target
Completed stage: Graph Sequencing for Target (718 us)
Starting stage: VTCM Allocation
Completed stage: VTCM Allocation (362 us)
Starting stage: Parallelization Optimization
Completed stage: Parallelization Optimization (46 us)
Starting stage: Finalizing Graph Sequence
Completed stage: Finalizing Graph Sequence (120 us)
Starting stage: Completion
Completed stage: Completion (8 us)
Starting stage: Graph Preparation Initializing
Completed stage: Graph Preparation Initializing (100 us)
Starting stage: Graph Transformations and Optimizations
Completed stage: Graph Transformations and Optimizations (3117 us)
Starting stage: Graph Sequencing for Target
Completed stage: Graph Sequencing for Target (663 us)
Starting stage: VTCM Allocation
Completed stage: VTCM Allocation (322 us)
Starting stage: Parallelization Optimization
Completed stage: Parallelization Optimization (177 us)
Starting stage: Finalizing Graph Sequence
Completed stage: Finalizing Graph Sequence (325 us)
Starting stage: Completion
Completed stage: Completion (40 us)
Starting stage: Graph Preparation Initializing
Completed stage: Graph Preparation Initializing (176 us)
Starting stage: Graph Transformations and Optimizations
Completed stage: Graph Transformations and Optimizations (6223 us)
Starting stage: Graph Sequencing for Target
Completed stage: Graph Sequencing for Target (2188 us)
Starting stage: VTCM Allocation
Completed stage: VTCM Allocation (1029 us)
Starting stage: Parallelization Optimization
Completed stage: Parallelization Optimization (116 us)
Starting stage: Finalizing Graph Sequence
Completed stage: Finalizing Graph Sequence (273 us)
Starting stage: Completion
Completed stage: Completion (19 us)
Starting stage: Graph Preparation Initializing
Completed stage: Graph Preparation Initializing (325 us)
Starting stage: Graph Transformations and Optimizations
Completed stage: Graph Transformations and Optimizations (14867 us)
Starting stage: Graph Sequencing for Target
Completed stage: Graph Sequencing for Target (7283 us)
Starting stage: VTCM Allocation
Completed stage: VTCM Allocation (2453 us)
Starting stage: Parallelization Optimization
Completed stage: Parallelization Optimization (380 us)
Starting stage: Finalizing Graph Sequence
Completed stage: Finalizing Graph Sequence (616 us)
Starting stage: Completion
Completed stage: Completion (37 us)
This session contains graph nodes that are assigned to the default CPU EP, but fallback to CPU EP has been explicitly disabled by the user.
Skip invalid vtcm_mb: 0
failed to initialize onnx model
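
(Note: the "fallback to CPU EP has been explicitly disabled" message comes from a session configuration entry I set; a sketch of that option, assuming the Python API, is below. With it set, session creation fails outright instead of silently running the unsupported nodes on the CPU EP.)

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Disables falling back to the CPU EP for nodes the QNN EP cannot take.
so.add_session_config_entry("session.disable_cpu_ep_fallback", "1")
```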

github-actions bot added the ep:QNN (issues related to QNN execution provider) label on Nov 13, 2024
HectorSVC (Contributor) commented:
Please enable verbose logging to get more details, then search for "Validation FAILED" in the detailed log. You should be able to see the reason the op validation failed.
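
For example, with the Python API (a sketch; verbosity can also be set through the native APIs):

```python
import onnxruntime as ort

# 0 = VERBOSE; the per-node "Validation FAILED" messages from the QNN EP
# show up at this level.
ort.set_default_logger_severity(0)

so = ort.SessionOptions()
so.log_severity_level = 0  # session-level verbosity as well
```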

uros-ms (Author) commented Nov 14, 2024

Thanks for the suggestion. I got this reason (for all batch_norm ops):
Validation FAILED for nodes (NodeUnit):
Operator type: BatchNormalization Node name: StatefulPartitionedCall/model_v23_1/dec06/batch_normalization_96/FusedBatchNormV3 Node index: 672
REASON : batch_norm_op_builder.cc:466 onnxruntime::qnn::BatchNormOpBuilder::IsOpSupported QNN BatchNorm doesn't support dynamic scale.
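
For anyone else hitting this: "dynamic scale" appears to mean that the scale input of BatchNormalization (input index 1) is not a constant initializer in the graph, e.g. it is produced by other nodes after conversion/quantization. A quick way to check which scale inputs are static, sketched with the onnx Python package (model path is a placeholder):

```python
import onnx

model = onnx.load("model_v23_int8.onnx")  # placeholder path
initializers = {init.name for init in model.graph.initializer}
constant_outputs = {
    out
    for node in model.graph.node
    if node.op_type == "Constant"
    for out in node.output
}

# Report whether each BatchNormalization scale input is statically known.
for node in model.graph.node:
    if node.op_type != "BatchNormalization":
        continue
    scale = node.input[1]
    is_static = scale in initializers or scale in constant_outputs
    print(f"{node.name}: scale={scale!r} static={is_static}")
```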
