paddlepaddle 2.5.2, paddleslim 2.6, paddlelite 2.13rc0, Python 3.10, platform: x86

My model contains a matmul_v2 operator. After quantizing it with PaddleSlim's auto-compression, the quantized model runs correctly under Paddle Inference. However, converting it with paddle_lite_opt --model_dir=./inference_model_q --optimize_out=p --optimize_out_type=naive_buffer --valid_targets=x86 fails with:

Model is successfully loaded!
[F 2/ 3 22:10: 6.227 ...optimizer/mir/static_kernel_pick_pass.cc:180 Apply] Check failed: !instruct.kernels().empty(): No kernels found for matmul_v2
Aborted (core dumped)

If I leave the matmul_v2 operator unquantized, paddle_lite_opt converts the model to an .nb file and inference works. Is this a problem with the matmul_v2 operator or with the x86 platform? I saw that the x86 platform does not support matmul_v2, yet as long as matmul_v2 is left unquantized, the model converts, runs, and deploys correctly.
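A first debugging step (a sketch using flags documented for Paddle-Lite's opt tool; the model path is the one from this report) is to ask paddle_lite_opt which operators the x86 target supports and which operators the quantized model actually contains:

```shell
# List the operators this Paddle-Lite build supports on the x86 target
paddle_lite_opt --print_supported_ops=true --valid_targets=x86

# List the operators in the quantized model and whether each is supported on x86
paddle_lite_opt --print_model_ops=true --model_dir=./inference_model_q --valid_targets=x86
```

If the quantized graph contains an int8 variant of matmul_v2 with no registered x86 kernel, that would explain why the float model converts while the quantized one aborts in static_kernel_pick_pass.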
Could someone please take a look at this issue? Thanks!