After quantizing the model, I load it onto CUDA, and then this error appears frequently:
```
/home/ubuntu/miniconda3/envs/dev-2/lib/python3.12/site-packages/optimum/quanto/library/ops.py:66: UserWarning: An exception was raised while calling the optimized kernel for quanto::unpack: /home/ubuntu/miniconda3/envs/dev-2/lib/python3.12/site-packages/zmq/backend/cython/../../../../.././libstdc++.so.6: version `GLIBCXX_3.4.32' not found (required by /home/ubuntu/miniconda3/envs/dev-2/lib/python3.12/site-packages/optimum/quanto/library/extensions/cuda/build/quanto_cuda.so) Falling back to default implementation.
  warnings.warn(message + " Falling back to default implementation.")
```
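The warning says the compiled CUDA extension (`quanto_cuda.so`) requires the `GLIBCXX_3.4.32` symbol version, but the `libstdc++.so.6` that actually got loaded (the one pulled in via zmq's directory) is older and does not export it. A quick way to confirm the mismatch is to list the `GLIBCXX_*` version strings embedded in that library, roughly what `strings libstdc++.so.6 | grep GLIBCXX` would show. This is a minimal sketch; `glibcxx_versions` is a hypothetical helper, not part of quanto:

```python
import re

def glibcxx_versions(library_path):
    """List the GLIBCXX_* version strings embedded in a shared library.

    Hypothetical diagnostic helper: scans the raw bytes of the file for
    GLIBCXX_ markers, similar to `strings libstdc++.so.6 | grep GLIBCXX`.
    If 'GLIBCXX_3.4.32' is missing from the result, the extension cannot
    load against this libstdc++.
    """
    with open(library_path, "rb") as f:
        data = f.read()
    # Collect unique version strings; sorted lexicographically for display.
    return sorted({m.decode() for m in re.findall(rb"GLIBCXX_[0-9.]+", data)})
```

Pointing it at the path from the warning (`.../zmq/backend/cython/../../../../.././libstdc++.so.6`) should show the newest version that copy provides; if it stops short of `GLIBCXX_3.4.32`, upgrading the env's C++ runtime (e.g. `conda install -c conda-forge libstdcxx-ng`) is a common remedy, though whether that applies here depends on your setup.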
The relevant dispatch logic in `optimum/quanto/library/ops.py` looks like this:

```python
for libname in ["quanto", "quanto_py", "quanto_ext"]:
    torch.library.define(f"{libname}::{name}", schema)

# Provide the implementation for all dispatch keys in the main library
@torch.library.impl(f"quanto::{name}", "default")
def impl(*args, **kwargs):
    if _ext_enabled:
        try:
            return getattr(torch.ops.quanto_ext, name)(*args, **kwargs)
        except Exception as e:
            if isinstance(e, NotImplementedError):
                message = f"No optimized kernel found for quanto::{name}."
            else:
                message = f"An exception was raised while calling the optimized kernel for quanto::{name}: {e}"
            warnings.warn(message + " Falling back to default implementation.")
    return getattr(torch.ops.quanto_py, name)(*args, **kwargs)
```
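So the op is not failing outright: when the optimized `quanto_ext` kernel raises (here, because the extension cannot load against the old libstdc++), the code warns once and silently falls back to the slower pure-Python `quanto_py` implementation. The control flow can be sketched as a standalone dispatcher; `optimized_unpack` and `fallback_unpack` below are stand-ins for the real kernels, not quanto's API:

```python
import warnings

_ext_enabled = True

def optimized_unpack(x):
    # Stand-in for the compiled CUDA kernel; raises the same way the
    # broken extension does when its libstdc++ requirement is unmet.
    raise ImportError("version `GLIBCXX_3.4.32' not found")

def fallback_unpack(x):
    # Stand-in for the pure-Python reference kernel (torch.ops.quanto_py).
    return list(x)

def unpack(x):
    if _ext_enabled:
        try:
            return optimized_unpack(x)
        except Exception as e:
            if isinstance(e, NotImplementedError):
                message = "No optimized kernel found for quanto::unpack."
            else:
                message = f"An exception was raised while calling the optimized kernel for quanto::unpack: {e}"
            # Warn, then fall through to the default implementation.
            warnings.warn(message + " Falling back to default implementation.")
    return fallback_unpack(x)
```

The practical consequence is that results stay correct but you lose the CUDA-accelerated path on every call, which is why fixing the GLIBCXX mismatch (rather than ignoring the warning) is worthwhile.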