Support for new diffuser: flux1.schnell #272
Yes, exactly, you will need that. But since the classes are simple, as you can see here, you can probably start making progress already by implementing them in your own project:
But of course, we would appreciate your contributions to quanto as well :)
Thanks for the quick answer! I did it like this:
Do you know if that's the correct way to use it? Or is an additional freeze step necessary between quantization and saving?
After you quantize, an additional freeze is not necessary, as you can see it happens within the
Rest looks good to me.
This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.
This issue was closed because it has been stalled for 5 days with no activity.
@sayakpaul
Hello,
I am looking for support for saving and loading the flux1.schnell model from Blackforest.
Following your code from the "Bonus" here
Saving
Loading
I am looking for a similar option to save and load the two quantized models in this repo here,
see lines 38, 45, 46
and 35, 48, 49.
If I'm understanding this correctly, there would need to be something like
from optimum.quanto import QuantizedFluxTransformer2DModel
for the transformer, and
from optimum.quanto import T5EncoderModel
for the text_encoder_2 to be able to save and load the quantized models for the transformer and encoder. Is that correct? Or is there another possibility which avoids importing the quantized version of a model from optimum.quanto?
Thank you!
@sayakpaul