[v0.31.0] LoRAs with Inference Providers, `auto` mode for provider selection, embeddings models and more
🧑‍🎨 Introducing LoRAs with fal.ai and Replicate providers
We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning-fast speed ⚡
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")  # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)
```
- [Inference Providers] LoRAs with Replicate by @hanouticelina in #3054
- [Inference Providers] Support for LoRAs with fal by @hanouticelina in #3005
⚙️ `auto` mode for provider selection
You can now automatically select a provider for a model using `auto` mode: it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.
```python
from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order.
client = InferenceClient(provider="auto")
completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)
print(completion.choices[0].message)
```
Note that `auto` is now the default value of the `provider` argument. Previously, the default was `hf-inference`, so this change may be a breaking one if you're not specifying the provider name when initializing `InferenceClient` or `AsyncInferenceClient`.
🧠 Embeddings support with Sambanova (feature-extraction)
We added support for feature extraction (embeddings) inference with the Sambanova provider.
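A minimal sketch of what this enables (the model name is illustrative; any embeddings model served by Sambanova works):

```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="sambanova")

# Returns a numpy array with the embedding(s) for the input text.
embedding = client.feature_extraction(
    "Today is a sunny day",
    model="intfloat/e5-mistral-7b-instruct",  # illustrative model choice
)
print(embedding.shape)
```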
- [Inference Providers] sambanova supports feature extraction by @hanouticelina in #3037
⚡ Other Inference features
The HF Inference API is now fully integrated as an Inference Provider. This means it only supports a predefined list of deployed models, selected based on popularity. Cold-starting arbitrary models from the Hub is no longer supported: if a model isn't already deployed, it won't be available via the HF Inference API.
Miscellaneous improvements and some bug fixes:
- Fix 'sentence-transformers/all-MiniLM-L6-v2' doesn't support task 'feature-extraction' by @Wauplin in #2968
- fix text generation by @hanouticelina in #2982
- Fix HfInference conversational by @Wauplin in #2985
- Fix 'sentence_similarity' on InferenceClient by @tomaarsen in #3004
- Update inference types (automated commit) by @HuggingFaceInfra in #3015
- update text to speech input by @hanouticelina in #3025
- [Inference Providers] fix inference with URL endpoints by @hanouticelina in #3041
- Update inference types (automated commit) by @HuggingFaceInfra in #3051
✅ Of course, all of those inference changes are also available in the `AsyncInferenceClient` async equivalent 🤗
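For instance, a minimal async sketch of the same chat completion call as above:

```python
import asyncio
from huggingface_hub import AsyncInferenceClient

async def main():
    client = AsyncInferenceClient(provider="auto")
    completion = await client.chat.completions.create(
        model="Qwen/Qwen3-235B-A22B",
        messages=[{"role": "user", "content": "What is the capital of France?"}],
    )
    print(completion.choices[0].message)

asyncio.run(main())
```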
🚀 Xet
Thanks to @bpronan's PR, Xet now supports uploading byte arrays:
```python
from huggingface_hub import upload_file

file_content = b"my-file-content"
repo_id = "username/model-name"  # `hf-xet` should be installed and Xet should be enabled for this repo

upload_file(
    path_or_fileobj=file_content,
    path_in_repo="my-file.txt",  # destination path in the repo (required; name is illustrative)
    repo_id=repo_id,
)
```
Additionally, we've added documentation for environment variables used by `hf-xet` to optimize file download/upload performance, including options for caching (`HF_XET_CHUNK_CACHE_SIZE_BYTES`), concurrency (`HF_XET_NUM_CONCURRENT_RANGE_GETS`), high-performance mode (`HF_XET_HIGH_PERFORMANCE`), and sequential writes (`HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY`).
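A quick sketch of how these can be set from Python (the values are illustrative, not recommendations; set them before `hf-xet` performs its first transfer):

```python
import os

# Illustrative values only; tune to your hardware and workload.
os.environ["HF_XET_HIGH_PERFORMANCE"] = "1"                      # high-performance mode
os.environ["HF_XET_NUM_CONCURRENT_RANGE_GETS"] = "16"            # concurrent range downloads
os.environ["HF_XET_CHUNK_CACHE_SIZE_BYTES"] = str(10 * 1024**3)  # 10 GiB chunk cache
```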
- Docs for xet env variables by @rajatarya in #3024

Miscellaneous improvements:
- Minor xet changes: HF_HUB_DISABLE_XET flag, suppress logger.info by @rajatarya in #3039
✨ HF API
We added HTTP download support for files larger than 50GB, enabling more reliable handling of large file downloads.
- Add HTTP Download support for files > 50GB by @rajatarya in #2991
We also added dynamic batching to `upload_large_folder`, replacing the fixed 50-files-per-commit rule with an adaptive strategy that adjusts based on commit success and duration, improving performance and reducing the risk of hitting the commit rate limit on large repositories (see the sketch below).
- Fix dynamic commit size by @maximizemaxwell in #3016
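A minimal sketch of the call that benefits from this change (repo id and folder path are hypothetical):

```python
from huggingface_hub import upload_large_folder

# Commits are now batched adaptively instead of a fixed 50 files per commit.
upload_large_folder(
    repo_id="username/my-large-dataset",  # hypothetical repo
    repo_type="dataset",
    folder_path="path/to/local/folder",
)
```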
We added support for new arguments when creating or updating Hugging Face Inference Endpoints (see the sketch after the list below).
- add route payload to deploy Inference Endpoints by @Vaibhavs10 in #3013
- Add the 'env' parameter to creating/updating Inference Endpoints by @tomaarsen in #3045
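For example, a hedged sketch of passing the new `env` parameter; every other value below is illustrative:

```python
from huggingface_hub import create_inference_endpoint

endpoint = create_inference_endpoint(
    "my-endpoint-name",           # illustrative endpoint name
    repository="gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
    env={"MY_ENV_VAR": "value"},  # new: env variables for the endpoint container
)
```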
💔 Breaking changes
- The default value of the `provider` argument in `InferenceClient` and `AsyncInferenceClient` is now `"auto"` instead of `"hf-inference"` (HF Inference API). This means provider selection will now follow your preferred order set in your inference provider settings. If your code relied on the previous default (`"hf-inference"`), you may need to set it explicitly, as in the snippet below, to avoid unexpected behavior.
- HF Inference API routing update: the inference URL path for `feature-extraction` and `sentence-similarity` tasks has changed from `https://router.huggingface.co/hf-inference/pipeline/{task}/{model}` to `https://router.huggingface.co/hf-inference/models/{model}/pipeline/{task}`.
- [inference] Necessary breaking change: nest task-specific route inside of model route by @julien-c in #3044
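If you need the previous behavior, the one-line fix is to pin the provider explicitly:

```python
from huggingface_hub import InferenceClient

# Restores the pre-0.31.0 default provider.
client = InferenceClient(provider="hf-inference")
```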
🛠️ Small fixes and maintenance
😌 QoL improvements
- Unlist TPUs from SpaceHardware by @Wauplin in #2973
- dev(narugo): disable hf_transfer when custom 'Range' header is assigned by @narugo1992 in #2979
- Improve error handling for invalid eval results in model cards by @hanouticelina in #3000
- Handle Rate Limits in Pagination with Automatic Retries by @Weyaxi in #2970
- Add example for downloading files in subdirectories, related to #3014 by @mixer3d in #3023
- Super-micro-tiny-PR to allow for direct copy-paste :) by @fracapuano in #3030
- Migrate to logger.warning usage by @emmanuel-ferdman in #3056
🐛 Bug and typo fixes
- Retry on transient error in download workflow by @Wauplin in #2976
- fix snapshot download behavior in offline mode when downloading to a local dir by @hanouticelina in #3009
- fix docstring by @hanouticelina in #3040
- fix default CACHE_DIR by @albertcthomas in #3050
🏗️ internal
- fix: fix test_get_hf_file_metadata_from_a_lfs_file as since xet migration by @XciD in #2972
- A better security-wise style bot GH Action by @hanouticelina in #2914
- prepare for next release by @hanouticelina in #2983
- Bump `hf_xet` min version to 1.0.0 + make it required dep on 64 bits by @hanouticelina in #2971
- fix permissions for style bot by @hanouticelina in #3012
- remove (inference only) VCR tests by @hanouticelina in #3021
- remove test by @hanouticelina in #3028
Community contributions
The following contributors have made significant changes to the library over the last release:
- @bpronan
- @tomaarsen
- @Weyaxi
  - Handle Rate Limits in Pagination with Automatic Retries (#2970)
- @rajatarya
- @Vaibhavs10
  - add route payload to deploy Inference Endpoints (#3013)
- @maximizemaxwell
  - Fix dynamic commit size (#3016)
- @emmanuel-ferdman
  - Migrate to logger.warning usage (#3056)