
[v0.31.0] LoRAs with Inference Providers, `auto` mode for provider selection, embeddings models and more

@hanouticelina released this 06 May 20:59

🧑‍🎨 Introducing LoRAs with fal.ai and Replicate providers

We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning-fast speed ⚡

from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai") # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. There is a bicycle parked in front of them and a plant with flowers to the right side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)

⚙️ `auto` mode for provider selection

You can now automatically select a provider for a model using `auto` mode: it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.

from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order.
client = InferenceClient(provider="auto") 

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)

⚠️ Note: "auto" is now the default value for the provider argument. Previously, the default was "hf-inference", so this may be a breaking change if you don't explicitly specify a provider when initializing InferenceClient or AsyncInferenceClient.
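If you relied on the previous behavior, you can keep it by pinning the provider explicitly:

from huggingface_hub import InferenceClient

# explicitly select HF Inference API to keep the pre-0.31.0 default behavior
client = InferenceClient(provider="hf-inference")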

🧠 Embeddings support with Sambanova (feature-extraction)

We added support for feature extraction (embeddings) inference with the Sambanova provider.
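For example, a minimal sketch (the model id below is illustrative; any embeddings model served by Sambanova works the same way):

from huggingface_hub import InferenceClient

client = InferenceClient(provider="sambanova")

# returns a numpy array containing the embedding of the input text
embeddings = client.feature_extraction(
    "Today is a sunny day and I will get some ice cream.",
    model="intfloat/e5-mistral-7b-instruct",
)
print(embeddings.shape)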

⚑ Other Inference features

The HF Inference API is now fully integrated as an Inference Provider, which means it only supports a predefined list of deployed models, selected based on popularity.
Cold-starting arbitrary models from the Hub is no longer supported: if a model isn't already deployed, it won't be available via the HF Inference API.

Miscellaneous improvements and some bug fixes:

✅ Of course, all of those inference changes are available in the AsyncInferenceClient async equivalent 🤗

🚀 Xet

Thanks to @bpronan's PR, Xet now supports uploading byte arrays:

from huggingface_hub import upload_file

# `hf-xet` should be installed and Xet should be enabled for this repo
file_content = b"my-file-content"
repo_id = "username/model-name"

upload_file(
    path_or_fileobj=file_content,
    path_in_repo="my-file.txt",  # destination path inside the repo
    repo_id=repo_id,
)

Additionally, we’ve added documentation for environment variables used by hf-xet to optimize file download/upload performance, including options for caching (HF_XET_CHUNK_CACHE_SIZE_BYTES), concurrency (HF_XET_NUM_CONCURRENT_RANGE_GETS), high-performance mode (HF_XET_HIGH_PERFORMANCE), and sequential writes (HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY).
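As a rough sketch of how these variables can be set before a transfer (the values and the download target below are purely illustrative, not recommendations):

import os

# illustrative values only, tune them for your own hardware and network
os.environ["HF_XET_CHUNK_CACHE_SIZE_BYTES"] = str(10 * 1024**3)   # size of the local chunk cache
os.environ["HF_XET_NUM_CONCURRENT_RANGE_GETS"] = "16"              # number of concurrent range requests
os.environ["HF_XET_HIGH_PERFORMANCE"] = "1"                        # enable high-performance mode
os.environ["HF_XET_RECONSTRUCT_WRITE_SEQUENTIALLY"] = "1"          # write reconstructed files sequentially

from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="username/model-name", filename="model.safetensors")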

Miscellaneous improvements:

  • Removing workaround for deprecated refresh route headers by @bpronan in #2993

✨ HF API

We added HTTP download support for files larger than 50GB, enabling more reliable handling of large file downloads.

We also added dynamic batching to upload_large_folder, replacing the fixed 50-files-per-commit rule with an adaptive strategy that adjusts based on commit success and duration, improving performance and reducing the risk of hitting the commit rate limit on large repositories.
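Batching happens internally, so the call itself is unchanged. A minimal sketch (repo id and folder path are illustrative):

from huggingface_hub import HfApi

api = HfApi()

# files are grouped into commits automatically; the batch size now adapts
# to commit success and duration instead of a fixed 50 files per commit
api.upload_large_folder(
    repo_id="username/my-large-model",
    repo_type="model",
    folder_path="path/to/local/folder",
)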

We added support for new arguments when creating or updating Hugging Face Inference Endpoints; see the sketch after the list below.

  • add route payload to deploy Inference Endpoints by @Vaibhavs10 in #3013
  • Add the 'env' parameter to creating/updating Inference Endpoints by @tomaarsen in #3045
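As a rough sketch of how the new env argument can be passed when creating an endpoint (all other values are illustrative and should be adapted to your deployment):

from huggingface_hub import create_inference_endpoint

# illustrative values only, adapt repository, hardware and region to your needs
endpoint = create_inference_endpoint(
    name="my-endpoint",
    repository="openai-community/gpt2",
    framework="pytorch",
    task="text-generation",
    accelerator="cpu",
    vendor="aws",
    region="us-east-1",
    type="protected",
    instance_size="x2",
    instance_type="intel-icl",
    env={"MY_ENV_VAR": "value"},  # new `env` parameter (#3045)
)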

💔 Breaking changes

  • The default value of the provider argument in InferenceClient and AsyncInferenceClient is now "auto" instead of "hf-inference" (HF Inference API). This means provider selection will now follow your preferred order set in your inference provider settings.
    If your code relied on the previous default ("hf-inference"), you may need to update it explicitly to avoid unexpected behavior.
  • HF Inference API Routing Update: The inference URL path for feature-extraction and sentence-similarity tasks has changed from https://router.huggingface.co/hf-inference/pipeline/{task}/{model} to https://router.huggingface.co/hf-inference/models/{model}/pipeline/{task}.
  • [inference] Necessary breaking change: nest task-specific route inside of model route by @julien-c in #3044

🛠️ Small fixes and maintenance

😌 QoL improvements

🐛 Bug and typo fixes

🏗️ internal

Community contributions

The following contributors have made significant changes to the library over the last release: