Releases: huggingface/huggingface_hub
[v0.33.4] [Tiny-Agent]: Fix schema validation error for default MCP tools
- Omit parameters in default tools of tiny-agent #3214
Full Changelog: v0.33.3...v0.33.4
[v0.33.3] [Tiny-Agent]: Update tiny-agents example
- Update tiny-agents example #3205
Full Changelog: v0.33.2...v0.33.3
[v0.33.2] [Tiny-Agent]: Switch to VSCode MCP format
Full Changelog: v0.33.1...v0.33.2
Breaking changes:
- No more nested `config` mapping; everything now lives at the root level
- Headers at the root level instead of inside `options.requestInit`
- Updated how values are pulled from environment variables (based on input id)
Example of `agent.json`:

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "provider": "nebius",
  "inputs": [
    {
      "type": "promptString",
      "id": "hf-token",
      "description": "Token for Hugging Face API access",
      "password": true
    }
  ],
  "servers": [
    {
      "type": "http",
      "url": "https://huggingface.co/mcp",
      "headers": {
        "Authorization": "Bearer ${input:hf-token}"
      }
    }
  ]
}
```
Find more examples at https://huggingface.co/datasets/tiny-agents/tiny-agents.
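To illustrate the new input-based resolution, here is a minimal sketch of how `${input:...}` placeholders could be substituted from environment variables. This is illustrative only, not the library's actual implementation, and the `input-id -> ENV_VAR` naming convention shown is an assumption:

```python
import os
import re


def resolve_inputs(value: str, env=os.environ) -> str:
    """Replace ${input:<id>} placeholders with environment values.

    Hypothetical helper: the real tiny-agents resolution logic
    (interactive prompting, password masking) lives in huggingface_hub.
    """
    def _sub(match: re.Match) -> str:
        input_id = match.group(1)
        # Assumed convention: input id "hf-token" maps to env var "HF_TOKEN".
        env_key = input_id.upper().replace("-", "_")
        # Leave the placeholder untouched if the variable is not set.
        return env.get(env_key, match.group(0))

    return re.sub(r"\$\{input:([\w-]+)\}", _sub, value)
```

For example, with `HF_TOKEN=hf_abc` set, `resolve_inputs("Bearer ${input:hf-token}")` would return `"Bearer hf_abc"`.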
[v0.33.1]: Inference Providers Bug Fixes, Tiny-Agents Message handling Improvement, and Inference Endpoints Health Check Update
Full Changelog: v0.33.0...v0.33.1
This release introduces bug fixes for chat completion type compatibility and feature extraction parameters, enhanced message handling in tiny-agents, and an updated inference endpoints health check:
- [Tiny agents] Add tool call to messages #3159 by @NielsRogge
- fix: update payload preparation to merge parameters into the output dictionary #3160 by @mishig25
- fix(inference_endpoints): use GET healthRoute instead of GET / to check status #3165 by @mfuntowicz
- Recursive filter_none in Inference Providers #3178 by @Wauplin
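For context on the last fix: previously only top-level `None` values were stripped from provider payloads. A sketch of what recursive `None` filtering looks like (illustrative only, not the library's exact code):

```python
def filter_none(obj):
    """Recursively drop None values from nested dicts and lists."""
    if isinstance(obj, dict):
        return {k: filter_none(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [filter_none(v) for v in obj if v is not None]
    return obj
```

With this, a nested payload such as `{"a": None, "b": {"c": None, "d": 1}}` collapses to `{"b": {"d": 1}}` before being sent to the provider.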
[v0.33.0]: Welcoming Featherless.AI and Groq as Inference Providers!
⚡ New provider: Featherless.AI
Featherless AI is a serverless AI inference provider whose unique model loading and GPU orchestration abilities make an exceptionally large catalog of models available to users. Providers typically offer either low-cost access to a limited set of models, or an unlimited range of models where users manage servers and the associated operating costs. Featherless offers the best of both worlds: unmatched model range and variety, with serverless pricing. Find the full list of supported models on the models page.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="featherless-ai")

completion = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-0528",
    messages=[
        {
            "role": "user",
            "content": "What is the capital of France?"
        }
    ],
)

print(completion.choices[0].message)
```
⚡ New provider: Groq
At the heart of Groq's technology is the Language Processing Unit (LPU™), a new type of end-to-end processing unit system that provides the fastest inference for computationally intensive applications with a sequential component, such as Large Language Models (LLMs). LPUs are designed to overcome the limitations of GPUs for inference, offering significantly lower latency and higher throughput. This makes them ideal for real-time AI applications.
Groq offers fast AI inference for openly available models through an API that developers can easily integrate into their applications, with on-demand, pay-as-you-go pricing for a wide range of openly available LLMs.
```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="groq")

completion = client.chat.completions.create(
    model="meta-llama/Llama-4-Scout-17B-16E-Instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://vagabundler.com/wp-content/uploads/2019/06/P3160166-Copy.jpg"},
                },
            ],
        }
    ],
)

print(completion.choices[0].message)
```
🤖 MCP and Tiny-agents
It is now possible to run tiny-agents with a local inference server, e.g. llama.cpp. 100% local agents are just around the corner!
- [MCP] Add local/remote endpoint inference support by @hanouticelina in #3121
Fixing some DX issues in the `tiny-agents` CLI:
- Fix `tiny-agents` CLI exit issues by @Wauplin in #3125
- [MCP] Reinject JSON parse & runtime tool errors back into the chat history by @hanouticelina in #3137
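Pointing an agent at a local OpenAI-compatible server (such as llama.cpp's `llama-server`) then mostly comes down to supplying an endpoint URL in the agent config. A minimal sketch, assuming a server listening on `localhost:8080` — the exact field names are an assumption, so check the tiny-agents examples dataset for the canonical format:

```json
{
  "model": "Qwen/Qwen2.5-72B-Instruct",
  "endpointUrl": "http://localhost:8080/v1"
}
```

With a config like this, the agent talks to the local server instead of a remote Inference Provider.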
📚 Documentation
New translation from the Hindi-speaking community, for the community!
- Added Hindi translation for git_vs_http.md in concepts section by @february-king in #3156
🛠️ Small fixes and maintenance
😌 QoL improvements
- Make hf-xet more silent by @Wauplin in #3124
- [HfApi] Collections in collections by @hanouticelina in #3120
- Fix inference search by @Wauplin in #3022
- [Inference Providers] Raise warning if provider's status is in error mode by @hanouticelina in #3141
🐛 Bug and typo fixes
- Fix snapshot_download on very large repo (>50k files) by @Wauplin in #3122
- fix tqdm_class argument of subclass of tqdm by @andyxning in #3111
- fix quality by @hanouticelina in #3128
- second example in oauth documentation by @thanosKivertzikidis in #3136
- fix table question answering by @hanouticelina in #3154
🏗️ internal
- Create claude.yml by @julien-c in #3118
- [Internal] prepare for 0.33.0 release by @hanouticelina in #3138
Significant community contributions
The following contributors have made significant changes to the library over the last release:
[v0.32.6] [Upload large folder] fix for wrongly saved upload_mode/remote_oid
- Fix for wrongly saved upload_mode/remote_oid #3113
Full Changelog: v0.32.5...v0.32.6
[v0.32.5] [Tiny-Agents] inject environment variables in headers
- Inject env var in headers + better type annotations #3142
Full Changelog: v0.32.4...v0.32.5
[v0.32.4]: Bug fixes in `tiny-agents`, and fixed input handling for the question-answering task
Full Changelog: v0.32.3...v0.32.4
This release introduces bug fixes to `tiny-agents` and `InferenceClient.question_answering`:
- [MCP] `asyncio.wait()` does not accept bare coroutines #3135 by @hanouticelina
- [MCP] Fix vestigial token yield on early exit #3132 by @danielholanda
- Fix question_answering #3134 by @eugenos-programos
[v0.32.3]: Handle env variables in `tiny-agents`, better CLI exit and handling of MCP tool calls arguments
Full Changelog: v0.32.2...v0.32.3
This release introduces some improvements and bug fixes to `tiny-agents`:
[v0.32.2]: Add endpoint support in Tiny-Agent + fix `snapshot_download` on large repos
Full Changelog: v0.32.1...v0.32.2