# Agents on the Hub

This page compiles all the libraries and tools Hugging Face offers for agentic workflows.

## smolagents

[smolagents](https://github.com/huggingface/smolagents) is a lightweight library that covers all agentic use cases, from code-writing agents to computer use, in a few lines of code. It is model-agnostic, supporting local models served with Hugging Face Transformers, as well as models offered through [Inference Providers](../inference-providers/index.md) and proprietary model providers.

It offers a unique kind of agent: the `CodeAgent`, an agent that writes its actions in Python code.
It also supports the standard agent that writes actions in JSON blobs, as most other agentic frameworks do, called `ToolCallingAgent`.
To learn more about writing actions in code vs. JSON, check out our [new short course on DeepLearning.AI](https://www.deeplearning.ai/short-courses/building-code-agents-with-hugging-face-smolagents/).
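
As a rough sketch of the difference, here is how the two agent types might be instantiated side by side. The `InferenceClientModel` and `DuckDuckGoSearchTool` classes below match recent smolagents releases; check your installed version.

```python
from smolagents import CodeAgent, ToolCallingAgent, InferenceClientModel, DuckDuckGoSearchTool

model = InferenceClientModel(model_id="Qwen/Qwen2.5-Coder-32B-Instruct")

# A CodeAgent expresses each action as a Python snippet that is executed
code_agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=model)

# A ToolCallingAgent expresses each action as a JSON tool call instead
tool_agent = ToolCallingAgent(tools=[DuckDuckGoSearchTool()], model=model)

code_agent.run("How many seconds does light take to travel from the Sun to Earth?")
```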

If you want to avoid defining agents yourself, the easiest way to start an agent is through the CLI, using the `smolagent` command.

```bash
smolagent "Plan a trip to Tokyo, Kyoto and Osaka between Mar 28 and Apr 7." --model-type "InferenceClientModel" --model-id "Qwen/Qwen2.5-Coder-32B-Instruct" --imports "pandas numpy" --tools "web_search"
```

Agents can be pushed to the Hugging Face Hub as Spaces. Check out all the cool agents people have built [here](https://huggingface.co/spaces?filter=smolagents&sort=likes).
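
For instance, assuming the `push_to_hub` method available on agents in recent smolagents releases (the repo id below is a placeholder), sharing an agent could look like this:

```python
from smolagents import CodeAgent, InferenceClientModel, DuckDuckGoSearchTool

agent = CodeAgent(tools=[DuckDuckGoSearchTool()], model=InferenceClientModel())

# Placeholder repo id: replace with your own username and agent name
agent.push_to_hub("your-username/my-agent")
```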

smolagents also supports MCP servers as tools, as follows:

```python
from smolagents import MCPClient, CodeAgent, InferenceClientModel
from mcp import StdioServerParameters
import os

# Any smolagents model works here; InferenceClientModel uses Inference Providers
model = InferenceClientModel()

server_parameters = StdioServerParameters(
    command="uvx",  # Using uvx ensures dependencies are available
    args=["--quiet", "pubmedmcp@0.1.3"],
    env={"UV_PYTHON": "3.12", **os.environ},
)

with MCPClient(server_parameters) as tools:
    agent = CodeAgent(tools=tools, model=model, add_base_tools=True)
    agent.run("Please find the latest research on COVID-19 treatment.")
```

Learn more [in the documentation](https://huggingface.co/docs/smolagents/tutorials/tools#use-mcp-tools-with-mcpclient-directly).

## huggingface.js mcp-client

Huggingface.js offers an MCP client served with [Inference Providers](https://huggingface.co/docs/inference-providers/en/index) or local LLMs. Getting started with it is as simple as running `pnpm agent`. You can plug and play different models and providers by setting the `PROVIDER` and `MODEL_ID` environment variables.

```bash
export HF_TOKEN="hf_..."
export MODEL_ID="Qwen/Qwen2.5-72B-Instruct"
export PROVIDER="nebius"
npx @huggingface/mcp-client
```

Alternatively, you can use any local LLM (for example, via LM Studio):

```bash
ENDPOINT_URL=http://localhost:1234/v1 \
MODEL_ID=lmstudio-community/Qwen3-14B-GGUF \
npx @huggingface/mcp-client
```

You can get more information about mcp-client [here](https://huggingface.co/docs/huggingface.js/en/mcp-client/README).

## Gradio MCP Server / Tools

You can build an MCP server in just a few lines of Python with Gradio. If you have an existing Gradio app or Space you'd like to use as an MCP server / tool, it's just a single-line change.

To make a Gradio application an MCP server, simply pass in `mcp_server=True` when launching your demo, as follows.

```python
import gradio as gr

def generate_image(prompt: str):
    """
    Generate an image based on a text prompt

    Args:
        prompt: a text string describing the image to generate
    """
    # Implementation omitted; the docstring above becomes the tool description
    pass

demo = gr.Interface(
    fn=generate_image,
    inputs="text",
    outputs="image",
    title="Image Generator"
)

demo.launch(mcp_server=True)
```

The MCP server will be available at `http://your-server:port/gradio_api/mcp/sse`, where your application is served. It will have a tool corresponding to each function in your Gradio app, with the tool description automatically generated from the docstrings of your functions.

Lastly, add this to the settings of the MCP client of your choice (e.g. Cursor).

```json
{
  "mcpServers": {
    "gradio": {
      "url": "http://your-server:port/gradio_api/mcp/sse"
    }
  }
}
```

This is very powerful because it lets the LLM use any Gradio application as a tool. You can find thousands of them on [Spaces](https://huggingface.co/spaces).
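
As a sketch of what that looks like end to end, the snippet below points smolagents' `MCPClient` at a deployed Space's MCP endpoint. The Space URL is a placeholder, and passing an SSE server as a `{"url": ...}` dict is an assumption about recent smolagents releases.

```python
from smolagents import MCPClient, CodeAgent, InferenceClientModel

# Placeholder URL: substitute a Space that serves a Gradio MCP endpoint
space_mcp = {"url": "https://your-username-image-generator.hf.space/gradio_api/mcp/sse"}

with MCPClient(space_mcp) as tools:
    agent = CodeAgent(tools=tools, model=InferenceClientModel())
    agent.run("Generate an image of a sunset over the Tokyo skyline.")
```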