Conversation

Contributor

@dsfaccini dsfaccini commented Nov 17, 2025

web-based chat interface for Pydantic AI agents

  1. new module pydantic_ai.ui.web
  2. new method Agent.to_web()

fastapi

  • app = create_chat_app(agent)

  • the following endpoints come preconfigured:

    • GET / and /:id - serve the chat UI
    • POST /api/chat - Main chat endpoint using VercelAIAdapter
    • GET /api/configure - Returns available models and builtin tools
    • GET /api/health - Health check
    • NOTE: I'm counting on FastAPI to complain if the user tries to add conflicting routes; otherwise we could add a warning in the respective docs.
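The route-conflict concern in the note above could also be checked explicitly before mounting. A minimal sketch — the `RESERVED_PATHS` set and `find_route_conflicts` helper are hypothetical, not part of this PR:

```python
# Hypothetical guard for route conflicts with the preconfigured endpoints.
RESERVED_PATHS = {'/', '/{id}', '/api/chat', '/api/configure', '/api/health'}

def find_route_conflicts(user_paths: list[str]) -> list[str]:
    """Return any user-supplied paths that collide with the preconfigured ones."""
    return sorted(RESERVED_PATHS.intersection(user_paths))
```

A helper like this would let us emit a clear warning instead of relying on FastAPI's own error.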

options and example

NOTE: the module for options is currently pydantic_ai.ui.web.

  • pre-configured model options:

    • anthropic:claude-sonnet-4-5
    • openai-responses:gpt-5
    • google-gla:gemini-2.5-pro
  • supported builtin tools:

    • web_search
    • code_execution
    • image_generation
# app.py
import logfire
from pydantic_ai import Agent

logfire.configure(send_to_logfire='if-token-present')
logfire.instrument_pydantic_ai()

agent = Agent('openai:gpt-5')

@agent.tool
def get_weather(city: str) -> str:
    return f"The weather in {city} is sunny"

app = agent.to_web()

logfire.instrument_fastapi(app, capture_headers=True)

# Run with: uvicorn app:app

testing

  • 7 tests in tests/test_ui_web.py

notes

  • UI is served from CDN: @pydantic/[email protected]
  • Uses Vercel AI protocol for chat streaming
  • TODO: add clai web command to launch from the CLI (as in uvx pydantic-work without the whole URL magic)
  • TODO: should I add a new doc at docs/ui/to_web.md? I'd also reference this in docs/ui/overview.md and docs/agents.md

EDIT: if you try it out, it's worth noting that the current hosted UI doesn't handle ErrorChunks, so you'll get no spinner and no response when there's a model-level error, and FastAPI will return a 200 anyway.
This happens, for instance, when you use a model for which you don't have a valid API key in your environment.
I opened a PR for the error chunks here pydantic/ai-chat-ui#4.

Closes #3295

args = parser.parse_args(args_list)

# Handle web subcommand
if args.command == 'web':
Collaborator

Maybe it should be --web so it doesn't conflict with the prompt arg?

Contributor Author

yeahp, changed that now


self._get_toolset().apply(_set_sampling_model)

def to_web(self) -> Any:
Collaborator

We're gonna need some args here -- have a look at the to_a2a and to_ag_ui methods. Not saying we need all of those args, but some may be useful

@dsfaccini
Contributor Author

I just pushed an update to this removing the AST aspect and (hopefully) fixing the tests so they pass in CI

I haven't addressed the comments yet, so it isn't reviewable yet

args = parser.parse_args(args_list)

# Handle web subcommand
if args.command == 'web':
Contributor Author

yeahp, changed that now


@app.get('/')
@app.get('/{id}')
async def index(request: Request, version: str | None = Query(None)): # pyright: ignore[reportUnusedFunction]
Collaborator

I'm not sure I understand the need for a version arg. An older version than the default is worse, and a newer version may not work with the API data model. I think they should develop in tandem, with a pinned version on this side.

What we could do is add a frontend_url argument to the to_web method to allow the entire thing to be overridden easily?

Contributor Author

frontend_url remains relevant, I haven't included logic for this

@dsfaccini
Contributor Author

dsfaccini commented Nov 29, 2025

been having some fun testing the combinations, for anyone who wants to try them:

combinations to try out the agent UI

1. Generic agent with explicit model

source .env && uv run --with . clai web -m openai:gpt-4.1-mini

2. Agent with model (uses agent's configured model)

source .env && uv run --with . clai web -a clai.clai._test_agents:chat_agent

3. Agent without model + CLI model

source .env && uv run --with . clai web -a clai.clai._test_agents:agent_no_model -m anthropic:claude-haiku-4-5

4. Multiple models (first is default, others are options in UI)

source .env && uv run --with . clai web -m google-gla:gemini-2.5-flash-lite -m openai:gpt-4.1-mini -m anthropic:claude-haiku-4-5

5. Override agent's model with different one

source .env && uv run --with . clai web -a clai.clai._test_agents:chat_agent -m google-gla:gemini-2.5-flash-lite

6. Single tool - web search enabled

source .env && uv run --with . clai web -m openai:gpt-4.1-mini -t web_search

7. Multiple tools

source .env && uv run --with . clai web -m anthropic:claude-haiku-4-5 -t web_search -t code_execution

8. Agent with builtin_tools configured

source .env && uv run --with . clai web -a clai.clai._test_agents:agent_with_tools

9. Roleplay waiter instructions

source .env && uv run --with . clai web -m openai:gpt-4.1-mini -i "You're a grumpy Parisian waiter at a trendy bistro frequented by American tourists. You're secretly proud of the food but act annoyed by every question. Pepper your responses with French words and sighs."

10. With MCP config

source .env && uv run --with . clai web -m google-gla:gemini-2.5-flash-lite --mcp mcp_servers.json

11. ERROR: No model, no agent (should error)

source .env && uv run --with . clai web

12. ERROR: Agent without model, no CLI model (should error)

source .env && uv run --with . clai web -a clai.clai._test_agents:agent_no_model

13. WARNING: Unknown tool (should warn)

source .env && uv run --with . clai web -m openai:gpt-4.1-mini -t definitely_not_a_real_tool

Some fun alternative instructions you could swap in for #9:

Pirate customer service

source .env && uv run --with . clai web -m anthropic:claude-haiku-4-5 -i "You're a pirate who somehow ended up working tech support. Answer questions helpfully but can't stop using nautical terms and saying 'arrr'."

Overly enthusiastic fitness coach

source .env && uv run --with . clai web -m google-gla:gemini-2.5-flash-lite -i "You're an extremely enthusiastic fitness coach who relates EVERYTHING back to exercise and healthy living. Even coding questions get workout analogies."

Noir detective

source .env && uv run --with . clai web -m openai:gpt-4.1-mini -i "You're a 1940s noir detective narrating your investigation. Every question is a 'case' and every answer is delivered in hard-boiled prose with lots of rain metaphors."

.env.example

export ANTHROPIC_API_KEY=""
export OPENAI_API_KEY=""
export GOOGLE_API_KEY=""

When we publish, it should naturally just run as uvx clai web .... We could support a --url to fetch the agent code from; that would make it easier to publish examples, though that sounds a bit dangerous as well.

@dsfaccini dsfaccini requested a review from DouweM December 1, 2025 18:12
clai/README.md Outdated

- `--agent`, `-a`: Agent to serve in `module:variable` format
- `--models`, `-m`: Comma-separated models to make available (e.g., `gpt-5,sonnet-4-5`)
- `--tools`, `-t`: Comma-separated builtin tool IDs to enable (e.g., `web_search,code_execution`)
Collaborator

Link to builtin tools docs please. We may also need to list all the IDs as I don't think they're documented anywhere

Collaborator

Is "to enable" correct? Are they all enabled by default or just available as options?

Contributor Author

the most correct term would probably be "offer" or "put at the user's disposal" I guess, but "enable" seems good enough for the user

clai/README.md Outdated

```
usage: clai [-h] [-m [MODEL]] [-a AGENT] [-l] [-t [CODE_THEME]] [--no-stream] [--version] [prompt]
usage: clai [-h] [-m [MODEL]] [-a AGENT] [-l] [-t [CODE_THEME]] [--no-stream] [--version] [prompt] {web} ...
Collaborator

This is the real help output right? We should be able to get things to render such that web is clearly a subcommand with its own options, right now it looks like it's a "web" word at the end of the regular command.

Contributor Author

(screenshot of the rendered help output)

that's how it shows

if tools:
for tool_id in tools:
tool_cls = get_builtin_tool_cls(tool_id)
if tool_cls is None or tool_id in ('url_context', 'mcp_server'):
Collaborator

We shouldn't have magic values like this inline; these should be a constant

Contributor Author

you're right, replaced with a constant

for tool_id in tools:
tool_cls = get_builtin_tool_cls(tool_id)
if tool_cls is None or tool_id in ('url_context', 'mcp_server'):
console.print(f'[yellow]Warning: Unknown tool "{tool_id}", skipping[/yellow]')
Collaborator

Let's have a separate error for mcp_server like for memory below

Contributor Author

yeah, made a reusable error that lists all the errors

if tool_cls is None or tool_id in ('url_context', 'mcp_server'):
console.print(f'[yellow]Warning: Unknown tool "{tool_id}", skipping[/yellow]')
continue
if tool_id == 'memory':
Collaborator

I'd rather have a constant set of unsupported builtin tool IDs, and then a generic error like "X is not supported in the web UI because it requires configuration" or something like that
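The suggestion above might look something like this — the constant name and the message wording are hypothetical, not what was merged:

```python
# Hypothetical constant set of tool IDs unsupported in the web UI,
# plus one generic error instead of per-tool special cases.
UNSUPPORTED_WEB_TOOL_IDS = frozenset({'url_context', 'mcp_server', 'memory'})

def validate_tool_id(tool_id: str, known_ids: set[str]) -> None:
    """Reject unsupported or unknown tool IDs with a single generic message."""
    if tool_id in UNSUPPORTED_WEB_TOOL_IDS:
        raise ValueError(
            f'{tool_id!r} is not supported in the web UI because it requires configuration'
        )
    if tool_id not in known_ids:
        raise ValueError(f'Unknown tool {tool_id!r}')
```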

| None = None,
builtin_tools: list[AbstractBuiltinTool] | None = None,
) -> Starlette:
"""Create a Starlette app that serves a web chat UI for this agent.
Collaborator

I had a comment somewhere about exposing more starlette arguments like to_a2a and to_ag_ui do, can we do that please?

Collaborator

We should also support instructions, and any other options to run/iter that makes sense to override. Like model_settings etc. I believe to_ag_ui does that too

Contributor Author

I took that comment to mean that we should be passing the models and builtin tools as args, since they were part of a config file before, I'm not sure otherwise what other options we should make available... do you have a link to an example?

Collaborator

@dsfaccini Essentially to_web wraps both agent.run and Starlette(), which both take many args people may want to tweak. So for example look at AbstractAgent.to_ag_ui

Contributor Author

added settings and deps for now, the Starlette-specific args we discussed can be set externally, let's talk if there are any agent-specific ones that are missing here

for tool in params.builtin_tools:
if not isinstance(tool, tuple(supported_types)):
raise UserError(
f'Builtin tool {type(tool).__name__} is not supported by this model. '
Collaborator

If multiple tools are unsupported, we'd now get separate exceptions and only see the first. I'd rather have one exception that lists all the unsupported types. So we can do effectively if len(params.builtin_tools - self.profile.supported_builtin_tools) > 0 (with the correct types of course)
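A self-contained sketch of collecting every offending tool into a single exception, as suggested above — class and function names here are stand-ins, not the library's API:

```python
class UserError(Exception):
    """Stand-in for pydantic_ai's UserError."""

def check_builtin_tools(builtin_tools, supported_types):
    """Raise once, listing every unsupported builtin tool type by name."""
    unsupported = sorted(
        {type(t).__name__ for t in builtin_tools if not isinstance(t, tuple(supported_types))}
    )
    if unsupported:
        raise UserError(
            'Builtin tools not supported by this model: ' + ', '.join(unsupported)
        )
```

This way the caller sees all problems at once instead of fixing them one exception at a time.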

return _format_model_label(self.model_name)

@classmethod
def supported_builtin_tools(cls) -> frozenset[type[AbstractBuiltinTool]]:
Collaborator

Can this be a property? Assuming a class property is a thing

Contributor Author

nope, properties need "The Self"
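For context: stacking @classmethod with @property was deprecated in Python 3.11 and removed in 3.13, so a plain classmethod is the portable choice here. A toy illustration (the class and values are hypothetical):

```python
class ProfileDemo:
    """Illustrates class-level access without an instance ("self")."""

    _supported = frozenset({'web_search', 'code_execution'})

    @classmethod
    def supported_builtin_tools(cls) -> frozenset[str]:
        # callable directly on the class, no instance required
        return cls._supported
```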


if model_request_parameters.builtin_tools:
if model_request_parameters.builtin_tools: # pragma: no cover
# this check is done in the base Model class - leave this as a placeholder for when Bedrock supports built-in tools
Collaborator

Let's remove it like we did for Cohere

agent=agent,
model=extra_data.model,
builtin_tools=request_builtin_tools,
toolsets=_toolsets,
Collaborator

I'm starting to think that maybe --mcp doesn't make sense as we've currently implemented it, as they are always included instead of being selectable like the builtin tools... For consistency, either both should always be on, both should be optional, or it should be configurable which are enabled by default and which are optional, but the current behavior is confusingly inconsistent.

Of course that wasn't an issue when we used MCPServerTool before, but I thought that was too limiting, as it relied on provider support when we don't really have to.

From a product perspective, I think we should make the MCP servers selectable as well, but I don't want to wait to get that ready before we ship this, as we'd need to update the frontend (i.e. not use the builtinTools key? have a new key for tools that do not rely on model support?). I think it's best to split that out for now, and create a new followup PR.

In general smaller PRs with incremental improvements are better than dumping a huge feature in one go, so I'd rather ship what's ready now, and then do the rest later.

Contributor Author

noted, will remove mcp support for now

streaming_response = await VercelAIAdapter.dispatch_request(
request,
agent=agent,
model=extra_data.model,
Collaborator

If a model instance was provided, we should use it instead of passing in the ID directly! We need to have a map of model IDs to the original thing that was provided (model instance or name)
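One way to read the suggestion above is a map from model ID back to whatever the caller originally provided, so instances round-trip intact — the helper name is hypothetical:

```python
def build_model_map(models):
    """Map each model's string ID to the original value (name or instance)."""
    mapping = {}
    for m in models:
        # assumes Model instances expose a model_name string, per the library's convention
        model_id = m if isinstance(m, str) else m.model_name
        mapping[model_id] = m
    return mapping
```

The chat endpoint could then look up extra_data.model in this map and pass the original object to the adapter.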

def add_api_routes(
app: Starlette,
agent: Agent,
models: list[ModelInfo] | None = None,
Collaborator

The comment on the model=extra_data.model line makes me think we should be passing the raw model instances/names into this method, not the pre-processed ModelInfo. And we should generate the ModelInfo inside this method / inside the configure endpoint.


app = Starlette()

add_api_routes(
Collaborator

I think we can refactor this to return a Starlette router for just the API that can then be mounted into the main starlette app. that way it doesn't need to take app, and the API is more cleanly separate from the UI
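The refactor being proposed could be sketched like this — Route is a stand-in for starlette.routing.Route and the endpoint bodies are omitted:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Route:  # stand-in for starlette.routing.Route
    path: str
    endpoint: Callable[..., Any]
    methods: list[str] = field(default_factory=lambda: ['GET'])

def create_api_router(agent: Any) -> list[Route]:
    """Build the API routes standalone so the caller can mount them under /api."""
    async def chat(request): ...
    async def configure(request): ...
    async def health(request): ...
    return [
        Route('/chat', chat, ['POST']),
        Route('/configure', configure),
        Route('/health', health),
    ]
```

Returning routes (or a Mount) keeps the API cleanly separate from the UI routes and means the function never has to mutate the main app.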

Added link for web chat UI screenshot.
@dsfaccini
Contributor Author

working on removing mcp support and the remaining 4 comments

- return routes instead of passing app into create api routes
- handle model instance resolution inside api route creator
- update docs


Development

Successfully merging this pull request may close these issues.

Web chat interface for any agent

2 participants