Commit c1ba780

separate chat command
1 parent cd1e683 commit c1ba780

6 files changed: +94 −78 lines changed

clai/README.md

Lines changed: 11 additions & 24 deletions
@@ -25,26 +25,26 @@ export OPENAI_API_KEY='your-api-key-here'
 Then with [`uvx`](https://docs.astral.sh/uv/guides/tools/), run:
 
 ```bash
-uvx clai
+uvx clai chat
 ```
 
 Or to install `clai` globally [with `uv`](https://docs.astral.sh/uv/guides/tools/#installing-tools), run:
 
 ```bash
 uv tool install clai
 ...
-clai
+clai chat
 ```
 
 Or with `pip`, run:
 
 ```bash
 pip install clai
 ...
-clai
+clai chat
 ```
 
-Either way, running `clai` will start an interactive session where you can chat with the AI model. Special commands available in interactive mode:
+Either way, running `clai chat` will start an interactive session where you can chat with the AI model. Special commands available in interactive mode:
 
 - `/exit`: Exit the session
 - `/markdown`: Show the last response in markdown format
@@ -112,30 +112,17 @@ For full documentation, see [Web Chat UI](https://ai.pydantic.dev/ui/web/).
 ## Help
 
 ```
-usage: clai [-h] [-m [MODEL]] [-a AGENT] [-l] [-t [CODE_THEME]] [--no-stream] [--version] [prompt] {web} ...
+usage: clai [-h] [-l] [--version] {chat,web} ...
 
 Pydantic AI CLI v...
 
-Special prompts:
-* `/exit` - exit the interactive mode (ctrl-c and ctrl-d also work)
-* `/markdown` - show the last markdown output of the last question
-* `/multiline` - toggle multiline mode
-* `/cp` - copy the last response to clipboard
-
 positional arguments:
-  prompt                AI Prompt, if omitted fall into interactive mode
-  {web}                 Available commands
-    web                 Launch web chat UI for an agent
+  {chat,web}            Available commands
+    chat                Interactive chat with an AI model
+    web                 Launch web chat UI for an agent
 
 options:
-  -h, --help            show this help message and exit
-  -m [MODEL], --model [MODEL]
-                        Model to use, in format "<provider>:<model>" e.g. "openai:gpt-5" or "anthropic:claude-sonnet-4-5". Defaults to "openai:gpt-5".
-  -a AGENT, --agent AGENT
-                        Custom Agent to use, in format "module:variable", e.g. "mymodule.submodule:my_agent"
-  -l, --list-models     List all available models and exit
-  -t [CODE_THEME], --code-theme [CODE_THEME]
-                        Which colors to use for code, can be "dark", "light" or any theme from pygments.org/styles/. Defaults to "dark" which works well on dark terminals.
-  --no-stream           Disable streaming from the model
-  --version             Show version and exit
+  -h, --help            show this help message and exit
+  -l, --list-models     List all available models and exit
+  --version             Show version and exit
 ```

docs/cli.md

Lines changed: 8 additions & 6 deletions
@@ -22,26 +22,26 @@ export OPENAI_API_KEY='your-api-key-here'
 Then with [`uvx`](https://docs.astral.sh/uv/guides/tools/), run:
 
 ```bash
-uvx clai
+uvx clai chat
 ```
 
 Or to install `clai` globally [with `uv`](https://docs.astral.sh/uv/guides/tools/#installing-tools), run:
 
 ```bash
 uv tool install clai
 ...
-clai
+clai chat
 ```
 
 Or with `pip`, run:
 
 ```bash
 pip install clai
 ...
-clai
+clai chat
 ```
 
-Either way, running `clai` will start an interactive session where you can chat with the AI model. Special commands available in interactive mode:
+Either way, running `clai chat` will start an interactive session where you can chat with the AI model. Special commands available in interactive mode:
 
 - `/exit`: Exit the session
 - `/markdown`: Show the last response in markdown format
@@ -122,14 +122,16 @@ To get help on the CLI, use the `--help` flag:
 
 ```bash
 uvx clai --help
+uvx clai chat --help
+uvx clai web --help
 ```
 
 ### Choose a model
 
 You can specify which model to use with the `--model` flag:
 
 ```bash
-uvx clai --model anthropic:claude-sonnet-4-0
+uvx clai chat --model anthropic:claude-sonnet-4-0
 ```
 
 (a full list of models available can be printed with `uvx clai --list-models`)
@@ -147,7 +149,7 @@ agent = Agent('openai:gpt-5', instructions='You always respond in Italian.')
 Then run:
 
 ```bash
-uvx clai --agent custom_agent:agent "What's the weather today?"
+uvx clai chat --agent custom_agent:agent "What's the weather today?"
 ```
 
 The format must be `module:variable` where:
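
For reference, the `--agent custom_agent:agent` example in the hunks above pairs a module name with a variable name. A minimal sketch of such a module, assuming the file is called `custom_agent.py` (the agent definition itself is taken from the hunk header above; the file name is only illustrative):

```python
# custom_agent.py -- illustrative file name; the `module:variable` format requires
# an importable module whose named variable holds an Agent instance.
from pydantic_ai import Agent

# `agent` is the part after the colon in `--agent custom_agent:agent`.
agent = Agent('openai:gpt-5', instructions='You always respond in Italian.')
```

With the new subcommand it is invoked as `uvx clai chat --agent custom_agent:agent "What's the weather today?"`, exactly as in the updated docs above.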

docs/ui/web.md

Lines changed: 2 additions & 2 deletions
@@ -50,11 +50,11 @@ Create a web app from an agent instance using [`Agent.to_web()`][pydantic_ai.age
 from pydantic_ai import Agent
 from pydantic_ai.builtin_tools import WebSearchTool
 from pydantic_ai.models.anthropic import AnthropicModel
-from pydantic_ai.models.openai import OpenAIModel
+from pydantic_ai.models.openai import OpenAIChatModel
 
 # Create separate models with their own custom configuration
 anthropic_model = AnthropicModel('claude-sonnet-4-5')
-openai_model = OpenAIModel('gpt-5', api_key='custom-key')
+openai_model = OpenAIChatModel('gpt-5', provider='openai')
 
 agent = Agent(openai_model)
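
Putting the context and changed lines of this hunk together, the updated docs snippet reads roughly as below. This is a sketch of just the visible lines; the `Agent.to_web()` call referenced in the hunk header sits further down the docs page and is not part of this diff:

```python
from pydantic_ai import Agent
from pydantic_ai.builtin_tools import WebSearchTool  # imported here, presumably used further down the docs example
from pydantic_ai.models.anthropic import AnthropicModel
from pydantic_ai.models.openai import OpenAIChatModel  # was OpenAIModel before this commit

# Create separate models with their own custom configuration
anthropic_model = AnthropicModel('claude-sonnet-4-5')
openai_model = OpenAIChatModel('gpt-5', provider='openai')  # provider string instead of a direct api_key argument

agent = Agent(openai_model)
```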

pydantic_ai_slim/pydantic_ai/_cli/__init__.py

Lines changed: 51 additions & 33 deletions
@@ -128,18 +128,34 @@ def cli_exit(prog_name: str = 'pai'):  # pragma: no cover
     sys.exit(cli(prog_name=prog_name))
 
 
-def cli(  # noqa: C901
-    args_list: Sequence[str] | None = None, *, prog_name: str = 'pai', default_model: str = 'openai:gpt-5'
-) -> int:
+def cli(args_list: Sequence[str] | None = None, *, prog_name: str = 'pai', default_model: str = 'openai:gpt-5') -> int:
     """Run the CLI and return the exit code for the process."""
     # we don't want to autocomplete or list models that don't include the provider,
     # e.g. we want to show `openai:gpt-4o` but not `gpt-4o`
     qualified_model_names = [n for n in get_literal_values(KnownModelName.__value__) if ':' in n]
 
     parser = argparse.ArgumentParser(
         prog=prog_name,
-        description=f"""\
-Pydantic AI CLI v{__version__}\n\n
+        description=f'Pydantic AI CLI v{__version__}',
+        formatter_class=argparse.RawTextHelpFormatter,
+    )
+
+    parser.add_argument(
+        '-l',
+        '--list-models',
+        action='store_true',
+        help='List all available models and exit',
+    )
+    parser.add_argument('--version', action='store_true', help='Show version and exit')
+
+    subparsers = parser.add_subparsers(dest='command', help='Available commands')
+
+    # Chat subcommand
+    chat_parser = subparsers.add_parser(
+        'chat',
+        help='Interactive chat with an AI model',
+        description="""\
+Interactive chat mode with an AI model.
 
 Special prompts:
 * `/exit` - exit the interactive mode (ctrl-c and ctrl-d also work)
@@ -149,39 +165,29 @@ def cli(  # noqa: C901
 """,
         formatter_class=argparse.RawTextHelpFormatter,
     )
-
-    parser.add_argument('prompt', nargs='?', help='AI Prompt, if omitted fall into interactive mode')
-
-    arg = parser.add_argument(
+    chat_parser.add_argument('prompt', nargs='?', help='AI Prompt, if omitted fall into interactive mode')
+    chat_model_arg = chat_parser.add_argument(
         '-m',
         '--model',
         nargs='?',
         help=f'Model to use, in format "<provider>:<model>" e.g. "openai:gpt-5" or "anthropic:claude-sonnet-4-5". Defaults to "{default_model}".',
     )
-    arg.completer = argcomplete.ChoicesCompleter(qualified_model_names)  # type: ignore[reportPrivateUsage]
-    parser.add_argument(
+    chat_model_arg.completer = argcomplete.ChoicesCompleter(qualified_model_names)  # type: ignore[reportPrivateUsage]
+    chat_parser.add_argument(
         '-a',
         '--agent',
         help='Custom Agent to use, in format "module:variable", e.g. "mymodule.submodule:my_agent"',
     )
-    parser.add_argument(
-        '-l',
-        '--list-models',
-        action='store_true',
-        help='List all available models and exit',
-    )
-    parser.add_argument(
+    chat_parser.add_argument(
         '-t',
         '--code-theme',
         nargs='?',
         help='Which colors to use for code, can be "dark", "light" or any theme from pygments.org/styles/. Defaults to "dark" which works well on dark terminals.',
         default='dark',
    )
-    parser.add_argument('--no-stream', action='store_true', help='Disable streaming from the model')
-    parser.add_argument('--version', action='store_true', help='Show version and exit')
-
-    subparsers = parser.add_subparsers(dest='command', help='Available commands')
+    chat_parser.add_argument('--no-stream', action='store_true', help='Disable streaming from the model')
 
+    # Web subcommand
     web_parser = subparsers.add_parser(
         'web',
         help='Launch web chat UI for an agent',
@@ -211,7 +217,6 @@ def cli(  # noqa: C901
         help=f'Builtin tool to enable (can be repeated, e.g., -t web_search -t code_execution). '
         f'Available: {", ".join(_CLI_TOOL_IDS)}.',
     )
-
     web_parser.add_argument(
         '-i',
         '--instructions',
@@ -229,6 +234,18 @@ def cli(  # noqa: C901
     argcomplete.autocomplete(parser)
     args = parser.parse_args(args_list)
 
+    console = Console()
+    name_version = f'[green]{prog_name} - Pydantic AI CLI v{__version__}[/green]'
+
+    if args.version:
+        console.print(name_version, highlight=False)
+        return 0
+    if args.list_models:
+        console.print(f'{name_version}\n\n[green]Available models:[/green]')
+        for model in qualified_model_names:
+            console.print(f'  {model}', highlight=False)
+        return 0
+
     if args.command == 'web':
         from .web import run_web_command
 
@@ -242,17 +259,18 @@ def cli(  # noqa: C901
             mcp=args.mcp,
         )
 
-    console = Console()
-    name_version = f'[green]{prog_name} - Pydantic AI CLI v{__version__}[/green]'
-    if args.version:
-        console.print(name_version, highlight=False)
-        return 0
-    if args.list_models:
-        console.print(f'{name_version}\n\n[green]Available models:[/green]')
-        for model in qualified_model_names:
-            console.print(f'  {model}', highlight=False)
-        return 0
+    if args.command == 'chat':
+        return _run_chat_command(args, console, name_version, default_model, prog_name)
 
+    # No command specified - show help
+    parser.print_help()
+    return 0
+
+
+def _run_chat_command(
+    args: argparse.Namespace, console: Console, name_version: str, default_model: str, prog_name: str
+) -> int:
+    """Handle the chat subcommand."""
     agent: Agent[None, str] = cli_agent
     if args.agent:
         loaded = load_agent(args.agent)
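
The net effect of this refactor is the standard argparse subcommand layout: global flags (`-l/--list-models`, `--version`) stay on the top-level parser and are handled before dispatch, the former top-level chat options move onto a `chat` subparser, and the function branches on `args.command`. A self-contained sketch of that pattern, with simplified names and placeholder behaviour rather than the real pydantic_ai internals:

```python
import argparse
import sys


def cli(args_list: list[str] | None = None) -> int:
    parser = argparse.ArgumentParser(prog='clai', description='Pydantic AI CLI')
    # Global flags handled before any subcommand dispatch.
    parser.add_argument('-l', '--list-models', action='store_true', help='List all available models and exit')
    parser.add_argument('--version', action='store_true', help='Show version and exit')

    subparsers = parser.add_subparsers(dest='command', help='Available commands')

    # `chat` owns the interactive-mode options that used to live on the top-level parser.
    chat_parser = subparsers.add_parser('chat', help='Interactive chat with an AI model')
    chat_parser.add_argument('prompt', nargs='?', help='AI Prompt, if omitted fall into interactive mode')
    chat_parser.add_argument('-m', '--model', nargs='?', help='Model to use')
    chat_parser.add_argument('--no-stream', action='store_true', help='Disable streaming from the model')

    # `web` keeps its own options (elided here).
    subparsers.add_parser('web', help='Launch web chat UI for an agent')

    args = parser.parse_args(args_list)

    if args.version:
        print('clai - Pydantic AI CLI')
        return 0
    if args.list_models:
        print('Available models: ...')
        return 0

    if args.command == 'chat':
        # The real code delegates to _run_chat_command(args, ...).
        print(f'chat: prompt={args.prompt!r} model={args.model!r} stream={not args.no_stream}')
        return 0
    if args.command == 'web':
        print('web: would launch the web chat UI')
        return 0

    # No command specified - show help, matching the new behaviour.
    parser.print_help()
    return 0


if __name__ == '__main__':
    sys.exit(cli())
```

Running it as `python cli_sketch.py chat "hello"`, `python cli_sketch.py --list-models`, or with no arguments exercises the three dispatch paths (subcommand, global flag, help).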

tests/models/test_outlines.py

Lines changed: 1 addition & 1 deletion
@@ -479,7 +479,7 @@ def test_request_image_url(transformers_multimodal_model: OutlinesModel) -> None
 def test_tool_definition(llamacpp_model: OutlinesModel) -> None:
     # builtin tools
     agent = Agent(llamacpp_model, builtin_tools=[WebSearchTool()])
-    with pytest.raises(UserError, match='Builtin tool WebSearchTool is not supported by this model'):
+    with pytest.raises(UserError, match=r"Builtin tool\(s\) \['WebSearchTool'\] not supported by this model"):
         agent.run_sync('Hello')
 
     # function tools
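
The escaping in the new `match=` pattern matters because `pytest.raises(match=...)` treats the string as a regular expression (checked with `re.search`), and the updated error message contains `(`, `)`, `[` and `]`, which are regex metacharacters. A quick sketch of the same idea outside pytest, with the message text reconstructed from the pattern above (an assumption about the exact wording):

```python
import re

# Assumed error text, reconstructed from the test's match pattern above.
message = "Builtin tool(s) ['WebSearchTool'] not supported by this model"

# The escaped pattern matches the literal parentheses and brackets.
assert re.search(r"Builtin tool\(s\) \['WebSearchTool'\] not supported by this model", message)

# Unescaped, `(s)` is a capturing group and `[...]` a character class, so the search fails.
assert not re.search("Builtin tool(s) ['WebSearchTool'] not supported by this model", message)

# re.escape() produces the same escaping automatically.
assert re.search(re.escape(message), message)
```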
