
Commit 85e929f

Merge branch 'main' into 3225
2 parents 4e305d7 + 6c01b52

File tree

6 files changed: +177 −9 lines changed

docs/agents.md

Lines changed: 1 addition & 1 deletion

@@ -708,7 +708,7 @@ print(result_sync.output)
 The final request uses `temperature=0.0` (run-time), `max_tokens=500` (from model), demonstrating how settings merge with run-time taking precedence.

 !!! note "Model Settings Support"
-    Model-level settings are supported by all concrete model implementations (OpenAI, Anthropic, Google, etc.). Wrapper models like `FallbackModel`, `WrapperModel`, and `InstrumentedModel` don't have their own settings - they use the settings of their underlying models.
+    Model-level settings are supported by all concrete model implementations (OpenAI, Anthropic, Google, etc.). Wrapper models like [`FallbackModel`](models/overview.md#fallback-model), [`WrapperModel`][pydantic_ai.models.wrapper.WrapperModel], and [`InstrumentedModel`][pydantic_ai.models.instrumented.InstrumentedModel] don't have their own settings - they use the settings of their underlying models.

 ### Model specific settings
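The merge behavior this hunk documents (run-time settings take precedence over model-level settings, and model-level keys with no run-time override survive) can be sketched with plain dicts. This is an illustrative sketch, not pydantic-ai's actual merge code; the key names mirror the doc's example.

```python
# Illustrative sketch (not the library's implementation): run-time settings
# override model-level settings key by key; keys set only on the model survive.
model_settings = {'max_tokens': 500, 'temperature': 1.0}  # set on the model
runtime_settings = {'temperature': 0.0}                   # passed at run time

merged = {**model_settings, **runtime_settings}
print(merged)  # {'max_tokens': 500, 'temperature': 0.0}
```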

docs/models/overview.md

Lines changed: 7 additions & 3 deletions

@@ -87,8 +87,7 @@ in sequence until one successfully returns a result. Under the hood, Pydantic AI
 from one model to the next if the current model returns a 4xx or 5xx status code.

 !!! note
-
-    The provider SDKs on which Models are based (like OpenAI, Anthropic, etc.) often have built-in retry logic that can delay the `FallbackModel` from activating.
+    The provider SDKs on which Models are based (like OpenAI, Anthropic, etc.) often have built-in retry logic that can delay the `FallbackModel` from activating.

     When using `FallbackModel`, it's recommended to disable provider SDK retries to ensure immediate fallback, for example by setting `max_retries=0` on a [custom OpenAI client](openai.md#custom-openai-client).

@@ -173,7 +172,9 @@ In the year 2157, Captain Maya Chen piloted her spacecraft through the vast expa

 In this example, if the OpenAI model fails, the agent will automatically fall back to the Anthropic model with its own configured settings. The `FallbackModel` itself doesn't have settings - it uses the individual settings of whichever model successfully handles the request.

-In this next example, we demonstrate the exception-handling capabilities of `FallbackModel`.
+### Exception Handling
+
+The next example demonstrates the exception-handling capabilities of `FallbackModel`.
 If all models fail, a [`FallbackExceptionGroup`][pydantic_ai.exceptions.FallbackExceptionGroup] is raised, which
 contains all the exceptions encountered during the `run` execution.

@@ -230,3 +231,6 @@ By default, the `FallbackModel` only moves on to the next model if the current m
 [`ModelAPIError`][pydantic_ai.exceptions.ModelAPIError], which includes
 [`ModelHTTPError`][pydantic_ai.exceptions.ModelHTTPError]. You can customize this behavior by
 passing a custom `fallback_on` argument to the `FallbackModel` constructor.
+
+!!! note
+    Validation errors (from [structured output](../output.md#structured-output) or [tool parameters](../tools.md)) do **not** trigger fallback. These errors use the [retry mechanism](../agents.md#reflection-and-self-correction) instead, which re-prompts the same model to try again. This is intentional: validation errors stem from the non-deterministic nature of LLMs and may succeed on retry, whereas API errors (4xx/5xx) generally indicate issues that won't resolve by retrying the same request.

pydantic_ai_slim/pydantic_ai/models/openrouter.py

Lines changed: 32 additions & 4 deletions

@@ -2,9 +2,9 @@

 from collections.abc import Iterable
 from dataclasses import dataclass, field
-from typing import Any, Literal, cast
+from typing import Annotated, Any, Literal, TypeAlias, cast

-from pydantic import BaseModel
+from pydantic import BaseModel, Discriminator
 from typing_extensions import TypedDict, assert_never, override

 from ..exceptions import ModelHTTPError

@@ -22,9 +22,13 @@
 try:
     from openai import APIError, AsyncOpenAI
     from openai.types import chat, completion_usage
-    from openai.types.chat import chat_completion, chat_completion_chunk
+    from openai.types.chat import chat_completion, chat_completion_chunk, chat_completion_message_function_tool_call

-    from .openai import OpenAIChatModel, OpenAIChatModelSettings, OpenAIStreamedResponse
+    from .openai import (
+        OpenAIChatModel,
+        OpenAIChatModelSettings,
+        OpenAIStreamedResponse,
+    )
 except ImportError as _import_error:
     raise ImportError(
         'Please install `openai` to use the OpenRouter model, '

@@ -341,6 +345,27 @@ def _into_reasoning_detail(thinking_part: ThinkingPart) -> _OpenRouterReasoningD
     assert_never(data.type)


+class _OpenRouterFunction(chat_completion_message_function_tool_call.Function):
+    arguments: str | None  # type: ignore[reportIncompatibleVariableOverride]
+    """
+    The arguments to call the function with, as generated by the model in JSON
+    format. Note that the model does not always generate valid JSON, and may
+    hallucinate parameters not defined by your function schema. Validate the
+    arguments in your code before calling your function.
+    """
+
+
+class _OpenRouterChatCompletionMessageFunctionToolCall(chat.ChatCompletionMessageFunctionToolCall):
+    function: _OpenRouterFunction  # type: ignore[reportIncompatibleVariableOverride]
+    """The function that the model called."""
+
+
+_OpenRouterChatCompletionMessageToolCallUnion: TypeAlias = Annotated[
+    _OpenRouterChatCompletionMessageFunctionToolCall | chat.ChatCompletionMessageCustomToolCall,
+    Discriminator(discriminator='type'),
+]
+
+
 class _OpenRouterCompletionMessage(chat.ChatCompletionMessage):
     """Wrapped chat completion message with OpenRouter specific attributes."""

@@ -350,6 +375,9 @@ class _OpenRouterCompletionMessage(chat.ChatCompletionMessage):
     reasoning_details: list[_OpenRouterReasoningDetail] | None = None
     """The reasoning details associated with the message, if any."""

+    tool_calls: list[_OpenRouterChatCompletionMessageToolCallUnion] | None = None  # type: ignore[reportIncompatibleVariableOverride]
+    """The tool calls generated by the model, such as function calls."""
+

 class _OpenRouterChoice(chat_completion.Choice):
     """Wraps OpenAI chat completion choice with OpenRouter specific attributes."""

pydantic_ai_slim/pydantic_ai/providers/deepseek.py

Lines changed: 3 additions & 1 deletion

@@ -44,7 +44,9 @@ def model_profile(self, model_name: str) -> ModelProfile | None:
         # we need to maintain that behavior unless json_schema_transformer is set explicitly.
         # This was not the case when using a DeepSeek model with another model class (e.g. BedrockConverseModel or GroqModel),
         # so we won't do this in `deepseek_model_profile` unless we learn it's always needed.
-        return OpenAIModelProfile(json_schema_transformer=OpenAIJsonSchemaTransformer).update(profile)
+        return OpenAIModelProfile(
+            json_schema_transformer=OpenAIJsonSchemaTransformer, supports_json_object_output=True
+        ).update(profile)

     @overload
     def __init__(self) -> None: ...
Lines changed: 86 additions & 0 deletions

@@ -0,0 +1,86 @@
+interactions:
+- request:
+    headers:
+      accept:
+      - application/json
+      accept-encoding:
+      - gzip, deflate
+      connection:
+      - keep-alive
+      content-length:
+      - '362'
+      content-type:
+      - application/json
+      host:
+      - openrouter.ai
+    method: POST
+    parsed_body:
+      messages:
+      - content: Can you find me any education content?
+        role: user
+      model: anthropic/claude-sonnet-4.5
+      stream: false
+      tool_choice: auto
+      tools:
+      - function:
+          description: ''
+          name: find_education_content
+          parameters:
+            properties:
+              title:
+                anyOf:
+                - type: string
+                - type: 'null'
+                default: null
+            type: object
+        type: function
+    uri: https://openrouter.ai/api/v1/chat/completions
+  response:
+    headers:
+      access-control-allow-origin:
+      - '*'
+      connection:
+      - keep-alive
+      content-length:
+      - '611'
+      content-type:
+      - application/json
+      permissions-policy:
+      - payment=(self "https://checkout.stripe.com" "https://connect-js.stripe.com" "https://js.stripe.com" "https://*.js.stripe.com"
+        "https://hooks.stripe.com")
+      referrer-policy:
+      - no-referrer, strict-origin-when-cross-origin
+      transfer-encoding:
+      - chunked
+      vary:
+      - Accept-Encoding
+    parsed_body:
+      choices:
+      - finish_reason: tool_calls
+        index: 0
+        logprobs: null
+        message:
+          content: I'll search for education content for you.
+          reasoning: null
+          refusal: null
+          role: assistant
+          tool_calls:
+          - function:
+              name: find_education_content
+            id: toolu_vrtx_015QAXScZzRDPttiPoc34AdD
+            index: 0
+            type: function
+        native_finish_reason: tool_calls
+      created: 1764308342
+      id: gen-1764308342-FInFdBZR9TF8jmnOwZGZ
+      model: anthropic/claude-sonnet-4.5
+      object: chat.completion
+      provider: Google
+      usage:
+        completion_tokens: 48
+        prompt_tokens: 568
+        total_tokens: 616
+    status:
+      code: 200
+      message: OK
+version: 1
tests/models/test_openrouter.py

Lines changed: 48 additions & 0 deletions

@@ -358,3 +358,51 @@ async def test_openrouter_map_messages_reasoning(allow_model_requests: None, ope
         }
     ]
 )
+
+
+async def test_openrouter_tool_optional_parameters(allow_model_requests: None, openrouter_api_key: str) -> None:
+    provider = OpenRouterProvider(api_key=openrouter_api_key)
+
+    class FindEducationContentFilters(BaseModel):
+        title: str | None = None
+
+    model = OpenRouterModel('anthropic/claude-sonnet-4.5', provider=provider)
+    response = await model_request(
+        model,
+        [ModelRequest.user_text_prompt('Can you find me any education content?')],
+        model_request_parameters=ModelRequestParameters(
+            function_tools=[
+                ToolDefinition(
+                    name='find_education_content',
+                    description='',
+                    parameters_json_schema=FindEducationContentFilters.model_json_schema(),
+                )
+            ],
+            allow_text_output=True,  # Allow model to either use tools or respond directly
+        ),
+    )
+
+    assert len(response.parts) == 2
+
+    tool_call_part = response.parts[1]
+    assert isinstance(tool_call_part, ToolCallPart)
+    assert tool_call_part.tool_call_id == snapshot('toolu_vrtx_015QAXScZzRDPttiPoc34AdD')
+    assert tool_call_part.tool_name == 'find_education_content'
+    assert tool_call_part.args == snapshot(None)
+
+    mapped_messages = await model._map_messages([response], None)  # type: ignore[reportPrivateUsage]
+    tool_call_message = mapped_messages[0]
+    assert tool_call_message['role'] == 'assistant'
+    assert tool_call_message.get('content') == snapshot("I'll search for education content for you.")
+    assert tool_call_message.get('tool_calls') == snapshot(
+        [
+            {
+                'id': 'toolu_vrtx_015QAXScZzRDPttiPoc34AdD',
+                'type': 'function',
+                'function': {
+                    'name': 'find_education_content',
+                    'arguments': '{}',
+                },
+            }
+        ]
+    )
