Merge remote-tracking branch 'upstream/main' into custom-models
* upstream/main:
Examples: Fix financial_research_agent instructions (openai#573)
Adding extra_headers parameters to ModelSettings (openai#550)
v0.0.12 (openai#564)
Pass through organization/project headers to tracing backend, fix speech_group enum (openai#562)
Docs and tests for litellm (openai#561)
RFC: automatically use litellm if possible (openai#534)
Fix visualize graph filename to without extension. (openai#554)
Start and finish streaming trace in impl method (openai#540)
Enable non-strict output types (openai#539)
Examples for image inputs (openai#553)
README.md (+1 -3)
@@ -1,6 +1,6 @@
  # OpenAI Agents SDK

- The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows.
+ The OpenAI Agents SDK is a lightweight yet powerful framework for building multi-agent workflows. It is provider-agnostic, supporting the OpenAI Responses and Chat Completions APIs, as well as 100+ other LLMs.

  <img src="https://cdn.openai.com/API/docs/images/orchestration.png" alt="Image of the Agents Tracing UI" style="max-height: 803px;">
@@ -13,8 +13,6 @@ The OpenAI Agents SDK is a lightweight yet powerful framework for building multi
  Explore the [examples](examples) directory to see the SDK in action, and read our [documentation](https://openai.github.io/openai-agents-python/) for more details.

- Notably, our SDK [is compatible](https://openai.github.io/openai-agents-python/models/) with any model providers that support the OpenAI Chat Completions API format.
docs/models/index.md (+40 -15)
@@ -5,11 +5,40 @@ The Agents SDK comes with out-of-the-box support for OpenAI models in two flavor
  - **Recommended**: the [`OpenAIResponsesModel`][agents.models.openai_responses.OpenAIResponsesModel], which calls OpenAI APIs using the new [Responses API](https://platform.openai.com/docs/api-reference/responses).
  - The [`OpenAIChatCompletionsModel`][agents.models.openai_chatcompletions.OpenAIChatCompletionsModel], which calls OpenAI APIs using the [Chat Completions API](https://platform.openai.com/docs/api-reference/chat).

+ ## Non-OpenAI models
+
+ You can use most other non-OpenAI models via the [LiteLLM integration](./litellm.md). First, install the litellm dependency group:
+
+ ```bash
+ pip install "openai-agents[litellm]"
+ ```
+
+ Then, use any of the [supported models](https://docs.litellm.ai/docs/providers) with the `litellm/` prefix:
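
A minimal sketch of what that prefix looks like in practice, assuming a LiteLLM-supported provider (the provider and model names below are illustrative, not taken from this diff):

```python
from agents import Agent

# Reference any LiteLLM-supported model with the `litellm/` prefix.
# The provider/model name here is a placeholder; pick one from
# https://docs.litellm.ai/docs/providers.
claude_agent = Agent(
    name="Assistant",
    instructions="You only respond in haikus.",
    model="litellm/anthropic/claude-3-5-sonnet-20240620",
)
```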
+ You can integrate other LLM providers in 3 more ways (examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/)):
+
+ 1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
+ 2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
+ 3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py). An easy way to use most available models is via the [LiteLLM integration](./litellm.md).
+
+ In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](../tracing.md).
+
+ !!! note
+
+     In these examples, we use the Chat Completions API/model, because most LLM providers don't yet support the Responses API. If your LLM provider does support it, we recommend using Responses.
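
For option 1, a minimal sketch, assuming the provider exposes an OpenAI-compatible endpoint (the `base_url` and API key below are placeholders):

```python
from openai import AsyncOpenAI
from agents import set_default_openai_client, set_tracing_disabled

# Point the global client at an OpenAI-compatible provider endpoint.
# base_url and api_key are placeholders for your provider's values.
custom_client = AsyncOpenAI(
    base_url="https://example-provider.com/v1",
    api_key="YOUR_PROVIDER_API_KEY",
)
set_default_openai_client(custom_client)

# Without a platform.openai.com key, disable tracing (or configure a
# different tracing processor) so trace exports don't fail with a 401.
set_tracing_disabled(True)
```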
  ## Mixing and matching models

  Within a single workflow, you may want to use different models for each agent. For example, you could use a smaller, faster model for triage, while using a larger, more capable model for complex tasks. When configuring an [`Agent`][agents.Agent], you can select a specific model by either:

- 1. Passing the name of an OpenAI model.
+ 1. Passing the name of a model.
  2. Passing any model name + a [`ModelProvider`][agents.models.interface.ModelProvider] that can map that name to a Model instance.
  3. Directly providing a [`Model`][agents.models.interface.Model] implementation.
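
The doc's own example follows in the next hunk; as a quick sketch of options 1 and 3, assuming the `Agent` and `OpenAIChatCompletionsModel` exports (model names illustrative):

```python
from openai import AsyncOpenAI
from agents import Agent, OpenAIChatCompletionsModel

# Option 1: select a model by name.
triage_agent = Agent(
    name="Triage",
    instructions="Route each request to the right specialist.",
    model="gpt-4o-mini",  # illustrative model name
)

# Option 3: provide a Model implementation directly.
specialist_agent = Agent(
    name="Specialist",
    instructions="Handle complex requests in depth.",
    model=OpenAIChatCompletionsModel(
        model="gpt-4o",  # illustrative model name
        openai_client=AsyncOpenAI(),
    ),
)
```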
@@ -64,20 +93,6 @@ english_agent = Agent(
  )
  ```

- ## Using other LLM providers
-
- You can use other LLM providers in 3 ways (examples [here](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/)):
-
- 1. [`set_default_openai_client`][agents.set_default_openai_client] is useful in cases where you want to globally use an instance of `AsyncOpenAI` as the LLM client. This is for cases where the LLM provider has an OpenAI compatible API endpoint, and you can set the `base_url` and `api_key`. See a configurable example in [examples/model_providers/custom_example_global.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_global.py).
- 2. [`ModelProvider`][agents.models.interface.ModelProvider] is at the `Runner.run` level. This lets you say "use a custom model provider for all agents in this run". See a configurable example in [examples/model_providers/custom_example_provider.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_provider.py).
- 3. [`Agent.model`][agents.agent.Agent.model] lets you specify the model on a specific Agent instance. This enables you to mix and match different providers for different agents. See a configurable example in [examples/model_providers/custom_example_agent.py](https://github.com/openai/openai-agents-python/tree/main/examples/model_providers/custom_example_agent.py). An easy way to use most available models is via the [LiteLLM integration](./litellm.md).
-
- In cases where you do not have an API key from `platform.openai.com`, we recommend disabling tracing via `set_tracing_disabled()`, or setting up a [different tracing processor](../tracing.md).
-
- !!! note
-
-     In these examples, we use the Chat Completions API/model, because most LLM providers don't yet support the Responses API. If your LLM provider does support it, we recommend using Responses.
-
  ## Common issues with using other LLM providers

  ### Tracing client error 401
@@ -100,7 +115,17 @@ The SDK uses the Responses API by default, but most other LLM providers don't ye
  Some model providers don't have support for [structured outputs](https://platform.openai.com/docs/guides/structured-outputs). This sometimes results in an error that looks something like this:

  ```
+
  BadRequestError: Error code: 400 - {'error': {'message': "'response_format.type' : value is not one of the allowed values ['text','json_object']", 'type': 'invalid_request_error'}}
+
  ```

  This is a shortcoming of some model providers: they support JSON outputs but don't allow you to specify the `json_schema` to use for the output. We are working on a fix for this, but we suggest relying on providers that do have support for JSON schema output, because otherwise your app will often break because of malformed JSON.
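
For context, this error typically appears when an agent declares an output type, which the SDK turns into a JSON-schema response format; a sketch, assuming a Pydantic output model (names illustrative):

```python
from pydantic import BaseModel
from agents import Agent

class CalendarEvent(BaseModel):
    name: str
    date: str

# Providers that lack json_schema support may reject the request this
# output_type produces, returning the 400 error shown above.
agent = Agent(
    name="Extractor",
    instructions="Extract the calendar event from the text.",
    output_type=CalendarEvent,
)
```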
+ ## Mixing models across providers
+
+ You need to be aware of feature differences between model providers, or you may run into errors. For example, OpenAI supports structured outputs, multimodal input, and hosted file search and web search, but many other providers don't. Keep these limitations in mind:
+
+ - Don't send unsupported `tools` to providers that don't understand them.
+ - Filter out multimodal inputs before calling models that are text-only.
+ - Be aware that providers that don't support structured JSON outputs will occasionally produce invalid JSON.
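
As one illustration of the second bullet, a minimal sketch of a hypothetical filter that strips non-text items from Responses-style input before calling a text-only model (the helper name and item shapes are assumptions):

```python
# Hypothetical helper: keep only text content items so a text-only
# provider never receives image or file inputs it cannot handle.
def strip_multimodal(items: list[dict]) -> list[dict]:
    return [
        item
        for item in items
        if item.get("type") not in ("input_image", "input_file")
    ]
```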