docs(openai): Document reasoning_effort summary field options (BerriAI#16549)
Related to PR BerriAI#16210, which fixed automatic summary field addition.
Changes:
- Document reasoning_effort string vs dict formats
- Add summary field options (auto, detailed, concise)
- Add table of supported reasoning_effort values by GPT-5 model
- Clarify model-specific support and limitations
- Note that summary field requires org verification
The previous implementation automatically added the summary field, causing
400 errors for unverified orgs. Users can now opt in by passing
reasoning_effort as a dict with an explicit summary field.
docs/my-website/docs/providers/openai.md (+71 lines: 71 additions & 0 deletions)
@@ -410,6 +410,77 @@ Expected Response:
### Advanced: Using `reasoning_effort` with `summary` field

By default, `reasoning_effort` accepts a string value (`"low"`, `"medium"`, `"high"`, `"minimal"`) and only sets the effort level without including a reasoning summary.

To opt in to the `summary` feature, pass `reasoning_effort` as a dictionary. **Note:** The `summary` field requires your OpenAI organization to be verified. Using `summary` without verification will result in a 400 error from OpenAI.

<Tabs>
<TabItem value="sdk" label="SDK">

```python
import litellm

# Option 1: String format (default - no summary)
response = litellm.completion(
    model="openai/responses/gpt-5-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    reasoning_effort="low",
)
```

</TabItem>
</Tabs>
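
The dictionary (opt-in) form is collapsed in this diff view; as a rough, non-authoritative sketch, assuming the dict mirrors OpenAI's `reasoning` object with `effort` and `summary` keys (the key names are an assumption, not confirmed by this diff):

```python
import litellm

# Option 2: Dictionary format (opt-in to a reasoning summary).
# Assumption: the dict takes "effort" and "summary" keys, mirroring OpenAI's
# Responses API `reasoning` object; requires a verified OpenAI organization.
response = litellm.completion(
    model="openai/responses/gpt-5-mini",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    reasoning_effort={"effort": "medium", "summary": "auto"},
)
print(response.choices[0].message.content)
```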
-`"auto"`: System automatically determines the appropriate summary level based on the model
466
+
-`"concise"`: Provides a shorter summary (not supported by GPT-5 series models)
467
+
-`"detailed"`: Offers a comprehensive reasoning summary
468
+
469
+
**Note:** GPT-5 series models support `"auto"` and `"detailed"`, but do not support `"concise"`. O-series models (o3-pro, o4-mini, o3) support all three options. Some models like o3-mini and o1 do not support reasoning summaries at all.
470
+
471
+
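
To make the model-support note concrete, a hedged sketch requesting a `"concise"` summary from an o-series model (same assumed dict shape and routing prefix as above):

```python
import litellm

# "concise" summaries are supported by o-series models (o3-pro, o4-mini, o3)
# but rejected by GPT-5 series models, which accept "auto" or "detailed".
# Dict key names follow the same assumption as the sketch above.
response = litellm.completion(
    model="openai/responses/o4-mini",  # assumed routing prefix, for illustration
    messages=[{"role": "user", "content": "Summarize the rules of chess."}],
    reasoning_effort={"effort": "medium", "summary": "concise"},
)
```
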
**Supported `reasoning_effort` values by model:**

| Model | Default (when not set) | Supported Values |
|-------|------------------------|------------------|
| `gpt-5-codex` | `adaptive` | `low`, `medium`, `high` (no `minimal`) |
| `gpt-5-pro` | `high` | `high` only |

**Note:** `gpt-5-pro` only accepts `reasoning_effort="high"`. Other values will return an error. When `reasoning_effort` is not set (None), OpenAI defaults to the value shown in the "Default" column.
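
For example, a call against `gpt-5-pro` under this constraint might look like the following (the `openai/responses/` prefix is assumed from the earlier examples):

```python
import litellm

# gpt-5-pro only accepts reasoning_effort="high"; other values return an error.
# The "openai/responses/" prefix mirrors the earlier examples and is assumed here.
response = litellm.completion(
    model="openai/responses/gpt-5-pro",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    reasoning_effort="high",
)
```
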
See [OpenAI Reasoning documentation](https://platform.openai.com/docs/guides/reasoning) for more details on organization verification requirements.
## OpenAI Chat Completion to Responses API Bridge
Call any Responses API model from OpenAI's `/chat/completions` endpoint.
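
A minimal sketch of what such a call might look like, assuming the `openai/responses/<model>` prefix from the examples above is the intended entry point:

```python
import litellm

# Assumed sketch: the "openai/responses/" prefix (as used in the
# reasoning_effort examples above) routes a Responses API model through the
# standard chat-completions interface.
response = litellm.completion(
    model="openai/responses/gpt-5-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```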