
Conversation

@Chesars (Contributor) commented Nov 12, 2025

Title

fix(responses-api): apply drop_params for GPT-5 temperature validation

Relevant issues

Fixes #16090 ([Bug]: LiteLLM Proxy doesn't drop temperature parameter for gpt-5 models when drop_params is enabled)

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory. Adding at least 1 test is a hard requirement - see details
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🐛 Bug Fix

Changes

Problem

The Responses API (/v1/responses) was not applying model-specific parameter validation. Although temperature is listed among the supported params, GPT-5 has a stricter constraint: it only accepts temperature=1. The drop_params setting was not respected because the code never checked this model-specific restriction.
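
For illustration, a minimal reproduction in the spirit of the linked issue (a sketch, assuming LiteLLM's litellm.responses() entry point and the global litellm.drop_params flag; requires a valid OpenAI key):

```python
import litellm

# Ask LiteLLM to drop parameters the target model does not support.
litellm.drop_params = True

# Before this fix, the Responses API path never consulted the GPT-5
# restriction that only temperature=1 is accepted, so the non-default
# temperature below was not dropped despite drop_params being enabled.
response = litellm.responses(
    model="gpt-5",
    input="Say hello",
    temperature=0.2,  # unsupported for GPT-5; should be dropped, not forwarded
)
print(response)
```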

Solution

  • Modified OpenAIResponsesAPIConfig.map_openai_params() to apply model-specific validation for GPT-5 (see the sketch after this list)
  • Reuses the existing OpenAIGPT5Config validation logic from the chat completions endpoint
  • drop_params=True now correctly drops temperature != 1 for GPT-5; drop_params=False raises an error instead
  • Ensures consistent behavior between /v1/chat/completions and /v1/responses
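
A minimal sketch of the change, using the OpenAIResponsesAPIConfig and map_openai_params() names from this PR; the signature and the inline check are illustrative, since the actual change delegates to the existing OpenAIGPT5Config logic rather than re-implementing it:

```python
from typing import Any, Dict


class OpenAIResponsesAPIConfig:
    def map_openai_params(
        self, optional_params: Dict[str, Any], model: str, drop_params: bool
    ) -> Dict[str, Any]:
        # GPT-5 models (including variants such as gpt-5-codex) only accept
        # temperature=1; any other value must be dropped or rejected.
        if "gpt-5" in model and optional_params.get("temperature") not in (None, 1):
            if drop_params:
                # drop_params=True: silently remove the unsupported value
                optional_params.pop("temperature")
            else:
                # drop_params=False: fail loudly, mirroring /v1/chat/completions
                raise ValueError(
                    f"{model} only supports temperature=1; "
                    "set drop_params=True to drop other values."
                )
        return optional_params
```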

Breaking Changes

None

Tests Added

Added a test suite with 7 test cases covering:
  • temperature != 1 with drop_params enabled and disabled
  • temperature = 1 (the only valid value)
  • the gpt-5-codex variant
  • non-GPT-5 models remaining unaffected
  • handling of multiple parameters alongside temperature

This ensures consistent behavior between the /v1/chat/completions and /v1/responses endpoints for GPT-5 temperature handling.
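
For illustration, two of these cases might look like the following (a sketch that reuses the OpenAIResponsesAPIConfig sketched in the Solution section; the actual tests live under tests/litellm/ per the checklist):

```python
import pytest

# Assumes the OpenAIResponsesAPIConfig sketch from the Solution section above.

def test_gpt5_temperature_dropped_when_drop_params_true():
    config = OpenAIResponsesAPIConfig()
    mapped = config.map_openai_params(
        {"temperature": 0.2}, model="gpt-5", drop_params=True
    )
    assert "temperature" not in mapped  # unsupported value is silently dropped


def test_gpt5_temperature_raises_when_drop_params_false():
    config = OpenAIResponsesAPIConfig()
    with pytest.raises(ValueError):
        config.map_openai_params(
            {"temperature": 0.2}, model="gpt-5", drop_params=False
        )
```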
vercel bot commented Nov 12, 2025

@Chesars is attempting to deploy a commit to the CLERKIEAI Team on Vercel.

A member of the Team first needs to authorize it.

