Conversation


@grishasen grishasen commented Oct 26, 2025

### Summary

Adds GPT-5 support using the new `responses.create(...)` API and keeps all existing GPT-4 / GPT-3.5 behavior working.

### Key changes

* Added `responses_completion()` in `base.py` and updated `call()` to route automatically:

  * GPT-5 models (`gpt-5`, `gpt-5-mini`, `gpt-5-nano`, etc.) → Responses API
  * GPT-4 / GPT-3.5 → Chat Completions API
  * legacy instruct → Completions API
* Added `_is_responses_model` / `_is_responses_api_like()` to detect GPT-5 models.

* Added new parameters for reasoning models:

  * `reasoning_effort`
  * `verbosity`
  * `max_output_tokens`
* Ensured GPT-5 requests do **not** send `temperature`, `top_p`, `logprobs`, etc. (unsupported on reasoning models).
* Added `_responses_params` in `base.py` to build `reasoning.effort`, `text.verbosity`, and `max_output_tokens`.

* Updated `openai.py`:

  * Created `root_client = openai.OpenAI(...)`
  * Added `_supported_responses_models = ["gpt-5", "gpt-5-mini", "gpt-5-nano"]`
  * Wired `self.responses_client = root_client.responses`
  * GPT-5 models now set `_is_responses_model = True`
* Updated `azure_openai.py`:

  * Azure client now also exposes `responses_client`
  * GPT-5-style deployments use the Responses API automatically

### Backward compatibility

* Existing GPT-4.x / GPT-3.5 usage is unchanged (still supports `temperature`, `top_p`, etc.).
* New GPT-5 usage can pass `reasoning_effort="minimal"` and `verbosity="low"` as recommended in the migration guide.
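
The compatibility guarantee can be illustrated with a toy dispatcher (`build_request` is hypothetical, not the actual `base.py` code): chat models keep their sampling parameters, while GPT-5 requests silently drop them and carry the new reasoning fields instead.

```python
def build_request(model: str, prompt: str, **kwargs) -> dict:
    """Toy illustration: GPT-4/3.5 keep sampling params, GPT-5 drops them."""
    if model.startswith("gpt-5"):
        request = {"model": model, "input": prompt}
        if "reasoning_effort" in kwargs:
            request["reasoning"] = {"effort": kwargs["reasoning_effort"]}
        if "verbosity" in kwargs:
            request["text"] = {"verbosity": kwargs["verbosity"]}
        return request
    request = {"model": model,
               "messages": [{"role": "user", "content": prompt}]}
    for key in ("temperature", "top_p"):
        if key in kwargs:
            request[key] = kwargs[key]
    return request
```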

Important

Adds GPT-5 support using the Responses API, updates routing logic in `base.py`, and enhances the `OpenAI` and `AzureOpenAI` classes for new model handling.

* Behavior:
  * Adds `responses_completion()` in `base.py` for GPT-5 models using the Responses API.
  * Updates `call()` in `base.py` to route requests based on model type: GPT-5 to the Responses API, GPT-4/3.5 to the Chat Completions API, and legacy instruct to the Completions API.
  * Introduces `_is_responses_model` and `_is_responses_api_like()` in `base.py` to identify GPT-5 models.
  * Ensures GPT-5 requests exclude unsupported parameters like `temperature` and `top_p`.
* Parameters:
  * Adds `reasoning_effort`, `verbosity`, and `max_output_tokens` for GPT-5 models in `base.py`.
  * Implements `_responses_params` in `base.py` to handle new parameters for the Responses API.
* Classes:
  * Updates `OpenAI` and `AzureOpenAI` to support GPT-5 models and the Responses API.
  * Sets up `responses_client` in `openai.py` and `azure_openai.py` for handling GPT-5 requests.
* Tests:
  * Adds tests for GPT-5 model parameters and the Responses API in `test_openai.py`.

This description was created by Ellipsis for 3380cb9.


@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 3380cb9 in 1 minute and 37 seconds.

* Reviewed 721 lines of code in 4 files
* Skipped 0 files when reviewing.
* Skipped posting 2 draft comments. View those below.
1. `extensions/llms/openai/pandasai_openai/base.py:138`
   * Draft comment:
     Potential issue: the code unconditionally wraps the `stop` parameter in a list (e.g. `out["stop"] = [self.stop]` in `_responses_params`, and similarly in `_chat_params`). If `stop` is passed as a list (as in the tests), this results in a nested list. Consider checking the type of `stop` and wrapping only if it isn't already a list.
   * Reason this comment was not posted:
     Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 20% vs. threshold = 50%):
     1. The type hint clearly shows `stop` is `Optional[str]`, not `List[str]`.
     2. The code is consistent in treating it as a string.
     3. If tests are passing with lists, that would be a type violation.
     4. The comment assumes behavior not supported by the type system.
     5. The OpenAI API docs should be checked to confirm the expected type.
     Without access to the OpenAI API docs or the tests, it is unclear whether lists are actually supported; the type hint could be wrong, but the code is internally consistent with its declared types. If lists should be supported, that would require a broader change to the type system, not just a local fix.
2. `extensions/llms/openai/tests/test_openai.py:115`
   * Draft comment:
     Note: in the `responses_completion` test, the expected response object lacks an `output_text` property, whereas the method returns `response.output_text`. Although the method is patched here, ensure that integration tests later verify that the real response contains the expected property.
   * Reason this comment was not posted:
     The comment did not seem useful (usefulness confidence = 0% <= threshold 50%). It is purely informative, asks the author to ensure behavior is tested, and provides no specific suggestion or clear issue with the code.

Workflow ID: wflow_Y6k7r4meTvnBFZHp


