Adding extra_headers parameters to ModelSettings #550

Open · jonnyk20 wants to merge 1 commit into main from add_extra_headers_parameters

Conversation

jonnyk20

This adds the ability to pass extra_headers when sending a request. I used the same implementation that was recently used for extra_body and extra_query in #500.

As suggested in https://github.com/openai/openai-agents-python/issues/487, the attributes are added to ModelSettings.

I'll be happy to add some tests if you have any suggestions.
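For reviewers, the pass-through pattern this PR follows (the same one used for extra_body and extra_query) can be illustrated with a self-contained sketch. The class below is a stand-in for the real agents.model_settings.ModelSettings, and build_request_kwargs is an illustrative helper, not the repo's actual code:

```python
from dataclasses import dataclass
from typing import Optional

# Stand-in for agents.model_settings.ModelSettings; illustrative only.
@dataclass
class ModelSettingsSketch:
    extra_query: Optional[dict] = None
    extra_body: Optional[dict] = None
    extra_headers: Optional[dict] = None  # the field this PR adds

def build_request_kwargs(settings: ModelSettingsSketch) -> dict:
    """Mimic how the model forwards the extra_* fields to the client call."""
    kwargs = {}
    if settings.extra_query is not None:
        kwargs["extra_query"] = settings.extra_query
    if settings.extra_body is not None:
        kwargs["extra_body"] = settings.extra_body
    if settings.extra_headers is not None:
        kwargs["extra_headers"] = settings.extra_headers
    return kwargs

settings = ModelSettingsSketch(extra_headers={"X-Test-Header": "test-value"})
print(build_request_kwargs(settings))
# → {'extra_headers': {'X-Test-Header': 'test-value'}}
```

Unset fields are simply omitted from the request kwargs, so existing callers are unaffected.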

@jonnyk20 (Author)

@rm-openai It doesn't look like I have the ability to request reviews. Is there a process I should follow?

@rm-openai (Collaborator) left a comment

Thanks @jonnyk20, looks good. Two change requests:

  1. Can you please add tests? e.g. ensuring that extra_headers is passed on to the model if set.
  2. Can you also add this to the litellm model?

Thanks!

@jonnyk20 jonnyk20 force-pushed the add_extra_headers_parameters branch from 025d58e to e102319 Compare April 21, 2025 18:16
@jonnyk20 jonnyk20 force-pushed the add_extra_headers_parameters branch from e102319 to f4ac7ed Compare April 21, 2025 18:19
@jonnyk20 jonnyk20 requested a review from rm-openai April 21, 2025 18:19

jonnyk20 commented Apr 21, 2025

> Thanks @jonnyk20, looks good. Two change requests:
>
> 1. Can you please add tests? e.g. ensuring that extra_headers is passed on to the model if set.
> 2. Can you also add this to the litellm model?
>
> Thanks!

@rm-openai no problem - done! I wasn't able to create a LitellmModel test that ran without exceptions, and I couldn't find any other tests using LitellmModel to use as a reference. This is the closest I got:

```python
import pytest

# Assumed import paths, based on this repo's source layout.
from agents.model_settings import ModelSettings
from agents.models.interface import ModelTracing


@pytest.mark.allow_call_model_methods
@pytest.mark.asyncio
async def test_extra_headers_passed_to_litellm_model(monkeypatch):
    """Ensure extra_headers in ModelSettings is passed to the LitellmModel."""
    from agents.extensions.models.litellm_model import LitellmModel

    called_kwargs = {}

    async def dummy_acompletion(*args, **kwargs):
        nonlocal called_kwargs
        called_kwargs = kwargs
        return None

    monkeypatch.setattr(
        "agents.extensions.models.litellm_model.litellm.acompletion",
        dummy_acompletion,
    )

    model = LitellmModel(model="any-model")
    extra_headers = {"X-Test-Header": "test-value"}

    await model.get_response(
        system_instructions=None,
        input="hi",
        model_settings=ModelSettings(extra_headers=extra_headers),
        tools=[],
        output_schema=None,
        handoffs=[],
        tracing=ModelTracing.DISABLED,
        previous_response_id=None,
    )
    assert "extra_headers" in called_kwargs
    assert called_kwargs["extra_headers"]["X-Test-Header"] == "test-value"
```

Which resulted in the following:

```
=================================== FAILURES ===================================
__________________ test_extra_headers_passed_to_litellm_model __________________

monkeypatch = <_pytest.monkeypatch.MonkeyPatch object at 0x119490730>

    @pytest.mark.allow_call_model_methods
    @pytest.mark.asyncio
    async def test_extra_headers_passed_to_litellm_model(monkeypatch):
        """
        Ensure extra_headers in ModelSettings is passed to the LitellmModel.
        """
        from agents.extensions.models.litellm_model import LitellmModel

        called_kwargs = {}

        async def dummy_acompletion(*args, **kwargs):
            nonlocal called_kwargs
            called_kwargs = kwargs
            return None

        monkeypatch.setattr("agents.extensions.models.litellm_model.litellm.acompletion", dummy_acompletion)

        model = LitellmModel(model="any-model")
        extra_headers = {"X-Test-Header": "test-value"}

>       await model.get_response(
            system_instructions=None,
            input="hi",
            model_settings=ModelSettings(extra_headers=extra_headers),
            tools=[],
            output_schema=None,
            handoffs=[],
            tracing=ModelTracing.DISABLED,
            previous_response_id=None,
        )

tests/test_extra_headers.py:116:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <agents.extensions.models.litellm_model.LitellmModel object at 0x119490a30>
system_instructions = None, input = 'hi'
model_settings = ModelSettings(temperature=None, top_p=None, frequency_penalty=None, presence_penalty=None, tool_choice=None, parallel_...None, store=None, include_usage=None, extra_query=None, extra_body=None, extra_headers={'X-Test-Header': 'test-value'})
tools = [], output_schema = None, handoffs = []
tracing = <ModelTracing.DISABLED: 0>, previous_response_id = None

    async def get_response(
        self,
        system_instructions: str | None,
        input: str | list[TResponseInputItem],
        model_settings: ModelSettings,
        tools: list[Tool],
        output_schema: AgentOutputSchema | None,
        handoffs: list[Handoff],
        tracing: ModelTracing,
        previous_response_id: str | None,
    ) -> ModelResponse:
        with generation_span(
            model=str(self.model),
            model_config=dataclasses.asdict(model_settings)
            | {"base_url": str(self.base_url or ""), "model_impl": "litellm"},
            disabled=tracing.is_disabled(),
        ) as span_generation:
            response = await self._fetch_response(
                system_instructions,
                input,
                model_settings,
                tools,
                output_schema,
                handoffs,
                span_generation,
                tracing,
                stream=False,
            )

>           assert isinstance(response.choices[0], litellm.types.utils.Choices)
E           AttributeError: 'tuple' object has no attribute 'choices'

src/agents/extensions/models/litellm_model.py:94: AttributeError
=============================== warnings summary ===============================
tests/test_extra_headers.py::test_extra_headers_passed_to_litellm_model
  /Users/jonnykalambay/projects/oss/openai-agents-python/.venv/lib/python3.9/site-packages/pydantic/_internal/_config.py:323: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.11/migration/
    warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=========================== short test summary info ============================
FAILED tests/test_extra_headers.py::test_extra_headers_passed_to_litellm_model - AttributeError: 'tuple' object has no attribute 'choices'
==================== 1 failed, 2 passed, 1 warning in 1.08s ====================
```

Any suggestions for how to fix it?
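One observation from the traceback: the assertion at litellm_model.py:94 checks that the response coming back from `_fetch_response` has a `choices` list whose first element is a `litellm.types.utils.Choices`, so a mocked `acompletion` that returns `None` cannot satisfy it. A likely fix (an untested assumption about litellm's constructors) is to have `dummy_acompletion` return a real `litellm.types.utils.ModelResponse` built from `Choices` and `Message` objects. The shape `get_response` expects can be sketched with stdlib stand-ins, without depending on litellm:

```python
from types import SimpleNamespace

# Stand-in showing the shape the model code expects from acompletion:
# an object with a .choices list, not None. In the real test you would
# likely need litellm.types.utils.ModelResponse/Choices/Message (assumed
# names), since litellm_model does an isinstance check on choices[0].
def dummy_acompletion_shape(**kwargs):
    message = SimpleNamespace(content="ok", role="assistant", tool_calls=None)
    choice = SimpleNamespace(message=message, finish_reason="stop", index=0)
    return SimpleNamespace(choices=[choice], usage=None)

resp = dummy_acompletion_shape(extra_headers={"X-Test-Header": "test-value"})
assert resp.choices[0].message.content == "ok"  # response-shaped: has .choices
# By contrast, the None returned by the original dummy has no .choices,
# which is what get_response ultimately trips over.
```

A duck-typed stand-in like this will still fail the isinstance assertion in the real code path, which is why returning an actual litellm ModelResponse from the mock is probably required.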
