
Conversation

@Sameerlite (Collaborator) commented Oct 14, 2025

Title

Add OpenAI video generation and content retrieval support

Relevant issues

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
  • I have added a screenshot of my new test passing locally
  • My PR passes all unit tests on make test-unit
  • My PR's scope is as isolated as possible, it only solves 1 specific problem

Type

🆕 New Feature

Changes

  1. Video Generation (litellm.video_generation):

    • Text-to-video generation using OpenAI Sora models
    • Support for video editing with reference images
    • Configurable video duration and dimensions
    • Async support with litellm.avideo_generation
  2. Video Content Retrieval (litellm.video_content):

    • Download generated videos as MP4 bytes
    • Async support with litellm.avideo_content
    • Proper error handling and status checking

API Examples:

```python
import litellm

# Video Generation
response = litellm.video_generation(
    prompt="A cat playing with a ball of yarn",
    model="sora-2",
    seconds="8",
    size="720x1280",
)

# Video Content Retrieval
video_bytes = litellm.video_content(
    video_id=response.data[0].id,
    model="sora-2",
)
```
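
For the async variants mentioned above, a minimal sketch mirroring the same parameters (the server-side completion/polling behavior is not shown here and may differ):

```python
import asyncio

import litellm


async def main():
    # Generate a video asynchronously
    response = await litellm.avideo_generation(
        prompt="A cat playing with a ball of yarn",
        model="sora-2",
        seconds="8",
        size="720x1280",
    )

    # Download the finished video as MP4 bytes
    video_bytes = await litellm.avideo_content(
        video_id=response.data[0].id,
        model="sora-2",
    )
    with open("output.mp4", "wb") as f:
        f.write(video_bytes)


asyncio.run(main())
```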
[Screenshot: working end to end]

vercel bot commented Oct 14, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

| Project | Deployment | Preview | Comments | Updated (UTC) |
| --- | --- | --- | --- | --- |
| litellm | Error | Error | | Oct 18, 2025 2:59pm |


## Video Generation
Contributor:

can you have this be a separate doc - it'll be easier to find

have it be in the same openai folder though

Collaborator Author:

added

Contributor:

it's odd to call this folder video_retrieval and the top-level api - video_content

Can you pick one?

Collaborator Author:

Done, rewrote everything to video_content

"supports_vision": true,
"supports_web_search": true
},
"sora-2": {
Contributor:

there's no cost for this model?

Collaborator Author:

added cost tracking

# Check for mock response first
mock_response = kwargs.get("mock_response", None)
if mock_response is not None:
    if isinstance(mock_response, str):
Contributor:

keep in separate function
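
A minimal sketch of that refactor (the helper name and return shape are assumptions for illustration, not the PR's actual code):

```python
from typing import Any, Optional


def _handle_mock_video_response(kwargs: dict) -> Optional[Any]:
    """Return a mock response if one was passed in kwargs, else None (hypothetical helper)."""
    mock_response = kwargs.get("mock_response", None)
    if mock_response is None:
        return None
    if isinstance(mock_response, str):
        # Assumption: a string mock is wrapped into a minimal response-like dict
        return {"data": [{"id": "video_mock", "status": "completed"}], "raw": mock_response}
    return mock_response
```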

Contributor:

add unit testing for this new endpoint @Sameerlite

Collaborator Author:

added

_usage = ResponseAPILoggingUtils._transform_response_api_usage_to_chat_usage(
    _usage
).model_dump()
# Skip token validation for video generation as it doesn't use token-based pricing
Contributor:

does this matter? if the usage obj doesn't exist we will have an empty usage here

ideally we should avoid having endpoint-specific if/else statements in here.

since there are other endpoints that don't need usage objects and can use this function, i wonder why we need an exception here.

Collaborator Author:

Removed the conditional.
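
A hedged sketch of what the endpoint-agnostic version might look like (assuming `_usage` may be `None` for endpoints without token-based pricing; `ResponseAPILoggingUtils` is the class from the surrounding diff):

```python
# Sketch: no endpoint-specific branching; endpoints that don't produce a
# usage object simply fall through to an empty usage dict.
if _usage is not None:
    _usage = ResponseAPILoggingUtils._transform_response_api_usage_to_chat_usage(
        _usage
    ).model_dump()
else:
    _usage = {}
```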

return output_cost_per_second * duration_seconds

# If no cost information found, return 0
verbose_logger.warning(
Contributor:

this will spam logs under traffic; probably emit as either a debug or info log.

warnings will trigger alerting

Collaborator Author:

Yes, changed it to info
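
A minimal sketch of the duration-based cost path with the downgraded log level (function name and model-info field names are assumptions for illustration):

```python
from litellm._logging import verbose_logger


def calculate_video_generation_cost(model_info: dict, duration_seconds: float) -> float:
    """Sketch: price a generated video by its duration in seconds."""
    output_cost_per_second = model_info.get("output_cost_per_second")
    if output_cost_per_second is not None:
        return output_cost_per_second * duration_seconds

    # If no cost information is found, return 0 and log at info level;
    # a warning here would trigger alerting under traffic.
    verbose_logger.info(
        "No cost information found for video model %s; defaulting cost to 0.",
        model_info.get("key"),
    )
    return 0.0
```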

ishaan-jaff and others added 20 commits October 17, 2025 23:59
* fix(router): update model_name_to_deployment_indices on deployment removal

When a deployment is deleted, the model_name_to_deployment_indices map
was not being updated, causing stale index references. This could lead
to incorrect routing behavior when deployments with the same model_name
were dynamically removed.

Changes:
- Update _update_deployment_indices_after_removal to maintain
  model_name_to_deployment_indices mapping
- Remove deleted indices and decrement indices greater than removed index
- Clean up empty entries when no deployments remain for a model name
- Update test to verify proper index shifting and cleanup behavior
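
A minimal sketch of the index-shifting logic this commit describes (the function name comes from the commit message; the signature and data shapes are assumptions):

```python
from typing import Dict, List


def _update_deployment_indices_after_removal(
    model_name_to_deployment_indices: Dict[str, List[int]],
    removed_index: int,
) -> None:
    """Sketch: keep the model_name -> [deployment indices] map consistent after a removal."""
    for model_name, indices in list(model_name_to_deployment_indices.items()):
        # Drop the removed index, then decrement every index greater than it
        new_indices = [
            i - 1 if i > removed_index else i for i in indices if i != removed_index
        ]
        if new_indices:
            model_name_to_deployment_indices[model_name] = new_indices
        else:
            # Clean up empty entries when no deployments remain for a model name
            del model_name_to_deployment_indices[model_name]
```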

* fix(router): remove redundant index building during initialization

Remove duplicate index building operations that were causing unnecessary
work during router initialization:

1. Removed redundant `_build_model_id_to_deployment_index_map` call in
   __init__ - `set_model_list` already builds all indices from scratch

2. Removed redundant `_build_model_name_index` call at end of
   `set_model_list` - the index is already built incrementally via
   `_create_deployment` -> `_add_model_to_list_and_index_map`

Both indices (model_id_to_deployment_index_map and
model_name_to_deployment_indices) are properly maintained as lookup
indexes through existing helper methods. This change eliminates O(N)
duplicate work during initialization without any behavioral changes.

The indices continue to be correctly synchronized with model_list on
all operations (add/remove/upsert).
* docs: fix doc

* docs(index.md): bump rc

* [Fix] GEMINI - CLI -  add google_routes to llm_api_routes (#15500)

* fix: add google_routes to llm_api_routes

* test: test_virtual_key_llm_api_routes_allows_google_routes

* build: bump version

* bump: version 1.78.0 → 1.78.1

* add application level encryption in SQS

* add application level encryption in SQS

---------

Co-authored-by: Krrish Dholakia <[email protected]>
Co-authored-by: Ishaan Jaff <[email protected]>
Co-authored-by: deepanshu <[email protected]>
…t/completions API with LiteLLM (#15509)

* docs: fix doc

* docs(index.md): bump rc

* [Fix] GEMINI - CLI -  add google_routes to llm_api_routes (#15500)

* fix: add google_routes to llm_api_routes

* test: test_virtual_key_llm_api_routes_allows_google_routes

* add AnthropicCitation

* fix async_post_call_success_deployment_hook

* fix add vector_store_custom_logger to global callbacks

* test_e2e_bedrock_knowledgebase_retrieval_with_llm_api_call

* async_post_call_success_deployment_hook

* add async_post_call_streaming_deployment_hook

* async def test_e2e_bedrock_knowledgebase_retrieval_with_llm_api_call_streaming(setup_vector_store_registry):

* fix _call_post_streaming_deployment_hook

* fix async_post_call_streaming_deployment_hook

* test update

* docs: Accessing Search Results

* docs KB

* fix chatUI

* fix searchResults

* fix onSearchResults

* fix kb

---------

Co-authored-by: Krrish Dholakia <[email protected]>
* docs: fix doc

* docs(index.md): bump rc

* [Fix] GEMINI - CLI -  add google_routes to llm_api_routes (#15500)

* fix: add google_routes to llm_api_routes

* test: test_virtual_key_llm_api_routes_allows_google_routes

* build: bump version

* bump: version 1.78.0 → 1.78.1

* fix: KeyRequestBase

* fix rpm_limit_type

* fix dynamic rate limits

* fix use dynamic limits here

* fix _should_enforce_rate_limit

* fix _should_enforce_rate_limit

* fix counter

* test_dynamic_rate_limiting_v3

* use _create_rate_limit_descriptors

---------

Co-authored-by: Krrish Dholakia <[email protected]>
@krrishdholakia (Contributor)

[Screenshot: 2025-10-17 at 1:20:49 PM]

@Sameerlite can you make sure your PR passes testing + linting

@Sameerlite Sameerlite changed the base branch from litellm_staging_oct to litellm_sameer_oct_staging October 20, 2025 18:03
@Sameerlite (Collaborator Author)

Closing and creating a new one as this has gotten too many conflicts - #15745

@Sameerlite Sameerlite closed this Oct 20, 2025