Enforce 25Kb limit for infinite transcription #13301
Conversation
The current implementation breaks when a new stream is created, even under the 5-minute limit. This is due to missing logic to handle the 25KB stream size limit [1]. Updated the `generator` function to yield data as soon as the API limit is reached. [1] - GoogleCloudPlatform#12053
Hello @suvigyajain0101, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request addresses an issue where the infinite transcription implementation was breaking due to missing logic to handle the 25KB stream size limit imposed by the API. The changes modify the `generator` function in `speech/microphone/transcribe_streaming_infinite_v2.py` to yield data in chunks that respect this limit, ensuring that the API's constraints are met and the transcription process doesn't break.
Highlights
- Bug Fix: Fixes an issue where the infinite transcription would break due to exceeding the 25KB stream size limit.
- API Limit Enforcement: Implements logic to split audio data into chunks smaller than 25KB before sending it to the API.
- Code Modification: Modifies the `generator` function to handle the 25KB limit by yielding smaller chunks of data.
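The chunking approach described above can be sketched as a small standalone generator. This is a hypothetical helper (`split_audio` is not the name used in the PR), and the exact value of the 25KB constant is an assumption, since line 43 of the modified file is not shown here:

```python
# Assumed value of the API limit constant; the PR defines a
# MAX_STREAMING_CHUNK constant for the same purpose.
MAX_STREAMING_CHUNK = 25 * 1024  # 25KB per streaming request

def split_audio(data):
    """Yield joined byte runs from `data`, each at most MAX_STREAMING_CHUNK bytes.

    `data` is an iterable of byte chunks (e.g. audio buffers from a
    microphone queue). Chunks are accumulated until adding the next one
    would exceed the limit, at which point the accumulated run is yielded.
    """
    run, run_size = [], 0
    for chunk in data:
        if run_size + len(chunk) > MAX_STREAMING_CHUNK:
            if run:
                yield b"".join(run)
            run, run_size = [chunk], len(chunk)
        else:
            run.append(chunk)
            run_size += len(chunk)
    if run:  # flush the final partial run
        yield b"".join(run)
```

Note this sketch assumes no single input chunk exceeds the limit on its own; the PR adds extra handling for that case.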
Changelog
- `speech/microphone/transcribe_streaming_infinite_v2.py`
  - Added a `MAX_STREAMING_CHUNK` constant to define the 25KB API limit at line 43.
  - Modified the `generator` function to split audio data into chunks smaller than `MAX_STREAMING_CHUNK` at line 218.
  - Added logic to handle cases where a single chunk exceeds the `MAX_STREAMING_CHUNK` limit by further splitting it into smaller chunks at line 222.
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
A stream of speech, so grand,
But size limits, close at hand.
Split the data small,
Heed the API's call,
And transcription will stand.
Footnotes
1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
This pull request addresses an important issue where the streaming transcription breaks due to exceeding the API's 25KB limit. The solution implemented to split the data into smaller chunks is a good approach. However, there are a few areas that could be improved for clarity and efficiency.
Summary of Findings
- Clarity of Chunking Logic: The chunking logic within the `generator` function could benefit from more descriptive variable names and comments to enhance readability.
- Potential Inefficiency: The current implementation iterates through the `data` list twice: once to calculate `combined_size` and again to split the data into chunks. This could be optimized by combining these operations.
Merge Readiness
The pull request addresses a critical bug and implements a reasonable solution. However, addressing the comments regarding code clarity and potential efficiency improvements would significantly enhance the quality of the code. I recommend addressing these before merging. I am unable to directly approve this pull request, and others should review and approve this code before merging.
```python
combined_size = sum(len(chunk) for chunk in data)
if combined_size <= MAX_STREAMING_CHUNK:
    yield b"".join(data)
else:
    run_chunks = []
    run_size = 0
    for chunk in data:
        if len(chunk) + run_size > MAX_STREAMING_CHUNK:
            yield b"".join(run_chunks)
            run_chunks = [chunk]
            run_size = len(chunk)
        else:
            run_chunks.append(chunk)
            run_size += len(chunk)
    if run_chunks:
        yield b"".join(run_chunks)
```
The code iterates through the `data` list twice: first to calculate `combined_size` and then to create the chunks. This could be combined into a single loop for better efficiency. Consider calculating the `combined_size` while creating the chunks, and only yielding the combined data if it's within the limit. If it exceeds the limit, yield the accumulated chunks and start a new chunk.
```python
current_chunk = []
current_chunk_size = 0
for chunk in data:
    if current_chunk_size + len(chunk) <= MAX_STREAMING_CHUNK:
        current_chunk.append(chunk)
        current_chunk_size += len(chunk)
    else:
        if current_chunk:
            yield b''.join(current_chunk)
        current_chunk = [chunk]
        current_chunk_size = len(chunk)
if current_chunk:
    yield b''.join(current_chunk)
```
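As a quick sanity check on the suggested single-pass approach, the snippet above can be wrapped in a standalone generator and exercised with synthetic byte buffers. The function name `chunked` and the value of the 25KB constant are illustrative assumptions, not names from the PR:

```python
# Assumed value of the streaming limit; the PR defines MAX_STREAMING_CHUNK
# in transcribe_streaming_infinite_v2.py.
MAX_STREAMING_CHUNK = 25 * 1024  # 25KB

def chunked(data):
    """Single-pass accumulation of byte chunks, as in the suggestion above."""
    current_chunk = []
    current_chunk_size = 0
    for chunk in data:
        if current_chunk_size + len(chunk) <= MAX_STREAMING_CHUNK:
            current_chunk.append(chunk)
            current_chunk_size += len(chunk)
        else:
            if current_chunk:
                yield b''.join(current_chunk)
            current_chunk = [chunk]
            current_chunk_size = len(chunk)
    if current_chunk:
        yield b''.join(current_chunk)

# Five 12KB buffers (60KB total) get regrouped into runs under the limit.
pieces = [b"x" * 12000] * 5
print([len(p) for p in chunked(pieces)])  # prints [24000, 24000, 12000]
```

This confirms the single-pass version preserves all bytes while never emitting a run above the limit, at the cost of one fewer traversal of `data` than the two-pass original.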
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
Hi @kharvey-google, please let me know if you need additional information from my side. Thanks!
Description
The current implementation breaks when a new stream is created, even under the 5-minute limit. This is due to missing logic to handle the 25KB stream size limit [1].
Updated the `generator` function to yield data as soon as the API limit is reached.
[1] - #12053
Fixes #12053
Note: Before submitting a pull request, please open an issue for discussion if you are not associated with Google.
Checklist
- `nox -s py-3.9` (see Test Environment Setup)
- `nox -s lint` (see Test Environment Setup)