
Hey, I've been wondering: aside from setting top_k, max_token_for_text_unit, and max_token_for_global_context, what other ways can I truncate the context window to make sure my request doesn't exceed the maximum context length? Any tips? #259

Open
DragonLsy opened this issue Nov 12, 2024 · 0 comments

@DragonLsy

openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "This model's maximum context length is 32768 tokens. However, you requested 34273 tokens in the messages, Please reduce the length of the messages.", 'type': 'BadRequestError', 'param': None, 'code': 400}
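
One complementary mitigation, independent of the parameters named in the title, is to count tokens client-side and trim the retrieved context before the request is ever sent. Below is a minimal sketch using tiktoken; the helper name truncate_chunks, the cl100k_base encoding, and the token budgets are illustrative assumptions, not part of this project's API.

```python
# Minimal sketch: trim retrieved context chunks so the assembled prompt
# stays under the model's 32768-token limit. Assumptions: tiktoken is
# installed, cl100k_base roughly matches the model's tokenizer, and a
# fixed number of tokens is reserved for the completion.
import tiktoken

ENCODING = tiktoken.get_encoding("cl100k_base")

def truncate_chunks(chunks, max_context_tokens=32768, reserve_for_output=1024):
    """Keep chunks in order until adding another would exceed the budget."""
    budget = max_context_tokens - reserve_for_output
    kept, used = [], 0
    for chunk in chunks:
        n = len(ENCODING.encode(chunk))
        if used + n > budget:
            break  # the next chunk would overflow the budget; stop here
        kept.append(chunk)
        used += n
    return kept
```

If the overflow persists, lowering top_k (fewer retrieved items) together with max_token_for_text_unit and max_token_for_global_context, so that their combined budget plus the system prompt and the query itself stays under 32768 tokens, should also avoid the 400 error above.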
