Hey, I've been wondering: aside from setting top_k, max_token_for_text_unit, and max_token_for_global_context, in what other ways can I truncate the context window to make sure my request doesn't exceed the model's maximum context length? Any tips?
#259 · Open · DragonLsy opened this issue on Nov 12, 2024 · 0 comments
openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "This model's maximum context length is 32768 tokens. However, you requested 34273 tokens in the messages, Please reduce the length of the messages.", 'type': 'BadRequestError', 'param': None, 'code': 400}
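For reference, here is a minimal sketch of the two levers the question already names plus a generic pre-truncation guard, assuming a LightRAG version whose QueryParam exposes these token-budget fields (names and defaults may differ across versions); the truncate_to_budget helper, the budget values, and the commented-out rag object are hypothetical, for illustration only.

```python
# A minimal sketch, not an official fix. Assumes QueryParam exposes the
# token-budget fields named in the question; truncate_to_budget is a
# hypothetical helper for pre-trimming arbitrary prompt text.
import tiktoken
from lightrag import QueryParam

# 1) Shrink the retrieval budgets so the assembled prompt stays well
#    under the model's 32768-token limit.
param = QueryParam(
    mode="global",
    top_k=20,                           # retrieve fewer entities/relations
    max_token_for_text_unit=2000,       # cap tokens per retrieved text chunk
    max_token_for_global_context=2000,  # cap tokens for the global context
)
# result = rag.query("your question", param=param)  # rag: a configured LightRAG

# 2) Generic guard: hard-truncate any text to a fixed token budget before
#    sending it, using a tokenizer that matches your model.
def truncate_to_budget(text: str, budget: int, encoding: str = "cl100k_base") -> str:
    enc = tiktoken.get_encoding(encoding)
    tokens = enc.encode(text)
    return text if len(tokens) <= budget else enc.decode(tokens[:budget])
```

Lowering top_k cuts how many retrieved items are packed into the prompt at all, while the max_token_for_* fields bound how large each packed section can grow; the tokenizer-level guard is a last-resort backstop when the assembled prompt still overshoots.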