
openai.error.InvalidRequestError #2

Open
myrzx opened this issue Mar 2, 2023 · 6 comments

Comments

myrzx commented Mar 2, 2023

```
You: Hi, are you a chatbot for me?
Traceback (most recent call last):
  File "C:\Users\Otp_Lab\Desktop\LXH2022\Fun\chat.py", line 77, in <module>
    main()
  File "C:\Users\Otp_Lab\Desktop\LXH2022\Fun\chat.py", line 48, in main
    response = send_message(message_log)
  File "C:\Users\Otp_Lab\Desktop\LXH2022\Fun\chat.py", line 10, in send_message
    response = openai.ChatCompletion.create(
  File "E:\Python\lib\site-packages\openai\api_resources\chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
  File "E:\Python\lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
  File "E:\Python\lib\site-packages\openai\api_requestor.py", line 226, in request
    resp, got_stream = self._interpret_response(result, stream)
  File "E:\Python\lib\site-packages\openai\api_requestor.py", line 619, in _interpret_response
    self._interpret_response_line(
  File "E:\Python\lib\site-packages\openai\api_requestor.py", line 679, in _interpret_response_line
    raise self.handle_error_response(
openai.error.InvalidRequestError: This model's maximum context length is 4096 tokens. However, you requested 4135 tokens (39 in the messages, 4096 in the completion). Please reduce the length of the messages or completion.
```

kydycode (Owner) commented Mar 2, 2023

Please reduce the length of the messages or completion.

myrzx (Author) commented Mar 2, 2023

> Please reduce the length of the messages or completion.

It's the first request and I only said one sentence.
I changed max_tokens=4096 to 2048, and it ran.
What does "4096 in the completion" mean? 4096 empty tokens in the message to the ChatGPT API?

jane00 commented Mar 3, 2023

same question

myrzx (Author) commented Mar 3, 2023

> same question

From the OpenAI documentation:

> max_tokens: The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).

Prompt tokens (your messages) + max_tokens must be <= 4096, otherwise the API raises this error.
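
As a rough illustration (a minimal sketch, not this repo's exact code: `count_message_tokens`, `CONTEXT_LIMIT`, and the use of tiktoken are my own names and assumptions, and the per-message overhead is only an approximation from the OpenAI cookbook), max_tokens can be budgeted from the prompt size instead of hard-coded:

```python
import openai
import tiktoken  # assumption: used here only to estimate prompt size

CONTEXT_LIMIT = 4096  # gpt-3.5-turbo's context window at the time of this issue

def count_message_tokens(messages, model="gpt-3.5-turbo"):
    # Approximate count: ~4 tokens of formatting overhead per message,
    # plus ~3 tokens priming the reply (per the OpenAI cookbook estimate).
    enc = tiktoken.encoding_for_model(model)
    total = 3
    for m in messages:
        total += 4
        total += len(enc.encode(m["role"])) + len(enc.encode(m["content"]))
    return total

def send_message(message_log):
    # Budget the completion from whatever room the prompt leaves,
    # instead of hard-coding max_tokens=4096.
    prompt_tokens = count_message_tokens(message_log)
    return openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=message_log,
        max_tokens=CONTEXT_LIMIT - prompt_tokens,
    )
```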

kydycode (Owner) commented Mar 3, 2023

The flow works like this: previous messages are added to each subsequent request so that the context of the conversation is kept, so after a while the number of tokens can grow past 4096.
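
For example (an illustrative sketch, reusing the `count_message_tokens` helper sketched above; `trim_message_log` and the 3000-token budget are hypothetical), the oldest turns can be dropped before each request so the history never outgrows the window:

```python
def trim_message_log(message_log, budget=3000, model="gpt-3.5-turbo"):
    # Drop the oldest non-system messages until the log fits the budget,
    # keeping message_log[0] (the system prompt) in place.
    trimmed = list(message_log)
    while len(trimmed) > 1 and count_message_tokens(trimmed, model) > budget:
        del trimmed[1]  # remove the oldest user/assistant turn
    return trimmed
```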

myrzx (Author) commented Mar 4, 2023

> The flow works like this: previous messages are added to each subsequent request so that the context of the conversation is kept, so after a while the number of tokens can grow past 4096.

https://platform.openai.com/docs/api-reference/chat/create#chat/create-max_tokens
max_tokens defaults to inf. If it is set to 3800, the messages you send must fit in the remaining 296 tokens, so the error comes back quickly as the conversation grows.
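
In other words, leaving max_tokens out entirely sidesteps the problem, since the API then lets the completion use whatever room the prompt leaves. A minimal call would look like this (a sketch, assuming the legacy openai 0.x client shown in the traceback):

```python
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=message_log,
    # no max_tokens: the default allows up to (context limit - prompt tokens)
)
```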
