Add support for the reasoning_effort parameter #329
base: main
Conversation
It cannot be used without a high-tier API key, so it is better to use it through OpenRouter instead.
Note that o3-mini was added back in when main was merged into this branch.
Thanks a lot for the PR! Sorry for the delay. Left some comments.
chatgpt-shell-openai.el
Outdated
@@ -79,6 +91,7 @@ HANDLER, FILTER and OTHER-PARAMS."
:version "o3-mini"
:token-width 3
:context-window 200000
:reasoning-effort chatgpt-shell-openai-default-reasoning-effort
I'm wondering if we should consider deferring access to the actual level value until the payload is constructed? This would enable changing chatgpt-shell-openai-default-reasoning-effort at any time and have it be readily usable (without having to reload the models).
For some values, we've achieved this by setting the value to a function that returns the latest value of a variable with the same name.
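The deferral pattern described above might look something like the following. This is a hypothetical sketch (the names `my-resolve-value` and `my-model` are illustrative, not from the patch), assuming the model spec stores a function that is resolved when the payload is built:

```elisp
;; Hypothetical sketch of the deferred-access pattern: the model spec
;; stores a function rather than a snapshot of the value, so the payload
;; builder always sees the latest value of the defcustom.

;; In the model definition, store a function instead of the value:
(setq my-model
      (list :version "o3-mini"
            :reasoning-effort
            (lambda () chatgpt-shell-openai-default-reasoning-effort)))

(defun my-resolve-value (value)
  "Return VALUE, calling it first if it is a function."
  (if (functionp value) (funcall value) value))

;; At payload-construction time:
;;   (my-resolve-value (plist-get my-model :reasoning-effort))
;; returns whatever the defcustom holds at that moment.
```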
That is one option. My thought was that in the future we might have a UI for changing parameters like this on a per-model basis. In that case, it makes more sense for it to be a default, as it is currently. However, if we decide not to do that, then your way makes more sense.
I guess the two aren't mutually exclusive anyway. The UI could just present a list of buffer-local variables that can be modified, so this change is probably the way to go regardless.
chatgpt-shell-openai.el
Outdated
@@ -34,7 +34,18 @@
(declare-function chatgpt-shell--make-chatgpt-url "chatgpt-shell")
(declare-function chatgpt-shell-validate-no-system-prompt "chatgpt-shell")

(cl-defun chatgpt-shell-openai-make-model (&key version short-version token-width context-window validate-command (headers #'chatgpt-shell-openai--make-headers) (key chatgpt-shell-openai-key) (url-base 'chatgpt-shell-api-url-base) (path "/v1/chat/completions") (provider "OpenAI") (label "ChatGPT") (handler #'chatgpt-shell-openai--handle-chatgpt-command) (filter #'chatgpt-shell-openai--filter-output) other-params)
;; See https://platform.openai.com/docs/guides/reasoning | |||
(defcustom chatgpt-shell-openai-default-reasoning-effort "medium" |
Should this be buffer-local? It would enable setting the level per shell/buffer.
As it is currently used, I don't see any reason to make it buffer-local, but see my comment above.
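For reference, the buffer-local variant suggested above is a small change. A hypothetical sketch (not code from the patch), assuming the defcustom stays otherwise as defined:

```elisp
;; Hypothetical sketch: a buffer-local default would let each shell
;; buffer carry its own reasoning effort level.
(defcustom chatgpt-shell-openai-default-reasoning-effort "medium"
  "Reasoning effort level: \"low\", \"medium\", or \"high\"."
  :type '(choice (const "low") (const "medium") (const "high")))

;; Make every buffer get its own copy on first set:
(make-variable-buffer-local 'chatgpt-shell-openai-default-reasoning-effort)

;; A user could then override it in a single shell buffer:
;;   (setq-local chatgpt-shell-openai-default-reasoning-effort "high")
```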
chatgpt-shell-openai.el
Outdated
:validate-command #'chatgpt-shell-validate-no-system-prompt)
(chatgpt-shell-openai-make-model
:version "o1-preview"
:token-width 3
;; https://platform.openai.com/docs/models/gpt-01
:context-window 128000
;; Reasoning effort is only supported for o1-pro, o1 and o3-mini.
;; :reasoning-effort chatgpt-shell-openai-default-reasoning-effort |
Is this meant to be commented out? If so, perhaps consider :reasoning-effort nil
It is meant to be commented out. :reasoning-effort is already nil by default when not provided. I wanted a comment here so someone wouldn't think it should be added in the future.
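The behavior described (a keyword argument defaulting to nil when omitted) is standard cl-defun semantics. A minimal illustration, using a hypothetical `my-make-model` rather than the real constructor:

```elisp
;; cl-defun keyword arguments without an explicit default are nil when
;; the caller omits them, so leaving :reasoning-effort out of a model
;; definition is equivalent to passing :reasoning-effort nil.
(require 'cl-lib)

(cl-defun my-make-model (&key version reasoning-effort)
  (list :version version :reasoning-effort reasoning-effort))

(my-make-model :version "o1-mini")
;; => (:version "o1-mini" :reasoning-effort nil)
```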
:validate-command #'chatgpt-shell-validate-no-system-prompt)
(chatgpt-shell-openai-make-model
:version "o1-mini"
:token-width 3
;; https://platform.openai.com/docs/models/gpt-01-mini
:context-window 128000
;; Reasoning effort is only supported for o1 and o3-mini. |
Ditto
chatgpt-shell-openai.el
Outdated
@@ -206,6 +224,10 @@ and OTHER-PARAMS (list)."
:context context))))
(when temperature
`((temperature . ,temperature)))
;; Unfortunately, while it is documented by OpenAI, the reasoning_effort |
I didn't quite understand the comment. Does it mean the patch doesn't currently work as expected?
It should be working now. Previously, when I tested it, the API didn't accept the parameter (that's why this branch is a bit old and I didn't submit it before). However, OpenAI seems to have fixed that now.
The comment was from an old version and I neglected to remove it before submitting this PR.
;; See https://openrouter.ai/openai/o3-mini-high
:context-window 200000
:validate-command #'chatgpt-shell-validate-no-system-prompt)
:other-params '((provider (quantizations . ["bf16"]) |
Don't we need to delete it altogether?
This is for qwen and not o3-mini-high. It looks like it ended up in the diff due to the parens unless I'm missing something here.
If we add an interface for setting parameters such as reasoning_effort, this would allow models such as o3-mini-high to be removed as redundant.