Description
Summary of the new feature / enhancement
The DeepSeek R1 model exposes the CoT tokens and handles them elegantly in its OpenAI-compatible Chat Completion APIs (see its reasoning model doc for details). However, the .NET OpenAI SDK doesn't expose `reasoning_content` today. The tracking issue is openai/openai-dotnet#259 (comment).

> `reasoning_content`: The content of the CoT, which is at the same level as `content` in the output structure.
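For illustration, a minimal sketch of how a client could tell the two fields apart in a streamed chunk, assuming the chunk shapes described in DeepSeek's reasoning docs (while the model is "thinking" the delta carries `reasoning_content` and `content` is null; once the final answer starts, the roles flip). The chunk dicts and the `classify` helper below are hypothetical, not part of any SDK:

```python
# Assumed streamed-chunk shapes, modeled on DeepSeek's reasoning model docs:
# `reasoning_content` sits at the same level as `content` inside the delta.
reasoning_chunk = {"choices": [{"delta": {"reasoning_content": "Let me think...", "content": None}}]}
answer_chunk = {"choices": [{"delta": {"reasoning_content": None, "content": "The answer is 42."}}]}

def classify(chunk):
    """Return ("reasoning" | "answer", text) for a streamed chunk delta."""
    delta = chunk["choices"][0]["delta"]
    if delta.get("reasoning_content"):
        return "reasoning", delta["reasoning_content"]
    return "answer", delta.get("content") or ""

print(classify(reasoning_chunk))  # → ('reasoning', 'Let me think...')
print(classify(answer_chunk))     # → ('answer', 'The answer is 42.')
```

Because the two fields are siblings and never populated at the same time, a streaming client can route each token to the right place without any delimiter parsing, which is exactly what the .NET SDK does not let us do today.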
Today, when using DeepSeek R1 with the openai-gpt agent, you have to wait a long time before the completion output is displayed, because we cannot get the CoT tokens through the .NET OpenAI SDK and thus cannot show them to users as they stream in.
We can support it with reflection today (a workaround is shown in the OpenAI issue above), but it'd be better to wait for official support in the .NET OpenAI SDK.
[NOTE] Gemini's OpenAI-compatible Chat Completion APIs offer only limited support for CoT tokens: today they emit CoT tokens interleaved with the completion tokens, with no delimiter to separate them. See https://discuss.ai.google.dev/t/reasoning-tokens-combined-with-completion-tokens-in-openai-compatibility-mode/58354/7?utm_source=chatgpt.com for details. Gemini may eventually expose CoT tokens the same way DeepSeek R1 does.
Proposed technical implementation details (optional)
Once the `reasoning_content` field is supported in the .NET OpenAI SDK, we can capture the CoT tokens and display them as they stream in. Note that we should only save the completion output to the history, not the CoT tokens. See DeepSeek's doc for details.
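The intended behavior can be sketched as follows. This is a hypothetical illustration, not agent code: the `chunks` list stands in for the SDK's streaming updates, and the field names follow DeepSeek's reasoning docs. Both streams are displayed as they arrive, but only the completion tokens are persisted:

```python
# Hypothetical streamed deltas (field names per DeepSeek's reasoning docs);
# in the real agent these would come from the .NET OpenAI SDK's streaming API.
chunks = [
    {"reasoning_content": "Consider the request... ", "content": None},
    {"reasoning_content": "Plan the reply.", "content": None},
    {"reasoning_content": None, "content": "Hello"},
    {"reasoning_content": None, "content": ", world!"},
]

shown = []   # everything displayed to the user as it streams in
answer = []  # only the completion tokens, which go to the chat history

for delta in chunks:
    if delta["reasoning_content"]:
        shown.append(delta["reasoning_content"])  # display CoT, never persist it
    elif delta["content"]:
        shown.append(delta["content"])
        answer.append(delta["content"])           # persist only the final answer

history_entry = "".join(answer)
print(history_entry)  # → Hello, world!
```

Keeping the CoT out of the history matters because (per DeepSeek's doc) the reasoning content must not be sent back in subsequent requests; only the final answer belongs in the conversation context.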