
Order of message parts get mixed in onFinish() #5275

Open
mikkokut opened this issue Mar 19, 2025 · 4 comments
Labels
ai/core bug Something isn't working

Comments

@mikkokut
Contributor


📄 Description

When using @ai-sdk/vue's useChat() composable with streamText() from ai, the order of message parts is inconsistent between the streamed response and the message passed to the streamText({onFinish}) callback.

This makes it difficult to render messages consistently in chat UIs after a reload.


⚠️ Problem

If a message includes tool invocations, the order of parts (e.g., text vs. tool calls) can differ between:

  1. Streamed message parts (received progressively)
  2. Final message in the onFinish (streamText()) callback

📎 Example

Streamed events:

f: {"messageId":"msg-llkj9PNWMkILH5gkJRFMjlu3"}
9: {"toolCallId":"ezv9IDtNtQrqvptU","toolName":"updateState","args":{"op":"replace","path":"/basic/document_name/fi","value":"Testing testing"}}
a: {"toolCallId":"ezv9IDtNtQrqvptU","result":"State updated."}
0: "\n"
e: {"finishReason":"tool-calls","usage":{"promptTokens":2405,"completionTokens":17},"isContinued":false}
d: {"finishReason":"tool-calls","usage":{"promptTokens":2405,"completionTokens":17}}

Resulting message in useChat() messages array:

{
  "id": "msg-llkj9PNWMkILH5gkJRFMjlu3",
  "createdAt": "...",
  "role": "assistant",
  "content": "\n",
  "parts": [
    {
      "type": "tool-invocation",
      "toolInvocation": {
        "state": "result",
        "step": 0,
        "toolCallId": "ezv9IDtNtQrqvptU",
        "toolName": "updateState",
        "args": {
          "op": "replace",
          "path": "/basic/document_name/fi",
          "value": "Testing testing"
        },
        "result": "State updated."
      }
    },
    { "type": "text", "text": "\n" }
  ],
  "toolInvocations": [
    {
      "state": "result",
      "step": 0,
      "toolCallId": "ezv9IDtNtQrqvptU",
      "toolName": "updateState",
      "args": {
        "op": "replace",
        "path": "/basic/document_name/fi",
        "value": "Testing testing"
      },
      "result": "State updated."
    }
  ],
  "revisionId": "ogxXwFFQmJomF6lT"
}

Message in onFinish callback:

{
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "\n"
    },
    {
      "type": "tool-call",
      "toolCallId": "HK5S5IJB24yscGaa",
      "toolName": "updateState",
      "args": {
        "op": "replace",
        "path": "/basic/document_name/fi",
        "value": "Testing"
      }
    }
  ],
  "id": "msg-GnW9lDH1sdfeDxeXVcSA0LN0"
}
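Comparing just the part-type sequences of the two payloads above makes the reordering easy to see. A minimal self-contained sketch (plain TypeScript; the `tool-invocation` spelling from the UI message and the `tool-call` spelling from the onFinish message are collapsed into one kind for comparison):

```typescript
type Part = { type: string };

// Collapse the two tool-part spellings into one kind so the sequences
// from useChat() and from onFinish can be compared directly.
function kindSequence(parts: Part[]): string[] {
  return parts.map(p =>
    p.type === "tool-invocation" || p.type === "tool-call" ? "tool" : p.type
  );
}

// Part order as assembled by useChat() from the stream (tool call first).
const streamedParts: Part[] = [
  { type: "tool-invocation" },
  { type: "text" },
];

// Part order of the message handed to streamText({ onFinish }).
const onFinishContent: Part[] = [
  { type: "text" },
  { type: "tool-call" },
];

console.log(kindSequence(streamedParts));   // [ 'tool', 'text' ]
console.log(kindSequence(onFinishContent)); // [ 'text', 'tool' ]
```

The two sequences disagree, which is the inconsistency this issue reports.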

🔍 Suspected Cause

The issue likely originates from toResponseMessages(), where the transformation logic reorders message parts.


✅ Expected Behavior

The order of message parts (text, tool-call, etc.) should remain consistent between:

  • Streamed message assembly (useChat internal state), and
  • Final message passed to streamText({onFinish}) callback.

🙏 Request

Please consider ensuring toResponseMessages() maintains a deterministic and consistent part ordering in both streamed and finalized responses.
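Until the ordering is fixed at the source, one possible client-side workaround is to impose a single deterministic order on both representations before persisting or diffing them. A sketch (this is a hypothetical application-level helper, not an AI SDK API) that stably sorts parts so tool parts precede text parts, preserving relative order within each kind:

```typescript
type Part = { type: string; [key: string]: unknown };

// Hypothetical app-level normalizer: stable-sort parts so that tool parts
// ("tool-invocation" / "tool-call") come before text parts. Relative order
// within each group is preserved (Array.prototype.sort is stable in ES2019+).
function normalizePartOrder(parts: Part[]): Part[] {
  const rank = (p: Part) =>
    p.type === "tool-invocation" || p.type === "tool-call" ? 0 : 1;
  return [...parts].sort((a, b) => rank(a) - rank(b));
}

// Applied to both shapes from this issue, the kind sequence comes out the same.
const streamed: Part[] = [{ type: "tool-invocation" }, { type: "text" }];
const finished: Part[] = [{ type: "text" }, { type: "tool-call" }];

console.log(normalizePartOrder(streamed).map(p => p.type));
// [ 'tool-invocation', 'text' ]
console.log(normalizePartOrder(finished).map(p => p.type));
// [ 'tool-call', 'text' ]
```

This only papers over the symptom; a real fix would make toResponseMessages() emit parts in stream order.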

Code example

No response

AI provider

@ai-sdk/google 1.1.12

Additional context

No response

@mikkokut mikkokut added the bug Something isn't working label Mar 19, 2025
@mikkokut
Contributor Author

Note: This bug made it difficult to identify the actual cause of issue #5276.

@lgrammel
Collaborator

The \n that's being sent after the tool calls is an interesting edge case. Curious which model you used?

@leoreisdias

> The \n that's being sent after the tool calls is an interesting edge case. Curious which model you used?

I have also faced this recently, even with the OpenAI 4o model, via streamText and tool calls.

@mikkokut
Contributor Author

> The \n that's being sent after the tool calls is an interesting edge case. Curious which model you used?

I used gemini-2.0-flash-001, and it seems to happen quite often, on roughly half of the function calls.
