
feat(ai): implement streamModelCall function for streaming text generation #13682

Draft

lgrammel wants to merge 2 commits into main from lg/UaYaVbRl

Conversation

@lgrammel
Collaborator

feat(ai): implement streamModelCall function for streaming text generation

  • Added a new `streamModelCall` function to handle streaming text generation with customizable tool choices and retry logic.
  • Integrated this function into the existing `DefaultStreamTextResult` class, replacing previous inline logic for improved modularity and readability.
  • Enhanced notification handling during the streaming process to include prompt messages and step details.

This change aims to streamline the text generation process and improve the overall architecture by promoting code reuse.
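As a rough illustration only, a hypothetical stub of the described helper might look like this (parameter names, types, and the retry shape are guesses for illustration, not the PR's actual code):

```typescript
// Hypothetical stub of the extracted helper; the PR's real parameter and
// return types will differ.
type ToolChoice = "auto" | "none" | "required";

interface StreamModelCallOptions {
  promptMessages: string[]; // stand-in for the SDK's prompt message type
  toolChoice: ToolChoice;   // "customizable tool choices"
  maxRetries: number;       // "retry logic"
}

async function streamModelCall(
  options: StreamModelCallOptions,
): Promise<ReadableStream<string>> {
  // Placeholder retry loop standing in for the real retry logic.
  for (let attempt = 0; attempt <= options.maxRetries; attempt++) {
    try {
      return new ReadableStream<string>({
        start(controller) {
          controller.enqueue(`tool-choice:${options.toolChoice}`);
          controller.close();
        },
      });
    } catch {
      // retry on failure (never reached in this stub)
    }
  }
  throw new Error("retries exhausted");
}
```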

Background

Summary

Manual Verification

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run `pnpm changeset` in the project root)
  • I have reviewed this pull request (self-review)

Future Work

Related Issues

@tigent bot added the ai/core (core functions like generateText, streamText, etc.; provider utils and provider spec) and maintenance (CI, internal documentation, automations, etc.) labels Mar 20, 2026
@gr2m
Collaborator

gr2m commented Mar 20, 2026

Claude's assessment

Here's the root cause of the abort signal test failures:

Root Cause

The issue is a subtle microtask timing difference caused by extracting `pipeThrough(createStreamTextPartTransform)` into the async function `streamModelCall`.

Original code (working)

After `await retry(() => doStream())`, everything happens synchronously in a single block:

`pipeThrough(createStreamTextPartTransform)` → `invokeToolCallbacksFromStream` → `pipeThrough(createExecuteToolsTransformation)` → `addStream(...)`

There's no microtask boundary between `pipeThrough(createStreamTextPartTransform)` and `addStream`. The internal pipe from `modelStream` → `createStreamTextPartTransform` is set up but doesn't execute until the consumer starts pulling through the stitchable stream.
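Assuming the transforms are standard Web Streams `TransformStream`s, the working shape can be sketched like this (the transform and `addStream` names are stand-ins for the SDK internals, not its actual code):

```typescript
// Fake model stream emitting the two chunks from the trace below.
const modelStream = new ReadableStream<string>({
  start(controller) {
    controller.enqueue("start-step");
    controller.enqueue("text-start");
    controller.close();
  },
});

// Stand-ins for the real transforms (identity transforms here).
const createStreamTextPartTransform = () =>
  new TransformStream<string, string>();
const createExecuteToolsTransformation = () =>
  new TransformStream<string, string>();

const collected: string[] = [];
function addStream(stream: ReadableStream<string>): Promise<void> {
  // Stand-in for the stitchable stream consumer pulling chunks.
  return stream.pipeTo(
    new WritableStream<string>({ write: (c) => void collected.push(c) }),
  );
}

// Original shape: every pipeThrough and the handoff to addStream happen
// in one synchronous block, with no await in between.
const done = addStream(
  modelStream
    .pipeThrough(createStreamTextPartTransform())
    .pipeThrough(createExecuteToolsTransformation()),
);
done.then(() => console.log(collected));
```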

PR code (broken)

`pipeThrough(createStreamTextPartTransform)` is called inside the async function `streamModelCall`, right before `return`. Because `streamModelCall` is async, its return implicitly wraps the result in a resolved Promise. The caller's `await streamModelCall(...)` introduces a microtask boundary before the remaining sync operations (`invokeToolCallbacksFromStream` → `addStream`).

During this microtask boundary, the pipe from `modelStream` → `createStreamTextPartTransform` (set up by `pipeThrough`) gets a chance to run and read ahead from the model stream. This changes the internal buffering state of the stream pipeline. By the time `addStream` is called and the resilient stream starts reading, more data is already buffered in the pipeline, which means the abort error from the model (triggered at pull 3) doesn't propagate fast enough to prevent extra chunks from being read.
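The microtask gap itself can be demonstrated in isolation. This self-contained sketch (not SDK code) shows a queued microtask running between an async helper's return and the caller's next synchronous statement:

```typescript
// Minimal demonstration of how an async function's implicit Promise
// introduces a microtask boundary that lets other queued work run before
// the caller's next synchronous step.
const events: string[] = [];

function pipeThroughSync(): string {
  events.push("pipeThrough");
  return "stream";
}

async function pipeThroughAsync(): Promise<string> {
  events.push("pipeThrough");
  return "stream"; // return value is wrapped in a resolved Promise
}

async function run(): Promise<void> {
  // --- sync variant: no gap between pipeThrough and addStream ---
  pipeThroughSync();
  events.push("addStream");

  // --- async variant: a queued microtask (standing in for the internal
  // pipe reading ahead) runs inside the await gap ---
  Promise.resolve().then(() => events.push("internal pipe reads ahead"));
  await pipeThroughAsync();
  events.push("addStream");

  console.log(events.join(" | "));
  // logs: pipeThrough | addStream | pipeThrough | internal pipe reads ahead | addStream
}
run();
```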

Concrete execution trace comparison

Original: After the resilient stream reads `start-step`, its next pull triggers model pull 3 (abort) as a side effect, and the error propagates before `text-start` can be read from the buffer → abort detected immediately.

PR: After the resilient stream reads `start-step`, the buffering state is different due to the microtask gap. `text-start` is successfully read from the buffer before model pull 3 triggers → one extra chunk appears before abort.

Fix

The fix is to keep `pipeThrough(createStreamTextPartTransform)` in the same synchronous block as `addStream`. Either:

  1. Don't include `pipeThrough` in `streamModelCall`: have `streamModelCall` return the raw `languageModelStream` and let the caller pipe it through `createStreamTextPartTransform` synchronously alongside the other transforms.
  2. Move all the remaining pipe operations into `streamModelCall`, so that everything from `pipeThrough` through `addStream` is synchronous within the same function.
  3. Don't make `streamModelCall` async. This isn't really feasible, since it needs to await several operations internally.

Option 1 is the simplest and most correct fix.
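A hedged sketch of option 1 (all identifiers are stand-ins for the SDK internals, not its actual code): `streamModelCall` stays async but returns the raw stream, and the caller performs every `pipeThrough` plus `addStream` in one synchronous block, so no microtask gap can let the pipe read ahead:

```typescript
// Stand-ins for the real transforms (identity transforms here).
const createStreamTextPartTransform = () =>
  new TransformStream<string, string>();
const createExecuteToolsTransformation = () =>
  new TransformStream<string, string>();

async function streamModelCall(): Promise<ReadableStream<string>> {
  // Stand-in for `await retry(() => doStream())`. Note: no pipeThrough here;
  // the raw model stream is returned untouched.
  return new ReadableStream<string>({
    start(controller) {
      controller.enqueue("start-step");
      controller.close();
    },
  });
}

const received: string[] = [];
function addStream(stream: ReadableStream<string>): Promise<void> {
  return stream.pipeTo(
    new WritableStream<string>({ write: (c) => void received.push(c) }),
  );
}

async function caller(): Promise<void> {
  const languageModelStream = await streamModelCall(); // the only await
  // Everything below runs synchronously, mirroring the original code path:
  // no microtask boundary between pipeThrough and addStream.
  await addStream(
    languageModelStream
      .pipeThrough(createStreamTextPartTransform())
      .pipeThrough(createExecuteToolsTransformation()),
  );
  console.log(received);
}
const finished = caller();
```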

@lgrammel
Collaborator Author

This breaks the LLM suspense `stopStream` behavior. There may be fixes for that which we can explore further.

@gr2m
Collaborator

gr2m commented Mar 20, 2026

As a next step, we will look deeper into stitchable stream and abort timing.
