Make Apply Streaming Cancelable #5694

Open · RomneyDa wants to merge 16 commits into main

Conversation

RomneyDa (Collaborator) commented May 15, 2025

Makes the "Apply" streaming process cancelable, so users can stop an in-progress code apply operation from the UI or via the "reject diffs" or "cancel stream" commands.

Approach:

  • Because of our apply architecture, I added a simple abort-manager singleton (a map of id -> AbortController) that can be used by the streamDiffLines function, the bottom level of all apply/edit streams (see the first sketch after this list).
  • Also makes llmStreamChat actually abort the controller, rather than just stopping the generator when an aborted message id is found.
  • Changes the fetch package to catch abort errors right next to the fetch call and return a 499 (client closed request) error.
  • That error is caught in the streamResponse function used by all streaming within Continue and simply causes an early return; this covers cases where fetch is aborted after streaming has started (see the second sketch below).
  • Added specific handling for 499 in the roughly 20 abortable non-streaming call sites, mostly odd APIs and completion requests where response.json() is used.
  • This required publishing new versions of the fetch (1.0.10) and openai-adapters (1.0.25) packages (done).
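
For illustration, a minimal sketch of what the abort-manager singleton could look like. Only `StreamAbortManager.getInstance()` is visible in the diff below; the map-based shape and the `get`/`abort`/`clearAll` method names are assumptions.

```ts
// Sketch only: id -> AbortController registry shared across the process.
export class StreamAbortManager {
  private static instance: StreamAbortManager;
  private controllers = new Map<string, AbortController>();

  static getInstance(): StreamAbortManager {
    if (!StreamAbortManager.instance) {
      StreamAbortManager.instance = new StreamAbortManager();
    }
    return StreamAbortManager.instance;
  }

  // Get or create the controller for a given stream id.
  get(id: string): AbortController {
    let controller = this.controllers.get(id);
    if (!controller) {
      controller = new AbortController();
      this.controllers.set(id, controller);
    }
    return controller;
  }

  // Abort and forget a single stream, e.g. on "reject diffs" or "cancel stream".
  abort(id: string): void {
    this.controllers.get(id)?.abort();
    this.controllers.delete(id);
  }

  // Abort everything, e.g. on shutdown.
  clearAll(): void {
    for (const controller of this.controllers.values()) {
      controller.abort();
    }
    this.controllers.clear();
  }
}
```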
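
And a hedged sketch of the 499 plumbing from the last three bullets; the `ClientCancelledError` class and helper names here are illustrative, not the actual @continuedev/fetch code:

```ts
// Sketch only: surface aborts as a 499 "client closed request" error.
class ClientCancelledError extends Error {
  status = 499; // nginx's non-standard "client closed request" code
  constructor() {
    super("499 client closed request");
  }
}

// In the fetch package: translate AbortError right next to the fetch call.
async function fetchWithCancellation(
  url: string,
  init?: RequestInit,
): Promise<Response> {
  try {
    return await fetch(url, init);
  } catch (e: any) {
    if (e.name === "AbortError") {
      throw new ClientCancelledError();
    }
    throw e;
  }
}

// In a streamResponse-style helper: a 499 (or an abort that lands after
// streaming has started) just means the user cancelled, so end quietly.
async function* streamLines(response: Response): AsyncGenerator<string> {
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield decoder.decode(value, { stream: true });
    }
  } catch (e: any) {
    if (e.status === 499 || e.name === "AbortError") return; // early return
    throw e;
  }
}

// In the abortable non-streaming call sites that use response.json():
async function completeSketch(url: string, signal: AbortSignal) {
  try {
    const response = await fetchWithCancellation(url, { signal });
    return await response.json();
  } catch (e: any) {
    if (e.status === 499) return undefined; // cancelled by the user
    throw e;
  }
}
```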

Also

  • Fixes VertexAI streams not being abortable.
  • Shows "Applying..." and the cancel keyboard shortcut while an apply is in progress (same location as "Generating...").
  • Removes duplicated streaming logic by using the @continuedev/fetch package's streaming utils.

To test:

  • Update the package.jsons to use relative paths for the fetch package, and rebuild the fetch package.
  • Use a slower apply model.
  • Cancel streaming mid-apply.


netlify bot commented May 15, 2025

Deploy Preview for continuedev canceled.

🔨 Latest commit: 10e3ae3
🔍 Latest deploy log: https://app.netlify.com/projects/continuedev/deploys/682bd7a8596bda00086c5a77

@RomneyDa RomneyDa marked this pull request as ready for review May 16, 2025 04:13
@RomneyDa RomneyDa requested a review from a team as a code owner May 16, 2025 04:13
@RomneyDa RomneyDa requested review from tomasz-stefaniak and removed request for a team May 16, 2025 04:13
@dosubot dosubot bot added the size:XL This PR changes 500-999 lines, ignoring generated files. label May 16, 2025
@RomneyDa RomneyDa self-assigned this May 16, 2025
@RomneyDa RomneyDa requested a review from sestinj May 16, 2025 04:13
@@ -62,61 +61,7 @@ import { LLMLogger } from "./llm/logger";
import { llmStreamChat } from "./llm/streamChat";
import type { FromCoreProtocol, ToCoreProtocol } from "./protocol";
import type { IMessenger, Message } from "./protocol/messenger";

// This function is used for jetbrains inline edit and apply
RomneyDa (Collaborator, Author) commented:

Removed streamDiffLinesGenerator and moved its logic inline: its whole point was to account for aborted message ids, which wasn't being used; it duplicated about half the lines; and it is now defunct.

  input: string;
  language: string | undefined;
  onlyOneInsertion: boolean;
  overridePrompt: ChatMessage[] | undefined;
  rulesToInclude: RuleWithSource[] | undefined;
}): AsyncGenerator<DiffLine> {
+ const abortManager = StreamAbortManager.getInstance();
RomneyDa (Collaborator, Author) commented:

Took the singleton approach so that the functional streamDiffLines function (also used in VS Code-only code) can have persistent abort controllers that can be cancelled via core messaging.
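
As a rough usage sketch (building on the `StreamAbortManager` sketch above; the `streamId` parameter and the handler name are assumptions), this is the shape that persistence enables:

```ts
// Sketch only: streamDiffLines obtains a persistent controller by id.
async function* streamDiffLinesSketch(
  streamId: string,
  // ...the real function also takes prefix/highlighted/suffix, llm, etc.
): AsyncGenerator<string> {
  const abortManager = StreamAbortManager.getInstance();
  const controller = abortManager.get(streamId);
  try {
    // Pass controller.signal into model.streamChat(...) here so an abort
    // tears down the underlying connection, not just the generator.
    yield "placeholder diff line";
  } finally {
    abortManager.abort(streamId); // drop the entry; harmless if already done
  }
}

// Elsewhere, a "cancel apply"-style core message handler (name assumed) can
// stop an in-flight apply from anywhere that shares the singleton:
function onCancelApplyMessage(streamId: string): void {
  StreamAbortManager.getInstance().abort(streamId);
}
```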

  } else {
    const gen = model.streamChat(
      messages,
-     new AbortController().signal,
+     abortController.signal,
RomneyDa (Collaborator, Author) commented:

This is significant because before, the connection would keep hanging/running in the background until complete, even if we had already "aborted" via message ids at that point.

@@ -323,14 +341,14 @@ class VertexAI extends BaseLLM {
        ? this.geminiInstance.removeSystemMessage(messages)
        : messages;
      if (this.vertexProvider === "gemini") {
-       yield* this.streamChatGemini(convertedMsgs, options);
+       yield* this.streamChatGemini(convertedMsgs, options, signal);
RomneyDa (Collaborator, Author) commented:

VertexAI fixes: these streams weren't abortable.
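
A minimal sketch of the shape of this fix, assuming a plain fetch-based stream; the URL and body here are placeholders, not the real Vertex API call:

```ts
// Sketch only: thread the caller's AbortSignal all the way down to fetch
// so an abort actually closes the connection.
async function* streamChatGeminiSketch(
  body: unknown,
  signal: AbortSignal,
): AsyncGenerator<string> {
  const response = await fetch("https://example.invalid/vertex-gemini", {
    method: "POST",
    body: JSON.stringify(body),
    signal, // previously omitted, so aborting never reached the connection
  });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    yield decoder.decode(value, { stream: true });
  }
}
```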

sestinj added a commit that referenced this pull request May 17, 2025
Labels
size:XL This PR changes 500-999 lines, ignoring generated files.
Projects
Status: Todo