
feat: experimental query.live remote function #15563

Open
Rich-Harris wants to merge 38 commits into main from query-live

Conversation

@Rich-Harris
Member

@Rich-Harris Rich-Harris commented Mar 19, 2026

This implements a new query.live remote function that provides a live view of some real-time data. The semantics are similar to query and query.batch: you can {await myLiveQuery(123)} in your component (or in a $derived) and it will update automatically.

  • Instead of returning a value (like query) or a function that returns data for a specific input (like query.batch), the callback to query.live should return an AsyncIterator. Typically, it will be implemented as an async generator function
  • If called during SSR, the first yielded value is rendered/serialized, then the iterator is immediately closed. Upon hydration, the client initially uses the serialized value, then connects to the live query
  • The same live query can be used in multiple places — each instance will share the same underlying connection to the server
  • When the query is no longer used anywhere on the page (defined in Svelte reactivity terms as 'in a currently-active effect'), we disconnect
  • When the client disconnects (whether because the query is no longer used, or because the tab was closed or connectivity was lost etc) the iterator is closed, allowing any cleanup to happen
  • The live query has a .connected property, which becomes false if the connection drops while the query is actively used. The client will proactively attempt to reconnect using exponential backoff with jitter, and will also retry when navigator.onLine goes from false to true; you can force a reconnection attempt with the .reconnect() method
  • In the same way that a query can (since feat: hydratable and a more consistent remote functions model #15533) be accessed in e.g. an event handler with the myQuery(123).run() method, which returns a Promise, you can .run() a live query to get the raw AsyncIterator
  • The live query callback receives a { signal } as well as the (validated) argument (correction: there's no need for this; we can just use getRequestEvent().request.signal)
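As a sketch, the backoff-with-jitter behaviour described above might look like this (the constants and the backoff_delay name are illustrative, not the values the implementation actually uses):

```javascript
// 'Full jitter' exponential backoff: each attempt waits a random amount of
// time between 0 and min(cap, base * 2^attempt), so simultaneous clients
// don't all hammer the server at the same moment. Constants are illustrative.
function backoff_delay(attempt, base = 1000, cap = 30000) {
  const exp = Math.min(cap, base * 2 ** attempt);
  return Math.random() * exp;
}
```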

So the most basic example might look like this:

import { query } from '$app/server';

const sleep = (ms) => new Promise((f) => setTimeout(f, ms));

export const now = query.live(async function* () {
  while (true) {
    yield new Date();
    await sleep(1000);
  }
});
<p>the time is {await now()}</p>

Because it is stateless, it is well suited to e.g. serverless environments — if you hit the duration limit, then now().connected will become false for a moment, then the client will automatically reconnect. Under the hood, it is implemented as a normal Response with chunked transfer encoding (not an EventSource, since that provides less control over reconnection).
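As a rough illustration of that transport choice (an assumption about the mechanism, not the actual SvelteKit code), an async iterator can back a streamed Response body directly:

```javascript
// Turn an async iterator into a streamed Response. Cancelling the stream
// (e.g. the client disconnecting) closes the iterator so cleanup can run.
// `ticker` and `to_response` are hypothetical names for illustration.
async function* ticker() {
  for (let i = 0; i < 3; i++) yield { tick: i };
}

function to_response(iterator) {
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async pull(controller) {
      const { value, done } = await iterator.next();
      if (done) controller.close();
      else controller.enqueue(encoder.encode(JSON.stringify(value) + '\n'));
    },
    cancel() {
      // closing the stream closes the iterator, allowing cleanup to happen
      void iterator.return?.();
    }
  });

  return new Response(stream, { headers: { 'content-type': 'text/plain' } });
}
```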

In cases where the callback needs to do some setup and disposal work, you could either use this pattern...

export const foo = query.live(async function* () {
  setup();

  try {
    while (true) {
      yield ...
    }
  } finally {
    cleanup();
  }
});

...or, if you're using a sufficiently modern server runtime, explicit resource management:

export const foo = query.live(async function* () {
  using resource = setup();

  while (true) {
    yield ...
  }
});

Another useful construct is yield*. My expectation is that by leaning on a language primitive, rather than using e.g. a callback-based API, we will both encourage and benefit from the ecosystem standardising on async iterators and disposables.
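For instance (a plain-JS sketch with hypothetical generator names, no SvelteKit involved), yield* lets one async generator delegate to another, so common sources can be factored out and reused across live queries:

```javascript
// yield* forwards every value (and the completion) of the inner generator.
async function* countdown(from) {
  for (let i = from; i > 0; i--) yield i;
}

async function* with_liftoff(from) {
  yield* countdown(from); // delegate to the shared source
  yield 'liftoff';
}
```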

Possible follow-ups:

  • the example service worker we include in the docs doesn't work for live queries, because the response is cloned — this means that if you close the tab, the connection stays open. I'm not sure if there's a sensible way to fix that in existing apps, but we can certainly fix it for new apps
  • in production you can have lots of open connections, but over HTTP/1.1 (which includes most people's local dev setups) browsers allow only a small number per origin (six, typically), which you can very quickly exhaust. It might make sense to use websockets for transport in local dev, rather than HTTP (unfortunately websockets aren't a great default for prod, because not all environments support them)
  • handle finite iterators (e.g. a progress report) — don't attempt to reconnect once it finishes
  • the code could probably use a tidy-up in parts
  • build some fun demos, improve the docs
  • probably some other stuff

Please don't delete this checklist! Before submitting the PR, please make sure you do the following:

  • It's really useful if your PR references an issue where it is discussed ahead of time. In many cases, features are absent for a reason. For large changes, please create an RFC: https://github.com/sveltejs/rfcs
  • This message body should clearly illustrate what problems it solves.
  • Ideally, include a test that fails without this PR but passes with it.

Tests

  • Run the tests with pnpm test and lint the project with pnpm lint and pnpm check

Changesets

  • If your PR makes a change that should be noted in one or more packages' changelogs, generate a changeset by running pnpm changeset and following the prompts. Changesets that add features should be minor and those that fix bugs should be patch. Please prefix changeset messages with feat:, fix:, or chore:.

Edits

  • Please ensure that 'Allow edits from maintainers' is checked. PRs without this option may be closed.

@changeset-bot

changeset-bot bot commented Mar 19, 2026

🦋 Changeset detected

Latest commit: ff17ca9

The changes in this PR will be included in the next version bump.

This PR includes changesets to release 1 package: @sveltejs/kit (minor)



@teemingc
Member

Does this replace #14292 ?

@Rich-Harris
Member Author

Ohhhh, I know why that happens — it's because the live query contains this code...

if (!user_id) {
  yield null;
  return;
}

...which means the response is finite, which means it disconnects, which means it attempts to reconnect. So it's a bug in both the app code and the framework — app code because the returned async iterator shouldn't be finite, and framework because we allow it to be finite (and/or attempt to reconnect to the endpoint despite that, if we wanted to allow finite iterators for some reason).
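One way to fix the app-code half (a sketch with hypothetical names; the real callback would get the signal from getRequestEvent().request.signal) is to yield the empty value and then park until the request is aborted, so the iterator stays infinite:

```javascript
// Resolve immediately if already aborted, otherwise wait for the abort event.
const parked = (signal) =>
  signal.aborted
    ? Promise.resolve()
    : new Promise((resolve) => signal.addEventListener('abort', resolve, { once: true }));

async function* user_feed(user_id, signal) {
  if (!user_id) {
    yield null;
    await parked(signal); // stay connected instead of finishing the iterator
    return;
  }
  // ...the normal live loop would go here
}
```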

@Rich-Harris
Member Author

all the logic for this query.live function needs to be run on a Durable Object

Is the websocket piece necessary for that? Or do I just need a reference to the DO class? (I don't really understand how they work and the docs are kinda dense)

@ottomated
Contributor

Is the websocket piece necessary for that? Or do I just need a reference to the DO class? (I don't really understand how they work and the docs are kinda dense)

You can get a handle to the DO class on the platform.env object, and then either call RPC methods on it or forward a Request object for it to handle and return a Response. WebSockets would be best, but you can stream any kind of response.

@Rich-Harris
Member Author

So could the DO class implement publish and subscribe?

@ottomated
Contributor

So could the DO class implement publish and subscribe?

Publish yes, subscribe no unless you make an internal WebSocket connection from the query function to the DO. That might work for smaller apps, but it would increase latency and without the ability to handle multiple live queries over a single WS connection it limits the amount of traffic possible (DOs don't horizontally scale without manually implementing that).

@langpavel

Hi @Rich-Harris and @ottomated.

TL;DR: This PR is about query.live only, and looks great! Let's offer this ASAP 🔥 🙏 👏👏👏

There are some more suggestions; please direct me to a good place to discuss them.

Let's split this into parts:

  • query.live – this PR – complete snapshot, continuously replaced state.
    Each yield is the whole truth; previous values are irrelevant.

    • always complete, same shape
    • can skip yields (only latest matters)
    • unidirectional
    • possibly infinite
    • Use case: current time, price ticker, online viewer count…
  • query.stream – new paradigm (sorry @dummdidumm) – event stream, constructed state.
    continuously adding new information on top of previous.
    (Every new message is addition, not replacement.)

    • every yield matters, order matters
    • cannot miss the event
    • unidirectional
    • possibly finite
    • state → message → new state (familiar to anyone coming from Redux)
    • Use case: AI response stream, log tailing, new mentions, progress report…
  • channel – proposal – bidirectional channel primitive (in flux, not yet)

    • multiplexed by adapter (WebSocket is native for this),
    • connection allowed by adapter, advanced, future proof

@Rich-Harris
Member Author

Publish yes, subscribe no unless you make an internal WebSocket connection from the query function to the DO.

Do you have any thoughts on whether and how it would be possible to enable subscriptions within a live query? The API here is deliberately unopinionated, the hope being that it can be made to work with a variety of setups, but by extension it might not be able to compete with a websocket-based approach.

For example the demo above uses polling, but I also got it to work with postgres LISTEN on a non-pooled connection, and separately I'm building an Electron app that uses chokidar to create a live view of a directory. All work great. The polling approach is obviously limited in frequency, and the LISTEN approach is limited in scale, but a combination of polling + proactive reconnection (you can call my_query().reconnect() in a command, similar to refreshing a regular query in a single-flight mutation) yields decent results with minimal fuss.

One API design question I'm pondering: should yielded values be deduped? In a polling scenario it's mildly annoying to have to do this sort of thing:

export const getThings = query.live(async function*() {
  const signal = getRequestEvent().request.signal;

  let prev;

  while (!signal.aborted) {
    const data = await db.getThings();

    if (prev !== (prev = JSON.stringify(data))) {
      yield data;
    }

    await sleep(1000);
  }
});

If deduping was automatic, it could just be this:

export const getThings = query.live(async function*() {
  const signal = getRequestEvent().request.signal;

  while (!signal.aborted) {
    yield await db.getThings();
    await sleep(1000);
  }
});

It does mean that you can't use query.live for streams, where every chunk matters even if it's identical to the previous one. But maybe that's okay?

@henrykrinkle01
Contributor

I think deduping should be automatic. For streams you can always include something to differentiate between chunks like a timestamp

@Rich-Harris Rich-Harris marked this pull request as ready for review March 23, 2026 18:04
Comment on lines +166 to +168
if (done) {
throw new Error(`query.live '${__.name}' did not yield a value`);
}
Member

done and value can both be set if the generator's first statement is a return:

async function* test(){
    return 42;
}
const gen = test();
console.log(await gen.next()); // {value: 42, done: true}

it might be an edge case, but maybe it could happen if someone is checking a condition and returning before starting the live query?

Member Author

The correct way to write that would be this:

async function* test(){
    yield 42;
    return;
}

I don't think it makes sense to treat return and yield interchangeably

Member

But it's technically valid JS, and shouldn't return be considered "the last yield"?

Member Author

Not really — if you run this...

function* foo() {
	yield 1;
	yield 2;
	yield 3;
	return 4;
}

for (const value of foo()) {
	console.log(value);
}

...it will log 1 then 2 then 3, but not 4

result: stringify(result, transport),
refreshes: result.issues ? undefined : await serialize_refreshes(meta.remote_refreshes),
reconnects: serialize_reconnects()
Member

Just throwing this out there (not spent any time thinking if that could be the case)...could this also have the same problem as client refreshes?

Member Author

no, it doesn't create any additional server-side work

Contributor

...shouldn't it?

Shouldn't the behavior here be the same as SSR, where we gather the first response from the iterables and return it with the command result? Otherwise you'll have the command complete and some indeterminate amount of time while the live queries reconnect and bring in their first values at different times, rather than all at once.

@phi-bre
Contributor

phi-bre commented Mar 23, 2026

Just to give an example that would speak against dedupe by default: Memory consumption.

Cloudflare workers are incredibly limited in terms of RAM (128MB), so having a mechanism that caches an entire payload by default for a long time (query.live is long-lived by design; why else would you use it over a normal query?) with no way to opt out feels a bit odd. It could still be added in userland with a composer function like this or similar:

export const getThings = query.live(deduped(
  async function*() {
    const signal = getRequestEvent().request.signal;
  
    while (!signal.aborted) {
      yield await db.getThings();
      await sleep(1000);
    }
  }
));

Additionally, this way the deduping logic can be customized (e.g. trading off CPU time vs RAM by hashing instead of string comparison of the entire payload)
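A minimal version of such a composer might look like this (hypothetical userland code; the deduped name and the pluggable hash parameter are assumptions, with JSON.stringify as a naive default):

```javascript
// Wraps an async generator function; values whose hashed form matches the
// previously-yielded one are skipped. Swapping in a cheaper `hash` trades
// CPU time against memory, as discussed above.
function deduped(fn, hash = JSON.stringify) {
  return async function* (...args) {
    let prev;
    for await (const value of fn.apply(this, args)) {
      const h = hash(value);
      if (h !== prev) {
        prev = h;
        yield value;
      }
    }
  };
}
```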

@Rich-Harris
Member Author

A Cloudflare worker only handles a single request at once, so unless a single serialized payload is large enough to burst the memory banks (in which case query.live is surely the wrong tool for the job — do you really want to push megabytes of data to the client on a continual basis?) you're in the clear

@phi-bre
Contributor

phi-bre commented Mar 23, 2026

https://developers.cloudflare.com/workers/platform/limits/#memory

Each isolate can consume up to 128 MB of memory, including the JavaScript heap and WebAssembly allocations. This limit is per-isolate, not per-invocation. A single isolate can handle many concurrent requests.

I totally agree, having a single multi-megabyte payload should not be the use case of query.live but I'm afraid the "sleep" will let the worker start processing other requests in the same isolate at the same time, where even a (rough napkin math) payload of 1MB would only require around 100 requests to take down the whole isolate.

I ran into CF memory limits more often than I would like to admit, so I just wanted to at least mention it in this thread.

(I've had similar concerns with the way normal queries and refreshes attach their entire payload to the async local storage, but that's a bit too off-topic for here)

@ottomated
Contributor

Do you have any thoughts on whether and how it would be possible to enable subscriptions within a live query? The API here is deliberately unopinionated, the hope being that it can be made to work with a variety of setups, but by extension it might not be able to compete with a websocket-based approach.

It's a tough problem to solve. This is the closest I've got:

hooks.server.ts
// Provide two default transports that the user can switch between as they wish
type Input = {
  // e.g. "k932er/myLiveQuery"
  queryId: string;
  event: RequestEvent;
  transport: {
    stream(queryId: string, event: RequestEvent): Response;
    websocket(queryId: string, event: RequestEvent): Response;
  }
};

// Default hook would just use the stream transport
export const handleLiveQuery = async ({ transport, queryId, event }: Input) => {
  return transport.stream(queryId, event);
};

// Durable Object example
export const handleLiveQuery = async ({ queryId, event }: Input) => {
  const upgrade = event.request.headers.get('Upgrade');
  if (upgrade !== 'websocket') error(426, 'Expected websocket');
  // Pass off to a Durable Object
  // canonical way of passing variables like this I think
  const headers = new Headers(event.request.headers);
  headers.set('X-LiveQueryId', queryId);

  // do extra stuff like load balancing
  const socketName = queryId + '_' + Math.floor(Math.random() * 5);
  const socket = event.platform.env.SOCKET.getByName(socketName);
  return socket.fetch(event.request, {
    headers
  });
};
query.remote.ts
import { query } from '$app/server';

export const myLiveQuery = query.live(async function* () {
  while (true) {
    yield await globalThis.subscribe();
  }
});
worker.ts
// worker.ts
export class Socket extends DurableObject {
  #subscriptions = new Set();
  constructor(ctx: DurableObjectState, env: unknown) {
    super(ctx, env);
    globalThis.subscribe = () => {
      const { promise, resolve } = Promise.withResolvers();
      this.#subscriptions.add(resolve);
      return promise;
    };
  }

  publish(payload) {
    this.#subscriptions.forEach((resolve) => resolve(payload));
    this.#subscriptions.clear();
  }

  async fetch(req: Request): Promise<Response> {
    const queryId = req.headers.get('X-LiveQueryId');

    const { 0: client, 1: server } = new WebSocketPair();
    this.ctx.acceptWebSocket(server);

    this.ctx.waitUntil(
      (async () => {
        for await (const payload of INTERNAL_SVELTEKIT_MAGIC_GET_QUERY_LIVE_ITERABLE[queryId]()) {
          server.send(payload);
        }
      })()
    );

    return new Response(null, {
      status: 101,
      webSocket: client,
    });
  }
}

Cons:

  • Hacky globalThis.subscribe stuff
  • Needs to run a new async iterable for each connected client - ideally I would be able to have my query.live logic run once and broadcast to all clients
  • Having the query.live async iterable running all the time doesn't allow the DO to hibernate
  • Complicated to write

I think the current API just isn't flexible enough for this use case, unfortunately.

On deduping, I think it should be opt-in and turned off by default. It's unintuitive that the async iterable you get on the client isn't the exact same as the one on the server - too magic.

@Rich-Harris
Member Author

Note that deduping can't be delegated to a helper without that helper needing to implement its own serialization, which is straightforward in simple cases but can get more complex when you have custom types and self-references and whatnot. So it would need to at minimum take a second argument that hashes a payload (which could perhaps default to JSON.stringify).

But even then, you're paying the cost of serialization twice, which seems silly. The efficient thing to do is dedupe at the point that you were going to serialize anyway. I'm not sure doing it automatically is 'too magic' — it's a very straightforward and easy-to-document behaviour that saves effort and bandwidth.

If we hashed the payload automatically would that ease the concerns around memory? The hash function we use elsewhere in the codebase will happily turn tens of kilobytes into a small number in a fraction of a millisecond, so I'm not particularly worried about the CPU trade-off.

(Honestly though, if a worker runs out of memory because it's trying to serve too many users, that seems like a bug in workers rather than something every user needs to tiptoe around!)

@phi-bre
Contributor

phi-bre commented Mar 23, 2026

Ah, true... I should've taken a closer look at the code. It also makes perfect sense if the SvelteKit team decides not to include workarounds for all users just to alleviate the (very) strict limitations of a single platform, but the hashing does sound like an appropriate fix (and one that could still be added in a later release if needed).

Unrelated to the memory issue, I did agree with @ottomated on the unintuitive part as well, but on closer inspection maybe that's just down to me misinterpreting the actual use case of query.live by assuming it was the same as query.stream (for example streaming a large list to show the first few items immediately, chart data, LLM streaming output, or anything else I would otherwise need to go out of my way to introduce an endpoint + ReadableStream for).

For a "live" view of a single data point it makes a lot more sense to do deduping by default.

@leon

leon commented Mar 27, 2026

I saw that WebTransport is coming in Safari 26.4, which means we're closer to being able to use it as a replacement for WebSockets. I don't know if it's applicable here, but I thought I'd share it in case someone had missed it.

https://developer.mozilla.org/en-US/docs/Web/API/WebTransport

@Antonio-Bennett
Contributor

I wonder if Cap'n Web would be a nice transport layer for this, or whether a query.stream could also solve the worker friction while working everywhere else: https://github.com/cloudflare/capnweb

Contributor

Finally got around to a full review of this.

I think the behavior of reconnecting on the server should be a little different...

Other than that, mostly a bunch of nits

/** `true` if the live stream is currently connected. */
readonly connected: boolean;
/** `true` once the live stream iterator has completed. */
readonly finished: boolean;
Contributor

This is kind of confusing... can finished go back to false if you call refresh?

) => RemoteQuery<Output>;

/**
* The return value of a remote `query.live` function. See [Remote functions](https://svelte.dev/docs/kit/remote-functions#query.live) for full documentation.
Contributor

This JSDoc is a little confusing. It's true that this is literally what query.live returns but it's probably more helpful to say it's just "The type of a live query function" or something similar.

);
}

reconnects.add(create_remote_key(__.id, stringify_remote_arg(arg, state.transport)));
Contributor

We should probably add docs for reconnecting (...single-flight reconnects?)

void invalidateAll();
}

if (form_result.reconnects) {
Contributor

Should this be tied into invalidateAll like form_result.refreshes is? Should invalidateAll reconnect all live queries? Should calling reconnect in a form handler cause us to not invalidateAll automatically?

Comment on lines +76 to +82
if (DEV) {
for (const [key, entry] of live_query_map) {
if (key === id || key.startsWith(id + '/')) {
void entry.resource.reconnect();
}
}
}
Contributor

How should live queries work with single-flight mutations? If we want to be able to do updates(live_query) to reconnect all instances of that query, this is going to need the two-tiered caching I added in the requested PR. This logic will also have to change. We'd also need to add some additional handling to support requested(live_query, ...)

* @implements {Promise<T>}
*/
export class LiveQuery {
_key;
Contributor

We removed _key in favor of a symbol property in my requested PR; might as well use the same thing here?

Contributor

Especially if we're going to integrate this with requested (I think we should)

* @template T
* @implements {Promise<T>}
*/
export class LiveQuery {
Contributor

I ran a couple of different AI code reviews on this PR -- the most useful feedback was probably this:

- I re-checked the output and continued the review; there is still one actionable issue and no additional high-confidence bugs.
- Medium severity bug: query.live can’t recover for await consumers after an initial connection failure.
- In packages/kit/src/runtime/client/remote-functions/query.svelte.js:601, LiveQuery creates a single first-value promise once.
- In packages/kit/src/runtime/client/remote-functions/query.svelte.js:701, that promise is rejected if the first stream attempt fails before readiness.
- In packages/kit/src/runtime/client/remote-functions/query.svelte.js:566, then keeps chaining from that same rejected promise, so {await live} remains permanently failed even if reconnect later succeeds and current updates.

cached.count += 1;

return /** @type {RemoteQueryCacheEntry<T>} */ (cached);
Contributor

Was this actually necessary? Why?

withOverride(fn) {
const entry = this.#get_or_create_cache_entry();
const override = /** @type {Query<T>} */ (entry.resource).withOverride(fn);
Contributor

?

