Out of order streaming #16
Replies: 7 comments 3 replies
-
This is out-of-order streaming and we'll take a look at how to do it after v2 is released.
-
Just a note to mention that the same goes for […]. Should we update this RFC to be about out-of-order streaming? That would enable non-blocking […].
-
Summarizing Misko here:
-
Some more thoughts
Right now we do two passes over data that we serialize, to identify shared
references. When we do out of order streaming, we can't do this any more.
This means that we should serialize in a single pass, and keep a map of all
objects and their emitted location.
If we simply emit everything in the root array, we'll get a very long array
and the references will take more bytes.
Instead, we could encode a reference as the path to the object in the
nested serialized array, as an array of indexes. To prevent having to
specify this every time, we would store this as a root object.
On my phone so it's hard to draw a picture, sorry.
So the serialization would become (see the sketch after this list):
- single pass, but await promises
- keep a map between each object and its nested array output location
- when you encounter an object again, output the backreference as a root
object that you then reference. So a `[Ref, 123]` to a `[Ref, 4, 3, 1]` to
the object. Maybe skip the double ref if the path is short.
- streaming just continues adding to the root array
- the vnode map needs similar handling, perhaps it should just be output as
part of the serialization.
- the object map must be kept in memory until SSR is complete
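To make the path-based back-references concrete, here is a minimal TypeScript sketch of the single-pass idea. The `Ref` tag and the flattened key/value wire shape are hypothetical, not Qwik's actual serialization format, and Promise handling is left out here (it is picked up in the later comments):

```ts
// Hypothetical tag marking "reference to an earlier object".
const Ref = '\u0001Ref';

type Encoded = unknown;

function serializeRoots(roots: unknown[]): Encoded[] {
  // Map from object -> path of indexes where it was first emitted.
  const seen = new Map<object, number[]>();

  const encode = (value: unknown, path: number[]): Encoded => {
    if (value === null || typeof value !== 'object') {
      return value; // primitives are emitted inline
    }
    const prior = seen.get(value as object);
    if (prior) {
      // Back-reference: the path of indexes to the first occurrence,
      // e.g. [Ref, 4, 3, 1]. A long path could instead be promoted to a
      // root entry so later references become a short [Ref, rootIndex].
      return [Ref, ...prior];
    }
    seen.set(value as object, path);
    if (Array.isArray(value)) {
      return value.map((item, i) => encode(item, [...path, i]));
    }
    // Plain objects are flattened to [key, value, key, value, ...] pairs.
    const pairs: Encoded[] = [];
    Object.entries(value).forEach(([k, v], i) => {
      pairs.push(k, encode(v, [...path, i * 2 + 1]));
    });
    return pairs;
  };

  return roots.map((root, i) => encode(root, [i]));
}

// Shared objects serialize once; the second occurrence becomes a back-reference.
const shared = { x: 1 };
console.log(JSON.stringify(serializeRoots([shared, { a: shared }])));
// -> [["x",1],["a",["\u0001Ref",0]]]
```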
Wout.
…On Fri, 20 Dec 2024, 23:29 Shai Reznik, ***@***.***> wrote:
> Following our RFC monthly
> This is all the potential things to consider for the implementation of OOO streaming 👆
-
Actually, for Promises we could remember their position and emit the value as a root ref with an empty offset. Then, when deserializing, we scan the roots for these results and patch the promised refs.
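A rough sketch of what that client-side patching could look like, with hypothetical `PromiseRef`/`PromiseValue` tags standing in for whatever the real serializer would use:

```ts
// Hypothetical tags: the serializer writes a [PromiseRef, id] placeholder
// where the promise was, and later appends the settled value as a root
// [PromiseValue, id, value]. The client scans the roots and patches.
const PromiseRef = '\u0001PromiseRef';
const PromiseValue = '\u0001PromiseValue';

// Collect late-arriving values from the root array, keyed by forward-ref id.
function collectPromiseValues(roots: unknown[]): Map<number, unknown> {
  const values = new Map<number, unknown>();
  for (const root of roots) {
    if (Array.isArray(root) && root[0] === PromiseValue) {
      values.set(root[1] as number, root[2]);
    }
  }
  return values;
}

// Walk the deserialized data and replace placeholders with resolved promises.
function patchPromiseRefs(value: unknown, values: Map<number, unknown>): unknown {
  if (!Array.isArray(value)) return value;
  if (value[0] === PromiseRef) {
    return Promise.resolve(values.get(value[1] as number));
  }
  return value.map((item) => patchPromiseRefs(item, values));
}
```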
Wout.
-
It may be a niche use-case, but out-of-order streaming would be super ergonomic for search forms. Rather than calling a […], this would simplify a lot of logic for some projects. Currently, you need to mix loaders + actions, or resort to […]. Overall, I think deferred/streaming loaders and the ability to opt out of the […] would help.
-
Some more thoughts: when we allow out-of-order streaming but we want interactivity while the page is still streaming, we can't scan the state once for cycles/Promises/references, because later state might refer to earlier state. Therefore we must make the serializer single-pass.

To do this, we could store each object as a root, but that will make references take more bytes. Instead, we could implement sub-paths for references, and when we encounter a reference, we output that instead. For later references it would be best to have a root reference, so we'd probably add a root for the sub-reference and later we can refer to that.

We'll still need to keep track of Promises. We can do this by waiting, but that halts streaming. Instead, we could write a forward reference id, and when the promise is resolved, we store the result as the next root item. At the end of the stream we can emit a mapping from forward references to root index. Then later, when we send out-of-order state, it will append to the existing state on the client.

This single-pass approach might just be better in general, because several bugs were found due to differences between the first-pass and second-pass code, and it allows streaming the data while the promises are being resolved.
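A rough sketch of the forward-reference part on the server side, assuming a newline-delimited stream where each chunk is one root; the `ForwardRef` tag, the chunk format, and the trailing mapping object are illustrative only:

```ts
// Hypothetical tag for "this value resolves to a later root".
const ForwardRef = '\u0001Fwd';

async function streamState(roots: unknown[], write: (chunk: string) => void) {
  const pending: Promise<void>[] = [];
  const forwardMap: Record<number, number> = {}; // forward-ref id -> root index
  let nextId = 0;
  let nextRootIndex = roots.length;

  const encode = (value: unknown): unknown => {
    if (value instanceof Promise) {
      const id = nextId++;
      // Don't block the stream on the promise: emit a forward reference now
      // and append the resolved value as a new root once it settles.
      pending.push(
        value.then((resolved) => {
          forwardMap[id] = nextRootIndex++;
          write(JSON.stringify(encode(resolved)) + '\n');
        })
      );
      return [ForwardRef, id];
    }
    if (Array.isArray(value)) return value.map(encode);
    if (value !== null && typeof value === 'object') {
      return Object.fromEntries(
        Object.entries(value).map(([k, v]) => [k, encode(v)])
      );
    }
    return value;
  };

  // Emit the initial roots immediately...
  for (const root of roots) write(JSON.stringify(encode(root)) + '\n');
  // ...keep appending as promises settle (resolved values may contain more promises)...
  while (pending.length) await pending.shift();
  // ...and finish with the mapping from forward-reference ids to root indexes.
  write(JSON.stringify({ forwardMap }) + '\n');
}
```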
-
What is it about?
Allow page rendering while still using routeLoaders by streaming the response.
What's the motivation for this proposal?
Problems you are trying to solve:
Goals you are trying to achieve:
Any other context or information you want to share:
I'm aware that this has been discussed before, but I thought it would be cool to add it here as it fits the purpose of this proposals repo.
Proposed Solution / Feature
What do you propose?
An API for it already exists using onPending and onResolved, but the render blocking is still there.
Code examples
Find below the example by gioboa.
Links / References
https://qwik.dev/docs/cookbook/streaming-deferred-loaders/
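For context, the existing non-blocking pattern mentioned above (onPending/onResolved) looks roughly like this sketch using Qwik's `useResource$` and `<Resource>`; the linked cookbook combines the same `<Resource>` rendering with a deferred `routeLoader$`, so check it for the exact loader shape:

```tsx
import { component$, Resource, useResource$ } from '@builder.io/qwik';

export default component$(() => {
  // useResource$ starts the async work without blocking the initial render.
  const joke = useResource$<string>(async () => {
    const res = await fetch('https://icanhazdadjoke.com/', {
      headers: { Accept: 'application/json' },
    });
    const data = await res.json();
    return data.joke as string;
  });

  return (
    <Resource
      value={joke}
      onPending={() => <p>Loading…</p>}
      onResolved={(joke) => <p>{joke}</p>}
    />
  );
});
```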