
H2 streams reserve 1 more byte than needed from the connection, leading to deadlock #4003

@jackkleeman


Version
All versions of hyper that contain this line: https://github.com/hyperium/hyper/blob/master/src/proto/h2/mod.rs#L122

Platform
Platform independent

Description

I am observing occasional deadlocks in production. The cause is that an outbound h2 request sometimes will not be sent until a different h2 request to the same destination has had its body closed, and in my case those two requests have a dependency that requires them to run concurrently. The issue occurs after the HTTP/2 session has sent exactly 65534 bytes of body data, so as you can imagine, this is an HTTP/2 window size issue.

Hyper's h2 PipeToSendStream future always keeps one byte of flow control capacity reserved while it is waiting for more data from the body to pipe out. This is not a miscounting issue within a stream: when more bytes come in from the body, that 1 byte counts towards the new bytes. However, it creates a problematic dependency between requests when dealing with a server that only sends a connection window update once the window is totally exhausted. As an example:

  • Request A sends 65534 bytes (e.g. in 4 data frames), and then waits for more body data. The connection window has one byte remaining from the server's perspective. However, because of line 122 linked above, the stream for request A reserves one more byte, so the client's view of the window is 0 bytes remaining.
  • Now we try to send request B, with even a single byte of body data. The stream cannot get any connection capacity and does not send the data frame.
  • Only once request A closes its request body and that 1 byte is released back to the connection will request B be sent off.
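
The pattern at the linked line is roughly the following (a sketch in terms of h2's SendStream API as I understand it, not hyper's exact code; the function name is mine):

use bytes::Bytes;
use h2::SendStream;

// Sketch of the problematic pattern: while waiting for the next body chunk,
// keep a 1-byte reservation so poll_capacity() wakes the task once capacity
// becomes available.
fn wait_for_next_chunk(body_tx: &mut SendStream<Bytes>) {
    // This 1 byte is taken out of the shared connection-level window. Against
    // a server that only sends a connection WINDOW_UPDATE once the window hits
    // zero, every other stream on the connection is starved until this stream
    // sends data or finishes its body and the byte is released.
    body_tx.reserve_capacity(1);
}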

An example of an HTTP/2 server that only updates the window when it's completely exhausted is Bun (I hope to change this in oven-sh/bun#25847). As such, I have a reproducing example against Bun:

main.rs:

use bytes::Bytes;
use http_body_util::StreamBody;
use hyper::body::Frame;
use hyper::Request;
use hyper_util::client::legacy::Client;
use hyper_util::rt::TokioExecutor;
use std::convert::Infallible;
use std::time::Duration;
use tokio::sync::mpsc;
use tokio_stream::wrappers::ReceiverStream;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client: Client<_, _> = Client::builder(TokioExecutor::new())
        .http2_only(true)
        .build_http();

    let (tx_1, rx_1) = mpsc::channel::<Result<Frame<Bytes>, Infallible>>(1);
    let body_1 = StreamBody::new(ReceiverStream::new(rx_1));
    let request_1 =
        Request::post("http://127.0.0.1:9080/test").body(http_body_util::Either::Left(body_1))?;

    println!("Sending request A");
    tokio::task::spawn(client.request(request_1));

    // Send 65534 bytes (h2 reserves 1 byte from the 65535 window)
    let mut sent = 0;
    for chunk in vec![b'A'; 65534].chunks(16384) {
        tx_1.send(Ok(Frame::data(Bytes::copy_from_slice(chunk))))
            .await?;
        sent += chunk.len();
        println!("Sent {} bytes (total: {})", chunk.len(), sent);
    }

    tokio::time::sleep(Duration::from_secs(1)).await;

    println!("Sent {} bytes, window full");

    tokio::task::spawn(async {
        tokio::time::sleep(Duration::from_secs(10)).await;
        println!("Timeout after 10s, closing request A");
        drop(tx_1);
    });

    println!("Sending request B");
    client
        .request(Request::post("http://127.0.0.1:9080/test").body(
            http_body_util::Either::Right(http_body_util::Full::new(Bytes::from_static(b"a"))),
        )?)
        .await?;
    println!("Sent request B");

    Ok(())
}

server.ts:

import { createServer } from "node:http2";

const server = createServer();

let totalBytes = 0;

server.on("stream", (stream, headers) => {
  console.log(
    `[${new Date().toISOString()}] New stream: ${headers[":method"]} ${headers[":path"]}`,
  );

  stream.on("data", (chunk: Buffer) => {
    totalBytes += chunk.length;
    console.log(
      `[${new Date().toISOString()}] Received chunk: ${chunk.length} bytes (total: ${totalBytes})`,
    );
  });

  stream.on("end", () => {
    console.log(
      `[${new Date().toISOString()}] Stream ended, total received: ${totalBytes} bytes`,
    );
    stream.respond({ ":status": 200 });
    stream.end("OK");
  });

  stream.on("error", (err) => {
    console.error(`[${new Date().toISOString()}] Stream error:`, err);
  });
});

server.on("error", (err) => {
  console.error("Server error:", err);
});

server.listen(9080, () => {
  console.log("HTTP/2 server listening on http://localhost:9080");
  console.log("Initial connection window size: 65535 bytes");
});
$ bun run server.ts
$ cargo run
Sending request A
Sent 16384 bytes (total: 16384)
Sent 16384 bytes (total: 32768)
Sent 16384 bytes (total: 49152)
Sent 16382 bytes (total: 65534)
Sent 65534 bytes, window full
Sending request B
Timeout after 10s, closing request A
Sent request B
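
For reference, the connection-level accounting that produces the stall (assuming the default 65535-byte initial connection window, as logged by the server):

  65535  initial connection window
- 65534  body bytes sent by request A
-     1  byte kept reserved by request A's PipeToSendStream
-------
      0  bytes available for request B's 1-byte data frame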
