
[RFC] Replace HTTP+SSE with new "Streamable HTTP" transport #206

Merged · 24 commits into main · Mar 24, 2025

Conversation

jspahrsummers
Member

@jspahrsummers jspahrsummers commented Mar 17, 2025

This PR introduces the Streamable HTTP transport for MCP, addressing key limitations of the current HTTP+SSE transport while maintaining its advantages.

Our deep appreciation to @atesgoral and @topherbullock (Shopify), @samuelcolvin and @Kludex (Pydantic), @calclavia, Cloudflare, LangChain, Vercel, the Anthropic team, and many others in the MCP community for their thoughts and input! This proposal was only possible thanks to the valuable feedback received in the GitHub Discussion.

TL;DR

As compared with the current HTTP+SSE transport:

  1. We remove the /sse endpoint
  2. All client → server messages go through the /message (or similar) endpoint
  3. All client → server requests could be upgraded by the server to be SSE, and used to send notifications/requests
  4. Servers can choose to establish a session ID to maintain state
  5. Client can initiate an SSE stream with an empty GET to /message

This approach can be implemented backwards compatibly, and allows servers to be fully stateless if desired.
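The content negotiation at the heart of point 3 can be sketched as a tiny client-side helper (illustrative only; not from any official SDK):

```python
def classify_response(content_type: str) -> str:
    # Streamable HTTP: the server answers a POST either with a single
    # JSON-RPC response body, or by upgrading the response to an SSE stream.
    if content_type.startswith("text/event-stream"):
        return "sse"
    if content_type.startswith("application/json"):
        return "json"
    raise ValueError(f"unsupported Content-Type: {content_type}")
```

A client would call this on the response's `Content-Type` header and either parse one JSON body or start consuming SSE events.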

Motivation

Remote MCP currently works over HTTP+SSE transport which:

  • Does not support resumability
  • Requires the server to maintain a long-lived connection with high availability
  • Can only deliver server messages over SSE

Benefits

  • Stateless servers are now possible—eliminating the requirement for high availability long-lived connections
  • Plain HTTP implementation—MCP can be implemented in a plain HTTP server without requiring SSE
  • Infrastructure compatibility—it's "just HTTP," ensuring compatibility with middleware and infrastructure
  • Backwards compatibility—this is an incremental evolution of our current transport
  • Flexible upgrade path—servers can choose to use SSE for streaming responses when needed

Example use cases

Stateless server

Under this proposal, a completely stateless server can be implemented without any support for long-lived connections.

For example, a server that just offers LLM tools and utilizes no other features could be implemented like so:

  1. Always acknowledge initialization (but no need to persist any state from it)
  2. Respond to any incoming ToolListRequest with a single JSON-RPC response
  3. Handle any CallToolRequest by executing the tool, waiting for it to complete, then sending a single CallToolResponse as the HTTP response body
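The three steps above can be sketched as one request handler. This is a toy illustration, not the official SDK API; the `echo` tool and helper names are hypothetical:

```python
import json

TOOLS = [{"name": "echo", "description": "Echo back the input text"}]

def call_tool(name, arguments):
    # Execute the tool synchronously; only "echo" exists in this sketch.
    if name == "echo":
        return {"content": [{"type": "text", "text": arguments.get("text", "")}]}
    raise ValueError(f"unknown tool: {name}")

def handle_post(body: str) -> str:
    """Handle one JSON-RPC message per POST; no state survives the request."""
    msg = json.loads(body)
    if msg["method"] == "initialize":
        result = {"capabilities": {"tools": {}}}  # acknowledge, persist nothing
    elif msg["method"] == "tools/list":
        result = {"tools": TOOLS}
    elif msg["method"] == "tools/call":
        result = call_tool(msg["params"]["name"], msg["params"]["arguments"])
    else:
        return json.dumps({"jsonrpc": "2.0", "id": msg.get("id"),
                           "error": {"code": -32601, "message": "Method not found"}})
    return json.dumps({"jsonrpc": "2.0", "id": msg["id"], "result": result})
```

Each POST is fully self-contained, so any node behind a load balancer can serve it.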

Stateless server with streaming

A server that is fully stateless and does not support long-lived connections can still take advantage of streaming in this design.

For example, to issue progress notifications during a tool call:

  1. When the incoming POST request is a CallToolRequest, server indicates the response will be SSE
  2. Server starts executing the tool
  3. Server sends any number of ProgressNotifications over SSE while the tool is executing
  4. When the tool execution completes, the server sends a CallToolResponse over SSE
  5. Server closes the SSE stream
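A minimal sketch of the SSE framing for these steps — the notification method name and payload shapes here are assumptions for illustration, not spec-exact:

```python
import json

def sse_event(message: dict) -> str:
    # Serialize one JSON-RPC message as a Server-Sent Events "data" frame.
    return f"data: {json.dumps(message)}\n\n"

def stream_tool_call(request_id, total_steps=3):
    """Yield SSE frames for a tool call: progress notifications, then the response."""
    for step in range(1, total_steps + 1):
        yield sse_event({"jsonrpc": "2.0", "method": "notifications/progress",
                         "params": {"progress": step, "total": total_steps}})
    # Final frame carries the JSON-RPC response; the server then closes the stream.
    yield sse_event({"jsonrpc": "2.0", "id": request_id,
                     "result": {"content": [{"type": "text", "text": "done"}]}})
```

The server writes these frames into a `Content-Type: text/event-stream` response body and closes the connection after the last one.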

Stateful server

A stateful server would be implemented very similarly to today. The main difference is that the server will need to generate a session ID, and the client will need to pass that back with every request.

The server can then use the session ID for sticky routing or routing messages on a message bus—that is, a POST message can arrive at any server node in a horizontally-scaled deployment, so must be routed to the existing session using a broker like Redis.
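A toy sketch of server-generated session IDs and broker-style routing, using an in-memory dict where a real deployment would use Redis or similar (all names hypothetical):

```python
import uuid

class SessionRouter:
    """Stand-in for broker-based routing: maps session IDs to owning nodes."""
    def __init__(self):
        self.sessions = {}

    def create_session(self, node_id: str) -> str:
        # The server generates the session ID and records which node owns it.
        session_id = uuid.uuid4().hex
        self.sessions[session_id] = node_id
        return session_id

    def route(self, session_id: str) -> str:
        # Any node receiving a POST looks up the owner and forwards the message.
        if session_id not in self.sessions:
            raise KeyError("unknown session; client must re-initialize")
        return self.sessions[session_id]
```

With sticky routing the load balancer uses the session ID directly; with a message bus, `route` becomes a lookup-and-publish against the broker.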

Why not WebSocket?

The core team thoroughly discussed making WebSocket the primary remote transport (instead of SSE), and applying similar work to it to make it disconnectable and resumable. We ultimately decided not to pursue WS right now because:

  1. Wanting to use MCP in an "RPC-like" way (e.g., a stateless MCP server that just exposes basic tools) would incur a lot of unnecessary operational and network overhead if a WebSocket is required for each call.
  2. From a browser, there is no way to attach headers (like Authorization), and unlike SSE, third-party libraries cannot reimplement WebSocket from scratch in the browser.
  3. Only GET requests can be transparently upgraded to WebSocket (other HTTP methods are not supported for upgrading), meaning that some kind of two-step upgrade process would be required on a POST endpoint, introducing complexity and latency.

We're also avoiding making WebSocket an additional option in the spec, because we want to limit the number of transports officially specified for MCP, to avoid a combinatorial compatibility problem between clients and servers. (Although this does not prevent community adoption of a non-standard WebSocket transport.)

The proposal in this doc does not preclude further exploration of WebSocket in future, if we conclude that SSE has not worked well.

To do

  • Move session ID responsibility to server
    • Define acceptable space of session IDs
    • Ensure session IDs are introspectable by middleware/WAF
  • Make cancellation explicit
  • Require centralized SSE GET for server -> client requests and notifications
  • Convert resumability into a per-stream concept
  • Design a way to proactively "end session"
  • "if the client has an auth token, it should include it in every MCP request"

Follow ups

  • Standardize support for JSON-RPC batching
  • Support for streaming request bodies?
  • Put some recommendations about timeouts into the spec, and maybe codify conventions like "issuing a progress notification should reset default timeouts."

@jspahrsummers jspahrsummers marked this pull request as ready for review March 17, 2025 10:15
@daviddenton

Firstly - thanks for the effort in driving this forward 🙃

> Client provides session ID in headers; server can pay attention to this if needed

This feels very unnatural and pretty insecure to me - what was the thinking behind the client generating this, as opposed to the server generating and signing it (possibly based on the client identity determined from authentication credentials)?

An alternative would be for the header to be set on the first response after the initialise request and then for the client to reuse/share it in whatever way they deem appropriate for their use-case.
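For illustration, server-generated-and-signed session IDs along the lines suggested here could look like the following sketch: an HMAC over a random ID plus the authenticated client identity. All names and the token format are hypothetical:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # assumption: one signing key per deployment

def issue_session_id(client_identity: str) -> str:
    # Server generates a random ID and signs it together with the client identity,
    # returning "<id>.<signature>" on the first response after initialization.
    raw = secrets.token_hex(16)
    sig = hmac.new(SERVER_KEY, f"{raw}:{client_identity}".encode(), hashlib.sha256).hexdigest()
    return f"{raw}.{sig}"

def verify_session_id(session_id: str, client_identity: str) -> bool:
    # Reject IDs not issued by this server, or presented by a different client.
    raw, _, sig = session_id.partition(".")
    expected = hmac.new(SERVER_KEY, f"{raw}:{client_identity}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)
```

Because verification is a pure key-based check, any node in a horizontally-scaled deployment can validate the ID without shared session state.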

@gunta

gunta commented Mar 17, 2025

> We're also avoiding making WebSocket an additional option in the spec

I completely agree with this decision.

While WebSocket and other transports will certainly be needed for some use cases, perhaps a separate "Extended" working group could be officially created and maintained by the community to address these needs - similar to how we could have Core and Extended working groups in the future.

@mitsuhiko

Related to the point discussed above about session IDs, I think it would be reasonable to ensure either that session IDs are communicated in a way that makes routing on a basic load balancer possible, or that a separate header is added to enable that.

(That’s for folks who do not have fancy-pantsy durable objects ;))

Quoted spec text under review:

> …`text/event-stream` as supported content types.
> - The server **MUST** either return `Content-Type: text/event-stream`, to initiate an SSE stream, or `Content-Type: application/json`, to return a single JSON-RPC _response_. The client **MUST** support both these cases.
@halter73 halter73 commented Mar 17, 2025
Given that EventSource already does not support POST requests, meaning that fetch will have to be used by browser-based clients, why not go all the way and allow more than one JSON-RPC message in the POST request's streaming request body? That's certainly where my mind goes when renaming the transport from "HTTP with SSE" to "Streamable HTTP".

While this wouldn't solve the resumability issue by itself, it would vastly simplify the transport. It would be much closer to the stdio transport, and it could potentially support binary data.

And I think it would help with the resumability issue. It greatly simplifies resumability to only have one logical stream per connection like the stdio transport does. That way, you're not stuck managing multiple last-message-ids on the server which seems like a pain.

If a core design principle is that clients should handle complexity where it exists, I'd suggest forcing the client to only have one resumable server-to-client message stream at a time.


JSON-RPC supports batching. You can pass an array of requests as a JSON-RPC body.
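For reference, a JSON-RPC 2.0 batch is simply a top-level JSON array of request objects, so detecting one is trivial (sketch; the request payloads are illustrative):

```python
import json

batch = json.dumps([
    {"jsonrpc": "2.0", "id": 1, "method": "tools/list"},
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "echo", "arguments": {"text": "hi"}}},
])

def is_batch(body: str) -> bool:
    # Per JSON-RPC 2.0, a batch is sent as a top-level JSON array;
    # a single call is a top-level JSON object.
    return isinstance(json.loads(body), list)
```

A server supporting batches would dispatch each element independently and return an array of responses.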


FWIW - in OpenID Provider Commands we are proposing a POST that returns either application/json or text/event-stream https://openid.github.io/openid-provider-commands/main.html

@jspahrsummers (Member Author)
Supporting batched client -> server messages makes sense! I honestly had forgotten that aspect of JSON-RPC, since we ignored it for so long. 😬

I would like to extend support to streaming request bodies, but I think we should kick the can on this a bit, as it will probably involve significant discussion of its own.

@jspahrsummers jspahrsummers (Member Author) commented Mar 18, 2025

I'll punt on batching in this PR as well, as it also affects the stdio transport and has some wider-ranging implications, but basically I agree we should support it.

@colombod

So this would be bringing an HTTP transport and the QUIC protocol into the mix?


@calclavia

The majority of MCPs will likely be stateless - so I think this is a generally good direction!

Thanks for the work @jspahrsummers

Looking forward to seeing the SDK implementations.

@jspahrsummers jspahrsummers merged commit 880673a into main Mar 24, 2025
6 checks passed
@jspahrsummers jspahrsummers deleted the justin/new-http-transport branch March 24, 2025 11:51
@ChrisLally

Have been keeping an eye on this, excited to see it merged - Big time shoutout to @dsp-ant and everyone that contributed!

@kanlanc

kanlanc commented Mar 24, 2025

Was looking forward to this! Excited to see it merged!

@4t145

4t145 commented Mar 25, 2025

I am a developer of rmcp, an unofficial MCP Rust SDK, and after reading the new Streamable HTTP transport spec, I am still a little confused.

In this new standard, the transport layer seems to have to manage requests and responses sharing the same ID, along with their corresponding notifications. This means the message-management logic that already exists above the transport layer cannot be reused. In other words, is the transport layer taking on too much work?

@liudonghua123

Hi, when will the official SDKs be updated for these changes?

@jspahrsummers
Member Author

jspahrsummers commented Mar 26, 2025

Please note this is still in a draft version of the MCP protocol. SDK support should not be expected nor depended upon until this makes it into a released version.

We hope to do this soon, but will take as much time as needed to ensure a stable release.

@shouldnotappearcalm

Looking forward to the draft being released soon

@jirispilka

It was actually already released and announced on X

@ahmadawais
Contributor

ahmadawais commented Mar 28, 2025

Super excited to see this through.

P.S. @jirispilka The schema.ts link in the release notes is broken.

Sending in a PR #248
