feat(client,server): Mcp-Method/Mcp-Name request headers (SEP-2243) #2069
felixweinberger wants to merge 3 commits into
Conversation
@modelcontextprotocol/client
@modelcontextprotocol/server
@modelcontextprotocol/express
@modelcontextprotocol/fastify
@modelcontextprotocol/hono
@modelcontextprotocol/node
@claude review

@claude review
```ts
import { describe, expect, test } from 'vitest';

import { mcpNameForMethod, validateMcpHeaders } from '../../src/shared/httpHeaders.js';
```
🟡 nit: encodeMcpHeaderValue / decodeMcpHeaderValue are new public exports with non-trivial UTF-8 → Latin-1 → btoa byte-mapping, but httpHeaders.test.ts only imports mcpNameForMethod/validateMcpHeaders and uses plain-ASCII values, so neither the base64 branch nor the HEADER_SAFE fast path is exercised. Consider adding a round-trip test for non-ASCII (e.g. 'café', emoji, CJK), an ASCII pass-through assertion, and a decode-identity-on-plain-input assertion.
Extended reasoning...
What's missing
packages/core/src/shared/httpHeaders.ts introduces two new public functions, encodeMcpHeaderValue and decodeMcpHeaderValue, re-exported from the package barrel via export * from './shared/httpHeaders.js' in packages/core/src/index.ts. The accompanying test file packages/core/test/shared/httpHeaders.test.ts imports only { mcpNameForMethod, validateMcpHeaders } (line 3) and never references the encode/decode pair. A grep across **/*.test.ts confirms no other test file imports them either.
Why the existing tests don't cover it indirectly
validateMcpHeaders does call decodeMcpHeaderValue internally, but every header value used in the test suite is plain ASCII ('tools/call', 'x', 'wrong', 'anything'). For these inputs the regex /^=\?base64\?(.+)\?=$/ never matches, so decodeMcpHeaderValue returns its argument on the first line and the atob / Uint8Array.from / TextDecoder path is never executed. encodeMcpHeaderValue is not called by anything under test at all — its only caller is the client transport, which has no test asserting the encoded header shape.
Why this is worth covering
The implementation is byte-level and easy to get subtly wrong: btoa requires a Latin-1 string where each char's code point ≤ 255, so the encoder must go TextEncoder → Uint8Array → String.fromCharCode(...bytes) → btoa, and the decoder must invert exactly that (atob → charCodeAt → Uint8Array → TextDecoder). A regression such as calling btoa(value) directly, dropping the String.fromCharCode mapping, or using codePointAt instead of charCodeAt would silently break non-ASCII tool names / resource URIs while leaving every existing test green. The repo's review checklist explicitly requires "New behavior has vitest coverage including error paths", and these are part of the public @modelcontextprotocol/core surface.
Step-by-step example of what's untested
1. A client registers a tool named `'天気'` and calls it.
2. `encodeMcpHeaderValue('天気')` runs: `HEADER_SAFE` (`/^[ -~]*$/`) fails → `TextEncoder` produces `[0xe5,0xa4,0xa9,0xe6,0xb0,0x97]` → `String.fromCharCode(...)` yields a 6-char Latin-1 string → `btoa` → `'5aSp5rCX'` → header value `'=?base64?5aSp5rCX?='`.
3. The server receives it; `decodeMcpHeaderValue` matches the regex, `atob` → 6 Latin-1 chars → `charCodeAt` per char → `Uint8Array` → `TextDecoder` → `'天気'`.
4. None of steps 2–3 are executed by any test in this PR.
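The walkthrough above can be reproduced with a small standalone sketch of the encode/decode pair — reconstructed from this review's description of `httpHeaders.ts`, not copied from the PR's source:

```typescript
// Sketch of the encode/decode pair as described in this review; the names
// mirror the PR but the bodies are a reconstruction, not the PR's code.
const HEADER_SAFE = /^[ -~]*$/; // printable ASCII only

function encodeMcpHeaderValue(value: string): string {
    if (HEADER_SAFE.test(value)) return value;
    // btoa needs a Latin-1 string: map each UTF-8 byte to one char first.
    const bytes = new TextEncoder().encode(value);
    const b64 = btoa(String.fromCharCode(...bytes));
    return `=?base64?${b64}?=`;
}

function decodeMcpHeaderValue(value: string): string {
    const m = /^=\?base64\?(.+)\?=$/.exec(value);
    if (!m) return value; // plain-ASCII fast path: returned unchanged
    const latin1 = atob(m[1]!);
    const bytes = Uint8Array.from(latin1, c => c.charCodeAt(0));
    return new TextDecoder().decode(bytes);
}

console.log(encodeMcpHeaderValue('天気'));                 // '=?base64?5aSp5rCX?='
console.log(decodeMcpHeaderValue('=?base64?5aSp5rCX?=')); // '天気'
```

Running this confirms the byte values and the `'5aSp5rCX'` payload quoted in the steps above, which is exactly the round trip the suggested tests would pin down.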
Suggested additions
```ts
describe('encode/decodeMcpHeaderValue', () => {
    test('ASCII passes through unchanged', () => {
        expect(encodeMcpHeaderValue('tools/call')).toBe('tools/call');
        expect(decodeMcpHeaderValue('tools/call')).toBe('tools/call');
    });
    test.each(['café', '天気', '🛠️ tool'])('round-trips non-ASCII %p', v => {
        const enc = encodeMcpHeaderValue(v);
        expect(enc).toMatch(/^=\?base64\?.+\?=$/);
        expect(decodeMcpHeaderValue(enc)).toBe(v);
    });
});
```

Severity
This is a nit — a test-coverage gap, not a runtime defect. The implementation looks correct; the ask is just to lock it in so a future refactor can't silently break the round-trip.
7758dfc to b0d286d
279de5b to 1bd5a9a
@claude review
```ts
// SEP-2243: Mcp-Method/Mcp-Name headers (when present) must match the body.
if (!Array.isArray(rawMessage) && messages.length === 1 && isJSONRPCRequest(messages[0]!)) {
    const headerMismatch = validateMcpHeaders(req, messages[0]);
    if (headerMismatch) {
        this.onerror?.(new Error(headerMismatch));
        return this.createJsonErrorResponse(400, -32_001, `Bad Request: ${headerMismatch}`);
    }
}
```
🟡 nit: The new SEP-2243 wiring is only covered at the helper level — there's no transport test asserting that StreamableHTTPClientTransport.send() actually sets Mcp-Method/Mcp-Name on the outgoing fetch, nor that shttpHandler/WebStandardStreamableHTTPServerTransport return 400 / -32001 on a present-but-mismatched header (or on the malformed-base64 input fixed in 1bd5a9a). Consider adding a client test with a spy fetch and a server test that POSTs mcp-method: tools/list with a tools/call body, plus a regression case for mcp-method: =?base64?@@?=.
Extended reasoning...
What's missing
The PR description says "New header set/validate tests", but a grep across **/*.test.ts for mcp-method|mcp-name|encodeMcpHeaderValue|validateMcpHeaders matches only packages/core/test/shared/httpHeaders.test.ts, which exercises the pure helpers (mcpNameForMethod, validateMcpHeaders) directly. None of the three transport integration points added in this PR have coverage:
- Client emission — `packages/client/src/client/streamableHttp.ts` `_send()` now sets `headers.set('mcp-method', …)` and conditionally `mcp-name`. `packages/client/test/**` has zero references to either header, so nothing pins that the headers actually appear on the outgoing `fetch` `Request`, that the `!Array.isArray(message) && 'method' in message` guard skips batch/response bodies, or that `mcp-name` is set for `tools/call` but omitted for `tools/list`.
- Server rejection — both `shttpHandler.ts` and `WebStandardStreamableHTTPServerTransport.handlePostRequest` now early-return `400 / -32001 / 'Bad Request: Mcp-* header … does not match …'` when the header is present-but-mismatched. The only `-32001` reference in `packages/server/test/server/streamableHttp.test.ts` is the pre-existing "Session not found" assertion; nothing exercises the new path. The guard conditions (`!isBatch && requests.length === 1`, `isJSONRPCRequest`), the HTTP status, the JSON-RPC code, and the `'Bad Request: '` prefix are all unpinned.
- Malformed-base64 regression — commit 1bd5a9a wrapped `atob()` in a try/catch so that `Mcp-Method: =?base64?@@?=` falls through to the 400/-32001 mismatch path instead of bubbling a `DOMException` to the outer catch (which previously produced HTTP 500/-32603 in `shttpHandler` and -32700 ParseError in `streamableHttp.ts`). There is no test locking that fix in.
Why the existing tests don't cover it indirectly
httpHeaders.test.ts calls validateMcpHeaders(req, body) with a hand-built Request and inspects the returned string. It never goes through handlePostRequest/handlePost, so it can't observe the HTTP status, the JSON-RPC error code, or the placement of the check relative to body parsing / session validation. On the client side nothing constructs a StreamableHTTPClientTransport with a spy fetch to inspect the headers it produces.
Why it matters
The repo's REVIEW.md checklist requires "New behavior has vitest coverage including error paths". The 400/-32001 rejection is a new error path, and it's the part of SEP-2243 that intermediaries observe on the wire — if a future refactor accidentally moved the check after onmessage dispatch, dropped the -32001 code, or changed the status to 500, every existing test would still pass. Likewise the client header emission is what middle-boxes route on; a regression that stops setting mcp-method (e.g. someone tightening the type guard) would be silent. And the 1bd5a9a fix was added specifically in response to a review finding on this PR — pinning it with a test prevents the same 500-for-client-input from reappearing.
Step-by-step example of an untested path
1. POST `/mcp` with `Accept: application/json, text/event-stream`, `Content-Type: application/json`, header `mcp-method: tools/list`, body `{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"x"}}`.
2. `handlePostRequest` parses the body → `messages = [req]`, `!Array.isArray(rawMessage)` is true, `isJSONRPCRequest(messages[0])` is true.
3. `validateMcpHeaders` returns `"Mcp-Method header 'tools/list' does not match request body method 'tools/call'"`.
4. The transport returns `createJsonErrorResponse(400, -32001, 'Bad Request: …')` and never calls `onmessage`.
5. No test asserts step 4 (status 400, code -32001, body never dispatched).
And for the regression: same request but with `mcp-method: =?base64?@@?=` — `decodeMcpHeaderValue` now catches the `atob` throw and returns the literal `'=?base64?@@?='`, which fails the equality check → 400/-32001. No test asserts this is not a 500.
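The lenient-decode behavior described above can be sketched in isolation (the helper name `decodeLenient` is hypothetical; in the PR this logic lives inside `decodeMcpHeaderValue`):

```typescript
// Illustrative sketch of the 1bd5a9a fix: malformed base64 is treated as a
// literal header value, so validation falls through to the ordinary
// 400/-32001 mismatch comparison instead of throwing. Names are hypothetical.
function decodeLenient(value: string): string {
    const m = /^=\?base64\?(.+)\?=$/.exec(value);
    if (!m) return value;
    try {
        const latin1 = atob(m[1]!);
        const bytes = Uint8Array.from(latin1, c => c.charCodeAt(0));
        return new TextDecoder().decode(bytes);
    } catch {
        // atob threw (e.g. on '@@'): return the raw value and let the
        // equality check against the body produce the mismatch error.
        return value;
    }
}

console.log(decodeLenient('=?base64?@@?='));       // '=?base64?@@?=' (literal, no throw)
console.log(decodeLenient('=?base64?5aSp5rCX?=')); // '天気'
```

A regression test would assert exactly this: the `@@` input comes back as its own literal, never an exception.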
Suggested additions
- In `packages/client/test/client/streamableHttp.test.ts`: construct the transport with `fetch: vi.fn().mockResolvedValue(new Response(null, {status: 202}))`, `send({jsonrpc:'2.0',id:1,method:'tools/call',params:{name:'w'}})`, then assert `mockFetch.mock.calls[0][1].headers.get('mcp-method') === 'tools/call'` and `.get('mcp-name') === 'w'`; a second case for `tools/list` asserting `mcp-name` is absent; a third for an array body asserting neither header is set.
- In `packages/server/test/server/streamableHttp.test.ts` (and the equivalent for `shttpHandler`): POST a single `tools/call` body with `mcp-method: tools/list` and assert `res.status === 400` and `(await res.json()).error.code === -32001`; repeat with `mcp-method: =?base64?@@?=` to lock in the 1bd5a9a fix.
Relationship to the other open comment
This is distinct from inline comment 3226615007 on httpHeaders.test.ts:3, which asks for encode/decode round-trip unit coverage of the helpers themselves. This nit is about the transport wiring in packages/client and packages/server — the observable HTTP-level contract — not the byte-mapping of the encoder.
Severity
nit — test-coverage gap, not a runtime defect. The helper logic is unit-tested and the transport glue is thin, but the new server rejection path and client header emission are observable wire contracts worth pinning before merge.
```ts
export function encodeMcpHeaderValue(value: string): string {
    if (HEADER_SAFE.test(value)) return value;
    // Byte-level mapping for btoa: each Uint8 byte must become one Latin-1 char.
    // eslint-disable-next-line unicorn/prefer-code-point
    const b64 = btoa(String.fromCharCode(...new TextEncoder().encode(value)));
    return `=?base64?${b64}?=`;
}
```
🟡 nit: encodeMcpHeaderValue passes any all-ASCII value through unchanged, but if the input is itself a literal string of the form =?base64?<x>?= (e.g. a tool literally named '=?base64?dGVzdA==?='), decodeMcpHeaderValue on the server will match it, decode <x>, and return a different string — so validateMcpHeaders rejects a header that does match the body. The trigger is extremely contrived (tool/prompt names SHOULD be [A-Za-z0-9._-], URIs can't start with =?), but since these are public core exports advertised as inverses, consider also force-encoding when the input matches the decoder's regex: if (HEADER_SAFE.test(value) && !/^=\?base64\?.+\?=$/.test(value)) return value;.
Extended reasoning...
What the bug is
encodeMcpHeaderValue and decodeMcpHeaderValue are intended to be inverses — the JSDoc on decodeMcpHeaderValue literally says it "reverses encodeMcpHeaderValue". However, the round-trip decode(encode(x)) === x does not hold for one class of input: ASCII strings that themselves match the decoder's encoded-word pattern /^=\?base64\?(.+)\?=$/.
encodeMcpHeaderValue checks only HEADER_SAFE (/^[ -~]*$/). A literal like '=?base64?dGVzdA==?=' is composed entirely of printable ASCII, so the encoder returns it unchanged. But decodeMcpHeaderValue then sees a value that looks like an encoded-word, matches the regex, and base64-decodes the inner payload — yielding 'test', not the original input.
Step-by-step proof
Take a tool whose name is the literal string '=?base64?dGVzdA==?=':
1. Client encode (`packages/client/src/client/streamableHttp.ts`): `encodeMcpHeaderValue('=?base64?dGVzdA==?=')` is called. Every character (`=`, `?`, `b`, `a`, …) is in `[ -~]`, so `HEADER_SAFE.test(value)` is `true` and the function returns the input verbatim. The client sends `Mcp-Name: =?base64?dGVzdA==?=`.
2. Server decode (`validateMcpHeaders` → `decodeMcpHeaderValue`): the regex `/^=\?base64\?(.+)\?=$/` matches with `m[1] = 'dGVzdA=='`. `atob('dGVzdA==')` succeeds and `TextDecoder` produces `'test'`, so `hName = 'test'`.
3. Server compare: `mcpNameForMethod('tools/call', { name: '=?base64?dGVzdA==?=' })` returns `'=?base64?dGVzdA==?='`. `hName` (`'test'`) `!==` `bodyName` (`'=?base64?dGVzdA==?='`), so `validateMcpHeaders` returns a mismatch message.
4. Result: `shttpHandler` / `streamableHttp.ts` return 400 / -32001 `Bad Request: Mcp-Name header 'test' does not match request body name '=?base64?dGVzdA==?='` — even though the client sent a header that exactly mirrors the body.
The malformed-base64 try/catch added in 1bd5a9a does not help here, because 'dGVzdA==' is valid base64 — decode succeeds, it just produces the wrong string.
Why existing code doesn't prevent it
The encoder's only gate is "is every char in [ -~]?". It does not check whether the resulting raw value would be ambiguous to the decoder. This is the classic RFC 2047 encoded-word ambiguity: a literal that looks like an encoded-word must itself be encoded, otherwise the receiver cannot distinguish "this is an encoded-word" from "this is the literal text =?base64?…?=". Mature RFC-2047 encoders handle this by force-encoding any input that would otherwise be parsed as an encoded-word.
Impact
In practice the impact is near-zero, which is why this is a nit:
- SEP-986 says tool names SHOULD match `[A-Za-z0-9._-]{1,128}` — `?` and `=` are outside that set, and the SDK warns on registration of non-conforming names.
- Prompt names follow similar conventions.
- Resource URIs must start with a scheme per RFC 3986, so they cannot begin with `=?` — the decoder regex is anchored with `^`, so `scheme://…=?base64?…?=` would not match anyway.
So no realistic MCP deployment will hit this. The reason it's still worth a one-line fix is that encodeMcpHeaderValue / decodeMcpHeaderValue are public exports from @modelcontextprotocol/core (re-exported via packages/core/src/index.ts), documented as inverses, and bijectivity is a reasonable invariant for a public encoding pair — a third-party caller might use them for arbitrary strings.
Suggested fix
In encodeMcpHeaderValue, also force the base64 path when the input would itself be parsed as an encoded-word by the decoder:
```ts
export function encodeMcpHeaderValue(value: string): string {
    if (HEADER_SAFE.test(value) && !/^=\?base64\?.+\?=$/.test(value)) return value;
    const b64 = btoa(String.fromCharCode(...new TextEncoder().encode(value)));
    return `=?base64?${b64}?=`;
}
```

(or equivalently, force-encode whenever `/^=\?/.test(value)`). With this, `'=?base64?dGVzdA==?='` gets double-wrapped on the wire and decodes back to itself, restoring `decode(encode(x)) === x` for all inputs.
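The suggested guard can be checked end to end with a standalone sketch (a reconstruction based on this review's description of the helpers, not the PR's actual source):

```typescript
// Sketch verifying the suggested fix: inputs that already look like an
// encoded-word are force-encoded, so decode(encode(x)) === x holds even for
// the ambiguous literal. Names and bodies are reconstructions.
const HEADER_SAFE = /^[ -~]*$/;
const ENCODED_WORD = /^=\?base64\?.+\?=$/;

function encodeFixed(value: string): string {
    if (HEADER_SAFE.test(value) && !ENCODED_WORD.test(value)) return value;
    const b64 = btoa(String.fromCharCode(...new TextEncoder().encode(value)));
    return `=?base64?${b64}?=`;
}

function decode(value: string): string {
    const m = /^=\?base64\?(.+)\?=$/.exec(value);
    if (!m) return value;
    return new TextDecoder().decode(Uint8Array.from(atob(m[1]!), c => c.charCodeAt(0)));
}

const literal = '=?base64?dGVzdA==?=';
console.log(decode(literal));              // 'test' — the naive pass-through is ambiguous
console.log(decode(encodeFixed(literal))); // '=?base64?dGVzdA==?=' — double-wrapping round-trips
```

The first line shows the bug (decoding a pass-through literal yields `'test'`); the second shows the fixed encoder restoring bijectivity.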
b0d286d to f15bcb8
1bd5a9a to 87115e0
@claude review
```ts
// eslint-disable-next-line unicorn/prefer-code-point
const b64 = btoa(String.fromCharCode(...new TextEncoder().encode(value)));
return `=?base64?${b64}?=`;
```
🟡 nit: String.fromCharCode(...new TextEncoder().encode(value)) spreads the entire UTF-8 byte array as function arguments, which throws RangeError: Maximum call stack size exceeded once the byte count nears the engine's argument limit (~125K in V8, lower in JSC / constrained stacks). For the SDK's own Mcp-Name path this is effectively unreachable — HTTP header-size limits (8–16KB) cap values orders of magnitude below the threshold — but encodeMcpHeaderValue is a public @modelcontextprotocol/core export and the spread-into-fromCharCode pattern is the documented MDN footgun; consider the loop form: const bytes = new TextEncoder().encode(value); let s = ''; for (const b of bytes) s += String.fromCharCode(b); const b64 = btoa(s);.
Extended reasoning...
What the issue is
encodeMcpHeaderValue builds the Latin-1 string for btoa by spreading a Uint8Array into String.fromCharCode:
```ts
const b64 = btoa(String.fromCharCode(...new TextEncoder().encode(value)));
```

Spread (`...`) materialises every element as a separate function argument. JS engines impose a stack-dependent cap on argument count; when the byte array exceeds it, the call throws `RangeError: Maximum call stack size exceeded`. MDN's `Function.prototype.apply` / `String.fromCharCode` docs explicitly call this pattern out and recommend chunking or a loop.
Step-by-step proof
Empirically in Node/V8 on this machine:
```ts
String.fromCharCode(...new Uint8Array(120_000)); // ok
String.fromCharCode(...new Uint8Array(130_000)); // RangeError: Maximum call stack size exceeded
```

So with the current implementation:
1. A caller invokes `encodeMcpHeaderValue(s)` where `s` contains at least one code point outside `[ -~]` (so `HEADER_SAFE.test(s)` is `false`) and `new TextEncoder().encode(s).length` ≳ 125 000.
2. `new TextEncoder().encode(s)` produces a `Uint8Array` of ≥125K bytes.
3. `String.fromCharCode(...bytes)` attempts to push ≥125K arguments onto the call stack.
4. V8 throws `RangeError` before `btoa` ever runs; the function never returns a value.
The threshold is not a constant: it depends on the engine (JavaScriptCore is typically lower than V8), the available stack at the call site (deeper call stacks lower it), and runtime flags. So "~125K" is a ceiling, not a floor.
Why existing code doesn't prevent it
HEADER_SAFE (/^[ -~]*$/) is all-or-nothing on the whole value — a single non-ASCII code point anywhere forces the base64 branch on the entire string, so the byte array is the full UTF-8 length of the input, not just the non-ASCII portion. There is no length check, and encodeMcpHeaderValue is re-exported from the @modelcontextprotocol/core barrel (packages/core/src/index.ts) as public API with no documented size precondition.
Why this is only a nit (addressing the counter-argument)
In the SDK's own call site — StreamableHTTPClientTransport._send() setting Mcp-Name for tools/call / prompts/get / resources/read — the practical impact is essentially nil:
- Tool/prompt names SHOULD match `[A-Za-z0-9._-]{1,128}` (SEP-986).
- Resource URIs are bounded by filesystem/URL conventions (a few KB at most).
- Even if a ~125KB value were produced, the resulting ~167KB base64 header would be rejected by every common HTTP server/proxy (Node `http` defaults to 16KB total header size, nginx to 8KB) — the request would fail downstream regardless, just with a different error.
- The throw is also caught by `_send`'s outer `try/catch` (→ `onerror` + rejected promise), so it's not an uncaught crash.
So this is not a reachable transport bug. The reason it's still worth a one-line change is that encodeMcpHeaderValue is public — third parties may call it on arbitrary strings outside the header context (the JSDoc scopes it to "an Mcp-* HTTP header" but nothing enforces that), and the canonical safe form is no longer or less clear than the spread:
```ts
const bytes = new TextEncoder().encode(value);
let s = '';
for (const b of bytes) s += String.fromCharCode(b);
const b64 = btoa(s);
```

This isn't "adding error handling for a case that can't happen" — it's choosing the idiom that doesn't have a hidden input-size cliff, at zero cost in readability.
Relationship to other open comments
Distinct from the existing nit on httpHeaders.ts:36 (encoded-word round-trip ambiguity in the HEADER_SAFE pass-through branch) — that one is about the if branch, this is about the btoa branch; different inputs, different failure modes. Also distinct from the resolved comment on decodeMcpHeaderValue (malformed-base64 atob throw) and the two test-coverage nits.
…urn raw so mismatch path applies, not 500)
…n.validate (mismatched initialize must not mint a session)
87115e0 to d7884d6
@claude review
```ts
export * from './shared/auth.js';
export * from './shared/authUtils.js';
export * from './shared/dispatcher.js';
export * from './shared/httpHeaders.js';
```
🟡 nit: This PR adds four new public exports to @modelcontextprotocol/core (encodeMcpHeaderValue, decodeMcpHeaderValue, mcpNameForMethod, validateMcpHeaders), changes @modelcontextprotocol/client wire behavior (every single-message POST now sends Mcp-Method/Mcp-Name), and adds a new @modelcontextprotocol/server rejection path (400/-32001 on header/body mismatch) — but no .changeset/*.md is included. The changeset-bot comment is neutral on whether one is needed; given new public API + observable wire behavior, one is — and likely minor for all three packages rather than the bot link's default patch.
Extended reasoning...
What's missing
The diff touches six files across @modelcontextprotocol/client, @modelcontextprotocol/core, and @modelcontextprotocol/server, but adds nothing under .changeset/. The repo uses changesets for versioning: .changeset/ currently holds 59 entries, and git log -- .changeset/ shows every recent merged feat/fix PR (#1855, #1887, #1974, #1901, #1871, …) landed with its own changeset — including changes of comparable or smaller scope (add-resource-size-field.md, fix-session-status-codes.md, custom-methods-minimal.md).
Why this PR needs one
This is not a no-op-for-consumers refactor:
- `@modelcontextprotocol/core` gains four new public exports via the new `export * from './shared/httpHeaders.js'` barrel line: `encodeMcpHeaderValue`, `decodeMcpHeaderValue`, `mcpNameForMethod`, `validateMcpHeaders`. New public API surface conventionally requires at least a `minor` bump.
- `@modelcontextprotocol/client`: `StreamableHTTPClientTransport.send()` now emits `Mcp-Method` (and `Mcp-Name` for `tools/call`/`prompts/get`/`resources/read`) on every single-message POST. This is observable wire behavior — proxies, logging middleware, and SEP-2243-aware servers will see it.
- `@modelcontextprotocol/server`: both `shttpHandler` and `WebStandardStreamableHTTPServerTransport` now reject requests they previously accepted (present-but-mismatched `Mcp-*` header → 400/-32001). That's a new error path consumers should see in release notes.
Step-by-step proof
1. `packages/core/src/index.ts` adds `export * from './shared/httpHeaders.js'` (line 6) → `import { encodeMcpHeaderValue } from '@modelcontextprotocol/core'` becomes valid in the next release.
2. `git diff --name-only` for this PR lists 6 files; `grep '^\.changeset/'` over that list returns nothing.
3. `ls .changeset/*.md | wc -l` → 60 (incl. README), confirming changesets are the active release mechanism.
4. Without a changeset, merging this PR produces no version bump and no release-notes entry for any of the three packages, so the new exports ship silently in whatever the next PR's bump happens to be (or not at all until one lands).
Why this isn't redundant with the changeset-bot comment
The bot comment is intentionally neutral: "If these changes should not result in a new version, you're good to go." It fires on every PR without a changeset and explicitly defers the judgment call to a human. This nit supplies that judgment: given (a) new public exports and (b) new observable wire behavior + a new server rejection path, a changeset is needed. It also differs from the bot's suggestion on bump level — the bot's one-click link pre-fills patch for all three packages, but new public API in core and new feature behavior in client/server conventionally warrant minor (the PR title is itself feat(...)).
On the "stacked PR / Depends on R3" point: git log -- .changeset/ shows no rollup pattern in this repo — each merged feat/fix PR carries its own entry. Even if a related PR also adds a changeset, this PR independently adds public exports to core and would need its own entry to attribute the bump correctly.
Suggested fix
```md
---
"@modelcontextprotocol/core": minor
"@modelcontextprotocol/client": minor
"@modelcontextprotocol/server": minor
---

SEP-2243: client emits `Mcp-Method`/`Mcp-Name` headers on single-message POSTs; server transports reject (400/-32001) when present headers mismatch the body; core exports `encodeMcpHeaderValue`/`decodeMcpHeaderValue`/`mcpNameForMethod`/`validateMcpHeaders`.
```

Severity
nit — release-process gap, not a runtime defect; trivially fixed via the bot's link (adjusting patch → minor) before merge.
Client `streamableHttp.ts` sets `Mcp-Method` (always) and `Mcp-Name` (for `tools/call`, `resources/read`, `prompts/get`) on POST requests. Server `shttpHandler` and `streamableHttp.ts` validate the headers match the body when present and reject with `HeaderMismatchError` otherwise.

Motivation and Context
Implements SEP-2243 (HTTP standardization for middle-box routing/observability).
How Has This Been Tested?
New header set/validate tests; `pnpm test:all` passes.

Breaking Changes
None.
Types of changes
Checklist
Additional context
Depends on R3 (server side). Review guide: https://gist.github.com/felixweinberger/5a48e0f14d5aced39ed6a91b61940711.