
[DO NOT MERGE] Upgrades Netty to 4.1.132 #39

Open
tiagomlalves wants to merge 118 commits into dse-netty-4.1.128 from dse-netty-4.1.132
Conversation

@tiagomlalves

No description provided.

netty-project-bot and others added 30 commits October 14, 2025 14:32
Backport of netty#15752

Motivation:

Reflective field accesses were initially disabled in
netty#10428 because back then native-image
did not support `Unsafe.staticFieldOffset()`.

This is no longer an issue since GraalVM 21.2.0 for JDK 11 (July 2021)
see

oracle/graal@f97bdb5

Modification:

Remove the check for native-image before accessing fields using
`Unsafe`.

Result:

Netty can directly access fields necessary for `PlatformDependent0`
initialization using `Unsafe`.
Motivation:

There are new JDK updates that we should use.

Modifications:

- Update JDK versions

Result:

Use latest JDK versions
Motivation:

An invalid content length in a continue request would induce
HttpObjectAggregator to throw a NumberFormatException.

Modification:

Use the existing isContentLengthInvalid to guard the getContentLength
call in continue request processing.

Result:

No exception thrown.
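The guard described above can be sketched as follows. This is a minimal illustration of the pattern, not Netty's actual `isContentLengthInvalid`/`getContentLength` code: validate the header value before parsing, so a malformed length is rejected rather than surfacing as an uncaught `NumberFormatException`.

```java
// Sketch only: reject an invalid Content-Length value instead of letting
// Long.parseLong throw a NumberFormatException up the pipeline.
public class ContentLengthGuard {
    /** Returns the parsed length, or -1 if the header value is not a valid number. */
    static long contentLength(String headerValue) {
        if (headerValue == null) {
            return -1;
        }
        String trimmed = headerValue.trim();
        if (trimmed.isEmpty()) {
            return -1;
        }
        for (int i = 0; i < trimmed.length(); i++) {
            char c = trimmed.charAt(i);
            if (c < '0' || c > '9') {
                return -1; // invalid: reject instead of letting parseLong throw
            }
        }
        try {
            return Long.parseLong(trimmed);
        } catch (NumberFormatException e) {
            return -1; // overflow of a very long digit string
        }
    }

    public static void main(String[] args) {
        System.out.println(contentLength("42"));  // 42
        System.out.println(contentLength("4a2")); // -1, no exception thrown
    }
}
```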
…etty#15798)

Motivation:
The consumeCPU call can disturb potential optimizations, so it's
desirable to be able to benchmark the case where it's not present.

Modification:
Add a zero-delay parameter to the benchmark, and make sure the
consumeCPU call is skipped in that case.

Result:
More flexible benchmark.
Motivation:

When the padding of an HTTP/2 DATA frame exceeds the bounds of the
frame, an IndexOutOfBoundsException would be thrown instead of the
expected Http2Exception:

```
Exception in thread "main" java.lang.IndexOutOfBoundsException: readerIndex: 1, writerIndex: 0 (expected: 0 <= readerIndex <= writerIndex <= capacity(4))
	at io.netty.buffer.AbstractByteBuf.checkIndexBounds(AbstractByteBuf.java:112)
	at io.netty.buffer.AbstractByteBuf.writerIndex(AbstractByteBuf.java:135)
	at io.netty.buffer.WrappedByteBuf.writerIndex(WrappedByteBuf.java:132)
	at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:408)
	at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:244)
	at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:164)
```

Modification:

Instead of verifying the padding size against the full payloadLength,
which includes the pad length, move the check to
lengthWithoutTrailingPadding where the exact number of remaining bytes
is known.

Result:

Proper protocol error.

Co-authored-by: Jonas Konrad <jonas.konrad@oracle.com>
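The bounds logic can be sketched as follows (illustrative method names, not the actual `DefaultHttp2FrameReader` internals): compare the padding against the bytes actually remaining in the frame, so an oversized pad length surfaces as a protocol error rather than a buffer bounds violation.

```java
// Sketch only: validate DATA-frame padding against the bytes actually
// remaining, so a bad pad length becomes a protocol error instead of an
// IndexOutOfBoundsException from the buffer.
public class PaddingCheck {
    /** Bytes of real data left after removing padding, or -1 for a protocol error. */
    static int lengthWithoutTrailingPadding(int remainingBytes, int padding) {
        if (padding > remainingBytes) {
            return -1; // caller should raise an Http2Exception (protocol error)
        }
        return remainingBytes - padding;
    }

    public static void main(String[] args) {
        System.out.println(lengthWithoutTrailingPadding(10, 3)); // 7
        System.out.println(lengthWithoutTrailingPadding(0, 1));  // -1: padding exceeds frame
    }
}
```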
Motivation:
BlockHound version 1.0.15.RELEASE comes with newer byte-buddy dependency

Modification:
- Bump BlockHound version as byte-buddy dependency is updated

Result:
BlockHound version 1.0.15.RELEASE with newer byte-buddy dependency
…ty#15800)

Motivation:
I've gotten allocation profile data from "a large e-commerce
application", which has sizable allocation volume at 32 KiB and 64 KiB,
with very little in between that isn't covered by the existing size
classes.

Modification:
Add 32 KiB and 64 KiB size classes to the adaptive allocator. Make the
adaptiveChunkMustDeallocateOrReuseWthBufferRelease test more size-class
agnostic.

Result:
Nearly 50% memory usage reduction in this e-commerce application use
case, according to the allocation pattern simulator for the 1024 live
buffers case, which brings adaptive on par with the pooled allocator for
this use case.

Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
 (… (netty#15810)

…netty#15805)

Motivation:

Fixes netty#15804. In Java 25, some MQTT
flows cause `IndexOutOfBoundsException` as Unsafe is no longer used.

Modification:

Use `MqttEncoder.writeEagerUTF8String` which precalculates expected
buffer size instead of `MqttEncoder.writeUnsafeUTF8String`.

Result:

No more IndexOutOfBoundsException in MqttEncoder and Java 25

Co-authored-by: Dmytro Dumanskiy <doom369@gmail.com>
Motivation:
If the ByteToMessageDecoder gets a reentrant channelRead call, we need
to avoid closing or otherwise manipulating the cumulation buffer.

Modification:
Guard reentrant channelRead calls by queueing up messages and letting
the top-level call process them all in order.

Result:
Reentrant calls to ByteToMessageDecoder.channelRead will no longer cause
weird IllegalReferenceCountException on the cumulation buffer.
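The queueing guard described above can be sketched roughly like this (assumed names and a much-simplified decoder, not Netty's actual `ByteToMessageDecoder` fields): a reentrant `channelRead` only enqueues its message, and the top-level invocation drains the queue in order, so only one stack frame ever touches the decoder's state.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Sketch only: guard against reentrant channelRead calls by queueing
// messages and letting the outermost invocation process them in order.
public class ReentrantReadGuard {
    private final ArrayDeque<String> pending = new ArrayDeque<>();
    private boolean decoding;
    final List<String> decoded = new ArrayList<>();

    void channelRead(String msg) {
        pending.add(msg);
        if (decoding) {
            return; // reentrant call: the outer invocation will process it
        }
        decoding = true;
        try {
            String m;
            while ((m = pending.poll()) != null) {
                decode(m);
            }
        } finally {
            decoding = false;
        }
    }

    private void decode(String msg) {
        decoded.add(msg);
        if (msg.equals("first")) {
            channelRead("reentrant"); // simulates a handler firing a read mid-decode
        }
    }
}
```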
Motivation:
Comment on the sizeId2sizeTab table looks like it's referencing the
wrong field, going by how it's created and used.

Modification:
Make it reference the nSizes field, which is used in the computation of
the table.

Result:
Clearer comment.
Fixes netty#15333
…5846)

Motivation:
AsciiString should have a consistent hash code regardless of the
platform we're running on, because we don't know if the hash code gets
exposed or used across platforms.

An optimized version of the hash code was assuming a little-endian
platform but could end up being used on big endian platforms.

Modification:
Add a condition that the safe, platform-agnostic hash code
implementation should be used on big endian platforms.

Result:
The AsciiString hash code values are now always the same regardless of
the platform and JVM configuration we're running on.
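Why the optimized path had to be gated can be illustrated with plain `ByteBuffer` views: the same bytes read as a multi-byte value differ between little- and big-endian interpretations, so any hash that reads several bytes at once (e.g. via `Unsafe`) is platform-dependent unless byte order is taken into account.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Illustration: a multi-byte read of the same bytes yields different ints
// depending on the byte order, which is why an order-sensitive hash must
// fall back to the safe per-byte implementation on big-endian platforms.
public class EndianDemo {
    static int readInt(byte[] bytes, ByteOrder order) {
        return ByteBuffer.wrap(bytes).order(order).getInt();
    }

    public static void main(String[] args) {
        byte[] data = {1, 2, 3, 4};
        int little = readInt(data, ByteOrder.LITTLE_ENDIAN); // 0x04030201
        int big = readInt(data, ByteOrder.BIG_ENDIAN);       // 0x01020304
        System.out.println(little == big); // false: a multi-byte read is order-sensitive
    }
}
```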
Motivation:
GitHub has deprecated the macOS 13 runners and will be removing them.
Details in actions/runner-images#13046

Modification:
Replace the runner OS for all Intel Mac builds with macos-15-intel.

Result:
macOS builds now run on the latest, and last, version of macOS that will
have Intel support.
Motivation:

Better javadoc for `CompositeByteBuf#release` method


Modification:

Add javadoc

Result:

Fixes netty#15863

---------

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Motivation:
The `AbstractHttp2StreamChannel` subclasses use a synthetic id in their
`ChannelId`, but include the real stream id in their `toString()`
method.
Using two different ids makes it harder to correlate the two in logs.
The synthetic ids are necessary because the stream id is assigned at a
protocol level at some point after the channel has been created.

Modification:
Include the synthetic id in the `AbstractHttp2StreamChannel.toString()`
output.

Result:
Easier to correlate HTTP/2 channel and stream ids, to help with
debugging.
### Motivation

The failure was found with
[NonDex](https://github.com/TestingResearchIllinois/NonDex), which
explores non-determinism in tests. These tests can cause test failure
under different JVMs, etc.

The non-determinism in the test class

`io.netty.handler.codec.http.websocketx.extensions.WebSocketExtensionUtilTest`
in `codec-http` module is caused by the `WebSocketExtensionUtil` storing
extension parameters in a HashMap, whose iteration order is
nondeterministic. When `computeMergeExtensionsHeaderValue` serializes
these parameters using `data.parameters().entrySet()`, the output
ordering varies depending on the underlying hash iteration order.

#### Modifications

Replacing the parameter hashmaps with `LinkedHashMap` fixes the issue by
preserving deterministic insertion/iteration order.

### Result

After the 3-line modification, the tests no longer depend on
non-deterministic order of HashMap iteration.

### Failure Reproduction

1. Checkout the 4.2 branch and the recent commit I tested on:
0a3df28
2. Environment: 

> openjdk 17.0.16 2025-07-15
OpenJDK Runtime Environment (build 17.0.16+8-Ubuntu-0ubuntu124.04.1)
OpenJDK 64-Bit Server VM (build 17.0.16+8-Ubuntu-0ubuntu124.04.1, mixed
mode, sharing)

> Apache Maven 3.9.11
3. Run test with
[NonDex](https://github.com/TestingResearchIllinois/NonDex), for
example:
```
mvn -pl codec-http edu.illinois:nondex-maven-plugin:2.1.7:nondex \
    -Dtest=io.netty.handler.codec.http.websocketx.extensions.WebSocketExtensionUtilTest#computeMergeExtensionsHeaderValueWhenNoConflictingUserDefinedHeader \
    -DnondexRerun=true -DnondexRuns=1 -DnondexSeed=974622 \
    -Djacoco.skip -Drat.skip -Dpmd.skip -Denforcer.skip 
```
4. Observe the test failure

Co-authored-by: lycoris106 <53146702+lycoris106@users.noreply.github.com>
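The rationale for the fix can be demonstrated with a small sketch (parameter names here are illustrative, not the actual extension data): `LinkedHashMap` guarantees insertion-order iteration, so serializing the parameters produces a stable header value regardless of the JVM's hash iteration order.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.StringJoiner;

// Illustration: LinkedHashMap iterates in insertion order, so the
// serialized parameter string is deterministic; HashMap's order is an
// implementation detail and can vary between JVMs.
public class DeterministicParams {
    static String serialize(Map<String, String> params) {
        StringJoiner joiner = new StringJoiner(";");
        for (Map.Entry<String, String> e : params.entrySet()) {
            joiner.add(e.getValue() == null ? e.getKey() : e.getKey() + "=" + e.getValue());
        }
        return joiner.toString();
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("server_no_context_takeover", null);
        params.put("client_max_window_bits", "10");
        // Iteration order matches insertion order, so the header value is stable.
        System.out.println(serialize(params)); // server_no_context_takeover;client_max_window_bits=10
    }
}
```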
netty#15894)

…size (netty#15859)

Motivation:
If the size of the input buffer is smaller than the configured block
size (64 KB by default), then ZstdEncoder.write only produces an empty
buffer. With the default sizes, this makes the encoder unusable for
variable size content. This fixes issue netty#15340.

Modification:

Return the uncompressed data for small inputs.

Result:

Fixes netty#15340

Signed-off-by: xingrufei <qhdxssm@qq.com>
Co-authored-by: skyguard1 <qhdxssm@qq.com>
Motivation:

Zstd compression fails for sources larger than 31MB since the default
max encode size is 32M. This PR changes the default max encode size
to `Integer.MAX_VALUE` so that `ZstdEncoder` can work with large data.

Modification:

Change the default max encode size to `Integer.MAX_VALUE` so that
`ZstdEncoder` can work with large data.


Result:

Fixes netty#14972

`ZstdEncoder` can work with large data.

Signed-off-by: xingrufei <qhdxssm@qq.com>
Signed-off-by: skyguard1 <qhdxssm@qq.com>
Co-authored-by: skyguard1 <qhdxssm@qq.com>
netty#15916)

…netty#15911)

Motivation:

The splice implementation was quite buggy and also does not really fit
into the whole pipeline idea, so we already removed it for 5.x. Let's
mark it as deprecated in earlier versions as well.

Modifications:

Add deprecation to methods

Result:

Prepare users for these methods / this functionality going away.
… (netty#15917)

Motivation:

We used Bootstrap to bootstrap EpollServerSocketChannel in the test
which is not correct.

Modification:

Use ServerBootstrap

Result:

Correct bootstrap used in test
…nEventLoop() results (netty#15927)

## Motivation

A race condition exists in
`NonStickyEventExecutorGroup.NonStickyOrderedEventExecutor` that can
cause `inEventLoop()` to return incorrect results,
  potentially leading to deadlocks and synchronization issues.

## Modifications

- Restore `executingThread` in the exception handler before setting
`state` to `RUNNING` in

`common/src/main/java/io/netty/util/concurrent/NonStickyEventExecutorGroup.java:262`
- Add test `testInEventLoopAfterReschedulingFailure()` to verify
`inEventLoop()` returns true after failed reschedule attempt

## Result

The executor correctly maintains `executingThread` even when
rescheduling fails, and `inEventLoop()` consistently returns accurate
results.

## Details

When the executor processes `maxTaskExecutePerRun` tasks and needs to
reschedule itself, if `executor.execute(this)` throws an exception
(e.g., queue
full), the catch block previously reset `state` to `RUNNING` but did not
restore `executingThread`, leaving it `null`.

This fix ensures `executingThread.set(current)` is called before
`state.set(RUNNING)` to maintain the invariant that when `state ==
RUNNING`, the
  executing thread is properly tracked.


## Testing

The new test `testInEventLoopAfterReschedulingFailure()` has been
verified to fail without the fix and pass with the fix applied.
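The invariant the fix maintains can be sketched as follows (assumed field names, heavily simplified from `NonStickyOrderedEventExecutor`): when the reschedule throws, restore `executingThread` before setting `state` back to `RUNNING`, so any observer that sees `state == RUNNING` also sees the correct executing thread.

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

// Sketch only: maintain the invariant "state == RUNNING implies
// executingThread is set" by restoring the thread before the state when
// a reschedule attempt fails.
public class ReschedulingSketch {
    static final int NONE = 0, RUNNING = 1;
    final AtomicInteger state = new AtomicInteger(RUNNING);
    final AtomicReference<Thread> executingThread = new AtomicReference<>(Thread.currentThread());

    boolean inEventLoop(Thread t) {
        return executingThread.get() == t;
    }

    void rescheduleOrKeepRunning(Runnable reschedule) {
        Thread current = Thread.currentThread();
        executingThread.set(null);
        state.set(NONE);
        try {
            reschedule.run(); // e.g. executor.execute(this); may throw if the queue is full
        } catch (RuntimeException e) {
            // Fixed ordering: restore the thread first, then the state.
            executingThread.set(current);
            state.set(RUNNING);
        }
    }
}
```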
Motivation:

lz4-java has been discontinued (see
[README](https://github.com/lz4/lz4-java)) and I've forked it to fix
[CVE-2025-12183](https://sites.google.com/sonatype.com/vulnerabilities/cve-2025-12183).
org.lz4:lz4-java:1.8.1 contains a redirect, but the latest version needs
new coordinates.

Modification:

Update to latest version, with the forked group ID.

Result:

No more CVE-2025-12183.

Co-authored-by: Jonas Konrad <jonas.konrad@oracle.com>
netty#15952)

…re (netty#15949)

Motivation:

Connect is an outbound operation and so should only fail the promise.

Modifications:

- Don't call fireExceptionCaught(...)
- Add unit test

Result:

Correct exception reporting for connect failures
netty#15957)

…… (netty#15954)

…e DomainSocketAddress via LinuxSocket

Motivation:

We did not correctly handle abstract namespaces with UDS on linux when
creating the DomainSocketAddress

Modification:

Correctly detect and handle abstract namespaces on linux

Result:

Correct code
…15965)

Motivation:

DefaultSctpServerChannelConfig did not respect SO_BACKLOG, which is
unexpected.

Modifications:

Correctly handle SO_BACKLOG

Result:

Correctly handle all ChannelOptions for SctpServerChannelConfig
…netty#15962)

Motivation:
HttpServerCodec always constructs an `EmptyLastHttpContent` and passes
it to the handler chain when processing OPTIONS requests. The
`EmptyLastHttpContent` then propagates through the handler chain.

However, the CORS handler might still propagate the EmptyLastHttpContent
to downstream handlers via fireChannelRead(), causing subsequent
handlers to receive only this empty content and lose access to the
original request URL.
Because `CorsHandler` does not consume this message, it calls
`ctx.fireChannelRead(msg)` for the EmptyLastHttpContent.

Downstream handlers then observe:

No HttpRequest

A LastHttpContent with empty content

This often breaks other handlers that rely on receiving the
`HttpRequest` first or expect consistent HTTP messages.


This PR fixes an issue in `CorsHandler` where, after handling a CORS
preflight (OPTIONS) request, the handler still propagates the subsequent
`EmptyLastHttpContent` sent by the `HttpServerCodec` to downstream
handlers.
Modification:

- `CorsHandler` tracks a handled preflight and consumes the following
HttpContent and LastHttpContent, instead of firing them downstream,
until the next HttpRequest is forwarded.
- Add tests

Result:

Fixes netty#15148 
It also improves compatibility with application frameworks and routing
handlers that expect well-formed HTTP request/response flows.

Signed-off-by: skyguard1 <qhdxssm@qq.com>
Co-authored-by: skyguard1 <qhdxssm@qq.com>
Co-authored-by: xingrufei <xingrufei@yl.com>
Co-authored-by: code-xing_wcar <code.xing@wirelesscar.com>
netty#15970)

…… (netty#15896)

…an error

Motivation:

We should close the Channel and fail the future of the bootstrap if
during setting a ChannelOption we observe an error. At the moment we
just log which might make things hard to debug and leave the Channel in
an unexpected state.

People can go back to the old behavior by using
`-Dio.netty.bootstrap.closeOnSetOptionFailure=false`.

Modifications:

- Adjust code to also close and fail the future
- Add testcases

Result:

Related to
netty#15860 (comment)
Motivation:

If initializeIfNecessary(...) fails we still need to ensure that we
release the message.

Modifications:

Correctly release message before rethrow or fail promise

Result:

Fix possible buffer leak
) (netty#15983)

Motivation:

We used some odd number for the length, just use the exact length that
we need.

Modifications:

- Allocate a byte[] of 25 length and remove odd comment

Result:

Cleanup
New vulnerability:

[CVE-2025-66566](GHSA-cmp6-m4wj-q63q)

Co-authored-by: Jonas Konrad <jonas.konrad@oracle.com>
chrisvest and others added 30 commits February 26, 2026 12:58
…freeListCapacity (netty#16334) (netty#16368)

Motivation:

Currently, the value of `BuddyChunk.freeListCapacity` is less than the
real max size of the `freeList`.
1. It may mean that `freeList.drain(freeListCapacity, this)` cannot
drain all the elements at once.
2. When calling `MpscIntQueue.create(freeListCapacity, -1)`, we rely on
`MpscIntQueue` implicitly calling
`MathUtil.safeFindNextPositivePowerOfTwo(freeListCapacity)` to set the
proper max size. This makes the code less explicit.
3. Semantically, `freeListCapacity` should be equal to the value of
`capFactor`, if I understand correctly.
4. In addition, using `freeListCapacity = capFactor` eliminates a
bit-shift operation.

Modification:

Use `freeListCapacity = capFactor;` instead of `freeListCapacity = tree
>> 1;`.

Result:

Cleaner code.

---------

Co-authored-by: lao <none>
Co-authored-by: Chris Vest <christianvest_hansen@apple.com>

(cherry picked from commit b7ec449)

Co-authored-by: old driver <29225782+laosijikaichele@users.noreply.github.com>
…st.createNewThr… (netty#16372)

Auto-port of netty#16365 to 4.1
Cherry-picked commit: eeaa0ee

---
…eadCache

Motivation:
We see tests using this method occasionally timing out, with little
information about what the worker threads are doing that takes so long.

Modification:
Also add async stack trace capturing on the cacheLatch await, because
the tests can get interrupted on that call as well.

Result:
More diagnostics next time this test fails.

Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
Motivation:
We see this SniHandlerTest often fail with a leak being detected.

Modification:

- Make sure to wait for the server to shut down, so the leak presence
detector isn't racing with the shutdown of the server child channel.
- Make sure to release any `SslContext` objects we create, if later ones
throw exceptions.
- Propagate any connect exceptions by awaiting with `sync`.

Result:
Hopefully more stable `SniHandlerTest` and no more leaks.

Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
…Threads (netty#16373)

Auto-port of netty#16370 to 4.1
Cherry-picked commit: fa59bfb

---
Motivation:
In busy CI environments these tests could time out (the timeout was 5
seconds) and cause build breakages.

Modification:

- Add a barrier for the start of the worker threads.
- Make sure every thread does the same number of iterations; the total
number of iterations remains the same, but now threads work on these
without coordination.
- Capture worker thread exceptions.
- Increase the timeout to 30 seconds.

Result:
Tests should be more stable now.

---------

Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
…ty#16379)

Motivation:

SizeClassedChunk performs 2 atomic ops (retain/release) per allocation
cycle on the hot path.

Modification:

Replace ref counting with a segment-count state machine that only needs
atomics on the cold deallocation path.

Result:

No more per-allocation atomic operations for SizeClassedChunk.

(cherry picked from commit de25e7a)

Co-authored-by: Francesco Nigro <nigro.fra@gmail.com>
…ads (netty#16384)

Auto-port of netty#16380 to 4.1
Cherry-picked commit: 36395eb

---
Motivation:
In busy CI environments these tests could time out and cause build
breakages.

Modification:

- Add a barrier for the start of the worker threads.
- Increase the timeout to 30 seconds.
- Capture more stack trace information if the test fails in the future.

Result:
Tests should be more stable now.

Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
netty#16280)

### Motivation:

When HttpObjectAggregator.handleOversizedMessage() sends a 413 response
with HTTP/1.1 keep-alive
enabled, it only closes the channel on write failure. On success, the
channel is left open but no
channel.read() is called.

With AUTO_READ=false, this leaves the connection stuck -
subsequent requests sit unread in the kernel buffer until timeout.

### Modification:

Always close the channel after sending 413, not just on write failure.
This matches:
1. The existing comment: "send back a 413 and close the connection"
2. The behavior when keep-alive is disabled
3. The behavior when the message is a FullHttpMessage

Closing is the correct behavior because after an oversized chunked
message, there may be leftover data
in the TCP stream that cannot be reliably skipped in HTTP/1.1.

### Result:

Connections are properly closed after 413, preventing stuck connections
with AUTO_READ=false.

### Repro:

The bug can only be reproed with a "real" channel. This test repros but
isn't committed since the fix to simply close the channel is testable
with a simpler EmbeddedChannel test

```
@Test
public void testOversizedRequestWithKeepAliveAndAutoReadFalse() throws InterruptedException {
    final CountDownLatch responseLatch = new CountDownLatch(1);
    final CountDownLatch secondRequestLatch = new CountDownLatch(1);
    final AtomicReference<HttpResponseStatus> statusRef = new AtomicReference<HttpResponseStatus>();

    NioEventLoopGroup group = new NioEventLoopGroup(2);
    try {
        ServerBootstrap sb = new ServerBootstrap();
        sb.group(group)
                .channel(NioServerSocketChannel.class)
                .childOption(ChannelOption.AUTO_READ, false)
                .childHandler(new ChannelInitializer<Channel>() {
@Override
                    protected void initChannel(Channel ch) {
                        ch.pipeline().addLast(new HttpServerCodec());
                        ch.pipeline().addLast(new HttpObjectAggregator(4));
                        ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpRequest>() {
@Override
                            protected void channelRead0(ChannelHandlerContext ctx, FullHttpRequest msg) {
                                secondRequestLatch.countDown();
                            }
                        });
                        // Trigger the first read manually (AUTO_READ=false requires this)
                        ch.read();
                    }
                });

        Bootstrap cb = new Bootstrap();
        cb.group(group)
                .channel(NioSocketChannel.class)
                .handler(new ChannelInitializer<Channel>() {
@Override
                    protected void initChannel(Channel ch) {
                        ch.pipeline().addLast(new HttpClientCodec());
                        ch.pipeline().addLast(new HttpObjectAggregator(1024));
                        ch.pipeline().addLast(new SimpleChannelInboundHandler<FullHttpResponse>() {
@Override
                            protected void channelRead0(ChannelHandlerContext ctx, FullHttpResponse msg) {
                                statusRef.set(msg.status());
                                responseLatch.countDown();
                            }
                        });
                    }
                });

        Channel serverChannel = sb.bind(new InetSocketAddress(0)).sync().channel();
        int port = ((InetSocketAddress) serverChannel.localAddress()).getPort();
        Channel clientChannel = cb.connect(new InetSocketAddress(NetUtil.LOCALHOST, port)).sync().channel();

        HttpRequest request = new DefaultHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.PUT, "/upload");
        HttpUtil.setContentLength(request, 5);
        clientChannel.writeAndFlush(request);
        clientChannel.writeAndFlush(new DefaultLastHttpContent(
                Unpooled.wrappedBuffer(new byte[]{1, 2, 3, 4, 5})));

        // Server should respond with 413
        assertTrue(responseLatch.await(5, SECONDS));
        assertEquals(HttpResponseStatus.REQUEST_ENTITY_TOO_LARGE, statusRef.get());

        // Send a second request on the same connection. With the bug, the server never
        // calls ctx.read() after the 413, so this request hangs indefinitely.
        clientChannel.writeAndFlush(
                new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/next",
                        Unpooled.EMPTY_BUFFER));

        assertTrue(secondRequestLatch.await(5, SECONDS),
                "Second request was never read - channel is stuck after 413 with AUTO_READ=false");

        clientChannel.close().sync();
        serverChannel.close().sync();
    } finally {
        group.shutdownGracefully();
    }
}
```

Co-authored-by: Sam Landfried <samlland@amazon.com>
…netty#16417)

Auto-port of netty#16411 to 4.1
Cherry-picked commit: 5750880

---
Motivation:
There's no need to specifically fetch and create the target port branch
if that is already the default branch of the checkout. Doing so will
just make the fetch fail.

Modification:
Only fetch the target branch if it is different from the current branch.

Result:
We can now autoport into the default branch.

Reapplies and fixes "Fix autoport fetching into the existing branch"
(netty#16410)

This reverts commit 42ee99d.

Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
…ds0 (netty#16419)

Auto-port of netty#16404 to 4.1
Cherry-picked commit: 801da46

---
Motivation:
We often see timeouts of this test in CI.

Modification:
Apply a similar change to this test, to what we did to
`testToStringMultipleThreads` in
netty#16380

Result:
If it fails again we'll know what the worker threads are doing.

Co-authored-by: Chris Vest <christianvest_hansen@apple.com>
….orderedCopyOnInsert (netty#16421)

Auto-port of netty#16386 to 4.1
Cherry-picked commit: 97d9e1c

---
Motivation:

The second array access is not necessary in
DefaultAttributeMap.orderedCopyOnInsert as we already have a variable

Modification:

- Replaced array access with existing variable
- Removed unused suppress warning
- Move duplicated code into a variable

Result:

Fewer array accesses in DefaultAttributeMap.orderedCopyOnInsert

Co-authored-by: Dmytro Dumanskiy <doom369@gmail.com>
…le (netty#16430)

Auto-port of netty#16428 to 4.1
Cherry-picked commit: 335d294

---
Motivation:

Starting from JDK 23, the annotation processing was disabled by default,
see:
https://bugs.java.com/bugdatabase/JDK-8321314/description
https://mail.openjdk.org/pipermail/jdk-dev/2024-May/009028.html

This change breaks the `microbench` module because it relies on JMH
annotation processors to generate benchmark code. For instance, when
building with JDK 25, run the following command within the `microbench`
module:

> mvn clean install -Dcheckstyle.skip -Pbenchmark-jar

Then run the generated JAR(`java -jar target/microbenchmarks.jar`), and
it results in the following error:

> Error: Unable to access jarfile microbenchmarks.jar
> Exception in thread "main" java.lang.RuntimeException: ERROR: Unable
to find the resource: /META-INF/BenchmarkList
> at
org.openjdk.jmh.runner.AbstractResourceReader.getReaders(AbstractResourceReader.java:98)
> at org.openjdk.jmh.runner.BenchmarkList.find(BenchmarkList.java:124)
>         at org.openjdk.jmh.runner.Runner.internalRun(Runner.java:253)
>         at org.openjdk.jmh.runner.Runner.run(Runner.java:209)
>         at org.openjdk.jmh.Main.main(Main.java:71) 

Modification:

Whitelist the `jmh-generator-annprocess` in the `microbench/pom.xml`,
with `annotationProcessorPaths` configuration, see:

https://maven.apache.org/plugins/maven-compiler-plugin/compile-mojo.html#annotationProcessorPaths

Result:

Properly process the JMH annotation.

Co-authored-by: old driver <29225782+laosijikaichele@users.noreply.github.com>
…6432)

Auto-port of netty#16407 to 4.1
Cherry-picked commit: 566768e

---
Motivation:

We missed ensuring the preface is flushed automatically in some cases,
which could have caused the remote peer to never see it if no data was
ready to read.
Besides this, we also sometimes flushed even if no data was
generated and so it was not required.

Modifications:

- Ensure we always flush if we produce the preface
- Only flush if we really produce data
- Add tests

Result:

Preface is always flushed correctly

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
…rs (netty#16437)

Auto-port of netty#16412 to 4.1
Cherry-picked commit: 9a85f9f

---
Motivation:

When a chunked HTTP message contains a trailer field with a folded value
(obs-fold continuation line starting with SP or HT), readTrailingHeaders
threw UnsupportedOperationException. This happened because the folding
logic called List.set() on the list returned by
trailingHeaders().getAll(), which is an AbstractList implementation that
does not override set().

Modifications:

Changed readTrailingHeaders in HttpObjectDecoder to accumulate folded
continuation lines into the instance-level `value` field, mirroring the
pattern already used by readHeaders. The previous header is now flushed
via trailingHeaders().add() only once its complete value is assembled,
eliminating the need to mutate the list returned by getAll(). Forbidden
trailer fields (Content-Length, Transfer-Encoding, Trailer) are filtered
at flush time, consistent with the previous behaviour.

Added tests to HttpRequestDecoderTest and HttpResponseDecoderTest
covering:
- A single trailer field
- A folded trailer field with multiple SP and HT continuation lines,
followed by a non-folded trailer to verify isolation between fields
- Forbidden trailer fields interleaved with a valid one, with a
forbidden field placed last to exercise the post-loop flush path

Result:

Chunked HTTP messages with folded trailer values are now decoded
correctly instead of throwing UnsupportedOperationException.

Co-authored-by: Furkan Varol <furkanvarol@users.noreply.github.com>
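The accumulation pattern described above can be sketched in isolation (illustrative names, not the decoder's actual internals): a continuation line beginning with SP or HT extends the pending trailer's value, and a header is only added once its full value is assembled, so no list returned by a headers view ever needs to be mutated.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch only: accumulate obs-fold continuation lines into the pending
// value and flush the completed header once, rather than mutating an
// already-added entry.
public class FoldedTrailers {
    static Map<String, String> parse(List<String> lines) {
        Map<String, String> trailers = new LinkedHashMap<>();
        String name = null;
        StringBuilder value = new StringBuilder();
        for (String line : lines) {
            char first = line.isEmpty() ? '\0' : line.charAt(0);
            if ((first == ' ' || first == '\t') && name != null) {
                // obs-fold: append the continuation to the pending value
                value.append(' ').append(line.trim());
                continue;
            }
            if (name != null) {
                trailers.put(name, value.toString()); // flush the completed header
            }
            int colon = line.indexOf(':');
            name = line.substring(0, colon).trim();
            value = new StringBuilder(line.substring(colon + 1).trim());
        }
        if (name != null) {
            trailers.put(name, value.toString());
        }
        return trailers;
    }

    public static void main(String[] args) {
        List<String> lines = List.of("X-Checksum: abc", "\tdef", "X-Other: 1");
        System.out.println(parse(lines)); // {X-Checksum=abc def, X-Other=1}
    }
}
```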
…ssage-deflate extension (netty#16435)

Auto-port of netty#16424 to 4.1
Cherry-picked commit: e216d30

---
Fixes netty#16005

Per RFC 7692 section 7.1.2.1, the client_max_window_bits parameter may
have a value or no value.

**Problem:**
- Server-side: Always ignored client's requested value, using
preferredClientWindowSize instead
- Client-side: Threw NumberFormatException when parameter had no value
(null)

**Changes:**
- PerMessageDeflateServerExtensionHandshaker: Parse
client_max_window_bits value when present
- PerMessageDeflateClientExtensionHandshaker: Handle null value
gracefully to avoid NumberFormatException

**Tests:**
- All existing tests pass (13 tests total)
- Added 4 new regression tests

---------

Co-authored-by: Nikita Nagar <138000433+nikitanagar08@users.noreply.github.com>
Co-authored-by: Norman Maurer <norman_maurer@apple.com>
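The tolerant parse can be sketched as follows (illustrative names and default, not the actual handshaker code): per RFC 7692 section 7.1.2.1, client_max_window_bits may appear with or without a value, so a null value must fall back to a default instead of reaching `Integer.parseInt`.

```java
// Sketch only: handle a client_max_window_bits parameter that may have
// no value, avoiding a NumberFormatException on null.
public class WindowBitsParam {
    static final int DEFAULT_WINDOW_SIZE = 15; // assumed default for illustration

    /** Returns the window size for a parameter value that may be null. */
    static int clientWindowBits(String value) {
        if (value == null) {
            return DEFAULT_WINDOW_SIZE; // parameter present without a value
        }
        return Integer.parseInt(value);
    }

    public static void main(String[] args) {
        System.out.println(clientWindowBits(null)); // 15, no NumberFormatException
        System.out.println(clientWindowBits("10")); // 10
    }
}
```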
…ls. (netty#16446)

Auto-port of netty#16442 to 4.1
Cherry-picked commit: 9f2900f

---
Motivation:

When accept4(...) is not present on the platform we fallback to using
normal accept(...) syscall. In this case we also need two extra syscalls
(fcntl). If one of the fcntl calls failed we did not close the previous
accepted fd and so leaked it.

Modifications:

Call close(...) before returning early

Result:

No more fd leak in case of fcntl failure

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
… fails and SO_ACCEPTFILTER is supported (netty#16448)

Auto-port of netty#16441 to 4.1
Cherry-picked commit: 38eee8d

---
Motivation:

We missed some NULL checks, which could cause undefined behaviour that
would most likely cause a crash. As this only happens when
SO_ACCEPTFILTER is supported and when GetStringUTFChars fails, it was
not possible to reach this on macOS.

Modifications:

Add missing NULL checks

Result:

No more undef behaviour

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
…ocket_setAcceptFilter(...) (netty#16459)

Auto-port of netty#16451 to 4.1
Cherry-picked commit: 9b09ab3

---
Motivation:

How we used strncat(...) was incorrect and could produce an overflow as
we did not take the null termination into account. We should rather use
strlcat(...), which is safer and less error-prone.

Modifications:

- Validate that we will not truncate and so might use the "incorrect
value"
- Use strlcat(...) and so correctly respect the null termination which
could cause an overflow before

Result:

Fix possible overflow on systems that support SO_ACCEPTFILTER

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
…ingUTFChars fails while open FD (netty#16456)

Auto-port of netty#16450 to 4.1
Cherry-picked commit: 8996631

---
Motivation:

We need to handle the case of GetStringUTFChars returning NULL as
otherwise we run into undefined behavior which will most likely cause a
crash

Modifications:

Return -ENOMEM in case of failure

Result:

No more undef behavior

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Auto-port of netty#16454 to 4.1
Cherry-picked commit: 342c1c0

---
Motivation:

We did not check for NULL when calling GetObjectArrayElement, which can
be returned if we try to access an element that is out of bounds. While
this should never happen, we should still guard against it to make
the code less error-prone.

Modifications:

Add NULL check

Result:

More correct code

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
…UDP to the same port (netty#16464)

Auto-port of netty#16455 to 4.1
Cherry-picked commit: 35a06f9

---
Motivation:

On Windows we sometimes see failures when we try to bind TCP and UDP to
the same port in our tests, even after multiple retries. To make the CI
more stable we should just skip the test in this case.

Modifications:

Just abort the test via an Assumption if we can't bind

Result:

More stable CI

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
…elen (netty#16467)

Auto-port of netty#16460 to 4.1
Cherry-picked commit: 29a5efc

---
Motivation:

We used an incorrect value for msg_namelen, which could in theory
result in an overflow if the kernel actually uses this value for the
last element.

Modifications:

Use correct value

Result:

No risk of overflow during recvmmsg

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
Auto-port of netty#16461 to 4.1
Cherry-picked commit: d97f5e6

---
Motivation:

We incorrectly used the return value of
netty_unix_socket_getOption(...,IP_RECVORIGDSTADDR) and so did not
correctly support IP_RECVORIGDSTADDR when using recvmmsg.

Modifications:

- Correctly use the return value and also validate that the socket
option is actually set before allocating the control message buffer.

Result:

Fix support of IP_RECVORIGDSTADDR when using recvmmsg

Co-authored-by: Norman Maurer <norman_maurer@apple.com>
…call in `capacity(int)` method (netty#16475)

Auto-port of netty#16473 to 4.1
Cherry-picked commit: 2ac5bf4

---
Motivation:

In `AdaptivePoolingAllocator.capacity(int)`, the call to
`checkNewCapacity(newCapacity)` already invokes `ensureAccessible()`
internally, making the explicit `ensureAccessible()` call redundant.

Modification:

Remove the redundant `ensureAccessible()` call from the `capacity(int)`
method.

Result:

Reducing unnecessary overhead.

Co-authored-by: old driver <29225782+laosijikaichele@users.noreply.github.com>
… multiple of 32 (netty#16497)

Auto-port of netty#16474 to 4.1
Cherry-picked commit: b029a2c

---
Motivation:

The assertion validating `SIZE_CLASSES` is a multiple of 32 used the
wrong bit-mask: `(sizeClass & 5) == 0`, so values like 10 will pass the
assertion.

Modification:

Changed to `(sizeClass & 31) == 0`.

Result:

Invalid size classes are now correctly caught when assertions are
enabled.

Co-authored-by: old driver <29225782+laosijikaichele@users.noreply.github.com>
Motivation:

We did not check that the provided byte[] will fit into tcpm_key, so it
might overflow.

Modifications:

Add a length check and throw if the key does not fit

Result:

No risk of overflow
…firing channelRead (netty#16532)

Auto-port of netty#16510 to 4.1
Cherry-picked commit: a7fbb6f

---
Motivation:

A recent change (9d804c5) changed
JdkZlibDecoder to fire
ctx.fireChannelRead() on every inflate iteration (~8KB output) when
maxAllocation is 0. For a typical ~150KB HTTP response this produces
~19 small buffer allocations and ~19 pipeline dispatches through the
internal EmbeddedChannel used by HttpContentDecoder, causing a 30-35%
throughput regression even in aggregated mode (where chunk count is
irrelevant downstream).

Modifications:

Accumulate decompressed output up to 64KB (DEFAULT_MAX_FORWARD_BYTES)
before firing ctx.fireChannelRead(). The buffer grows naturally via
prepareDecompressBuffer() until the threshold, then fires and starts a
new buffer. Any remaining data fires in the finally block as before.

Memory per in-flight buffer is bounded to 64KB regardless of the
compressed input size.

Result:

Throughput is restored to pre-regression levels. Chunks per response
drop from ~163 to ~6 for a 150KB payload.

Co-authored-by: Francesco Nigro <nigro.fra@gmail.com>
Motivation:

We should limit the number of continuation frames that the remote peer
is allowed to send per header block.

Modifications:

- Limit the number of continuation frames by default to 16 and allow the
user to change this.
- Add unit test

Result:

Perform more validation to guard against excessive resource usage

---------

Co-authored-by: Bryce Anderson <bl_anderson@apple.com>
Co-authored-by: Chris Vest <mr.chrisvest@gmail.com>
Motivation:
Chunk extensions can include quoted string values, which themselves can
include linebreaks and escapes for quotes. We need to parse these
properly to ensure we find the correct start of the chunk data.

Modification:
- Implement full RFC 9112 HTTP/1.1 compliant parsing of chunk start
lines.
- Add test cases from the Funky Chunks research:
https://w4ke.info/2025/10/29/funky-chunks-2.html
- This includes chunk extensions with quoted strings that have linebreaks
in them, and quoted strings that use escape codes.
- Remove a test case that asserted support for control characters in the
middle of chunk start lines, including after a naked chunk length field.
Such control characters are not permitted by the standard.

Result:
Prevents HTTP message smuggling through carefully crafted chunk
extensions.

* Revert the ByteProcessor changes

* Add a benchmark for HTTP/1.1 chunk decoding

* Fix chunk initial line decoding

The initial line was not correctly truncated at its line break and ended
up including some of the chunk contents.

* Failing to parse chunk size must throw NumberFormatException

* Line breaks are completely disallowed within chunk extensions

Change the chunk parsing back to its original code, because we know that
line breaks are not supposed to occur within chunk extensions at all.
This means doing the SWAR search should be suitable.

Modify the byte processor and add it as a validation step of the parsed
chunk start line.

Update the tests to match.

* Fix checkstyle

(cherry picked from commit 3b76df1)
[maven-release-plugin] copy for tag netty-4.1.132.Final