
Early yield on 429 throttling on barrier requests #48914

Open
mbhaskar wants to merge 2 commits into Azure:main from mbhaskar:early-yield-on-429-throttling

Conversation

@mbhaskar

Description

This PR introduces early yield on 429s during barrier requests.

All SDK Contribution checklist:

  • The pull request does not introduce [breaking changes]
  • CHANGELOG is updated for new features, bug fixes or other significant changes.
  • I have read the contribution guidelines.

General Guidelines and Best Practices

  • Title of the pull request is clear and informative.
  • There are a small number of commits, each of which has an informative message. This means that previously merged commits do not appear in the history of the PR. For more information on cleaning up the commits in your PR, see this page.

Testing Guidelines

  • Pull request includes test coverage for the included changes.

mbhaskar and others added 2 commits April 20, 2026 15:28
Port of .NET PR #1667829: When receiving repeated 429 (Too Many Requests)
responses with strong consistency, QuorumReader and ConsistencyWriter now
handle throttling more efficiently.

QuorumReader (reads):
- waitForReadBarrierAsync: yield early when all replicas return 429 in both
  single-region and multi-region barrier loops
- ensureQuorumSelectedStoreResponse: yield early when all replicas throttled
  during initial quorum read
- All cases throw the 429 exception to let ResourceThrottleRetryPolicy
  handle retry with appropriate backoff

ConsistencyWriter (writes):
- waitForWriteBarrierAsync: track lastAttemptWasThrottled flag per iteration
- Do NOT yield early (preserves idempotency guarantees)
- When all retries exhausted due to consistent throttling, throw
  RequestTimeoutException (408) with substatus SERVER_WRITE_BARRIER_THROTTLED
  (21013) instead of returning barrier-not-met

Other changes:
- Added isThrottledException field to StoreResult
- Added SERVER_WRITE_BARRIER_THROTTLED (21013) substatus code
- Unit tests for all throttling scenarios
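The read-side change above boils down to a single predicate over the collected replica results. A minimal, self-contained sketch of that idea (the class and field names below are simplified stand-ins, not the SDK's actual StoreResult/QuorumReader API):

```java
import java.util.List;

// Simplified stand-in for the SDK's StoreResult; only the new flag is modeled.
final class StoreResultSketch {
    final int statusCode;
    // Mirrors the new StoreResult.isThrottledException flag: true for HTTP 429.
    final boolean isThrottledException;

    StoreResultSketch(int statusCode) {
        this.statusCode = statusCode;
        this.isThrottledException = (statusCode == 429);
    }
}

final class EarlyYieldSketch {
    // Reads: yield early when every collected replica result is throttled, so
    // ResourceThrottleRetryPolicy can back off instead of the reader attempting
    // a pointless primary read or further barrier loops.
    static boolean shouldYieldEarlyForRead(List<StoreResultSketch> results) {
        return !results.isEmpty()
            && results.stream().allMatch(r -> r.isThrottledException);
    }
}
```

Note the empty-list guard: if StoreReader filtered out every result, there is nothing to conclude from, and the normal quorum-not-selected flow applies.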

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…ed replica yields early

Port of .NET test ValidatesReadMultipleReplicaAsyncExcludesGoneReplicas.
Validates that when replicas return a mix of 410 (Gone) and 429 (TooManyRequests):
- Gone replicas are excluded from results by StoreReader (isValid=false for GONE)
- The 429 replica with valid LSN headers is kept (isValid=true for non-GONE with lsn>=0)
- Since all remaining replicas are throttled, early yield triggers
- The 429 exception propagates to ResourceThrottleRetryPolicy

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 23, 2026 18:04
@mbhaskar mbhaskar requested review from a team and kirankumarkolli as code owners April 23, 2026 18:04
@mbhaskar mbhaskar changed the title Early yield on 429 throttling Early yield on 429 throttling on barrier requests Apr 23, 2026

Copilot AI left a comment


Pull request overview

This PR updates Cosmos direct connectivity quorum/barrier logic to “yield early” when replica reads are uniformly throttled (HTTP 429), allowing the existing ResourceThrottleRetryPolicy to apply appropriate backoff instead of progressing into additional quorum/primary/barrier attempts.

Changes:

  • Add StoreResult.isThrottledException to cheaply detect 429 responses.
  • In QuorumReader, propagate 429 immediately when all collected replica results are throttled (including barrier paths).
  • In ConsistencyWriter, track throttling during write barriers and, when retries are exhausted and the last attempt was fully throttled, throw a RequestTimeoutException with a new substatus code; add unit tests for the new behaviors.
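The write-side behavior (no early yield, but a distinct failure once retries are exhausted under throttling) can be sketched as a plain retry loop. The class and exception names below are illustrative stand-ins for ConsistencyWriter's internals; only the substatus value 21013 (SERVER_WRITE_BARRIER_THROTTLED) comes from this PR:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntPredicate;

final class WriteBarrierSketch {
    static final int SERVER_WRITE_BARRIER_THROTTLED = 21013; // new substatus from this PR

    static final class BarrierTimeoutException extends RuntimeException {
        final int statusCode = 408;
        final int subStatusCode = SERVER_WRITE_BARRIER_THROTTLED;
    }

    // attemptAllThrottled.test(i): did attempt i see only 429s?
    // attemptBarrierMet.test(i): did attempt i observe the barrier LSN?
    static boolean waitForWriteBarrier(int maxRetries,
                                       IntPredicate attemptAllThrottled,
                                       IntPredicate attemptBarrierMet) {
        AtomicInteger retriesLeft = new AtomicInteger(maxRetries);
        AtomicBoolean lastAttemptWasThrottled = new AtomicBoolean(false);
        for (int attempt = 0; ; attempt++) {
            // Writes never yield early on 429 (idempotency); record the flag instead.
            lastAttemptWasThrottled.set(attemptAllThrottled.test(attempt));
            if (attemptBarrierMet.test(attempt)) {
                return true;
            }
            if (retriesLeft.decrementAndGet() == 0) {
                if (lastAttemptWasThrottled.get()) {
                    // Retries exhausted while throttled: surface 408/21013
                    // instead of the generic barrier-not-met result.
                    throw new BarrierTimeoutException();
                }
                return false; // ordinary barrier-not-met
            }
        }
    }
}
```

This also makes the reviewer's stale-flag concern concrete: any path that skips the `lastAttemptWasThrottled.set(...)` call for an attempt leaves the previous attempt's value in place.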

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.

Show a summary per file
File Description
sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/directconnectivity/StoreResult.java Adds a computed flag to identify throttling (429) on replica results.
sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/directconnectivity/QuorumReader.java Early-yields on replica-wide throttling to let throttle retry policy handle backoff.
sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/directconnectivity/ConsistencyWriter.java Tracks throttling during write barriers and surfaces a distinct timeout substatus when retries are exhausted.
sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/HttpConstants.java Introduces a new substatus code for write-barrier throttling exhaustion.
sdk/cosmos/azure-cosmos-tests/src/test/java/com/azure/cosmos/implementation/directconnectivity/QuorumReaderTest.java Adds unit tests covering 429 propagation and Gone+429 interactions.
sdk/cosmos/azure-cosmos-tests/src/test/java/com/azure/cosmos/implementation/directconnectivity/ConsistencyWriterTest.java Adds unit tests for write-barrier behavior under sustained throttling and mixed outcomes.

Comment on lines +521 to +522
logger.warn("ConsistencyWriter: Write barrier failed after all retries due to consistent "
+ "throttling (429). Throwing RequestTimeoutException (408).");

Copilot AI Apr 23, 2026


The warning message says the barrier failed due to "consistent throttling", but the condition only tracks whether the last attempt saw all replicas throttled. Either adjust the wording to reflect "last attempt was throttled" or track throttling across all attempts if you want to assert consistency.

Suggested change
logger.warn("ConsistencyWriter: Write barrier failed after all retries due to consistent "
+ "throttling (429). Throwing RequestTimeoutException (408).");
logger.warn("ConsistencyWriter: Write barrier failed after all retries; last attempt was "
+ "throttled (429). Throwing RequestTimeoutException (408).");

Comment on lines +657 to +702
@Test(groups = "unit")
public void readStrong_AllReplicasThrottled_Returns429() {
// When all replicas return 429 during initial quorum read,
// the reader should propagate the 429 exception to let ResourceThrottleRetryPolicy handle it.
int replicaCountToRead = 2;

ISessionContainer sessionContainer = Mockito.mock(ISessionContainer.class);
Uri primaryReplicaURI = Uri.create("primary");
ImmutableList<Uri> secondaryReplicaURIs = ImmutableList.of(Uri.create("secondary1"), Uri.create("secondary2"), Uri.create("secondary3"));
AddressSelectorWrapper addressSelectorWrapper = AddressSelectorWrapper.Builder.Simple.create()
.withPrimary(primaryReplicaURI)
.withSecondary(secondaryReplicaURIs)
.build();

RequestRateTooLargeException throttleException = new RequestRateTooLargeException();

TransportClientWrapper transportClientWrapper = TransportClientWrapper.Builder.uriToResultBuilder()
.exceptionOn(primaryReplicaURI, OperationType.Read, ResourceType.Document, throttleException, true)
.exceptionOn(secondaryReplicaURIs.get(0), OperationType.Read, ResourceType.Document, throttleException, true)
.exceptionOn(secondaryReplicaURIs.get(1), OperationType.Read, ResourceType.Document, throttleException, true)
.exceptionOn(secondaryReplicaURIs.get(2), OperationType.Read, ResourceType.Document, throttleException, true)
.build();

RxDocumentServiceRequest request = RxDocumentServiceRequest.createFromName(mockDiagnosticsClientContext(),
OperationType.Read, "/dbs/db/colls/col/docs/docId", ResourceType.Document);

request.requestContext = new DocumentServiceRequestContext();
request.requestContext.timeoutHelper = Mockito.mock(TimeoutHelper.class);
request.requestContext.resolvedPartitionKeyRange = Mockito.mock(PartitionKeyRange.class);
request.requestContext.requestChargeTracker = new RequestChargeTracker();

StoreReader storeReader = new StoreReader(transportClientWrapper.transportClient, addressSelectorWrapper.addressSelector, sessionContainer);
GatewayServiceConfigurationReader serviceConfigurator = Mockito.mock(GatewayServiceConfigurationReader.class);
IAuthorizationTokenProvider authTokenProvider = Mockito.mock(IAuthorizationTokenProvider.class);
QuorumReader quorumReader = new QuorumReader(mockDiagnosticsClientContext(), configs, transportClientWrapper.transportClient, addressSelectorWrapper.addressSelector, storeReader, serviceConfigurator, authTokenProvider);

Mono<StoreResponse> storeResponseSingle = quorumReader.readStrongAsync(mockDiagnosticsClientContext(), request, replicaCountToRead, ReadMode.Strong);

StepVerifier.create(storeResponseSingle)
.expectErrorSatisfies(error -> {
assertThat(error).isInstanceOf(RequestRateTooLargeException.class);
RequestRateTooLargeException rte = (RequestRateTooLargeException) error;
assertThat(rte.getStatusCode()).isEqualTo(HttpConstants.StatusCodes.TOO_MANY_REQUESTS);
})
.verify(Duration.ofMillis(10000));
}

Copilot AI Apr 23, 2026


These new tests assert that a 429 is propagated, but they don't assert the key behavioral change described in this PR (early-yield to avoid the follow-up primary read attempt / extra barrier looping). Consider also validating the AddressSelector/TransportClient invocation counts (as other tests in this file do) to ensure primary resolution/read isn't invoked when all replicas are throttled.

Copilot generated this review using guidance from repository custom instructions.
Comment on lines +439 to +444
// Check if all replicas returned 429 Too Many Requests.
// Yield early to let ResourceThrottleRetryPolicy handle the retry with appropriate backoff,
// instead of returning QuorumNotSelected which would trigger an unnecessary primary read attempt.
if (!responseResult.isEmpty() && responseResult.stream().allMatch(r -> r.isThrottledException)) {
logger.info("QuorumReader: ensureQuorumSelectedStoreResponse - All replicas returned 429 Too Many Requests. "
+ "Yielding early to ResourceThrottleRetryPolicy.");

Copilot AI Apr 23, 2026


The log/comment says "All replicas returned 429", but this call reads only readQuorum replicas (not necessarily all replicas). Consider adjusting wording to "all contacted/selected replicas" to avoid misleading diagnostics.

Suggested change
// Check if all replicas returned 429 Too Many Requests.
// Yield early to let ResourceThrottleRetryPolicy handle the retry with appropriate backoff,
// instead of returning QuorumNotSelected which would trigger an unnecessary primary read attempt.
if (!responseResult.isEmpty() && responseResult.stream().allMatch(r -> r.isThrottledException)) {
logger.info("QuorumReader: ensureQuorumSelectedStoreResponse - All replicas returned 429 Too Many Requests. "
+ "Yielding early to ResourceThrottleRetryPolicy.");
// Check if all contacted replicas returned 429 Too Many Requests.
// Yield early to let ResourceThrottleRetryPolicy handle the retry with appropriate backoff,
// instead of returning QuorumNotSelected which would trigger an unnecessary primary read attempt.
if (!responseResult.isEmpty() && responseResult.stream().allMatch(r -> r.isThrottledException)) {
logger.info("QuorumReader: ensureQuorumSelectedStoreResponse - All contacted replicas returned "
+ "429 Too Many Requests. Yielding early to ResourceThrottleRetryPolicy.");

Comment on lines +564 to +570
// Track whether all replicas returned 429 Too Many Requests.
// For writes, we do NOT yield early - continue retries to preserve idempotency.
// If all retries are exhausted due to consistent throttling, we throw 408 (RequestTimeout).
if (responses != null && !responses.isEmpty()
&& responses.stream().allMatch(r -> r.isThrottledException)) {
logger.info("ConsistencyWriter: waitForWriteBarrierAsync - All replicas returned 429 Too Many Requests. "
+ "Continuing retries.");

Copilot AI Apr 23, 2026


This log says "All replicas returned 429", but this barrier read is configured with replicaCountToRead = 1 and forceReadAll = false, so at most one replica is contacted per attempt. Consider rewording to reflect that the contacted replica(s) were throttled.

Suggested change
// Track whether all replicas returned 429 Too Many Requests.
// For writes, we do NOT yield early - continue retries to preserve idempotency.
// If all retries are exhausted due to consistent throttling, we throw 408 (RequestTimeout).
if (responses != null && !responses.isEmpty()
&& responses.stream().allMatch(r -> r.isThrottledException)) {
logger.info("ConsistencyWriter: waitForWriteBarrierAsync - All replicas returned 429 Too Many Requests. "
+ "Continuing retries.");
// Track whether all contacted replica responses for this attempt returned 429 Too Many Requests.
// For writes, we do NOT yield early - continue retries to preserve idempotency.
// If all retries are exhausted due to consistent throttling, we throw 408 (RequestTimeout).
if (responses != null && !responses.isEmpty()
&& responses.stream().allMatch(r -> r.isThrottledException)) {
logger.info("ConsistencyWriter: waitForWriteBarrierAsync - All contacted replicas returned "
+ "429 Too Many Requests for this attempt. Continuing retries.");

return Mono.error(responseResult.get(0).getException());
}

int responseCount= (int) responseResult.stream().filter(response -> response.isValid).count();

Copilot AI Apr 23, 2026


WhitespaceAround checkstyle: add spaces around the '=' operator (e.g., int responseCount = ...). As written, this will likely fail checkstyle and break the build.

Suggested change
int responseCount= (int) responseResult.stream().filter(response -> response.isValid).count();
int responseCount = (int) responseResult.stream().filter(response -> response.isValid).count();

Comment on lines +806 to 807
long maxGlobalCommittedLsnInResponses= responses.size() > 0 ? responses.stream()
.mapToLong(response -> response.globalCommittedLSN).max().getAsLong() : 0;

Copilot AI Apr 23, 2026


WhitespaceAround checkstyle: add spaces around the '=' operator (e.g., long maxGlobalCommittedLsnInResponses = ...). As written, this will likely fail checkstyle and break the build.

Suggested change
long maxGlobalCommittedLsnInResponses= responses.size() > 0 ? responses.stream()
.mapToLong(response -> response.globalCommittedLSN).max().getAsLong() : 0;
long maxGlobalCommittedLsnInResponses = responses.size() > 0 ? responses.stream()
.mapToLong(response -> response.globalCommittedLSN).max().getAsLong() : 0;

// For writes, we do NOT yield early - continue retries to preserve idempotency.
// If all retries are exhausted due to consistent throttling, we throw 408 (RequestTimeout).
if (responses != null && !responses.isEmpty()
&& responses.stream().allMatch(r -> r.isThrottledException)) {

@xinlian12 xinlian12 Apr 23, 2026


🔴 Blocking · Correctness: Stale State

lastAttemptWasThrottled not reset when avoidQuorumSelection path bypasses throttle tracking

The isAvoidQuorumSelectionStoreResult branch (lines 548–562) returns early without updating lastAttemptWasThrottled. If a prior iteration set this flag to true (all replicas returned 429), and a subsequent iteration enters the avoidQuorum path (replica returns 410/LeaseNotFound), the flag remains stale.

When isBarrierMeetPossibleInPresenceOfAvoidQuorumSelectionException returns Mono.empty() (continue loop) and retries exhaust, the check at line 520 sees lastAttemptWasThrottled == true and throws RequestTimeoutException(408, 21013) instead of the correct behavior of returning falseGoneException.

Concrete scenario:

  1. Iteration N: all replicas return 429 → lastAttemptWasThrottled = true; retryCount decremented
  2. Iteration N+1: a replica returns 410/LeaseNotFound → avoidQuorum path fires, lastAttemptWasThrottled NOT reset
  3. isBarrierMeetPossibleInPresenceOfAvoidQuorumSelectionException → primary check returns insufficient LSN → Mono.empty() → loop continues
  4. Retries exhaust → incorrect 408/21013 instead of the normal barrier-not-met flow

Suggested fix: Reset the flag before the avoidQuorum early return:

if (isAvoidQuorumSelectionStoreResult) {
    writeBarrierRetryCount.decrementAndGet();
    lastAttemptWasThrottled.set(false);  // This attempt was not a throttle
    return this.isBarrierMeetPossibleInPresenceOfAvoidQuorumSelectionException(...);
}

⚠️ AI-generated review — may be incorrect. Agree? → resolve the conversation. Disagree? → reply with your reasoning.

}

if (writeBarrierRetryCount.get() == 0) {
if (lastAttemptWasThrottled.get()) {

@xinlian12 xinlian12 Apr 23, 2026


🟡 Recommendation · Correctness: Behavioral Change

Write barrier throttle failure changes exception type from retryable to non-retryable

Before this PR, when write barrier retries were exhausted (for any reason), waitForWriteBarrierAsync returned false, and the caller in barrierForWriteRequests (lines 419–427) threw GoneException (substatus GLOBAL_STRONG_WRITE_BARRIER_NOT_MET). GoneException IS retried by GoneAndRetryWithRetryPolicy.

With this change, throttle-specific exhaustion throws RequestTimeoutException(408, 21013) directly, which is not retried by GoneAndRetryWithRetryPolicy (isNonRetryableException() returns true for RequestTimeoutException).

This is likely the intended behavior (avoid re-executing committed writes), but it is a significant behavioral change:

  • Customers under heavy throttling will now see terminal 408 errors instead of automatic retry
  • The .NET SDK (azure-cosmos-dotnet-v3#5155) takes the opposite approach — yielding early with 429 (which IS retried by the throttle policy)

Please confirm this is intentional. If so, consider documenting the new substatus code 21013 so customers can handle it in their retry logic. Also consider whether ClientRetryPolicy.shouldRetryOnRequestTimeout should avoid marking the endpoint as unavailable for this specific substatus (since it's a transient throttle issue, not an endpoint problem).



}

@Test(groups = "unit")
public void readStrong_AllReplicasThrottled_Returns429() {

@xinlian12 xinlian12 Apr 23, 2026


🟡 Recommendation · Test Coverage: Missing Path

waitForReadBarrierAsync throttle paths have zero test coverage

Both new tests (readStrong_AllReplicasThrottled_Returns429 and readStrong_GoneReplicasExcluded_ThrottledReplicaYieldsEarly) only exercise the ensureQuorumSelectedStoreResponse path (line 442). The two waitForReadBarrierAsync throttle checks (single-region at line 705 and multi-region at line 800) are structurally unreachable from these tests because when all replicas are throttled in ensureQuorumSelectedStoreResponse, the error short-circuits before any barrier is needed.

To reach waitForReadBarrierAsync, quorum selection must first succeed (two replicas agree on an LSN), then the subsequent barrier HEAD requests must return 429.

Suggested approach: Add a test similar to the existing readStrong_OnlySecondary_RequestBarrier_Success but where barrier HEAD requests return RequestRateTooLargeException on all replicas. Configure separate transport client responses for OperationType.Read/ResourceType.Document (successful quorum) and OperationType.Head/ResourceType.DocumentCollection (all 429s).
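The suggested wiring can be sketched as a toy response router keyed on operation and resource type; in the real test, TransportClientWrapper plays this role, and the names below are illustrative only:

```java
import java.util.Map;

// Sketch of the per-operation routing the suggested test needs: the initial
// quorum read (Read/Document) succeeds so the reader reaches the barrier
// phase, while every barrier HEAD (Head/DocumentCollection) is throttled,
// which is what would exercise the waitForReadBarrierAsync early-yield path.
final class BarrierTestRoutingSketch {
    static int respond(String operationType, String resourceType) {
        Map<String, Integer> routes = Map.of(
            "Read/Document", 200,             // quorum selection succeeds
            "Head/DocumentCollection", 429);  // all barrier HEADs throttled
        return routes.getOrDefault(operationType + "/" + resourceType, 404);
    }
}
```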



if (!responseResult.isEmpty() && responseResult.stream().allMatch(r -> r.isThrottledException)) {
logger.info("QuorumReader: ensureQuorumSelectedStoreResponse - All replicas returned 429 Too Many Requests. "
+ "Yielding early to ResourceThrottleRetryPolicy.");
return Mono.error(responseResult.get(0).getException());

@xinlian12 xinlian12 Apr 23, 2026


🟢 Suggestion · Correctness: Exception Selection

Exception selection uses first response without considering retryAfterInMs

responseResult.get(0).getException() propagates the exception from the first (arbitrary) replica. Different replicas may return different Retry-After header values. ResourceThrottleRetryPolicy uses this value for backoff timing. Picking the first response could use a too-short backoff, causing the retry to be immediately throttled again.

Suggested improvement:

StoreResult maxRetryResult = responseResult.stream()
    .max(Comparator.comparingDouble(r -> r.retryAfterInMs != null ? r.retryAfterInMs : 0))
    .get();
return Mono.error(maxRetryResult.getException());

The same pattern applies at lines 708 and 803.



public static final int SERVER_GENERATED_408 = 21010;
public static final int FAILED_TO_PARSE_SERVER_RESPONSE = 21011;
public static final int GLOBAL_N_REGION_COMMIT_WRITE_BARRIER_NOT_MET = 21012;
public static final int SERVER_WRITE_BARRIER_THROTTLED = 21013;

@xinlian12 xinlian12 Apr 23, 2026


💬 Observation · Cross-SDK: New Substatus Code

SERVER_WRITE_BARRIER_THROTTLED (21013) is Java-specific

This substatus code does not exist in the .NET SDK. The .NET SDK's equivalent PR (azure-cosmos-dotnet-v3#5155) takes a different approach — it yields early with the 429 response directly rather than introducing a new substatus. Python and Rust SDKs don't have direct connectivity.

If cross-SDK observability consistency is a goal (customers monitoring substatus codes across SDKs), consider coordinating with the .NET team on whether this code should be adopted.



@xinlian12

@sdkReviewAgent

}

@Test(groups = "unit")
public void readStrong_AllReplicasThrottled_Returns429() {


🔴 Blocking · Test Correctness: False Positive Test

readStrong_AllReplicasThrottled_Returns429 does not exercise the early yield code path

This test passes, but it does NOT validate the early yield mechanism added in this PR. Here's why:

new RequestRateTooLargeException() creates an exception with no response headers (and therefore no LSN). In StoreReader.createStoreResult() (line 1016), the LSN defaults to -1 when headers are missing. The isValid formula becomes:

isValid = !requiresValidLsn || ((429 != GONE) && lsn >= 0)
       = !true || (true && false)
       = false

Since isValid = false and the exception is not isThroughputControlRequestRateTooLargeException, StoreReader filters out all 429 results (line 313, 328). The responseResult list arriving at ensureQuorumSelectedStoreResponse is empty, so !responseResult.isEmpty() is false and the early yield at line 442 is skipped entirely.

The flow instead goes: responseCount = 0 < readQuorum = 2QuorumNotSelectedreadPrimaryAsync → primary also returns 429 with isValid=falseMono.error(exception) at line 535. The test sees a 429, but via the wrong path.

If the early yield code were deleted, this test would still pass. Regressions in the early yield logic would go undetected.

Fix: Set LSN on the exception, like the second test does:

RequestRateTooLargeException throttleException = new RequestRateTooLargeException();
BridgeInternal.setLSN(throttleException, 50);
BridgeInternal.setPartitionKeyRangeId(throttleException, "1");

This makes isValid = true, the results pass StoreReader's filter, and the early yield actually fires. Also add transport invocation count assertions (as the existing @copilot comment suggests) to verify the primary read is NOT attempted.



@xinlian12

Review complete (49:16)

Posted 1 inline comment(s).

Steps: ✓ context, correctness, cross-sdk, design, history, past-prs, synthesis, test-coverage
