
[Multi-Tenancy Test] WireConnectionSharingInBenchmark #48131

Open
xinlian12 wants to merge 31 commits into Azure:main from xinlian12:wireConnectionSharingInBenchmark

Conversation


@xinlian12 xinlian12 commented Feb 26, 2026

Summary

Wire connectionSharingAcrossClientsEnabled through the benchmark harness and add Reactor Netty HTTP client connection pool metrics for multi-tenancy testing.

Code Changes

Benchmark Harness (azure-cosmos-benchmark)

New classes:

  • NettyHttpMetricsReporter - Samples Reactor Netty connection pool metrics to CSV

Wired new config fields:

  • connectionSharingAcrossClientsEnabled - CLI flag + tenants.json + CosmosClientBuilder
  • enableNettyHttpMetrics - CLI flag to enable Reactor Netty pool metrics

SDK Changes (azure-cosmos)

  • Configs.java: Add COSMOS.NETTY_HTTP_CLIENT_METRICS_ENABLED system property (default false)
  • HttpClient.java: Conditionally call ConnectionProvider.metrics(true) when enabled
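The two SDK changes above can be sketched together. The property and env-var names come from this PR, but the class shape and lookup logic below are illustrative, not the actual Configs.java/HttpClient.java code:

```java
// Sketch of the COSMOS.NETTY_HTTP_CLIENT_METRICS_ENABLED toggle described above.
// Property/env names are from the PR description; this minimal lookup logic is
// an assumption, not the real Configs.java implementation.
public class NettyMetricsToggle {
    static final String PROPERTY = "COSMOS.NETTY_HTTP_CLIENT_METRICS_ENABLED";
    static final String ENV_VAR = "COSMOS_NETTY_HTTP_CLIENT_METRICS_ENABLED";

    public static boolean isNettyMetricsEnabled() {
        String value = System.getProperty(PROPERTY);
        if (value == null) {
            value = System.getenv(ENV_VAR);   // env-var fallback
        }
        // Boolean.parseBoolean(null) is false, giving the documented default.
        return Boolean.parseBoolean(value);
    }

    public static void main(String[] args) {
        System.out.println(isNettyMetricsEnabled()); // false unless property/env set
        System.setProperty(PROPERTY, "true");
        System.out.println(isNettyMetricsEnabled()); // true
    }
}
```

The HttpClient side then only calls `ConnectionProvider.metrics(true)` when this returns true, so pool gauges stay off by default.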

Baseline Test Results

Test environment: Azure VM D16s_v5 (16 cores, 64 GB) in West US 2, same region as Cosmos DB accounts

Common parameters:

| Parameter | Value |
|---|---|
| Tenants | 50 distinct Cosmos DB accounts |
| Concurrency per tenant | 20 (total: 1,000) |
| Duration | 30 minutes per scenario |
| Connection mode | Gateway |
| Consistency | Session |
| Max connection pool size | 1,000 per client |
| HTTP/2 max concurrent streams | 30 (default) |
| HTTP/2 min connection pool size | 16 (default) |
| JVM | OpenJDK 21, -Xmx8g -Xms8g, G1GC |
| Cool-down between scenarios | 5 min + CPU settle |

ReadThroughput: HTTP/1.1 vs HTTP/2 x Isolated vs Shared

Measures aggregate ops/sec via Codahale Meter.

Throughput:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| Throughput (ops/s) | 68,631 | 70,107 | 56,752 | 56,132 |
| Total ops (30 min) | 123M | 125M | 103M | 102M |

Throughput over time (1-min rate, ops/s):
```mermaid
xychart-beta
    title "ReadThroughput: HTTP/1.1 vs HTTP/2 (ops/s, 1-min rate)"
    x-axis "Minutes" [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]
    y-axis "ops/sec" 40000 --> 75000
    line "H1.1" [55467,68904,70545,69989,69072,69141,69231,69159,68968,66870,65884,67137,67847,67957,68467]
    line "H2" [49116,56673,57755,57508,57254,57217,57164,57170,57183,57171,57214,57178,57244,57237,56784]
```

Resource utilization over time (ReadThroughput):

```mermaid
xychart-beta
    title "CPU Usage: HTTP/1.1 vs HTTP/2 (% of 16 cores)"
    x-axis "Minutes" [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]
    y-axis "CPU %" 0 --> 100
    line "H1.1" [0,77,83,86,89,93,93,94,95,94,95,95,95,97,97]
    line "H2" [5,62,72,77,81,84,87,89,91,93,93,94,94,94,95]
```

Resource Consumption:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| OS threads (peak, /proc) | 402 | 402 | 366 | 368 |
| Java threads (peak, ThreadMXBean) | 376 | 376 | 333 | 333 |
| File descriptors | 1,109 | 1,109 | 909 | 909 |
| RSS (peak) | 5,905 MB | 5,896 MB | 6,192 MB | 6,238 MB |
| CPU (16 cores) | 96.9% | 97.5% | 95.6% | 96.9% |
| Heap (peak) | 5,056 MB | 4,729 MB | 5,007 MB | 4,982 MB |
| GC count / time | 1,614 / 8,514 ms | 1,642 / 8,522 ms | 1,415 / 11,803 ms | 1,424 / 12,014 ms |

Connection Pool:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| TCP connections | 1,000 | 1,000 | 688 | 624 |
| Active connections | 676 | 665 | 737 | 795 |
| Idle connections | 339 | 327 | 430 | 491 |
| Active streams | N/A | N/A | 500 | 633 |
| Streams/connection | 1.0 | 1.0 | 0.68 | 0.80 |
| Pool slots | 100 | 100 | 100 | 100 |

Connection utilization over time (ReadThroughput, regional endpoints):

```mermaid
xychart-beta
    title "HTTP/1.1: Active vs Idle Connections (total=1000)"
    x-axis "Minutes" [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]
    y-axis "Connections" 0 --> 1100
    line "Active" [681,670,711,688,693,698,678,632,725,692,655,654,664,759,676]
    line "Idle" [321,323,267,340,295,302,344,377,295,289,338,356,297,244,339]
```
```mermaid
xychart-beta
    title "HTTP/2: Active Connections vs Active Streams (total TCP=800)"
    x-axis "Minutes" [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]
    y-axis "Count" 0 --> 850
    line "TCP conns" [720,800,800,800,800,800,800,800,800,800,800,800,800,800,800]
    line "H2 active conns" [0,218,216,210,228,204,236,255,165,183,208,227,201,198,220]
    line "Active streams" [19,765,734,642,672,706,736,761,753,719,792,728,742,720,690]
```

HTTP/2 dual-pool architecture: Reactor Netty exposes two pool layers per endpoint: a base ConnectionProvider pool (cosmos-pool-<host>) tracking TCP connections, and an HTTP/2 pool layer (http2.cosmos-pool-<host>) tracking stream-level usage. The base pool reports all open TCP connections as "active", while the H2 pool shows which connections carry active streams vs. sit idle. Because active.connections and idle.connections are reported by both layers, filter by pool name prefix to avoid double-counting.
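The prefix-filtering rule can be sketched with plain Java. The gauge samples here are hand-written stand-ins; real code would read `reactor.netty.connection.provider.*` gauges from the Micrometer registry, filtering on the pool-name tag:

```java
import java.util.List;

// Sketch: avoiding double-counting when both the base pool (cosmos-pool-<host>)
// and the HTTP/2 layer (http2.cosmos-pool-<host>) report active.connections.
// GaugeSample is a hypothetical stand-in for a Micrometer gauge reading.
public class PoolMetricsFilter {
    record GaugeSample(String metric, String poolName, double value) {}

    // Sum a metric over base pools only, skipping the http2.* layer.
    static double sumBaseLayer(List<GaugeSample> samples, String metric) {
        return samples.stream()
            .filter(s -> s.metric().equals(metric))
            .filter(s -> !s.poolName().startsWith("http2."))  // exclude H2 layer
            .mapToDouble(GaugeSample::value)
            .sum();
    }

    public static void main(String[] args) {
        List<GaugeSample> samples = List.of(
            new GaugeSample("active.connections", "cosmos-pool-acct-1", 16),
            new GaugeSample("active.connections", "http2.cosmos-pool-acct-1", 7),
            new GaugeSample("active.connections", "cosmos-pool-acct-2", 16),
            new GaugeSample("active.connections", "http2.cosmos-pool-acct-2", 5));
        // A naive sum over all samples would report 44 "active" connections;
        // restricting to the base layer gives the true TCP count of 32.
        System.out.println(sumBaseLayer(samples, "active.connections")); // 32.0
    }
}
```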

HTTP/2 Connection Metrics: Per-Account Breakdown (4 sample accounts, regional endpoints):

| Metric (pool layer) | acct-12101 | acct-12105 | acct-121025 | acct-121050 |
|---|---|---|---|---|
| TCP connections (base: total.connections) | 16 | 16 | 16 | 16 |
| Base active.connections | 16 | 16 | 16 | 16 |
| Base idle.connections | 0 | 0 | 0 | 0 |
| H2 active.connections (carrying streams) | 7 | 5 | 5 | 8 |
| H2 idle.connections (no active streams) | 9 | 11 | 12 | 10 |
| H2 active.streams | 19 | 13 | 14 | 16 |
| Streams/active connection | 2.7 | 2.6 | 2.8 | 2.0 |

All accounts show the same pattern: 16 TCP connections (= minConnectionPoolSize), 5-8 of them actively streaming, and 13-19 active streams (~2-3 streams/connection). The base pool reports all 16 as "active" (open TCP), while the H2 pool shows real stream-level usage. Multiplexing is happening, but at a low ratio; higher concurrency or fewer connections would increase stream density.

Thread Breakdown (Java threads only; mid-run snapshot via ThreadMXBean):

| Thread Group | H1.1 | H2 | Per Client? | Purpose |
|---|---|---|---|---|
| transport-response-bounded-elastic | 160 | 120 | ~3.2 / ~2.4 | Reactor response processing |
| tenant-worker | 50 | 50 | 1 | Benchmark thread pool |
| partition-availability-staleness-check | 50 | 50 | 1 | Circuit breaker health check |
| cosmos-daemon-cosmos-global-endpoint-mgr | 50 | 50 | 1 | Background endpoint refresh |
| reactor-http-epoll | 16 | 16 | Shared | Netty event loop (= CPU cores) |
| parallel | 16 | 16 | Shared | Reactor parallel scheduler |
| cosmos-parallel | 16 | 16 | Shared | Cosmos SDK scheduler |
| boundedElastic-evictor | 9 | 9 | Shared | Idle thread evictor |
| JVM + reporters | 6 | 6 | - | main, GC, metrics reporters |
| Total | 373 | 333 | | |

HTTP/2 vs HTTP/1.1: HTTP/2 throughput is 17% lower (56,752 vs 68,631 ops/s) with this config. HTTP/2 uses 31% fewer connections (688 vs 1,000), but the multiplexing benefit is minimal: only ~0.7 streams/connection (concurrency=20 spread across minPoolSize=16 pre-warmed connections). The HTTP/2 framing overhead (HPACK, flow control) outweighs the connection savings. GC time is 39% higher with HTTP/2 (11,803 vs 8,514 ms), suggesting more object allocation in the H2 codec path.

Connection sharing: Still no impact on connection count in either protocol. Pool slots remain 100 in all cases.


ReadLatency: HTTP/1.1 vs HTTP/2

Measures per-operation latency via Codahale Timer with HDR Histogram.

Latency Percentiles (ms):

| Percentile | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| P50 | 1.98 | 1.98 | - | - |
| P99 | 4.60 | 4.56 | 6.18 | 2.99 |

Throughput:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| Throughput (ops/s) | 9,424 | 9,459 | 8,408 | 9,457 |

Latency over time (ms):

```mermaid
xychart-beta
    title "ReadLatency P50: HTTP/1.1 vs HTTP/2 (ms)"
    x-axis "Minutes" [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]
    y-axis "P50 ms" 1.5 --> 2.5
    line "H1.1" [2.02,1.99,2.00,1.97,1.98,1.98,1.98,1.98,1.99,1.98,1.98,1.97,1.98,1.98,1.97]
    line "H2" [2.31,2.11,2.13,2.13,2.13,2.13,2.13,2.13,2.13,2.13,2.13,2.13,2.13,2.13,2.13]
```
```mermaid
xychart-beta
    title "ReadLatency P99: HTTP/1.1 vs HTTP/2 (ms)"
    x-axis "Minutes" [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]
    y-axis "P99 ms" 3 --> 8
    line "H1.1" [4.85,4.55,4.62,4.55,4.59,4.62,4.55,4.65,4.62,4.62,4.65,4.52,4.62,4.55,4.55]
    line "H2" [5.70,5.77,5.73,5.73,5.70,5.73,5.80,5.77,5.73,5.73,5.67,5.67,5.67,5.80,5.73]
```

Resource Consumption:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| OS threads (peak) | 265 | 272 | 276 | 264 |
| File descriptors | 1,109 | 1,109 | 909 | 909 |
| RSS (peak) | 5,812 MB | 5,778 MB | 6,097 MB | 5,996 MB |
| CPU (16 cores) | 15.6% | 15.0% | 16.9% | 18.1% |
| Heap (peak) | 4,789 MB | 4,880 MB | 5,133 MB | 5,135 MB |
| GC count / time | 231 / 630 ms | 233 / 650 ms | 222 / 789 ms | 247 / 643 ms |

Note: *Latency operations run ~7x slower than *Throughput due to a scheduler bottleneck in the benchmark harness (LatencySubscriber + Schedulers.parallel() with 16 threads shared across 50 tenants). Latency values are valid; throughput is artificially limited.


WriteThroughput: HTTP/1.1 vs HTTP/2 x Isolated vs Shared

Measures aggregate write ops/sec via Codahale Meter.

Throughput:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| Throughput (ops/s) | 68,642 | 69,502 | 54,782 | 55,526 |
| Total ops (30 min) | 123M | 123M | 98M | 100M |

Throughput over time (1-min rate, ops/s):

```mermaid
xychart-beta
    title "WriteThroughput: HTTP/1.1 vs HTTP/2 (ops/s)"
    x-axis "Minutes" [1,3,5,7,9,11,13,15,17,19,21,23,25,27,29]
    y-axis "ops/sec" 25000 --> 75000
    line "H1.1" [34511,63717,66585,66737,67877,68544,68763,68833,69028,69154,67873,68678,68903,68913,68853]
    line "H2" [28474,51141,53366,53708,54359,54755,54865,54788,54685,54810,54838,54768,54786,54791,54905]
```

Resource Consumption:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| OS threads (peak) | 400 | 408 | 358 | 357 |
| Java threads (peak) | 376 | 376 | 333 | 333 |
| File descriptors | 1,109 | 1,109 | 910 | 910 |
| RSS (peak) | 5,791 MB | 5,741 MB | 5,804 MB | 5,861 MB |
| CPU (16 cores) | 97.5% | 98.1% | 97.5% | 98.1% |
| Heap (peak) | 4,970 MB | 4,970 MB | 4,883 MB | 5,039 MB |
| GC count / time | 1,581 / 8,111 ms | 1,586 / 8,155 ms | 1,355 / 11,108 ms | 1,385 / 11,344 ms |

Connection Pool:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| TCP connections | 1,000 | 1,000 | 800 | 800 |
| Active connections | 779 (77.9%) | 800 (80.0%) | N/A | N/A |
| Idle connections | 225 | 227 | N/A | N/A |
| Active streams | N/A | N/A | 616 | 628 |
| Pool slots | 100 | 100 | 100 | 100 |

Write throughput is comparable to read (68,642 vs 68,631 for H1.1). HTTP/2 is ~20% slower (54,782 vs 68,642). Higher active connections (779 vs 676) indicate slightly longer server-side write processing. GC time is ~37% higher with HTTP/2 (11,108 vs 8,111 ms).


WriteLatency: HTTP/1.1 vs HTTP/2

Measures per-operation write latency via Codahale Timer with HDR Histogram.

Latency Percentiles (ms):

| Percentile | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| P50 | 4.95 | 5.15 | 5.32 | 5.81 |
| P99 | 8.05 | 8.07 | 8.85 | 9.28 |

Throughput:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| Throughput (ops/s) | 3,952 | 3,780 | 3,654 | 3,392 |

Resource Consumption:

| Metric | H1.1 Isolated | H1.1 Shared | H2 Isolated | H2 Shared |
|---|---|---|---|---|
| OS threads (peak) | 261 | 260 | 263 | 264 |
| File descriptors | 178 | 178 | 272 | 272 |
| RSS (peak) | 5,625 MB | 5,601 MB | 5,711 MB | 5,633 MB |
| CPU (16 cores) | 6.9% | 8.1% | 9.4% | 7.5% |
| Heap (peak) | 2,776 MB | 4,713 MB | 4,807 MB | 4,934 MB |
| GC count / time | 99 / 288 ms | 96 / 279 ms | 102 / 320 ms | 95 / 312 ms |

Note: *Latency throughput (~3,400-3,900 ops/s) is artificially low due to all 50 tenants sharing a single synchronized HdrHistogramResetOnSnapshotReservoir in the shared Dropwizard MetricRegistry. This has been fixed (per-tenant meter names) and will be re-tested. The latency percentile values (P50, P99) are still valid.
| Scenario | Status |
|---|---|
| ReadLatency (HTTP/2): Isolated vs Shared | Running |
| WriteThroughput (HTTP/2): Isolated vs Shared | Pending |
| WriteLatency (HTTP/2): Isolated vs Shared | Pending |


Key Findings

F7: Per-Client Thread Cost (~6.2 threads each)

| Thread Group | Count | Per Client? |
|---|---|---|
| cosmos-global-endpoint-mgr | 50 | 1/client, could share |
| partition-availability-staleness-check | 50 | 1/client, could share |
| transport-response-bounded-elastic | ~160 | ~3.2/client |

F8: Pre-Population Concurrency Fix

FDs: 5,100 -> 1,100. Connection utilization: 15% -> 67.6%. Throughput unchanged.

F9: Connection Pool Keyed by Hostname, Not IP

50 accounts resolve to only 4 IPs, yet the pool still allocates 100 slots because keys are hostname-based. As a result, connectionSharingAcrossClientsEnabled is a no-op for multi-account workloads.
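The gap between hostname-keyed and IP-keyed pooling can be shown with a small simulation. The hostname-to-IP mapping below is fabricated (round-robin over 4 IPs), not a real DNS lookup, and the account names are placeholders:

```java
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

// Sketch of finding F9: pool slots are keyed by hostname, so many accounts
// behind a handful of gateway IPs still produce one slot per hostname.
public class PoolKeying {
    public static void main(String[] args) {
        // Simulated: 50 account hostnames spread across 4 shared gateway IPs.
        Map<String, String> hostToIp = IntStream.range(0, 50).boxed()
            .collect(Collectors.toMap(
                i -> "acct-" + i + ".documents.azure.com",
                i -> "10.0.0." + (i % 4)));

        Set<String> hostnameKeys = hostToIp.keySet();        // today's pool keys
        Set<String> ipKeys = Set.copyOf(hostToIp.values());  // hypothetical A33 coalescing

        System.out.println(hostnameKeys.size()); // 50 slots when keyed by hostname
        System.out.println(ipKeys.size());       // 4 slots if coalesced by IP
    }
}
```

This is why A33 (IP-based pool coalescing) in the table below projects 100 slots collapsing to ~4.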

Future Optimization Opportunities

| Action | Description | Impact |
|---|---|---|
| A30-A32 | Share endpoint-mgr + staleness-check schedulers | -98 threads |
| A33 | IP-based pool coalescing | 100 slots -> ~4 |
| A34 | Fix metrics contention on query path | Unblock queries with metrics |
| A35 | Tune HTTP/2 pool size for multiplexing benefit | Fewer connections, same throughput |

Annie Liang and others added 26 commits February 24, 2026 11:39
Code changes:
- Add connectionSharingAcrossClientsEnabled field/getter/setter to TenantWorkloadConfig
- Add switch case in applyField() so tenants.json value is properly applied
- Add -connectionSharingAcrossClientsEnabled CLI parameter to Configuration (JCommander)
- Apply connectionSharingAcrossClientsEnabled on CosmosClientBuilder in AsyncBenchmark
- Wire through fromConfiguration() for legacy CLI path
- Add to toString() for debug visibility

Test plan updates:
- Expand S8 from 9 to 30 scenarios (3 protocols x 5 workloads x 2 sharing modes)
- Add ReadLatency and WriteLatency workloads
- Add isolated vs shared connection pool dimension
- Update operations per tenant to 1,000,000
- Add metrics catalog with availability status (available vs needs SDK change)
- Update execution runbook B13-B42 for 30-scenario matrix
- Update run-baseline-matrix.sh script for 30 scenarios
- F2: Align with analysis doc - IMDS client is now ephemeral, A25/A27 resolved
- F4: Add detailed before/after table for every A1/A2 resource claim
- F5: New finding - connectionSharingAcrossClientsEnabled was dead config (now fixed)
- F6: New finding - Reactor Netty pool metrics and H2 stream metrics are gaps
SDK change:
- Add fixedConnectionProviderBuilder.metrics(true) in HttpClient.createFixed()
  Emits reactor.netty.connection.provider.{total,active,idle,pending}.connections
  gauges tagged by remote.address (hostname:port) to Micrometer globalRegistry

Benchmark change:
- Add SimpleMeterRegistry to Metrics.globalRegistry so pool metrics are queryable
- Add logPoolMetrics() helper that logs all pool metrics at POST_CREATE and POST_WORKLOAD
  Shows remote.address tag to verify pooling is by hostname (not resolved IP)
- Isolated mode: pool name = 'cosmos-pool-<endpoint-host>'
- Shared mode: pool name = 'cosmos-shared-pool'
- Enables distinguishing pools by name in Reactor Netty metrics tags
…metrics export

- Build cosmosMicrometerRegistry once in run(), add to Metrics.globalRegistry
- Pass to prepareTenants() as parameter (no duplicate creation)
- Reactor Netty pool metrics now export to both SimpleMeterRegistry (local) and
  App Insights/Graphite (if configured) via globalRegistry
New class:
- NettyHttpMetricsReporter: periodically samples Reactor Netty connection pool
  metrics from Micrometer registry and writes to netty-pool-metrics.csv
- Columns: timestamp, metric, pool_id, pool_name, remote_address, value
- Started/stopped alongside the Dropwizard CsvReporter in BenchmarkOrchestrator

Cleanup:
- Remove logPoolMetrics() ad-hoc method and all its calls
- Remove SimpleMeterRegistry (cosmosMicrometerRegistry on globalRegistry is sufficient)
- Remove POOL_METRICS_TAGS debug dump
- Remove unused Gauge/Meter imports
SDK:
- Add COSMOS.NETTY_HTTP_CLIENT_METRICS_ENABLED system property (default false)
- Add COSMOS_NETTY_HTTP_CLIENT_METRICS_ENABLED env var fallback
- ConnectionProvider.metrics(true) only called when property is enabled
- Generic name allows enabling future Netty HTTP metrics beyond pool gauges

Benchmark:
- Add -enableNettyHttpMetrics CLI flag to Configuration
- Wire through BenchmarkConfig to BenchmarkOrchestrator
- Orchestrator sets system property before client creation
- NettyHttpMetricsReporter only starts when flag is enabled
- run-baseline-matrix.sh passes -enableNettyHttpMetrics
…pplyField)

Completes the http2Enabled wiring in TenantWorkloadConfig so it can be set
via tenants.json globalDefaults or per-tenant overrides. HTTP/2 can also be
enabled via -DCOSMOS.HTTP2_ENABLED=true system property (existing path).
The reporter was defined but never instantiated in the run() method.
Now creates and starts it before PRE_CREATE, stops it in cleanup.
…gauges

Without a backing registry in Metrics.globalRegistry, the CompositeMeterRegistry
registers gauges but Gauge.value() returns 0. Adding a SimpleMeterRegistry ensures
the gauges have actual storage for their values, making netty-pool-metrics.csv
report real connection counts.
…coded 100

Flux.merge concurrency during document pre-population was hardcoded to 100,
causing ~100 TCP connections to be opened per tenant regardless of the
configured concurrency (typically 20). Now uses min(cfg.getConcurrency(), 100)
so the number of pre-warmed connections matches the actual workload concurrency.
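The cap described in this commit reduces to one line of arithmetic. The method name and the surrounding harness wiring here are illustrative, not the actual AsyncBenchmark code:

```java
// Sketch of the pre-population concurrency fix: replace the hardcoded
// Flux.merge concurrency of 100 with a value derived from the tenant's
// configured concurrency. (Clamping to >= 1 guards a zero/negative config.)
public class PrePopConcurrency {
    static int prePopConcurrency(int configuredConcurrency) {
        return Math.max(1, Math.min(configuredConcurrency, 100));
    }

    public static void main(String[] args) {
        System.out.println(prePopConcurrency(20));  // typical tenant config -> 20
        System.out.println(prePopConcurrency(500)); // capped at old limit -> 100
        System.out.println(prePopConcurrency(0));   // clamped -> 1
    }
}
```

With concurrency=20, pre-population now opens ~20 connections per tenant instead of ~100, which is the FD drop (5,100 -> 1,100) reported in F8.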
Add .gitignore entries for:
- .github/agents/
- .github/skills/
- sdk/cosmos/azure-cosmos-benchmark/docs/
- sdk/cosmos/azure-cosmos-benchmark/scripts/

These are local-only files that should not be tracked in the repository.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Annie Liang and others added 3 commits February 25, 2026 22:41
All 50 tenants were sharing a single Timer with synchronized HdrHistogramResetOnSnapshotReservoir,
causing ~7x throughput reduction for *Latency operations (9,400 vs 68,600 ops/s).

Fix: prefix meter names with tenant ID (e.g., 'tenant-0.Latency') so each tenant
gets its own Timer instance. Throughput/failure Meters also prefixed for consistency.

Root cause: MetricRegistry.register('Latency', timer) registered ONE timer in the shared
registry. All 50 tenants' LatencySubscriber.hookOnComplete() called context.stop() which
serialized through the same synchronized reservoir.update() method.
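The naming fix can be sketched without Dropwizard. A plain map stands in for the shared MetricRegistry; the point is only the key scheme: registering "Latency" once yields a single synchronized reservoir shared by all tenants, while "tenant-<id>.Latency" yields one timer per tenant:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the per-tenant meter naming fix. The map is a stand-in for the
// shared Dropwizard MetricRegistry; Object stands in for a Timer.
public class TenantMeterNames {
    static final Map<String, Object> registry = new ConcurrentHashMap<>();

    static Object timerFor(int tenantId) {
        // Before the fix the key was just "Latency" -> one shared instance
        // (and one shared synchronized reservoir) for all 50 tenants.
        String name = "tenant-" + tenantId + ".Latency";
        return registry.computeIfAbsent(name, k -> new Object());
    }

    public static void main(String[] args) {
        for (int t = 0; t < 50; t++) timerFor(t);
        System.out.println(registry.size()); // 50 independent timers, no shared lock
    }
}
```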
@xinlian12 xinlian12 marked this pull request as ready for review February 27, 2026 17:08
Copilot AI review requested due to automatic review settings February 27, 2026 17:08
@xinlian12 xinlian12 requested review from a team and kirankumarkolli as code owners February 27, 2026 17:08

Copilot AI left a comment


Pull request overview

This PR wires connectionSharingAcrossClientsEnabled and a new Netty HTTP pool metrics toggle through the Cosmos multi-tenancy benchmark harness, and adds an SDK-side switch to enable Reactor Netty ConnectionProvider Micrometer metrics via a system property.

Changes:

  • Add COSMOS.NETTY_HTTP_CLIENT_METRICS_ENABLED config and conditionally enable Reactor Netty connection pool metrics in the Cosmos SDK HTTP client.
  • Extend the benchmark harness to support connectionSharingAcrossClientsEnabled and -enableNettyHttpMetrics, including a new CSV reporter for Netty pool gauges.
  • Adjust benchmark document pre-population merge concurrency to be capped by tenant concurrency.

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 5 comments.

| File | Description |
|---|---|
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/http/HttpClient.java | Enables ConnectionProvider.metrics(true) when the new config flag is set. |
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/clienttelemetry/ClientTelemetry.java | Consolidates IMDS failure debug logging into a single message. |
| sdk/cosmos/azure-cosmos/src/main/java/com/azure/cosmos/implementation/Configs.java | Adds COSMOS.NETTY_HTTP_CLIENT_METRICS_ENABLED system-property/env-var toggle. |
| sdk/cosmos/azure-cosmos-benchmark/src/main/java/com/azure/cosmos/benchmark/TenantWorkloadConfig.java | Adds per-tenant connectionSharingAcrossClientsEnabled config plumbing. |
| sdk/cosmos/azure-cosmos-benchmark/src/main/java/com/azure/cosmos/benchmark/NettyHttpMetricsReporter.java | New reporter that samples Reactor Netty pool gauges from Micrometer and writes CSV. |
| sdk/cosmos/azure-cosmos-benchmark/src/main/java/com/azure/cosmos/benchmark/Configuration.java | Adds CLI flags for connection sharing and Netty metrics enablement. |
| sdk/cosmos/azure-cosmos-benchmark/src/main/java/com/azure/cosmos/benchmark/BenchmarkOrchestrator.java | Turns on the system property and attaches a SimpleMeterRegistry for Netty gauge backing. |
| sdk/cosmos/azure-cosmos-benchmark/src/main/java/com/azure/cosmos/benchmark/BenchmarkConfig.java | Wires the new enableNettyHttpMetrics setting into the internal config model. |
| sdk/cosmos/azure-cosmos-benchmark/src/main/java/com/azure/cosmos/benchmark/AsyncBenchmark.java | Wires connection sharing into the client builder and changes pre-pop merge concurrency selection. |

Annie Liang and others added 2 commits February 27, 2026 09:29
- Move NETTY_HTTP_CLIENT_METRICS_ENABLED system property into setGlobalSystemProperties
- Wrap run() lifecycle in try/finally to ensure cleanup on exceptions
- Stop NettyHttpMetricsReporter and remove SimpleMeterRegistry in cleanup
- Guard against zero prePopConcurrency by clamping to 1 and skipping empty list
- Log IOException with full stack trace in NettyHttpMetricsReporter
- Reword IMDS metadata debug message to avoid definitive claim

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
This is an orchestrator-level JVM-global system property, not a per-tenant config.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
```java
 *
 * <p>CSV columns: timestamp, metric, pool_id, pool_name, remote_address, value</p>
 */
public class NettyHttpMetricsReporter {
```


Can you please also add a way to report accumulated results/stats into the Cosmos reporter. It makes it so much easier to produce reports when running different test cases repeatedly - manually merging/combining CSVs is just a headache - so, reporting min/max/average of values across the lifecycle of the process or similar. This exists for system CPU and memory already - could follow a similar pattern.


@FabianMeiswinkel FabianMeiswinkel left a comment


LGTM except for one small ask.
