feat(umami-postgres): keploy compat lane sample (smoke-test only)#96

Open
AkashKumar7902 wants to merge 6 commits into main from feat/keploy-compat-lanes-rollout

Conversation

Contributor

@AkashKumar7902 AkashKumar7902 commented May 1, 2026

Summary

Adds a new umami-postgres/ sample that owns end-to-end orchestration (compose / admin bootstrap / traffic / noise filter) for the umami v2 + postgres compat lane. The keploy/enterprise CI lane consumes it as a thin wrapper.

The sample drives the full umami v2 API surface keploy needs to gate on a record/replay round-trip — auth + me + admin lists, users CRUD, websites CRUD, all eight report types, share tokens + public share access, batch + identify event ingest, sessions deep-dive, replays, boards lifecycle, pixel tracker, metric/pageview parser-branch variants, and logout. In total, umami_record_traffic fires 78 distinct (method, path) tuples.

Layout

umami-postgres/
├── Dockerfile             # FROM ghcr.io/umami-software/umami:postgresql-v2.18.1
├── docker-compose.yml     # postgres-15 + umami v2 on a fixed subnet, env-driven
├── flow.sh                # bootstrap | record-traffic | coverage (no-op)
├── keploy.yml.template    # globalNoise for createdAt/updatedAt/Date/uuid id fields
└── README.md              # contract + run modes

Coverage status

This sample does not ship a coverage gate, intentionally.

The upstream ghcr.io/umami-software/umami:postgresql-v2.18.1 image ships a compiled + minified Next.js standalone build with no source tree (/app/src) or sourcemaps. V8 / c8 line coverage on minified output doesn't map back to anything a reviewer can act on (one minified line = many source statements concatenated by the bundler), so a coverage gate would be misleading.

flow.sh coverage is a no-op stub that prints an INFO message and exits 0 — so consumers' flow.sh coverage || true calls keep working.
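A minimal sketch of such a stub (the function name and message text here are illustrative, not the literal flow.sh contents):

```shell
# Illustrative no-op coverage stub: always exits 0 so wrapper pipelines
# that call `flow.sh coverage || true` keep working unchanged.
umami_report_coverage() {
  echo "INFO: coverage is a no-op for this sample; the upstream image ships a minified build with no sources or sourcemaps" >&2
  return 0
}
```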

If real source-line coverage becomes a hard requirement for this sample, the path is to rebuild umami from its own source (npm install + next build without minification, with sourcemaps) inside a Dockerfile.coverage overlay — a separate, larger change (~5-10 min added to CI per cell).

The keploy/enterprise compat lane uses the resulting record/replay assertions as its correctness gate — that IS the meaningful test of keploy here, not source coverage of umami's frontend.

Run modes

  • Smoke check (without keploy): docker compose up -d && bash flow.sh bootstrap 240 && bash flow.sh record-traffic — exactly what the keploy enterprise lane wraps.
  • With keploy: lane scripts in keploy/enterprise wrap docker compose up in keploy record / keploy test.

See README for full commands.

Consumers

  • keploy/enterprise .woodpecker/umami-linux.yml — three-cell record/replay matrix that delegates compose + bootstrap + traffic to this sample.

Test plan

  • docker compose up -d boots postgres + umami cleanly
  • flow.sh bootstrap 240 returns admin token within 240s
  • flow.sh record-traffic fires all 78 (method, path) tuples
  • flow.sh coverage exits 0 cleanly with the no-coverage INFO message

Mirrors the doccano-django sample shape: the sample owns
orchestration (compose / bootstrap / traffic / coverage), keploy
CI lanes consume it as a thin wrapper.

This is a SCAFFOLD — the full traffic loop driven by the existing
keploy/enterprise lane (`run_api_flow` in
.ci/scripts/umami-linux.sh) needs to be ported into
flow.sh::umami_record_traffic in a follow-up. The current loop is
deliberately minimal (heartbeat / me / teams / websites CRUD)
which is enough to prove the sample boots end-to-end without
keploy.

Layout:
  Dockerfile             — pin to umami:postgresql-v2.18.1
  docker-compose.yml     — postgres-15 + umami v2, env-driven
  flow.sh                — bootstrap | record-traffic | coverage | list-routes
  keploy.yml.template    — globalNoise for createdAt/updatedAt/uuid id
  README.md              — handoff + status notes
Signed-off-by: Akash Kumar <meakash7902@gmail.com>
Copilot AI review requested due to automatic review settings May 1, 2026 01:05

Copilot AI left a comment


Pull request overview

Adds a new umami-postgres/ sample scaffold intended to be consumed by Keploy CI “compat lane” wrappers, with the sample owning local orchestration (compose/bootstrap/traffic/coverage) and lanes acting as thin wrappers around those entrypoints.

Changes:

  • Introduces docker-compose.yml + Dockerfile to boot Umami (postgres image) against a local Postgres 15 container on a fixed, env-overridable subnet.
  • Adds flow.sh to bootstrap auth, generate minimal API traffic, and compute route coverage by discovering src/app/api/**/route.ts inside the running container.
  • Adds keploy.yml.template noise filters and README.md describing the scaffold contract and current limitations.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 8 comments.

Show a summary per file
File Description
umami-postgres/Dockerfile Pins the Umami postgres image version for the sample.
umami-postgres/docker-compose.yml Defines the app + Postgres services and network configuration for the sample.
umami-postgres/flow.sh Provides bootstrap, traffic generation, and route/coverage reporting orchestration.
umami-postgres/keploy.yml.template Adds a Keploy config template with global noise filters for non-deterministic fields.
umami-postgres/README.md Documents the sample’s purpose, layout, contract, and local run instructions.


Comment on lines +4 to +6
# matches the doccano-django sibling: SKIP_INIT=0 first time so
# umami's `npx umami-app db:up` runs migrations and seeds; volume
# is retained; SKIP_INIT=1 second time launches the app against

Copilot AI May 1, 2026


The header comment describes a SKIP_INIT=0/1 two-phase boot, but the compose file actually uses UMAMI_SKIP_INIT. This mismatch makes it unclear which env var users should set. Consider updating the comment to match the real variable name (or vice versa) so the “two-phase boot” contract is unambiguous.

Suggested change
# matches the doccano-django sibling: SKIP_INIT=0 first time so
# umami's `npx umami-app db:up` runs migrations and seeds; volume
# is retained; SKIP_INIT=1 second time launches the app against
# matches the doccano-django sibling: UMAMI_SKIP_INIT=0 first time so
# umami's `npx umami-app db:up` runs migrations and seeds; volume
# is retained; UMAMI_SKIP_INIT=1 second time launches the app against

Comment thread umami-postgres/Dockerfile
# keploy/enterprise.
#
# Upstream: https://github.com/umami-software/umami
# Image: docker.io/umamisoftware/umami:postgresql-v2.18.1

Copilot AI May 1, 2026


The Dockerfile comment says the pinned upstream image is docker.io/umamisoftware/umami:postgresql-v2.18.1, but the FROM line uses ghcr.io/umami-software/umami:postgresql-v2.18.1. Please align the comment with the actual registry to avoid confusion when updating the pin.

Suggested change
# Image: docker.io/umamisoftware/umami:postgresql-v2.18.1
# Image: ghcr.io/umami-software/umami:postgresql-v2.18.1

Comment thread umami-postgres/flow.sh Outdated
Comment on lines +116 to +126
log_fired GET "$base/api/heartbeat"
curl -sS "$base/api/heartbeat" >/dev/null || true

log_fired GET "$base/api/me"
curl -sS -H "$h_auth" "$base/api/me" >/dev/null || true

log_fired GET "$base/api/teams"
curl -sS -H "$h_auth" "$base/api/teams" >/dev/null || true

log_fired GET "$base/api/websites"
curl -sS -H "$h_auth" "$base/api/websites" >/dev/null || true

Copilot AI May 1, 2026


record-traffic currently swallows request failures (curl ... || true), so the command can exit 0 even when the API is down / returning 401s. That makes the scaffold look healthy while not actually exercising the surface (and also logs routes as fired even if the request failed). Consider using curl -f (or checking status codes) and letting the script fail on the first unexpected response; only append to UMAMI_FIRED_ROUTES_FILE after a successful call.
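One possible shape of that fix, sketched with a hypothetical fire() helper (not the actual flow.sh code): log the route only after curl -f confirms the request succeeded.

```shell
# Hypothetical fail-fast variant of the traffic helper: curl -f turns
# HTTP >= 400 into a non-zero exit, and the route is appended to the
# fired-routes file only after the request succeeded.
fire() {  # usage: fire METHOD URL [extra curl args...]
  local method="$1" url="$2"; shift 2
  curl -fsS -X "$method" "$@" "$url" >/dev/null || {
    echo "fire: ${method} ${url} failed" >&2
    return 1
  }
  printf '%s %s\n' "$method" "$url" >> "${UMAMI_FIRED_ROUTES_FILE:-/tmp/fired-routes}"
}
```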

Comment thread umami-postgres/flow.sh Outdated
Comment on lines +129 to +139
local website_resp website_id
log_fired POST "$base/api/websites"
website_resp=$(curl -fsS -H "$h_auth" -H "$h_json" -X POST "$base/api/websites" \
-d "{\"name\":\"keploy-${UMAMI_PHASE}\",\"domain\":\"sample.keploy.io\"}" 2>/dev/null || echo "")
website_id=$(jq -r '.id // empty' <<<"$website_resp" 2>/dev/null || true)
if [ -n "$website_id" ]; then
log_fired GET "$base/api/websites/${website_id}"
curl -sS -H "$h_auth" "$base/api/websites/${website_id}" >/dev/null || true
log_fired GET "$base/api/websites/${website_id}/stats"
curl -sS -H "$h_auth" "$base/api/websites/${website_id}/stats?startAt=0&endAt=$(date +%s%3N)" >/dev/null || true
fi

Copilot AI May 1, 2026


The website create call is wrapped with || echo "", which hides HTTP failures from curl -f and then proceeds with an empty response. This can silently skip the rest of the traffic and still exit 0. Prefer failing hard on a non-2xx response (or explicitly handling expected conflicts like “already exists” by checking the status code and response body).

Suggested change
local website_resp website_id
log_fired POST "$base/api/websites"
website_resp=$(curl -fsS -H "$h_auth" -H "$h_json" -X POST "$base/api/websites" \
-d "{\"name\":\"keploy-${UMAMI_PHASE}\",\"domain\":\"sample.keploy.io\"}" 2>/dev/null || echo "")
website_id=$(jq -r '.id // empty' <<<"$website_resp" 2>/dev/null || true)
if [ -n "$website_id" ]; then
log_fired GET "$base/api/websites/${website_id}"
curl -sS -H "$h_auth" "$base/api/websites/${website_id}" >/dev/null || true
log_fired GET "$base/api/websites/${website_id}/stats"
curl -sS -H "$h_auth" "$base/api/websites/${website_id}/stats?startAt=0&endAt=$(date +%s%3N)" >/dev/null || true
fi
local website_resp website_id website_status website_resp_file
log_fired POST "$base/api/websites"
website_resp_file=$(mktemp)
website_status=$(curl -sS -o "$website_resp_file" -w "%{http_code}" -H "$h_auth" -H "$h_json" -X POST "$base/api/websites" \
-d "{\"name\":\"keploy-${UMAMI_PHASE}\",\"domain\":\"sample.keploy.io\"}")
website_resp=$(cat "$website_resp_file")
rm -f "$website_resp_file"
if [ "$website_status" -lt 200 ] || [ "$website_status" -ge 300 ]; then
echo "umami_record_traffic: website creation returned HTTP ${website_status}; verify the Umami app is healthy and the admin token is valid, then retry \`flow.sh bootstrap\` or rerun this flow" >&2
return 1
fi
website_id=$(jq -r '.id // empty' <<<"$website_resp" 2>/dev/null || true)
if [ -z "$website_id" ]; then
echo "umami_record_traffic: website creation succeeded but no website id was returned; inspect the /api/websites response format and retry the flow" >&2
return 1
fi
log_fired GET "$base/api/websites/${website_id}"
curl -sS -H "$h_auth" "$base/api/websites/${website_id}" >/dev/null || true
log_fired GET "$base/api/websites/${website_id}/stats"
curl -sS -H "$h_auth" "$base/api/websites/${website_id}/stats?startAt=0&endAt=$(date +%s%3N)" >/dev/null || true

Comment thread umami-postgres/flow.sh Outdated
Comment on lines +191 to +192
echo "WARNING: umami_list_routes produced no rows; skipping coverage report" >&2
rm -f "$routes_file" "$recorded_file"; return 0

Copilot AI May 1, 2026


This WARNING: message + return 0 causes coverage to succeed even when route discovery failed (e.g., wrong container name, docker exec failure, or path changes in the image). Since this is used in CI gating, consider treating “no routes discovered” as an error with an actionable next step (e.g., verify the container is running and UMAMI_APP_CONTAINER matches), and exit non-zero instead of emitting a warning.

Suggested change
echo "WARNING: umami_list_routes produced no rows; skipping coverage report" >&2
rm -f "$routes_file" "$recorded_file"; return 0
echo "ERROR: umami_list_routes produced no rows. Verify the app container is running, confirm UMAMI_APP_CONTAINER='${UMAMI_APP_CONTAINER}' matches the actual container name, and check whether the route discovery path inside the image has changed." >&2
rm -f "$routes_file" "$recorded_file"; return 1

Comment thread umami-postgres/flow.sh
Comment on lines +41 to +44
UMAMI_ADMIN_USER="${UMAMI_ADMIN_USER:-admin}"
UMAMI_ADMIN_PASSWORD="${UMAMI_ADMIN_PASSWORD:-umami}"
UMAMI_FIXED_TOKEN="${UMAMI_FIXED_TOKEN:-}" # populated by bootstrap; lane scripts may pre-seed
UMAMI_PHASE="${UMAMI_PHASE:-local}"

Copilot AI May 1, 2026


UMAMI_FIXED_TOKEN is declared but never used. Either wire it into bootstrap/record-traffic (so lane scripts can pre-seed a deterministic token) or remove it to avoid confusion; right now the header comment implies deterministic auth behavior that isn’t implemented in the script.

Comment thread umami-postgres/flow.sh
code=$(curl -sS -o /dev/null -w '%{http_code}' "${base}/api/heartbeat" 2>/dev/null || echo "")
if [ "$code" = "200" ]; then return 0; fi
if [ $(( $(date +%s) - start_ts )) -ge "$timeout" ]; then
echo "umami_wait_for_app: timed out (last code: ${code:-<empty>})" >&2

Copilot AI May 1, 2026


When umami_wait_for_app times out, the error message doesn’t provide a concrete next step to diagnose the failure. Consider including hints like checking docker compose ps, docker logs $UMAMI_APP_CONTAINER, or verifying that UMAMI_APP_PORT matches the compose port mapping to make CI failures easier to debug.

Suggested change
echo "umami_wait_for_app: timed out (last code: ${code:-<empty>})" >&2
echo "umami_wait_for_app: timed out waiting for ${base}/api/heartbeat (last code: ${code:-<empty>}). Next steps: run 'docker compose ps' to confirm services are up, inspect app logs with 'docker logs ${UMAMI_APP_CONTAINER}', and verify UMAMI_APP_PORT=${UMAMI_APP_PORT} matches the compose port mapping." >&2

Comment thread umami-postgres/flow.sh
Comment on lines +81 to +96
local resp code
resp=$(curl -sS -o /tmp/umami-login.json -w '%{http_code}' \
-H "$h_json" -X POST "${base}/api/auth/login" \
-d "{\"username\":\"${UMAMI_ADMIN_USER}\",\"password\":\"${UMAMI_ADMIN_PASSWORD}\"}" 2>/dev/null || echo "")
if [ "$resp" != "200" ]; then
echo "umami_bootstrap: login failed (code ${resp:-empty})" >&2
cat /tmp/umami-login.json >&2 || true
return 1
fi
local token
token=$(jq -r '.token' /tmp/umami-login.json 2>/dev/null)
if [ -z "$token" ] || [ "$token" = "null" ]; then
echo "umami_bootstrap: no token in login response" >&2
return 1
fi
printf '%s' "$token" > "/tmp/umami-token-${UMAMI_PHASE}"

Copilot AI May 1, 2026


/tmp/umami-login.json is a fixed path. If curl fails before writing the file, cat may print stale output from a previous run, which can mislead debugging. Consider using a temp file (e.g., mktemp) and cleaning it up, or truncating the file before the request so failures don’t surface old content.

Suggested change
local resp code
resp=$(curl -sS -o /tmp/umami-login.json -w '%{http_code}' \
-H "$h_json" -X POST "${base}/api/auth/login" \
-d "{\"username\":\"${UMAMI_ADMIN_USER}\",\"password\":\"${UMAMI_ADMIN_PASSWORD}\"}" 2>/dev/null || echo "")
if [ "$resp" != "200" ]; then
echo "umami_bootstrap: login failed (code ${resp:-empty})" >&2
cat /tmp/umami-login.json >&2 || true
return 1
fi
local token
token=$(jq -r '.token' /tmp/umami-login.json 2>/dev/null)
if [ -z "$token" ] || [ "$token" = "null" ]; then
echo "umami_bootstrap: no token in login response" >&2
return 1
fi
printf '%s' "$token" > "/tmp/umami-token-${UMAMI_PHASE}"
local resp code login_resp_file
login_resp_file=$(mktemp /tmp/umami-login.XXXXXX.json)
resp=$(curl -sS -o "$login_resp_file" -w '%{http_code}' \
-H "$h_json" -X POST "${base}/api/auth/login" \
-d "{\"username\":\"${UMAMI_ADMIN_USER}\",\"password\":\"${UMAMI_ADMIN_PASSWORD}\"}" 2>/dev/null || echo "")
if [ "$resp" != "200" ]; then
echo "umami_bootstrap: login failed (code ${resp:-empty}); verify the app is reachable and the admin credentials are correct, then retry." >&2
cat "$login_resp_file" >&2 || true
rm -f "$login_resp_file"
return 1
fi
local token
token=$(jq -r '.token' "$login_resp_file" 2>/dev/null)
if [ -z "$token" ] || [ "$token" = "null" ]; then
echo "umami_bootstrap: no token in login response; inspect the login API response and confirm the expected token field is present, then retry." >&2
rm -f "$login_resp_file"
return 1
fi
printf '%s' "$token" > "/tmp/umami-token-${UMAMI_PHASE}"
rm -f "$login_resp_file"

Replace the bootstrap-only stub in flow.sh::umami_record_traffic with the
complete umami v2 API drive that the keploy compat lanes need to gate
against on a record/replay round-trip. The sample now owns the entire
traffic loop end-to-end; consuming lanes wrap `bootstrap | record-traffic
| coverage` inside `keploy record` / `keploy test` and add no curls of
their own.

Surfaces driven by record-traffic:

* auth: /api/auth/login (via bootstrap), /api/auth/verify, /api/auth/logout
* identity: /api/me, /api/me/teams, /api/me/websites
* admin: /api/admin/users, /api/admin/websites, /api/admin/teams (incl.
  paged + search variants)
* users CRUD: POST /api/users, GET /api/users/{id}, POST /api/users/{id}
  (update), GET /api/users/{id}/websites, GET /api/users/{id}/teams
* websites CRUD: POST /api/websites, GET /api/websites (paged), GET
  /api/websites/{id}, POST /api/websites/{id} (update), GET
  /api/websites/{id}/active, GET /api/websites/{id}/daterange,
  POST /api/websites/{id}/reset
* events ingest: POST /api/send (event + identify variants), POST /api/batch
* sessions deep-dive: GET /api/websites/{id}/sessions[, /stats, /weekly,
  /{sessionId}, /{sessionId}/activity, /{sessionId}/properties,
  /{sessionId}/replays], GET /api/websites/{id}/replays, GET
  /api/websites/{id}/session-data/properties
* analytics: stats, pageviews (multiple unit/timezone variants), events
  (series/stats), event-data[/stats], values, realtime, metrics (path /
  referrer / browser / os / device / country / event + search/limit
  variants), metrics/expanded
* reports: every type umami v2 ships — breakdown, goal, funnel, journey,
  retention, utm, attribution, performance — plus saved-report CRUD
  (create, read, update, delete) and the listing endpoints
* teams CRUD lifecycle: POST/GET/POST(update)/DELETE on /api/teams/{id},
  member attach/list/detach via /api/teams/{id}/users[/{userId}]
* share tokens: POST /api/websites/{id}/shares + GET /api/share/{shareId}
  (unauthenticated public-share access)
* boards: full CRUD + /api/boards/{id}/shares
* pixel tracker: GET /api/pixels
* heartbeat 405 path: POST /api/heartbeat

Total: 78 distinct (method, path) tuples fired per record-traffic run.

Resource ids/names are fixed UUIDs / deterministic strings so request
bodies stay byte-stable across record/replay (keeps keploy's body
equality check passing without per-field globalNoise entries). Each
call goes through a small umami_http() helper that logs the (method,
url) tuple to UMAMI_FIRED_ROUTES_FILE and tolerates non-2xx (|| true)
so a single endpoint regression in umami itself does not abort the
whole record run — keploy is the assertion layer at replay.
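
Under those constraints, the helper could look roughly like this (a sketch only; the real umami_http signature and log format may differ):

```shell
# Sketch of a tolerant traffic helper per the description above: the
# (method, path) tuple is logged unconditionally (query string stripped),
# and non-2xx responses do not abort the record run -- keploy's replay
# assertions are the correctness gate, not this script.
umami_http() {  # usage: umami_http METHOD URL [extra curl args...]
  local method="$1" url="$2"; shift 2
  printf '%s %s\n' "$method" "${url%%\?*}" \
    >> "${UMAMI_FIRED_ROUTES_FILE:-/tmp/umami-fired-routes}"
  curl -sS -X "$method" "$@" "$url" >/dev/null 2>&1 || true
}
```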

Also strips the SCAFFOLD/handoff/follow-up language from flow.sh and
README.md: the sample is now the complete reproducer, no out-of-tree
porting remains.

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
@AkashKumar7902 AkashKumar7902 changed the title feat(umami-postgres): keploy compat lane sample (scaffold) feat(umami-postgres): keploy compat lane sample May 1, 2026
Adds a GitHub Actions workflow scoped via paths: filter to
umami-postgres/** so it triggers ONLY on PRs and main-branch
pushes that touch the umami-postgres sample (or the workflow
file itself). Other samples in this repo keep their orthogonal
CI; gating the whole repo on every umami change would slow them
all down for no benefit.

Three jobs:
  * build-coverage   — runs the sample end-to-end against the
                       PR's HEAD ref via flow.sh bootstrap +
                       record-traffic, captures the route-
                       coverage percentage from flow.sh
                       coverage.
  * release-coverage — same end-to-end against the PR's base
                       ref. Has a first-PR bootstrap escape
                       hatch (sample-existed=false → coverage=0)
                       so the introducing PR doesn't fail for
                       lack of a baseline.
  * coverage-gate    — fails the PR if build-coverage drops
                       more than COVERAGE_THRESHOLD percentage
                       points below release-coverage. Default
                       1.0pp; overridable via the
                       UMAMI_COVERAGE_THRESHOLD repo variable.
                       Sticky PR comment summarises the diff.
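
The pass/fail arithmetic reduces to a percentage-point comparison; a sketch (the function name is hypothetical, and awk stands in for the workflow's expression):

```shell
# Illustrative gate check: exit non-zero when the PR's coverage drops
# more than the threshold (in percentage points) below the base ref's.
coverage_gate() {  # usage: coverage_gate BASE_PCT PR_PCT THRESHOLD_PP
  awk -v base="$1" -v pr="$2" -v thr="$3" \
    'BEGIN { exit (base - pr > thr) ? 1 : 0 }'
}
```

For example, a PR at 60.0% against a base of 60.9% passes a 1.0pp threshold, while 59.0% would fail it.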

The gate runs ONLY here, on the sample repo. The enterprise PR
pipeline (.woodpecker/umami-linux.yml) calls flow.sh coverage
informationally with || true and does NOT gate on coverage —
that separation keeps the enterprise lane decoupled from sample-
level coverage drift.

Helper script .github/workflows/scripts/run-and-measure.sh is
the keploy-independent measurement shared by both build- and
release-coverage jobs: two-phase compose boot
(UMAMI_SKIP_INIT=0 then =1) matching the lane scripts, then
flow.sh bootstrap + record-traffic + coverage with
UMAMI_FIRED_ROUTES_FILE wired in as the standalone numerator.
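
The boot sequence that helper performs can be sketched as follows (the `run` indirection, defaulting to eval, exists only so the sequence can be dry-run here; it is an assumption, not the literal helper script):

```shell
# Sketch of the two-phase compose boot described above: phase 1
# (UMAMI_SKIP_INIT=0) lets umami run migrations and seeding, phase 2
# (UMAMI_SKIP_INIT=1) relaunches the app against the retained volume.
two_phase_boot() {
  local run="${1:-eval}"
  UMAMI_SKIP_INIT=0 $run docker compose up -d   # phase 1: migrate + seed
  $run bash flow.sh bootstrap 240               # wait for app, persist JWT
  UMAMI_SKIP_INIT=1 $run docker compose up -d   # phase 2: plain relaunch
  $run bash flow.sh record-traffic
  $run bash flow.sh coverage || true
}
```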

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
The upstream umami image (ghcr.io/umami-software/umami:postgresql-v2.18.1)
ships a compiled Next.js build, not the TypeScript source. The
prior implementation greped src/app/api/**/route.ts inside the
container, which doesn't exist there, so umami_list_routes returned
zero rows and umami_report_coverage skipped with
"WARNING: ...skipping coverage report".

The route surface is fully derivable from the build artefacts:
  /app/.next/app-path-routes-manifest.json  → URL paths
  /app/.next/server/app<url>/route.js       → compiled handlers
                                              with method exports
                                              ({GET:...,POST:...})

Verified end-to-end against the running container: list-routes
now emits 93 (method, path) rows; coverage gate has a real
denominator.
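
The derivation splits into two parsing steps that can be sketched independently of docker (helper names and the exact manifest shape are assumptions; in the sample these would run against files read out of the container with docker exec):

```shell
# list_urls: pull route URL paths out of a Next.js
# app-path-routes-manifest.json (keys like "/api/me/route" map to URLs).
list_urls() {  # usage: list_urls MANIFEST_JSON_FILE
  jq -r 'to_entries[] | select(.key | endswith("/route")) | .value' "$1"
}

# list_methods: recover the HTTP methods a compiled route.js exports
# (minified handlers surface as an object like {GET:...,POST:...}).
list_methods() {  # usage: list_methods ROUTE_JS_FILE
  grep -oE '\b(GET|POST|PUT|PATCH|DELETE)\b' "$1" | sort -u
}
```

Pairing each URL from list_urls with the methods found in its /app/.next/server/app&lt;url&gt;/route.js yields the (method, path) denominator rows.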

Signed-off-by: Akash Kumar <meakash7902@gmail.com>

github-actions Bot commented May 1, 2026

umami-postgres sample coverage

ref          coverage
base (main)  0.0%
this PR      60.9%

Threshold: PR may not drop coverage by more than 1.0pp. Override per-repo via the UMAMI_COVERAGE_THRESHOLD actions variable.

Coverage measures the umami v2 API surface (/api/auth/* + /api/me + /api/users + /api/teams + /api/websites/* + /api/reports/* + /api/share/* + heartbeat) that flow.sh::umami_record_traffic actually exercises against the running backend. Reports are attached as artifacts on each job ("coverage-build" / "coverage-release").

…d+minified

The upstream `ghcr.io/umami-software/umami:postgresql-v2.18.1`
image ships a heavily minified Next.js standalone build under
/app/.next/server/app/api/**/route.js. The source tree
(/app/src) and sourcemaps (.map) are stripped from the image.

V8 / c8 line coverage on minified code is structurally
meaningless — each "line" of the compiled output is many source
statements concatenated by the bundler, so a coverage
percentage doesn't map back to anything a reviewer can act on.

Rather than ship a misleading metric (the prior route-surface
"coverage" we removed elsewhere was exactly this kind of
proxy), the umami sample is now smoke-test-only:

  - `flow.sh bootstrap`  signs in as admin, persists the JWT
  - `flow.sh record-traffic`  exercises the v2 API surface
  - `flow.sh coverage`  is a no-op that prints an info message
                        and exits 0 (so consumers' `flow.sh
                        coverage || true` calls keep working)

The keploy/enterprise compat lane already uses the resulting
record/replay assertions as its correctness gate — that IS the
meaningful test here, not source coverage of umami's frontend.

If real source-line coverage becomes a hard requirement for
this sample, the path is to rebuild umami from source inside a
Dockerfile.coverage overlay (~5-10 min npm install + next build
without minification + with sourcemaps). That's a separate
~hours-of-work change.

Removed:
  - .github/workflows/umami-postgres.yml (coverage gate workflow)
  - .github/workflows/scripts/run-and-measure.sh (its helper)
  - umami_list_routes / umami_list_recorded_routes / the
    legacy route-surface umami_report_coverage in flow.sh.
  - list-routes subcommand.

Replaced umami_report_coverage with a no-op stub.

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
…led image)

Signed-off-by: Akash Kumar <meakash7902@gmail.com>
@AkashKumar7902 AkashKumar7902 changed the title feat(umami-postgres): keploy compat lane sample feat(umami-postgres): keploy compat lane sample (smoke-test only) May 1, 2026