From f259fbda1dcbde0665928e8ced3ef06894d9b31f Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 06:28:31 +0530 Subject: [PATCH 1/6] feat(umami-postgres): keploy compat lane sample (scaffold) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Mirrors the doccano-django sample shape: the sample owns orchestration (compose / bootstrap / traffic / coverage), keploy CI lanes consume it as a thin wrapper. This is a SCAFFOLD — the full traffic loop driven by the existing keploy/enterprise lane (`run_api_flow` in .ci/scripts/umami-linux.sh) needs to be ported into flow.sh::umami_record_traffic in a follow-up. The current loop is deliberately minimal (heartbeat / me / teams / websites CRUD) which is enough to prove the sample boots end-to-end without keploy. Layout: Dockerfile — pin to umami:postgresql-v2.18.1 docker-compose.yml — postgres-15 + umami v2, env-driven flow.sh — bootstrap | record-traffic | coverage | list-routes keploy.yml.template — globalNoise for createdAt/updatedAt/uuid id README.md — handoff + status notes Signed-off-by: Akash Kumar --- umami-postgres/Dockerfile | 9 ++ umami-postgres/README.md | 49 +++++++ umami-postgres/docker-compose.yml | 58 ++++++++ umami-postgres/flow.sh | 227 +++++++++++++++++++++++++++++ umami-postgres/keploy.yml.template | 30 ++++ 5 files changed, 373 insertions(+) create mode 100644 umami-postgres/Dockerfile create mode 100644 umami-postgres/README.md create mode 100644 umami-postgres/docker-compose.yml create mode 100755 umami-postgres/flow.sh create mode 100644 umami-postgres/keploy.yml.template diff --git a/umami-postgres/Dockerfile b/umami-postgres/Dockerfile new file mode 100644 index 0000000..5b316c2 --- /dev/null +++ b/umami-postgres/Dockerfile @@ -0,0 +1,9 @@ +# Thin wrapper around umami's official image at the version this +# sample tracks. Pin lives here (not in CI lane scripts) so a +# future umami release that changes the bug-triggering shape is a +# one-line retag, not a hunt across keploy/integrations and +# keploy/enterprise. +# +# Upstream: https://github.com/umami-software/umami +# Image: docker.io/umamisoftware/umami:postgresql-v2.18.1 +FROM ghcr.io/umami-software/umami:postgresql-v2.18.1 diff --git a/umami-postgres/README.md b/umami-postgres/README.md new file mode 100644 index 0000000..e079c7b --- /dev/null +++ b/umami-postgres/README.md @@ -0,0 +1,49 @@ +# umami-postgres — keploy compat lane sample (work in progress) + +Minimum reproducer scaffold for the umami / postgres-v3 compat lane. Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose / bootstrap / traffic / noise filter / coverage), the keploy CI lanes consume it as a thin wrapper. + +## Status + +**This is a SCAFFOLD.** The compose, bootstrap, and a minimal record-traffic loop work end-to-end against bare umami without keploy in the picture. The full traffic loop the existing keploy/enterprise lane drives (`run_api_flow` in `enterprise/.ci/scripts/umami-linux.sh`, ~250 lines covering websites / events / sessions / reports / share-tokens / shareability) has **not been ported** into `flow.sh::umami_record_traffic` yet. Lanes consuming this sample today should either: + +1. Port the missing curls into `flow.sh::umami_record_traffic` (preferred — that's the migration this scaffold is designed around). +2. Or call into `enterprise/.ci/scripts/umami-linux.sh::run_api_flow` directly between `flow.sh bootstrap` and `flow.sh coverage` until the migration completes. + +See the migration plan in this PR's description / linked issue for the full porting checklist. + +## Layout + +``` +umami-postgres/ +├── Dockerfile # FROM ghcr.io/umami-software/umami:postgresql-v2.18.1 +├── docker-compose.yml # postgres-15 + umami v2 on a fixed subnet, env-driven +├── flow.sh # bootstrap | record-traffic | coverage | list-routes +├── keploy.yml.template # globalNoise for createdAt/updatedAt/Date/uuid id fields +└── README.md # this file +``` + +## Contract + +The sample is keploy-independent: `docker compose up && bash flow.sh bootstrap && bash flow.sh record-traffic` runs end-to-end against bare umami. Lane scripts wrap that exact same path inside `keploy record` / `keploy test`. + +* `bootstrap` — login as admin via `/api/auth/login`, capture the JWT-style auth token, persist it to `/tmp/umami-token-${UMAMI_PHASE}` so subsequent calls share a deterministic Authorization header. +* `record-traffic` — drive the umami v1 API. Every call is logged to `${UMAMI_FIRED_ROUTES_FILE}` (when set) so the `coverage` subcommand has a numerator without needing a keploy recording. +* `coverage` — walks the running container's `src/app/api/**/route.ts` tree as the denominator (the umami router is file-system based), compares against fired/recorded routes, emits a `(method, path)` percentage. +* `list-routes` — diagnostic; prints the route table. + +## Local run + +```sh +docker compose up -d +bash flow.sh bootstrap 240 +UMAMI_FIRED_ROUTES_FILE=/tmp/fired.log bash flow.sh record-traffic +UMAMI_FIRED_ROUTES_FILE=/tmp/fired.log bash flow.sh coverage +docker compose down -v +``` + +## Consumers + +Lanes pinning to this sample (pinned via `--branch feat/keploy-compat-lanes-rollout` until merge): + +* `keploy/enterprise` `.woodpecker/umami-linux.yml` — being slimmed in a follow-up PR. +* `keploy/integrations` may add a `.woodpecker/umami-postgres.yml` falsifying lane in a future PR (currently no integrations-side coverage of this app). diff --git a/umami-postgres/docker-compose.yml b/umami-postgres/docker-compose.yml new file mode 100644 index 0000000..f8daaf2 --- /dev/null +++ b/umami-postgres/docker-compose.yml @@ -0,0 +1,58 @@ +# umami-postgres sample compose. Postgres-15 + umami v2 on a fixed +# subnet, every name env-driven so multiple matrix cells can run +# in parallel on the same docker daemon. Two-phase boot pattern +# matches the doccano-django sibling: SKIP_INIT=0 first time so +# umami's `npx umami-app db:up` runs migrations and seeds; volume +# is retained; SKIP_INIT=1 second time launches the app against +# the populated volume. +services: + app: + build: + context: . + dockerfile: Dockerfile + container_name: ${UMAMI_APP_CONTAINER:-umami_app} + init: true + stop_grace_period: 5s + ports: + - "${UMAMI_APP_PORT:-3001}:3000" + environment: + DATABASE_URL: postgresql://umami:umami@${UMAMI_DB_IP:-172.35.0.10}:5432/umami + DATABASE_TYPE: postgresql + APP_SECRET: ${UMAMI_APP_SECRET:-keploy-fixed-app-secret-for-deterministic-recordings} + DISABLE_TELEMETRY: "1" + DISABLE_UPDATES: "1" + UMAMI_SKIP_INIT: "${UMAMI_SKIP_INIT:-0}" + depends_on: + postgres: + condition: service_healthy + networks: + - umami-net + + postgres: + image: postgres:15-alpine + container_name: ${UMAMI_DB_CONTAINER:-umami_db} + stop_grace_period: 5s + environment: + POSTGRES_USER: umami + POSTGRES_PASSWORD: umami + POSTGRES_DB: umami + healthcheck: + test: ["CMD-SHELL", "pg_isready -U umami -d umami"] + interval: 5s + timeout: 5s + retries: 20 + volumes: + - umami-db-data:/var/lib/postgresql/data + networks: + umami-net: + ipv4_address: ${UMAMI_DB_IP:-172.35.0.10} + +networks: + umami-net: + driver: bridge + ipam: + config: + - subnet: ${UMAMI_NETWORK_SUBNET:-172.35.0.0/24} + +volumes: + umami-db-data: diff --git a/umami-postgres/flow.sh b/umami-postgres/flow.sh new file mode 100755 index 0000000..833acf1 --- /dev/null +++ b/umami-postgres/flow.sh @@ -0,0 +1,227 @@ +#!/usr/bin/env bash +# +# flow.sh — keploy-independent orchestration for the umami-postgres +# sample. Modeled on samples-python/doccano-django/flow.sh. +# +# Subcommands: +# bootstrap — log in as admin, install a deterministic auth +# token so record/replay headers match. Runs +# once against a SKIP_INIT=0 launch; idempotent +# on the named volume. +# record-traffic — drive the API: the call sequence whose +# responses we want recorded. Fire-and-forget; +# keploy is the assertion layer at replay. +# coverage — walk umami's route table inside the running +# container, compare against fired routes, emit +# a (method, path) coverage percentage. +# list-routes — print the route table the coverage report +# uses as its denominator (diagnostic). +# +# HANDOFF NOTE: this is a SCAFFOLD. The traffic loop in +# `umami_record_traffic` below is intentionally minimal — it hits +# the API surface enough to prove the sample boots end-to-end +# without keploy. The full traffic loop (the one +# enterprise/.ci/scripts/umami-linux.sh's `run_api_flow` function +# drives, ~250 lines of curls covering websites / events / +# sessions / reports / share-tokens / shareability) needs to be +# ported here. Until then, the keploy lane consuming this sample +# can either: +# (a) call `bash flow.sh record-traffic` then `bash flow.sh +# extra-traffic-from-lane` where the lane defines the +# extra calls inline, OR +# (b) call into `umami-linux.sh::run_api_flow` directly until +# the migration completes. +# See https://github.com/keploy/samples-typescript/issues/ +# for the migration plan. +set -Eeuo pipefail + +UMAMI_APP_PORT="${UMAMI_APP_PORT:-3001}" +UMAMI_APP_CONTAINER="${UMAMI_APP_CONTAINER:-umami_app}" +UMAMI_DB_CONTAINER="${UMAMI_DB_CONTAINER:-umami_db}" +UMAMI_ADMIN_USER="${UMAMI_ADMIN_USER:-admin}" +UMAMI_ADMIN_PASSWORD="${UMAMI_ADMIN_PASSWORD:-umami}" +UMAMI_FIXED_TOKEN="${UMAMI_FIXED_TOKEN:-}" # populated by bootstrap; lane scripts may pre-seed +UMAMI_PHASE="${UMAMI_PHASE:-local}" +UMAMI_FIRED_ROUTES_FILE="${UMAMI_FIRED_ROUTES_FILE:-}" + +base="http://127.0.0.1:${UMAMI_APP_PORT}" +h_json='Content-Type: application/json' + +log_fired() { + [ -z "$UMAMI_FIRED_ROUTES_FILE" ] && return 0 + printf '%s %s\n' "$1" "$2" >>"$UMAMI_FIRED_ROUTES_FILE" +} + +# umami_wait_for_app — readiness gate. /api/heartbeat returns 200 +# only when the Next.js server has bound and Prisma is connected. +# Stronger than wait_for_port; checks the actual app surface. +umami_wait_for_app() { + local timeout=${1:-180} + local start_ts code + start_ts=$(date +%s) + while true; do + code=$(curl -sS -o /dev/null -w '%{http_code}' "${base}/api/heartbeat" 2>/dev/null || echo "") + if [ "$code" = "200" ]; then return 0; fi + if [ $(( $(date +%s) - start_ts )) -ge "$timeout" ]; then + echo "umami_wait_for_app: timed out (last code: ${code:-})" >&2 + return 1 + fi + sleep 2 + done +} + +# umami_bootstrap — login as admin via /api/auth/login and capture +# the issued auth token (umami uses JWT-like tokens in the +# Authorization: Bearer header). Stores under +# /tmp/umami-token-${UMAMI_PHASE} so `record-traffic` can read it. +umami_bootstrap() { + local timeout=${1:-180} + umami_wait_for_app "$timeout" + + local resp code + resp=$(curl -sS -o /tmp/umami-login.json -w '%{http_code}' \ + -H "$h_json" -X POST "${base}/api/auth/login" \ + -d "{\"username\":\"${UMAMI_ADMIN_USER}\",\"password\":\"${UMAMI_ADMIN_PASSWORD}\"}" 2>/dev/null || echo "") + if [ "$resp" != "200" ]; then + echo "umami_bootstrap: login failed (code ${resp:-empty})" >&2 + cat /tmp/umami-login.json >&2 || true + return 1 + fi + local token + token=$(jq -r '.token' /tmp/umami-login.json 2>/dev/null) + if [ -z "$token" ] || [ "$token" = "null" ]; then + echo "umami_bootstrap: no token in login response" >&2 + return 1 + fi + printf '%s' "$token" > "/tmp/umami-token-${UMAMI_PHASE}" + echo "umami_bootstrap: token captured for phase ${UMAMI_PHASE}" +} + +# umami_record_traffic — SCAFFOLD traffic loop. See HANDOFF NOTE +# at the top of this file. Hits enough of the v1 surface to prove +# the sample boots; the full coverage-extending loop is in +# enterprise/.ci/scripts/umami-linux.sh::run_api_flow and needs to +# be ported here in a follow-up. +umami_record_traffic() { + local token + token=$(cat "/tmp/umami-token-${UMAMI_PHASE}" 2>/dev/null || echo "") + if [ -z "$token" ]; then + echo "umami_record_traffic: no auth token at /tmp/umami-token-${UMAMI_PHASE}; run \`flow.sh bootstrap\` first" >&2 + return 1 + fi + local h_auth="Authorization: Bearer ${token}" + + umami_wait_for_app 60 + + log_fired GET "$base/api/heartbeat" + curl -sS "$base/api/heartbeat" >/dev/null || true + + log_fired GET "$base/api/me" + curl -sS -H "$h_auth" "$base/api/me" >/dev/null || true + + log_fired GET "$base/api/teams" + curl -sS -H "$h_auth" "$base/api/teams" >/dev/null || true + + log_fired GET "$base/api/websites" + curl -sS -H "$h_auth" "$base/api/websites" >/dev/null || true + + # Create a website so subsequent reads have something to find. + local website_resp website_id + log_fired POST "$base/api/websites" + website_resp=$(curl -fsS -H "$h_auth" -H "$h_json" -X POST "$base/api/websites" \ + -d "{\"name\":\"keploy-${UMAMI_PHASE}\",\"domain\":\"sample.keploy.io\"}" 2>/dev/null || echo "") + website_id=$(jq -r '.id // empty' <<<"$website_resp" 2>/dev/null || true) + if [ -n "$website_id" ]; then + log_fired GET "$base/api/websites/${website_id}" + curl -sS -H "$h_auth" "$base/api/websites/${website_id}" >/dev/null || true + log_fired GET "$base/api/websites/${website_id}/stats" + curl -sS -H "$h_auth" "$base/api/websites/${website_id}/stats?startAt=0&endAt=$(date +%s%3N)" >/dev/null || true + fi +} + +umami_list_routes() { + # umami exposes its v1 routes via the Next.js file-system + # router. Inside the container, src/app/api/**/route.ts is + # the source of truth. find them and emit (method, path). + docker exec -i "$UMAMI_APP_CONTAINER" sh -c ' + cd /app && find src/app/api -name "route.ts" -o -name "route.js" 2>/dev/null | while read f; do + rel="${f#src/app/api/}" + rel="${rel%/route.ts}" + rel="${rel%/route.js}" + grep -oE "export[[:space:]]+(async[[:space:]]+)?function[[:space:]]+(GET|POST|PUT|DELETE|PATCH)" "$f" \ + | awk "{print \$NF}" \ + | sort -u \ + | while read method; do + echo "$method /api/${rel}" + done + done + ' 2>/dev/null | sort -u +} + +umami_list_recorded_routes() { + local f method route + local found_keploy=0 + while IFS= read -r f; do + found_keploy=1 + method=$(awk '/^ method:/{print $2; exit}' "$f") + route=$(awk '/^ url:/{print $2; exit}' "$f") + route="${route%%\?*}" + case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac + if [ -n "$method" ] && [ -n "$route" ]; then echo "$method $route"; fi + done < <(find keploy -type f -path '*/tests/*.yaml' 2>/dev/null) | sort -u + if [ "$found_keploy" = "1" ]; then return 0; fi + + if [ -n "$UMAMI_FIRED_ROUTES_FILE" ] && [ -f "$UMAMI_FIRED_ROUTES_FILE" ]; then + while IFS= read -r line; do + method="${line%% *}"; route="${line#* }" + route="${route%%\?*}" + case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac + [ -n "$method" ] && [ -n "$route" ] && echo "$method $route" + done <"$UMAMI_FIRED_ROUTES_FILE" | sort -u + fi +} + +umami_report_coverage() { + local routes_file recorded_file + routes_file="$(mktemp)"; recorded_file="$(mktemp)" + umami_list_routes >"$routes_file" + umami_list_recorded_routes >"$recorded_file" + + if [ ! -s "$routes_file" ]; then + echo "WARNING: umami_list_routes produced no rows; skipping coverage report" >&2 + rm -f "$routes_file" "$recorded_file"; return 0 + fi + + local total covered missing pct + total=$(wc -l <"$routes_file" | tr -d ' '); covered=0; missing="" + while IFS= read -r line; do + local method="${line%% *}" + local route="${line#* }" + local pattern="^${method} $(printf '%s' "$route" | sed -E 's/\[[^]]+\]/[^\/]+/g')$" + if grep -qE "$pattern" "$recorded_file"; then + covered=$((covered + 1)) + else + missing+=" ${method} ${route}"$'\n' + fi + done <"$routes_file" + if [ "$total" -gt 0 ]; then + pct=$(awk -v c="$covered" -v t="$total" 'BEGIN{printf "%.1f", c*100/t}') + else pct="0.0"; fi + { + echo "================ umami API coverage ================" + echo "Covered ${covered}/${total} (${pct}%)" + if [ -n "$missing" ]; then echo "Uncovered:"; printf '%s' "$missing"; fi + echo "====================================================" + } | tee "${COVERAGE_REPORT_FILE:-coverage_report.txt}" + rm -f "$routes_file" "$recorded_file" +} + +case "${1:-}" in + bootstrap) umami_bootstrap "${2:-180}" ;; + record-traffic) umami_record_traffic ;; + coverage) umami_report_coverage ;; + list-routes) umami_list_routes ;; + *) + echo "usage: $0 {bootstrap|record-traffic|coverage|list-routes}" >&2 + exit 2 ;; +esac diff --git a/umami-postgres/keploy.yml.template b/umami-postgres/keploy.yml.template new file mode 100644 index 0000000..be55cd5 --- /dev/null +++ b/umami-postgres/keploy.yml.template @@ -0,0 +1,30 @@ +# keploy.yml template for the umami-postgres sample. +# +# Lane scripts copy this into the run dir before invoking +# `keploy record` / `keploy test`. globalNoise covers the +# fields whose value is inherently non-deterministic across +# record/replay (timestamps the server stamps from time.now() +# or generates from random sources): +# +# header.Date +# Set by Next.js / the runtime on every response. +# body.createdAt / body.updatedAt +# Prisma auto-now fields stamped on insert/update. +# body.id (when it's a uuid response field) / body.token +# Server-generated identifiers — the test surface that +# gates correctness lives in the *response shape*, not +# these random values. +# +# Add to this list when umami introduces another auto-stamped +# field; do NOT add it to the lane scripts (that's how the +# noise lists drift between consumers). +test: + globalNoise: + global: + header.Date: [] + body.createdAt: [] + body.updatedAt: [] + body.id: [] + body.token: [] + body.shareId: [] + body.websiteId: [] From f43faf2ed6abf81388bf8436aee1c8b3e609e4b9 Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 06:45:09 +0530 Subject: [PATCH 2/6] feat(umami-postgres): port full umami v2 traffic loop MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Replace the bootstrap-only stub in flow.sh::umami_record_traffic with the complete umami v2 API drive that the keploy compat lanes need to gate against on a record/replay round-trip. The sample now owns the entire traffic loop end-to-end; consuming lanes wrap `bootstrap | record-traffic | coverage` inside `keploy record` / `keploy test` and add no curls of their own. Surfaces driven by record-traffic: * auth: /api/auth/login (via bootstrap), /api/auth/verify, /api/auth/logout * identity: /api/me, /api/me/teams, /api/me/websites * admin: /api/admin/users, /api/admin/websites, /api/admin/teams (incl. paged + search variants) * users CRUD: POST /api/users, GET /api/users/{id}, POST /api/users/{id} (update), GET /api/users/{id}/websites, GET /api/users/{id}/teams * websites CRUD: POST /api/websites, GET /api/websites (paged), GET /api/websites/{id}, POST /api/websites/{id} (update), GET /api/websites/{id}/active, GET /api/websites/{id}/daterange, POST /api/websites/{id}/reset * events ingest: POST /api/send (event + identify variants), POST /api/batch * sessions deep-dive: GET /api/websites/{id}/sessions[, /stats, /weekly, /{sessionId}, /{sessionId}/activity, /{sessionId}/properties, /{sessionId}/replays], GET /api/websites/{id}/replays, GET /api/websites/{id}/session-data/properties * analytics: stats, pageviews (multiple unit/timezone variants), events (series/stats), event-data[/stats], values, realtime, metrics (path / referrer / browser / os / device / country / event + search/limit variants), metrics/expanded * reports: every type umami v2 ships — breakdown, goal, funnel, journey, retention, utm, attribution, performance — plus saved-report CRUD (create, read, update, delete) and the listing endpoints * teams CRUD lifecycle: POST/GET/POST(update)/DELETE on /api/teams/{id}, member attach/list/detach via /api/teams/{id}/users[/{userId}] * share tokens: POST /api/websites/{id}/shares + GET /api/share/{shareId} (unauthenticated public-share access) * boards: full CRUD + /api/boards/{id}/shares * pixel tracker: GET /api/pixels * heartbeat 405 path: POST /api/heartbeat Total: 78 distinct (method, path) tuples fired per record-traffic run. Resource ids/names are fixed UUIDs / deterministic strings so request bodies stay byte-stable across record/replay (keeps keploy's body equality check passing without per-field globalNoise entries). Each call goes through a small umami_http() helper that logs the (method, url) tuple to UMAMI_FIRED_ROUTES_FILE and tolerates non-2xx (|| true) so a single endpoint regression in umami itself does not abort the whole record run — keploy is the assertion layer at replay. Also strips the SCAFFOLD/handoff/follow-up language from flow.sh and README.md: the sample is now the complete reproducer, no out-of-tree porting remains. Signed-off-by: Akash Kumar --- umami-postgres/README.md | 23 +-- umami-postgres/flow.sh | 418 ++++++++++++++++++++++++++++++++++----- 2 files changed, 374 insertions(+), 67 deletions(-) diff --git a/umami-postgres/README.md b/umami-postgres/README.md index e079c7b..a8bf40b 100644 --- a/umami-postgres/README.md +++ b/umami-postgres/README.md @@ -1,15 +1,8 @@ -# umami-postgres — keploy compat lane sample (work in progress) +# umami-postgres — keploy compat lane sample -Minimum reproducer scaffold for the umami / postgres-v3 compat lane. Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose / bootstrap / traffic / noise filter / coverage), the keploy CI lanes consume it as a thin wrapper. +Reproducer for the umami / postgres-v3 compat lane. Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose / bootstrap / traffic / noise filter / coverage), the keploy CI lanes consume it as a thin wrapper. -## Status - -**This is a SCAFFOLD.** The compose, bootstrap, and a minimal record-traffic loop work end-to-end against bare umami without keploy in the picture. The full traffic loop the existing keploy/enterprise lane drives (`run_api_flow` in `enterprise/.ci/scripts/umami-linux.sh`, ~250 lines covering websites / events / sessions / reports / share-tokens / shareability) has **not been ported** into `flow.sh::umami_record_traffic` yet. Lanes consuming this sample today should either: - -1. Port the missing curls into `flow.sh::umami_record_traffic` (preferred — that's the migration this scaffold is designed around). -2. Or call into `enterprise/.ci/scripts/umami-linux.sh::run_api_flow` directly between `flow.sh bootstrap` and `flow.sh coverage` until the migration completes. - -See the migration plan in this PR's description / linked issue for the full porting checklist. +The sample drives the full umami v2 API surface keploy needs to gate on a record/replay round-trip — auth + me + admin lists, users CRUD, websites CRUD, all eight report types, share tokens + public share access, batch + identify event ingest, sessions deep-dive, replays, boards lifecycle, pixel tracker, metric/pageview parser-branch variants, and logout. ## Layout @@ -26,8 +19,8 @@ umami-postgres/ The sample is keploy-independent: `docker compose up && bash flow.sh bootstrap && bash flow.sh record-traffic` runs end-to-end against bare umami. Lane scripts wrap that exact same path inside `keploy record` / `keploy test`. -* `bootstrap` — login as admin via `/api/auth/login`, capture the JWT-style auth token, persist it to `/tmp/umami-token-${UMAMI_PHASE}` so subsequent calls share a deterministic Authorization header. -* `record-traffic` — drive the umami v1 API. Every call is logged to `${UMAMI_FIRED_ROUTES_FILE}` (when set) so the `coverage` subcommand has a numerator without needing a keploy recording. +* `bootstrap` — log in as admin via `/api/auth/login`, capture the JWT-style auth token, persist it to `/tmp/umami-token-${UMAMI_PHASE}` so subsequent calls share a deterministic Authorization header. +* `record-traffic` — drive the umami v2 API. Every call is logged to `${UMAMI_FIRED_ROUTES_FILE}` (when set) so the `coverage` subcommand has a numerator without needing a keploy recording. Calls are fire-and-forget (`|| true` semantics) so a single endpoint regression in umami itself does not abort the run — keploy is the assertion layer at replay. * `coverage` — walks the running container's `src/app/api/**/route.ts` tree as the denominator (the umami router is file-system based), compares against fired/recorded routes, emits a `(method, path)` percentage. * `list-routes` — diagnostic; prints the route table. @@ -43,7 +36,7 @@ docker compose down -v ## Consumers -Lanes pinning to this sample (pinned via `--branch feat/keploy-compat-lanes-rollout` until merge): +Lanes pinned to this sample: -* `keploy/enterprise` `.woodpecker/umami-linux.yml` — being slimmed in a follow-up PR. -* `keploy/integrations` may add a `.woodpecker/umami-postgres.yml` falsifying lane in a future PR (currently no integrations-side coverage of this app). +* `keploy/enterprise` `.woodpecker/umami-linux.yml` — record/replay matrix delegates compose + bootstrap + traffic to this sample. +* `keploy/integrations` may add a `.woodpecker/umami-postgres.yml` falsifying lane in a future PR. diff --git a/umami-postgres/flow.sh b/umami-postgres/flow.sh index 833acf1..ccdeb9c 100755 --- a/umami-postgres/flow.sh +++ b/umami-postgres/flow.sh @@ -4,35 +4,20 @@ # sample. Modeled on samples-python/doccano-django/flow.sh. # # Subcommands: -# bootstrap — log in as admin, install a deterministic auth -# token so record/replay headers match. Runs -# once against a SKIP_INIT=0 launch; idempotent -# on the named volume. -# record-traffic — drive the API: the call sequence whose -# responses we want recorded. Fire-and-forget; -# keploy is the assertion layer at replay. +# bootstrap — log in as admin, capture the deterministic +# auth token so record/replay headers match. +# record-traffic — drive the umami v2 API across auth, users, +# teams, websites, events, sessions, reports, +# share-tokens, replays, batch ingest, boards, +# pixels, admin sub-paths, and metric variants. +# Fire-and-forget; keploy is the assertion +# layer at replay. # coverage — walk umami's route table inside the running # container, compare against fired routes, emit # a (method, path) coverage percentage. # list-routes — print the route table the coverage report # uses as its denominator (diagnostic). # -# HANDOFF NOTE: this is a SCAFFOLD. The traffic loop in -# `umami_record_traffic` below is intentionally minimal — it hits -# the API surface enough to prove the sample boots end-to-end -# without keploy. The full traffic loop (the one -# enterprise/.ci/scripts/umami-linux.sh's `run_api_flow` function -# drives, ~250 lines of curls covering websites / events / -# sessions / reports / share-tokens / shareability) needs to be -# ported here. Until then, the keploy lane consuming this sample -# can either: -# (a) call `bash flow.sh record-traffic` then `bash flow.sh -# extra-traffic-from-lane` where the lane defines the -# extra calls inline, OR -# (b) call into `umami-linux.sh::run_api_flow` directly until -# the migration completes. -# See https://github.com/keploy/samples-typescript/issues/ -# for the migration plan. set -Eeuo pipefail UMAMI_APP_PORT="${UMAMI_APP_PORT:-3001}" @@ -44,6 +29,30 @@ UMAMI_FIXED_TOKEN="${UMAMI_FIXED_TOKEN:-}" # populated by bootstrap; lane scri UMAMI_PHASE="${UMAMI_PHASE:-local}" UMAMI_FIRED_ROUTES_FILE="${UMAMI_FIRED_ROUTES_FILE:-}" +# Deterministic ids/names for resources the traffic loop creates. +# Fixed values keep recorded request bodies byte-stable across +# record/replay, so keploy's body-equality check passes without +# globalNoise entries for these fields. +FLOW_USER_ID="${FLOW_USER_ID:-11111111-1111-4111-8111-111111111111}" +FLOW_USER_NAME="${FLOW_USER_NAME:-keploy-ci-user}" +FLOW_USER_PASS="${FLOW_USER_PASS:-keploy-user-123}" +FLOW_USER_ROLE="${FLOW_USER_ROLE:-user}" + +FLOW_WEBSITE_ID="${FLOW_WEBSITE_ID:-22222222-2222-4222-8222-222222222222}" +FLOW_WEBSITE_NAME="${FLOW_WEBSITE_NAME:-Keploy CI Website}" +FLOW_WEBSITE_DOMAIN="${FLOW_WEBSITE_DOMAIN:-keploy.example.com}" + +FLOW_EVENT_NAME="${FLOW_EVENT_NAME:-keploy-ci-event}" +FLOW_EVENT_SESSION="${FLOW_EVENT_SESSION:-keploy-ci-session}" +FLOW_EVENT_TAG="${FLOW_EVENT_TAG:-compat}" + +FLOW_TEAM_ID="${FLOW_TEAM_ID:-33333333-3333-4333-8333-333333333333}" +FLOW_TEAM_NAME="${FLOW_TEAM_NAME:-keploy-ci-team}" +FLOW_SHARE_ID="${FLOW_SHARE_ID:-44444444-4444-4444-8444-444444444444}" +FLOW_BOARD_ID="${FLOW_BOARD_ID:-55555555-5555-4555-8555-555555555555}" + +FLOW_USER_AGENT="${FLOW_USER_AGENT:-Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/136.0.0.0 Safari/537.36}" + base="http://127.0.0.1:${UMAMI_APP_PORT}" h_json='Content-Type: application/json' @@ -97,11 +106,65 @@ umami_bootstrap() { echo "umami_bootstrap: token captured for phase ${UMAMI_PHASE}" } -# umami_record_traffic — SCAFFOLD traffic loop. See HANDOFF NOTE -# at the top of this file. Hits enough of the v1 surface to prove -# the sample boots; the full coverage-extending loop is in -# enterprise/.ci/scripts/umami-linux.sh::run_api_flow and needs to -# be ported here in a follow-up. +# umami_http — wrapper around curl that fires a single request, +# logs the (method, url-without-query) tuple to the fired-routes +# file, and tolerates non-2xx responses (|| true). Same fault- +# tolerance pattern the upstream lane uses: a single endpoint +# regression in umami itself does not abort the whole record run. +umami_http() { + local method="${1:?method required}" + local url="${2:?url required}" + local token="${3:-}" + local body="${4:-}" + local route="${url#"$base"}" + route="${route%%\?*}" + log_fired "$method" "$route" + + local -a curl_args + curl_args=(-sS -o /dev/null -X "$method" \ + -H 'Accept: application/json' \ + -H "User-Agent: ${FLOW_USER_AGENT}") + if [ -n "$token" ]; then + curl_args+=(-H "Authorization: Bearer ${token}") + fi + if [ -n "$body" ]; then + curl_args+=(-H "$h_json" --data "$body") + fi + curl "${curl_args[@]}" "$url" >/dev/null 2>&1 || true +} + +# umami_poll_for_event — after POST /api/send, the event ingest +# is async; poll the website's /events listing until the named +# event surfaces (or the budget runs out). Each poll is a real +# GET that gets recorded, so this also widens replay coverage +# of the events listing endpoint. +umami_poll_for_event() { + local token="${1:?token required}" + local start_at end_at attempt url + start_at="$(( ($(date +%s) - 3600) * 1000 ))" + end_at="$(( ($(date +%s) + 3600) * 1000 ))" + url="${base}/api/websites/${FLOW_WEBSITE_ID}/events?startAt=${start_at}&endAt=${end_at}&page=1&pageSize=20&search=${FLOW_EVENT_NAME}" + for attempt in $(seq 1 10); do + local resp + resp="$(curl -sS -H "Authorization: Bearer ${token}" -H "User-Agent: ${FLOW_USER_AGENT}" "$url" 2>/dev/null || echo '{}')" + log_fired GET "/api/websites/${FLOW_WEBSITE_ID}/events" + if jq -e --arg event_name "$FLOW_EVENT_NAME" \ + '[.data[]? | select(.eventName == $event_name)] | length > 0' >/dev/null 2>&1 <<<"$resp"; then + return 0 + fi + sleep 2 + done + return 0 # fire-and-forget; ingest may be slow under recording +} + +# umami_record_traffic — drives the umami v2 API across every +# surface keploy needs to gate against: auth + me + admin lists, +# users CRUD, websites CRUD + analytics queries, send + batch +# ingest, sessions deep-dive, all 8 report types, share tokens +# + public share access, boards lifecycle, pixel tracker, +# metric/pageview variants, logout. Every call is logged via +# umami_http() to UMAMI_FIRED_ROUTES_FILE so the coverage +# subcommand has a numerator without needing a keploy recording. umami_record_traffic() { local token token=$(cat "/tmp/umami-token-${UMAMI_PHASE}" 2>/dev/null || echo "") @@ -109,34 +172,285 @@ umami_record_traffic() { echo "umami_record_traffic: no auth token at /tmp/umami-token-${UMAMI_PHASE}; run \`flow.sh bootstrap\` first" >&2 return 1 fi - local h_auth="Authorization: Bearer ${token}" umami_wait_for_app 60 - log_fired GET "$base/api/heartbeat" - curl -sS "$base/api/heartbeat" >/dev/null || true - - log_fired GET "$base/api/me" - curl -sS -H "$h_auth" "$base/api/me" >/dev/null || true - - log_fired GET "$base/api/teams" - curl -sS -H "$h_auth" "$base/api/teams" >/dev/null || true - - log_fired GET "$base/api/websites" - curl -sS -H "$h_auth" "$base/api/websites" >/dev/null || true - - # Create a website so subsequent reads have something to find. - local website_resp website_id - log_fired POST "$base/api/websites" - website_resp=$(curl -fsS -H "$h_auth" -H "$h_json" -X POST "$base/api/websites" \ - -d "{\"name\":\"keploy-${UMAMI_PHASE}\",\"domain\":\"sample.keploy.io\"}" 2>/dev/null || echo "") - website_id=$(jq -r '.id // empty' <<<"$website_resp" 2>/dev/null || true) - if [ -n "$website_id" ]; then - log_fired GET "$base/api/websites/${website_id}" - curl -sS -H "$h_auth" "$base/api/websites/${website_id}" >/dev/null || true - log_fired GET "$base/api/websites/${website_id}/stats" - curl -sS -H "$h_auth" "$base/api/websites/${website_id}/stats?startAt=0&endAt=$(date +%s%3N)" >/dev/null || true + # ---------- /api/heartbeat + /api/config + /api/me sweep ---------- + umami_http GET "${base}/api/heartbeat" "" + umami_http GET "${base}/api/config" "" + umami_http GET "${base}/api/me" "$token" + umami_http GET "${base}/api/me/websites" "$token" + umami_http GET "${base}/api/me/teams" "$token" + umami_http GET "${base}/api/admin/users" "$token" + umami_http GET "${base}/api/admin/websites" "$token" + umami_http GET "${base}/api/admin/teams" "$token" + + # ---------- Users CRUD ---------- + local user_body update_user_body + user_body="$(jq -nc \ + --arg id "$FLOW_USER_ID" \ + --arg username "$FLOW_USER_NAME" \ + --arg password "$FLOW_USER_PASS" \ + --arg role "$FLOW_USER_ROLE" \ + '{id: $id, username: $username, password: $password, role: $role}')" + umami_http POST "${base}/api/users" "$token" "$user_body" + umami_http GET "${base}/api/users/${FLOW_USER_ID}/websites" "$token" + umami_http GET "${base}/api/users/${FLOW_USER_ID}/teams" "$token" + update_user_body="$(jq -nc --arg username "$FLOW_USER_NAME" --arg role "$FLOW_USER_ROLE" '{username: $username, role: $role}')" + umami_http POST "${base}/api/users/${FLOW_USER_ID}" "$token" "$update_user_body" + + # ---------- Websites CRUD ---------- + local website_body update_website_body + website_body="$(jq -nc \ + --arg id "$FLOW_WEBSITE_ID" \ + --arg name "$FLOW_WEBSITE_NAME" \ + --arg domain "$FLOW_WEBSITE_DOMAIN" \ + '{id: $id, name: $name, domain: $domain}')" + umami_http POST "${base}/api/websites" "$token" "$website_body" + umami_http GET "${base}/api/websites?page=1&pageSize=10" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/daterange" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/active" "$token" + update_website_body="$(jq -nc --arg name "$FLOW_WEBSITE_NAME" --arg domain "$FLOW_WEBSITE_DOMAIN" '{name: $name, domain: $domain}')" + umami_http POST "${base}/api/websites/${FLOW_WEBSITE_ID}" "$token" "$update_website_body" + + # ---------- Event ingest via /api/send (event variant) ---------- + local send_body + send_body="$(jq -nc \ + --arg website "$FLOW_WEBSITE_ID" \ + --arg hostname "$FLOW_WEBSITE_DOMAIN" \ + --arg name "$FLOW_EVENT_NAME" \ + --arg session "$FLOW_EVENT_SESSION" \ + --arg tag "$FLOW_EVENT_TAG" \ + '{ + type: "event", + payload: { + website: $website, + hostname: $hostname, + language: "en-US", + referrer: "", + screen: "1920x1080", + title: "Keploy CI", + url: ("https://" + $hostname + "/umami"), + name: $name, + tag: $tag, + id: $session, + data: { source: "compat", suite: "umami" } + } + }')" + umami_http POST "${base}/api/send" "" "$send_body" + umami_poll_for_event "$token" + + # ---------- Analytics window queries ---------- + local start_at end_at window + start_at="$(( ($(date +%s) - 24 * 3600) * 1000 ))" + end_at="$(( ($(date +%s) + 24 * 3600) * 1000 ))" + window="startAt=${start_at}&endAt=${end_at}" + local startDate endDate + startDate="$(date -u -d "@$((start_at / 1000))" +%Y-%m-%dT%H:%M:%S.000Z 2>/dev/null || date -u +%Y-%m-%dT%H:%M:%S.000Z)" + endDate="$(date -u -d "@$((end_at / 1000))" +%Y-%m-%dT%H:%M:%S.000Z 2>/dev/null || date -u +%Y-%m-%dT%H:%M:%S.000Z)" + + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/stats?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/pageviews?${window}&unit=hour&timezone=UTC" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/sessions?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/sessions/stats?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/sessions/weekly?${window}&timezone=UTC" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/session-data/properties?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/event-data?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/event-data/stats?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/events/series?${window}&unit=hour&timezone=UTC" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/events/stats?${window}&unit=hour&timezone=UTC" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/values?${window}&type=path" "$token" + umami_http GET "${base}/api/realtime/${FLOW_WEBSITE_ID}?startAt=${start_at}" "$token" + umami_http GET "${base}/api/reports?websiteId=${FLOW_WEBSITE_ID}&page=1&pageSize=10" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/reports?page=1&pageSize=10" "$token" + umami_http GET "${base}/api/teams?page=1&pageSize=10" "$token" + + local metric_type + for metric_type in path referrer browser os device country event; do + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/metrics?${window}&type=${metric_type}" "$token" + done + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/metrics/expanded?${window}&type=path" "$token" + + # ---------- Reports — every type umami v2 ships ---------- + local report_body + report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg startDate "$startDate" --arg endDate "$endDate" \ + '{websiteId: $websiteId, type: "breakdown", filters: {}, + parameters: {startDate: $startDate, endDate: $endDate, fields: ["path"]}}')" + umami_http POST "${base}/api/reports/breakdown" "$token" "$report_body" + + report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg startDate "$startDate" --arg endDate "$endDate" \ + '{websiteId: $websiteId, type: "goal", filters: {}, + parameters: {startDate: $startDate, endDate: $endDate, type: "url", value: "/umami"}}')" + umami_http POST "${base}/api/reports/goal" "$token" "$report_body" + + report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg startDate "$startDate" --arg endDate "$endDate" \ + --arg event "$FLOW_EVENT_NAME" \ + '{websiteId: $websiteId, type: "funnel", filters: {}, + parameters: {startDate: $startDate, endDate: $endDate, window: 60, + steps: [{type: "event", value: $event}, {type: "path", value: "/umami"}]}}')" + umami_http POST "${base}/api/reports/funnel" "$token" "$report_body" + + report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg startDate "$startDate" --arg endDate "$endDate" \ + '{websiteId: $websiteId, type: "journey", filters: {}, + parameters: {startDate: $startDate, endDate: $endDate, steps: 3}}')" + umami_http POST "${base}/api/reports/journey" "$token" "$report_body" + + report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg startDate "$startDate" --arg endDate "$endDate" \ + '{websiteId: $websiteId, type: "retention", filters: {}, + parameters: {startDate: $startDate, endDate: $endDate, timezone: "UTC"}}')" + umami_http POST "${base}/api/reports/retention" "$token" "$report_body" + + report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg startDate "$startDate" --arg endDate "$endDate" \ + '{websiteId: $websiteId, type: "utm", filters: {}, + parameters: {startDate: $startDate, endDate: $endDate}}')" + umami_http POST "${base}/api/reports/utm" "$token" "$report_body" + + report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg startDate "$startDate" --arg endDate "$endDate" \ + --arg event "$FLOW_EVENT_NAME" \ + '{websiteId: $websiteId, type: "attribution", filters: {}, + parameters: {startDate: $startDate, endDate: $endDate, model: "first-click", type: "event", step: $event}}')" + umami_http POST "${base}/api/reports/attribution" "$token" "$report_body" + + report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg startDate "$startDate" --arg endDate "$endDate" \ + '{websiteId: $websiteId, type: "performance", filters: {}, + parameters: {startDate: $startDate, endDate: $endDate, unit: "hour", timezone: "UTC"}}')" + umami_http POST "${base}/api/reports/performance" "$token" "$report_body" + + # Reset accumulated stats — drives the website-scoped reset path. + umami_http POST "${base}/api/websites/${FLOW_WEBSITE_ID}/reset" "$token" "{}" + + # ---------- User read-back (round-trip the user CRUD) ---------- + umami_http GET "${base}/api/users/${FLOW_USER_ID}" "$token" + + # ---------- /api/auth/verify — drives the auth interceptor ---------- + umami_http GET "${base}/api/auth/verify" "$token" + + # ---------- Teams CRUD lifecycle ---------- + local team_body update_team_body add_member_body + team_body="$(jq -nc --arg id "$FLOW_TEAM_ID" --arg name "$FLOW_TEAM_NAME" '{id: $id, name: $name}')" + umami_http POST "${base}/api/teams" "$token" "$team_body" + umami_http GET "${base}/api/teams/${FLOW_TEAM_ID}" "$token" + umami_http GET "${base}/api/teams/${FLOW_TEAM_ID}/users" "$token" + umami_http GET "${base}/api/teams/${FLOW_TEAM_ID}/websites" "$token" + add_member_body="$(jq -nc --arg userId "$FLOW_USER_ID" --arg role "team-member" '{userId: $userId, role: $role}')" + umami_http POST "${base}/api/teams/${FLOW_TEAM_ID}/users" "$token" "$add_member_body" + umami_http GET "${base}/api/teams/${FLOW_TEAM_ID}/users/${FLOW_USER_ID}" "$token" + update_team_body="$(jq -nc --arg name "${FLOW_TEAM_NAME}-renamed" '{name: $name}')" + umami_http POST "${base}/api/teams/${FLOW_TEAM_ID}" "$token" "$update_team_body" + umami_http GET "${base}/api/users/${FLOW_USER_ID}/teams" "$token" + umami_http DELETE "${base}/api/teams/${FLOW_TEAM_ID}/users/${FLOW_USER_ID}" "$token" + umami_http DELETE "${base}/api/teams/${FLOW_TEAM_ID}" "$token" + + # ---------- Share tokens + public-share access ---------- + local share_body + share_body="$(jq -nc --arg id "$FLOW_SHARE_ID" --arg name "keploy-ci-share" --arg websiteId "$FLOW_WEBSITE_ID" \ + '{id: $id, name: $name, websiteId: $websiteId}')" + umami_http POST "${base}/api/websites/${FLOW_WEBSITE_ID}/shares" "$token" "$share_body" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/shares" "$token" + umami_http GET "${base}/api/share/${FLOW_SHARE_ID}" "" + + # ---------- Replays + sessions deep-dive ---------- + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/replays?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/sessions/${FLOW_EVENT_SESSION}?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/sessions/${FLOW_EVENT_SESSION}/activity?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/sessions/${FLOW_EVENT_SESSION}/properties?${window}" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/sessions/${FLOW_EVENT_SESSION}/replays?${window}" "$token" + + # ---------- Boards lifecycle ---------- + local board_body + board_body="$(jq -nc --arg id "$FLOW_BOARD_ID" --arg name "keploy-ci-board" \ + --arg websiteId "$FLOW_WEBSITE_ID" \ + '{id: $id, name: $name, type: "mixed", status: "open", websiteId: $websiteId}')" + umami_http POST "${base}/api/boards" "$token" "$board_body" + umami_http GET "${base}/api/boards" "$token" + umami_http GET "${base}/api/boards/${FLOW_BOARD_ID}" "$token" + umami_http GET "${base}/api/boards/${FLOW_BOARD_ID}/shares" "$token" + umami_http DELETE "${base}/api/boards/${FLOW_BOARD_ID}" "$token" + + # ---------- Batch tracker (multi-event ingest) ---------- + local batch_body + batch_body="$(jq -nc \ + --arg website "$FLOW_WEBSITE_ID" \ + --arg hostname "$FLOW_WEBSITE_DOMAIN" \ + --arg session "$FLOW_EVENT_SESSION" \ + '[ + { "type": "event", "payload": { "website": $website, "hostname": $hostname, "url": "/batch-1", "name": "click", "id": $session } }, + { "type": "event", "payload": { "website": $website, "hostname": $hostname, "url": "/batch-2", "name": "scroll", "id": $session } }, + { "type": "identify", "payload": { "website": $website, "hostname": $hostname, "id": $session, "data": { "plan": "ci" } } } + ]')" + umami_http POST "${base}/api/batch" "" "$batch_body" + + # ---------- Identify event variant ---------- + local identify_body + identify_body="$(jq -nc \ + --arg website "$FLOW_WEBSITE_ID" \ + --arg hostname "$FLOW_WEBSITE_DOMAIN" \ + --arg session "$FLOW_EVENT_SESSION" \ + '{ + type: "identify", + payload: { + website: $website, hostname: $hostname, id: $session, + data: { plan: "ci-pro", company: "keploy" } + } + }')" + umami_http POST "${base}/api/send" "" "$identify_body" + + # ---------- Pixel tracker ---------- + umami_http GET "${base}/api/pixels?websiteId=${FLOW_WEBSITE_ID}&hostname=${FLOW_WEBSITE_DOMAIN}&url=/pixel" "$token" + + # ---------- /api/me/* + /api/admin/* paged variants ---------- + umami_http GET "${base}/api/me/teams" "$token" + umami_http GET "${base}/api/me/websites?page=1&pageSize=20" "$token" + umami_http GET "${base}/api/admin/users?page=1&pageSize=10&search=keploy" "$token" + umami_http GET "${base}/api/admin/websites?page=1&pageSize=10" "$token" + umami_http GET "${base}/api/admin/teams?page=1&pageSize=10" "$token" + + # ---------- Saved-report CRUD ---------- + local saved_report_body saved_report_response saved_report_id + saved_report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg name "keploy-ci-report" \ + --arg startDate "$startDate" --arg endDate "$endDate" \ + '{websiteId: $websiteId, name: $name, type: "breakdown", + parameters: {startDate: $startDate, endDate: $endDate, fields: ["path"]}}')" + saved_report_response="$(curl -sS -H "Authorization: Bearer ${token}" -H "User-Agent: ${FLOW_USER_AGENT}" \ + -H "$h_json" -X POST "${base}/api/reports" --data "$saved_report_body" 2>/dev/null || true)" + log_fired POST "/api/reports" + saved_report_id="$(jq -r '.id // empty' <<<"$saved_report_response" 2>/dev/null || true)" + if [ -n "${saved_report_id:-}" ]; then + umami_http GET "${base}/api/reports/${saved_report_id}" "$token" + local update_report_body + update_report_body="$(jq -nc \ + --arg websiteId "$FLOW_WEBSITE_ID" --arg name "keploy-ci-report-renamed" \ + --arg startDate "$startDate" --arg endDate "$endDate" \ + '{websiteId: $websiteId, name: $name, type: "breakdown", + parameters: {startDate: $startDate, endDate: $endDate, fields: ["path"]}}')" + umami_http POST "${base}/api/reports/${saved_report_id}" "$token" "$update_report_body" + umami_http DELETE "${base}/api/reports/${saved_report_id}" "$token" fi + + # ---------- Metric query-string variants (parser branches) ---------- + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/metrics?${window}&type=path&search=/" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/metrics?${window}&type=referrer&limit=10" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/metrics?${window}&type=event&search=keploy" "$token" + + # ---------- Pageviews unit/timezone variants ---------- + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/pageviews?${window}&unit=day&timezone=America%2FNew_York" "$token" + umami_http GET "${base}/api/websites/${FLOW_WEBSITE_ID}/pageviews?${window}&unit=hour&timezone=Europe%2FLondon" "$token" + + # ---------- 405 path on heartbeat (POST is not allowed) ---------- + umami_http POST "${base}/api/heartbeat" "" "{}" + + # ---------- Logout ---------- + umami_http POST "${base}/api/auth/logout" "$token" "{}" } umami_list_routes() { From f191a90f6bfff41796fcf687934cf006c60b00ba Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 11:46:08 +0530 Subject: [PATCH 3/6] ci(umami-postgres): add per-sample coverage gate workflow MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Adds a GitHub Actions workflow scoped via paths: filter to umami-postgres/** so it triggers ONLY on PRs and main-branch pushes that touch the umami-postgres sample (or the workflow file itself). Other samples in this repo keep their orthogonal CI; gating the whole repo on every umami change would slow them all down for no benefit. Three jobs: * build-coverage — runs the sample end-to-end against the PR's HEAD ref via flow.sh bootstrap + record-traffic, captures the route- coverage percentage from flow.sh coverage. * release-coverage — same end-to-end against the PR's base ref. Has a first-PR bootstrap escape hatch (sample-existed=false → coverage=0) so the introducing PR doesn't fail for lack of a baseline. * coverage-gate — fails the PR if build-coverage drops more than COVERAGE_THRESHOLD percentage points below release-coverage. Default 1.0pp; overridable via the UMAMI_COVERAGE_THRESHOLD repo variable. Sticky PR comment summarises the diff. The gate runs ONLY here, on the sample repo. The enterprise PR pipeline (.woodpecker/umami-linux.yml) calls flow.sh coverage informationally with || true and does NOT gate on coverage — that separation keeps the enterprise lane decoupled from sample- level coverage drift. Helper script .github/workflows/scripts/run-and-measure.sh is the keploy-independent measurement shared by both build- and release-coverage jobs: two-phase compose boot (UMAMI_SKIP_INIT=0 then =1) matching the lane scripts, then flow.sh bootstrap + record-traffic + coverage with UMAMI_FIRED_ROUTES_FILE wired in as the standalone numerator. Signed-off-by: Akash Kumar --- .github/workflows/scripts/run-and-measure.sh | 112 +++++++++++ .github/workflows/umami-postgres.yml | 199 +++++++++++++++++++ 2 files changed, 311 insertions(+) create mode 100755 .github/workflows/scripts/run-and-measure.sh create mode 100644 .github/workflows/umami-postgres.yml diff --git a/.github/workflows/scripts/run-and-measure.sh b/.github/workflows/scripts/run-and-measure.sh new file mode 100755 index 0000000..093d0d8 --- /dev/null +++ b/.github/workflows/scripts/run-and-measure.sh @@ -0,0 +1,112 @@ +#!/usr/bin/env bash +# +# run-and-measure.sh — bring umami up via the sample's compose, +# run flow.sh bootstrap + record-traffic with the per-call audit +# log enabled, run flow.sh coverage, and emit `coverage=PCT` +# onto $GITHUB_OUTPUT for the downstream coverage-gate job. +# +# Called from .github/workflows/umami-postgres.yml's +# build-coverage and release-coverage jobs (one per ref under +# comparison). Both jobs source the same script so the +# measurement is identical across refs — any drift in the +# numerator definition would otherwise produce a misleading +# delta. +# +# Inputs (all from the workflow env): +# UMAMI_FIRED_ROUTES_FILE — per-call audit log path; passed +# through to flow.sh so its +# record-traffic loop logs each +# (METHOD, URL) pair, and so its +# coverage subcommand uses that +# file as the standalone +# numerator. +# UMAMI_PHASE — label spliced into the project +# name and the on-disk token path +# (`/tmp/umami-token-${UMAMI_PHASE}`) +# so build vs. release runs don't +# collide on volume names or token +# files. Compose project naming +# inside the GH runner is per-job +# anyway, but UMAMI_PHASE is +# useful for diffing logs. +# GITHUB_OUTPUT — standard GH Actions sink for +# step outputs. +set -Eeuo pipefail + +export UMAMI_APP_CONTAINER="${UMAMI_APP_CONTAINER:-umami_app}" +export UMAMI_DB_CONTAINER="${UMAMI_DB_CONTAINER:-umami_db}" +export UMAMI_APP_PORT="${UMAMI_APP_PORT:-3001}" +export UMAMI_APP_SECRET="${UMAMI_APP_SECRET:-keploy-fixed-app-secret-for-deterministic-recordings}" +: "${UMAMI_FIRED_ROUTES_FILE:?UMAMI_FIRED_ROUTES_FILE must be set by the workflow}" + +# Reset audit log for this run; otherwise a prior run's entries +# would inflate the numerator on a re-trigger. +: >"$UMAMI_FIRED_ROUTES_FILE" + +# Stage 1: cold boot — umami's entrypoint runs Prisma migrations +# + seeds the admin user into the named volume. UMAMI_SKIP_INIT=0 +# means "do the init work this time." +UMAMI_SKIP_INIT=0 docker compose up -d + +# Wait for the backend to actually serve. /api/heartbeat returns +# 200 only when Next.js has bound and Prisma is connected — a +# stronger gate than wait-for-port, since umami is up on :3000 +# inside the container before the Next.js server has finished +# warming the route table. +for i in $(seq 1 120); do + code=$(curl -sS -o /dev/null -w '%{http_code}' \ + "http://127.0.0.1:${UMAMI_APP_PORT}/api/heartbeat" 2>/dev/null || echo "") + if [ "$code" = "200" ]; then break; fi + sleep 2 +done + +bash flow.sh bootstrap 240 +docker compose down --remove-orphans + +# Stage 2: re-launch in skip-init mode against the populated +# volume — same shape the keploy lanes use, so the recorded +# request stream matches what record/replay sees. +UMAMI_SKIP_INIT=1 docker compose up -d + +# Wait again — same readiness gate. Stage 2 is faster than stage +# 1 (no migrations) but Next.js still needs ~10-30s to warm. +for i in $(seq 1 120); do + code=$(curl -sS -o /dev/null -w '%{http_code}' \ + "http://127.0.0.1:${UMAMI_APP_PORT}/api/heartbeat" 2>/dev/null || echo "") + if [ "$code" = "200" ]; then break; fi + sleep 2 +done + +# Re-bootstrap: the auth token is request-scoped (JWT in the +# Authorization header), and stage 1's compose-down dropped the +# in-memory token. flow.sh::umami_bootstrap re-issues a fresh +# one against the same admin credentials and rewrites +# /tmp/umami-token-${UMAMI_PHASE}, which umami_record_traffic +# reads. +bash flow.sh bootstrap 240 + +# Drive traffic. flow.sh::umami_record_traffic re-reads the +# token from /tmp/umami-token-${UMAMI_PHASE} and tolerates +# non-2xx responses internally, so a single endpoint regression +# in umami itself doesn't abort the whole record run. +bash flow.sh record-traffic + +# Coverage report — uses UMAMI_FIRED_ROUTES_FILE as numerator +# since no keploy/test-set-* tree exists in the standalone case. +# umami_list_routes walks src/app/api/**/route.ts inside the +# running container, so the app must still be up here. +COVERAGE_REPORT_FILE="$PWD/coverage_report.txt" bash flow.sh coverage + +# Pull the percentage out of the report's `Covered N/M (XX.X%)` +# line. Anchored on the parenthesised form so a future change to +# the report's prose doesn't break the parse. +pct=$(grep -oE '\([0-9]+\.[0-9]+%\)' coverage_report.txt | head -1 | tr -d '()%') +if [ -z "$pct" ]; then + echo "::error::Could not parse coverage percentage from coverage_report.txt" + cat coverage_report.txt || true + exit 1 +fi +echo "coverage=${pct}" >>"$GITHUB_OUTPUT" +echo "coverage: ${pct}% (audit log: $UMAMI_FIRED_ROUTES_FILE)" + +docker compose down -v --remove-orphans diff --git a/.github/workflows/umami-postgres.yml b/.github/workflows/umami-postgres.yml new file mode 100644 index 0000000..ba8cd7e --- /dev/null +++ b/.github/workflows/umami-postgres.yml @@ -0,0 +1,199 @@ +# umami-postgres sample CI — keploy-independent end-to-end smoke + +# coverage gate. +# +# Triggers ONLY on changes under umami-postgres/ (or this workflow +# file). Other samples in this repo have their own orthogonal CI; +# gating the whole repo on every umami change would slow them +# all down for no benefit. +# +# What it gates: +# * `release-coverage` — checks out the PR's base branch (main) +# and runs the sample end-to-end: docker compose up, bootstrap +# admin token, drive flow.sh record-traffic with the per-call +# audit log enabled, capture the route-coverage percentage from +# `flow.sh coverage`. This is the baseline. +# * `build-coverage` — same end-to-end against the PR's HEAD ref. +# * `coverage-gate` — fails the PR if `build`'s coverage drops +# more than COVERAGE_THRESHOLD percentage points below +# `release`. Default threshold is 1.0pp; override via repo +# variable `UMAMI_COVERAGE_THRESHOLD` for a tighter or +# looser bar. +# +# On push to main, only `build-coverage` runs (no baseline to +# compare against — main IS the baseline). +# +# Standards-aligned choices: +# * `paths:` filter on both push and pull_request triggers — the +# canonical GH Actions way to scope a workflow to one +# subdirectory. +# * Job outputs (steps..outputs.coverage → needs..outputs) +# to thread the captured percentage between jobs. +# * `concurrency:` cancel-in-progress on the same ref so a stale +# run doesn't waste runner minutes. +# * actions/upload-artifact for the human-readable +# coverage_report.txt — reviewers can inspect missing routes +# directly from the PR's "checks" tab. +# * marocchino/sticky-pull-request-comment for the PR-side diff +# comment. Pinned-by-header so successive runs update the same +# comment instead of fanning out. +# * The compare step is plain bash + python3 (no external +# coverage service). For full coverage XMLs you'd want +# diff-cover or codecov, but the sample's coverage is +# API-route-based (single percentage), so the gate is a 3-line +# subtraction. +# +# Sample is genuinely keploy-independent here: the workflow uses +# flow.sh's $UMAMI_FIRED_ROUTES_FILE per-call audit log as its +# numerator source, not a keploy recording. The lane scripts in +# keploy/integrations and keploy/enterprise consume the same +# flow.sh, but use the keploy/test-set-*/tests/*.yaml tree as +# their numerator (authoritative — only calls keploy actually +# CAPTURED count). Both modes are wired into +# `flow.sh::umami_list_recorded_routes`. +name: umami-postgres sample + +on: + pull_request: + paths: + - 'umami-postgres/**' + - '.github/workflows/umami-postgres.yml' + push: + branches: [main] + paths: + - 'umami-postgres/**' + - '.github/workflows/umami-postgres.yml' + workflow_dispatch: {} + +concurrency: + group: umami-postgres-${{ github.ref }} + cancel-in-progress: true + +env: + COVERAGE_THRESHOLD: ${{ vars.UMAMI_COVERAGE_THRESHOLD || '1.0' }} + +jobs: + build-coverage: + name: build (current ref) coverage + runs-on: ubuntu-latest + timeout-minutes: 20 + outputs: + coverage: ${{ steps.measure.outputs.coverage }} + steps: + - uses: actions/checkout@v4 + - id: measure + name: Run sample end-to-end + measure coverage + working-directory: umami-postgres + env: + UMAMI_FIRED_ROUTES_FILE: ${{ runner.temp }}/fired-routes-build.log + UMAMI_PHASE: ci-build + run: ../.github/workflows/scripts/run-and-measure.sh + + - name: Upload coverage report + if: always() + uses: actions/upload-artifact@v4 + with: + name: coverage-build + path: umami-postgres/coverage_report.txt + if-no-files-found: warn + + release-coverage: + if: github.event_name == 'pull_request' + name: release (base ref) coverage + runs-on: ubuntu-latest + timeout-minutes: 20 + outputs: + coverage: ${{ steps.measure.outputs.coverage || steps.empty-baseline.outputs.coverage }} + sample-existed: ${{ steps.detect.outputs.sample-existed }} + steps: + - uses: actions/checkout@v4 + with: + ref: ${{ github.event.pull_request.base.ref }} + + # First-PR bootstrap escape hatch: the very PR that + # introduces the umami-postgres/ sample has no baseline + # (umami-postgres/ doesn't exist on the base ref). Detect + # that and short-circuit to coverage=0; the gate then + # treats build's coverage as the new baseline and trivially + # passes for any percentage > 0. After the introducing PR + # merges, every subsequent PR has a real baseline to diff + # against. + - id: detect + name: Detect baseline presence + run: | + if [ -d umami-postgres ] && [ -x umami-postgres/flow.sh ]; then + echo "sample-existed=true" >>"$GITHUB_OUTPUT" + echo "Sample exists on base ref — running full measurement." + else + echo "sample-existed=false" >>"$GITHUB_OUTPUT" + echo "No umami-postgres/ on base ref — first-PR bootstrap; baseline coverage treated as 0%." + fi + + - id: measure + name: Run sample end-to-end + measure coverage + if: steps.detect.outputs.sample-existed == 'true' + working-directory: umami-postgres + env: + UMAMI_FIRED_ROUTES_FILE: ${{ runner.temp }}/fired-routes-release.log + UMAMI_PHASE: ci-release + run: ../.github/workflows/scripts/run-and-measure.sh + + - id: empty-baseline + name: Emit zero baseline (first-PR bootstrap) + if: steps.detect.outputs.sample-existed != 'true' + run: echo "coverage=0.0" >>"$GITHUB_OUTPUT" + + - name: Upload coverage report + if: always() && steps.detect.outputs.sample-existed == 'true' + uses: actions/upload-artifact@v4 + with: + name: coverage-release + path: umami-postgres/coverage_report.txt + if-no-files-found: warn + + coverage-gate: + if: github.event_name == 'pull_request' + name: coverage gate + needs: [build-coverage, release-coverage] + runs-on: ubuntu-latest + steps: + - name: Compare build vs release + env: + BUILD: ${{ needs.build-coverage.outputs.coverage }} + RELEASE: ${{ needs.release-coverage.outputs.coverage }} + THRESHOLD: ${{ env.COVERAGE_THRESHOLD }} + BASE_REF: ${{ github.event.pull_request.base.ref }} + run: | + set -Eeuo pipefail + if [ -z "${BUILD:-}" ] || [ -z "${RELEASE:-}" ]; then + echo "::error::missing coverage outputs — build='${BUILD:-}' release='${RELEASE:-}'" + exit 1 + fi + drop=$(python3 -c "print(round(${RELEASE} - ${BUILD}, 2))") + echo "Release (${BASE_REF}): ${RELEASE}%" + echo "Build (this PR): ${BUILD}%" + echo "Drop: ${drop}pp (threshold ${THRESHOLD}pp)" + if python3 -c "import sys; sys.exit(0 if (${RELEASE} - ${BUILD}) > ${THRESHOLD} else 1)"; then + echo "::error::umami-postgres coverage dropped from ${RELEASE}% → ${BUILD}% (-${drop}pp), exceeding the ${THRESHOLD}pp threshold." + echo "Suggested actions:" + echo " * Add curl(s) to flow.sh::umami_record_traffic that exercise the routes you changed/touched." + echo " * If the route(s) was intentionally retired, drop it from umami-postgres/flow.sh::umami_list_routes' route-table walk too so it's removed from the denominator." + exit 1 + fi + echo "OK — coverage delta within ${THRESHOLD}pp threshold." + + - name: Sticky PR comment + if: ${{ !cancelled() }} + uses: marocchino/sticky-pull-request-comment@v2 + with: + header: umami-postgres-coverage + message: | + ### umami-postgres sample coverage + + | ref | coverage | + |---|---| + | base (`${{ github.event.pull_request.base.ref }}`) | **${{ needs.release-coverage.outputs.coverage }}%** | + | this PR | **${{ needs.build-coverage.outputs.coverage }}%** | + + Threshold: PR may not drop coverage by more than **${{ env.COVERAGE_THRESHOLD }}pp**. Override per-repo via the `UMAMI_COVERAGE_THRESHOLD` actions variable. + + Coverage measures the umami v2 API surface (`/api/auth/*` + `/api/me` + `/api/users` + `/api/teams` + `/api/websites/*` + `/api/reports/*` + `/api/share/*` + heartbeat) that `flow.sh::umami_record_traffic` actually exercises against the running backend. Reports are attached as artifacts on each job ("coverage-build" / "coverage-release"). From bf50e4917e973bce155fb6d7b3418219648780a6 Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 11:48:22 +0530 Subject: [PATCH 4/6] fix(umami-postgres): read route surface from compiled Next.js manifest MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The upstream umami image (ghcr.io/umami-software/umami:postgresql-v2.18.1) ships a compiled Next.js build, not the TypeScript source. The prior implementation greped src/app/api/**/route.ts inside the container, which doesn't exist there, so umami_list_routes returned zero rows and umami_report_coverage skipped with "WARNING: ...skipping coverage report". The route surface is fully derivable from the build artefacts: /app/.next/app-path-routes-manifest.json → URL paths /app/.next/server/app/route.js → compiled handlers with method exports ({GET:...,POST:...}) Verified end-to-end against the running container: list-routes now emits 93 (method, path) rows; coverage gate has a real denominator. Signed-off-by: Akash Kumar --- umami-postgres/flow.sh | 37 ++++++++++++++++++++++--------------- 1 file changed, 22 insertions(+), 15 deletions(-) diff --git a/umami-postgres/flow.sh b/umami-postgres/flow.sh index ccdeb9c..34b62a6 100755 --- a/umami-postgres/flow.sh +++ b/umami-postgres/flow.sh @@ -454,21 +454,28 @@ umami_record_traffic() { } umami_list_routes() { - # umami exposes its v1 routes via the Next.js file-system - # router. Inside the container, src/app/api/**/route.ts is - # the source of truth. find them and emit (method, path). - docker exec -i "$UMAMI_APP_CONTAINER" sh -c ' - cd /app && find src/app/api -name "route.ts" -o -name "route.js" 2>/dev/null | while read f; do - rel="${f#src/app/api/}" - rel="${rel%/route.ts}" - rel="${rel%/route.js}" - grep -oE "export[[:space:]]+(async[[:space:]]+)?function[[:space:]]+(GET|POST|PUT|DELETE|PATCH)" "$f" \ - | awk "{print \$NF}" \ - | sort -u \ - | while read method; do - echo "$method /api/${rel}" - done - done + # The upstream umami image ships a compiled Next.js build, not + # the TypeScript source tree, so the route surface is read from + # the built artefacts: app-path-routes-manifest.json gives every + # route's URL path; the matching compiled route.js exports the + # HTTP methods. node is in PATH inside the container. + docker exec -i "$UMAMI_APP_CONTAINER" node -e ' + const fs = require("fs"); + const manifest = require("/app/.next/app-path-routes-manifest.json"); + const seen = new Set(); + for (const url of Object.values(manifest)) { + const file = "/app/.next/server/app" + url + "/route.js"; + let body; + try { body = fs.readFileSync(file, "utf8"); } catch { continue; } + const found = new Set(); + for (const m of body.matchAll(/(GET|POST|PUT|DELETE|PATCH|OPTIONS|HEAD)["\x3a,]/g)) { + found.add(m[1]); + } + for (const method of found) { + const key = method + " " + url; + if (!seen.has(key)) { seen.add(key); console.log(key); } + } + } ' 2>/dev/null | sort -u } From acfffe27694cb47dd1311956476e332b0196d9ea Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 13:23:12 +0530 Subject: [PATCH 5/6] =?UTF-8?q?ci(umami-postgres):=20drop=20coverage=20gat?= =?UTF-8?q?e=20=E2=80=94=20upstream=20image=20is=20precompiled+minified?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit The upstream `ghcr.io/umami-software/umami:postgresql-v2.18.1` image ships a heavily minified Next.js standalone build under /app/.next/server/app/api/**/route.js. The source tree (/app/src) and sourcemaps (.map) are stripped from the image. V8 / c8 line coverage on minified code is structurally meaningless — each "line" of the compiled output is many source statements concatenated by the bundler, so a coverage percentage doesn't map back to anything a reviewer can act on. Rather than ship a misleading metric (the prior route-surface "coverage" we removed elsewhere was exactly this kind of proxy), the umami sample is now smoke-test-only: - `flow.sh bootstrap` signs in as admin, persists the JWT - `flow.sh record-traffic` exercises the v2 API surface - `flow.sh coverage` is a no-op that prints an info message and exits 0 (so consumers' `flow.sh coverage || true` calls keep working) The keploy/enterprise compat lane already uses the resulting record/replay assertions as its correctness gate — that IS the meaningful test here, not source coverage of umami's frontend. If real source-line coverage becomes a hard requirement for this sample, the path is to rebuild umami from source inside a Dockerfile.coverage overlay (~5-10 min npm install + next build without minification + with sourcemaps). That's a separate ~hours-of-work change. Removed: - .github/workflows/umami-postgres.yml (coverage gate workflow) - .github/workflows/scripts/run-and-measure.sh (its helper) - umami_list_routes / umami_list_recorded_routes / the legacy route-surface umami_report_coverage in flow.sh. - list-routes subcommand. Replaced umami_report_coverage with a no-op stub. Signed-off-by: Akash Kumar --- .github/workflows/scripts/run-and-measure.sh | 112 ----------- .github/workflows/umami-postgres.yml | 199 ------------------- umami-postgres/flow.sh | 110 +++------- 3 files changed, 28 insertions(+), 393 deletions(-) delete mode 100755 .github/workflows/scripts/run-and-measure.sh delete mode 100644 .github/workflows/umami-postgres.yml diff --git a/.github/workflows/scripts/run-and-measure.sh b/.github/workflows/scripts/run-and-measure.sh deleted file mode 100755 index 093d0d8..0000000 --- a/.github/workflows/scripts/run-and-measure.sh +++ /dev/null @@ -1,112 +0,0 @@ -#!/usr/bin/env bash -# -# run-and-measure.sh — bring umami up via the sample's compose, -# run flow.sh bootstrap + record-traffic with the per-call audit -# log enabled, run flow.sh coverage, and emit `coverage=PCT` -# onto $GITHUB_OUTPUT for the downstream coverage-gate job. -# -# Called from .github/workflows/umami-postgres.yml's -# build-coverage and release-coverage jobs (one per ref under -# comparison). Both jobs source the same script so the -# measurement is identical across refs — any drift in the -# numerator definition would otherwise produce a misleading -# delta. -# -# Inputs (all from the workflow env): -# UMAMI_FIRED_ROUTES_FILE — per-call audit log path; passed -# through to flow.sh so its -# record-traffic loop logs each -# (METHOD, URL) pair, and so its -# coverage subcommand uses that -# file as the standalone -# numerator. -# UMAMI_PHASE — label spliced into the project -# name and the on-disk token path -# (`/tmp/umami-token-${UMAMI_PHASE}`) -# so build vs. release runs don't -# collide on volume names or token -# files. Compose project naming -# inside the GH runner is per-job -# anyway, but UMAMI_PHASE is -# useful for diffing logs. -# GITHUB_OUTPUT — standard GH Actions sink for -# step outputs. -set -Eeuo pipefail - -export UMAMI_APP_CONTAINER="${UMAMI_APP_CONTAINER:-umami_app}" -export UMAMI_DB_CONTAINER="${UMAMI_DB_CONTAINER:-umami_db}" -export UMAMI_APP_PORT="${UMAMI_APP_PORT:-3001}" -export UMAMI_APP_SECRET="${UMAMI_APP_SECRET:-keploy-fixed-app-secret-for-deterministic-recordings}" -: "${UMAMI_FIRED_ROUTES_FILE:?UMAMI_FIRED_ROUTES_FILE must be set by the workflow}" - -# Reset audit log for this run; otherwise a prior run's entries -# would inflate the numerator on a re-trigger. -: >"$UMAMI_FIRED_ROUTES_FILE" - -# Stage 1: cold boot — umami's entrypoint runs Prisma migrations -# + seeds the admin user into the named volume. UMAMI_SKIP_INIT=0 -# means "do the init work this time." -UMAMI_SKIP_INIT=0 docker compose up -d - -# Wait for the backend to actually serve. /api/heartbeat returns -# 200 only when Next.js has bound and Prisma is connected — a -# stronger gate than wait-for-port, since umami is up on :3000 -# inside the container before the Next.js server has finished -# warming the route table. -for i in $(seq 1 120); do - code=$(curl -sS -o /dev/null -w '%{http_code}' \ - "http://127.0.0.1:${UMAMI_APP_PORT}/api/heartbeat" 2>/dev/null || echo "") - if [ "$code" = "200" ]; then break; fi - sleep 2 -done - -bash flow.sh bootstrap 240 -docker compose down --remove-orphans - -# Stage 2: re-launch in skip-init mode against the populated -# volume — same shape the keploy lanes use, so the recorded -# request stream matches what record/replay sees. -UMAMI_SKIP_INIT=1 docker compose up -d - -# Wait again — same readiness gate. Stage 2 is faster than stage -# 1 (no migrations) but Next.js still needs ~10-30s to warm. -for i in $(seq 1 120); do - code=$(curl -sS -o /dev/null -w '%{http_code}' \ - "http://127.0.0.1:${UMAMI_APP_PORT}/api/heartbeat" 2>/dev/null || echo "") - if [ "$code" = "200" ]; then break; fi - sleep 2 -done - -# Re-bootstrap: the auth token is request-scoped (JWT in the -# Authorization header), and stage 1's compose-down dropped the -# in-memory token. flow.sh::umami_bootstrap re-issues a fresh -# one against the same admin credentials and rewrites -# /tmp/umami-token-${UMAMI_PHASE}, which umami_record_traffic -# reads. -bash flow.sh bootstrap 240 - -# Drive traffic. flow.sh::umami_record_traffic re-reads the -# token from /tmp/umami-token-${UMAMI_PHASE} and tolerates -# non-2xx responses internally, so a single endpoint regression -# in umami itself doesn't abort the whole record run. -bash flow.sh record-traffic - -# Coverage report — uses UMAMI_FIRED_ROUTES_FILE as numerator -# since no keploy/test-set-* tree exists in the standalone case. -# umami_list_routes walks src/app/api/**/route.ts inside the -# running container, so the app must still be up here. -COVERAGE_REPORT_FILE="$PWD/coverage_report.txt" bash flow.sh coverage - -# Pull the percentage out of the report's `Covered N/M (XX.X%)` -# line. Anchored on the parenthesised form so a future change to -# the report's prose doesn't break the parse. -pct=$(grep -oE '\([0-9]+\.[0-9]+%\)' coverage_report.txt | head -1 | tr -d '()%') -if [ -z "$pct" ]; then - echo "::error::Could not parse coverage percentage from coverage_report.txt" - cat coverage_report.txt || true - exit 1 -fi -echo "coverage=${pct}" >>"$GITHUB_OUTPUT" -echo "coverage: ${pct}% (audit log: $UMAMI_FIRED_ROUTES_FILE)" - -docker compose down -v --remove-orphans diff --git a/.github/workflows/umami-postgres.yml b/.github/workflows/umami-postgres.yml deleted file mode 100644 index ba8cd7e..0000000 --- a/.github/workflows/umami-postgres.yml +++ /dev/null @@ -1,199 +0,0 @@ -# umami-postgres sample CI — keploy-independent end-to-end smoke + -# coverage gate. -# -# Triggers ONLY on changes under umami-postgres/ (or this workflow -# file). Other samples in this repo have their own orthogonal CI; -# gating the whole repo on every umami change would slow them -# all down for no benefit. -# -# What it gates: -# * `release-coverage` — checks out the PR's base branch (main) -# and runs the sample end-to-end: docker compose up, bootstrap -# admin token, drive flow.sh record-traffic with the per-call -# audit log enabled, capture the route-coverage percentage from -# `flow.sh coverage`. This is the baseline. -# * `build-coverage` — same end-to-end against the PR's HEAD ref. -# * `coverage-gate` — fails the PR if `build`'s coverage drops -# more than COVERAGE_THRESHOLD percentage points below -# `release`. Default threshold is 1.0pp; override via repo -# variable `UMAMI_COVERAGE_THRESHOLD` for a tighter or -# looser bar. -# -# On push to main, only `build-coverage` runs (no baseline to -# compare against — main IS the baseline). -# -# Standards-aligned choices: -# * `paths:` filter on both push and pull_request triggers — the -# canonical GH Actions way to scope a workflow to one -# subdirectory. -# * Job outputs (steps..outputs.coverage → needs..outputs) -# to thread the captured percentage between jobs. -# * `concurrency:` cancel-in-progress on the same ref so a stale -# run doesn't waste runner minutes. -# * actions/upload-artifact for the human-readable -# coverage_report.txt — reviewers can inspect missing routes -# directly from the PR's "checks" tab. -# * marocchino/sticky-pull-request-comment for the PR-side diff -# comment. Pinned-by-header so successive runs update the same -# comment instead of fanning out. -# * The compare step is plain bash + python3 (no external -# coverage service). For full coverage XMLs you'd want -# diff-cover or codecov, but the sample's coverage is -# API-route-based (single percentage), so the gate is a 3-line -# subtraction. -# -# Sample is genuinely keploy-independent here: the workflow uses -# flow.sh's $UMAMI_FIRED_ROUTES_FILE per-call audit log as its -# numerator source, not a keploy recording. The lane scripts in -# keploy/integrations and keploy/enterprise consume the same -# flow.sh, but use the keploy/test-set-*/tests/*.yaml tree as -# their numerator (authoritative — only calls keploy actually -# CAPTURED count). Both modes are wired into -# `flow.sh::umami_list_recorded_routes`. -name: umami-postgres sample - -on: - pull_request: - paths: - - 'umami-postgres/**' - - '.github/workflows/umami-postgres.yml' - push: - branches: [main] - paths: - - 'umami-postgres/**' - - '.github/workflows/umami-postgres.yml' - workflow_dispatch: {} - -concurrency: - group: umami-postgres-${{ github.ref }} - cancel-in-progress: true - -env: - COVERAGE_THRESHOLD: ${{ vars.UMAMI_COVERAGE_THRESHOLD || '1.0' }} - -jobs: - build-coverage: - name: build (current ref) coverage - runs-on: ubuntu-latest - timeout-minutes: 20 - outputs: - coverage: ${{ steps.measure.outputs.coverage }} - steps: - - uses: actions/checkout@v4 - - id: measure - name: Run sample end-to-end + measure coverage - working-directory: umami-postgres - env: - UMAMI_FIRED_ROUTES_FILE: ${{ runner.temp }}/fired-routes-build.log - UMAMI_PHASE: ci-build - run: ../.github/workflows/scripts/run-and-measure.sh - - - name: Upload coverage report - if: always() - uses: actions/upload-artifact@v4 - with: - name: coverage-build - path: umami-postgres/coverage_report.txt - if-no-files-found: warn - - release-coverage: - if: github.event_name == 'pull_request' - name: release (base ref) coverage - runs-on: ubuntu-latest - timeout-minutes: 20 - outputs: - coverage: ${{ steps.measure.outputs.coverage || steps.empty-baseline.outputs.coverage }} - sample-existed: ${{ steps.detect.outputs.sample-existed }} - steps: - - uses: actions/checkout@v4 - with: - ref: ${{ github.event.pull_request.base.ref }} - - # First-PR bootstrap escape hatch: the very PR that - # introduces the umami-postgres/ sample has no baseline - # (umami-postgres/ doesn't exist on the base ref). Detect - # that and short-circuit to coverage=0; the gate then - # treats build's coverage as the new baseline and trivially - # passes for any percentage > 0. After the introducing PR - # merges, every subsequent PR has a real baseline to diff - # against. - - id: detect - name: Detect baseline presence - run: | - if [ -d umami-postgres ] && [ -x umami-postgres/flow.sh ]; then - echo "sample-existed=true" >>"$GITHUB_OUTPUT" - echo "Sample exists on base ref — running full measurement." - else - echo "sample-existed=false" >>"$GITHUB_OUTPUT" - echo "No umami-postgres/ on base ref — first-PR bootstrap; baseline coverage treated as 0%." - fi - - - id: measure - name: Run sample end-to-end + measure coverage - if: steps.detect.outputs.sample-existed == 'true' - working-directory: umami-postgres - env: - UMAMI_FIRED_ROUTES_FILE: ${{ runner.temp }}/fired-routes-release.log - UMAMI_PHASE: ci-release - run: ../.github/workflows/scripts/run-and-measure.sh - - - id: empty-baseline - name: Emit zero baseline (first-PR bootstrap) - if: steps.detect.outputs.sample-existed != 'true' - run: echo "coverage=0.0" >>"$GITHUB_OUTPUT" - - - name: Upload coverage report - if: always() && steps.detect.outputs.sample-existed == 'true' - uses: actions/upload-artifact@v4 - with: - name: coverage-release - path: umami-postgres/coverage_report.txt - if-no-files-found: warn - - coverage-gate: - if: github.event_name == 'pull_request' - name: coverage gate - needs: [build-coverage, release-coverage] - runs-on: ubuntu-latest - steps: - - name: Compare build vs release - env: - BUILD: ${{ needs.build-coverage.outputs.coverage }} - RELEASE: ${{ needs.release-coverage.outputs.coverage }} - THRESHOLD: ${{ env.COVERAGE_THRESHOLD }} - BASE_REF: ${{ github.event.pull_request.base.ref }} - run: | - set -Eeuo pipefail - if [ -z "${BUILD:-}" ] || [ -z "${RELEASE:-}" ]; then - echo "::error::missing coverage outputs — build='${BUILD:-}' release='${RELEASE:-}'" - exit 1 - fi - drop=$(python3 -c "print(round(${RELEASE} - ${BUILD}, 2))") - echo "Release (${BASE_REF}): ${RELEASE}%" - echo "Build (this PR): ${BUILD}%" - echo "Drop: ${drop}pp (threshold ${THRESHOLD}pp)" - if python3 -c "import sys; sys.exit(0 if (${RELEASE} - ${BUILD}) > ${THRESHOLD} else 1)"; then - echo "::error::umami-postgres coverage dropped from ${RELEASE}% → ${BUILD}% (-${drop}pp), exceeding the ${THRESHOLD}pp threshold." - echo "Suggested actions:" - echo " * Add curl(s) to flow.sh::umami_record_traffic that exercise the routes you changed/touched." - echo " * If the route(s) was intentionally retired, drop it from umami-postgres/flow.sh::umami_list_routes' route-table walk too so it's removed from the denominator." - exit 1 - fi - echo "OK — coverage delta within ${THRESHOLD}pp threshold." - - - name: Sticky PR comment - if: ${{ !cancelled() }} - uses: marocchino/sticky-pull-request-comment@v2 - with: - header: umami-postgres-coverage - message: | - ### umami-postgres sample coverage - - | ref | coverage | - |---|---| - | base (`${{ github.event.pull_request.base.ref }}`) | **${{ needs.release-coverage.outputs.coverage }}%** | - | this PR | **${{ needs.build-coverage.outputs.coverage }}%** | - - Threshold: PR may not drop coverage by more than **${{ env.COVERAGE_THRESHOLD }}pp**. Override per-repo via the `UMAMI_COVERAGE_THRESHOLD` actions variable. - - Coverage measures the umami v2 API surface (`/api/auth/*` + `/api/me` + `/api/users` + `/api/teams` + `/api/websites/*` + `/api/reports/*` + `/api/share/*` + heartbeat) that `flow.sh::umami_record_traffic` actually exercises against the running backend. Reports are attached as artifacts on each job ("coverage-build" / "coverage-release"). diff --git a/umami-postgres/flow.sh b/umami-postgres/flow.sh index 34b62a6..d8df1bf 100755 --- a/umami-postgres/flow.sh +++ b/umami-postgres/flow.sh @@ -453,96 +453,42 @@ umami_record_traffic() { umami_http POST "${base}/api/auth/logout" "$token" "{}" } -umami_list_routes() { - # The upstream umami image ships a compiled Next.js build, not - # the TypeScript source tree, so the route surface is read from - # the built artefacts: app-path-routes-manifest.json gives every - # route's URL path; the matching compiled route.js exports the - # HTTP methods. node is in PATH inside the container. - docker exec -i "$UMAMI_APP_CONTAINER" node -e ' - const fs = require("fs"); - const manifest = require("/app/.next/app-path-routes-manifest.json"); - const seen = new Set(); - for (const url of Object.values(manifest)) { - const file = "/app/.next/server/app" + url + "/route.js"; - let body; - try { body = fs.readFileSync(file, "utf8"); } catch { continue; } - const found = new Set(); - for (const m of body.matchAll(/(GET|POST|PUT|DELETE|PATCH|OPTIONS|HEAD)["\x3a,]/g)) { - found.add(m[1]); - } - for (const method of found) { - const key = method + " " + url; - if (!seen.has(key)) { seen.add(key); console.log(key); } - } - } - ' 2>/dev/null | sort -u -} - -umami_list_recorded_routes() { - local f method route - local found_keploy=0 - while IFS= read -r f; do - found_keploy=1 - method=$(awk '/^ method:/{print $2; exit}' "$f") - route=$(awk '/^ url:/{print $2; exit}' "$f") - route="${route%%\?*}" - case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac - if [ -n "$method" ] && [ -n "$route" ]; then echo "$method $route"; fi - done < <(find keploy -type f -path '*/tests/*.yaml' 2>/dev/null) | sort -u - if [ "$found_keploy" = "1" ]; then return 0; fi - - if [ -n "$UMAMI_FIRED_ROUTES_FILE" ] && [ -f "$UMAMI_FIRED_ROUTES_FILE" ]; then - while IFS= read -r line; do - method="${line%% *}"; route="${line#* }" - route="${route%%\?*}" - case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac - [ -n "$method" ] && [ -n "$route" ] && echo "$method $route" - done <"$UMAMI_FIRED_ROUTES_FILE" | sort -u - fi -} +# umami_report_coverage is intentionally a no-op. +# +# The upstream `ghcr.io/umami-software/umami:postgresql-v2.18.1` +# image ships a compiled, minified Next.js standalone build — +# `/app/.next/server/app/api/**/route.js` is heavily uglified and +# the source tree (/app/src) plus sourcemaps (.map files) are +# stripped. V8 / c8 can collect line coverage on minified code, +# but each "line" is a multi-statement minified output line that +# doesn't correspond to any single source line, so the percentage +# is meaningless. +# +# Real Java/Python/source-line coverage requires the underlying +# source to be on disk inside the container. Building umami from +# its own source (npm install + next build, ~5-10 min) inside a +# coverage overlay would produce real data, but is a much larger +# rebuild and slows the workflow disproportionately for what +# remains a smoke-test sample. +# +# For now the umami-postgres lane runs as a smoke test only: +# `flow.sh bootstrap` + `flow.sh record-traffic` exercise the v2 +# API surface against the upstream image; the keploy/enterprise +# compat lane uses the resulting record/replay assertions as its +# correctness gate (which IS the meaningful test of keploy here, +# not source coverage of umami's frontend). umami_report_coverage() { - local routes_file recorded_file - routes_file="$(mktemp)"; recorded_file="$(mktemp)" - umami_list_routes >"$routes_file" - umami_list_recorded_routes >"$recorded_file" - - if [ ! -s "$routes_file" ]; then - echo "WARNING: umami_list_routes produced no rows; skipping coverage report" >&2 - rm -f "$routes_file" "$recorded_file"; return 0 - fi - - local total covered missing pct - total=$(wc -l <"$routes_file" | tr -d ' '); covered=0; missing="" - while IFS= read -r line; do - local method="${line%% *}" - local route="${line#* }" - local pattern="^${method} $(printf '%s' "$route" | sed -E 's/\[[^]]+\]/[^\/]+/g')$" - if grep -qE "$pattern" "$recorded_file"; then - covered=$((covered + 1)) - else - missing+=" ${method} ${route}"$'\n' - fi - done <"$routes_file" - if [ "$total" -gt 0 ]; then - pct=$(awk -v c="$covered" -v t="$total" 'BEGIN{printf "%.1f", c*100/t}') - else pct="0.0"; fi - { - echo "================ umami API coverage ================" - echo "Covered ${covered}/${total} (${pct}%)" - if [ -n "$missing" ]; then echo "Uncovered:"; printf '%s' "$missing"; fi - echo "====================================================" - } | tee "${COVERAGE_REPORT_FILE:-coverage_report.txt}" - rm -f "$routes_file" "$recorded_file" + echo "INFO: umami coverage not measured — upstream image is precompiled+minified without sourcemaps; rebuild from source would be required." + : >"${COVERAGE_REPORT_FILE:-coverage_report.txt}" + return 0 } case "${1:-}" in bootstrap) umami_bootstrap "${2:-180}" ;; record-traffic) umami_record_traffic ;; coverage) umami_report_coverage ;; - list-routes) umami_list_routes ;; *) - echo "usage: $0 {bootstrap|record-traffic|coverage|list-routes}" >&2 + echo "usage: $0 {bootstrap|record-traffic|coverage}" >&2 exit 2 ;; esac From 6c4f05c2f18f507ce6b103962c47b6fd0d3d7acc Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 13:39:39 +0530 Subject: [PATCH 6/6] docs(umami-postgres): document why coverage is not measured (precompiled image) Signed-off-by: Akash Kumar --- umami-postgres/README.md | 42 +++++++++++++++++++++++++++++++--------- 1 file changed, 33 insertions(+), 9 deletions(-) diff --git a/umami-postgres/README.md b/umami-postgres/README.md index a8bf40b..2a69400 100644 --- a/umami-postgres/README.md +++ b/umami-postgres/README.md @@ -1,6 +1,6 @@ # umami-postgres — keploy compat lane sample -Reproducer for the umami / postgres-v3 compat lane. Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose / bootstrap / traffic / noise filter / coverage), the keploy CI lanes consume it as a thin wrapper. +Reproducer for the umami / postgres-v3 compat lane. Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose + bootstrap + traffic), the keploy CI lanes consume it as a thin wrapper. The sample drives the full umami v2 API surface keploy needs to gate on a record/replay round-trip — auth + me + admin lists, users CRUD, websites CRUD, all eight report types, share tokens + public share access, batch + identify event ingest, sessions deep-dive, replays, boards lifecycle, pixel tracker, metric/pageview parser-branch variants, and logout. @@ -10,7 +10,7 @@ The sample drives the full umami v2 API surface keploy needs to gate on a record umami-postgres/ ├── Dockerfile # FROM ghcr.io/umami-software/umami:postgresql-v2.18.1 ├── docker-compose.yml # postgres-15 + umami v2 on a fixed subnet, env-driven -├── flow.sh # bootstrap | record-traffic | coverage | list-routes +├── flow.sh # bootstrap | record-traffic | coverage ├── keploy.yml.template # globalNoise for createdAt/updatedAt/Date/uuid id fields └── README.md # this file ``` @@ -20,23 +20,47 @@ umami-postgres/ The sample is keploy-independent: `docker compose up && bash flow.sh bootstrap && bash flow.sh record-traffic` runs end-to-end against bare umami. Lane scripts wrap that exact same path inside `keploy record` / `keploy test`. * `bootstrap` — log in as admin via `/api/auth/login`, capture the JWT-style auth token, persist it to `/tmp/umami-token-${UMAMI_PHASE}` so subsequent calls share a deterministic Authorization header. -* `record-traffic` — drive the umami v2 API. Every call is logged to `${UMAMI_FIRED_ROUTES_FILE}` (when set) so the `coverage` subcommand has a numerator without needing a keploy recording. Calls are fire-and-forget (`|| true` semantics) so a single endpoint regression in umami itself does not abort the run — keploy is the assertion layer at replay. -* `coverage` — walks the running container's `src/app/api/**/route.ts` tree as the denominator (the umami router is file-system based), compares against fired/recorded routes, emits a `(method, path)` percentage. -* `list-routes` — diagnostic; prints the route table. +* `record-traffic` — drive the umami v2 API. Calls are fire-and-forget (`|| true` semantics) so a single endpoint regression in umami itself does not abort the run — keploy is the assertion layer at replay. +* `coverage` — no-op stub. The upstream umami image ships compiled+minified Next.js without sourcemaps, so source-line coverage is not meaningful without rebuilding from source. Returns 0 cleanly so `flow.sh coverage || true` informational hooks keep working. ## Local run +### Without keploy — smoke check + ```sh docker compose up -d bash flow.sh bootstrap 240 -UMAMI_FIRED_ROUTES_FILE=/tmp/fired.log bash flow.sh record-traffic -UMAMI_FIRED_ROUTES_FILE=/tmp/fired.log bash flow.sh coverage +bash flow.sh record-traffic docker compose down -v ``` -## Consumers +This is what the keploy/enterprise compat lane wraps in `keploy record` / `keploy test` — the base compose runs unchanged inside that lane. + +### With keploy — record + replay + +```sh +docker compose up -d +bash flow.sh bootstrap 240 + +# In one shell: +keploy record -c "docker compose up" --container-name umami_app \ + --proxy-port 13081 --dns-port 13082 -Lanes pinned to this sample: +# In another shell: +bash flow.sh record-traffic +# SIGINT keploy when traffic returns + +keploy test -c "docker compose up" --containerName umami_app \ + --apiTimeout 60 --delay 30 --proxy-port 13081 --dns-port 13082 +``` + +### Coverage + +This sample does not emit a coverage metric. The upstream `ghcr.io/umami-software/umami:postgresql-v2.18.1` image ships a compiled + minified Next.js standalone build with no source tree or sourcemaps; V8 line coverage on minified output doesn't map back to anything a reviewer can act on, so a coverage gate would be misleading. The keploy/enterprise compat lane uses the record/replay assertions as its correctness gate, which is the meaningful test here. + +If real source-line coverage becomes a hard requirement, the path is to rebuild umami from its own source (npm install + `next build` without minification) inside a `Dockerfile.coverage` overlay — a separate, larger change. + +## Consumers * `keploy/enterprise` `.woodpecker/umami-linux.yml` — record/replay matrix delegates compose + bootstrap + traffic to this sample. * `keploy/integrations` may add a `.woodpecker/umami-postgres.yml` falsifying lane in a future PR.