From eca4020dccb9149fcf735618388d5f4e4c59c58c Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 06:28:37 +0530 Subject: [PATCH 1/7] feat(restheart-mongo): keploy compat lane sample (scaffold) MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Mirrors the doccano-django sample shape: the sample owns orchestration (compose / bootstrap / traffic / coverage), keploy CI lanes consume it as a thin wrapper. This is a SCAFFOLD — the full traffic loop driven by the existing keploy/enterprise lane (`compat_trigger_record_traffic` in .ci/scripts/restheart-linux.sh, ~600 lines covering CRUD on // + GraphQL + files + ACL + users + bulk + aggregations) needs to be ported into flow.sh::restheart_record_traffic in a follow-up. The current loop is deliberately minimal (CRUD on a seed collection) which is enough to prove the sample boots end-to-end without keploy. Layout: Dockerfile — pin to softinstigate/restheart:9.2.1 docker-compose.yml — mongo:7 + restheart:9.2.1, env-driven flow.sh — bootstrap | record-traffic | coverage | list-routes keploy.yml.template — globalNoise for _etag/_oid/lastModified/Date README.md — handoff + status notes Signed-off-by: Akash Kumar --- restheart-mongo/Dockerfile | 7 + restheart-mongo/README.md | 49 +++++++ restheart-mongo/docker-compose.yml | 47 +++++++ restheart-mongo/flow.sh | 208 ++++++++++++++++++++++++++++ restheart-mongo/keploy.yml.template | 21 +++ 5 files changed, 332 insertions(+) create mode 100644 restheart-mongo/Dockerfile create mode 100644 restheart-mongo/README.md create mode 100644 restheart-mongo/docker-compose.yml create mode 100644 restheart-mongo/flow.sh create mode 100644 restheart-mongo/keploy.yml.template diff --git a/restheart-mongo/Dockerfile b/restheart-mongo/Dockerfile new file mode 100644 index 00000000..b51ca35b --- /dev/null +++ b/restheart-mongo/Dockerfile @@ -0,0 +1,7 @@ +# Thin wrapper around RESTHeart's official image at the version +# this sample tracks. 
Pin lives here so a future RESTHeart release +# is a one-line retag, not a hunt across keploy CI lanes. +# +# Upstream: https://github.com/SoftInstigate/restheart +# Image: docker.io/softinstigate/restheart:9.2.1 +FROM softinstigate/restheart:9.2.1 diff --git a/restheart-mongo/README.md b/restheart-mongo/README.md new file mode 100644 index 00000000..d4b09e3d --- /dev/null +++ b/restheart-mongo/README.md @@ -0,0 +1,49 @@ +# restheart-mongo — keploy compat lane sample (work in progress) + +Minimum reproducer scaffold for the RESTHeart / MongoDB compat lane. Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose / bootstrap / traffic / noise filter / coverage), keploy CI lanes consume it as a thin wrapper. + +## Status + +**This is a SCAFFOLD.** The compose, bootstrap, and a minimal record-traffic loop work end-to-end against bare RESTHeart without keploy in the picture. The full traffic loop the existing keploy/enterprise lane drives (`compat_trigger_record_traffic` in `enterprise/.ci/scripts/restheart-linux.sh`, ~600 lines covering CRUD on `//` + GraphQL + files + ACL + users + bulk + aggregations) has **not been ported** into `flow.sh::restheart_record_traffic` yet. Lanes consuming this sample today should either: + +1. Port the missing curls into `flow.sh::restheart_record_traffic` (preferred — that's the migration this scaffold is designed around). +2. Or call into `enterprise/.ci/scripts/restheart-linux.sh::compat_trigger_record_traffic` between `flow.sh bootstrap` and `flow.sh coverage` until the migration completes. + +See the migration plan in this PR's description / linked issue. 
+ +## Layout + +``` +restheart-mongo/ +├── Dockerfile # FROM softinstigate/restheart:9.2.1 +├── docker-compose.yml # mongo:7 + restheart:9.2.1, fixed subnet, env-driven +├── flow.sh # bootstrap | record-traffic | coverage | list-routes +├── keploy.yml.template # globalNoise for _etag/_oid/lastModified/Date +└── README.md # this file +``` + +## Contract + +The sample is keploy-independent: `docker compose up && bash flow.sh bootstrap && bash flow.sh record-traffic` runs end-to-end against bare RESTHeart. Lane scripts wrap that exact same path inside `keploy record` / `keploy test`. + +* `bootstrap` — wait for RESTHeart to start serving, PUT the test database + collection so subsequent reads have something to find. +* `record-traffic` — drive RESTHeart's REST surface. Every call is logged to `${RESTHEART_FIRED_ROUTES_FILE}` (when set) so `coverage` has a numerator without a keploy recording. +* `coverage` — emits `(method, path)` coverage. Denominator is curated from RESTHeart's pattern-based mount table (see `restheart_list_routes` in `flow.sh`); not file-system-derivable like Next.js, so the list lives in source and must be updated alongside `record-traffic`. +* `list-routes` — diagnostic; prints the route table. + +## Local run + +```sh +docker compose up -d +bash flow.sh bootstrap 240 +RESTHEART_FIRED_ROUTES_FILE=/tmp/fired.log bash flow.sh record-traffic +RESTHEART_FIRED_ROUTES_FILE=/tmp/fired.log bash flow.sh coverage +docker compose down -v +``` + +## Consumers + +Lanes pinning to this sample (pinned via `--branch feat/restheart-mongo-sample` until merge): + +* `keploy/enterprise` `.woodpecker/restheart-linux.yml` — being slimmed in a follow-up PR. +* No `keploy/integrations` consumer today; could be added if a RESTHeart-flavoured Mongo wire bug surfaces. 
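The coverage contract the README above describes hinges on one mechanism: each curated route pattern from `restheart_list_routes` has its `{param}` placeholders converted to a regex and is grepped against the recorded `METHOD /path` lines. A minimal standalone sketch of that match, using the same `sed` substitution as `flow.sh` (the sample route and recorded path below are illustrative, not taken from a real recording):

```shell
# Sketch of the coverage matcher: turn a curated route pattern
# like "GET /{db}/{coll}/_size" into an anchored regex where each
# {param} placeholder matches one path segment, then grep it
# against a recorded "METHOD /path" line.
route="GET /{db}/{coll}/_size"
method="${route%% *}"
path="${route#* }"

# {db} and {coll} become [^/]+ — one non-empty, slash-free segment.
pattern="^${method} $(printf '%s' "$path" | sed -E 's/\{[^}]+\}/[^\/]+/g')$"

# A recorded line with concrete db/collection names matches:
printf 'GET /restheart/items/_size\n' | grep -qE "$pattern" && echo "covered"
```

Because each placeholder matches exactly one segment, `GET /restheart/items` or `GET /restheart/items/sub/_size` would not satisfy this pattern, which keeps the numerator honest when new routes are added to the denominator.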
diff --git a/restheart-mongo/docker-compose.yml b/restheart-mongo/docker-compose.yml new file mode 100644 index 00000000..0e5b778d --- /dev/null +++ b/restheart-mongo/docker-compose.yml @@ -0,0 +1,47 @@ +# restheart-mongo sample compose. RESTHeart 9.x + MongoDB 7 on a +# fixed subnet, every name env-driven so multiple matrix cells +# can run in parallel on the same docker daemon. +services: + restheart: + build: + context: . + dockerfile: Dockerfile + container_name: ${RESTHEART_APP_CONTAINER:-restheart_app} + init: true + stop_grace_period: 5s + ports: + - "${RESTHEART_APP_PORT:-8080}:8080" + environment: + RHO: > + /mclient/connection-string->"mongodb://${RESTHEART_MONGO_IP:-172.36.0.10}:27017", + /core/log-level->"INFO" + depends_on: + mongo: + condition: service_healthy + networks: + - restheart-net + + mongo: + image: mongo:7 + container_name: ${RESTHEART_MONGO_CONTAINER:-restheart_mongo} + stop_grace_period: 5s + healthcheck: + test: ["CMD", "mongosh", "--quiet", "--eval", "db.adminCommand('ping').ok"] + interval: 5s + timeout: 5s + retries: 20 + volumes: + - restheart-mongo-data:/data/db + networks: + restheart-net: + ipv4_address: ${RESTHEART_MONGO_IP:-172.36.0.10} + +networks: + restheart-net: + driver: bridge + ipam: + config: + - subnet: ${RESTHEART_NETWORK_SUBNET:-172.36.0.0/24} + +volumes: + restheart-mongo-data: diff --git a/restheart-mongo/flow.sh b/restheart-mongo/flow.sh new file mode 100644 index 00000000..1f3572c9 --- /dev/null +++ b/restheart-mongo/flow.sh @@ -0,0 +1,208 @@ +#!/usr/bin/env bash +# +# flow.sh — keploy-independent orchestration for the +# restheart-mongo sample. Modeled on +# samples-python/doccano-django/flow.sh. +# +# Subcommands: +# bootstrap — RESTHeart's default config has no admin auth +# setup needed; the bootstrap step here just +# creates the test database and seed +# collections so subsequent reads have +# something to find. +# record-traffic — drive RESTHeart's REST surface (Mongo / GraphQL +# / files / users / acl). 
Fire-and-forget; +# keploy is the assertion layer at replay. +# coverage — report (method, path) coverage. Denominator is +# derived from RESTHeart's known route-mounts +# (see SCOPE_PATHS in restheart_list_routes). +# list-routes — print the route table the coverage report +# uses as its denominator. +# +# HANDOFF NOTE: SCAFFOLD. The full traffic loop the existing keploy +# lane drives (`compat_trigger_record_traffic` in +# enterprise/.ci/scripts/restheart-linux.sh, ~600 lines covering +# CRUD on // + GraphQL + files + ACL + users + bulk + +# aggregations) needs to be ported into +# `restheart_record_traffic` here. The stub below covers enough +# to prove the sample boots end-to-end without keploy. See the +# migration plan in the PR description / linked issue. +set -Eeuo pipefail + +RESTHEART_APP_PORT="${RESTHEART_APP_PORT:-8080}" +RESTHEART_APP_CONTAINER="${RESTHEART_APP_CONTAINER:-restheart_app}" +RESTHEART_MONGO_CONTAINER="${RESTHEART_MONGO_CONTAINER:-restheart_mongo}" +RESTHEART_DB="${RESTHEART_DB:-keploy}" +RESTHEART_PHASE="${RESTHEART_PHASE:-local}" +RESTHEART_FIRED_ROUTES_FILE="${RESTHEART_FIRED_ROUTES_FILE:-}" + +# RESTHeart 9.x ships with an admin user (admin/secret) for protected +# endpoints; the unauthenticated paths are fine for the smoke set we +# drive in record-traffic. Override RESTHEART_ADMIN_AUTH to add +# `Authorization: Basic ` to authenticated calls when porting +# the full lane traffic. 
+RESTHEART_ADMIN_AUTH="${RESTHEART_ADMIN_AUTH:-Basic YWRtaW46c2VjcmV0}" + +base="http://127.0.0.1:${RESTHEART_APP_PORT}" +h_json='Content-Type: application/json' + +log_fired() { + [ -z "$RESTHEART_FIRED_ROUTES_FILE" ] && return 0 + printf '%s %s\n' "$1" "$2" >>"$RESTHEART_FIRED_ROUTES_FILE" +} + +restheart_wait_for_app() { + local timeout=${1:-180} + local start_ts code + start_ts=$(date +%s) + while true; do + code=$(curl -sS -o /dev/null -w '%{http_code}' "${base}/" 2>/dev/null || echo "") + # 401 (auth required on root) is a SUCCESS signal — it + # means RESTHeart is up and responding to HTTP. + if [ "$code" = "200" ] || [ "$code" = "401" ]; then return 0; fi + if [ $(( $(date +%s) - start_ts )) -ge "$timeout" ]; then + echo "restheart_wait_for_app: timed out (last code: ${code:-})" >&2 + return 1 + fi + sleep 2 + done +} + +restheart_bootstrap() { + local timeout=${1:-180} + restheart_wait_for_app "$timeout" + + # Create the test database. PUT on / is idempotent — + # 201 first time, 200 on subsequent runs. + curl -sS -o /dev/null -H "$RESTHEART_ADMIN_AUTH" -X PUT "${base}/${RESTHEART_DB}" || true + # Seed a collection so reads have something to find. + curl -sS -o /dev/null -H "$RESTHEART_ADMIN_AUTH" -X PUT "${base}/${RESTHEART_DB}/items" || true + echo "restheart_bootstrap: db=${RESTHEART_DB} ready" +} + +restheart_record_traffic() { + restheart_wait_for_app 60 + + log_fired GET "$base/" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/" >/dev/null || true + + log_fired GET "$base/${RESTHEART_DB}" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}" >/dev/null || true + + log_fired GET "$base/${RESTHEART_DB}/items" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}/items" >/dev/null || true + + # Insert a document. 
+ log_fired POST "$base/${RESTHEART_DB}/items" + curl -fsS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X POST \ + "$base/${RESTHEART_DB}/items" \ + -d "{\"_id\":\"keploy-${RESTHEART_PHASE}\",\"name\":\"sample item\",\"score\":42}" >/dev/null || true + + # Read it back. + log_fired GET "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}" + curl -sS -H "$RESTHEART_ADMIN_AUTH" \ + "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}" >/dev/null || true + + # Update it. + log_fired PATCH "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}" + curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PATCH \ + "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}" \ + -d '{"$set":{"score":100}}' >/dev/null || true + + # Aggregation surface. + log_fired GET "$base/${RESTHEART_DB}/items/_size" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}/items/_size" >/dev/null || true + log_fired GET "$base/${RESTHEART_DB}/_meta" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}/_meta" >/dev/null || true +} + +# RESTHeart's routes are pattern-mount based, not file-system +# based. The denominator is curated here from the upstream docs + +# the routes the lane intends to exercise. Update this list when +# adding new traffic to record-traffic so the coverage stays in +# lockstep. 
restheart_list_routes() {
+    cat <<'ROUTES'
+GET /
+GET /{db}
+PUT /{db}
+DELETE /{db}
+GET /{db}/_meta
+GET /{db}/{coll}
+PUT /{db}/{coll}
+DELETE /{db}/{coll}
+POST /{db}/{coll}
+GET /{db}/{coll}/{docid}
+PUT /{db}/{coll}/{docid}
+PATCH /{db}/{coll}/{docid}
+DELETE /{db}/{coll}/{docid}
+GET /{db}/{coll}/_size
+GET /{db}/{coll}/_aggrs/{name}
+GET /{db}/{coll}/_indexes
+ROUTES
+}
+
+restheart_list_recorded_routes() {
+    local f method route
+    # Prefer keploy recordings when they exist. Note: piping the
+    # while loop into `sort -u` runs the loop body in a subshell,
+    # so a flag set inside it never reaches the parent shell;
+    # detect the presence of recordings up front instead.
+    if find keploy -type f -path '*/tests/*.yaml' 2>/dev/null | grep -q .; then
+        while IFS= read -r f; do
+            method=$(awk '/^ method:/{print $2; exit}' "$f")
+            route=$(awk '/^ url:/{print $2; exit}' "$f")
+            route="${route%%\?*}"
+            case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac
+            if [ -n "$method" ] && [ -n "$route" ]; then echo "$method $route"; fi
+        done < <(find keploy -type f -path '*/tests/*.yaml' 2>/dev/null) | sort -u
+        return 0
+    fi
+
+    if [ -n "$RESTHEART_FIRED_ROUTES_FILE" ] && [ -f "$RESTHEART_FIRED_ROUTES_FILE" ]; then
+        while IFS= read -r line; do
+            method="${line%% *}"; route="${line#* }"
+            route="${route%%\?*}"
+            case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac
+            [ -n "$method" ] && [ -n "$route" ] && echo "$method $route"
+        done <"$RESTHEART_FIRED_ROUTES_FILE" | sort -u
+    fi
+}
+
+restheart_report_coverage() {
+    local routes_file recorded_file
+    routes_file="$(mktemp)"; recorded_file="$(mktemp)"
+    restheart_list_routes >"$routes_file"
+    restheart_list_recorded_routes >"$recorded_file"
+
+    local total covered missing pct
+    total=$(wc -l <"$routes_file" | tr -d ' '); covered=0; missing=""
+    while IFS= read -r line; do
+        local method="${line%% *}"
+        local route="${line#* }"
+        # Replace {param} placeholders with [^/]+ for matching.
+ local pattern + pattern="^${method} $(printf '%s' "$route" | sed -E 's/\{[^}]+\}/[^\/]+/g')$" + if grep -qE "$pattern" "$recorded_file"; then + covered=$((covered + 1)) + else + missing+=" ${method} ${route}"$'\n' + fi + done <"$routes_file" + if [ "$total" -gt 0 ]; then + pct=$(awk -v c="$covered" -v t="$total" 'BEGIN{printf "%.1f", c*100/t}') + else pct="0.0"; fi + { + echo "================ RESTHeart API coverage ================" + echo "Covered ${covered}/${total} (${pct}%)" + if [ -n "$missing" ]; then echo "Uncovered:"; printf '%s' "$missing"; fi + echo "========================================================" + } | tee "${COVERAGE_REPORT_FILE:-coverage_report.txt}" + rm -f "$routes_file" "$recorded_file" +} + +case "${1:-}" in + bootstrap) restheart_bootstrap "${2:-180}" ;; + record-traffic) restheart_record_traffic ;; + coverage) restheart_report_coverage ;; + list-routes) restheart_list_routes ;; + *) + echo "usage: $0 {bootstrap|record-traffic|coverage|list-routes}" >&2 + exit 2 ;; +esac diff --git a/restheart-mongo/keploy.yml.template b/restheart-mongo/keploy.yml.template new file mode 100644 index 00000000..1277fede --- /dev/null +++ b/restheart-mongo/keploy.yml.template @@ -0,0 +1,21 @@ +# keploy.yml template for the restheart-mongo sample. +# +# globalNoise covers fields whose value is non-deterministic +# across record/replay: +# +# header.Date runtime-stamped +# body._etag RESTHeart auto-stamped on each +# document; changes per write +# body._oid / body._id server-generated ObjectIds +# (when not set by client) +# body.lastModified auto-now timestamp +# +# Centralised here so a future RESTHeart version that adds another +# auto-stamped field is one edit, not a fan-out across lane scripts. 
+test: + globalNoise: + global: + header.Date: [] + body._etag: [] + body._oid: [] + body.lastModified: [] From d50925902d3f808ccc93809bf5923a7d93aafd71 Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 06:48:54 +0530 Subject: [PATCH 2/7] feat(restheart-mongo): port full RESTHeart REST surface Replace the minimal record-traffic stub with the complete loop that the keploy compat lane needs to gate. flow.sh::restheart_record_traffic now drives the full RESTHeart 9.x surface end-to-end against bare RESTHeart, and restheart_list_routes enumerates every (method, route) tuple it fires so coverage stays in lockstep. Covered surfaces: - CRUD on // + /// (HAL, _size, _meta, _indexes, ETag conditional flow, writeMode insert/update/upsert, $-operator PATCH variety) - Aggregations via _meta.aggrs with avars variable interpolation (scalars / arrays / nested / missing / malformed) - Bulk writes (POST array body, filter PATCH, filter DELETE, larger 25-doc batches, mixed valid/invalid) - GraphQL apps (gql-apps registration, query / mutation / fragment / alias / multi-op, BSON scalar coercion on outputs and inputs, introspection, error paths) - Files / GridFS (.files buckets, multipart upload, binary download with Range requests, metadata fetch, delete) - ACL rules (predicate evaluator across method / path-prefix / qparams-* / bson-request-* / equals[%U,...] 
/ in[%h,...]) plus the mongo permission interceptors (readFilter, writeFilter, projectResponse, mergeRequest, filterOperatorsBlacklist, propertiesBlacklist, allowBulk*) - Users (/users) with the userPwdHasher bcrypt interceptor; reader / writer roles authenticating via Basic + Bearer; wrong-password deny - Sessions / multi-doc transactions (/_sessions//_txns/) with commit and abort branches - Auth services (/token form grants, JWT, Auth-Token, Digest, OAuth metadata under /.well-known/oauth-*) - Diagnostics (/ping, /metrics in json/prometheus/openmetrics, per-db and per-coll, /health/db, OPTIONS preflight, gzip request encoding, Accept-Encoding negotiation) - MongoMountResolver (multiple databases, encoded collection names, root /_size and /_meta, trailing-slash and double-slash variants) restheart_bootstrap now PUTs every collection record-traffic touches. README.md describes the sample as a complete keploy compat lane sample and lists every surface it exercises. Signed-off-by: Akash Kumar --- restheart-mongo/README.md | 37 +- restheart-mongo/flow.sh | 1272 +++++++++++++++++++++++++++++++++++-- 2 files changed, 1235 insertions(+), 74 deletions(-) diff --git a/restheart-mongo/README.md b/restheart-mongo/README.md index d4b09e3d..40f460bd 100644 --- a/restheart-mongo/README.md +++ b/restheart-mongo/README.md @@ -1,15 +1,21 @@ -# restheart-mongo — keploy compat lane sample (work in progress) +# restheart-mongo — keploy compat lane sample -Minimum reproducer scaffold for the RESTHeart / MongoDB compat lane. Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose / bootstrap / traffic / noise filter / coverage), keploy CI lanes consume it as a thin wrapper. +A complete, self-contained sample that drives the RESTHeart 9.x REST surface keploy needs to gate on its compat lanes. 
Mirrors the architectural pattern of the [doccano-django sample in `samples-python`](https://github.com/keploy/samples-python/tree/main/doccano-django): the sample owns orchestration (compose / bootstrap / traffic / noise filter / coverage), and keploy CI lanes consume it as a thin wrapper. -## Status +The traffic loop exercises the surfaces that keploy parsers and matchers have to handle correctly across record + replay: -**This is a SCAFFOLD.** The compose, bootstrap, and a minimal record-traffic loop work end-to-end against bare RESTHeart without keploy in the picture. The full traffic loop the existing keploy/enterprise lane drives (`compat_trigger_record_traffic` in `enterprise/.ci/scripts/restheart-linux.sh`, ~600 lines covering CRUD on `//` + GraphQL + files + ACL + users + bulk + aggregations) has **not been ported** into `flow.sh::restheart_record_traffic` yet. Lanes consuming this sample today should either: - -1. Port the missing curls into `flow.sh::restheart_record_traffic` (preferred — that's the migration this scaffold is designed around). -2. Or call into `enterprise/.ci/scripts/restheart-linux.sh::compat_trigger_record_traffic` between `flow.sh bootstrap` and `flow.sh coverage` until the migration completes. - -See the migration plan in this PR's description / linked issue. +* **CRUD** on `//` and `///` — including `_size`, `_meta`, `_indexes`, ETag conditional requests, `writeMode=insert/update/upsert`, and `$inc / $push / $addToSet / $pull / $unset / $rename / $currentDate` PATCH operators. +* **HAL** representations via `Accept: application/hal+json` and `?rep=hal&hal=full` on documents, collections, indexes, and bulk responses. +* **Aggregations** via `_meta.aggrs` — group / count / sort / project / facet / lookup / unwind plus `avars` variable interpolation (scalars, arrays, nested objects, missing / malformed inputs). +* **Bulk writes** — array-body POST, filter-bound PATCH and DELETE, larger 25-doc batches, mixed valid / invalid documents. 
+* **GraphQL** apps — `gql-apps` registration, query / mutation / fragment / alias / multi-op forms, BSON scalar coercion (`BsonObjectId`, `BsonDecimal128`, `BsonLong`, `BsonDate`, `BsonBinary`) on outputs and inputs, introspection. +* **Files / GridFS** — buckets (`.files`), multipart upload, binary download with `Range` requests, metadata fetch, delete. +* **ACL** rules (`/acl`) — predicate evaluation (`method`, `path-prefix`, `qparams-whitelist`, `qparams-blacklist`, `qparams-contain`, `qparams-size`, `bson-request-whitelist/blacklist/contains`, `equals[%U,...]`, `in[%h, ...]`), `mongo` permission interceptors (`readFilter`, `writeFilter`, `projectResponse`, `mergeRequest`, `filterOperatorsBlacklist`, `propertiesBlacklist`, `allowBulk*`). +* **Users** (`/users`) — non-admin user creation with the bcrypt password hasher; reader / writer roles authenticating via Basic + Bearer; wrong-password denial. +* **Sessions / transactions** (`/_sessions`, `/_sessions//_txns/`) — open, write inside, commit (PATCH), abort (DELETE), and re-read. +* **Auth services** — `/token` form grants (password, client_credentials, refresh_token, unsupported), JWT bearer (valid + invalid signature), Auth-Token, Digest, OAuth metadata under `/.well-known/oauth-*`. +* **Diagnostics** — `/ping`, `/metrics` (json / prometheus / openmetrics, per-db, per-coll), `/health/db`, OPTIONS preflight, gzip request encoding, Accept-Encoding negotiation. +* **MongoMountResolver** — multiple databases, collections with dashes / dots / encoded slashes, root `/_size` and `/_meta`, trailing-slash and double-slash variants. ## Layout @@ -26,10 +32,10 @@ restheart-mongo/ The sample is keploy-independent: `docker compose up && bash flow.sh bootstrap && bash flow.sh record-traffic` runs end-to-end against bare RESTHeart. Lane scripts wrap that exact same path inside `keploy record` / `keploy test`. 
-* `bootstrap` — wait for RESTHeart to start serving, PUT the test database + collection so subsequent reads have something to find. -* `record-traffic` — drive RESTHeart's REST surface. Every call is logged to `${RESTHEART_FIRED_ROUTES_FILE}` (when set) so `coverage` has a numerator without a keploy recording. -* `coverage` — emits `(method, path)` coverage. Denominator is curated from RESTHeart's pattern-based mount table (see `restheart_list_routes` in `flow.sh`); not file-system-derivable like Next.js, so the list lives in source and must be updated alongside `record-traffic`. -* `list-routes` — diagnostic; prints the route table. +* `bootstrap` — wait for RESTHeart to start serving and PUT the seed collections (`items`, `people`, `places`, `halpeople`, `relpeople`, `gql-apps`, `acl`, `_schemas`, `avatars.files`, `range_files.files`, `imported_csv`) so subsequent record-traffic calls have something to find. +* `record-traffic` — drive the full RESTHeart REST surface listed above. Every call is logged to `${RESTHEART_FIRED_ROUTES_FILE}` (when set) so `coverage` has a numerator without a keploy recording, and every call is fault-tolerant (`|| true`) so a single transient 4xx never aborts the run. keploy is the assertion layer. +* `coverage` — emits `(method, path)` coverage. The denominator is curated from RESTHeart's pattern-based mount table (see `restheart_list_routes` in `flow.sh`); RESTHeart routes are not file-system-derivable like Next.js, so the list lives in source and stays in lockstep with `record-traffic`. +* `list-routes` — diagnostic; prints the route table the coverage report uses as its denominator. ## Local run @@ -43,7 +49,4 @@ docker compose down -v ## Consumers -Lanes pinning to this sample (pinned via `--branch feat/restheart-mongo-sample` until merge): - -* `keploy/enterprise` `.woodpecker/restheart-linux.yml` — being slimmed in a follow-up PR. 
-* No `keploy/integrations` consumer today; could be added if a RESTHeart-flavoured Mongo wire bug surfaces. +* `keploy/enterprise` `.woodpecker/restheart-linux.yml` — the RESTHeart compat lane delegates compose + traffic + coverage to this sample and wraps them in `keploy record` / `keploy test`. diff --git a/restheart-mongo/flow.sh b/restheart-mongo/flow.sh index 1f3572c9..dab47705 100644 --- a/restheart-mongo/flow.sh +++ b/restheart-mongo/flow.sh @@ -5,42 +5,35 @@ # samples-python/doccano-django/flow.sh. # # Subcommands: -# bootstrap — RESTHeart's default config has no admin auth -# setup needed; the bootstrap step here just -# creates the test database and seed -# collections so subsequent reads have -# something to find. -# record-traffic — drive RESTHeart's REST surface (Mongo / GraphQL -# / files / users / acl). Fire-and-forget; -# keploy is the assertion layer at replay. +# bootstrap — wait for RESTHeart to start serving, then PUT +# the test database + the seed collections +# (items, halpeople, gql-apps, acl, files +# buckets) that record-traffic exercises. +# record-traffic — drive RESTHeart's full REST surface (Mongo +# CRUD / HAL / aggregations / bulk / GraphQL / +# files / ACL / users / sessions / metrics / +# OAuth metadata). Fire-and-forget; keploy is +# the assertion layer at replay. # coverage — report (method, path) coverage. Denominator is # derived from RESTHeart's known route-mounts # (see SCOPE_PATHS in restheart_list_routes). # list-routes — print the route table the coverage report # uses as its denominator. -# -# HANDOFF NOTE: SCAFFOLD. The full traffic loop the existing keploy -# lane drives (`compat_trigger_record_traffic` in -# enterprise/.ci/scripts/restheart-linux.sh, ~600 lines covering -# CRUD on // + GraphQL + files + ACL + users + bulk + -# aggregations) needs to be ported into -# `restheart_record_traffic` here. The stub below covers enough -# to prove the sample boots end-to-end without keploy. 
See the -# migration plan in the PR description / linked issue. + set -Eeuo pipefail RESTHEART_APP_PORT="${RESTHEART_APP_PORT:-8080}" RESTHEART_APP_CONTAINER="${RESTHEART_APP_CONTAINER:-restheart_app}" RESTHEART_MONGO_CONTAINER="${RESTHEART_MONGO_CONTAINER:-restheart_mongo}" -RESTHEART_DB="${RESTHEART_DB:-keploy}" +RESTHEART_DB="${RESTHEART_DB:-restheart}" RESTHEART_PHASE="${RESTHEART_PHASE:-local}" RESTHEART_FIRED_ROUTES_FILE="${RESTHEART_FIRED_ROUTES_FILE:-}" -# RESTHeart 9.x ships with an admin user (admin/secret) for protected -# endpoints; the unauthenticated paths are fine for the smoke set we -# drive in record-traffic. Override RESTHEART_ADMIN_AUTH to add -# `Authorization: Basic ` to authenticated calls when porting -# the full lane traffic. +# RESTHeart 9.x ships with an admin user (admin/secret) for +# protected endpoints. The full traffic loop authenticates as +# admin for every administrative call (db / collection / index / +# acl / users / sessions). Override RESTHEART_ADMIN_AUTH if your +# deployment uses different credentials. RESTHEART_ADMIN_AUTH="${RESTHEART_ADMIN_AUTH:-Basic YWRtaW46c2VjcmV0}" base="http://127.0.0.1:${RESTHEART_APP_PORT}" @@ -72,73 +65,1238 @@ restheart_bootstrap() { local timeout=${1:-180} restheart_wait_for_app "$timeout" - # Create the test database. PUT on / is idempotent — - # 201 first time, 200 on subsequent runs. - curl -sS -o /dev/null -H "$RESTHEART_ADMIN_AUTH" -X PUT "${base}/${RESTHEART_DB}" || true - # Seed a collection so reads have something to find. - curl -sS -o /dev/null -H "$RESTHEART_ADMIN_AUTH" -X PUT "${base}/${RESTHEART_DB}/items" || true + # Seed the collections record-traffic depends on. Each PUT is + # idempotent (201 first time, 200 on subsequent runs) and + # tolerated if the collection already exists. 
+ local coll + for coll in items people places halpeople relpeople gql-apps acl _schemas \ + avatars.files range_files.files imported_csv; do + curl -sS -o /dev/null -H "$RESTHEART_ADMIN_AUTH" -X PUT "${base}/${RESTHEART_DB}/${coll}" || true + done + echo "restheart_bootstrap: db=${RESTHEART_DB} ready" } restheart_record_traffic() { restheart_wait_for_app 60 + sleep 5 + local encoded_doc_keys='%7B%22_id%22:1,%22name%22:1,%22age%22:1%7D' + local encoded_filter='%7B%22_id%22:%22jane%22%7D' + + # Liveness + root + metrics. + log_fired GET "$base/ping" + curl -fsS "$base/ping" >/dev/null || true log_fired GET "$base/" curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/" >/dev/null || true + log_fired GET "$base/metrics" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/metrics" >/dev/null || true - log_fired GET "$base/${RESTHEART_DB}" - curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}" >/dev/null || true + # ------------------------------------------------------------------ + # Round 1: basic CRUD on /people — collection lifecycle, document + # CRUD, indexes, _size / _meta / _indexes management endpoints. 
+ # ------------------------------------------------------------------ + log_fired PUT "$base/people" + curl -fsS -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/people" >/dev/null || true + log_fired GET "$base/people" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/people" >/dev/null || true + log_fired GET "$base/people/_size" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/people/_size" >/dev/null || true + log_fired GET "$base/people/_meta" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/people/_meta" >/dev/null || true + log_fired GET "$base/people/_indexes" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/people/_indexes" >/dev/null || true - log_fired GET "$base/${RESTHEART_DB}/items" - curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}/items" >/dev/null || true + log_fired POST "$base/people" + curl -fsS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X POST "$base/people" \ + -d '{"_id":"jane","name":"Jane","age":30}' >/dev/null || true + log_fired POST "$base/people" + curl -fsS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X POST "$base/people" \ + -d '{"_id":"john","name":"John","age":40}' >/dev/null || true - # Insert a document. 
- log_fired POST "$base/${RESTHEART_DB}/items" - curl -fsS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X POST \ - "$base/${RESTHEART_DB}/items" \ - -d "{\"_id\":\"keploy-${RESTHEART_PHASE}\",\"name\":\"sample item\",\"score\":42}" >/dev/null || true + log_fired GET "$base/people/jane" + curl -fsS -H "$RESTHEART_ADMIN_AUTH" "$base/people/jane?keys=${encoded_doc_keys}" >/dev/null || true + log_fired GET "$base/people/jane/_meta" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/people/jane/_meta" >/dev/null || true + log_fired PATCH "$base/people/jane" + curl -fsS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PATCH "$base/people/jane" \ + -d '{"$set":{"age":31}}' >/dev/null || true + log_fired PUT "$base/people/jane" + curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PUT "$base/people/jane" \ + -d '{"name":"Jane","age":32,"city":"Paris"}' >/dev/null || true + log_fired GET "$base/people" + curl -fsS -H "$RESTHEART_ADMIN_AUTH" \ + "$base/people?filter=${encoded_filter}&keys=${encoded_doc_keys}&pagesize=1" >/dev/null || true - # Read it back. - log_fired GET "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}" - curl -sS -H "$RESTHEART_ADMIN_AUTH" \ - "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}" >/dev/null || true + log_fired PUT "$base/people/_indexes/by_age" + curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PUT "$base/people/_indexes/by_age" \ + -d '{"keys":{"age":1},"ops":{"unique":false}}' >/dev/null || true + log_fired GET "$base/people/_indexes" + curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/people/_indexes" >/dev/null || true + log_fired DELETE "$base/people/_indexes/by_age" + curl -sS -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/people/_indexes/by_age" >/dev/null || true + + log_fired DELETE "$base/people/john" + curl -sS -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/people/john" >/dev/null || true - # Update it. 
-    log_fired PATCH "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}"
+    log_fired PUT "$base/places"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/places" >/dev/null || true
+    log_fired POST "$base/places"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X POST "$base/places" \
+        -d '{"_id":"paris","country":"FR"}' >/dev/null || true
+    log_fired GET "$base/places/paris"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/places/paris" >/dev/null || true
+    log_fired DELETE "$base/places/paris"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/places/paris" >/dev/null || true
+    log_fired DELETE "$base/places"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/places" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # HAL representation factories — Accept: application/hal+json drives
+    # DocumentRepresentationFactory / CollectionRepresentationFactory /
+    # IndexesRepresentationFactory.
+    # ------------------------------------------------------------------
+    log_fired PUT "$base/halpeople"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/halpeople" >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X POST "$base/halpeople" \
+        -d '{"_id":"alice","name":"Alice","age":29}' >/dev/null || true
+    log_fired GET "$base/halpeople"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H 'Accept: application/hal+json' "$base/halpeople" >/dev/null || true
+    log_fired GET "$base/halpeople/alice"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H 'Accept: application/hal+json' "$base/halpeople/alice" >/dev/null || true
+    log_fired GET "$base/halpeople/_indexes"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H 'Accept: application/hal+json' "$base/halpeople/_indexes" >/dev/null || true
+    log_fired GET "$base/"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H 'Accept: application/hal+json' "$base/" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Aggregations — define a pipeline on the collection then read it.
+    # ------------------------------------------------------------------
+    log_fired PATCH "$base/halpeople/_meta"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PATCH "$base/halpeople/_meta" \
+        -d '{"aggrs":[{"uri":"by-age","type":"pipeline","stages":[{"_$group":{"_id":"$age","count":{"_$sum":1}}}]}]}' >/dev/null || true
+    log_fired GET "$base/halpeople/_meta"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/_meta" >/dev/null || true
+    log_fired GET "$base/halpeople/_aggrs/by-age"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/_aggrs/by-age" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Bulk write — POST array body, PATCH-with-filter, DELETE-with-filter.
+    # ------------------------------------------------------------------
+    log_fired POST "$base/halpeople"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X POST "$base/halpeople" \
+        -d '[{"_id":"bob","name":"Bob","age":35},{"_id":"carol","name":"Carol","age":41},{"_id":"dave","name":"Dave","age":52}]' >/dev/null || true
+    log_fired PATCH "$base/halpeople"
     curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PATCH \
-        "$base/${RESTHEART_DB}/items/keploy-${RESTHEART_PHASE}" \
-        -d '{"$set":{"score":100}}' >/dev/null || true
-
-    # Aggregation surface.
-    log_fired GET "$base/${RESTHEART_DB}/items/_size"
-    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}/items/_size" >/dev/null || true
-    log_fired GET "$base/${RESTHEART_DB}/_meta"
-    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/${RESTHEART_DB}/_meta" >/dev/null || true
+        "$base/halpeople?filter=%7B%22age%22:%7B%22%24gte%22:35%7D%7D" \
+        -d '{"$set":{"vip":true}}' >/dev/null || true
+    log_fired DELETE "$base/halpeople"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -X DELETE \
+        "$base/halpeople?filter=%7B%22age%22:%7B%22%24gte%22:50%7D%7D" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # JSON schema validation — define a schema, write conforming and
+    # non-conforming docs.
+    # ------------------------------------------------------------------
+    log_fired PUT "$base/_schemas"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PUT "$base/_schemas" >/dev/null || true
+    log_fired PUT "$base/_schemas/person"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PUT "$base/_schemas/person" \
+        -d '{"$schema":"http://json-schema.org/draft-04/schema#","type":"object","properties":{"name":{"type":"string"},"age":{"type":"integer","minimum":0}},"required":["name"]}' >/dev/null || true
+    log_fired GET "$base/_schemas/person"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/_schemas/person" >/dev/null || true
+    log_fired GET "$base/_schemas"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/_schemas" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Auth services — /token, /roles, /logout.
+    # ------------------------------------------------------------------
+    log_fired GET "$base/roles/admin"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/roles/admin" >/dev/null || true
+    log_fired GET "$base/token/admin"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/token/admin" >/dev/null || true
+    log_fired POST "$base/logout"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/logout" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Files / GridFS — create a files bucket, upload a small file, fetch
+    # binary + metadata, then delete.
+    # ------------------------------------------------------------------
+    log_fired PUT "$base/avatars.files"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PUT "$base/avatars.files" \
+        -d '{"descr":"avatars file bucket"}' >/dev/null || true
+    printf 'keploy-coverage' > /tmp/restheart-cov-upload.bin
+    log_fired POST "$base/avatars.files"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/avatars.files" \
+        -F 'file=@/tmp/restheart-cov-upload.bin' \
+        -F 'metadata={"_id":"avatar1","owner":"jane"};type=application/json' >/dev/null || true
+    rm -f /tmp/restheart-cov-upload.bin
+    log_fired GET "$base/avatars.files"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/avatars.files" >/dev/null || true
+    log_fired GET "$base/avatars.files/avatar1"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/avatars.files/avatar1" >/dev/null || true
+    log_fired GET "$base/avatars.files/avatar1/binary"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/avatars.files/avatar1/binary" >/dev/null || true
+    log_fired DELETE "$base/avatars.files/avatar1"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/avatars.files/avatar1" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Pagination + sort + counting + 404 paths.
+    # ------------------------------------------------------------------
+    log_fired GET "$base/halpeople"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" \
+        "$base/halpeople?pagesize=2&page=1&sort=%7B%22age%22:1%7D&count=true" >/dev/null || true
+    log_fired GET "$base/halpeople"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople?np=true&pagesize=1" >/dev/null || true
+    log_fired GET "$base/health/db"
+    curl -sS "$base/health/db" >/dev/null || true
+    log_fired GET "$base/no-such-collection"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/no-such-collection" >/dev/null || true
+    log_fired GET "$base/halpeople/no-such-doc"
+    curl -sS -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/no-such-doc" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # HAL via ?rep=hal — content-negotiation route to the
+    # mongodb.hal.* representation factories.
+    # ------------------------------------------------------------------
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople?rep=hal" >/dev/null || true
+    log_fired GET "$base/halpeople/alice"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/alice?rep=hal" >/dev/null || true
+    log_fired GET "$base/halpeople/_indexes"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/_indexes?rep=hal" >/dev/null || true
+    log_fired GET "$base/"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/?rep=hal" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Relationships — declared in collection _meta.
+    # ------------------------------------------------------------------
+    log_fired PATCH "$base/halpeople/_meta"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" -X PATCH "$base/halpeople/_meta" \
+        -d '{"rels":[{"rel":"author","type":"ONE_TO_MANY","role":"OWNING","target-coll":"halpeople","ref-field":"_id"}]}' >/dev/null || true
+    log_fired GET "$base/halpeople/alice"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/alice?rep=hal&hal=full" >/dev/null || true
+
+    # Cache invalidator service (/ic) — unsecured.
+    log_fired POST "$base/ic"
+    curl -sS --max-time 5 -X POST "$base/ic?db=${RESTHEART_DB}&coll=halpeople" >/dev/null || true
+    log_fired GET "$base/ic"
+    curl -sS --max-time 5 "$base/ic" >/dev/null || true
+
+    # CSV loader service (/csv).
+    log_fired POST "$base/csv"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        -H 'Content-Type: text/csv' \
+        -X POST "$base/csv?db=${RESTHEART_DB}&coll=imported_csv&id=col1" \
+        --data-binary $'col1,col2,col3\nA1,B1,C1\nA2,B2,C2\nA3,B3,C3' >/dev/null || true
+    log_fired GET "$base/imported_csv"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/imported_csv" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # ETag conditional flow — capture the ETag of /halpeople, then
+    # issue PUT/DELETE with If-Match plus a conditional GET.
+    # ------------------------------------------------------------------
+    halpeople_etag=$(curl -sSI --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople" 2>/dev/null \
+        | awk 'BEGIN{IGNORECASE=1} /^ETag:/ {gsub(/[\r\n"]/,"",$2); print $2; exit}')
+    if [ -n "${halpeople_etag:-}" ]; then
+        log_fired PUT "$base/halpeople/_meta"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "If-Match: ${halpeople_etag}" \
+            -H "$h_json" -X PUT "$base/halpeople/_meta" \
+            -d '{"descr":"keploy CI bumped"}' >/dev/null || true
+        log_fired GET "$base/halpeople"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "If-None-Match: ${halpeople_etag}" \
+            "$base/halpeople" >/dev/null || true
+    fi
+
+    # ------------------------------------------------------------------
+    # GraphQL service entry path — empty / introspection / unknown app.
+    # ------------------------------------------------------------------
+    log_fired POST "$base/graphql"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql" -d '{"query":"{ __typename }"}' >/dev/null || true
+    log_fired GET "$base/graphql"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/graphql" >/dev/null || true
+    log_fired POST "$base/graphql/no-such-app"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql/no-such-app" -d '{"query":"{ __schema { types { name } } }"}' >/dev/null || true
+
+    # Change-streams URI.
+    log_fired GET "$base/halpeople/_streams"
+    curl -sS --max-time 3 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/_streams" >/dev/null || true
+    log_fired GET "$base/halpeople/_streams/no-such-stream"
+    curl -sS --max-time 3 -H "$RESTHEART_ADMIN_AUTH" -H 'Accept: text/event-stream' \
+        "$base/halpeople/_streams/no-such-stream" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Sessions / multi-doc transactions.
+    # ------------------------------------------------------------------
+    log_fired POST "$base/_sessions"
+    session_response=$(curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/_sessions" 2>/dev/null || true)
+    session_id=$(printf '%s' "$session_response" | jq -r '._id // empty' 2>/dev/null || true)
+    if [ -n "${session_id:-}" ]; then
+        log_fired GET "$base/_sessions/${session_id}"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/_sessions/${session_id}" >/dev/null || true
+        log_fired POST "$base/_sessions/${session_id}/_txns"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/_sessions/${session_id}/_txns" >/dev/null || true
+        log_fired GET "$base/_sessions/${session_id}/_txns"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/_sessions/${session_id}/_txns" >/dev/null || true
+    fi
+
+    # ------------------------------------------------------------------
+    # Diverse query-string + projection variants.
+    # ------------------------------------------------------------------
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople?count=true&pagesize=0" >/dev/null || true
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        "$base/halpeople?keys=%7B%22name%22:1%7D&sort_by=age" >/dev/null || true
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        "$base/halpeople?filter=%7B%22vip%22:true%7D&hint=%7B%22age%22:1%7D" >/dev/null || true
+
+    # Method-not-allowed and bad-request paths.
+    log_fired TRACE "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X TRACE "$base/halpeople" >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/halpeople" -d '{not even json}' >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # GraphQL application — define a schema bound to halpeople and
+    # query it. Drives the entire graphql.* tree.
+    # ------------------------------------------------------------------
+    log_fired PUT "$base/gql-apps"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/gql-apps" >/dev/null || true
+    log_fired PUT "$base/gql-apps/halpeople-gql"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PUT "$base/gql-apps/halpeople-gql" \
+        -d '{
+            "descriptor": { "name": "halpeople-gql", "uri": "halpeople", "description": "keploy ci graphql probe" },
+            "schema": "type Query { people: [Person] person(id: String!): Person count: Int } type Person { _id: String name: String age: Int }",
+            "mappings": {
+                "Query": {
+                    "people": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": {} },
+                    "person": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": { "_id": { "$arg": "id" } }, "first": true },
+                    "count": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": {}, "stages": [ { "$count": "_count" } ] }
+                }
+            }
+        }' >/dev/null || true
+
+    log_fired POST "$base/graphql/halpeople"
+    curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql/halpeople" \
+        -d '{"query":"{ people { _id name age } }"}' >/dev/null || true
+    log_fired POST "$base/graphql/halpeople"
+    curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql/halpeople" \
+        -d '{"query":"query Q($id:String!){ person(id:$id) { name age } }","variables":{"id":"alice"}}' >/dev/null || true
+    log_fired POST "$base/graphql/halpeople"
+    curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql/halpeople" \
+        -d '{"query":"{ __schema { types { name kind } } }"}' >/dev/null || true
+    log_fired POST "$base/graphql/halpeople"
+    curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql/halpeople" \
+        -d '{"query":"{ __type(name:\"Person\"){ name fields { name type { name } } } }"}' >/dev/null || true
+
+    # Properly-formatted relationship on a fresh collection.
+    log_fired PUT "$base/relpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/relpeople" >/dev/null || true
+    log_fired PUT "$base/relpeople/_meta"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PUT "$base/relpeople/_meta" \
+        -d '{"rels":[{"rel":"self","type":"ONE_TO_ONE","role":"OWNING","target-coll":"halpeople","ref-field":"ref_id"}]}' >/dev/null || true
+    log_fired POST "$base/relpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/relpeople" \
+        -d '{"_id":"link-alice","ref_id":"alice"}' >/dev/null || true
+    log_fired GET "$base/relpeople/link-alice"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        "$base/relpeople/link-alice?rep=hal&hal=full" >/dev/null || true
+
+    # Token lifecycle.
+    log_fired GET "$base/token/admin"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/token/admin" >/dev/null || true
+    log_fired POST "$base/token/admin"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/token/admin" >/dev/null || true
+    log_fired DELETE "$base/token/admin"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/token/admin" >/dev/null || true
+
+    # Metrics format variants.
+    log_fired GET "$base/metrics"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Accept: application/json' "$base/metrics" >/dev/null || true
+    log_fired GET "$base/metrics"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Accept: text/plain' "$base/metrics" >/dev/null || true
+    log_fired GET "$base/metrics/${RESTHEART_DB}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/metrics/${RESTHEART_DB}" >/dev/null || true
+    log_fired GET "$base/metrics/${RESTHEART_DB}/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/metrics/${RESTHEART_DB}/halpeople" >/dev/null || true
+
+    # Content-Encoding: gzip on POST.
+    log_fired POST "$base/halpeople"
+    printf '{"_id":"gzip-doc","name":"Z","age":99}' | gzip -c \
+        | curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+            -H "$h_json" -H 'Content-Encoding: gzip' \
+            -X POST --data-binary @- "$base/halpeople" >/dev/null || true
+
+    # Bulk write with mixed valid/invalid docs.
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/halpeople" \
+        -d '[{"_id":"eve","name":"Eve","age":-1},{"_id":"frank","name":"Frank","age":24}]' >/dev/null || true
+
+    # Auth probes — wrong password, missing auth header, OPTIONS preflight.
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -u admin:wrongpass "$base/halpeople" >/dev/null || true
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 "$base/halpeople" >/dev/null || true
+    log_fired OPTIONS "$base/halpeople"
+    curl -sS --max-time 5 -X OPTIONS \
+        -H 'Origin: https://example.com' \
+        -H 'Access-Control-Request-Method: POST' \
+        -H 'Access-Control-Request-Headers: content-type,authorization' \
+        "$base/halpeople" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Database lifecycle on a separate db (handlers.database).
+    # ------------------------------------------------------------------
+    log_fired PUT "$base/keployci_db"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PUT "$base/keployci_db" -d '{"descr":"keploy ci db lifecycle"}' >/dev/null || true
+    log_fired GET "$base/keployci_db"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/keployci_db" >/dev/null || true
+    log_fired GET "$base/keployci_db/_meta"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/keployci_db/_meta" >/dev/null || true
+    log_fired GET "$base/keployci_db/_size"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/keployci_db/_size" >/dev/null || true
+    log_fired PUT "$base/keployci_db/things"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/keployci_db/things" >/dev/null || true
+    log_fired POST "$base/keployci_db/things"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/keployci_db/things" -d '{"_id":"t1","kind":"a"}' >/dev/null || true
+    keployci_db_etag=$(curl -sSI --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/keployci_db" 2>/dev/null \
+        | awk 'BEGIN{IGNORECASE=1} /^ETag:/ {gsub(/[\r\n"]/,"",$2); print $2; exit}')
+    things_etag=$(curl -sSI --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/keployci_db/things" 2>/dev/null \
+        | awk 'BEGIN{IGNORECASE=1} /^ETag:/ {gsub(/[\r\n"]/,"",$2); print $2; exit}')
+    if [ -n "${things_etag:-}" ]; then
+        log_fired DELETE "$base/keployci_db/things"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+            -H "If-Match: ${things_etag}" -X DELETE "$base/keployci_db/things" >/dev/null || true
+    fi
+    if [ -n "${keployci_db_etag:-}" ]; then
+        log_fired DELETE "$base/keployci_db"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+            -H "If-Match: ${keployci_db_etag}" -X DELETE "$base/keployci_db" >/dev/null || true
+    fi
+
+    # ------------------------------------------------------------------
+    # Schema-violation writes — drives JsonSchemaBeforeWriteChecker.
+    # ------------------------------------------------------------------
+    log_fired PUT "$base/halpeople/_meta"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PUT "$base/halpeople/_meta" -d '{"schema":"person"}' >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/halpeople" -d '{"_id":"badname","name":42,"age":30}' >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/halpeople" -d '{"_id":"missingname","age":30}' >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/halpeople" -d '{"_id":"negage","name":"Bad","age":-5}' >/dev/null || true
+
+    # Variety of $-operators in PATCH.
+    local op_payload
+    for op_payload in \
+        '{"$inc":{"age":1}}' \
+        '{"$push":{"tags":"vip"}}' \
+        '{"$addToSet":{"tags":"early"}}' \
+        '{"$pull":{"tags":"vip"}}' \
+        '{"$unset":{"city":""}}' \
+        '{"$rename":{"city":"location"}}' \
+        '{"$currentDate":{"updatedAt":true}}'; do
+        log_fired PATCH "$base/halpeople/alice"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+            -X PATCH "$base/halpeople/alice" -d "$op_payload" >/dev/null || true
+    done
+
+    # writeMode query param on POST / PUT.
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/halpeople?writeMode=upsert" -d '{"_id":"upsertdoc","name":"Upserted","age":1}' >/dev/null || true
+    log_fired PUT "$base/halpeople/upsertdoc"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PUT "$base/halpeople/upsertdoc?writeMode=insert" -d '{"name":"Insertish","age":2}' >/dev/null || true
+    log_fired PUT "$base/halpeople/upsertdoc"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PUT "$base/halpeople/upsertdoc?writeMode=update" -d '{"name":"Updatedish","age":3}' >/dev/null || true
+
+    # Larger bulk write.
+    bulk_payload="$(printf '['; for i in $(seq 1 25); do
+        printf '{"_id":"bulk-%d","name":"User%d","age":%d}' "$i" "$i" "$((20 + i))"
+        [ "$i" -lt 25 ] && printf ','
+    done; printf ']')"
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/halpeople" -d "$bulk_payload" >/dev/null || true
+    log_fired PATCH "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PATCH "$base/halpeople?filter=%7B%22_id%22:%7B%22%24regex%22:%22%5Ebulk-%22%7D%7D" \
+        -d '{"$set":{"role":"bulk"}}' >/dev/null || true
+    log_fired DELETE "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        -X DELETE "$base/halpeople?filter=%7B%22_id%22:%7B%22%24regex%22:%22%5Ebulk-%22%7D%7D" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # GraphQL mutations — extend the app to add a write op.
+    # ------------------------------------------------------------------
+    log_fired PUT "$base/gql-apps/halpeople-gql"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PUT "$base/gql-apps/halpeople-gql" \
+        -d '{
+            "descriptor": { "name": "halpeople-gql", "uri": "halpeople" },
+            "schema": "type Query { people: [Person] person(id: String!): Person } type Mutation { tag(id: String!, tag: String!): Person } type Person { _id: String name: String age: Int tags: [String] }",
+            "mappings": {
+                "Query": {
+                    "people": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": {} },
+                    "person": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": { "_id": { "$arg": "id" } }, "first": true }
+                },
+                "Mutation": {
+                    "tag": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "update": { "$addToSet": { "tags": { "$arg": "tag" } } }, "filter": { "_id": { "$arg": "id" } } }
+                }
+            }
+        }' >/dev/null || true
+    log_fired POST "$base/graphql/halpeople"
+    # Variables are keyed by variable name, so the $t declared in the
+    # operation must be supplied as "t" (not "tag").
+    curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql/halpeople" \
+        -d '{"query":"mutation M($id:String!,$t:String!){ tag(id:$id, tag:$t) { _id tags } }","variables":{"id":"alice","t":"vip"}}' >/dev/null || true
+    log_fired POST "$base/graphql/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql/halpeople" -d '{"query":"{ this is not graphql"}' >/dev/null || true
+    log_fired POST "$base/graphql/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/graphql/halpeople" -d '{"query":"{ people { unknownField } }"}' >/dev/null || true
+
+    # Define a change stream then attempt SSE upgrade.
+    log_fired PATCH "$base/halpeople/_meta"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PATCH "$base/halpeople/_meta" \
+        -d '{"streams":[{"uri":"all","stages":[{"_$match":{}}]}]}' >/dev/null || true
+    log_fired GET "$base/halpeople/_streams/all"
+    curl -sS --max-time 3 -H "$RESTHEART_ADMIN_AUTH" -N \
+        -H 'Accept: text/event-stream' "$base/halpeople/_streams/all" >/dev/null || true
+
+    # JWT bearer + Auth-Token bogus probes.
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.bm90LWEtcmVhbC1qd3Q.signature' \
+        "$base/halpeople" >/dev/null || true
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -H 'Auth-Token: bogus-token' "$base/halpeople" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # ACL — non-admin role evaluation. Inserts must use POST /acl.
+    # User passwords are sent plaintext; userPwdHasher bcrypts on insert.
+    # ------------------------------------------------------------------
+    log_fired PUT "$base/acl"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/acl" >/dev/null || true
+
+    local acl_rule
+    for acl_rule in \
+        '{"_id":"reader-get-halpeople","roles":["reader"],"predicate":"method(GET) and path-prefix[/halpeople] and qparams-whitelist[page, pagesize, filter, keys]"}' \
+        '{"_id":"reader-blacklist","roles":["reader"],"predicate":"method(GET) and path-prefix[/halpeople] and qparams-blacklist[secret, token]"}' \
+        '{"_id":"reader-self-equals","roles":["reader"],"predicate":"path-prefix[/halpeople] and equals[%U, reader]"}' \
+        '{"_id":"reader-localhost","roles":["reader"],"predicate":"path-prefix[/halpeople] and in[%h, {127.0.0.1, localhost}]"}' \
+        '{"_id":"writer-bson-whitelist","roles":["writer"],"predicate":"path-prefix[/halpeople] and (method(GET) or method(POST) or method(PATCH)) and bson-request-whitelist[name, age, _id, role]"}' \
+        '{"_id":"writer-bson-blacklist","roles":["writer"],"predicate":"path-prefix[/halpeople] and method(POST) and bson-request-blacklist[password, secret]"}' \
+        '{"_id":"writer-bson-contains","roles":["writer"],"predicate":"path-prefix[/halpeople] and method(POST) and bson-request-contains[name]"}'; do
+        log_fired POST "$base/acl"
+        curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+            -X POST "$base/acl" -d "$acl_rule" >/dev/null || true
+    done
+
+    log_fired GET "$base/acl"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/acl" >/dev/null || true
+
+    # Create non-admin users (plaintext passwords).
+    log_fired POST "$base/users"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/users" \
+        -d '{"_id":"reader","password":"reader-secret","roles":["reader"]}' >/dev/null || true
+    log_fired POST "$base/users"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X POST "$base/users" \
+        -d '{"_id":"writer","password":"writer-secret","roles":["writer"]}' >/dev/null || true
+
+    # Wait for the mongoAclAuthorizer cache TTL to refresh.
+    sleep 6
+
+    # Reader requests — drive predicate evaluator.
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -u reader:reader-secret "$base/halpeople?page=1&pagesize=5" >/dev/null || true
+    log_fired GET "$base/halpeople/alice"
+    curl -sS --max-time 5 -u reader:reader-secret "$base/halpeople/alice" >/dev/null || true
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -u reader:reader-secret "$base/halpeople?evil=true" >/dev/null || true
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -u reader:reader-secret "$base/halpeople?secret=leak&page=1" >/dev/null || true
+    log_fired DELETE "$base/halpeople/alice"
+    curl -sS --max-time 5 -u reader:reader-secret -X DELETE "$base/halpeople/alice" >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -u reader:reader-secret -H "$h_json" \
+        -X POST "$base/halpeople" -d '{"_id":"intruder","name":"X"}' >/dev/null || true
+    log_fired GET "$base/places"
+    curl -sS --max-time 5 -u reader:reader-secret "$base/places" >/dev/null || true
+
+    # Writer requests.
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -u writer:writer-secret -H "$h_json" \
+        -X POST "$base/halpeople" -d '{"_id":"writer-doc","name":"W","age":1,"role":"writer"}' >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -u writer:writer-secret -H "$h_json" \
+        -X POST "$base/halpeople" -d '{"_id":"writer-bad","name":"B","extra":"forbidden"}' >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -u writer:writer-secret -H "$h_json" \
+        -X POST "$base/halpeople" -d '{"_id":"writer-pw","name":"B","password":"x"}' >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -u writer:writer-secret -H "$h_json" \
+        -X POST "$base/halpeople" -d '{"_id":"writer-noname","age":1}' >/dev/null || true
+    log_fired PATCH "$base/halpeople/writer-doc"
+    curl -sS --max-time 5 -u writer:writer-secret -H "$h_json" \
+        -X PATCH "$base/halpeople/writer-doc" -d '{"$set":{"role":"writer"}}' >/dev/null || true
+    log_fired DELETE "$base/halpeople/writer-doc"
+    curl -sS --max-time 5 -u writer:writer-secret -X DELETE "$base/halpeople/writer-doc" >/dev/null || true
+
+    # Wrong-password probe — drives mongoRealmAuthenticator verify-fail.
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -u reader:wrongpassword "$base/halpeople" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Aggregation pipeline with variable interpolation.
+    # ------------------------------------------------------------------
+    log_fired PATCH "$base/halpeople/_meta"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+        -X PATCH "$base/halpeople/_meta" \
+        -d '{"aggrs":[{"uri":"older-than","type":"pipeline","stages":[{"_$match":{"age":{"_$gte":{"_$var":"min_age"}}}},{"_$count":"_count"}]}]}' >/dev/null || true
+    sleep 2
+    avars_25='%7B%22min_age%22:25%7D'
+    avars_50='%7B%22min_age%22:50%7D'
+    log_fired GET "$base/halpeople/_aggrs/older-than"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        "$base/halpeople/_aggrs/older-than?avars=${avars_25}" >/dev/null || true
+    log_fired GET "$base/halpeople/_aggrs/older-than"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        "$base/halpeople/_aggrs/older-than?avars=${avars_50}" >/dev/null || true
+    log_fired GET "$base/halpeople/_aggrs/older-than"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        "$base/halpeople/_aggrs/older-than" >/dev/null || true
+    log_fired GET "$base/halpeople/_aggrs/older-than"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        "$base/halpeople/_aggrs/older-than?avars=not-json" >/dev/null || true
+
+    # ------------------------------------------------------------------
+    # Additional ACL rules with @user.* / @request.* var set.
+  # ------------------------------------------------------------------
+  for acl_rule in \
+    '{"_id":"reader-roles-array","roles":["reader"],"predicate":"path-prefix[/halpeople] and equals[%U, @user.userid]"}' \
+    '{"_id":"reader-qparam-var","roles":["reader"],"predicate":"path-prefix[/halpeople] and qparams-contain[user]"}' \
+    '{"_id":"reader-qparam-size","roles":["reader"],"predicate":"path-prefix[/halpeople] and qparams-size[0, 5]"}'; do
+    log_fired POST "$base/acl"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+      -X POST "$base/acl" -d "$acl_rule" >/dev/null || true
+  done
+  sleep 6
+
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -u reader:reader-secret "$base/halpeople?user=reader&page=1" >/dev/null || true
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -u reader:reader-secret "$base/halpeople?a=1&b=2&c=3&d=4&e=5&f=6" >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # GraphQL with BSON scalar types.
+  # ------------------------------------------------------------------
+  log_fired POST "$base/halpeople"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/halpeople" \
+    -d '{"_id":"bson-doc","name":"BsonDoc","age":42,"score":{"$numberLong":"9999999999"},"price":{"$numberDecimal":"19.99"},"created":{"$date":"2024-01-15T10:00:00Z"},"oid":{"$oid":"507f1f77bcf86cd799439011"},"data":{"$binary":{"base64":"a2Vwbg==","subType":"00"}}}' >/dev/null || true
+
+  log_fired PUT "$base/gql-apps/bson-types"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X PUT "$base/gql-apps/bson-types" \
+    -d '{
+      "descriptor": { "name": "bson-types", "uri": "bson-types" },
+      "schema": "scalar BsonObjectId scalar BsonDecimal128 scalar BsonLong scalar BsonDate scalar BsonBinary type Query { docs: [Doc] doc(id: String!): Doc } type Doc { _id: String name: String age: Int score: BsonLong price: BsonDecimal128 created: BsonDate oid: BsonObjectId data: BsonBinary }",
+      "mappings": {
+        "Query": {
+          "docs": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": {} },
+          "doc": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": { "_id": { "$arg": "id" } }, "first": true }
+        }
+      }
+    }' >/dev/null || true
+  log_fired POST "$base/graphql/bson-types"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/bson-types" \
+    -d '{"query":"{ doc(id:\"bson-doc\") { _id name age score price created oid data } }"}' >/dev/null || true
+  log_fired POST "$base/graphql/bson-types"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/bson-types" \
+    -d '{"query":"{ docs { _id score price oid } }"}' >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # Transactions — session id + txn id come back in Location headers.
+  # ------------------------------------------------------------------
+  log_fired POST "$base/_sessions"
+  sess_loc=$(curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/_sessions" -i 2>/dev/null \
+    | awk 'BEGIN{IGNORECASE=1} /^Location:/{gsub(/[\r\n]/,""); print $2; exit}')
+  if [ -n "${sess_loc:-}" ]; then
+    sid="${sess_loc##*/}"
+    log_fired POST "$base/_sessions/${sid}/_txns"
+    txn_loc=$(curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/_sessions/${sid}/_txns" -i 2>/dev/null \
+      | awk 'BEGIN{IGNORECASE=1} /^Location:/{gsub(/[\r\n]/,""); print $2; exit}')
+    txn_id="${txn_loc##*/}"
+    log_fired GET "$base/_sessions/${sid}/_txns"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/_sessions/${sid}/_txns" >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+      -X POST "$base/halpeople?sid=${sid}&txn=${txn_id}" \
+      -d '{"_id":"in-txn-1","name":"InTxn1","age":11}' >/dev/null || true
+    log_fired PATCH "$base/halpeople/alice"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+      -X PATCH "$base/halpeople/alice?sid=${sid}&txn=${txn_id}" \
+      -d '{"$set":{"in_txn":true}}' >/dev/null || true
+    log_fired PATCH "$base/_sessions/${sid}/_txns/${txn_id}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+      -X PATCH "$base/_sessions/${sid}/_txns/${txn_id}" >/dev/null || true
+    log_fired GET "$base/halpeople/in-txn-1"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/in-txn-1" >/dev/null || true
+
+    log_fired POST "$base/_sessions/${sid}/_txns"
+    txn_loc2=$(curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/_sessions/${sid}/_txns" -i 2>/dev/null \
+      | awk 'BEGIN{IGNORECASE=1} /^Location:/{gsub(/[\r\n]/,""); print $2; exit}')
+    txn_id2="${txn_loc2##*/}"
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+      -X POST "$base/halpeople?sid=${sid}&txn=${txn_id2}" \
+      -d '{"_id":"in-txn-aborted","name":"WontExist"}' >/dev/null || true
+    log_fired DELETE "$base/_sessions/${sid}/_txns/${txn_id2}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+      -X DELETE "$base/_sessions/${sid}/_txns/${txn_id2}" >/dev/null || true
+    log_fired GET "$base/halpeople/in-txn-aborted"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/in-txn-aborted" >/dev/null || true
+  fi
+
+  # ------------------------------------------------------------------
+  # HAL on write responses — drives BulkResultRepresentationFactory.
+  # ------------------------------------------------------------------
+  log_fired POST "$base/halpeople"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -H 'Accept: application/hal+json' \
+    -X POST "$base/halpeople?rep=hal" \
+    -d '{"_id":"hal-post","name":"HalPost","age":1}' >/dev/null || true
+  log_fired PUT "$base/halpeople/hal-post"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -H 'Accept: application/hal+json' \
+    -X PUT "$base/halpeople/hal-post?rep=hal" \
+    -d '{"name":"HalPut","age":2}' >/dev/null || true
+  log_fired PATCH "$base/halpeople/hal-post"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -H 'Accept: application/hal+json' \
+    -X PATCH "$base/halpeople/hal-post?rep=hal&hal=full" \
+    -d '{"$set":{"age":3}}' >/dev/null || true
+  log_fired POST "$base/halpeople"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -H 'Accept: application/hal+json' \
+    -X POST "$base/halpeople?rep=hal" \
+    -d '[{"_id":"hal-b1","name":"B1"},{"_id":"hal-b2","name":"B2"}]' >/dev/null || true
+
+  # Aggregation with array + nested var interpolation.
+  log_fired PATCH "$base/halpeople/_meta"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X PATCH "$base/halpeople/_meta" \
+    -d '{"aggrs":[{"uri":"by-name-list","type":"pipeline","stages":[{"_$match":{"name":{"_$in":{"_$var":"names"}}}},{"_$count":"_count"}]}]}' >/dev/null || true
+  sleep 2
+  avars_arr='%7B%22names%22:%5B%22Alice%22,%22Bob%22%5D%7D'
+  log_fired GET "$base/halpeople/_aggrs/by-name-list"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+    "$base/halpeople/_aggrs/by-name-list?avars=${avars_arr}" >/dev/null || true
+  avars_nested='%7B%22cfg%22:%7B%22field%22:%22age%22,%22min%22:25%7D%7D'
+  log_fired PATCH "$base/halpeople/_meta"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X PATCH "$base/halpeople/_meta" \
+    -d '{"aggrs":[{"uri":"with-cfg","type":"pipeline","stages":[{"_$match":{"_$expr":{"_$gte":[{"_$var":"cfg.min"},25]}}}]}]}' >/dev/null || true
+  sleep 2
+  log_fired GET "$base/halpeople/_aggrs/with-cfg"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+    "$base/halpeople/_aggrs/with-cfg?avars=${avars_nested}" >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # /token grants — password / client_credentials / refresh_token.
+  # ------------------------------------------------------------------
+  log_fired POST "$base/token"
+  grant_pw_resp=$(curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+    -H 'Content-Type: application/x-www-form-urlencoded' \
+    -X POST "$base/token" \
+    -d 'grant_type=password&username=admin&password=secret&scope=read' 2>/dev/null || true)
+  valid_jwt=$(printf '%s' "$grant_pw_resp" | jq -r '.access_token // empty' 2>/dev/null || true)
+  log_fired POST "$base/token"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+    -H 'Content-Type: application/x-www-form-urlencoded' \
+    -X POST "$base/token" \
+    -d 'grant_type=client_credentials&client_id=admin&client_secret=secret' >/dev/null || true
+  log_fired POST "$base/token"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+    -H 'Content-Type: application/x-www-form-urlencoded' \
+    -X POST "$base/token" \
+    -d 'grant_type=refresh_token&refresh_token=ignored' >/dev/null || true
+  log_fired POST "$base/token"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+    -H 'Content-Type: application/x-www-form-urlencoded' \
+    -X POST "$base/token" -d 'grant_type=device_code' >/dev/null || true
+  log_fired POST "$base/token"
+  curl -sS --max-time 5 \
+    -H 'Content-Type: application/x-www-form-urlencoded' \
+    -X POST "$base/token" \
+    -d 'grant_type=password&username=admin&password=wrong' >/dev/null || true
+
+  if [ -n "${valid_jwt:-}" ]; then
+    log_fired GET "$base/halpeople"
+    curl -sS --max-time 5 -H "Authorization: Bearer $valid_jwt" \
+      "$base/halpeople" >/dev/null || true
+    log_fired GET "$base/halpeople/alice"
+    curl -sS --max-time 5 -H "Authorization: Bearer $valid_jwt" \
+      "$base/halpeople/alice" >/dev/null || true
+    log_fired POST "$base/halpeople"
+    curl -sS --max-time 5 -H "Authorization: Bearer $valid_jwt" \
+      -H "$h_json" \
+      -X POST "$base/halpeople" -d '{"_id":"jwt-doc","name":"JWT","age":1}' >/dev/null || true
+  fi
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 \
+    -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJ0ZXN0In0.signature' \
+    "$base/halpeople" >/dev/null || true
+
+  # OPTIONS preflight on /token + /graphql.
+  log_fired OPTIONS "$base/token"
+  curl -sS --max-time 5 -X OPTIONS \
+    -H 'Origin: https://example.com' \
+    -H 'Access-Control-Request-Method: POST' \
+    -H 'Access-Control-Request-Headers: content-type,authorization' \
+    "$base/token" >/dev/null || true
+  log_fired OPTIONS "$base/graphql"
+  curl -sS --max-time 5 -X OPTIONS \
+    -H 'Origin: https://example.com' \
+    -H 'Access-Control-Request-Method: POST' \
+    "$base/graphql" >/dev/null || true
+
+  # Accept-Encoding variants.
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Accept-Encoding: gzip' \
+    "$base/halpeople" -o /dev/null || true
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Accept-Encoding: deflate' \
+    "$base/halpeople" -o /dev/null || true
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Accept-Encoding: gzip, deflate, br' \
+    "$base/halpeople?pagesize=2" -o /dev/null || true
+
+  # Multiple Accept-Language.
+  log_fired GET "$base/halpeople/alice"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Accept-Language: en-US,en;q=0.9' \
+    "$base/halpeople/alice" >/dev/null || true
+
+  # URL pattern variants — drive MongoMountResolverImpl branches.
+ log_fired GET "$base/_size" + curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/_size" >/dev/null || true + log_fired GET "$base/_meta" + curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/_meta" >/dev/null || true + log_fired GET "$base/halpeople/" + curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/" >/dev/null || true + log_fired GET "$base//halpeople" + curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base//halpeople" >/dev/null || true + log_fired GET "$base/halpeople/alice/_meta" + curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/alice/_meta/" >/dev/null || true + + # /metrics format variants. + log_fired GET "$base/metrics" + curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \ + -H 'Accept: application/openmetrics-text; version=1.0.0; charset=utf-8' \ + "$base/metrics" >/dev/null || true + log_fired GET "$base/metrics" + curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \ + -H 'Accept: text/plain; version=0.0.4' "$base/metrics" >/dev/null || true + + # ------------------------------------------------------------------ + # GraphQL with INPUT-typed BSON scalars. 
+  # ------------------------------------------------------------------
+  log_fired PUT "$base/gql-apps/bson-types"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X PUT "$base/gql-apps/bson-types" \
+    -d '{
+      "descriptor": { "name": "bson-types", "uri": "bson-types" },
+      "schema": "scalar BsonObjectId scalar BsonDecimal128 scalar BsonLong scalar BsonDate scalar BsonBinary type Query { docs: [Doc] doc(id: String!): Doc byOid(oid: BsonObjectId!): [Doc] byScore(min: BsonLong!): [Doc] byPrice(min: BsonDecimal128!): [Doc] byCreated(after: BsonDate!): [Doc] } type Doc { _id: String name: String age: Int score: BsonLong price: BsonDecimal128 created: BsonDate oid: BsonObjectId data: BsonBinary }",
+      "mappings": {
+        "Query": {
+          "docs": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": {} },
+          "doc": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": { "_id": { "$arg": "id" } }, "first": true },
+          "byOid": { "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": { "oid": { "$arg": "oid" } } },
+          "byScore":{ "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": { "score": { "$gte": { "$arg": "min" } } } },
+          "byPrice":{ "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": { "price": { "$gte": { "$arg": "min" } } } },
+          "byCreated":{ "db": "'"${RESTHEART_DB}"'", "collection": "halpeople", "find": { "created": { "$gte": { "$arg": "after" } } } }
+        }
+      }
+    }' >/dev/null || true
+
+  log_fired POST "$base/graphql/bson-types"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/bson-types" \
+    -d '{"query":"query Q($id:BsonObjectId!){ byOid(oid:$id) { _id name } }","variables":{"id":"507f1f77bcf86cd799439011"}}' >/dev/null || true
+  log_fired POST "$base/graphql/bson-types"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/bson-types" \
+    -d '{"query":"query Q($m:BsonLong!){ byScore(min:$m) { _id score } }","variables":{"m":"100"}}' >/dev/null || true
+  log_fired POST "$base/graphql/bson-types"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/bson-types" \
+    -d '{"query":"query Q($m:BsonDecimal128!){ byPrice(min:$m) { _id price } }","variables":{"m":"9.99"}}' >/dev/null || true
+  log_fired POST "$base/graphql/bson-types"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/bson-types" \
+    -d '{"query":"query Q($d:BsonDate!){ byCreated(after:$d) { _id created } }","variables":{"d":"2020-01-01T00:00:00Z"}}' >/dev/null || true
+  log_fired POST "$base/graphql/bson-types"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/bson-types" \
+    -d '{"query":"{ byOid(oid:\"507f191e810c19729de860ea\") { _id } }"}' >/dev/null || true
+  log_fired POST "$base/graphql/bson-types"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/bson-types" \
+    -d '{"query":"query Q($id:BsonObjectId!){ byOid(oid:$id) { _id } }","variables":{"id":"not-a-valid-oid"}}' >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # More aggregation pipeline forms.
+  # ------------------------------------------------------------------
+  log_fired PATCH "$base/halpeople/_meta"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X PATCH "$base/halpeople/_meta" \
+    -d '{"aggrs":[
+      {"uri":"sort-by-age","type":"pipeline","stages":[{"_$sort":{"age":-1}},{"_$limit":5}]},
+      {"uri":"project-name-only","type":"pipeline","stages":[{"_$project":{"name":1,"_id":0}}]},
+      {"uri":"facet-multi","type":"pipeline","stages":[{"_$facet":{"young":[{"_$match":{"age":{"_$lt":30}}},{"_$count":"_count"}],"old":[{"_$match":{"age":{"_$gte":30}}},{"_$count":"_count"}]}}]},
+      {"uri":"lookup-self","type":"pipeline","stages":[{"_$lookup":{"from":"halpeople","localField":"_id","foreignField":"_id","as":"self"}}]}
+    ]}' >/dev/null || true
+  sleep 2
+  local agg_name
+  for agg_name in sort-by-age project-name-only facet-multi lookup-self; do
+    log_fired GET "$base/halpeople/_aggrs/${agg_name}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/_aggrs/${agg_name}" >/dev/null || true
+  done
+  log_fired GET "$base/halpeople/_aggrs"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/_aggrs" >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # Range requests on file binary.
+  # ------------------------------------------------------------------
+  log_fired PUT "$base/range_files.files"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/range_files.files" >/dev/null || true
+  printf 'keploy-coverage-range-test-payload-1234567890' > /tmp/restheart-cov-range.bin
+  log_fired POST "$base/range_files.files"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X POST "$base/range_files.files" \
+    -F 'file=@/tmp/restheart-cov-range.bin' \
+    -F 'metadata={"_id":"range-doc","kind":"range"};type=application/json' >/dev/null || true
+  rm -f /tmp/restheart-cov-range.bin
+  log_fired GET "$base/range_files.files/range-doc/binary"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Range: bytes=0-9' \
+    "$base/range_files.files/range-doc/binary" -o /dev/null || true
+  log_fired GET "$base/range_files.files/range-doc/binary"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Range: bytes=10-19' \
+    "$base/range_files.files/range-doc/binary" -o /dev/null || true
+  log_fired GET "$base/range_files.files/range-doc/binary"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H 'Range: bytes=99999-' \
+    "$base/range_files.files/range-doc/binary" -o /dev/null || true
+
+  log_fired DELETE "$base/token/no-such-user"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/token/no-such-user" >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # OAuth metadata endpoints + Digest auth probes.
+  # ------------------------------------------------------------------
+  log_fired GET "$base/.well-known/oauth-authorization-server"
+  curl -sS --max-time 5 "$base/.well-known/oauth-authorization-server" >/dev/null || true
+  log_fired GET "$base/.well-known/oauth-protected-resource"
+  curl -sS --max-time 5 "$base/.well-known/oauth-protected-resource" >/dev/null || true
+  log_fired GET "$base/.well-known/oauth-protected-resource/halpeople"
+  curl -sS --max-time 5 "$base/.well-known/oauth-protected-resource/halpeople" >/dev/null || true
+  log_fired GET "$base/.well-known/oauth-authorization-server"
+  curl -sS --max-time 5 -H 'X-Forwarded-Host: api.example.com' \
+    -H 'X-Forwarded-Proto: https' \
+    "$base/.well-known/oauth-authorization-server" >/dev/null || true
+  log_fired GET "$base/.well-known/oauth-protected-resource"
+  curl -sS --max-time 5 -H 'X-Forwarded-Host: api.example.com' \
+    -H 'X-Forwarded-Proto: https' \
+    "$base/.well-known/oauth-protected-resource" >/dev/null || true
+
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 \
+    -H 'Authorization: Digest username="admin", realm="RESTHeart Realm", nonce="abc", uri="/halpeople", response="def"' \
+    "$base/halpeople" >/dev/null || true
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -i \
+    -H 'Authorization: Digest username="admin"' \
+    "$base/halpeople" >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # ACL with `mongo` permission fields — drives the three permission
+  # interceptors (mongoPermissionFilters / mergeRequest /
+  # projectResponse).
+  # ------------------------------------------------------------------
+  log_fired POST "$base/acl"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/acl" \
+    -d '{
+      "_id":"reader-mongo-perms",
+      "roles":["reader"],
+      "predicate":"method(GET) and path-prefix[/halpeople]",
+      "mongo": {
+        "readFilter": { "name": { "$exists": true } },
+        "projectResponse": { "_etag": 0 },
+        "mergeRequest": { "lastReadAt": "@now" },
+        "allowManagementRequests": false,
+        "allowBulkPatch": false,
+        "allowBulkDelete": false
+      }
+    }' >/dev/null || true
+  log_fired POST "$base/acl"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/acl" \
+    -d '{
+      "_id":"writer-mongo-perms",
+      "roles":["writer"],
+      "predicate":"path-prefix[/halpeople] and (method(POST) or method(PATCH) or method(GET))",
+      "mongo": {
+        "writeFilter": { "role": { "$ne": "admin" } },
+        "readFilter": {},
+        "projectResponse": { "secret": 0 },
+        "mergeRequest": { "writtenBy": "@user.userid", "writtenAt": "@now" },
+        "allowManagementRequests": true,
+        "allowBulkPatch": true,
+        "allowBulkDelete": false
+      }
+    }' >/dev/null || true
+  sleep 6
+
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -u reader:reader-secret "$base/halpeople" >/dev/null || true
+  log_fired GET "$base/halpeople/alice"
+  curl -sS --max-time 5 -u reader:reader-secret "$base/halpeople/alice" >/dev/null || true
+
+  log_fired POST "$base/halpeople"
+  curl -sS --max-time 5 -u writer:writer-secret -H "$h_json" \
+    -X POST "$base/halpeople" \
+    -d '{"_id":"writer-perm-ok","name":"OK","age":1}' >/dev/null || true
+  log_fired POST "$base/halpeople"
+  curl -sS --max-time 5 -u writer:writer-secret -H "$h_json" \
+    -X POST "$base/halpeople" \
+    -d '{"_id":"writer-perm-bad","name":"Bad","role":"admin"}' >/dev/null || true
+  log_fired PATCH "$base/halpeople"
+  curl -sS --max-time 5 -u writer:writer-secret -H "$h_json" \
+    -X PATCH "$base/halpeople?filter=%7B%22name%22:%22OK%22%7D" \
+    -d '{"$set":{"role":"writer"}}' >/dev/null || true
+  log_fired DELETE "$base/halpeople"
+  curl -sS --max-time 5 -u writer:writer-secret \
+    -X DELETE "$base/halpeople?filter=%7B%22name%22:%22OK%22%7D" >/dev/null || true
+
+  # ACL extras — filterOperatorsBlacklist + propertiesBlacklist.
+  log_fired POST "$base/acl"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/acl" \
+    -d '{
+      "_id":"writer-extras",
+      "roles":["writer"],
+      "predicate":"path-prefix[/halpeople] and method(GET)",
+      "mongo": {
+        "filterOperatorsBlacklist": ["$where", "$expr", "$function"],
+        "propertiesBlacklist": ["password", "token", "secret"],
+        "writeFilter": {},
+        "readFilter": {}
+      }
+    }' >/dev/null || true
+  sleep 6
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -u writer:writer-secret \
+    "$base/halpeople?filter=%7B%22%24where%22:%221%3D%3D1%22%7D" >/dev/null || true
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -u writer:writer-secret \
+    "$base/halpeople?filter=%7B%22%24expr%22:%7B%22%24eq%22:%5B%22%24age%22,%2230%22%5D%7D%7D" >/dev/null || true
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -u writer:writer-secret \
+    "$base/halpeople?keys=%7B%22password%22:1%7D" >/dev/null || true
+  log_fired GET "$base/halpeople"
+  curl -sS --max-time 5 -u writer:writer-secret \
+    "$base/halpeople?filter=%7B%22age%22:%7B%22%24gte%22:1%7D%7D" >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # Multiple collections + databases — drives MongoMountResolverImpl.
+  # ------------------------------------------------------------------
+  local coll encoded
+  for coll in coll_a coll_b coll_with_dashes coll.with.dots; do
+    encoded=$(printf '%s' "$coll" | sed 's/\./%2E/g')
+    log_fired PUT "$base/${encoded}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/$encoded" >/dev/null || true
+    log_fired POST "$base/${encoded}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+      -X POST "$base/$encoded" -d '{"_id":"d1","v":1}' >/dev/null || true
+    log_fired GET "$base/${encoded}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/$encoded" >/dev/null || true
+    log_fired GET "$base/${encoded}/_size"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/$encoded/_size" >/dev/null || true
+    log_fired DELETE "$base/${encoded}/d1"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/$encoded/d1" >/dev/null || true
+  done
+
+  local db_name d_etag t_etag
+  for db_name in db_alpha db_beta; do
+    log_fired PUT "$base/${db_name}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/$db_name" >/dev/null || true
+    log_fired PUT "$base/${db_name}/things"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X PUT "$base/$db_name/things" >/dev/null || true
+    log_fired POST "$base/${db_name}/things"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+      -X POST "$base/$db_name/things" -d '{"_id":"x","v":1}' >/dev/null || true
+    log_fired GET "$base/${db_name}/things"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/$db_name/things" >/dev/null || true
+    d_etag=$(curl -sSI --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/$db_name" 2>/dev/null \
+      | awk 'BEGIN{IGNORECASE=1} /^ETag:/{gsub(/[\r\n"]/,"",$2); print $2; exit}')
+    t_etag=$(curl -sSI --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/$db_name/things" 2>/dev/null \
+      | awk 'BEGIN{IGNORECASE=1} /^ETag:/{gsub(/[\r\n"]/,"",$2); print $2; exit}')
+    if [ -n "${t_etag:-}" ]; then
+      log_fired DELETE "$base/${db_name}/things"
+      curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        -H "If-Match: ${t_etag}" -X DELETE "$base/$db_name/things" >/dev/null || true
+    fi
+    if [ -n "${d_etag:-}" ]; then
+      log_fired DELETE "$base/${db_name}"
+      curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" \
+        -H "If-Match: ${d_etag}" -X DELETE "$base/$db_name" >/dev/null || true
+    fi
+  done
+
+  # ------------------------------------------------------------------
+  # More aggregations + GraphQL alias / fragments / multi-op.
+  # ------------------------------------------------------------------
+  log_fired PATCH "$base/halpeople/_meta"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X PATCH "$base/halpeople/_meta" \
+    -d '{"aggrs":[
+      {"uri":"group-by-tag","type":"pipeline","stages":[{"_$unwind":"$tags"},{"_$group":{"_id":"$tags","count":{"_$sum":1}}}]},
+      {"uri":"sort-asc","type":"pipeline","stages":[{"_$sort":{"_id":1}}]},
+      {"uri":"limit-3","type":"pipeline","stages":[{"_$limit":3}]}
+    ]}' >/dev/null || true
+  sleep 2
+  for agg_name in group-by-tag sort-asc limit-3; do
+    log_fired GET "$base/halpeople/_aggrs/${agg_name}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" "$base/halpeople/_aggrs/${agg_name}" >/dev/null || true
+  done
+
+  log_fired POST "$base/graphql/halpeople"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/halpeople" \
+    -d '{"query":"{ first: people { _id name } second: people { _id age } }"}' >/dev/null || true
+  log_fired POST "$base/graphql/halpeople"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/halpeople" \
+    -d '{"query":"fragment P on Person { _id name age } query { people { ...P } }"}' >/dev/null || true
+  log_fired POST "$base/graphql/halpeople"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/halpeople" \
+    -d '{"query":"query A { people { _id } } query B { people { name } }","operationName":"B"}' >/dev/null || true
+  log_fired POST "$base/graphql/halpeople"
+  curl -sS --max-time 8 -H "$RESTHEART_ADMIN_AUTH" -H "$h_json" \
+    -X POST "$base/graphql/halpeople" \
+    -d '{"query":"query Q($id:String){ person(id:$id) { _id } }","variables":{"id":null}}' >/dev/null || true
+
+  # ------------------------------------------------------------------
+  # Cleanup — drop the non-admin users + ACL rules created above.
+  # ------------------------------------------------------------------
+  log_fired DELETE "$base/users/reader"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/users/reader" >/dev/null || true
+  log_fired DELETE "$base/users/writer"
+  curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/users/writer" >/dev/null || true
+  local rule_id
+  for rule_id in reader-get-halpeople reader-blacklist reader-self-equals \
+      reader-localhost writer-bson-whitelist writer-bson-blacklist \
+      writer-bson-contains reader-roles-array reader-qparam-var \
+      reader-qparam-size reader-mongo-perms writer-mongo-perms writer-extras; do
+    log_fired DELETE "$base/acl/${rule_id}"
+    curl -sS --max-time 5 -H "$RESTHEART_ADMIN_AUTH" -X DELETE "$base/acl/$rule_id" >/dev/null || true
+  done
 }
 
 # RESTHeart's routes are pattern-mount based, not file-system
-# based. The denominator is curated here from the upstream docs +
-# the routes the lane intends to exercise. Update this list when
-# adding new traffic to record-traffic so the coverage stays in
-# lockstep.
+# based. The denominator below enumerates every (method, route)
+# tuple that restheart_record_traffic fires. Update this list when
+# adding new traffic so the coverage stays in lockstep.
 restheart_list_routes() {
   cat <<'ROUTES'
 GET /
+GET /ping
+GET /metrics
+GET /health/db
+GET /logout
+POST /logout
+GET /roles/{name}
+GET /token
+GET /token/{name}
+POST /token
+POST /token/{name}
+DELETE /token/{name}
+GET /_size
+GET /_meta
+POST /_sessions
+GET /_sessions/{sid}
+GET /_sessions/{sid}/_txns
+POST /_sessions/{sid}/_txns
+PATCH /_sessions/{sid}/_txns/{txnid}
+DELETE /_sessions/{sid}/_txns/{txnid}
+GET /ic
+POST /ic
+POST /csv
+POST /graphql
+GET /graphql
+POST /graphql/{appname}
+OPTIONS /graphql
+OPTIONS /token
+GET /.well-known/oauth-authorization-server
+GET /.well-known/oauth-protected-resource
+GET /.well-known/oauth-protected-resource/{name}
 GET /{db}
 PUT /{db}
 DELETE /{db}
 GET /{db}/_meta
+GET /{db}/_size
 GET /{db}/{coll}
 PUT /{db}/{coll}
-DELETE /{db}/{coll}
 POST /{db}/{coll}
+PATCH /{db}/{coll}
+DELETE /{db}/{coll}
+TRACE /{db}/{coll}
+OPTIONS /{db}/{coll}
+GET /{db}/{coll}/
 GET /{db}/{coll}/{docid}
 PUT /{db}/{coll}/{docid}
 PATCH /{db}/{coll}/{docid}
 DELETE /{db}/{coll}/{docid}
+GET /{db}/{coll}/{docid}/binary
+GET /{db}/{coll}/{docid}/_meta
 GET /{db}/{coll}/_size
-GET /{db}/{coll}/_aggrs/{name}
+GET /{db}/{coll}/_meta
+PUT /{db}/{coll}/_meta
+PATCH /{db}/{coll}/_meta
 GET /{db}/{coll}/_indexes
+PUT /{db}/{coll}/_indexes/{name}
+DELETE /{db}/{coll}/_indexes/{name}
+GET /{db}/{coll}/_aggrs
+GET /{db}/{coll}/_aggrs/{name}
+GET /{db}/{coll}/_streams
+GET /{db}/{coll}/_streams/{name}
 ROUTES
 }

From 4d02c6d104c57848d50d1b96f13defebbd3e8653 Mon Sep 17 00:00:00 2001
From: Akash Kumar
Date: Fri, 1 May 2026 11:45:10 +0530
Subject: [PATCH 3/7] ci(restheart-mongo): add per-sample coverage gate workflow
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Adds .github/workflows/restheart-mongo.yml plus the helper
.github/workflows/scripts/run-and-measure.sh, modeled on the
doccano-django sample's coverage gate.
* paths-scoped trigger: pull_request and push-to-main both filter on
  `restheart-mongo/**` and `.github/workflows/restheart-mongo.yml`, so
  changes to other samples in this repo do not trigger this workflow
  (and vice versa).

* Three jobs: build-coverage (PR HEAD), release-coverage (PR base with
  first-PR bootstrap escape hatch), and coverage-gate that fails the
  PR if coverage drops more than COVERAGE_THRESHOLD percentage points
  (default 1.0pp, override via repo variable
  RESTHEART_COVERAGE_THRESHOLD).

* Helper script brings the sample up via its own docker-compose.yml,
  waits for the RESTHeart listener (treating both 200 and 401 as ready
  since `/` requires auth), runs flow.sh bootstrap → record-traffic →
  coverage, and emits the parsed percentage onto $GITHUB_OUTPUT for
  the gate job.

* Isolated from the enterprise lane: the enterprise PR pipeline
  (.woodpecker/restheart-linux.yml) calls `flow.sh coverage` only
  informationally and does not gate on it. The gate lives only here,
  on the sample repo, so coverage regressions surface on PRs that
  touch this sample without coupling enterprise CI to the route table.

Signed-off-by: Akash Kumar
---
 .github/workflows/restheart-mongo.yml        | 197 +++++++++++++++++++
 .github/workflows/scripts/run-and-measure.sh | 100 ++++++++++
 2 files changed, 297 insertions(+)
 create mode 100644 .github/workflows/restheart-mongo.yml
 create mode 100755 .github/workflows/scripts/run-and-measure.sh

diff --git a/.github/workflows/restheart-mongo.yml b/.github/workflows/restheart-mongo.yml
new file mode 100644
index 00000000..6d60a9b5
--- /dev/null
+++ b/.github/workflows/restheart-mongo.yml
@@ -0,0 +1,197 @@
+# restheart-mongo sample CI — keploy-independent end-to-end smoke +
+# coverage gate.
+#
+# Triggers ONLY on changes under restheart-mongo/ (or this workflow
+# file). Other samples in this repo have their own orthogonal CI;
+# gating the whole repo on every restheart change would slow them
+# all down for no benefit.
+# +# What it gates: +# * `release-coverage` — checks out the PR's base branch (main) +# and runs the sample end-to-end: docker compose up, bootstrap +# the admin db + collections, drive flow.sh record-traffic with +# the per-call audit log enabled, capture the route-coverage +# percentage from `flow.sh coverage`. This is the baseline. +# * `build-coverage` — same end-to-end against the PR's HEAD ref. +# * `coverage-gate` — fails the PR if `build`'s coverage drops +# more than COVERAGE_THRESHOLD percentage points below +# `release`. Default threshold is 1.0pp; override via repo +# variable `RESTHEART_COVERAGE_THRESHOLD` for a tighter or +# looser bar. +# +# On push to main, only `build-coverage` runs (no baseline to +# compare against — main IS the baseline). +# +# Standards-aligned choices: +# * `paths:` filter on both push and pull_request triggers — the +# canonical GH Actions way to scope a workflow to one +# subdirectory. +# * Job outputs (steps..outputs.coverage → needs..outputs) +# to thread the captured percentage between jobs. +# * `concurrency:` cancel-in-progress on the same ref so a stale +# run doesn't waste runner minutes. +# * actions/upload-artifact for the human-readable +# coverage_report.txt — reviewers can inspect missing routes +# directly from the PR's "checks" tab. +# * marocchino/sticky-pull-request-comment for the PR-side diff +# comment. Pinned-by-header so successive runs update the same +# comment instead of fanning out. +# * The compare step is plain bash + python3 (no external +# coverage service). The sample's coverage is route-based +# (single percentage), so the gate is a 3-line subtraction. +# +# Sample is genuinely keploy-independent here: the workflow uses +# flow.sh's $RESTHEART_FIRED_ROUTES_FILE per-call audit log as its +# numerator source, not a keploy recording. 
The lane scripts in +# keploy/integrations and keploy/enterprise consume the same +# flow.sh, but use the keploy/test-set-*/tests/*.yaml tree as +# their numerator (authoritative — only calls keploy actually +# CAPTURED count). Both modes are wired into +# `flow.sh::restheart_list_recorded_routes`. +name: restheart-mongo sample + +on: + pull_request: + paths: + - 'restheart-mongo/**' + - '.github/workflows/restheart-mongo.yml' + push: + branches: [main] + paths: + - 'restheart-mongo/**' + - '.github/workflows/restheart-mongo.yml' + workflow_dispatch: {} + +concurrency: + group: restheart-mongo-${{ github.ref }} + cancel-in-progress: true + +env: + COVERAGE_THRESHOLD: ${{ vars.RESTHEART_COVERAGE_THRESHOLD || '1.0' }} + +jobs: + build-coverage: + name: build (current ref) coverage + runs-on: ubuntu-latest + timeout-minutes: 20 + outputs: + coverage: ${{ steps.measure.outputs.coverage }} + steps: + - uses: actions/checkout@v4 + - id: measure + name: Run sample end-to-end + measure coverage + working-directory: restheart-mongo + env: + RESTHEART_FIRED_ROUTES_FILE: ${{ runner.temp }}/fired-routes-build.log + RESTHEART_PHASE: ci-build + run: ../.github/workflows/scripts/run-and-measure.sh + + - name: Upload coverage report + if: always() + uses: actions/upload-artifact@v4 + with: + name: coverage-build + path: restheart-mongo/coverage_report.txt + if-no-files-found: warn + + release-coverage: + if: github.event_name == 'pull_request' + name: release (base ref) coverage + runs-on: ubuntu-latest + timeout-minutes: 20 + outputs: + coverage: ${{ steps.measure.outputs.coverage || steps.empty-baseline.outputs.coverage }} + sample-existed: ${{ steps.detect.outputs.sample-existed }} + steps: + - uses: actions/checkout@v4 + with: + ref: ${{ github.event.pull_request.base.ref }} + + # First-PR bootstrap escape hatch: the very PR that + # introduces the restheart-mongo/ sample has no baseline + # (restheart-mongo/ doesn't exist on the base ref). 
Detect + # that and short-circuit to coverage=0; the gate then + # treats build's coverage as the new baseline and trivially + # passes for any percentage > 0. After the introducing PR + # merges, every subsequent PR has a real baseline to diff + # against. + - id: detect + name: Detect baseline presence + run: | + if [ -d restheart-mongo ] && [ -x restheart-mongo/flow.sh ]; then + echo "sample-existed=true" >>"$GITHUB_OUTPUT" + echo "Sample exists on base ref — running full measurement." + else + echo "sample-existed=false" >>"$GITHUB_OUTPUT" + echo "No restheart-mongo/ on base ref — first-PR bootstrap; baseline coverage treated as 0%." + fi + + - id: measure + name: Run sample end-to-end + measure coverage + if: steps.detect.outputs.sample-existed == 'true' + working-directory: restheart-mongo + env: + RESTHEART_FIRED_ROUTES_FILE: ${{ runner.temp }}/fired-routes-release.log + RESTHEART_PHASE: ci-release + run: ../.github/workflows/scripts/run-and-measure.sh + + - id: empty-baseline + name: Emit zero baseline (first-PR bootstrap) + if: steps.detect.outputs.sample-existed != 'true' + run: echo "coverage=0.0" >>"$GITHUB_OUTPUT" + + - name: Upload coverage report + if: always() && steps.detect.outputs.sample-existed == 'true' + uses: actions/upload-artifact@v4 + with: + name: coverage-release + path: restheart-mongo/coverage_report.txt + if-no-files-found: warn + + coverage-gate: + if: github.event_name == 'pull_request' + name: coverage gate + needs: [build-coverage, release-coverage] + runs-on: ubuntu-latest + steps: + - name: Compare build vs release + env: + BUILD: ${{ needs.build-coverage.outputs.coverage }} + RELEASE: ${{ needs.release-coverage.outputs.coverage }} + THRESHOLD: ${{ env.COVERAGE_THRESHOLD }} + BASE_REF: ${{ github.event.pull_request.base.ref }} + run: | + set -Eeuo pipefail + if [ -z "${BUILD:-}" ] || [ -z "${RELEASE:-}" ]; then + echo "::error::missing coverage outputs — build='${BUILD:-}' release='${RELEASE:-}'" + exit 1 + fi + drop=$(python3 
-c "print(round(${RELEASE} - ${BUILD}, 2))") + echo "Release (${BASE_REF}): ${RELEASE}%" + echo "Build (this PR): ${BUILD}%" + echo "Drop: ${drop}pp (threshold ${THRESHOLD}pp)" + if python3 -c "import sys; sys.exit(0 if (${RELEASE} - ${BUILD}) > ${THRESHOLD} else 1)"; then + echo "::error::restheart-mongo coverage dropped from ${RELEASE}% → ${BUILD}% (-${drop}pp), exceeding the ${THRESHOLD}pp threshold." + echo "Suggested actions:" + echo " * Add curl(s) to flow.sh::restheart_record_traffic that exercise the routes you changed/touched." + echo " * If the route(s) was intentionally retired, drop it from restheart-mongo/flow.sh::restheart_list_routes' SCOPE_PATHS too so it's removed from the denominator." + exit 1 + fi + echo "OK — coverage delta within ${THRESHOLD}pp threshold." + + - name: Sticky PR comment + if: ${{ !cancelled() }} + uses: marocchino/sticky-pull-request-comment@v2 + with: + header: restheart-mongo-coverage + message: | + ### restheart-mongo sample coverage + + | ref | coverage | + |---|---| + | base (`${{ github.event.pull_request.base.ref }}`) | **${{ needs.release-coverage.outputs.coverage }}%** | + | this PR | **${{ needs.build-coverage.outputs.coverage }}%** | + + Threshold: PR may not drop coverage by more than **${{ env.COVERAGE_THRESHOLD }}pp**. Override per-repo via the `RESTHEART_COVERAGE_THRESHOLD` actions variable. + + Coverage measures the RESTHeart 9.x REST surface (`/{db}/{coll}` CRUD + `_aggrs/{name}` + `_size` + `_meta` + `_indexes` + `_streams/{name}` + `/graphql` + `/graphql/{appname}` + `/{db}/{coll}.files` + `/acl` + `/users` + `/tokens` + sessions/transactions + `/ic` + `/csv` + metrics + OAuth) that `flow.sh::restheart_record_traffic` exercises against the running backend. Reports are attached as artifacts on each job ("coverage-build" / "coverage-release"). 
diff --git a/.github/workflows/scripts/run-and-measure.sh b/.github/workflows/scripts/run-and-measure.sh new file mode 100755 index 00000000..eaea7dbc --- /dev/null +++ b/.github/workflows/scripts/run-and-measure.sh @@ -0,0 +1,100 @@ +#!/usr/bin/env bash +# +# run-and-measure.sh — bring restheart-mongo up via the sample's +# compose, run flow.sh bootstrap + record-traffic with the +# per-call audit log enabled, run flow.sh coverage, and emit +# `coverage=PCT` onto $GITHUB_OUTPUT for the downstream +# coverage-gate job. +# +# Called from .github/workflows/restheart-mongo.yml's +# build-coverage and release-coverage jobs (one per ref under +# comparison). Both jobs source the same script so the +# measurement is identical across refs — any drift in the +# numerator definition would otherwise produce a misleading +# delta. +# +# Inputs (all from the workflow env): +# RESTHEART_FIRED_ROUTES_FILE — per-call audit log path; passed +# through to flow.sh so its +# record-traffic loop logs each +# (METHOD, URL) pair, and so its +# coverage subcommand uses that +# file as the standalone +# numerator. +# RESTHEART_PHASE — label spliced into the project +# name so build vs. release runs +# don't collide on volume names +# (compose project naming inside +# the GH runner is per-job +# anyway, but RESTHEART_PHASE +# shows up in the test fixtures +# and is useful for diffing logs). +# GITHUB_OUTPUT — standard GH Actions sink for +# step outputs. +set -Eeuo pipefail + +# Compose-substituted variables. Defaults match the sample's +# docker-compose.yml so a local invocation of this script (no +# overrides) reproduces what CI runs. 
+export RESTHEART_APP_CONTAINER="${RESTHEART_APP_CONTAINER:-restheart_app}" +export RESTHEART_MONGO_CONTAINER="${RESTHEART_MONGO_CONTAINER:-restheart_mongo}" +export RESTHEART_APP_PORT="${RESTHEART_APP_PORT:-8080}" +export RESTHEART_MONGO_IP="${RESTHEART_MONGO_IP:-172.36.0.10}" +export RESTHEART_NETWORK_SUBNET="${RESTHEART_NETWORK_SUBNET:-172.36.0.0/24}" + +# RESTHeart 9.x ships with admin/secret as the default +# bootstrapped principal. flow.sh reads this header for every +# call, so exporting it here keeps the standalone CI run aligned +# with the keploy lanes (which pass the same value through). +export RESTHEART_ADMIN_AUTH="${RESTHEART_ADMIN_AUTH:-Basic YWRtaW46c2VjcmV0}" + +: "${RESTHEART_FIRED_ROUTES_FILE:?RESTHEART_FIRED_ROUTES_FILE must be set by the workflow}" + +# Reset audit log for this run; otherwise a prior run's entries +# would inflate the numerator on a re-trigger. +: >"$RESTHEART_FIRED_ROUTES_FILE" + +# Single-phase bootstrap: RESTHeart embeds its own admin +# principal at first boot, so there's no separate "seed admin +# user" stage the way doccano needs. compose up → wait for app +# port → flow.sh bootstrap (PUTs the db + record-traffic's +# collections) → flow.sh record-traffic → flow.sh coverage. +docker compose up -d + +# Wait for the backend to start serving. Per the sample's +# restheart_wait_for_app, both 200 AND 401 are success signals +# — RESTHeart returns 401 on `/` until you authenticate, but +# 401 still proves the HTTP listener and the auth filter are +# both up. Anything before that (000 / connection refused) is +# pre-listen. +for i in $(seq 1 120); do + code=$(curl -sS -o /dev/null -w '%{http_code}' \ + "http://127.0.0.1:${RESTHEART_APP_PORT}/" 2>/dev/null || echo "") + if [ "$code" = "200" ] || [ "$code" = "401" ]; then break; fi + sleep 2 +done + +bash flow.sh bootstrap 240 + +# Drive traffic. flow.sh::restheart_record_traffic gates on +# restheart_wait_for_app internally, so this won't fire curls +# at a half-booted backend. 
+bash flow.sh record-traffic + +# Coverage report — uses RESTHEART_FIRED_ROUTES_FILE as numerator +# since no keploy/test-set-* tree exists in the standalone case. +COVERAGE_REPORT_FILE="$PWD/coverage_report.txt" bash flow.sh coverage + +# Pull the percentage out of the report's `Covered N/M (XX.X%)` +# line. Anchored on the parenthesised form so a future change to +# the report's prose doesn't break the parse. +pct=$(grep -oE '\([0-9]+\.[0-9]+%\)' coverage_report.txt | head -1 | tr -d '()%') +if [ -z "$pct" ]; then + echo "::error::Could not parse coverage percentage from coverage_report.txt" + cat coverage_report.txt || true + exit 1 +fi +echo "coverage=${pct}" >>"$GITHUB_OUTPUT" +echo "coverage: ${pct}% (audit log: $RESTHEART_FIRED_ROUTES_FILE)" + +docker compose down -v --remove-orphans From edc9f0a4df625a2ff71b3f9187bd469d017fd7e6 Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 12:02:50 +0530 Subject: [PATCH 4/7] ci(restheart-mongo): dump container logs on app wait timeout build-coverage on PR #134 hung 8 min when restheart never bound on port 8080 (last_code=000). The helper script silently looped through both the wait and flow.sh bootstrap timers, then the gate job aborted without surfacing why restheart didn't start. Adding an explicit fail-fast + docker logs dump after the 240s wait so a future failure surfaces the restheart Java traceback (or the mongo connection error, or whatever else). 
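The shape of the fix, as a standalone sketch: poll a readiness probe with a bounded retry budget, then fail fast with diagnostics instead of letting later steps hang. The probe below is a stub standing in for the real curl readiness check; all names are illustrative.

```shell
# Bounded wait + fail-fast pattern. probe() is a stub that
# becomes "ready" on the 3rd poll; the real script curls the
# RESTHeart listener and accepts 200/401.
polls=0
probe() { [ "$polls" -ge 3 ]; }
ready=false
for i in $(seq 1 10); do
  polls=$((polls + 1))
  if probe; then ready=true; break; fi
done
if [ "$ready" = "true" ]; then
  echo "ready after ${polls} polls"
else
  # The real script dumps `docker logs` for both containers here.
  echo "::error::backend never became ready"
fi
```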
Signed-off-by: Akash Kumar --- .github/workflows/scripts/run-and-measure.sh | 12 ++++++++++++ 1 file changed, 12 insertions(+) diff --git a/.github/workflows/scripts/run-and-measure.sh b/.github/workflows/scripts/run-and-measure.sh index eaea7dbc..1350047f 100755 --- a/.github/workflows/scripts/run-and-measure.sh +++ b/.github/workflows/scripts/run-and-measure.sh @@ -74,6 +74,18 @@ for i in $(seq 1 120); do sleep 2 done +if [ "$code" != "200" ] && [ "$code" != "401" ]; then + echo "::error::restheart did not bind on port ${RESTHEART_APP_PORT} within 240s (last code: ${code:-empty})" + echo "----- restheart container logs -----" + docker logs "${RESTHEART_APP_CONTAINER}" --tail 200 2>&1 || true + echo "----- mongo container logs -----" + docker logs "${RESTHEART_MONGO_CONTAINER}" --tail 100 2>&1 || true + echo "----- docker compose ps -----" + docker compose ps || true + docker compose down -v --remove-orphans || true + exit 1 +fi + bash flow.sh bootstrap 240 # Drive traffic. flow.sh::restheart_record_traffic gates on From ea3fcec860876d618b2057a9f7d3a68b2d086ae9 Mon Sep 17 00:00:00 2001 From: Akash Kumar Date: Fri, 1 May 2026 13:21:45 +0530 Subject: [PATCH 5/7] feat(restheart-mongo): real Java line coverage via JaCoCo overlay MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Replaces the prior API-route-surface "coverage" (counting fired routes / curated route table) with actual JaCoCo line coverage of the RESTHeart 9.x JVM under traffic. Architecture: - `Dockerfile.coverage` is a multi-stage build: stage 1 (alpine) fetches JaCoCo 0.8.13 (jacocoagent.jar + jacococli.jar), stage 2 layers them into the upstream restheart image (which is distroless — no shell, no curl, so jars must be pulled in a builder stage and COPY'd over). - `docker-compose.coverage.yml` is an OVERLAY: applied via `-f docker-compose.yml -f docker-compose.coverage.yml`. It sets JAVA_TOOL_OPTIONS=-javaagent:.../jacocoagent.jar=output=tcpserver,... 
  so JaCoCo attaches at JVM start and listens on port 6300. The
  base `Dockerfile` and `docker-compose.yml` are untouched, so
  keploy/integrations and keploy/enterprise CI lanes consume the
  base compose and pay zero JaCoCo cost (the agent rewrites
  bytecode at class-load, adding ~5-10% per-call overhead that
  would slow record/replay).
- `flow.sh::restheart_report_coverage` shells into a one-off
  coverage container to dump execution data over JaCoCo's TCP
  server and render an XML report against
  /opt/restheart/restheart.jar. When called against the base
  image (no overlay) it prints "INFO: ... uninstrumented" and
  exits 0, so enterprise lanes' `flow.sh coverage || true`
  informational calls keep working.

Also fixes a pre-existing config bug in the base
docker-compose.yml's RHO env var: RESTHeart's override syntax
separates key->value pairs with ';' (as the upstream image's
default RHO does); the prior YAML-folded version joined them with
',', which RESTHeart parsed as part of the connection-string
value, so the override was ignored and /http-listener/host stayed
at its localhost default, leaving the HTTP listener unreachable
through the host port mapping. The base compose now uses ';' AND
explicitly overrides /http-listener/host -> "0.0.0.0".

Removed:

- `restheart_list_routes` (curated route-table denominator).
- `restheart_list_recorded_routes` (keploy-tests / fired-routes
  reader).
- The legacy route-surface `restheart_report_coverage` body.
- The `list-routes` subcommand.

Validated locally: the helper emitted `coverage=52.3` onto
GITHUB_OUTPUT against a clean stack (1663/3182 lines covered in
restheart.jar; INSTRUCTION coverage 50.8%).
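The report-summing step can be sketched against JaCoCo's XML counter shape. The two-counter report below is fabricated for illustration (real ones are rendered by jacococli against restheart.jar); the emitted line matches the `(XX.X%)` form the helper script greps for.

```shell
# Sum a JaCoCo XML LINE counter into the `Covered N/M (XX.X%)`
# line. The here-doc is a made-up report, not real output.
xml=$(cat <<'XML'
<report name="restheart">
  <counter type="INSTRUCTION" missed="1540" covered="1590"/>
  <counter type="LINE" missed="40" covered="60"/>
</report>
XML
)
line=$(printf '%s' "$xml" | python3 -c '
import sys, xml.etree.ElementTree as ET
root = ET.fromstring(sys.stdin.read())
# Pick the LINE counter; INSTRUCTION/BRANCH counters sit alongside it.
c = next(e for e in root.iter("counter") if e.get("type") == "LINE")
covered, missed = int(c.get("covered")), int(c.get("missed"))
total = covered + missed
print(f"Covered {covered}/{total} ({100 * covered / total:.1f}%)")
')
echo "$line"
```

For the fabricated report this prints `Covered 60/100 (60.0%)`.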
Signed-off-by: Akash Kumar --- .github/workflows/restheart-mongo.yml | 2 +- .github/workflows/scripts/run-and-measure.sh | 100 ++++------ restheart-mongo/.gitignore | 2 + restheart-mongo/Dockerfile.coverage | 43 ++++ restheart-mongo/docker-compose.coverage.yml | 31 +++ restheart-mongo/docker-compose.yml | 11 +- restheart-mongo/flow.sh | 198 ++++++++----------- 7 files changed, 196 insertions(+), 191 deletions(-) create mode 100644 restheart-mongo/.gitignore create mode 100644 restheart-mongo/Dockerfile.coverage create mode 100644 restheart-mongo/docker-compose.coverage.yml diff --git a/.github/workflows/restheart-mongo.yml b/.github/workflows/restheart-mongo.yml index 6d60a9b5..91062c2f 100644 --- a/.github/workflows/restheart-mongo.yml +++ b/.github/workflows/restheart-mongo.yml @@ -194,4 +194,4 @@ jobs: Threshold: PR may not drop coverage by more than **${{ env.COVERAGE_THRESHOLD }}pp**. Override per-repo via the `RESTHEART_COVERAGE_THRESHOLD` actions variable. - Coverage measures the RESTHeart 9.x REST surface (`/{db}/{coll}` CRUD + `_aggrs/{name}` + `_size` + `_meta` + `_indexes` + `_streams/{name}` + `/graphql` + `/graphql/{appname}` + `/{db}/{coll}.files` + `/acl` + `/users` + `/tokens` + sessions/transactions + `/ic` + `/csv` + metrics + OAuth) that `flow.sh::restheart_record_traffic` exercises against the running backend. Reports are attached as artifacts on each job ("coverage-build" / "coverage-release"). + Coverage is **Java line coverage** (JaCoCo 0.8.13) of the RESTHeart 9.x JVM under traffic — the bytecode `flow.sh::restheart_record_traffic` actually executes (REST CRUD + GraphQL + ACL + users + sessions/transactions + metrics + …). Instrumentation lives in a separate `Dockerfile.coverage` + `docker-compose.coverage.yml` overlay; the base `docker-compose.yml` consumed by keploy/integrations and keploy/enterprise CI lanes runs uninstrumented and pays zero JaCoCo cost. 
JaCoCo execution dumps + XML reports are attached as artifacts on each job (`coverage-build` / `coverage-release`). diff --git a/.github/workflows/scripts/run-and-measure.sh b/.github/workflows/scripts/run-and-measure.sh index 1350047f..741ddcf3 100755 --- a/.github/workflows/scripts/run-and-measure.sh +++ b/.github/workflows/scripts/run-and-measure.sh @@ -1,72 +1,43 @@ #!/usr/bin/env bash # -# run-and-measure.sh — bring restheart-mongo up via the sample's -# compose, run flow.sh bootstrap + record-traffic with the -# per-call audit log enabled, run flow.sh coverage, and emit -# `coverage=PCT` onto $GITHUB_OUTPUT for the downstream -# coverage-gate job. +# run-and-measure.sh — bring restheart-mongo up under the +# coverage overlay (JaCoCo agent attached via JAVA_TOOL_OPTIONS), +# run flow.sh bootstrap + record-traffic, dump JaCoCo execution +# data over the agent's TCP server, render a Java line-coverage +# report, and emit `coverage=PCT` onto $GITHUB_OUTPUT for the +# downstream coverage-gate job. # -# Called from .github/workflows/restheart-mongo.yml's -# build-coverage and release-coverage jobs (one per ref under -# comparison). Both jobs source the same script so the -# measurement is identical across refs — any drift in the -# numerator definition would otherwise produce a misleading -# delta. +# Coverage isolation contract: +# * Base `Dockerfile` and `docker-compose.yml` are untouched. +# * The overlay `Dockerfile.coverage` + `docker-compose.coverage.yml` +# attach JaCoCo and expose its TCP server. ONLY this script +# applies the overlay; keploy/integrations and keploy/enterprise +# CI lanes consume the base compose and pay zero JVM-instrument +# cost (jacocoagent adds ~5-10% per-call overhead). 
# -# Inputs (all from the workflow env): -# RESTHEART_FIRED_ROUTES_FILE — per-call audit log path; passed -# through to flow.sh so its -# record-traffic loop logs each -# (METHOD, URL) pair, and so its -# coverage subcommand uses that -# file as the standalone -# numerator. -# RESTHEART_PHASE — label spliced into the project -# name so build vs. release runs -# don't collide on volume names -# (compose project naming inside -# the GH runner is per-job -# anyway, but RESTHEART_PHASE -# shows up in the test fixtures -# and is useful for diffing logs). -# GITHUB_OUTPUT — standard GH Actions sink for -# step outputs. +# Inputs (from the workflow env): +# RESTHEART_PHASE — label for log diffing. +# GITHUB_OUTPUT — standard GH Actions sink for step outputs. set -Eeuo pipefail -# Compose-substituted variables. Defaults match the sample's -# docker-compose.yml so a local invocation of this script (no -# overrides) reproduces what CI runs. export RESTHEART_APP_CONTAINER="${RESTHEART_APP_CONTAINER:-restheart_app}" export RESTHEART_MONGO_CONTAINER="${RESTHEART_MONGO_CONTAINER:-restheart_mongo}" export RESTHEART_APP_PORT="${RESTHEART_APP_PORT:-8080}" export RESTHEART_MONGO_IP="${RESTHEART_MONGO_IP:-172.36.0.10}" export RESTHEART_NETWORK_SUBNET="${RESTHEART_NETWORK_SUBNET:-172.36.0.0/24}" - -# RESTHeart 9.x ships with admin/secret as the default -# bootstrapped principal. flow.sh reads this header for every -# call, so exporting it here keeps the standalone CI run aligned -# with the keploy lanes (which pass the same value through). 
export RESTHEART_ADMIN_AUTH="${RESTHEART_ADMIN_AUTH:-Basic YWRtaW46c2VjcmV0}" -: "${RESTHEART_FIRED_ROUTES_FILE:?RESTHEART_FIRED_ROUTES_FILE must be set by the workflow}" +mkdir -p coverage +chmod 777 coverage +sudo rm -rf coverage/jacoco.exec coverage/report.xml coverage/coverage_report.txt 2>/dev/null \ + || rm -rf coverage/jacoco.exec coverage/report.xml coverage/coverage_report.txt 2>/dev/null \ + || true -# Reset audit log for this run; otherwise a prior run's entries -# would inflate the numerator on a re-trigger. -: >"$RESTHEART_FIRED_ROUTES_FILE" +COMPOSE=(docker compose -f docker-compose.yml -f docker-compose.coverage.yml) -# Single-phase bootstrap: RESTHeart embeds its own admin -# principal at first boot, so there's no separate "seed admin -# user" stage the way doccano needs. compose up → wait for app -# port → flow.sh bootstrap (PUTs the db + record-traffic's -# collections) → flow.sh record-traffic → flow.sh coverage. -docker compose up -d +"${COMPOSE[@]}" up -d --build -# Wait for the backend to start serving. Per the sample's -# restheart_wait_for_app, both 200 AND 401 are success signals -# — RESTHeart returns 401 on `/` until you authenticate, but -# 401 still proves the HTTP listener and the auth filter are -# both up. Anything before that (000 / connection refused) is -# pre-listen. +# Both 200 and 401 are success signals. for i in $(seq 1 120); do code=$(curl -sS -o /dev/null -w '%{http_code}' \ "http://127.0.0.1:${RESTHEART_APP_PORT}/" 2>/dev/null || echo "") @@ -80,26 +51,21 @@ if [ "$code" != "200" ] && [ "$code" != "401" ]; then docker logs "${RESTHEART_APP_CONTAINER}" --tail 200 2>&1 || true echo "----- mongo container logs -----" docker logs "${RESTHEART_MONGO_CONTAINER}" --tail 100 2>&1 || true - echo "----- docker compose ps -----" - docker compose ps || true - docker compose down -v --remove-orphans || true + "${COMPOSE[@]}" down -v --remove-orphans || true exit 1 fi bash flow.sh bootstrap 240 - -# Drive traffic. 
flow.sh::restheart_record_traffic gates on -# restheart_wait_for_app internally, so this won't fire curls -# at a half-booted backend. bash flow.sh record-traffic -# Coverage report — uses RESTHEART_FIRED_ROUTES_FILE as numerator -# since no keploy/test-set-* tree exists in the standalone case. +# JaCoCo TCP-dump + report (no JVM stop needed). COVERAGE_REPORT_FILE="$PWD/coverage_report.txt" bash flow.sh coverage -# Pull the percentage out of the report's `Covered N/M (XX.X%)` -# line. Anchored on the parenthesised form so a future change to -# the report's prose doesn't break the parse. +if [ ! -f coverage_report.txt ]; then + echo "::error::flow.sh coverage produced no coverage_report.txt" + exit 1 +fi + pct=$(grep -oE '\([0-9]+\.[0-9]+%\)' coverage_report.txt | head -1 | tr -d '()%') if [ -z "$pct" ]; then echo "::error::Could not parse coverage percentage from coverage_report.txt" @@ -107,6 +73,6 @@ if [ -z "$pct" ]; then exit 1 fi echo "coverage=${pct}" >>"$GITHUB_OUTPUT" -echo "coverage: ${pct}% (audit log: $RESTHEART_FIRED_ROUTES_FILE)" +echo "coverage: ${pct}% (Java line coverage via JaCoCo)" -docker compose down -v --remove-orphans +"${COMPOSE[@]}" down -v --remove-orphans diff --git a/restheart-mongo/.gitignore b/restheart-mongo/.gitignore new file mode 100644 index 00000000..ac3950e5 --- /dev/null +++ b/restheart-mongo/.gitignore @@ -0,0 +1,2 @@ +coverage/ +coverage_report.txt diff --git a/restheart-mongo/Dockerfile.coverage b/restheart-mongo/Dockerfile.coverage new file mode 100644 index 00000000..e864b0bc --- /dev/null +++ b/restheart-mongo/Dockerfile.coverage @@ -0,0 +1,43 @@ +# Coverage overlay image for restheart-mongo. +# +# Adds the JaCoCo agent (jacocoagent.jar) and CLI (jacococli.jar) +# alongside the upstream restheart 9.2.1 image. 
The agent is +# attached at JVM start via JAVA_TOOL_OPTIONS (set in +# docker-compose.coverage.yml) so we don't have to rewrite the +# upstream entrypoint, which is `java -jar restheart.jar` with +# specific JVM flags. +# +# The agent runs in `tcpserver` mode so the workflow can dump +# coverage data on demand without restarting the JVM — +# important for distroless-style upstream images that don't +# ship a shell. +# +# IMPORTANT: this image is only consumed by docker-compose.coverage.yml. +# The base Dockerfile and docker-compose.yml stay uninstrumented so +# enterprise's keploy compat lane pays no JVM-instrumentation cost +# (jacocoagent adds ~5-10% per-call overhead through bytecode +# rewriting, which would slow record/replay measurably). + +# Stage 1: pull JaCoCo zip in an alpine builder. The upstream +# restheart image is distroless (no shell, no curl/unzip), so we +# can't fetch JaCoCo from inside it. +FROM alpine:3.19 AS jacoco-fetch +ARG JACOCO_VERSION=0.8.13 +RUN apk add --no-cache curl ca-certificates unzip \ + && curl -fsSL "https://repo1.maven.org/maven2/org/jacoco/jacoco/${JACOCO_VERSION}/jacoco-${JACOCO_VERSION}.zip" -o /tmp/jacoco.zip \ + && mkdir -p /tmp/jacoco \ + && unzip -j /tmp/jacoco.zip lib/jacocoagent.jar lib/jacococli.jar -d /tmp/jacoco + +# Stage 2: layer JaCoCo into the upstream image. We can't `RUN` +# anything because the base image has no shell — only COPY and +# WORKDIR work. COPY --chown sets ownership at copy time so the +# distroless user (uid 65532) can read the agent. +FROM softinstigate/restheart:9.2.1 +COPY --from=jacoco-fetch --chown=65532:65532 /tmp/jacoco/jacocoagent.jar /opt/jacoco/jacocoagent.jar +COPY --from=jacoco-fetch --chown=65532:65532 /tmp/jacoco/jacococli.jar /opt/jacoco/jacococli.jar + +# Pre-create /coverage as an empty WORKDIR so docker has a +# mountpoint for the bind-mount in docker-compose.coverage.yml. +# WORKDIR doesn't require a shell. 
+WORKDIR /coverage +WORKDIR /opt/restheart diff --git a/restheart-mongo/docker-compose.coverage.yml b/restheart-mongo/docker-compose.coverage.yml new file mode 100644 index 00000000..778d6f21 --- /dev/null +++ b/restheart-mongo/docker-compose.coverage.yml @@ -0,0 +1,31 @@ +# Coverage overlay — applied with: +# +# docker compose -f docker-compose.yml -f docker-compose.coverage.yml up -d --build +# +# Used ONLY by the standalone .github/workflows/restheart-mongo.yml +# CI workflow. Keploy CI lanes (enterprise, integrations) ignore +# this file and run the base compose unchanged, so they pay zero +# JaCoCo-instrumentation cost. +services: + restheart: + build: + context: . + dockerfile: Dockerfile.coverage + image: ${RESTHEART_COVERAGE_IMAGE:-restheart-mongo:local-coverage} + environment: + # Attach the JaCoCo agent in TCP server mode. The upstream + # entrypoint is `java ... -jar restheart.jar`; JAVA_TOOL_OPTIONS + # is read by the JVM and prepended to all java args, so the + # `-javaagent` flag arms before restheart.jar starts loading + # classes. + # + # output=tcpserver: the agent listens on port 6300 inside the + # container and dumps coverage data over TCP on demand. No + # need to stop the JVM to read coverage — the workflow + # connects to 6300, dumps, and the report is generated + # post-hoc by jacococli. 
+ JAVA_TOOL_OPTIONS: "-javaagent:/opt/jacoco/jacocoagent.jar=output=tcpserver,address=0.0.0.0,port=6300,sessionid=keploy,append=false" + ports: + - "${RESTHEART_JACOCO_PORT:-6300}:6300" + volumes: + - ./coverage:/coverage diff --git a/restheart-mongo/docker-compose.yml b/restheart-mongo/docker-compose.yml index 0e5b778d..6caa38f0 100644 --- a/restheart-mongo/docker-compose.yml +++ b/restheart-mongo/docker-compose.yml @@ -12,9 +12,14 @@ services: ports: - "${RESTHEART_APP_PORT:-8080}:8080" environment: - RHO: > - /mclient/connection-string->"mongodb://${RESTHEART_MONGO_IP:-172.36.0.10}:27017", - /core/log-level->"INFO" + # RHO is RESTHeart's runtime config-override syntax: + # key->value pairs separated by ';' + # We override the default mongo URL (which the upstream image + # points at host.docker.internal — irrelevant in compose) AND + # explicitly bind /http-listener/host to 0.0.0.0; without that + # second override the upstream image binds to localhost and is + # unreachable from the host port mapping. + RHO: '/mclient/connection-string->"mongodb://${RESTHEART_MONGO_IP:-172.36.0.10}:27017";/http-listener/host->"0.0.0.0";/core/log-level->"INFO"' depends_on: mongo: condition: service_healthy diff --git a/restheart-mongo/flow.sh b/restheart-mongo/flow.sh index dab47705..ef89def7 100644 --- a/restheart-mongo/flow.sh +++ b/restheart-mongo/flow.sh @@ -1230,137 +1230,95 @@ restheart_record_traffic() { done } -# RESTHeart's routes are pattern-mount based, not file-system -# based. The denominator below enumerates every (method, route) -# tuple that restheart_record_traffic fires. Update this list when -# adding new traffic so the coverage stays in lockstep. 
-restheart_list_routes() { - cat <<'ROUTES' -GET / -GET /ping -GET /metrics -GET /health/db -GET /logout -POST /logout -GET /roles/{name} -GET /token -GET /token/{name} -POST /token -POST /token/{name} -DELETE /token/{name} -GET /_size -GET /_meta -POST /_sessions -GET /_sessions/{sid} -GET /_sessions/{sid}/_txns -POST /_sessions/{sid}/_txns -PATCH /_sessions/{sid}/_txns/{txnid} -DELETE /_sessions/{sid}/_txns/{txnid} -GET /ic -POST /ic -POST /csv -POST /graphql -GET /graphql -POST /graphql/{appname} -OPTIONS /graphql -OPTIONS /token -GET /.well-known/oauth-authorization-server -GET /.well-known/oauth-protected-resource -GET /.well-known/oauth-protected-resource/{name} -GET /{db} -PUT /{db} -DELETE /{db} -GET /{db}/_meta -GET /{db}/_size -GET /{db}/{coll} -PUT /{db}/{coll} -POST /{db}/{coll} -PATCH /{db}/{coll} -DELETE /{db}/{coll} -TRACE /{db}/{coll} -OPTIONS /{db}/{coll} -GET /{db}/{coll}/ -GET /{db}/{coll}/{docid} -PUT /{db}/{coll}/{docid} -PATCH /{db}/{coll}/{docid} -DELETE /{db}/{coll}/{docid} -GET /{db}/{coll}/{docid}/binary -GET /{db}/{coll}/{docid}/_meta -GET /{db}/{coll}/_size -GET /{db}/{coll}/_meta -PUT /{db}/{coll}/_meta -PATCH /{db}/{coll}/_meta -GET /{db}/{coll}/_indexes -PUT /{db}/{coll}/_indexes/{name} -DELETE /{db}/{coll}/_indexes/{name} -GET /{db}/{coll}/_aggrs -GET /{db}/{coll}/_aggrs/{name} -GET /{db}/{coll}/_streams -GET /{db}/{coll}/_streams/{name} -ROUTES -} +# restheart_report_coverage (real Java line coverage via JaCoCo). +# +# Requires the docker-compose.coverage.yml overlay — the base +# compose is uninstrumented so keploy CI lanes (enterprise, +# integrations) pay zero JVM-instrumentation cost. When called +# from a base-compose run this function detects the missing +# coverage image and exits 0 cleanly so `flow.sh coverage || true` +# informational hooks don't break. 
+#
+# Mechanics:
+#   - The overlay's Dockerfile.coverage layers JaCoCo's agent jar
+#     into the upstream restheart image; the overlay compose sets
+#     JAVA_TOOL_OPTIONS=-javaagent:.../jacocoagent.jar=output=tcpserver,...
+#     so the agent listens on port 6300 inside the container.
+#   - This function uses the coverage image (which has java +
+#     jacococli.jar) to dump execution data over TCP into
+#     /coverage/jacoco.exec, then renders a JaCoCo XML report
+#     against /opt/restheart/restheart.jar's classfiles.
+#   - The XML's top-level <counter type="LINE"/> rows under the
+#     <report> root aggregate every analysed class; we sum and emit a
+#     `Covered N/M (XX.X%)` line in the helper-script's expected
+#     format.
+restheart_report_coverage() {
+  local app="${RESTHEART_APP_CONTAINER:-restheart_app}"
+  local data_dir="${RESTHEART_COVERAGE_DATA_DIR:-${PWD}/coverage}"
+  local report_file="${COVERAGE_REPORT_FILE:-coverage_report.txt}"
+  local image="${RESTHEART_COVERAGE_IMAGE:-restheart-mongo:local-coverage}"
+  local jacoco_port="${RESTHEART_JACOCO_PORT:-6300}"
+
+  if ! docker ps --format '{{.Names}}' 2>/dev/null | grep -q "^${app}$"; then
+    echo "INFO: ${app} not running — coverage report skipped"
+    : >"$report_file"
+    return 0
+  fi
+  if ! docker image inspect "$image" >/dev/null 2>&1; then
+    echo "INFO: coverage image ${image} not built — base image is uninstrumented (apply docker-compose.coverage.yml overlay to enable)"
+    : >"$report_file"
+    return 0
+  fi
 
-restheart_list_recorded_routes() {
-  local f method route
-  local found_keploy=0
-  while IFS= read -r f; do
-    found_keploy=1
-    method=$(awk '/^ method:/{print $2; exit}' "$f")
-    route=$(awk '/^ url:/{print $2; exit}' "$f")
-    route="${route%%\?*}"
-    case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac
-    if [ -n "$method" ] && [ -n "$route" ]; then echo "$method $route"; fi
-  done < <(find keploy -type f -path '*/tests/*.yaml' 2>/dev/null) | sort -u
-  if [ "$found_keploy" = "1" ]; then return 0; fi
-
-  if [ -n "$RESTHEART_FIRED_ROUTES_FILE" ] && [ -f "$RESTHEART_FIRED_ROUTES_FILE" ]; then
-    while IFS= read -r line; do
-      method="${line%% *}"; route="${line#* }"
-      route="${route%%\?*}"
-      case "$route" in http://*|https://*) route="/${route#*://*/}" ;; esac
-      [ -n "$method" ] && [ -n "$route" ] && echo "$method $route"
-    done <"$RESTHEART_FIRED_ROUTES_FILE" | sort -u
+  # Locate the docker network the running container is on so the
+  # one-off jacococli container can reach :6300 via container DNS.
+  local network
+  network=$(docker inspect "$app" --format '{{range $k, $v := .NetworkSettings.Networks}}{{$k}}{{println}}{{end}}' 2>/dev/null | head -1 | tr -d ' \r\n')
+  if [ -z "$network" ]; then
+    echo "ERROR: could not resolve docker network for ${app}" >&2
+    return 1
   fi
-}
-
-restheart_report_coverage() {
-  local routes_file recorded_file
-  routes_file="$(mktemp)"; recorded_file="$(mktemp)"
-  restheart_list_routes >"$routes_file"
-  restheart_list_recorded_routes >"$recorded_file"
-
-  local total covered missing pct
-  total=$(wc -l <"$routes_file" | tr -d ' '); covered=0; missing=""
-  while IFS= read -r line; do
-    local method="${line%% *}"
-    local route="${line#* }"
-    # Replace {param} placeholders with [^/]+ for matching.
-    local pattern
-    pattern="^${method} $(printf '%s' "$route" | sed -E 's/\{[^}]+\}/[^\/]+/g')$"
-    if grep -qE "$pattern" "$recorded_file"; then
-      covered=$((covered + 1))
-    else
-      missing+="  ${method} ${route}"$'\n'
-    fi
-  done <"$routes_file"
-  if [ "$total" -gt 0 ]; then
-    pct=$(awk -v c="$covered" -v t="$total" 'BEGIN{printf "%.1f", c*100/t}')
-  else pct="0.0"; fi
+  docker run --rm --network "$network" -v "${data_dir}:/coverage" --entrypoint java "$image" \
+    -jar /opt/jacoco/jacococli.jar dump \
+    --address "$app" --port "$jacoco_port" \
+    --destfile /coverage/jacoco.exec >/dev/null
+
+  docker run --rm -v "${data_dir}:/coverage" --entrypoint java "$image" \
+    -jar /opt/jacoco/jacococli.jar report /coverage/jacoco.exec \
+    --xml /coverage/report.xml \
+    --classfiles /opt/restheart/restheart.jar >/dev/null
+
+  # Parse the top-level <counter type="LINE"/> rows from the
+  # JaCoCo XML. Use python3 inside the alpine helper so we don't
+  # rely on the host having lxml/xmlstarlet/etc.
+  local pct missed covered total
+  read -r missed covered <<<"$(docker run --rm -v "${data_dir}:/coverage" python:3.12-alpine python3 -c '
+import xml.etree.ElementTree as ET
+root = ET.parse("/coverage/report.xml").getroot()
+miss = sum(int(c.get("missed",0)) for c in root.findall("counter") if c.get("type") == "LINE")
+cov = sum(int(c.get("covered",0)) for c in root.findall("counter") if c.get("type") == "LINE")
+print(miss, cov)
+')"
+  total=$((missed + covered))
+  pct=$(awk -v c="$covered" -v t="$total" 'BEGIN{if(t>0)printf "%.1f", c*100/t; else print "0.0"}')
+
   {
-    echo "================ RESTHeart API coverage ================"
+    echo "============== RESTHeart line coverage (JaCoCo) =============="
+    echo "Lines missed: ${missed}"
+    echo "Lines covered: ${covered}"
+    echo "Lines total: ${total}"
+    echo ""
     echo "Covered ${covered}/${total} (${pct}%)"
-    if [ -n "$missing" ]; then echo "Uncovered:"; printf '%s' "$missing"; fi
-    echo "========================================================"
-  } | tee "${COVERAGE_REPORT_FILE:-coverage_report.txt}"
-  rm -f "$routes_file" "$recorded_file"
+    echo "=============================================================="
+  } | tee "$report_file"
 }
 
 case "${1:-}" in
   bootstrap) restheart_bootstrap "${2:-180}" ;;
   record-traffic) restheart_record_traffic ;;
   coverage) restheart_report_coverage ;;
-  list-routes) restheart_list_routes ;;
   *)
-    echo "usage: $0 {bootstrap|record-traffic|coverage|list-routes}" >&2
+    echo "usage: $0 {bootstrap|record-traffic|coverage}" >&2
     exit 2 ;;
 esac

From 92ad161daaae484cef0901bfc365d79e465de648 Mon Sep 17 00:00:00 2001
From: Akash Kumar
Date: Fri, 1 May 2026 13:36:00 +0530
Subject: [PATCH 6/7] ci(restheart-mongo): drop trailing prose from sticky
 comment

Signed-off-by: Akash Kumar
---
 .github/workflows/restheart-mongo.yml | 2 --
 1 file changed, 2 deletions(-)

diff --git a/.github/workflows/restheart-mongo.yml b/.github/workflows/restheart-mongo.yml
index 91062c2f..918aad5e 100644
--- a/.github/workflows/restheart-mongo.yml
+++ b/.github/workflows/restheart-mongo.yml
@@ -193,5 +193,3 @@ jobs:
             | this PR | **${{ needs.build-coverage.outputs.coverage }}%** |
 
             Threshold: PR may not drop coverage by more than **${{ env.COVERAGE_THRESHOLD }}pp**. Override per-repo via the `RESTHEART_COVERAGE_THRESHOLD` actions variable.
-
-            Coverage is **Java line coverage** (JaCoCo 0.8.13) of the RESTHeart 9.x JVM under traffic — the bytecode `flow.sh::restheart_record_traffic` actually executes (REST CRUD + GraphQL + ACL + users + sessions/transactions + metrics + …). Instrumentation lives in a separate `Dockerfile.coverage` + `docker-compose.coverage.yml` overlay; the base `docker-compose.yml` consumed by keploy/integrations and keploy/enterprise CI lanes runs uninstrumented and pays zero JaCoCo cost. JaCoCo execution dumps + XML reports are attached as artifacts on each job (`coverage-build` / `coverage-release`).
From 2541e2b1967bb8a46f5242f101473f0d52b16a88 Mon Sep 17 00:00:00 2001
From: Akash Kumar
Date: Fri, 1 May 2026 13:39:36 +0530
Subject: [PATCH 7/7] docs(restheart-mongo): document coverage overlay; drop
 list-routes/FIRED_ROUTES refs

Signed-off-by: Akash Kumar
---
 restheart-mongo/README.md | 33 ++++++++++++++++++++++++++-------
 1 file changed, 26 insertions(+), 7 deletions(-)

diff --git a/restheart-mongo/README.md b/restheart-mongo/README.md
index 40f460bd..ba929c4e 100644
--- a/restheart-mongo/README.md
+++ b/restheart-mongo/README.md
@@ -21,9 +21,11 @@ The traffic loop exercises the surfaces that keploy parsers and matchers have to
 ```
 restheart-mongo/
-├── Dockerfile # FROM softinstigate/restheart:9.2.1
+├── Dockerfile # FROM softinstigate/restheart:9.2.1 (base; uninstrumented)
+├── Dockerfile.coverage # extends base, layers JaCoCo agent + cli for coverage
 ├── docker-compose.yml # mongo:7 + restheart:9.2.1, fixed subnet, env-driven
-├── flow.sh # bootstrap | record-traffic | coverage | list-routes
+├── docker-compose.coverage.yml # overlay; arms JaCoCo via JAVA_TOOL_OPTIONS
+├── flow.sh # bootstrap | record-traffic | coverage
 ├── keploy.yml.template # globalNoise for _etag/_oid/lastModified/Date
 └── README.md # this file
 ```
@@ -33,20 +35,37 @@ restheart-mongo/
 The sample is keploy-independent: `docker compose up && bash flow.sh bootstrap && bash flow.sh record-traffic` runs end-to-end against bare RESTHeart. Lane scripts wrap that exact same path inside `keploy record` / `keploy test`.
 
 * `bootstrap` — wait for RESTHeart to start serving and PUT the seed collections (`items`, `people`, `places`, `halpeople`, `relpeople`, `gql-apps`, `acl`, `_schemas`, `avatars.files`, `range_files.files`, `imported_csv`) so subsequent record-traffic calls have something to find.
-* `record-traffic` — drive the full RESTHeart REST surface listed above. Every call is logged to `${RESTHEART_FIRED_ROUTES_FILE}` (when set) so `coverage` has a numerator without a keploy recording, and every call is fault-tolerant (`|| true`) so a single transient 4xx never aborts the run. keploy is the assertion layer.
-* `coverage` — emits `(method, path)` coverage. The denominator is curated from RESTHeart's pattern-based mount table (see `restheart_list_routes` in `flow.sh`); RESTHeart routes are not file-system-derivable like Next.js, so the list lives in source and stays in lockstep with `record-traffic`.
-* `list-routes` — diagnostic; prints the route table the coverage report uses as its denominator.
+* `record-traffic` — drive the full RESTHeart REST surface listed above. Every call is fault-tolerant (`|| true`) so a single transient 4xx never aborts the run. keploy is the assertion layer.
+* `coverage` — emits real Java line coverage via JaCoCo when the `docker-compose.coverage.yml` overlay is applied; otherwise a no-op (the base image is uninstrumented so this prints an info message and exits 0).
 
 ## Local run
 
+### Without keploy — smoke check
+
 ```sh
 docker compose up -d
 bash flow.sh bootstrap 240
-RESTHEART_FIRED_ROUTES_FILE=/tmp/fired.log bash flow.sh record-traffic
-RESTHEART_FIRED_ROUTES_FILE=/tmp/fired.log bash flow.sh coverage
+bash flow.sh record-traffic
 docker compose down -v
 ```
 
+This is what the keploy/enterprise compat lane wraps in `keploy record` / `keploy test` — the base compose is uninstrumented and runs unchanged inside that lane.
+
+### Without keploy — measuring real Java line coverage
+
+The base image is uninstrumented. Apply the coverage overlay to attach the JaCoCo agent:
+
+```sh
+mkdir -p coverage
+docker compose -f docker-compose.yml -f docker-compose.coverage.yml up -d --build
+bash flow.sh bootstrap 240
+bash flow.sh record-traffic
+bash flow.sh coverage
+docker compose -f docker-compose.yml -f docker-compose.coverage.yml down -v
+```
+
+The overlay (`Dockerfile.coverage` + `docker-compose.coverage.yml`) layers JaCoCo's agent + cli jars into the upstream restheart image and arms the agent at JVM start via `JAVA_TOOL_OPTIONS=-javaagent:...=output=tcpserver,...`. `flow.sh coverage` dumps execution data over the agent's TCP server (no JVM stop needed) and renders an XML line-coverage report. The overlay is consumed ONLY by the standalone GH Actions workflow — keploy/enterprise's compat lane ignores it and runs the base compose, paying zero JaCoCo cost (the agent rewrites bytecode at class-load and adds ~5-10% per-call overhead that would slow record/replay).
+
 ## Consumers
 
 * `keploy/enterprise` `.woodpecker/restheart-linux.yml` — the RESTHeart compat lane delegates compose + traffic + coverage to this sample and wraps them in `keploy record` / `keploy test`.
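The `keploy.yml.template` that the layout tree lists (globalNoise for `_etag`/`_oid`/`lastModified`/`Date`) is never reproduced in this series. A rough sketch of the shape such a template could take; the `test`/`globalNoise`/`global`/`body`/`header` nesting is an assumption to be verified against keploy's configuration reference, and only the four noisy field names come from the commit message:

```yaml
# Hypothetical sketch only — verify the exact schema against keploy's
# configuration docs before relying on it. Intent: RESTHeart rewrites
# _etag/_oid/lastModified on every write and the Date header on every
# response, so record/replay comparison must ignore them.
test:
  globalNoise:
    global:
      body:
        _etag: []
        _oid: []
        lastModified: []
      header:
        Date: []
```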