From b96db7e5bd0e6b3223eabf708ebed72dabaa2d73 Mon Sep 17 00:00:00 2001 From: Asish Kumar Date: Wed, 29 Apr 2026 12:56:31 +0530 Subject: [PATCH 1/2] Revert "docs(k8s-proxy): add DaemonSet architecture + auto-replay environments guide (#842)" This reverts commit 9ef5086964be104336b3e416a7b5d5eb5b627a30. Signed-off-by: Asish Kumar --- .../config/vocabularies/Base/accept.txt | 48 ---- .../k8s-proxy-daemonset-architecture.md | 256 ------------------ .../version-4.0.0-sidebars.json | 24 +- 3 files changed, 12 insertions(+), 316 deletions(-) delete mode 100644 versioned_docs/version-4.0.0/running-keploy/k8s-proxy-daemonset-architecture.md diff --git a/vale_styles/config/vocabularies/Base/accept.txt b/vale_styles/config/vocabularies/Base/accept.txt index 9acf69b28..bd3151d20 100644 --- a/vale_styles/config/vocabularies/Base/accept.txt +++ b/vale_styles/config/vocabularies/Base/accept.txt @@ -136,51 +136,3 @@ keploy-daemonset keploy-agent recordingsessions replaysessions -TGID[s]? -[Rr]efcount[s]? -GitOps -envFrom -valueFrom -[Cc]onfigMap[s]? -ServiceAccount[s]? -imagePullSecret[s]? -NetPolic(y|ies) -NetworkPolic(y|ies) -containerd -launchd -systemd -pm2 -SPDY -mTLS -PodTemplate[Ss]pec -podSelector -matchLabels -backoff -[Aa]ir-?gap(?:ped|ping)? -kubelet -keployContext -keploy-replay-runner -ReplayJob[s]? -CreateReplayJobRequest -runner-mode -cluster-mode -crd -runner -sidecar -[Kk]3s -[Kk]0s -kindNet -randAlphaNum -secretKeyRef -HostPath -PostStart -[Cc]group[s]? -[Uu]serspace -[Tt]eardown -[Rr]eplayer -[Rr]ehydrate[ds]? -[Rr]eachability -[Ww]alkthrough -[Dd]ev -[Cc]Rs? -[Ss]ubresource[s]? 
diff --git a/versioned_docs/version-4.0.0/running-keploy/k8s-proxy-daemonset-architecture.md b/versioned_docs/version-4.0.0/running-keploy/k8s-proxy-daemonset-architecture.md deleted file mode 100644 index 9e9162468..000000000 --- a/versioned_docs/version-4.0.0/running-keploy/k8s-proxy-daemonset-architecture.md +++ /dev/null @@ -1,256 +0,0 @@ ---- -id: k8s-proxy-daemonset-architecture -title: K8s Proxy DaemonSet Architecture & Auto-Replay Environments -sidebar_label: DaemonSet & Auto-Replay -description: How Keploy's DaemonSet recording works under the hood and the three environments where auto-replay can run—in-cluster, Docker daemon runner, and a separate replay cluster. -tags: - - kubernetes - - k8s proxy - - daemonset - - architecture - - auto-replay - - enterprise -keywords: - - keploy daemonset - - eBPF capture - - RecordingSession CRD - - auto-replay modes - - cluster-mode replay - - replay-runner - - docker daemon replay ---- - -import ProductTier from '@site/src/components/ProductTier'; - - - -The Keploy Kubernetes Proxy supports two recording modes—**Sidecar** and **DaemonSet**—and two independent **auto-replay environments** that the same proxy can dispatch to. This page explains the moving parts of DaemonSet recording and then walks through both replay environments end to end. - -If you only want the install steps, see [the K8s Proxy quickstart](/docs/quickstart/k8s-proxy/) or [the customer cluster-mode setup guide](/docs/running-keploy/k8s-proxy-api/). This document is the "behind the scenes" reference. - ---- - -## Part 1—DaemonSet recording architecture - -### Why DaemonSet mode - -Sidecar mode injects a `keploy-agent` container into your application Pod via a `MutatingAdmissionWebhook` and rolls the Deployment. That works, but it has two non-trivial requirements: - -1. **Write RBAC on the application namespace.** The proxy needs `patch deployments` to add the sidecar. -2. 
**An application restart at recording start.** The injected sidecar only takes effect on the next rollout. - -In production environments where Keploy must operate under read-only RBAC on the application namespace, or where rolling the Pod has unacceptable cost, neither requirement is acceptable. DaemonSet mode removes both. - -### The three components - -``` -┌────────────── Source cluster ──────────────────────────────────────────┐ -│ │ -│ ┌───────────────┐ ┌─────────────────────────────────────┐ │ -│ │ Application │ │ k8s-proxy (Deployment) │ │ -│ │ Pods │ │ - controller-runtime manager │ │ -│ │ (unchanged, │ │ - REST API (/record/start, etc.) │ │ -│ │ no sidecar) │ │ - persists to MinIO + MongoDB │ │ -│ └───────┬───────┘ └──────────────┬──────────────────────┘ │ -│ │ │ │ -│ │ traffic captured by eBPF │ creates RecordingSession │ -│ │ ▼ │ -│ │ ┌────────────────────────────┐ │ -│ │ │ kube-apiserver / etcd │ │ -│ │ │ • RecordingSession CRD │ │ -│ │ │ • ReplaySession CRD │ │ -│ │ └──────────────┬─────────────┘ │ -│ │ │ watch │ -│ ┌───────▼─────────────────────────────────┐ │ │ -│ │ keploy-daemonset (per node) │◀─┘ │ -│ │ - controller-runtime watches the CR │ │ -│ │ - resolves matching Pods on this node │ │ -│ │ - programs target_namespace_pids + │ │ -│ │ target_cgroup_ids BPF maps │ │ -│ │ - eBPF programs filter by those maps │ │ -│ │ - uploads test cases + mocks back to │ │ -│ │ k8s-proxy over HTTP │ │ -│ └─────────────────────────────────────────┘ │ -└────────────────────────────────────────────────────────────────────────┘ -``` - -The pieces: - -1. **k8s-proxy Deployment.** Same single-replica controller you already run for Sidecar mode. It owns the REST API the Console calls (`/record/start`, `/record/stop`, `/test/start`, etc.), persists captured artifacts to MinIO + MongoDB, and dispatches auto-replay (see Part 2). -2. **`recordingsessions.keploy.io` CRD.** A small Custom Resource the proxy creates at `/record/start`. 
Each CR is named after the target Deployment and carries a `podSelector`, the list of containers to trace, and the desired mock format. The CRD is the authoritative coordination object between the control plane (k8s-proxy) and the data plane (DaemonSet). Status flows back as a `perNode` array on the CR's `status` subresource. -3. **`keploy-daemonset` DaemonSet.** One Pod per node, running the same enterprise binary you ship for Sidecar mode but in agent-only mode. Each Pod loads its eBPF programs, watches the RecordingSession CR via controller-runtime, and is responsible for capturing traffic from the application Pods that landed on its node. - -A `replaysessions.keploy.io` CRD ships alongside RecordingSession but is not used by any current replay environment—it exists so the controller-runtime scheme registers cleanly when a future in-cluster served-replay path is wired up. - -### What you don't get without the DaemonSet - -If `daemonset.enabled=false` in the chart, `/record/start` falls back to the Sidecar path: the proxy injects the agent via the webhook and rolls the application Pod. Both modes drive the same REST API and persist to the same MongoDB schema, so the rest of the Console (Reports, Schema Coverage, Auto-Replay history) does not need to know which mode produced the data. - ---- - -## Part 2—Auto-replay environments - -When a recording session ends—either because the cooldown window expires or because `/record/stop` was called—the proxy fires an auto-replay against the freshly recorded test sets. Where that replay actually runs is controlled by `KEPLOY_AUTO_REPLAY_MODE`. 
Two values are supported, deliberately independent of each other: - -| Mode | Replay runs on… | Best for | -| --------- | ----------------------------------------- | ---------------------------------------------------------------------------------------------------------------------- | -| `runner` | a Docker daemon outside the cluster | Customers who don't want any pod scheduling for replay; long-lived runners that pull work over HTTP. | -| `cluster` | a separate Kubernetes cluster you provide | Production with read-only RBAC on the source cluster; replay runs against an isolated Pod in a customer-owned cluster. | - -`cluster` is the default in current builds. The mode is process-wide on each k8s-proxy Pod—flipping it requires a Helm upgrade or `kubectl set env` and a rollout. - -### How dispatch works - -`/record/stop` runs the recording teardown synchronously and then enters a dispatch branch in `pkg/http/handlers.go`. The branch reads `cfg.AutoReplayMode` and routes to the matching handler, which stands up a replay environment from the captured test cases. Both modes eventually drive the OSS replayer (`go.keploy.io/server/v3/pkg/service/replay`)—what differs is **where the application under test actually runs** during replay. - -The default replay-start delay is **10 seconds** in both modes. This gives the replayed application time to bind its port before the OSS replayer fires the first test case. Callers can override it via `auto_replay_config.delay` in the `/record/start` body. - ---- - -### Mode A—`runner` (Docker daemon) - -``` -[/record/stop] - │ - ▼ -k8s-proxy - • POSTs a CreateReplayJobRequest to its own - /replay-jobs endpoint, which puts a ReplayJob - in an in-memory store with status=pending - -(somewhere outside the cluster, on a host with Docker installed) -keploy-replay-runner ─poll──▶ k8s-proxy /replay-jobs/poll - binary (HTTPS, shared bearer token) - │ - │ receives a job: - │ { record_id, test_set_ids[], image, env, app_port, ... 
} - ▼ - docker run (the application container) - docker run keploy/enterprise (the keploy agent, on the same - user-defined Docker network) - │ - │ keploy enterprise replay … --record-id= - │ downloads mocks + test cases from k8s-proxy via HTTP - │ runs the OSS replayer - ▼ - docker rm - │ - │ POST /replay-jobs/{jobID}/complete - ▼ -k8s-proxy - • merges the report into Mongo - • surfaces the run on the Console reports dashboard -``` - -The runner is a small standalone binary (`cmd/replay-runner` in the k8s-proxy repo). It is not deployed by the chart—operators install it on whichever machine has the Docker daemon, point it at the proxy with a shared token, and start it as a systemd unit / launchd service / pm2 job. - -**Configuration on the k8s-proxy side:** - -```yaml -env: - KEPLOY_AUTO_REPLAY_MODE: runner -``` - -**Configuration on the runner side** (CLI flags or env): - -| Flag | Env | Description | -| ---------------- | --------------------- | ------------------------------------------------------------------------------ | -| `--platform-url` | `KEPLOY_PLATFORM_URL` | k8s-proxy's externally reachable URL (the same `ingressUrl` the Console uses). | -| `--shared-token` | `KEPLOY_SHARED_TOKEN` | Bearer token. Read from the k8s-proxy `-shared-token` Secret. | -| `--runner-id` | `KEPLOY_RUNNER_ID` | Stable identifier for this runner; used for heartbeat + job assignment. | -| `--keploy-bin` | `KEPLOY_BIN` | Path to the `keploy enterprise` binary that drives the replay. | -| `--work-dir` | `KEPLOY_WORK_DIR` | Scratch directory for downloaded mocks and reports. | -| `--cluster-name` | `KEPLOY_CLUSTER_NAME` | Optional. When set, the runner only picks up jobs scoped to this cluster. | - -The runner heartbeats while a job is in progress and POSTs the final report back to `/replay-jobs/{jobID}/complete`. The k8s-proxy never touches the runner's host—it just exposes the queue. 
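
For the systemd variant mentioned above, a minimal unit file might look like the following sketch. Every value here (URL, token, paths, runner ID) is an illustrative placeholder, not a shipped default, and the binary path assumes you copied `keploy-replay-runner` to `/usr/local/bin`:

```ini
[Unit]
Description=Keploy replay runner
# Needs the network and the local Docker daemon.
After=network-online.target docker.service
Wants=network-online.target

[Service]
# Illustrative values - substitute your proxy URL, shared token, and paths.
Environment=KEPLOY_PLATFORM_URL=https://keploy-proxy.example.com
Environment=KEPLOY_SHARED_TOKEN=<token from the shared-token Secret>
Environment=KEPLOY_RUNNER_ID=vm-replay-01
Environment=KEPLOY_BIN=/usr/local/bin/keploy
Environment=KEPLOY_WORK_DIR=/var/lib/keploy-runner
ExecStart=/usr/local/bin/keploy-replay-runner
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

The same environment variables work equally well under launchd or pm2; only the service-manager wrapper changes.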
- -**When to use it:** customers who can't (or don't want to) run replay Pods inside a Kubernetes cluster at all—typically when the customer has a dedicated VM for test execution, or when air-gapping the replay environment from production is a hard requirement. The trade-off is one more piece of infrastructure to operate. - ---- - -### Mode B—`cluster` (separate replay cluster) - -This is the **recommended** production mode and is also the default. It keeps the source cluster strictly read-only and runs every replay in a customer-provided second cluster reached through a kubeconfig. - -``` -┌── Source cluster (read-only RBAC) ────────────────────────────────────┐ -│ │ -│ [/record/stop] ──▶ k8s-proxy │ -│ │ reads source Deployment (image, ports, env, │ -│ │ ConfigMap/Secret refs)—read-only │ -│ │ rehydrates referenced ConfigMaps + Secrets │ -│ │ into the replay namespace │ -│ │ │ -└───────────────────────┼───────────────────────────────────────────────┘ - │ kubeconfig (mounted as a Secret) - ▼ -┌── Replay cluster (customer-managed) ──────────────────────────────────┐ -│ │ -│ ┌───────────────────────────────────────────────────────────┐ │ -│ │ Replay namespace (e.g. keploy-replay) │ │ -│ │ │ │ -│ │ Pod -rpl-xxxxxx │ │ -│ │ ├─ application container (image from source Deployment) │ │ -│ │ └─ keploy-agent sidecar (replays mocks) │ │ -│ │ Service -rpl-xxxxxx-svc │ │ -│ │ NetPolicy -rpl-xxxxxx-deny-egress │ │ -│ │ Rehydrated ConfigMaps + Secrets │ │ -│ │ │ │ -│ │ All resources cleaned up after the session ends. │ │ -│ └───────────────────────────────────────────────────────────┘ │ -└───────────────────────────────────────────────────────────────────────┘ -``` - -**Flow on `/record/stop`:** - -1. k8s-proxy reads the source Deployment's `PodTemplateSpec` (read-only). -2. It rehydrates every `envFrom` / `valueFrom` / volume `ConfigMap` and `Secret` referenced by the Pod template into the replay-cluster's namespace, using the mounted kubeconfig. 
ServiceAccount-token Secrets are intentionally skipped—they are cluster-bound. -3. It creates a standalone Pod (`-rpl-`) plus a backing Service and a deny-all-egress NetworkPolicy in the replay cluster. The Pod runs the application image alongside the keploy-agent sidecar. -4. It opens a SPDY port-forward through the replay cluster's API server to the agent port and the recorded application port. The OSS replayer drives test cases through that local forward—k8s-proxy never needs in-cluster network reachability into the replay cluster. -5. When replay ends, the proxy deletes the Pod, Service, and NetworkPolicy. ConfigMaps and Secrets are left in place; they're rehydrated again next run if the source spec changed. - -**What stays the same as `runner` mode:** the OSS replayer, the report shape, the Mongo collections (`testrunReports`, `testsetReports`, `testcaseReports`, `autoReplayMetrics`, `k8sSchemaCoverageReports`), and the Console UI. - -**What's different:** every Pod / Service / NetworkPolicy write goes to the replay cluster. The source cluster never sees a write from Keploy. - -**Configuration:** - -```yaml -env: - KEPLOY_AUTO_REPLAY_MODE: cluster - KEPLOY_REPLAY_KUBECONFIG_PATH: /etc/replay/kubeconfig - KEPLOY_REPLAY_NAMESPACE: keploy-replay - # Optional—pre-existing imagePullSecret in the replay namespace - # KEPLOY_REPLAY_IMAGE_PULL_SECRET: my-pull-secret - -extraVolumes: - - name: replay-kubeconfig - secret: - secretName: replay-kubeconfig - -extraVolumeMounts: - - name: replay-kubeconfig - mountPath: /etc/replay - readOnly: true -``` - -The kubeconfig in the Secret should grant the proxy `create / update / patch / delete` on Pods, Services, NetworkPolicies, ConfigMaps, and Secrets **in the replay namespace only**, plus `pods/portforward` and `pods/log`. See the customer setup guide for a copy-paste Role + RoleBinding template. 
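
As a sketch of what that namespaced Role could look like (the name and namespace are placeholders, and the read verbs are an assumption since the proxy must look up what it creates; treat the setup guide's template as authoritative):

```yaml
# Illustrative sketch - names are placeholders; scope is the replay namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: keploy-replay
  namespace: keploy-replay
rules:
  # Resources the proxy creates, updates, and tears down per replay session.
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["networkpolicies"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # Subresources used during replay: the SPDY port-forward and log reads.
  - apiGroups: [""]
    resources: ["pods/portforward"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get"]
```

Bind it with a RoleBinding to the user or ServiceAccount the mounted kubeconfig authenticates as.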
- -**Graceful fallback:** if `KEPLOY_AUTO_REPLAY_MODE=cluster` is set but `KEPLOY_REPLAY_KUBECONFIG_PATH` is empty or the file is missing, k8s-proxy logs a warning and skips the trailing replay rather than failing the recording session. - -**When to use it:** any production environment where the source cluster must remain untouched, or where you want hard isolation between recording and replay environments. The trade-off is operating a second Kubernetes cluster; for many teams a small managed cluster (1 or 2 small nodes) is sufficient since replays are short-lived and serialized per `(namespace, deployment)` pair. - ---- - -## Picking a combination - -Recording mode and replay environment are orthogonal—every combination is valid, and the choice is independent on each side: - -| You want… | Recording mode | Replay environment | -| ---------------------------------------------------------------------------------------------- | -------------- | ------------------ | -| Fastest setup, you already have a Docker host outside the cluster | Sidecar | `runner` | -| No application restart, you already have a Docker host outside the cluster | DaemonSet | `runner` | -| Production with read-only RBAC on the source namespace, second K8s cluster available | DaemonSet | `cluster` | -| Production with read-only RBAC on the source namespace, no spare K8s cluster but a Docker host | DaemonSet | `runner` | - -For the operational walkthrough of the cluster-mode setup, see the K8s Proxy REST API guide's setup section. 
diff --git a/versioned_sidebars/version-4.0.0-sidebars.json b/versioned_sidebars/version-4.0.0-sidebars.json index e4eb279d2..50eedd895 100644 --- a/versioned_sidebars/version-4.0.0-sidebars.json +++ b/versioned_sidebars/version-4.0.0-sidebars.json @@ -106,6 +106,7 @@ "items": [ "quickstart/samples-django", "quickstart/flask-redis", + "quickstart/k8s-proxy", "quickstart/samples-microservices", "quickstart/samples-fastapi", "quickstart/samples-fastapi-twilio" @@ -132,25 +133,21 @@ "label": "C# (.NET Core)", "collapsible": true, "collapsed": true, - "items": ["quickstart/samples-csharp"] + "items": [ + "quickstart/samples-csharp" + ] } ] }, { "type": "category", - "label": "K8s Proxy", - "collapsible": true, - "collapsed": true, + "label": "CI/CD Integration", "items": [ - "quickstart/k8s-proxy", - "running-keploy/k8s-proxy-daemonset-architecture" + "ci-cd/github", + "ci-cd/gitlab", + "ci-cd/jenkins" ] }, - { - "type": "category", - "label": "CI/CD Integration", - "items": ["ci-cd/github", "ci-cd/gitlab", "ci-cd/jenkins"] - }, { "type": "category", "label": "Test Coverage Integration", @@ -223,7 +220,10 @@ "type": "category", "label": "API Reference", "collapsed": false, - "items": ["running-keploy/public-api", "running-keploy/k8s-proxy-api"] + "items": [ + "running-keploy/public-api", + "running-keploy/k8s-proxy-api" + ] }, { "type": "category", From 6bc931216d0e3a304c2cebaf274a6b1f6ce692d0 Mon Sep 17 00:00:00 2001 From: Asish Kumar Date: Wed, 29 Apr 2026 12:56:31 +0530 Subject: [PATCH 2/2] Revert "docs(k8s-proxy): add Kubernetes Proxy REST API and DaemonSet recording mode (#838)" This reverts commit c352a429304e0d7cae96dd71f72634c1084d7dce except for the existing Vale Google.Units override, which is kept so the current PR Vale workflow continues to pass on pre-existing k8s text. 
Signed-off-by: Asish Kumar --- .../config/vocabularies/Base/accept.txt | 19 - .../version-3.0.0/quickstart/k8s-proxy.md | 46 -- .../version-4.0.0/quickstart/k8s-proxy.md | 46 -- .../running-keploy/k8s-proxy-api.md | 491 ------------------ .../version-4.0.0-sidebars.json | 3 +- 5 files changed, 1 insertion(+), 604 deletions(-) delete mode 100644 versioned_docs/version-4.0.0/running-keploy/k8s-proxy-api.md diff --git a/vale_styles/config/vocabularies/Base/accept.txt b/vale_styles/config/vocabularies/Base/accept.txt index bd3151d20..9e09b8033 100644 --- a/vale_styles/config/vocabularies/Base/accept.txt +++ b/vale_styles/config/vocabularies/Base/accept.txt @@ -110,29 +110,10 @@ shipping_address_id [Ll]inux [Ee]nv [Kk]8s -IPs [Dd]edup -[Dd]edups -[Rr]ollout[s]? -[Pp]refill[s]? -[Aa]uditable -[Cc]ooldown -[Ll]iveness [Cc]ron [Tt]oolchain [Rr]untime[s]? -MongoIDs -initialised normalisation behaviour polyglot -[Dd]aemon[Ss]et[s]? -[Cc]RD[s]? -eBPF -[Mm]utatingAdmissionWebhook -RecordingSession[s]? -ReplaySession[s]? -keploy-daemonset -keploy-agent -recordingsessions -replaysessions diff --git a/versioned_docs/version-3.0.0/quickstart/k8s-proxy.md b/versioned_docs/version-3.0.0/quickstart/k8s-proxy.md index 504bf287f..3b2fb44a2 100644 --- a/versioned_docs/version-3.0.0/quickstart/k8s-proxy.md +++ b/versioned_docs/version-3.0.0/quickstart/k8s-proxy.md @@ -135,20 +135,6 @@ At this point, your e-commerce application is live and ready to receive traffic. ## Enable Live Record & Replay with Keploy Proxy -### Pick a recording mode - -The Keploy Proxy supports two ways to capture traffic from your application Pods. Both modes drive the **same Console UI and REST API**—the rest of this guide works identically in either case. Pick whichever fits your environment. 
- -| | **Sidecar mode (default)** | **DaemonSet mode** | -| ----------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| How traffic is captured | A `keploy-agent` sidecar container is injected into your application Pod via a `MutatingAdmissionWebhook`. The agent intercepts traffic alongside your container. | A `keploy-daemonset` Pod runs on each node and captures traffic from existing application Pods using **eBPF**—no sidecar, no application Pod restart. | -| What happens on `Start Recording` | The proxy injects the agent and rolls the application Deployment. | The proxy creates a `RecordingSession` Custom Resource. The DaemonSet picks it up and programs its BPF target maps to capture matching Pods on each node. | -| Pod mutation on the application namespace | Required (`patch` on Deployments). | **Not required.** Application Pods are never modified. | -| Application restart at recording start | Yes, on first recording. | No. | -| Best for | Dev/staging, teams happy to grant write RBAC to Keploy on the application namespace. | Production with read-only RBAC on the application namespace; environments where rolling the application Pod has unacceptable cost; or when you want cluster-mode auto-replay (replay runs in a separate cluster you provide). | - -The screenshots below show the **Sidecar** flow because that is the default. To use **DaemonSet** mode instead, set the daemonset values when you run the Helm command in step 4 below—every other step is identical. - ### 1. 
Open Keploy Dashboard Visit: @@ -189,38 +175,6 @@ Once you have provided the cluster details, you can install the Keploy Proxy in Sample Keploy K8s proxy -#### DaemonSet mode (optional) - -If you want to use **DaemonSet mode** instead of the default Sidecar mode, append the daemonset values to the Helm command shown in the dashboard. The Helm chart installs the `recordingsessions.keploy.io` and `replaysessions.keploy.io` Custom Resource Definitions, and the per-node DaemonSet that performs the eBPF capture. - -```bash -# add these flags to the Helm command from the dashboard: - --set daemonset.enabled=true \ - --set daemonset.crds.install=true -``` - -After install you should see a per-node `k8s-proxy-daemonset-*` Pod alongside the regular proxy Deployment: - -```bash -kubectl get pods -n keploy -# NAME READY STATUS RESTARTS AGE -# k8s-proxy-xxxxxxxxxx-xxxxx 1/1 Running 0 1m -# k8s-proxy-daemonset-xxxxx 1/1 Running 0 1m ← per node -# k8s-proxy-daemonset-yyyyy 1/1 Running 0 1m -# k8s-proxy-mongodb-xxxxxxxxxx-xxxxx 1/1 Running 0 1m -# k8s-proxy-minio-xxxxxxxxxx-xxxxx 1/1 Running 0 1m -``` - -Verify the CRDs registered: - -```bash -kubectl get crd | grep keploy.io -# recordingsessions.keploy.io -# replaysessions.keploy.io -``` - -The rest of this quickstart proceeds identically—the Console **Start Recording** button creates a `RecordingSession` CR which the DaemonSet picks up; you do not need to interact with the CR yourself. - ### 5. Verify the Installation Paste the Helm command into the terminal. Once the installation is complete, verify that the Keploy Proxy is running. diff --git a/versioned_docs/version-4.0.0/quickstart/k8s-proxy.md b/versioned_docs/version-4.0.0/quickstart/k8s-proxy.md index 504bf287f..3b2fb44a2 100644 --- a/versioned_docs/version-4.0.0/quickstart/k8s-proxy.md +++ b/versioned_docs/version-4.0.0/quickstart/k8s-proxy.md @@ -135,20 +135,6 @@ At this point, your e-commerce application is live and ready to receive traffic. 
## Enable Live Record & Replay with Keploy Proxy -### Pick a recording mode - -The Keploy Proxy supports two ways to capture traffic from your application Pods. Both modes drive the **same Console UI and REST API**—the rest of this guide works identically in either case. Pick whichever fits your environment. - -| | **Sidecar mode (default)** | **DaemonSet mode** | -| ----------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| How traffic is captured | A `keploy-agent` sidecar container is injected into your application Pod via a `MutatingAdmissionWebhook`. The agent intercepts traffic alongside your container. | A `keploy-daemonset` Pod runs on each node and captures traffic from existing application Pods using **eBPF**—no sidecar, no application Pod restart. | -| What happens on `Start Recording` | The proxy injects the agent and rolls the application Deployment. | The proxy creates a `RecordingSession` Custom Resource. The DaemonSet picks it up and programs its BPF target maps to capture matching Pods on each node. | -| Pod mutation on the application namespace | Required (`patch` on Deployments). | **Not required.** Application Pods are never modified. | -| Application restart at recording start | Yes, on first recording. | No. | -| Best for | Dev/staging, teams happy to grant write RBAC to Keploy on the application namespace. | Production with read-only RBAC on the application namespace; environments where rolling the application Pod has unacceptable cost; or when you want cluster-mode auto-replay (replay runs in a separate cluster you provide). 
| - -The screenshots below show the **Sidecar** flow because that is the default. To use **DaemonSet** mode instead, set the daemonset values when you run the Helm command in step 4 below—every other step is identical. - ### 1. Open Keploy Dashboard Visit: @@ -189,38 +175,6 @@ Once you have provided the cluster details, you can install the Keploy Proxy in Sample Keploy K8s proxy -#### DaemonSet mode (optional) - -If you want to use **DaemonSet mode** instead of the default Sidecar mode, append the daemonset values to the Helm command shown in the dashboard. The Helm chart installs the `recordingsessions.keploy.io` and `replaysessions.keploy.io` Custom Resource Definitions, and the per-node DaemonSet that performs the eBPF capture. - -```bash -# add these flags to the Helm command from the dashboard: - --set daemonset.enabled=true \ - --set daemonset.crds.install=true -``` - -After install you should see a per-node `k8s-proxy-daemonset-*` Pod alongside the regular proxy Deployment: - -```bash -kubectl get pods -n keploy -# NAME READY STATUS RESTARTS AGE -# k8s-proxy-xxxxxxxxxx-xxxxx 1/1 Running 0 1m -# k8s-proxy-daemonset-xxxxx 1/1 Running 0 1m ← per node -# k8s-proxy-daemonset-yyyyy 1/1 Running 0 1m -# k8s-proxy-mongodb-xxxxxxxxxx-xxxxx 1/1 Running 0 1m -# k8s-proxy-minio-xxxxxxxxxx-xxxxx 1/1 Running 0 1m -``` - -Verify the CRDs registered: - -```bash -kubectl get crd | grep keploy.io -# recordingsessions.keploy.io -# replaysessions.keploy.io -``` - -The rest of this quickstart proceeds identically—the Console **Start Recording** button creates a `RecordingSession` CR which the DaemonSet picks up; you do not need to interact with the CR yourself. - ### 5. Verify the Installation Paste the Helm command into the terminal. Once the installation is complete, verify that the Keploy Proxy is running. 
diff --git a/versioned_docs/version-4.0.0/running-keploy/k8s-proxy-api.md b/versioned_docs/version-4.0.0/running-keploy/k8s-proxy-api.md deleted file mode 100644 index e96b7b8cd..000000000 --- a/versioned_docs/version-4.0.0/running-keploy/k8s-proxy-api.md +++ /dev/null @@ -1,491 +0,0 @@ ---- -id: k8s-proxy-api -title: Kubernetes Proxy REST API -sidebar_label: Kubernetes Proxy REST API -description: Use the Keploy Kubernetes Proxy REST API to trigger recordings, manage recording and auto-replay configs, stream session status, run replays, and drive the enterprise recording flow programmatically from CI/CD, internal tooling, or AI agents. -tags: - - kubernetes - - k8s proxy - - REST API - - recording - - automation - - enterprise - - CI/CD -keywords: - - k8s proxy - - kubernetes proxy - - keploy enterprise - - recording API - - live recording - - auto replay - - programmatic recording - - shared token ---- - -import ProductTier from '@site/src/components/ProductTier'; - - - -The **Keploy Kubernetes Proxy** runs as an in-cluster service that drives recording, replay, and observability for Deployments in one or more namespaces. Its REST API has two groups of routes: - -- **Operational routes** such as `/record/start`, `/record/status`, `/test/start`, `/deployments`, and `/proxy/update`. These are the routes used to control live in-cluster recording and replay. -- **API-server-compatible data routes** under `/k8s-proxy/*`. The Console and CLI use these paths for stored test cases, mocks, reports, schema, schema coverage, and saved configs. The proxy can serve these paths directly. - -Use this API when you want to script the same Kubernetes live-recording flow the Keploy Console drives from CI/CD pipelines, operators, or internal tooling without running the `keploy` CLI on each node. - -**Base URL:** `https://` - the externally reachable address configured as `ingressUrl` when you installed the `k8s-proxy` Helm chart. 
In-cluster callers can use `https://..svc:8080` by default, or `http://..svc:8081` when `proxy.insecure.enabled=true`. - ---- - -## Recording modes: Sidecar and DaemonSet - -The Kubernetes Proxy supports two recording modes. Both expose the same REST API documented here—pick whichever fits your environment. - -- **Sidecar mode (default).** When recording starts, the proxy's `MutatingAdmissionWebhook` injects a `keploy-agent` sidecar container into the target Pod and rolls it. The agent intercepts traffic from the application container alongside it. This is the mode the rest of this document describes. -- **DaemonSet mode.** A `keploy-daemonset` Pod runs on each node and captures traffic from existing application Pods via eBPF—no sidecar injection, no application-Pod restart. Recording is scoped by a `RecordingSession` Custom Resource that the proxy creates from `/record/start`; the DaemonSet agents pick it up and program their BPF target maps. This is the right mode when application Pods cannot be mutated (read-only RBAC on the application namespace), or when the rollout cost of injecting a sidecar is unacceptable. Cluster-mode auto-replay (a separate replay cluster reached via mounted kubeconfig) is supported in this mode. - -The same `/record/start`, `/record/stop`, `/test/start`, `/deployments`, and report endpoints work identically across both modes—the difference is purely in how the agent is delivered to the workload. - ---- - -## Why the Kubernetes Proxy instead of `keploy enterprise` directly? - -Running the Keploy enterprise CLI inside a Pod works, but it is a per-app, per-node model: each Deployment you want to record needs its own sidecar plumbing, image rebuild, or pod restart. The Kubernetes Proxy removes that friction: - -- **Zero-touch agent setup.** The proxy registers a `MutatingAdmissionWebhook` (`/mutate`) so the Keploy recording agent is injected into target Pods on the next rollout. 
No image rebuild, sidecar template change, or per-app config knob is required. -- **One API for every Deployment.** A single shared-token-authenticated endpoint starts or stops recording for any Deployment in the watched scope. `podsCount` controls how many pods are recorded and is capped by the Deployment replicas or HPA max replicas. -- **Cluster-wide or namespace-scoped.** Install once per cluster, or set `watchNamespace` to pin the proxy to a single team's namespace. Cross-namespace calls are rejected with `403`. -- **Stored session outputs.** Recording, replay, and schema-coverage outputs are persisted through the configured platform storage. Per-session and proxy logs are available through the log endpoints when log retention/support-bundle storage is enabled. -- **Auto-replay loop.** A recording session can kick off an auto-replay on a cadence (`autoReplayInterval`) against freshly recorded test sets, giving you self-validating live traffic without a separate pipeline. -- **Self-updating.** The proxy can roll itself (and the injected agent) forward via `POST /proxy/update`, so upgrades do not require kubectl or a GitOps round-trip—unless you _want_ GitOps to stay authoritative (the proxy detects and reports reverts). -- **Static deduplication at the edge.** Enable `static_dedup` in the recording config to drop schema-identical traffic _before_ it is ever written as a test case. See [Static Deduplication](/docs/keploy-cloud/static-deduplication/). - ---- - -## Authentication - -Every protected proxy endpoint requires the cluster **shared token**. Send it as a Bearer token: - -```text -Authorization: Bearer -``` - -Only `GET /healthz` and the admission webhook `POST /mutate` are unauthenticated. Every other route rejects missing or malformed headers with `401 Unauthorized`. 
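
For example, a call to the `/deployments` route listed above might look like the following sketch (the response shape is not documented here, so it is omitted):

```bash
# Illustrative - $PROXY and $K8S_PROXY_SHARED_TOKEN must be set for your cluster.
curl -s -H "Authorization: Bearer $K8S_PROXY_SHARED_TOKEN" \
  "https://$PROXY/deployments"
```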
```bash
# Verify the proxy is up (no auth required)
curl -sf "$PROXY/healthz"
# {"status":"ok"}
```

### How the token is provisioned

The shared token is generated **at Helm install time** and stored as a Kubernetes Secret named `<release>-shared-token` in the proxy's namespace. The chart's pre-render step uses Helm's `randAlphaNum 48` to produce the value on the very first install and a `lookup` + `helm.sh/resource-policy: keep` annotation to preserve it across upgrades, so the token is **stable for the lifetime of the release**—Pod restarts and chart upgrades do not rotate it.

The k8s-proxy Deployment and the per-node DaemonSet both mount the Secret as the `KEPLOY_SHARED_TOKEN` env var via `secretKeyRef`. On startup the proxy reports the value to the Keploy API server in its first heartbeat (`POST /cluster/status`) so the Console can display it under the cluster's app entries.

For local/dev runs without a Secret, if `KEPLOY_SHARED_TOKEN` is unset the proxy falls back to generating a random 32-byte value via `crypto/rand` (hex-encoded). This fallback is fresh on every restart and is **not** the path used in any Helm-managed deployment.

### Retrieve the token

There are two equally valid paths.

**(a) Read it directly from the Secret** if you have `kubectl` access to the proxy namespace:

```bash
kubectl -n keploy get secret <release>-shared-token -o jsonpath='{.data.token}' | base64 -d
```

**(b) Fetch it from the Keploy API server**, which mirrors what the proxy reported in its last heartbeat. Log in once to obtain a user JWT, then look up the proxy app for the Deployment you want to drive:

```bash
API_SERVER="https://api.keploy.io"
NS="default"
DEPLOY="orders-api"
CLUSTER="prod-use1"

# 1. Authenticate as a Keploy user (admin, user, or cicd role)
JWT=$(curl -s -X POST "$API_SERVER/login" \
  -H "Content-Type: application/json" \
  -d '{"email":"you@example.com","password":"..."}' | jq -r '.token')

# 2.
Look up the proxy app for this Deployment and read its sharedToken -K8S_PROXY_SHARED_TOKEN=$(curl -s -H "Authorization: Bearer $JWT" \ - "$API_SERVER/cluster/getApp?namespace=$NS&deployment=$DEPLOY&clusterName=$CLUSTER" \ - | jq -r '.sharedToken') - -AUTH="Authorization: Bearer $K8S_PROXY_SHARED_TOKEN" -``` - -`GET /cluster/getApps` returns the same `sharedToken` field for every proxy-managed app in your organization in a single response, which is convenient when you want to script across many Deployments at once. - -> The proxy shared token is cluster-wide, not per-user. The API server still uses normal user JWT/cookie authentication on its own routes (including `/cluster/getApp`). The token is sticky across Pod restarts and chart upgrades, so callers can cache it for the lifetime of the Helm release. - ---- - -## Response format - -Handlers return JSON with `application/json` on success. Validation failures usually return `{"error": "..."}` with a 4xx status; shared-token auth failures return `{"success": false, "message": "Unauthorized: ..."}`. A handful of endpoints stream newline-delimited JSON instead - they are called out explicitly below. 
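Callers usually branch on which envelope came back. A dependency-free sketch; the hard-coded `resp` string stands in for a captured response body (real scripts would typically extract fields with `jq`, as in the quickstart):

```bash
# Branch on the error envelope. `resp` is a stand-in for a response body:
# a 4xx validation failure carries "error", an auth failure carries "message".
resp='{"error": "namespace and deployment are required"}'

err=$(printf '%s' "$resp" | sed -n 's/.*"error": *"\([^"]*\)".*/\1/p')
msg=$(printf '%s' "$resp" | sed -n 's/.*"message": *"\([^"]*\)".*/\1/p')

if [ -n "$err" ]; then
  echo "validation error: $err"
elif [ -n "$msg" ]; then
  echo "auth error: $msg"
fi
```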
- -```js -// Successful record start (200) -{ "record": "started", "id": "default-orders-api" } - -// Validation error (400) -{ "error": "namespace and deployment are required" } - -// Auth error (401) -{ "success": false, "message": "Unauthorized: Missing authorization header" } - -// Namespace-scoped proxy rejecting a cross-namespace call (403) -{ "error": "this proxy is scoped to namespace \"payments\"" } -``` - -### Error status codes - -| HTTP | When it happens | -| ---- | ----------------------------------------------------------------------------------------------- | -| 400 | Missing or malformed request body, missing required fields | -| 401 | Missing or invalid `Authorization: Bearer` header | -| 403 | Request touches a namespace outside `watchNamespace`, or image repo mismatch on `/proxy/update` | -| 404 | Recording/replay session ID not found, or deployment/config does not exist | -| 405 | Wrong HTTP method for the route | -| 500 | Kubernetes API error, storage backend unavailable, or unexpected server error | -| 503 | Kubernetes client or self-discovery not initialised (proxy is still starting or missing RBAC) | - ---- - -## Quick start: Trigger and watch a live recording - -The golden path: pick a Deployment, start a recording, stream its status, and stop it when you have the traffic you need. - -### 1. Set up variables - -```bash -PROXY="https://k8s-proxy.example.com" # ingressUrl from Helm install -AUTH="Authorization: Bearer $K8S_PROXY_SHARED_TOKEN" -NS="default" -DEPLOY="orders-api" -``` - -### 2. Discover target Deployments - -```bash -curl -s -H "$AUTH" "$PROXY/deployments?namespace=$NS" | jq -# [{"name":"orders-api","namespace":"default","replicas":3,"readyReplicas":3}] -``` - -### 3. 
Start a recording - -```bash -RECORD_ID=$(curl -s -X POST "$PROXY/record/start" \ - -H "$AUTH" -H "Content-Type: application/json" \ - -d '{ - "namespace": "'"$NS"'", - "deployment": "'"$DEPLOY"'", - "podsCount": 3, - "clusterId": "prod-use1", - "record_config": { - "static_dedup": true, - "enable_sampling": 10, - "filters": [ - { "path": "/health", "urlMethods": ["GET"] } - ] - } - }' | jq -r '.id') - -echo "Recording started: $RECORD_ID" -``` - -On success the proxy registers the session before it touches the workload, ensures the mutating webhook configuration is present, copies the CA secret into the target namespace, creates the headless Service, and triggers a targeted restart of the selected pods so the agent is injected as they come back. - -### 4. Stream session status - -```bash -curl -N -H "$AUTH" "$PROXY/record/status?record_id=$RECORD_ID" -``` - -This returns newline-delimited JSON—one event per state change. Each line includes the current testcase count, endpoints seen, mock counts, and `static_dedup_stats` when static dedup is enabled. - -```json -{ - "test_cases_count": 12, - "endpoints": [ - { - "name": "test-1", - "endpoint": "/orders", - "method": "GET", - "status_code": 200 - } - ], - "mock_count": 8, - "mock_types": {"SQL": 3}, - "status": "running", - "pods_running": 3, - "static_dedup_stats": [], - "started_at": 1712345678 -} -``` - -### 5. Stop the session - -```bash -curl -s -X POST "$PROXY/record/stop" \ - -H "$AUTH" -H "Content-Type: application/json" \ - -d '{"record_id":"'"$RECORD_ID"'"}' -# {"record":"stopped","id":"default-orders-api"} -``` - -The proxy tears down the headless Service, unloads the agent on the next rollout, and flushes the recorded tests to the platform store so they show up under that app in the Keploy Console. 
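The five steps above can be wired into one unattended loop: stream `/record/status` and stop once enough traffic is captured. A sketch of the stop condition; a stand-in function replaces the real `curl -N` stream, and the field names follow the status payload shown in step 4:

```bash
# Stand-in for: curl -sN -H "$AUTH" "$PROXY/record/status?record_id=$RECORD_ID"
status_stream() {
  cat <<'EOF'
{"status":"running","test_cases_count":5}
{"status":"running","test_cases_count":25}
EOF
}

# Read NDJSON events until test_cases_count reaches the threshold, then print
# the final count. A real script would follow this with POST /record/stop.
wait_for_tests() {
  min="$1"
  while IFS= read -r line; do
    count=$(printf '%s' "$line" | sed -n 's/.*"test_cases_count":\([0-9]*\).*/\1/p')
    if [ "${count:-0}" -ge "$min" ]; then
      echo "$count"
      return 0
    fi
  done
  return 1
}

status_stream | wait_for_tests 20
```

In real use, pipe the `curl -N` call straight into `wait_for_tests`; `jq -r '.test_cases_count'` is a more robust way to extract the field than the `sed` shown here.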
- ---- - -## Recording configuration - -The `record_config` block in `POST /record/start` is a UI-friendly subset of the OSS `config.Record` struct and is persisted alongside the session so the UI can prefill it and so the exact inputs are auditable. - -| Field | Type | Description | -| --------------------- | ----------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------- | -| `filters` | `Filter[]` | Traffic patterns to filter during recording. Matches use AND semantics across fields. | -| `client_key` | `string` | Optional client identifier propagated to downstream mock capture (useful for multi-tenant apps). | -| `enable_sampling` | `uint` | If set to a positive value, sample 1-in-N matching requests. Omit or set `0` to use the proxy default. | -| `static_dedup` | `bool` | Drop schema-identical traffic in the agent before it becomes a test case. See [Static Deduplication](/docs/keploy-cloud/static-deduplication/). | -| `custom_dedup_fields` | `EndpointDedupFields[]` | Add value-aware fingerprints for matching endpoints. Providing this also enables static dedup for the injected sidecar. | -| `low_latency_mode` | `bool` | Start the agent in low-latency mode. | -| `debug` | `bool` | Start the injected agent with debug logs. | -| `memory_limit` | `string` | Memory request in MiB, expressed as a positive integer string. The container limit is set to twice this value. | -| `secret_protection` | `object` | Enable record-time secret detection/obfuscation with optional custom headers, body keys, URL params, and allow lists. | - -Each `Filter` accepts: - -| Field | Type | Description | -| -------------- | ------------------- | --------------------------------------- | -| `path` | `string` | Regex matched against the request path. | -| `host` | `string` | Hostname to match. | -| `port` | `uint` | Port to match. | -| `urlMethods` | `string[]` | HTTP methods (e.g. 
`["GET","POST"]`). |
| `headers` | `map[string]string` | Header key/value pairs to match. |
| `filterPolicy` | `string` | `exclude` (default) or `include`. |

### Auto-replay configuration

Attach an `auto_replay_config` to `POST /record/start` to automatically replay freshly recorded test sets against a standalone Pod the proxy provisions. Each replay runs in isolation against a fresh Pod + Service so it cannot disturb production traffic.

| Field | Type | Description |
| -------------------- | ---------------------------- | ------------------------------------------------------------------- |
| `autoReplayInterval` | `int64` (minutes, default 5) | Cooldown between replays for the same session. |
| `mongoPassword` | `string` | Override for user-provided Mongo credentials used during replay. |
| `apiTimeout` | `uint64` (seconds) | Per-request timeout for the replayed application. |
| `delay` | `uint64` (seconds) | Initial delay before starting tests (lets the standalone Pod warm). |
| `globalNoise` | `object` | Fields to ignore during diffing. Accepts `global` and `test-sets`. |
| `envOverrides` | `map[string]string` | Env var overrides for the standalone replay Pod. |

---

## Endpoint reference

All paths are relative to the proxy base URL. Unless noted, every route requires `Authorization: Bearer <shared-token>`.

### Health and admission

| Method | Path | Auth | Description |
| ------ | ---------- | ---- | ------------------------------------------------------------------- |
| `GET` | `/healthz` | No | Liveness probe. Returns `{"status":"ok"}`. |
| `POST` | `/mutate` | No | Kubernetes MutatingAdmissionWebhook endpoint. Do not call directly. |

### Deployments

| Method | Path | Description |
| ------ | ----------------------------- | -------------------------------------------------------------------- |
| `GET` | `/deployments?namespace=<ns>` | List Deployments. Omit `namespace` for cluster-wide (unless scoped).
| - -### Recording - -| Method | Path | Description | -| ------ | ------------------------------------------------------- | ----------------------------------------------------------------------------------------- | -| `POST` | `/record/start` | Start a recording session. Body: `RecordRequest`. See quickstart above. | -| `POST` | `/record/stop` | Stop a session. Body: `{"record_id":"..."}`. | -| `GET` | `/record/status?record_id=` | Stream session status (NDJSON). One line per state change. | -| `GET` | `/record/active?namespace=&deployment=` | Check whether a session is running for a Deployment. Returns `in_progress` + `record_id`. | -| `GET` | `/record/app-status?namespace=&deployment=` | Report agent-injection and Pod-readiness status for the target app. | -| `GET` | `/record/logs?namespace=&deployment=&...` | Tail recording-session logs. Accepts `stream`, `previous`, `tail_bytes`, `stream_bytes`. | -| `GET` | `/record/logs/check?namespace=&deployment=` | Cheap check: are session logs available? | -| `GET` | `/record/logs/download?namespace=&deployment=` | Download recording logs as a ZIP. | - -**`RecordRequest` body:** - -```json -{ - "namespace": "default", - "deployment": "orders-api", - "podsCount": 3, - "clusterId": "prod-use1", - "record_config": {"static_dedup": true, "enable_sampling": 10, "filters": []}, - "auto_replay_config": {"autoReplayInterval": 10, "delay": 5} -} -``` - -### Replay / Test - -| Method | Path | Description | -| ------ | ----------------------------------------------------- | ------------------------------------------------------------------ | -| `POST` | `/test/start` | Start a replay. Body: `ReplayRequest` with optional `test_config`. | -| `POST` | `/test/stop` | Stop a replay. Body: `{"replay_id":"..."}`. | -| `GET` | `/test/status?replay_id=` | Stream replay status and per-testcase results (NDJSON). | -| `GET` | `/test/active?namespace=&deployment=` | Check whether a replay is in progress for this Deployment. 
| -| `POST` | `/test/mock-metadata` | Extended mock metadata. | -| `POST` | `/test/normalize` | AI-normalize testcases in a run. | -| `GET` | `/test/download?...` | Download a full test bundle (ZIP). | -| `GET` | `/test/download/active?...` | Download tests from the currently active recording session. | -| `GET` | `/test/logs?namespace=&deployment=&...` | Tail replay logs. Same flags as `/record/logs`. | -| `GET` | `/test/logs/check?namespace=&deployment=` | Replay-logs availability check. | -| `GET` | `/test/logs/download?namespace=&deployment=` | Replay logs ZIP. | - -**`ReplayRequest` body:** - -```json -{ - "namespace": "default", - "deployment": "orders-api", - "test_config": { - "selectedTests": {"test-set-0": ["tc-1", "tc-2"]}, - "apiTimeout": 30, - "delay": 5, - "globalNoise": {"global": {"header": {"Date": []}}}, - "envOverrides": {"FEATURE_FLAG_X": "off"} - } -} -``` - -Omit `selectedTests` to replay every set. - -### API-server-compatible data routes - -These routes are all mounted under `/k8s-proxy` and are served directly by the proxy. Direct proxy calls use the proxy shared token; calls routed through the API server use normal Console/API-server authentication and role checks. - -#### Health - -| Method | Path | Description | -| ------ | ------------------- | -------------------------------------------------------- | -| `GET` | `/k8s-proxy/health` | Health check for the API-server-compatible data surface. | - -#### Test cases, mocks, and mappings - -| Method | Path | Description | -| -------- | --------------------------------------------------- | ---------------------------------------------- | -| `POST` | `/k8s-proxy/testcases` | Insert a testcase. | -| `POST` | `/k8s-proxy/testcases/bulk` | Insert multiple testcases. | -| `GET` | `/k8s-proxy/testcases` | Fetch testcases. | -| `GET` | `/k8s-proxy/testcases/detail` | Fetch one testcase payload. | -| `GET` | `/k8s-proxy/testcases/metadata` | Fetch testcase metadata. 
| -| `POST` | `/k8s-proxy/testcases/selective` | Fetch selected testcases. | -| `PUT` | `/k8s-proxy/testcases/{testCaseId}` | Update one testcase. | -| `PUT` | `/k8s-proxy/testcases/bulk` | Update multiple testcases. | -| `DELETE` | `/k8s-proxy/testcases` | Delete testcases. | -| `GET` | `/k8s-proxy/testcases/testsets` | List test set IDs. | -| `GET` | `/k8s-proxy/testcases/testsets/metadata` | List test set metadata. | -| `GET` | `/k8s-proxy/testcases/testsets/latest-release/full` | Fetch latest-release test sets with full data. | -| `DELETE` | `/k8s-proxy/testcases/testset` | Delete a test set. | -| `POST` | `/k8s-proxy/mocks/upload` | Upload mocks. | -| `GET` | `/k8s-proxy/mocks/download` | Download mocks. | -| `GET` | `/k8s-proxy/mocks/reference` | Fetch mock reference metadata. | -| `POST` | `/k8s-proxy/mocks/reference` | Insert or update mock reference metadata. | -| `DELETE` | `/k8s-proxy/mocks/reference` | Delete mock reference metadata. | -| `POST` | `/k8s-proxy/mappings` | Upload mappings. | -| `GET` | `/k8s-proxy/mappings` | Fetch mappings. | - -#### Reports and schema coverage - -| Method | Path | Description | -| -------- | -------------------------------------------- | ------------------------------------ | -| `POST` | `/k8s-proxy/insert/testCaseResult` | Insert testcase result data. | -| `GET` | `/k8s-proxy/get/testCaseResults` | Fetch testcase result data. | -| `DELETE` | `/k8s-proxy/clear/testCaseResults` | Clear testcase result data. | -| `POST` | `/k8s-proxy/insert/report` | Insert a test report. | -| `GET` | `/k8s-proxy/get/report` | Fetch one test report. | -| `GET` | `/k8s-proxy/get/allReports` | List stored reports. | -| `GET` | `/k8s-proxy/get/testRunIds` | List test run IDs. | -| `GET` | `/k8s-proxy/get/testRunReports` | Fetch test-run-level reports. | -| `GET` | `/k8s-proxy/get/testSetReports` | Fetch test-set-level reports. | -| `GET` | `/k8s-proxy/get/testCaseReports` | Fetch per-testcase reports. 
| -| `PUT` | `/k8s-proxy/update/report` | Update a report. | -| `POST` | `/k8s-proxy/report/multipart` | Upload a multipart test-run report. | -| `POST` | `/k8s-proxy/autoreplay-metrics` | Insert auto-replay metrics. | -| `GET` | `/k8s-proxy/autoreplay-metrics` | Fetch auto-replay metrics. | -| `POST` | `/k8s-proxy/insert/schema` | Insert captured OpenAPI schema. | -| `PUT` | `/k8s-proxy/update/schema` | Update captured OpenAPI schema. | -| `GET` | `/k8s-proxy/get/schema` | Fetch captured OpenAPI schema. | -| `GET` | `/k8s-proxy/get/schema-coverage` | Fetch per-endpoint schema coverage. | -| `POST` | `/k8s-proxy/schema-coverage-report` | Save a schema coverage report. | -| `GET` | `/k8s-proxy/get/schema-coverage-summary` | Fetch schema coverage summary. | -| `GET` | `/k8s-proxy/get/top-schema-coverage-summary` | Fetch top-N schema coverage summary. | - -#### Saved config - -| Method | Path | Description | -| ------ | --------------------------------------------------- | ---------------------------------------------------------------------- | -| `POST` | `/k8s-proxy/config` | Insert or update saved proxy config. | -| `GET` | `/k8s-proxy/config/{namespace}/{deployment}/{kind}` | Fetch saved config. `kind` can be `record`, `replay`, or `autoreplay`. | -| `GET` | `/k8s-proxy/config/list/{kind}` | List saved configs by kind. | - -### Assertion-test generator (ATG) - -| Method | Path | Description | -| ------ | ------------------------------- | ---------------------------------------------------------------------------------- | -| `POST` | `/agent/run/{jobID}` | Process an ATG job. Accepts optional `?timeout=` (default `30`). | -| `POST` | `/agent/execute-request` | Execute an HTTP request through the ATG runtime (used during assertion authoring). | -| `POST` | `/agent/service-url` | Resolve a Service URL inside the cluster (used to target the app from the UI). | -| `POST` | `/agent/recordATGSandbox` | Bind to an already-running ATG sandbox recording session. 
| -| `POST` | `/agent/stopATGSandboxRecord` | Stop an ATG sandbox recording session. | -| `POST` | `/agent/replayATGSandbox` | Start an ATG sandbox replay session. | -| `POST` | `/agent/stopATGSandboxReplay` | Stop an ATG sandbox replay session. | -| `GET` | `/agent/ATGSandboxRecordStatus` | Fetch ATG sandbox recording status. | -| `GET` | `/agent/ATGSandboxRecordLogs` | Fetch ATG sandbox recording logs. | -| `GET` | `/agent/ATGSandboxReplayLogs` | Fetch ATG sandbox replay logs. | -| `GET` | `/agent/autoReplayLogs` | Fetch auto-replay logs. | - -### Proxy self-management - -| Method | Path | Description | -| ------ | ----------------------------------------------- | ------------------------------------------------------------------------------------- | -| `POST` | `/proxy/update` | Roll the proxy (and optionally the injected agent image) to a new version. See below. | -| `GET` | `/proxy/update/status` | Current rollout state: `""`, `updating`, `desired_applied`, or `reverted_by_gitops`. | -| `POST` | `/proxy/shutdown` | Gracefully terminate the proxy Pod (Kubernetes will reschedule it). | -| `GET` | `/logs/proxy?...` | Tail proxy-pod logs. Same flags as session log endpoints. | -| `GET` | `/logs/proxy/download` | Download proxy logs as ZIP (current + previous container, when available). | -| `GET` | `/autoreplay/debug-bundles` | List captured auto-replay debug bundles. | -| `GET` | `/autoreplay/debug-bundles/{bundleID}` | Fetch one auto-replay debug bundle metadata record. | -| `GET` | `/autoreplay/debug-bundles/{bundleID}/download` | Download one auto-replay debug bundle. | -| `POST` | `/autoreplay/debug-bundles/{bundleID}/share` | Share one auto-replay debug bundle through the configured API server. 
|

**`POST /proxy/update` body:**

```json
{
  "proxy_image": "ghcr.io/keploy/k8s-proxy:v1.4.0",
  "agent_image": "ghcr.io/keploy/keploy:v3.7.1"
}
```

The proxy validates that you are bumping the _same_ image repository (you cannot swap `ghcr.io/keploy/k8s-proxy` for an unknown registry) and then patches its own Deployment. If a GitOps controller (Argo CD, Flux) reverts the bump, `/proxy/update/status` reports `reverted_by_gitops` with guidance to update your Helm values or manifest repo instead.

---

## Namespace scoping

When the proxy is installed with `watchNamespace=<namespace>`, every API call is force-scoped to that namespace:

- `GET /deployments` ignores the `namespace` query and returns only that namespace's Deployments.
- Any request whose `namespace` field does not match returns `403` with `{"error":"this proxy is scoped to namespace \"<namespace>\""}`.

Leave `watchNamespace` unset to run cluster-wide. Cluster-wide mode requires Deployment `get`/`list`/`watch` RBAC across all namespaces, which the default Helm chart provisions.

---

## Related guides

- [Static Deduplication](/docs/keploy-cloud/static-deduplication/)—drop duplicate traffic at record time using the `static_dedup` field.
- [Remove Duplicate Tests](/docs/keploy-cloud/deduplication/)—coverage-based dedup at replay time (`keploy dedup`).
- [Public REST API](/docs/running-keploy/public-api/)—Keploy Cloud control plane (apps, suites, jobs).
- [Kubernetes installation](/docs/keploy-cloud/kubernetes/)—install and configure the Kubernetes Proxy.
- [GitOps with Argo CD](/docs/keploy-cloud/gitops-argocd/)—manage the proxy via GitOps, including how `/proxy/update` interacts with reverts.
diff --git a/versioned_sidebars/version-4.0.0-sidebars.json b/versioned_sidebars/version-4.0.0-sidebars.json index 50eedd895..d5e7e313a 100644 --- a/versioned_sidebars/version-4.0.0-sidebars.json +++ b/versioned_sidebars/version-4.0.0-sidebars.json @@ -221,8 +221,7 @@ "label": "API Reference", "collapsed": false, "items": [ - "running-keploy/public-api", - "running-keploy/k8s-proxy-api" + "running-keploy/public-api" ] }, {