
feat: add Dockerfile.worker for the per-DuckDB-version matrix build#501

Merged
fuziontech merged 1 commit into main from feature/dockerfile-worker
May 1, 2026

Conversation

@fuziontech

Summary

This is the image target for cmd/duckgres-worker (PR #500). It mirrors the existing all-in-one Dockerfile (extension downloads, multi-arch via TARGETARCH, ldflags version/commit injection) but with three key differences:

  • Builds ./cmd/duckgres-worker instead of ., so the resulting image only has the worker binary, not the standalone PG wire path.
  • Adds DUCKDB_GO_VERSION + DUCKDB_BINDINGS_VERSION build args. When set, runs go get to swap the go.mod pins before the build, then go mod tidy. This is the lever the matrix-build CD workflow will pull to produce one image per (DuckDB version × arch) without branching the source tree per version.
  • Drops the 5432 EXPOSE (worker doesn't serve PG wire); keeps 8816 (Flight SQL) and 9090 (metrics).
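
A minimal sketch of how those differences could look inside Dockerfile.worker. The ARG names, ports, and build target come from this PR; the base images, stage layout, and Go module paths are assumptions, not the actual file:

```dockerfile
# Sketch only: base images, stage layout, and module paths are assumptions.
FROM golang:1.24-bookworm AS build
ARG TARGETARCH
ARG VERSION
ARG COMMIT
# Per-matrix-row DuckDB pins; empty means "keep go.mod as committed".
ARG DUCKDB_GO_VERSION=""
ARG DUCKDB_BINDINGS_VERSION=""

WORKDIR /src
COPY . .

# Swap the go.mod pins only when the matrix sets them, then re-tidy.
# (Module paths here are illustrative.)
RUN if [ -n "$DUCKDB_GO_VERSION" ]; then \
      go get github.com/marcboeker/go-duckdb/v2@"$DUCKDB_GO_VERSION" && \
      go get github.com/duckdb/duckdb-go-bindings@"$DUCKDB_BINDINGS_VERSION" && \
      go mod tidy; \
    fi

# Worker binary only; no standalone PG wire path.
RUN go build \
      -ldflags "-X main.version=$VERSION -X main.commit=$COMMIT" \
      -o /out/duckgres-worker ./cmd/duckgres-worker

FROM debian:bookworm-slim
COPY --from=build /out/duckgres-worker /usr/local/bin/
# Flight SQL + metrics; no 5432.
EXPOSE 8816 9090
ENTRYPOINT ["duckgres-worker"]
```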

How the matrix will look

DUCKDB_GO_VERSION=v2.10501.0 DUCKDB_BINDINGS_VERSION=v0.10501.0 \
  DUCKDB_EXTENSION_VERSION=1.5.1 HTTPFS_EXTENSION_TAG=v1.5.1-stoi-fix \
  docker buildx build -f Dockerfile.worker -t duckgres-worker:1.5.1-... .

DUCKDB_GO_VERSION=v2.10502.0 DUCKDB_BINDINGS_VERSION=v0.10502.0 \
  DUCKDB_EXTENSION_VERSION=1.5.2 HTTPFS_EXTENSION_TAG=v1.5.2-stoi-fix \
  docker buildx build -f Dockerfile.worker -t duckgres-worker:1.5.2-... .

The CD workflow that wires this into a real per-version matrix comes in the next PR, as does the Helm/charts cutover that points the per-org `image` config-store column at the worker images instead of the all-in-one duckgres image.

What stays the same

The original Dockerfile (all-in-one) is untouched and continues to work for the --mode standalone and existing CP+process-isolation deployment paths. There's no breakage in the existing CD pipeline.

🤖 Generated with Claude Code

@fuziontech fuziontech enabled auto-merge (squash) May 1, 2026 18:23
@fuziontech fuziontech merged commit d5bdf8a into main May 1, 2026
21 of 22 checks passed
@fuziontech fuziontech deleted the feature/dockerfile-worker branch May 1, 2026 18:26
fuziontech added a commit that referenced this pull request May 1, 2026
Adds .github/workflows/container-image-worker-cd.yml — a new CD pipeline
that publishes one duckgres-worker image per (DuckDB version × arch),
using Dockerfile.worker (PR #501).

Matrix shape:
  - DuckDB 1.5.2 (default) → duckgres-worker:<sha>-duckdb1.5.2-{arm64,amd64}
                              + multi-arch :<sha>-duckdb1.5.2 manifest
                              + :<sha> and :latest (only on default rows)
  - DuckDB 1.5.1            → duckgres-worker:<sha>-duckdb1.5.1-{arm64,amd64}
                              + multi-arch :<sha>-duckdb1.5.1 manifest
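
A hedged sketch of a workflow matrix that would produce this shape. Only the versions and tag suffixes above come from the pipeline description; the key names and structure are assumptions:

```yaml
# Key names are assumptions; versions/tags mirror the matrix shape above.
jobs:
  build:
    strategy:
      matrix:
        arch: [arm64, amd64]
        duckdb:
          - version: "1.5.2"   # default row: also tags :<sha> and :latest
            go_version: v2.10502.0
            bindings_version: v0.10502.0
            default: true
          - version: "1.5.1"
            go_version: v2.10501.0
            bindings_version: v0.10501.0
            default: false
```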

Adding a DuckDB version is one new row under matrix.duckdb. The
DUCKDB_GO_VERSION / DUCKDB_BINDINGS_VERSION pair maps to duckdb-go
module versions; the encoding packs DuckDB <major>.<minor>.<patch>
into the middle component as `<major><minor:02d><patch:02d>`, with a
`v2.` prefix for duckdb-go and `v0.` for the bindings (see
scripts/ducklake_version_matrix.sh for the same mapping in test
code), so DuckDB 1.5.1 → v2.10501.0 / v0.10501.0 and 1.5.2 →
v2.10502.0 / v0.10502.0.
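
The encoding above can be sketched as a small shell helper. The function name is illustrative (the real mapping lives in scripts/ducklake_version_matrix.sh):

```shell
# DuckDB <major>.<minor>.<patch> -> <prefix>.<major><minor:02d><patch:02d>.0
# Hypothetical helper; mirrors the encoding described above.
duckdb_to_module_version() {
  prefix=$1
  IFS=. read -r major minor patch <<EOF
$2
EOF
  printf '%s.%d%02d%02d.0\n' "$prefix" "$major" "$minor" "$patch"
}

duckdb_to_module_version v2 1.5.1   # -> v2.10501.0 (DUCKDB_GO_VERSION)
duckdb_to_module_version v0 1.5.2   # -> v0.10502.0 (DUCKDB_BINDINGS_VERSION)
```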

The all-in-one image (.github/workflows/container-image-cd.yml) is
left untouched and continues to publish the existing duckgres image
unchanged. The new pipeline ships alongside it.

Operators flip a tenant's `image` config-store column to point at a
specific suffixed worker tag (e.g. duckgres-worker:<sha>-duckdb1.5.1)
to canary that DuckDB version for that org. PR #462 (the original
multi-version control plane work) wires the image-pinning lookup into
the worker activation path.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
fuziontech added a commit that referenced this pull request May 1, 2026
* feat: add Dockerfile.controlplane for the duckdb-free CP image

Builds cmd/duckgres-controlplane (PR #498). The image is the control-
plane Pod's runtime; all SQL execution is routed to remote
duckgres-worker images (Dockerfile.worker), so this image:

  - Does NOT link libduckdb (the controlplane-no-libduckdb CI guard
    from PR #499 enforces it)
  - Does NOT bundle the DuckDB extension downloads — without a DuckDB
    driver they'd be dead weight
  - Is meaningfully smaller than the all-in-one image

CGO is still enabled because the transpiler uses pg_query_go which
links libpg_query. That's a pure Postgres parser, nothing to do with
DuckDB.
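
A minimal sketch of what Dockerfile.controlplane could look like under these constraints. Only the build target, the CGO requirement, and the absence of extension downloads come from this description; base images, stage layout, and paths are assumptions:

```dockerfile
# Sketch only: base images and paths are assumptions.
FROM golang:1.24-bookworm AS build
ARG TARGETARCH
WORKDIR /src
COPY . .
# CGO stays on for pg_query_go (libpg_query); nothing links libduckdb.
RUN CGO_ENABLED=1 go build -o /out/duckgres-controlplane ./cmd/duckgres-controlplane

FROM debian:bookworm-slim
# No DuckDB extension downloads here: without a driver they'd be dead weight.
COPY --from=build /out/duckgres-controlplane /usr/local/bin/
ENTRYPOINT ["duckgres-controlplane"]
```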

Together with Dockerfile.worker (per-DuckDB-version, PR #501) and the
existing all-in-one Dockerfile (unchanged), the image set now mirrors
the binary set:

  duckgres                    (existing) — all-in-one, links libduckdb
  duckgres-worker             (new)      — worker-only, per-DuckDB-version
  duckgres-controlplane       (this PR)  — CP-only, no libduckdb

A CD workflow that publishes the controlplane image (single build per
sha, no DuckDB matrix needed since this binary is version-agnostic) is
the next PR.

Verified locally:
  - go build -o /tmp/duckgres-controlplane ./cmd/duckgres-controlplane
    builds clean (~40MB binary)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* ci: add CD pipeline for cmd/duckgres-controlplane image (#504)

Adds .github/workflows/container-image-controlplane-cd.yml — publishes
duckgres-controlplane:<sha> + duckgres-controlplane:latest as a
multi-arch manifest (arm64 + amd64) on every push to main.

Single build per sha — the CP is version-agnostic by design (one
image fits all worker fleets), so no DuckDB-version matrix here.
Contrast with container-image-worker-cd.yml (PR #502) which produces
one duckgres-worker image per (DuckDB version × arch).

Together with the existing all-in-one CD (container-image-cd.yml,
unchanged) and the worker matrix CD, the image pipeline now mirrors
the binary set:

  duckgres                container-image-cd.yml             (existing)
  duckgres-worker         container-image-worker-cd.yml      (PR #502)
  duckgres-controlplane   container-image-controlplane-cd.yml (this PR)

Stacked on PR #503 which adds Dockerfile.controlplane.

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>