diff --git a/content/en/docs/next/applications/clickhouse.md b/content/en/docs/next/applications/clickhouse.md
index b643671a..bb998eed 100644
--- a/content/en/docs/next/applications/clickhouse.md
+++ b/content/en/docs/next/applications/clickhouse.md
@@ -11,7 +11,7 @@ aliases:
diff --git a/content/en/docs/next/applications/foundationdb.md b/content/en/docs/next/applications/foundationdb.md
index 0a5ac154..71e94323 100644
--- a/content/en/docs/next/applications/foundationdb.md
+++ b/content/en/docs/next/applications/foundationdb.md
@@ -6,7 +6,7 @@ weight: 50
diff --git a/content/en/docs/next/applications/harbor.md b/content/en/docs/next/applications/harbor.md
index dd863844..13568e8e 100644
--- a/content/en/docs/next/applications/harbor.md
+++ b/content/en/docs/next/applications/harbor.md
@@ -7,7 +7,7 @@ weight: 50
diff --git a/content/en/docs/next/applications/kafka.md b/content/en/docs/next/applications/kafka.md
index 931aa700..941a21f4 100644
--- a/content/en/docs/next/applications/kafka.md
+++ b/content/en/docs/next/applications/kafka.md
@@ -10,7 +10,7 @@ aliases:
diff --git a/content/en/docs/next/applications/mariadb.md b/content/en/docs/next/applications/mariadb.md
index d2fb0dd2..284e8f26 100644
--- a/content/en/docs/next/applications/mariadb.md
+++ b/content/en/docs/next/applications/mariadb.md
@@ -10,7 +10,7 @@ aliases:
diff --git a/content/en/docs/next/applications/mongodb.md b/content/en/docs/next/applications/mongodb.md
index 7da554a5..6022b35e 100644
--- a/content/en/docs/next/applications/mongodb.md
+++ b/content/en/docs/next/applications/mongodb.md
@@ -10,7 +10,7 @@ aliases:
diff --git a/content/en/docs/next/applications/nats.md b/content/en/docs/next/applications/nats.md
index 18d22af2..87a03bfd 100644
--- a/content/en/docs/next/applications/nats.md
+++ b/content/en/docs/next/applications/nats.md
@@ -10,7 +10,7 @@ aliases:
diff --git a/content/en/docs/next/applications/openbao.md b/content/en/docs/next/applications/openbao.md
index 05749dd2..e34ff54e 100644
--- a/content/en/docs/next/applications/openbao.md
+++ b/content/en/docs/next/applications/openbao.md
@@ -7,7 +7,7 @@ weight: 50
diff --git a/content/en/docs/next/applications/postgres.md b/content/en/docs/next/applications/postgres.md
index 3e66c383..cf282bbc 100644
--- a/content/en/docs/next/applications/postgres.md
+++ b/content/en/docs/next/applications/postgres.md
@@ -10,7 +10,7 @@ aliases:
diff --git a/content/en/docs/next/applications/qdrant.md b/content/en/docs/next/applications/qdrant.md
index c697b247..17060afb 100644
--- a/content/en/docs/next/applications/qdrant.md
+++ b/content/en/docs/next/applications/qdrant.md
@@ -7,7 +7,7 @@ weight: 50
diff --git a/content/en/docs/next/applications/rabbitmq.md b/content/en/docs/next/applications/rabbitmq.md
index 41935add..2326ba08 100644
--- a/content/en/docs/next/applications/rabbitmq.md
+++ b/content/en/docs/next/applications/rabbitmq.md
@@ -10,7 +10,7 @@ aliases:
diff --git a/content/en/docs/next/applications/redis.md b/content/en/docs/next/applications/redis.md
index e6a0a2df..855ca38d 100644
--- a/content/en/docs/next/applications/redis.md
+++ b/content/en/docs/next/applications/redis.md
@@ -10,7 +10,7 @@ aliases:
diff --git a/content/en/docs/next/applications/tenant.md b/content/en/docs/next/applications/tenant.md
index 29e5e139..e8b69af1 100644
--- a/content/en/docs/next/applications/tenant.md
+++ b/content/en/docs/next/applications/tenant.md
@@ -10,7 +10,7 @@ aliases:
@@ -20,15 +20,15 @@ Tenants can be created recursively and are subject to the following rules:
### Tenant naming
-Tenant names must follow DNS-1035 naming rules:
-- Must start with a lowercase letter (`a-z`)
-- Can only contain lowercase letters, numbers, and hyphens (`a-z`, `0-9`, `-`)
-- Must end with a letter or number (not a hyphen)
+Tenant names must be alphanumeric:
+
+- Lowercase letters (`a-z`) and digits (`0-9`) only
+- Must start with a lowercase letter
+- Dashes (`-`) are **not allowed**, unlike with other services
- Maximum length depends on the cluster configuration (Helm release prefix and root domain)
-**Note:** Using dashes (`-`) in tenant names is **allowed but discouraged**, unlike with other services.
-This is to keep consistent naming in tenants, nested tenants, and services deployed in them.
-Names with dashes (e.g., `foo-bar`) may lead to ambiguous parsing of internal resource names like `tenant-foo-bar`.
+This restriction exists to keep consistent naming in tenants, nested tenants, and services deployed in them.
+A tenant cannot be named `foo-bar` because parsing internal resource names like `tenant-foo-bar` would be ambiguous.
For example:
diff --git a/content/en/docs/next/cozystack-api/go-types.md b/content/en/docs/next/cozystack-api/go-types.md
index ba0aca8f..f9df675d 100644
--- a/content/en/docs/next/cozystack-api/go-types.md
+++ b/content/en/docs/next/cozystack-api/go-types.md
@@ -13,7 +13,7 @@ Cozystack publishes its Kubernetes resource types as a Go module, enabling manag
Add the dependency to your Go module:
```bash
-go get github.com/cozystack/cozystack/api/apps/v1alpha1@v1.2.0
+go get github.com/cozystack/cozystack/api/apps/v1alpha1@{{< version-pin "cozystack_tag" >}}
```
## Use Cases
diff --git a/content/en/docs/next/getting-started/install-cozystack.md b/content/en/docs/next/getting-started/install-cozystack.md
index 1d81b3af..e97078fa 100644
--- a/content/en/docs/next/getting-started/install-cozystack.md
+++ b/content/en/docs/next/getting-started/install-cozystack.md
@@ -259,9 +259,9 @@ In the following steps, we'll access LINSTOR interface, create storage pools, an
to set `failmode=continue` on ZFS storage pools to allow DRBD to handle disk failures instead of ZFS.
```bash
- kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv1 -- zpool set failmode=continue data
- kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv2 -- zpool set failmode=continue data
- kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv3 -- zpool set failmode=continue data
+ kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv1 -- zpool set failmode=continue data
+ kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv2 -- zpool set failmode=continue data
+ kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv3 -- zpool set failmode=continue data
```
1. Check the results by listing the storage pools:
diff --git a/content/en/docs/next/getting-started/install-talos.md b/content/en/docs/next/getting-started/install-talos.md
index fb080916..1f0c00e4 100644
--- a/content/en/docs/next/getting-started/install-talos.md
+++ b/content/en/docs/next/getting-started/install-talos.md
@@ -35,10 +35,12 @@ curl -sSL https://github.com/cozystack/boot-to-talos/raw/refs/heads/main/hack/in
Run `boot-to-talos` and provide the configuration values.
Make sure to use Cozystack's own Talos build, found at [ghcr.io/cozystack/cozystack/talos](https://github.com/cozystack/cozystack/pkgs/container/cozystack%2Ftalos).
+For Cozystack {{< version-pin "cozystack_tag" >}} the pinned Talos version is **{{< version-pin "talos" >}}** — override the installer's default when prompted:
+
```console
$ boot-to-talos
Target disk [/dev/sda]:
-Talos installer image [ghcr.io/cozystack/cozystack/talos:v1.10.5]:
+Talos installer image [ghcr.io/cozystack/cozystack/talos:v1.11.6]: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
Add networking configuration? [yes]:
Interface [eth0]:
IP address [10.0.2.15]:
@@ -47,7 +49,7 @@ Gateway (or 'none') [10.0.2.2]:
Configure serial console? (or 'no') [ttyS0]:
Summary:
- Image: ghcr.io/cozystack/cozystack/talos:v1.10.5
+ Image: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
Disk: /dev/sda
Extra kernel args: ip=10.0.2.15::10.0.2.2:255.255.255.0::eth0::::: console=ttyS0
@@ -56,12 +58,12 @@ WARNING: ALL DATA ON /dev/sda WILL BE ERASED!
Continue? [yes]:
2025/08/03 00:11:03 created temporary directory /tmp/installer-3221603450
-2025/08/03 00:11:03 pulling image ghcr.io/cozystack/cozystack/talos:v1.10.5
+2025/08/03 00:11:03 pulling image ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
2025/08/03 00:11:03 extracting image layers
2025/08/03 00:11:07 creating raw disk /tmp/installer-3221603450/image.raw (2 GiB)
2025/08/03 00:11:07 attached /tmp/installer-3221603450/image.raw to /dev/loop0
2025/08/03 00:11:07 starting Talos installer
-2025/08/03 00:11:07 running Talos installer v1.10.5
+2025/08/03 00:11:07 running Talos installer {{< version-pin "talos" >}}
2025/08/03 00:11:07 WARNING: config validation:
2025/08/03 00:11:07 use "worker" instead of "" for machine type
2025/08/03 00:11:07 created EFI (C12A7328-F81F-11D2-BA4B-00A0C93EC93B) size 104857600 bytes
@@ -76,7 +78,7 @@ Continue? [yes]:
2025/08/03 00:11:07 copying from io reader to /boot/A/initramfs.xz
2025/08/03 00:11:08 writing /boot/grub/grub.cfg to disk
2025/08/03 00:11:08 executing: grub-install --boot-directory=/boot --removable --efi-directory=/boot/EFI /dev/loop0
-2025/08/03 00:11:08 installation of v1.10.5 complete
+2025/08/03 00:11:08 installation of {{< version-pin "talos" >}} complete
2025/08/03 00:11:08 Talos installer finished successfully
2025/08/03 00:11:08 remounting all filesystems read-only
2025/08/03 00:11:08 copy /tmp/installer-3221603450/image.raw → /dev/sda
diff --git a/content/en/docs/next/getting-started/requirements.md b/content/en/docs/next/getting-started/requirements.md
index 59ee7319..e65c2bfc 100644
--- a/content/en/docs/next/getting-started/requirements.md
+++ b/content/en/docs/next/getting-started/requirements.md
@@ -9,7 +9,7 @@ weight: 1
You will need the following tools installed on your workstation:
-- [talosctl](https://www.talos.dev/v1.10/talos-guides/install/talosctl/), the command line client for Talos Linux.
+- [talosctl](https://www.talos.dev/{{< version-pin "talos_minor" >}}/talos-guides/install/talosctl/), the command line client for Talos Linux (use the {{< version-pin "talos_minor" >}}.x series that matches Cozystack {{< version-pin "cozystack_version" >}}).
- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl), the command line client for Kubernetes.
- [Talm](https://github.com/cozystack/talm?tab=readme-ov-file#installation), Cozystack's own configuration manager for Talos Linux:
diff --git a/content/en/docs/next/install/ansible.md b/content/en/docs/next/install/ansible.md
index 091457bb..9c9cc9c1 100644
--- a/content/en/docs/next/install/ansible.md
+++ b/content/en/docs/next/install/ansible.md
@@ -95,7 +95,7 @@ cluster:
**Always pin `cozystack_chart_version` explicitly.** The collection ships with a default version that may not match the release you intend to deploy. Set it in your inventory to avoid unexpected upgrades:
```yaml
-cozystack_chart_version: "1.0.0-rc.2"
+cozystack_chart_version: "{{< version-pin "cozystack_version" >}}"
```
Check [Cozystack releases](https://github.com/cozystack/cozystack/releases) for available versions.
@@ -177,7 +177,7 @@ The playbook performs the following steps automatically:
| Variable | Default | Description |
| --- | --- | --- |
| `cozystack_api_server_host` | *(required)* | Internal IP of the control-plane node. |
-| `cozystack_chart_version` | `1.0.0-rc.1` | Version of the Cozystack Helm chart. **Pin this explicitly.** |
+| `cozystack_chart_version` | `{{< version-pin "cozystack_version" >}}` | Version of the Cozystack Helm chart. **Pin this explicitly.** |
| `cozystack_platform_variant` | `isp-full-generic` | Platform variant: `default`, `isp-full`, `isp-hosted`, `isp-full-generic`. |
| `cozystack_root_host` | `""` | Domain for Cozystack services. Leave empty to skip publishing configuration. |
diff --git a/content/en/docs/next/install/cozystack/platform.md b/content/en/docs/next/install/cozystack/platform.md
index d3fcfe56..981033f8 100644
--- a/content/en/docs/next/install/cozystack/platform.md
+++ b/content/en/docs/next/install/cozystack/platform.md
@@ -31,7 +31,7 @@ This installs the operator, CRDs, and creates the `PackageSource` resource.
### Installing on non-Talos OS
-By default, the Cozystack operator is configured to use the [KubePrism](https://www.talos.dev/latest/kubernetes-guides/configuration/kubeprism/)
+By default, the Cozystack operator is configured to use the [KubePrism](https://www.talos.dev/{{< version-pin "talos_minor" >}}/kubernetes-guides/configuration/kubeprism/)
feature of Talos Linux, which allows access to the Kubernetes API via a local address on the node.
If you're installing Cozystack on a system other than Talos Linux, set the operator variant during installation:
@@ -268,9 +268,9 @@ It is [recommended](https://github.com/LINBIT/linstor-server/issues/463#issuecom
to set `failmode=continue` on ZFS storage pools to allow DRBD to handle disk failures instead of ZFS.
```bash
-kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv1 -- zpool set failmode=continue data
-kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv2 -- zpool set failmode=continue data
-kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv3 -- zpool set failmode=continue data
+kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv1 -- zpool set failmode=continue data
+kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv2 -- zpool set failmode=continue data
+kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv3 -- zpool set failmode=continue data
```
{{% /tab %}}
diff --git a/content/en/docs/next/install/how-to/kubespan.md b/content/en/docs/next/install/how-to/kubespan.md
index 09198bd4..eb93eca0 100644
--- a/content/en/docs/next/install/how-to/kubespan.md
+++ b/content/en/docs/next/install/how-to/kubespan.md
@@ -7,7 +7,7 @@ weight: 120
Talos Linux provides a full mesh WireGuard network for your cluster.
-To enable this functionality, you need to configure [KubeSpan](https://www.talos.dev/v1.8/talos-guides/network/kubespan/) and [Cluster Discovery](https://www.talos.dev/v1.2/kubernetes-guides/configuration/discovery/) in your Talos Linux configuration:
+To enable this functionality, you need to configure [KubeSpan](https://www.talos.dev/{{< version-pin "talos_minor" >}}/talos-guides/network/kubespan/) and [Cluster Discovery](https://www.talos.dev/{{< version-pin "talos_minor" >}}/kubernetes-guides/configuration/discovery/) in your Talos Linux configuration:
```yaml
machine:
diff --git a/content/en/docs/next/install/how-to/single-disk.md b/content/en/docs/next/install/how-to/single-disk.md
index c38447a6..683bc57c 100644
--- a/content/en/docs/next/install/how-to/single-disk.md
+++ b/content/en/docs/next/install/how-to/single-disk.md
@@ -37,7 +37,7 @@ provisioning:
For `talm`, append the same lines at end of the first node's configuration file, such as `nodes/node1.yaml`.
-Read more in the Talos documentation: https://www.talos.dev/v1.10/talos-guides/configuration/disk-management/.
+Read more in the Talos documentation: https://www.talos.dev/{{< version-pin "talos_minor" >}}/talos-guides/configuration/disk-management/.
After applying the configuration, wipe the `data-storage` partition:
diff --git a/content/en/docs/next/install/kubernetes/generic.md b/content/en/docs/next/install/kubernetes/generic.md
index 67946de5..86810068 100644
--- a/content/en/docs/next/install/kubernetes/generic.md
+++ b/content/en/docs/next/install/kubernetes/generic.md
@@ -182,7 +182,7 @@ disable-kube-proxy: true
Download and apply Custom Resource Definitions:
```bash
-kubectl apply -f https://github.com/cozystack/cozystack/releases/latest/download/cozystack-crds.yaml
+kubectl apply -f https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/cozystack-crds.yaml
```
### 2. Deploy Cozystack Operator
@@ -190,7 +190,7 @@ kubectl apply -f https://github.com/cozystack/cozystack/releases/latest/download
Download the generic operator manifest, replace the API server address placeholder, and apply:
```bash
-curl -fsSL https://github.com/cozystack/cozystack/releases/latest/download/cozystack-operator-generic.yaml \
+curl -fsSL https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/cozystack-operator-generic.yaml \
| sed 's/REPLACE_ME/<api-server-address>/' \
| kubectl apply -f -
```
@@ -364,13 +364,13 @@ This example uses k3s default CIDRs. Adjust for kubeadm (`10.244.0.0/16`, `10.96
tasks:
- name: Apply Cozystack CRDs
ansible.builtin.command:
- cmd: kubectl apply -f https://github.com/cozystack/cozystack/releases/latest/download/cozystack-crds.yaml
+ cmd: kubectl apply -f https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/cozystack-crds.yaml
changed_when: true
- name: Download and apply Cozystack operator manifest
ansible.builtin.shell:
cmd: >
- curl -fsSL https://github.com/cozystack/cozystack/releases/latest/download/cozystack-operator-generic.yaml
+ curl -fsSL https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/cozystack-operator-generic.yaml
| sed 's/REPLACE_ME/{{ cozystack_api_host }}/'
| kubectl apply -f -
changed_when: true
diff --git a/content/en/docs/next/install/kubernetes/talm.md b/content/en/docs/next/install/kubernetes/talm.md
index 89b5db55..e1cf66aa 100644
--- a/content/en/docs/next/install/kubernetes/talm.md
+++ b/content/en/docs/next/install/kubernetes/talm.md
@@ -32,7 +32,7 @@ All you need for an installation with Talm is to have access to the nodes: direc
This guide will use private IPs as a default option in examples, and public IPs in instructions and examples which are specific for the public IP setup.
If you are using DHCP, you might not be aware of the IP addresses assigned to your nodes in the private subnet.
-Nodes with Talos Linux [expose Talos API on port `50000`](https://www.talos.dev/v1.10/learn-more/talos-network-connectivity/).
+Nodes with Talos Linux [expose Talos API on port `50000`](https://www.talos.dev/{{< version-pin "talos_minor" >}}/learn-more/talos-network-connectivity/).
You can use `nmap` to find them, providing your network mask (`192.168.123.0/24` in the example):
```bash
@@ -69,7 +69,7 @@ For this guide, you need a couple of tools installed:
brew install siderolabs/tap/talosctl
```
- For more installation options, see the [`talosctl` installation guide](https://www.talos.dev/v1.9/talos-guides/install/talosctl/)
+ For more installation options, see the [`talosctl` installation guide](https://www.talos.dev/{{< version-pin "talos_minor" >}}/talos-guides/install/talosctl/)
## 2. Initialize Cluster Configuration
@@ -111,9 +111,9 @@ endpoint: "https://192.168.100.10:6443"
clusterDomain: cozy.local
## Floating IP — should be an unused IP in the same subnet as nodes
floatingIP: 192.168.100.10
-## Talos source image: use the latest available version
+## Talos source image: pinned to the version that ships with the current Cozystack release
## https://github.com/cozystack/cozystack/pkgs/container/cozystack%2Ftalos
-image: "ghcr.io/cozystack/cozystack/talos:v1.10.5"
+image: "ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}"
## Pod subnet — used to assign IPs to pods
podSubnets:
- 10.244.0.0/16
diff --git a/content/en/docs/next/install/kubernetes/talos-bootstrap.md b/content/en/docs/next/install/kubernetes/talos-bootstrap.md
index c9ea33e9..feed8ced 100644
--- a/content/en/docs/next/install/kubernetes/talos-bootstrap.md
+++ b/content/en/docs/next/install/kubernetes/talos-bootstrap.md
@@ -64,7 +64,7 @@ talos-bootstrap --help
- name: vfio_pci
- name: vfio_iommu_type1
install:
- image: ghcr.io/cozystack/cozystack/talos:v1.10.3
+ image: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
registries:
mirrors:
docker.io:
diff --git a/content/en/docs/next/install/kubernetes/talosctl.md b/content/en/docs/next/install/kubernetes/talosctl.md
index 92b9aeb6..64b1f850 100644
--- a/content/en/docs/next/install/kubernetes/talosctl.md
+++ b/content/en/docs/next/install/kubernetes/talosctl.md
@@ -84,7 +84,7 @@ Discovered open port 50000/tcp on 192.168.123.13
- name: vfio_pci
- name: vfio_iommu_type1
install:
- image: ghcr.io/cozystack/cozystack/talos:v1.10.3
+ image: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
registries:
mirrors:
docker.io:
diff --git a/content/en/docs/next/install/providers/hetzner.md b/content/en/docs/next/install/providers/hetzner.md
index 55986df9..b44789f9 100644
--- a/content/en/docs/next/install/providers/hetzner.md
+++ b/content/en/docs/next/install/providers/hetzner.md
@@ -176,7 +176,7 @@ but has instructions and examples specific to Hetzner.
clusterDomain: cozy.local
# floatingIP points to the primary etcd node
floatingIP: 10.0.1.100
- image: "ghcr.io/cozystack/cozystack/talos:v1.9.5"
+ image: "ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}"
podSubnets:
- 10.244.0.0/16
serviceSubnets:
@@ -317,12 +317,12 @@ The final stage of deploying a Cozystack cluster on Hetzner is to install Cozyst
```bash
helm upgrade --install cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
- --version X.Y.Z \
+ --version {{< version-pin "cozystack_version" >}} \
--namespace cozy-system \
--create-namespace
```
- Replace `X.Y.Z` with the desired Cozystack version from the [releases page](https://github.com/cozystack/cozystack/releases).
+ The example pins the installer to Cozystack {{< version-pin "cozystack_tag" >}}. For a newer patch in the same minor series, pick the desired tag from the [releases page](https://github.com/cozystack/cozystack/releases).
1. Create a Platform Package file, **cozystack-platform.yaml**.
diff --git a/content/en/docs/next/install/providers/oracle-cloud.md b/content/en/docs/next/install/providers/oracle-cloud.md
index ffe728b1..45f60347 100644
--- a/content/en/docs/next/install/providers/oracle-cloud.md
+++ b/content/en/docs/next/install/providers/oracle-cloud.md
@@ -24,10 +24,10 @@ or come and share your experience in the [Cozystack community](https://t.me/cozy
The first step is to make a Talos Linux installation image available for use in Oracle Cloud as a custom image.
-1. Download the Talos Linux image archive from the [Cozystack releases page](https://github.com/cozystack/cozystack/releases/latest/) and unpack it:
+1. Download the Talos Linux image archive for Cozystack {{< version-pin "cozystack_tag" >}} from the [releases page](https://github.com/cozystack/cozystack/releases/tag/{{< version-pin "cozystack_tag" >}}) and unpack it:
```bash
- wget https://github.com/cozystack/cozystack/releases/latest/download/metal-amd64.raw.xz
+ wget https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/metal-amd64.raw.xz
xz -d metal-amd64.raw.xz
```
@@ -294,7 +294,7 @@ mv talm /usr/local/bin/talm
The node's public IP must be specified for both the `--nodes` (`-n`) and `--endpoints` (`-e`) parameters.
To learn more about Talos node configuration and endpoints, refer to the
- [Talos documentation](https://www.talos.dev/v1.10/learn-more/talosctl/#endpoints-and-nodes)
+ [Talos documentation](https://www.talos.dev/{{< version-pin "talos_minor" >}}/learn-more/talosctl/#endpoints-and-nodes)
1. Edit the node configuration file as needed.
diff --git a/content/en/docs/next/install/talos/boot-to-talos.md b/content/en/docs/next/install/talos/boot-to-talos.md
index 4e16ba91..268ed071 100644
--- a/content/en/docs/next/install/talos/boot-to-talos.md
+++ b/content/en/docs/next/install/talos/boot-to-talos.md
@@ -24,16 +24,16 @@ Three versions need to line up when you install Cozystack on Talos:
| **`talosctl`** on your workstation | downloaded separately from [siderolabs/talos releases](https://github.com/siderolabs/talos/releases) | the major.minor of the Talos version you wrote to the node |
| **Cozystack** | `--version` flag passed to `helm upgrade --install cozy-installer` | — (the anchor; everything else follows) |
-For **Cozystack v1.2.x** the pinned Talos version is **v1.12.6**
-([`packages/core/talos/images/talos/profiles/installer.yaml`](https://github.com/cozystack/cozystack/blob/release-1.2.1/packages/core/talos/images/talos/profiles/installer.yaml)).
-Use `ghcr.io/cozystack/cozystack/talos:v1.12.6` as the `boot-to-talos` image and download `talosctl` v1.12.x.
+For **Cozystack {{< version-pin "cozystack_version" >}}** the pinned Talos version is **{{< version-pin "talos" >}}**
+([`packages/core/talos/images/talos/profiles/installer.yaml`](https://github.com/cozystack/cozystack/blob/{{< version-pin "cozystack_tag" >}}/packages/core/talos/images/talos/profiles/installer.yaml)).
+Use `ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}` as the `boot-to-talos` image and download `talosctl` {{< version-pin "talos_minor" >}}.x.
{{% alert color="warning" %}}
`boot-to-talos` v0.7.x carries its own hardcoded default image
(`ghcr.io/cozystack/cozystack/talos:v1.11.6` as of v0.7.1, see
[`cmd/boot-to-talos/main.go`](https://github.com/cozystack/boot-to-talos/blob/v0.7.1/cmd/boot-to-talos/main.go)).
If you let the interactive prompt fall through to that default on a cluster
-you intend to run Cozystack v1.2.x, you will end up with a Talos v1.11 node
+you intend to run Cozystack v1.3.0, you will end up with a Talos v1.11 node
while the Cozystack installer and Talm templates target Talos v1.12 — you
will hit a mismatch at bootstrap time. Always type in the image matching
your target Cozystack release (or pass `-image` on the command line).
@@ -78,7 +78,7 @@ Mode:
2. install – prepare the environment, run the Talos installer, and then overwrite the system disk with the installed image.
Mode [1]: 2
Target disk [/dev/sda]:
-Talos installer image [ghcr.io/cozystack/cozystack/talos:v1.11.6]: ghcr.io/cozystack/cozystack/talos:v1.12.6
+Talos installer image [ghcr.io/cozystack/cozystack/talos:v1.11.6]: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
Add networking configuration? [yes]:
Interface [eth0]:
IP address [10.0.2.15]:
@@ -87,7 +87,7 @@ Gateway (or 'none') [10.0.2.2]:
Configure serial console? (or 'no') [ttyS0]:
Summary:
- Image: ghcr.io/cozystack/cozystack/talos:v1.12.6
+ Image: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
Disk: /dev/sda
Extra kernel args: ip=10.0.2.15::10.0.2.2:255.255.255.0::eth0::::: console=ttyS0
@@ -96,12 +96,12 @@ WARNING: ALL DATA ON /dev/sda WILL BE ERASED!
Continue? [yes]:
2025/08/03 00:11:03 created temporary directory /tmp/installer-3221603450
-2025/08/03 00:11:03 pulling image ghcr.io/cozystack/cozystack/talos:v1.12.6
+2025/08/03 00:11:03 pulling image ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
2025/08/03 00:11:03 extracting image layers
2025/08/03 00:11:07 creating raw disk /tmp/installer-3221603450/image.raw (2 GiB)
2025/08/03 00:11:07 attached /tmp/installer-3221603450/image.raw to /dev/loop0
2025/08/03 00:11:07 starting Talos installer
-2025/08/03 00:11:07 running Talos installer v1.12.6
+2025/08/03 00:11:07 running Talos installer {{< version-pin "talos" >}}
2025/08/03 00:11:07 WARNING: config validation:
2025/08/03 00:11:07 use "worker" instead of "" for machine type
2025/08/03 00:11:07 created EFI (C12A7328-F81F-11D2-BA4B-00A0C93EC93B) size 104857600 bytes
@@ -116,7 +116,7 @@ Continue? [yes]:
2025/08/03 00:11:07 copying from io reader to /boot/A/initramfs.xz
2025/08/03 00:11:08 writing /boot/grub/grub.cfg to disk
2025/08/03 00:11:08 executing: grub-install --boot-directory=/boot --removable --efi-directory=/boot/EFI /dev/loop0
-2025/08/03 00:11:08 installation of v1.12.6 complete
+2025/08/03 00:11:08 installation of {{< version-pin "talos" >}} complete
2025/08/03 00:11:08 Talos installer finished successfully
2025/08/03 00:11:08 remounting all filesystems read-only
2025/08/03 00:11:08 copy /tmp/installer-3221603450/image.raw → /dev/sda
diff --git a/content/en/docs/next/install/talos/iso.md b/content/en/docs/next/install/talos/iso.md
index 61980839..13a15e69 100644
--- a/content/en/docs/next/install/talos/iso.md
+++ b/content/en/docs/next/install/talos/iso.md
@@ -14,10 +14,10 @@ Note that Cozystack provides its own Talos builds, which are tested and optimize
## Installation
-1. Download Talos Linux asset from the Cozystack's [releases page](https://github.com/cozystack/cozystack/releases).
+1. Download the Talos Linux ISO for Cozystack {{< version-pin "cozystack_tag" >}} from the [releases page](https://github.com/cozystack/cozystack/releases/tag/{{< version-pin "cozystack_tag" >}}).
```bash
- wget https://github.com/cozystack/cozystack/releases/latest/download/metal-amd64.iso
+ wget https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/metal-amd64.iso
```
1. Boot your machine with ISO attached.
diff --git a/content/en/docs/next/kubernetes/_index.md b/content/en/docs/next/kubernetes/_index.md
index 8b8ecd5f..3f0d973e 100644
--- a/content/en/docs/next/kubernetes/_index.md
+++ b/content/en/docs/next/kubernetes/_index.md
@@ -10,7 +10,7 @@ aliases:
diff --git a/content/en/docs/next/networking/http-cache.md b/content/en/docs/next/networking/http-cache.md
index 3646e4d7..326c981b 100644
--- a/content/en/docs/next/networking/http-cache.md
+++ b/content/en/docs/next/networking/http-cache.md
@@ -11,7 +11,7 @@ aliases:
diff --git a/content/en/docs/next/networking/tcp-balancer.md b/content/en/docs/next/networking/tcp-balancer.md
index e8bb8875..bbc47b50 100644
--- a/content/en/docs/next/networking/tcp-balancer.md
+++ b/content/en/docs/next/networking/tcp-balancer.md
@@ -11,7 +11,7 @@ aliases:
diff --git a/content/en/docs/next/networking/vpc.md b/content/en/docs/next/networking/vpc.md
index 7494f51b..79d2ef19 100644
--- a/content/en/docs/next/networking/vpc.md
+++ b/content/en/docs/next/networking/vpc.md
@@ -11,7 +11,7 @@ aliases:
diff --git a/content/en/docs/next/networking/vpn.md b/content/en/docs/next/networking/vpn.md
index 1ecfcc00..4ed8ab40 100644
--- a/content/en/docs/next/networking/vpn.md
+++ b/content/en/docs/next/networking/vpn.md
@@ -11,7 +11,7 @@ aliases:
diff --git a/content/en/docs/next/operations/services/bootbox.md b/content/en/docs/next/operations/services/bootbox.md
index 409c2437..cc77a0c4 100644
--- a/content/en/docs/next/operations/services/bootbox.md
+++ b/content/en/docs/next/operations/services/bootbox.md
@@ -6,7 +6,7 @@ linkTitle: "BootBox"
diff --git a/content/en/docs/next/operations/services/etcd.md b/content/en/docs/next/operations/services/etcd.md
index b5f4d954..70bf6234 100644
--- a/content/en/docs/next/operations/services/etcd.md
+++ b/content/en/docs/next/operations/services/etcd.md
@@ -6,7 +6,7 @@ linkTitle: "Etcd"
diff --git a/content/en/docs/next/operations/services/ingress.md b/content/en/docs/next/operations/services/ingress.md
index 36678a2f..45e03567 100644
--- a/content/en/docs/next/operations/services/ingress.md
+++ b/content/en/docs/next/operations/services/ingress.md
@@ -6,7 +6,7 @@ linkTitle: "Ingress"
diff --git a/content/en/docs/next/operations/services/monitoring/parameters.md b/content/en/docs/next/operations/services/monitoring/parameters.md
index a8237c4f..df7a627e 100644
--- a/content/en/docs/next/operations/services/monitoring/parameters.md
+++ b/content/en/docs/next/operations/services/monitoring/parameters.md
@@ -7,7 +7,7 @@ weight: 1
diff --git a/content/en/docs/next/operations/services/seaweedfs.md b/content/en/docs/next/operations/services/seaweedfs.md
index 119d5ca2..1db1b626 100644
--- a/content/en/docs/next/operations/services/seaweedfs.md
+++ b/content/en/docs/next/operations/services/seaweedfs.md
@@ -6,7 +6,7 @@ linkTitle: "SeaweedFS"
diff --git a/content/en/docs/next/operations/troubleshooting/monitoring-troubleshooting.md b/content/en/docs/next/operations/troubleshooting/monitoring-troubleshooting.md
index 0291d8cd..77550d37 100644
--- a/content/en/docs/next/operations/troubleshooting/monitoring-troubleshooting.md
+++ b/content/en/docs/next/operations/troubleshooting/monitoring-troubleshooting.md
@@ -122,7 +122,7 @@ If you cannot access Grafana:
- Check the service and ingress:
```bash
-kubectl get svc,ingress -n cozy-monitoring -l app.kubernetes.io/name=grafana
+kubectl get svc,ingress -n cozy-monitoring -l app.kubernetes.io/name=grafana
```
- Verify RBAC permissions for your user.
diff --git a/content/en/docs/next/virtualization/vm-disk.md b/content/en/docs/next/virtualization/vm-disk.md
index 7868f40d..6c85c78d 100644
--- a/content/en/docs/next/virtualization/vm-disk.md
+++ b/content/en/docs/next/virtualization/vm-disk.md
@@ -10,7 +10,7 @@ aliases:
@@ -20,15 +20,17 @@ A Virtual Machine Disk
### Common parameters
-| Name | Description | Type | Value |
-| ------------------- | ------------------------------------------------------------------------------------------------------------------------ | ---------- | ------------ |
-| `source` | The source image location used to create a disk. | `object` | `{}` |
-| `source.image` | Use image by name. | `*object` | `null` |
-| `source.image.name` | Name of the image to use (uploaded as "golden image" or from the list: `ubuntu`, `fedora`, `cirros`, `alpine`, `talos`). | `string` | `""` |
-| `source.upload` | Upload local image. | `*object` | `null` |
-| `source.http` | Download image from an HTTP source. | `*object` | `null` |
-| `source.http.url` | URL to download the image. | `string` | `""` |
-| `optical` | Defines if disk should be considered optical. | `bool` | `false` |
-| `storage` | The size of the disk allocated for the virtual machine. | `quantity` | `5Gi` |
-| `storageClass` | StorageClass used to store the data. | `string` | `replicated` |
+| Name | Description | Type | Value |
+| ------------------- | ------------------------------------------------------- | ---------- | ------------ |
+| `source` | The source image location used to create a disk. | `object` | `{}` |
+| `source.image` | Use image by name from default collection. | `*object` | `null` |
+| `source.image.name` | Name of the image to use. | `string` | `""` |
+| `source.upload` | Upload local image. | `*object` | `null` |
+| `source.http` | Download image from an HTTP source. | `*object` | `null` |
+| `source.http.url` | URL to download the image. | `string` | `""` |
+| `source.disk` | Clone an existing vm-disk. | `*object` | `null` |
+| `source.disk.name` | Name of the vm-disk to clone. | `string` | `""` |
+| `optical` | Defines if disk should be considered optical. | `bool` | `false` |
+| `storage` | The size of the disk allocated for the virtual machine. | `quantity` | `5Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `replicated` |
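+
+For example, a new disk can be created as a clone of an existing one by referencing it in `source.disk`.
+A minimal sketch of such values (the source disk name `ubuntu-golden` is purely illustrative):
+
+```yaml
+source:
+  disk:
+    name: ubuntu-golden
+storage: 10Gi
+storageClass: replicated
+```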
diff --git a/content/en/docs/next/virtualization/vm-image.md b/content/en/docs/next/virtualization/vm-image.md
index 79e17011..97b80062 100644
--- a/content/en/docs/next/virtualization/vm-image.md
+++ b/content/en/docs/next/virtualization/vm-image.md
@@ -34,10 +34,10 @@ This means if you create a VMInstance named `ubuntu`, the VirtualMachine in Kube
Creating named VM images (golden images) requires an administrator account in Cozystack.
The simplest way to create named VM images is by using the CLI script.
-The [`cdi_golden_image_create.sh`](https://github.com/cozystack/cozystack/blob/main/hack/cdi_golden_image_create.sh) script can be downloaded from the Cozystack repository:
+The [`cdi_golden_image_create.sh`](https://github.com/cozystack/cozystack/blob/{{< version-pin "cozystack_tag" >}}/hack/cdi_golden_image_create.sh) script can be downloaded from the Cozystack {{< version-pin "cozystack_tag" >}} release tag:
```bash
-wget https://github.com/cozystack/cozystack/blob/main/hack/cdi_golden_image_create.sh
+wget https://raw.githubusercontent.com/cozystack/cozystack/{{< version-pin "cozystack_tag" >}}/hack/cdi_golden_image_create.sh
chmod +x cdi_golden_image_create.sh
```
diff --git a/content/en/docs/next/virtualization/vm-instance.md b/content/en/docs/next/virtualization/vm-instance.md
index 613a8956..958c945a 100644
--- a/content/en/docs/next/virtualization/vm-instance.md
+++ b/content/en/docs/next/virtualization/vm-instance.md
@@ -10,7 +10,7 @@ aliases:
@@ -61,8 +61,10 @@ virtctl ssh @
| `disks` | List of disks to attach. | `[]object` | `[]` |
| `disks[i].name` | Disk name. | `string` | `""` |
| `disks[i].bus` | Disk bus type (e.g. "sata"). | `string` | `""` |
-| `subnets` | Additional subnets | `[]object` | `[]` |
-| `subnets[i].name` | Subnet name | `string` | `""` |
+| `networks` | Networks to attach the VM to. | `[]object` | `[]` |
+| `networks[i].name` | Network attachment name. | `string` | `""` |
+| `subnets` | Deprecated: use networks instead. | `[]object` | `[]` |
+| `subnets[i].name` | Network attachment name. | `string` | `""` |
| `gpus` | List of GPUs to attach (NVIDIA driver requires at least 4 GiB RAM). | `[]object` | `[]` |
| `gpus[i].name` | The name of the GPU resource to attach. | `string` | `""` |
| `cpuModel` | Model specifies the CPU model inside the VMI. List of available models https://github.com/libvirt/libvirt/tree/master/src/cpu_map | `string` | `""` |
@@ -186,7 +188,7 @@ Specific characteristics of this series are:
## Development
To get started with customizing or creating your own instancetypes and preferences
-see [Developer Guide]({{% ref "/docs/next/development" %}}).
+see the [Developer Guide]({{% ref "/docs/next/development" %}}).
## Resources
diff --git a/content/en/docs/v1.2/_index.md b/content/en/docs/v1.2/_index.md
index 71962a59..8c7c8025 100644
--- a/content/en/docs/v1.2/_index.md
+++ b/content/en/docs/v1.2/_index.md
@@ -5,7 +5,7 @@ description: "Free PaaS platform and framework for building clouds"
taxonomyCloud: []
cascade:
type: docs
-weight: 10
+weight: 20
---
Cozystack is a free PaaS platform and framework for building clouds
diff --git a/content/en/docs/v1.3/.gitkeep b/content/en/docs/v1.3/.gitkeep
new file mode 100644
index 00000000..e69de29b
diff --git a/content/en/docs/v1.3/_index.md b/content/en/docs/v1.3/_index.md
new file mode 100644
index 00000000..96901845
--- /dev/null
+++ b/content/en/docs/v1.3/_index.md
@@ -0,0 +1,15 @@
+---
+title: "Cozystack v1.3 Documentation"
+linkTitle: "Cozystack v1.3"
+description: "Free PaaS platform and framework for building clouds"
+taxonomyCloud: []
+cascade:
+ type: docs
+weight: 10
+---
+
+Cozystack is a free PaaS platform and framework for building clouds
+
+With Cozystack, you can transform your bunch of servers into an intelligent system with a simple REST API for spawning Kubernetes clusters, Database-as-a-Service, virtual machines, load balancers, HTTP caching services, and other services with ease.
+
+You can use Cozystack to build your own cloud or to provide cost-effective development environments.
diff --git a/content/en/docs/v1.3/applications/_include/clickhouse.md b/content/en/docs/v1.3/applications/_include/clickhouse.md
new file mode 100644
index 00000000..b05db2cb
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/clickhouse.md
@@ -0,0 +1,10 @@
+---
+title: "Managed ClickHouse Service"
+linkTitle: "ClickHouse"
+description: ""
+weight: 50
+aliases:
+ - /docs/reference/applications/clickhouse
+ - /docs/v1.3/reference/applications/clickhouse
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/foundationdb.md b/content/en/docs/v1.3/applications/_include/foundationdb.md
new file mode 100644
index 00000000..514bfaaf
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/foundationdb.md
@@ -0,0 +1,5 @@
+---
+title: "FoundationDB"
+linkTitle: "FoundationDB"
+weight: 50
+---
diff --git a/content/en/docs/v1.3/applications/_include/harbor.md b/content/en/docs/v1.3/applications/_include/harbor.md
new file mode 100644
index 00000000..35ed0c33
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/harbor.md
@@ -0,0 +1,6 @@
+---
+title: "Managed Harbor Container Registry"
+linkTitle: "Harbor Container Registry"
+weight: 50
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/kafka.md b/content/en/docs/v1.3/applications/_include/kafka.md
new file mode 100644
index 00000000..f500fa15
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/kafka.md
@@ -0,0 +1,9 @@
+---
+title: "Managed Kafka Service"
+linkTitle: "Kafka"
+weight: 50
+aliases:
+ - /docs/reference/applications/kafka
+ - /docs/v1.3/reference/applications/kafka
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/mariadb.md b/content/en/docs/v1.3/applications/_include/mariadb.md
new file mode 100644
index 00000000..5e5fb595
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/mariadb.md
@@ -0,0 +1,9 @@
+---
+title: "Managed MariaDB Service"
+linkTitle: "MariaDB"
+weight: 50
+aliases:
+ - /docs/reference/applications/mariadb
+ - /docs/v1.3/reference/applications/mariadb
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/mongodb.md b/content/en/docs/v1.3/applications/_include/mongodb.md
new file mode 100644
index 00000000..6c1893e4
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/mongodb.md
@@ -0,0 +1,9 @@
+---
+title: "Managed MongoDB Service"
+linkTitle: "MongoDB"
+weight: 50
+aliases:
+ - /docs/reference/applications/mongodb
+ - /docs/v1.3/reference/applications/mongodb
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/nats.md b/content/en/docs/v1.3/applications/_include/nats.md
new file mode 100644
index 00000000..d63f7d94
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/nats.md
@@ -0,0 +1,9 @@
+---
+title: "Managed NATS Service"
+linkTitle: "NATS"
+weight: 50
+aliases:
+ - /docs/reference/applications/nats
+ - /docs/v1.3/reference/applications/nats
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/openbao.md b/content/en/docs/v1.3/applications/_include/openbao.md
new file mode 100644
index 00000000..e3bd6b24
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/openbao.md
@@ -0,0 +1,6 @@
+---
+title: "Managed OpenBAO Service"
+linkTitle: "OpenBAO"
+weight: 50
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/postgres.md b/content/en/docs/v1.3/applications/_include/postgres.md
new file mode 100644
index 00000000..53909983
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/postgres.md
@@ -0,0 +1,9 @@
+---
+title: "Managed PostgreSQL Service"
+linkTitle: "PostgreSQL"
+weight: 50
+aliases:
+ - /docs/reference/applications/postgres
+ - /docs/v1.3/reference/applications/postgres
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/qdrant.md b/content/en/docs/v1.3/applications/_include/qdrant.md
new file mode 100644
index 00000000..5eea69b7
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/qdrant.md
@@ -0,0 +1,6 @@
+---
+title: "Managed Qdrant Service"
+linkTitle: "Qdrant"
+weight: 50
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/rabbitmq.md b/content/en/docs/v1.3/applications/_include/rabbitmq.md
new file mode 100644
index 00000000..3f36fdea
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/rabbitmq.md
@@ -0,0 +1,9 @@
+---
+title: "Managed RabbitMQ Service"
+linkTitle: "RabbitMQ"
+weight: 50
+aliases:
+ - /docs/reference/applications/rabbitmq
+ - /docs/v1.3/reference/applications/rabbitmq
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/redis.md b/content/en/docs/v1.3/applications/_include/redis.md
new file mode 100644
index 00000000..49a0f13f
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/redis.md
@@ -0,0 +1,9 @@
+---
+title: "Managed Redis Service"
+linkTitle: "Redis"
+weight: 50
+aliases:
+ - /docs/reference/applications/redis
+ - /docs/v1.3/reference/applications/redis
+---
+
diff --git a/content/en/docs/v1.3/applications/_include/tenant.md b/content/en/docs/v1.3/applications/_include/tenant.md
new file mode 100644
index 00000000..f57cff7f
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_include/tenant.md
@@ -0,0 +1,9 @@
+---
+title: "Tenant Application Reference"
+linkTitle: "Tenant"
+weight: 50
+aliases:
+ - /docs/reference/applications/tenant
+ - /docs/v1.3/reference/applications/tenant
+---
+
diff --git a/content/en/docs/v1.3/applications/_index.md b/content/en/docs/v1.3/applications/_index.md
new file mode 100644
index 00000000..bf1b687b
--- /dev/null
+++ b/content/en/docs/v1.3/applications/_index.md
@@ -0,0 +1,21 @@
+---
+title: "Managed Applications: Guides and Reference"
+linkTitle: "Managed Applications"
+description: "Learn how to deploy, configure, access, and backup managed applications in Cozystack."
+weight: 45
+aliases:
+ - /docs/v1.3/components
+ - /docs/v1.3/guides/applications
+---
+
+## Available Application Versions
+
+Cozystack deploys applications in two complementary ways:
+
+- **Operator‑managed applications** – Cozystack bundles a specific version of a Kubernetes Operator that installs and continuously reconciles the application.
+ As a rule, the operator chooses one of the most recent stable versions of the application by default.
+
+- **Chart‑managed applications** – When no mature operator exists, Cozystack packages an upstream (or in‑house) Helm chart.
+ The chart’s `appVersion` pin tracks the latest stable upstream release, keeping deployments secure and up‑to‑date.
+
+
diff --git a/content/en/docs/v1.3/applications/clickhouse.md b/content/en/docs/v1.3/applications/clickhouse.md
new file mode 100644
index 00000000..99d4c161
--- /dev/null
+++ b/content/en/docs/v1.3/applications/clickhouse.md
@@ -0,0 +1,114 @@
+---
+title: "Managed ClickHouse Service"
+linkTitle: "ClickHouse"
+description: ""
+weight: 50
+aliases:
+ - /docs/reference/applications/clickhouse
+ - /docs/v1.3/reference/applications/clickhouse
+---
+
+
+
+
+ClickHouse is an open-source, high-performance, column-oriented SQL database management system (DBMS).
+It is used for online analytical processing (OLAP).
+
+### How to restore backup from S3
+
+1. Find the snapshot:
+
+   ```bash
+   restic -r s3:s3.example.org/clickhouse-backups/table_name snapshots
+   ```
+
+2. Restore it:
+
+   ```bash
+   restic -r s3:s3.example.org/clickhouse-backups/table_name restore latest --target /tmp/
+   ```
+
+For more details, read [Restic: Effective Backup from Stdin](https://blog.aenix.io/restic-effective-backup-from-stdin-4bc1e8f083c1).
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------ | ---------- | ------- |
+| `replicas` | Number of ClickHouse replicas. | `int` | `2` |
+| `shards` | Number of ClickHouse shards. | `int` | `1` |
+| `resources` | Explicit CPU and memory configuration for each ClickHouse replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
+| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| ---------------------- | ------------------------------------------------------------- | ------------------- | ------- |
+| `logStorageSize` | Size of Persistent Volume for logs. | `quantity` | `2Gi` |
+| `logTTL` | TTL (expiration time) for `query_log` and `query_thread_log`. | `int` | `15` |
+| `users` | Users configuration map. | `map[string]object` | `{}` |
+| `users[name].password` | Password for the user. | `string` | `""` |
+| `users[name].readonly` | User is readonly (default: false). | `bool` | `false` |
+
+
+### Backup parameters
+
+| Name | Description | Type | Value |
+| ------------------------ | ----------------------------------------------- | -------- | ------------------------------------------------------ |
+| `backup` | Backup configuration. | `object` | `{}` |
+| `backup.enabled` | Enable regular backups (default: false). | `bool` | `false` |
+| `backup.s3Region` | AWS S3 region where backups are stored. | `string` | `us-east-1` |
+| `backup.s3Bucket` | S3 bucket used for storing backups. | `string` | `s3.example.org/clickhouse-backups` |
+| `backup.schedule` | Cron schedule for automated backups. | `string` | `0 2 * * *` |
+| `backup.cleanupStrategy` | Retention strategy for cleaning up old backups. | `string` | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
+| `backup.s3AccessKey` | Access key for S3 authentication. | `string` | `` |
+| `backup.s3SecretKey` | Secret key for S3 authentication. | `string` | `` |
+| `backup.resticPassword` | Password for Restic backup encryption. | `string` | `` |
+
+
+### ClickHouse Keeper parameters
+
+| Name | Description | Type | Value |
+| ---------------------------------- | ------------------------------------------------------------ | ---------- | ------- |
+| `clickhouseKeeper` | ClickHouse Keeper configuration. | `object` | `{}` |
+| `clickhouseKeeper.enabled` | Deploy ClickHouse Keeper for cluster coordination. | `bool` | `true` |
+| `clickhouseKeeper.size` | Persistent Volume Claim size available for application data. | `quantity` | `1Gi` |
+| `clickhouseKeeper.resourcesPreset` | Default sizing preset. | `string` | `micro` |
+| `clickhouseKeeper.replicas` | Number of Keeper replicas. | `int` | `3` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+  cpu: 4000m
+  memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
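+
+### users
+
+`users` defines ClickHouse users as a map keyed by user name, with the per-user fields listed in the application-specific parameters above.
+A minimal sketch (the user names and passwords are purely illustrative):
+
+```yaml
+users:
+  admin:
+    password: "strong-password"
+  reader:
+    password: "another-password"
+    readonly: true
+```
+
+### backup
+
+`backup` enables scheduled backups to S3-compatible storage using the parameters listed above.
+A sketch with placeholder credentials:
+
+```yaml
+backup:
+  enabled: true
+  s3Region: us-east-1
+  s3Bucket: s3.example.org/clickhouse-backups
+  schedule: "0 2 * * *"
+  cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
+  s3AccessKey: <your-access-key>
+  s3SecretKey: <your-secret-key>
+  resticPassword: <your-restic-password>
+```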
diff --git a/content/en/docs/v1.3/applications/external.md b/content/en/docs/v1.3/applications/external.md
new file mode 100644
index 00000000..ef1a462a
--- /dev/null
+++ b/content/en/docs/v1.3/applications/external.md
@@ -0,0 +1,250 @@
+---
+title: "Adding External Applications to Cozystack Catalog"
+linkTitle: "External Apps"
+description: "Learn how to add managed applications from external sources"
+weight: 5
+---
+
+Cozystack administrators can add applications from external sources in addition to the standard application catalog.
+These applications appear in the same catalog and behave like regular managed applications for platform users.
+
+This guide explains the structure of an external application package and how to add it to a Cozystack cluster.
+
+For a complete working example, see [github.com/cozystack/external-apps-example](https://github.com/cozystack/external-apps-example).
+
+Just like standard Cozystack applications, this external application package uses Helm and FluxCD.
+To learn more about developing application packages, read the Cozystack [Developer Guide]({{% ref "/docs/v1.3/development" %}}).
+
+## Repository Structure
+
+An external application repository has the following layout:
+
+```text
+init.yaml            # Bootstrap manifest (GitRepository + HelmRelease)
+scripts/
+  package.mk         # Shared Makefile targets for app charts
+packages/
+  core/platform/     # Platform chart: namespaces, operators, HelmCharts, ApplicationDefinitions
+  apps/<app-name>/   # Helm chart for each user-installable application
+```
+
+- `packages/core/platform` — a Helm chart deployed by FluxCD. It registers all applications via `ApplicationDefinition` CRDs, creates required namespaces, deploys operators, and defines `HelmChart` resources that point to the app charts in the same Git repository.
+- `packages/apps/<app-name>/` — standard Helm charts that template the actual Kubernetes resources (CRDs, ConfigMaps, Secrets, etc.).
+
+## Platform Chart
+
+The platform chart (`packages/core/platform/`) is the central piece. It contains templates for:
+
+### Namespaces
+
+Create namespaces for operators and system components:
+
+```yaml
+apiVersion: v1
+kind: Namespace
+metadata:
+  labels:
+    cozystack.io/system: "true"
+  name: external-<name>
+```
+
+### HelmCharts
+
+Define `HelmChart` resources that tell FluxCD where to find each app chart within the Git repository:
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1
+kind: HelmChart
+metadata:
+  name: external-apps-<app-name>
+  namespace: cozy-public
+spec:
+  interval: 5m
+  chart: ./packages/apps/<app-name>
+  sourceRef:
+    kind: GitRepository
+    name: external-apps
+  reconcileStrategy: Revision
+```
+
+Use `reconcileStrategy: Revision` so that charts with a static `version: 0.0.0` are re-reconciled whenever the Git content changes.
+
+### Operator Deployment
+
+If your application requires an operator, deploy it via a `HelmRepository` and `HelmRelease`:
+
+```yaml
+apiVersion: source.toolkit.fluxcd.io/v1
+kind: HelmRepository
+metadata:
+  name: <operator-name>
+  namespace: external-<name>
+spec:
+  type: oci
+  interval: 5m
+  url: oci://ghcr.io/<organization>/charts
+---
+apiVersion: helm.toolkit.fluxcd.io/v2
+kind: HelmRelease
+metadata:
+  name: <operator-name>
+  namespace: external-<name>
+spec:
+  interval: 5m
+  releaseName: <operator-name>
+  targetNamespace: external-<name>
+  chart:
+    spec:
+      chart: <operator-chart-name>
+      sourceRef:
+        kind: HelmRepository
+        name: <operator-name>
+      version: '>=1.0.0'
+```
+
+### ApplicationDefinitions
+
+Register each application in the Cozystack dashboard with an `ApplicationDefinition`:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: ApplicationDefinition
+metadata:
+  name: <app-name>
+spec:
+  application:
+    kind: <Kind>
+    singular: <singular>
+    plural: <plural>
+    openAPISchema: '{"title":"Chart Values","type":"object","properties":{...}}'
+  release:
+    chartRef:
+      kind: HelmChart
+      name: external-apps-<app-name>
+      namespace: cozy-public
+    labels:
+      cozystack.io/ui: "true"
+    prefix: <app-name>-
+  dashboard:
+    category: <category>
+    singular: <singular display name>
+    plural: <plural display name>
+    description: <short description>
+    tags:
+      - <tag>
+    icon: <icon>
+    keysOrder:
+      - - apiVersion
+      - - appVersion
+      - - kind
+      - - metadata
+      - - metadata
+        - name
+      - - spec
+```
+
+Follow these naming conventions (matching the main Cozystack repository):
+
+| Field | Convention | Example for `my-app` |
+| --- | --- | --- |
+| `metadata.name` | lowercase, hyphens allowed | `my-app` |
+| `application.kind` | PascalCase, no hyphens | `MyApp` |
+| `application.singular` | lowercase, no hyphens | `myapp` |
+| `application.plural` | lowercase, no hyphens | `myapps` |
+| `release.prefix` | `<name>-` | `my-app-` |
+| `openAPISchema` title | always `"Chart Values"` | — |
+
+The `openAPISchema` field contains a single-line JSON string with the schema for the application values. It intentionally omits `if`/`then`/`else` conditional rules because Kubernetes `apiextensions/v1` `JSONSchemaProps` does not support these keywords. Use conditional validation only in the Helm chart's `values.schema.json`.
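+
+For illustration, a minimal sketch of such a schema as it would appear in the `ApplicationDefinition` (the `replicas` and `size` properties are hypothetical):
+
+```yaml
+openAPISchema: '{"title":"Chart Values","type":"object","properties":{"replicas":{"type":"integer","default":1},"size":{"type":"string","default":"10Gi"}}}'
+```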
+
+## Application Charts
+
+Each application chart in `packages/apps/<app-name>/` is a standard Helm chart:
+
+```text
+packages/apps/<app-name>/
+  Chart.yaml
+  Makefile
+  values.yaml
+  values.schema.json
+  templates/
+    <resource>.yaml
+```
+
+### Chart.yaml
+
+```yaml
+apiVersion: v2
+name: <app-name>
+description: <short description of the application>
+type: application
+version: 0.0.0
+appVersion: "1.0.0"
+```
+
+Use `version: 0.0.0` — the actual version is derived from the Git revision by FluxCD.
+
+### Makefile
+
+```makefile
+export NAME=<app-name>
+export NAMESPACE=external-<name>
+
+include ../../../scripts/package.mk
+```
+
+### values.schema.json
+
+Define the JSON Schema (draft-07) for the application values. This schema is used by Helm for validation at install time and can include conditional rules (`if`/`then`/`else`) that are not supported at the `ApplicationDefinition` level.
+
+## Bootstrap Manifest
+
+The `init.yaml` file creates two FluxCD resources that bootstrap the entire catalog:
+
+```yaml
+---
+apiVersion: source.toolkit.fluxcd.io/v1
+kind: GitRepository
+metadata:
+  name: external-apps
+  namespace: cozy-public
+spec:
+  interval: 1m0s
+  ref:
+    branch: main
+  timeout: 60s
+  url: https://github.com/<organization>/<repository>.git
+---
+apiVersion: helm.toolkit.fluxcd.io/v2
+kind: HelmRelease
+metadata:
+  name: external-apps
+  namespace: cozy-system
+spec:
+  interval: 5m
+  targetNamespace: cozy-system
+  chart:
+    spec:
+      chart: ./packages/core/platform
+      sourceRef:
+        kind: GitRepository
+        name: external-apps
+        namespace: cozy-public
+      reconcileStrategy: Revision
+```
+
+Apply it to your Cozystack cluster:
+
+```bash
+kubectl apply -f init.yaml
+```
+
+After FluxCD reconciles, the applications will appear in the Cozystack dashboard.
+
+## FluxCD Reference
+
+These FluxCD documents will help you understand the resources used in this guide:
+
+- [GitRepository](https://fluxcd.io/flux/components/source/gitrepositories/)
+- [HelmRelease](https://fluxcd.io/flux/components/helm/helmreleases/)
+- [HelmChart](https://fluxcd.io/flux/components/source/helmcharts/)
diff --git a/content/en/docs/v1.3/applications/foundationdb.md b/content/en/docs/v1.3/applications/foundationdb.md
new file mode 100644
index 00000000..8faec7cc
--- /dev/null
+++ b/content/en/docs/v1.3/applications/foundationdb.md
@@ -0,0 +1,205 @@
+---
+title: "FoundationDB"
+linkTitle: "FoundationDB"
+weight: 50
+---
+
+
+
+A managed FoundationDB service for Cozystack.
+
+## Overview
+
+FoundationDB is a distributed database designed to handle large volumes of structured data across clusters of commodity servers. It organizes data as an ordered key-value store and employs ACID transactions for all operations.
+
+This package provides a managed FoundationDB cluster deployment using the FoundationDB Kubernetes Operator.
+
+## Features
+
+- **High Availability**: Multi-instance deployment with automatic failover
+- **ACID Transactions**: Full ACID transaction support across the cluster
+- **Scalable**: Easily scale storage and compute resources
+- **Backup Integration**: Optional S3-compatible backup storage
+- **Monitoring**: Built-in monitoring and alerting through WorkloadMonitor
+- **Flexible Configuration**: Support for custom FoundationDB parameters
+
+## Configuration
+
+### Basic Configuration
+
+```yaml
+# Cluster process configuration
+cluster:
+  version: "7.3.63"
+  processCounts:
+    storage: 3              # Number of storage processes (determines cluster size)
+    stateless: -1           # Automatically calculated
+    cluster_controller: 1
+  faultDomain:
+    key: "kubernetes.io/hostname"
+    valueFrom: "spec.nodeName"
+```
+
+### Storage
+
+```yaml
+storage:
+  size: "16Gi"        # Storage size per instance
+  storageClass: ""    # Storage class (optional)
+```
+
+### Resources
+
+```yaml
+# Use preset sizing
+resourcesPreset: "medium" # small, medium, large, xlarge, 2xlarge
+
+# Or custom resource configuration
+resources:
+  cpu: "2000m"
+  memory: "4Gi"
+```
+
+### Backup (Optional)
+
+```yaml
+backup:
+  enabled: true
+  s3:
+    bucket: "my-fdb-backups"
+    endpoint: "https://s3.amazonaws.com"
+    region: "us-east-1"
+    credentials:
+      accessKeyId: "AKIA..."
+      secretAccessKey: "..."
+  retentionPolicy: "7d"
+```
+
+### Advanced Configuration
+
+```yaml
+# Custom FoundationDB parameters
+customParameters:
+  - "knob_disable_posix_kernel_aio=1"
+
+# Image type (unified is default and recommended for new deployments)
+imageType: "unified"
+
+# Enable automatic pod replacements
+automaticReplacements: true
+
+# Security context configuration
+securityContext:
+  runAsUser: 4059
+  runAsGroup: 4059
+```
+
+## Prerequisites
+
+- FoundationDB Operator must be installed in the cluster
+- Sufficient storage and compute resources
+- For backups: S3-compatible storage credentials
+
+## Deployment
+
+1. Install the FoundationDB operator (system package)
+2. Deploy this application package with your desired configuration
+3. The cluster will be automatically provisioned and configured
+
+## Monitoring
+
+This package includes WorkloadMonitor integration for cluster health monitoring and resource tracking. Monitoring can be disabled by setting:
+
+```yaml
+monitoring:
+  enabled: false
+```
+
+## Security
+
+- All containers run with restricted security contexts
+- No privilege escalation allowed
+- Read-only root filesystem where possible
+- Custom security context configurations supported
+
+## Fault Tolerance
+
+FoundationDB is designed for high availability:
+- Automatic failure detection and recovery
+- Data replication across instances
+- Configurable fault domains for rack/zone awareness
+- Transaction log redundancy
+
+The included `WorkloadMonitor` is automatically configured based on the `cluster.redundancyMode` value. It sets the `minReplicas` property on the `WorkloadMonitor` resource to ensure the cluster's health status accurately reflects its fault tolerance level. The number of tolerated failures is as follows:
+- `single`: 0 failures
+- `double`: 1 failure
+- `triple` and datacenter-aware modes: 2 failures
+
+For example, with the default configuration (`redundancyMode: double` and 3 storage pods), `minReplicas` will be set to 2.
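+
+For instance, to tolerate two simultaneous failures you could raise both the redundancy mode and the storage process count (the values shown are illustrative):
+
+```yaml
+cluster:
+  redundancyMode: triple
+  processCounts:
+    storage: 5
+```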
+
+## Performance Considerations
+
+- Use SSD storage for better performance
+- Consider dedicating nodes for storage processes
+- Monitor cluster metrics for optimization opportunities
+- Scale storage and stateless processes based on workload
+
+## Support
+
+For issues related to FoundationDB itself, refer to the [FoundationDB documentation](https://apple.github.io/foundationdb/).
+
+For Cozystack-specific issues, consult the Cozystack documentation or support channels.
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------------------------ |
+| `cluster` | Cluster configuration. | `object` | `{}` |
+| `cluster.processCounts` | Process counts for different roles. | `object` | `{}` |
+| `cluster.processCounts.stateless` | Number of stateless processes (-1 for automatic). | `int` | `-1` |
+| `cluster.processCounts.storage` | Number of storage processes (determines cluster size). | `int` | `3` |
+| `cluster.processCounts.cluster_controller` | Number of cluster controller processes. | `int` | `1` |
+| `cluster.version` | Version of FoundationDB to use. | `string` | `7.3.63` |
+| `cluster.redundancyMode` | Database redundancy mode (single, double, triple, three_datacenter, three_datacenter_fallback). | `string` | `double` |
+| `cluster.storageEngine` | Storage engine (ssd-2, ssd-redwood-v1, ssd-rocksdb-v1, memory). | `string` | `ssd-2` |
+| `cluster.faultDomain` | Fault domain configuration. | `object` | `{}` |
+| `cluster.faultDomain.key` | Fault domain key. | `string` | `kubernetes.io/hostname` |
+| `cluster.faultDomain.valueFrom` | Fault domain value source. | `string` | `spec.nodeName` |
+| `storage` | Storage configuration. | `object` | `{}` |
+| `storage.size` | Size of persistent volumes for each instance. | `quantity` | `16Gi` |
+| `storage.storageClass` | Storage class (if not set, uses cluster default). | `string` | `""` |
+| `resources` | Explicit CPU and memory configuration for each FoundationDB instance. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each instance. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each instance. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `medium` |
+| `backup` | Backup configuration. | `object` | `{}` |
+| `backup.enabled` | Enable backups. | `bool` | `false` |
+| `backup.s3` | S3 configuration for backups. | `object` | `{}` |
+| `backup.s3.bucket` | S3 bucket name. | `string` | `""` |
+| `backup.s3.endpoint` | S3 endpoint URL. | `string` | `""` |
+| `backup.s3.region` | S3 region. | `string` | `us-east-1` |
+| `backup.s3.credentials` | S3 credentials. | `object` | `{}` |
+| `backup.s3.credentials.accessKeyId` | S3 access key ID. | `string` | `""` |
+| `backup.s3.credentials.secretAccessKey` | S3 secret access key. | `string` | `""` |
+| `backup.retentionPolicy` | Retention policy for backups. | `string` | `7d` |
+| `monitoring` | Monitoring configuration. | `object` | `{}` |
+| `monitoring.enabled` | Enable WorkloadMonitor integration. | `bool` | `true` |
+
+
+### FoundationDB configuration
+
+| Name | Description | Type | Value |
+| ---------------------------- | ------------------------------------------ | ---------- | --------- |
+| `customParameters` | Custom parameters to pass to FoundationDB. | `[]string` | `[]` |
+| `imageType` | Container image deployment type. | `string` | `unified` |
+| `securityContext` | Security context for containers. | `object` | `{}` |
+| `securityContext.runAsUser` | User ID to run the container. | `int` | `4059` |
+| `securityContext.runAsGroup` | Group ID to run the container. | `int` | `4059` |
+| `automaticReplacements` | Enable automatic pod replacements. | `bool` | `true` |
+
diff --git a/content/en/docs/v1.3/applications/harbor.md b/content/en/docs/v1.3/applications/harbor.md
new file mode 100644
index 00000000..c21a59dc
--- /dev/null
+++ b/content/en/docs/v1.3/applications/harbor.md
@@ -0,0 +1,58 @@
+---
+title: "Managed Harbor Container Registry"
+linkTitle: "Harbor Container Registry"
+weight: 50
+---
+
+
+
+
+Harbor is an open-source trusted cloud-native registry project that stores, signs, and scans content.
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| -------------- | -------------------------------------------------------------------------------------------- | -------- | ----- |
+| `host` | Hostname for external access to Harbor (defaults to 'harbor' subdomain for the tenant host). | `string` | `""` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+
+
+### Component configuration
+
+| Name | Description | Type | Value |
+| ----------------------------- | -------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `core` | Core API server configuration. | `object` | `{}` |
+| `core.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `core.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
+| `core.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
+| `core.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
+| `registry` | Container image registry configuration. | `object` | `{}` |
+| `registry.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `registry.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
+| `registry.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
+| `registry.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
+| `jobservice` | Background job service configuration. | `object` | `{}` |
+| `jobservice.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `jobservice.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
+| `jobservice.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
+| `jobservice.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+| `trivy` | Trivy vulnerability scanner configuration. | `object` | `{}` |
+| `trivy.enabled` | Enable or disable the vulnerability scanner. | `bool` | `true` |
+| `trivy.size` | Persistent Volume size for vulnerability database cache. | `quantity` | `5Gi` |
+| `trivy.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `trivy.resources.cpu` | Number of CPU cores allocated. | `quantity` | `""` |
+| `trivy.resources.memory` | Amount of memory allocated. | `quantity` | `""` |
+| `trivy.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+| `database` | PostgreSQL database configuration. | `object` | `{}` |
+| `database.size` | Persistent Volume size for database storage. | `quantity` | `5Gi` |
+| `database.replicas` | Number of database instances. | `int` | `2` |
+| `redis` | Redis cache configuration. | `object` | `{}` |
+| `redis.size` | Persistent Volume size for cache storage. | `quantity` | `1Gi` |
+| `redis.replicas` | Number of Redis replicas. | `int` | `2` |
+
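+An example overriding a few of the parameters above; the hostname and sizes are illustrative:
+
+```yaml
+host: harbor.example.org
+trivy:
+  enabled: true
+  size: 5Gi
+database:
+  replicas: 2
+redis:
+  replicas: 2
+```
+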
diff --git a/content/en/docs/v1.3/applications/kafka.md b/content/en/docs/v1.3/applications/kafka.md
new file mode 100644
index 00000000..08012650
--- /dev/null
+++ b/content/en/docs/v1.3/applications/kafka.md
@@ -0,0 +1,108 @@
+---
+title: "Managed Kafka Service"
+linkTitle: "Kafka"
+weight: 50
+aliases:
+ - /docs/reference/applications/kafka
+ - /docs/v1.3/reference/applications/kafka
+---
+
+
+
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ---------- | ------------------------------------------------ | ------ | ------- |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| ---------------------- | --------------------- | ---------- | ----- |
+| `topics` | Topics configuration. | `[]object` | `[]` |
+| `topics[i].name` | Topic name. | `string` | `""` |
+| `topics[i].partitions` | Number of partitions. | `int` | `0` |
+| `topics[i].replicas` | Number of replicas. | `int` | `0` |
+| `topics[i].config` | Topic configuration. | `object` | `{}` |
+
+
+### Kafka configuration
+
+| Name | Description | Type | Value |
+| ------------------------ | -------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `kafka` | Kafka configuration. | `object` | `{}` |
+| `kafka.replicas` | Number of Kafka replicas. | `int` | `3` |
+| `kafka.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `kafka.resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `kafka.resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `kafka.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
+| `kafka.size` | Persistent Volume size for Kafka. | `quantity` | `10Gi` |
+| `kafka.storageClass` | StorageClass used to store the Kafka data. | `string` | `""` |
+
+
+### ZooKeeper configuration
+
+| Name | Description | Type | Value |
+| ---------------------------- | -------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `zookeeper` | ZooKeeper configuration. | `object` | `{}` |
+| `zookeeper.replicas` | Number of ZooKeeper replicas. | `int` | `3` |
+| `zookeeper.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `zookeeper.resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `zookeeper.resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `zookeeper.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
+| `zookeeper.size` | Persistent Volume size for ZooKeeper. | `quantity` | `5Gi` |
+| `zookeeper.storageClass` | StorageClass used to store the ZooKeeper data. | `string` | `""` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+  cpu: 4000m
+  memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
+
+### topics
+
+```yaml
+topics:
+  - name: Results
+    partitions: 1
+    replicas: 3
+    config:
+      min.insync.replicas: 2
+  - name: Orders
+    config:
+      cleanup.policy: compact
+      segment.ms: 3600000
+      max.compaction.lag.ms: 5400000
+      min.insync.replicas: 2
+    partitions: 1
+    replicas: 3
+```
diff --git a/content/en/docs/v1.3/applications/mariadb.md b/content/en/docs/v1.3/applications/mariadb.md
new file mode 100644
index 00000000..23acdb56
--- /dev/null
+++ b/content/en/docs/v1.3/applications/mariadb.md
@@ -0,0 +1,176 @@
+---
+title: "Managed MariaDB Service"
+linkTitle: "MariaDB"
+weight: 50
+aliases:
+ - /docs/reference/applications/mariadb
+ - /docs/v1.3/reference/applications/mariadb
+---
+
+
+
+
+The Managed MariaDB Service offers a powerful and widely used relational database solution.
+This service allows you to create and manage a replicated MariaDB cluster seamlessly.
+
+## Deployment Details
+
+This managed service is controlled by mariadb-operator, ensuring efficient management and seamless operation.
+
+- Docs: https://mariadb.com/kb/en/documentation/
+- GitHub: https://github.com/mariadb-operator/mariadb-operator
+
+## HowTos
+
+### How to switch master/slave replica
+
+```bash
+kubectl edit mariadb
+```
+
+Update the primary pod index:
+
+```yaml
+spec:
+  replication:
+    primary:
+      podIndex: 1
+```
+
+Check the replication status:
+
+```bash
+NAME READY STATUS PRIMARY POD AGE
+ True Running app-db1-1 41d
+```
+
+### How to restore a backup
+
+Find a snapshot:
+
+```bash
+restic -r s3:s3.example.org/mariadb-backups/database_name snapshots
+```
+
+
+Restore it:
+
+```bash
+restic -r s3:s3.example.org/mariadb-backups/database_name restore latest --target /tmp/
+```
+
+For more details, see:
+
+- https://blog.aenix.io/restic-effective-backup-from-stdin-4bc1e8f083c1
+
+### Known issues
+
+- **Replication can't be finished, with various errors**
+- **Replication can't be finished if the `binlog` has been purged**
+
+  mariadb-operator does not yet use `mariadbbackup` to bootstrap a node, so until that feature is implemented, follow these manual steps to fix it:
+ https://github.com/mariadb-operator/mariadb-operator/issues/141#issuecomment-1804760231
+
+- **Corrupted indices**
+
+  Sometimes indices can become corrupted on the master replica; you can recover them from a slave:
+
+  ```bash
+  mysqldump -h -P 3306 -u -p --column-statistics=0 > ~/tmp/fix-table.sql
+  mysql -h -P 3306 -u -p < ~/tmp/fix-table.sql
+ ```
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | --------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `replicas` | Number of MariaDB replicas. | `int` | `2` |
+| `resources` | Explicit CPU and memory configuration for each MariaDB replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+| `version` | MariaDB major.minor version to deploy. | `string` | `v11.8` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| -------------------------------- | ---------------------------------------- | ------------------- | ----- |
+| `users` | Users configuration map. | `map[string]object` | `{}` |
+| `users[name].password` | Password for the user. | `string` | `""` |
+| `users[name].maxUserConnections` | Maximum number of connections. | `int` | `0` |
+| `databases` | Databases configuration map. | `map[string]object` | `{}` |
+| `databases[name].roles` | Roles assigned to users. | `object` | `{}` |
+| `databases[name].roles.admin` | List of users with admin privileges. | `[]string` | `[]` |
+| `databases[name].roles.readonly` | List of users with read-only privileges. | `[]string` | `[]` |
+
+
+### Backup parameters
+
+| Name | Description | Type | Value |
+| ------------------------ | ----------------------------------------------- | -------- | ------------------------------------------------------ |
+| `backup` | Backup configuration. | `object` | `{}` |
+| `backup.enabled` | Enable regular backups (default: false). | `bool` | `false` |
+| `backup.s3Region` | AWS S3 region where backups are stored. | `string` | `us-east-1` |
+| `backup.s3Bucket` | S3 bucket used for storing backups. | `string` | `s3.example.org/mariadb-backups` |
+| `backup.schedule` | Cron schedule for automated backups. | `string` | `0 2 * * *` |
+| `backup.cleanupStrategy` | Retention strategy for cleaning up old backups. | `string` | `--keep-last=3 --keep-daily=3 --keep-within-weekly=1m` |
+| `backup.s3AccessKey` | Access key for S3 authentication. | `string` | `` |
+| `backup.s3SecretKey` | Secret key for S3 authentication. | `string` | `` |
+| `backup.resticPassword` | Password for Restic backup encryption. | `string` | `` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+  cpu: 4000m
+  memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
+
+### users
+
+```yaml
+users:
+  user1:
+    maxUserConnections: 1000
+    password: hackme
+  user2:
+    maxUserConnections: 1000
+    password: hackme
+```
+
+
+### databases
+
+```yaml
+databases:
+  myapp1:
+    roles:
+      admin:
+        - user1
+      readonly:
+        - user2
+```
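+
+### backup
+
+A minimal sketch of enabling regular backups with the parameters from the table above; the bucket, keys, and password are placeholders:
+
+```yaml
+backup:
+  enabled: true
+  s3Region: us-east-1
+  s3Bucket: s3.example.org/mariadb-backups
+  schedule: "0 2 * * *"
+  cleanupStrategy: "--keep-last=3 --keep-daily=3 --keep-within-weekly=1m"
+  s3AccessKey: <access-key>
+  s3SecretKey: <secret-key>
+  resticPassword: <password>
+```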
diff --git a/content/en/docs/v1.3/applications/mongodb.md b/content/en/docs/v1.3/applications/mongodb.md
new file mode 100644
index 00000000..1407b6bd
--- /dev/null
+++ b/content/en/docs/v1.3/applications/mongodb.md
@@ -0,0 +1,124 @@
+---
+title: "Managed MongoDB Service"
+linkTitle: "MongoDB"
+weight: 50
+aliases:
+ - /docs/reference/applications/mongodb
+ - /docs/v1.3/reference/applications/mongodb
+---
+
+
+
+
+MongoDB is a popular document-oriented NoSQL database known for its flexibility and scalability.
+The Managed MongoDB Service provides a self-healing replicated cluster managed by the Percona Operator for MongoDB.
+
+## Deployment Details
+
+This managed service is controlled by the Percona Operator for MongoDB, ensuring efficient management and seamless operation.
+
+- Docs:
+- Github:
+
+## Deployment Modes
+
+### Replica Set Mode (default)
+
+By default, MongoDB deploys as a replica set with the specified number of replicas.
+This mode is suitable for most use cases requiring high availability.
+
+### Sharded Cluster Mode
+
+Set `sharding: true` to enable horizontal scaling across multiple shards.
+Each shard is a replica set, and mongos routers handle query routing.
+
+## Notes
+
+### External Access
+
+When `external: true` is enabled:
+- **Replica Set mode**: Traffic is load-balanced across all replica set members. This works well for read operations, but write operations require connecting to the primary. MongoDB drivers handle primary discovery automatically using the replica set connection string.
+- **Sharded mode**: Traffic is routed through mongos routers, which handle both reads and writes correctly.
+
+### Credentials
+
+On first install, the credentials secret will be empty until the Percona operator initializes the cluster.
+Run `helm upgrade` after MongoDB is ready to populate the credentials secret with the actual password.
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | --------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `replicas` | Number of MongoDB replicas in replica set. | `int` | `3` |
+| `resources` | Explicit CPU and memory configuration for each MongoDB replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
+| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+| `version` | MongoDB major version to deploy. | `string` | `v8` |
+
+
+### Sharding configuration
+
+| Name | Description | Type | Value |
+| ----------------------------------- | ------------------------------------------------------------------ | ---------- | ------- |
+| `sharding` | Enable sharded cluster mode. When disabled, deploys a replica set. | `bool` | `false` |
+| `shardingConfig` | Configuration for sharded cluster mode. | `object` | `{}` |
+| `shardingConfig.configServers` | Number of config server replicas. | `int` | `3` |
+| `shardingConfig.configServerSize` | PVC size for config servers. | `quantity` | `3Gi` |
+| `shardingConfig.mongos` | Number of mongos router replicas. | `int` | `2` |
+| `shardingConfig.shards` | List of shard configurations. | `[]object` | `[...]` |
+| `shardingConfig.shards[i].name` | Shard name. | `string` | `""` |
+| `shardingConfig.shards[i].replicas` | Number of replicas in this shard. | `int` | `0` |
+| `shardingConfig.shards[i].size` | PVC size for this shard. | `quantity` | `""` |
+
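+For illustration, a sharded cluster could be configured as follows; the shard names and sizes are arbitrary:
+
+```yaml
+sharding: true
+shardingConfig:
+  configServers: 3
+  configServerSize: 3Gi
+  mongos: 2
+  shards:
+    - name: shard0
+      replicas: 3
+      size: 10Gi
+    - name: shard1
+      replicas: 3
+      size: 10Gi
+```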
+
+### Users configuration
+
+| Name | Description | Type | Value |
+| ---------------------- | -------------------------------------------------- | ------------------- | ----- |
+| `users` | Users configuration map. | `map[string]object` | `{}` |
+| `users[name].password` | Password for the user (auto-generated if omitted). | `string` | `""` |
+
+
+### Databases configuration
+
+| Name | Description | Type | Value |
+| -------------------------------- | ---------------------------------------------------------- | ------------------- | ----- |
+| `databases` | Databases configuration map. | `map[string]object` | `{}` |
+| `databases[name].roles` | Roles assigned to users. | `object` | `{}` |
+| `databases[name].roles.admin` | List of users with admin privileges (readWrite + dbAdmin). | `[]string` | `[]` |
+| `databases[name].roles.readonly` | List of users with read-only privileges. | `[]string` | `[]` |
+
+
+### Backup parameters
+
+| Name | Description | Type | Value |
+| ------------------------ | ------------------------------------------------------ | -------- | ----------------------------------- |
+| `backup` | Backup configuration. | `object` | `{}` |
+| `backup.enabled` | Enable regular backups. | `bool` | `false` |
+| `backup.schedule` | Cron schedule for automated backups. | `string` | `0 2 * * *` |
+| `backup.retentionPolicy` | Retention policy (e.g. "30d"). | `string` | `30d` |
+| `backup.destinationPath` | Destination path for backups (e.g. s3://bucket/path/). | `string` | `s3://bucket/path/to/folder/` |
+| `backup.endpointURL` | S3 endpoint URL for uploads. | `string` | `http://minio-gateway-service:9000` |
+| `backup.s3AccessKey` | Access key for S3 authentication. | `string` | `""` |
+| `backup.s3SecretKey` | Secret key for S3 authentication. | `string` | `""` |
+
+
+### Bootstrap (recovery) parameters
+
+| Name | Description | Type | Value |
+| ------------------------ | --------------------------------------------------------- | -------- | ------- |
+| `bootstrap` | Bootstrap configuration. | `object` | `{}` |
+| `bootstrap.enabled` | Whether to restore from a backup. | `bool` | `false` |
+| `bootstrap.recoveryTime` | Timestamp for point-in-time recovery; empty means latest. | `string` | `""` |
+| `bootstrap.backupName` | Name of backup to restore from. | `string` | `""` |
+
diff --git a/content/en/docs/v1.3/applications/nats.md b/content/en/docs/v1.3/applications/nats.md
new file mode 100644
index 00000000..daf1dcb1
--- /dev/null
+++ b/content/en/docs/v1.3/applications/nats.md
@@ -0,0 +1,74 @@
+---
+title: "Managed NATS Service"
+linkTitle: "NATS"
+weight: 50
+aliases:
+ - /docs/reference/applications/nats
+ - /docs/v1.3/reference/applications/nats
+---
+
+
+
+
+NATS is an open-source, simple, secure, and high-performance messaging system.
+It provides a data layer for cloud native applications, IoT messaging, and microservices architectures.
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------ | ---------- | ------- |
+| `replicas` | Number of replicas. | `int` | `2` |
+| `resources` | Explicit CPU and memory configuration for each NATS replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| ---------------------- | ------------------------------------------------------------- | ------------------- | ------ |
+| `users` | Users configuration map. | `map[string]object` | `{}` |
+| `users[name].password` | Password for the user. | `string` | `""` |
+| `jetstream` | Jetstream configuration. | `object` | `{}` |
+| `jetstream.enabled` | Enable or disable Jetstream for persistent messaging in NATS. | `bool` | `true` |
+| `jetstream.size` | Jetstream persistent storage size. | `quantity` | `10Gi` |
+| `config` | NATS configuration. | `object` | `{}` |
+| `config.merge` | Additional configuration to merge into NATS config. | `*object` | `{}` |
+| `config.resolver` | Additional resolver configuration to merge into NATS config. | `*object` | `{}` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+  cpu: 4000m
+  memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
+
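+### users and jetstream
+
+A short example combining the application-specific parameters above; the user name, password, and merged server option are illustrative:
+
+```yaml
+users:
+  user1:
+    password: hackme
+jetstream:
+  enabled: true
+  size: 10Gi
+config:
+  merge:
+    max_payload: 4MB
+```
+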
diff --git a/content/en/docs/v1.3/applications/openbao.md b/content/en/docs/v1.3/applications/openbao.md
new file mode 100644
index 00000000..367aaf19
--- /dev/null
+++ b/content/en/docs/v1.3/applications/openbao.md
@@ -0,0 +1,38 @@
+---
+title: "Managed OpenBAO Service"
+linkTitle: "OpenBAO"
+weight: 50
+---
+
+
+
+
+OpenBAO is an open-source secrets management solution forked from HashiCorp Vault.
+It provides identity-based secrets and encryption management for cloud infrastructure.
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `replicas` | Number of OpenBAO replicas. HA with Raft is automatically enabled when replicas > 1. Switching between standalone (file storage) and HA (Raft storage) modes requires data migration. | `int` | `1` |
+| `resources` | Explicit CPU and memory configuration for each OpenBAO replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
+| `size` | Persistent Volume Claim size for data storage. | `quantity` | `10Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| ---- | -------------------------- | ------ | ------ |
+| `ui` | Enable the OpenBAO web UI. | `bool` | `true` |
+
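+A minimal example of switching to HA mode with Raft; the values are illustrative, and note the data-migration caveat described for `replicas` above:
+
+```yaml
+replicas: 3
+size: 10Gi
+ui: true
+```
+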
diff --git a/content/en/docs/v1.3/applications/postgres.md b/content/en/docs/v1.3/applications/postgres.md
new file mode 100644
index 00000000..22b1fbcd
--- /dev/null
+++ b/content/en/docs/v1.3/applications/postgres.md
@@ -0,0 +1,215 @@
+---
+title: "Managed PostgreSQL Service"
+linkTitle: "PostgreSQL"
+weight: 50
+aliases:
+ - /docs/reference/applications/postgres
+ - /docs/v1.3/reference/applications/postgres
+---
+
+
+
+
+PostgreSQL is currently the leading choice among relational databases, known for its robust features and performance.
+The Managed PostgreSQL Service takes advantage of platform-side implementation to provide a self-healing replicated cluster.
+This cluster is efficiently managed using the highly acclaimed CloudNativePG operator, which has gained popularity within the community.
+
+## Deployment Details
+
+This managed service is controlled by the CloudNativePG operator, ensuring efficient management and seamless operation.
+
+- Docs:
+- Github:
+
+## Operations
+
+### How to enable backups
+
+To back up a PostgreSQL application, an external S3-compatible storage is required.
+
+To start regular backups, update the application: set `backup.enabled` to `true` and fill in the S3 path and credentials in the other `backup.*` parameters:
+
+```yaml
+## @param backup.enabled Enable regular backups
+## @param backup.schedule Cron schedule for automated backups
+## @param backup.retentionPolicy Retention policy
+## @param backup.destinationPath Path to store the backup (i.e. s3://bucket/path/to/folder)
+## @param backup.endpointURL S3 Endpoint used to upload data to the cloud
+## @param backup.s3AccessKey Access key for S3, used for authentication
+## @param backup.s3SecretKey Secret key for S3, used for authentication
+backup:
+  enabled: false
+  retentionPolicy: 30d
+  destinationPath: s3://bucket/path/to/folder/
+  endpointURL: http://minio-gateway-service:9000
+  schedule: "0 2 * * * *"
+  s3AccessKey: oobaiRus9pah8PhohL1ThaeTa4UVa7gu
+  s3SecretKey: ju3eum4dekeich9ahM1te8waeGai0oog
+```
+
+### How to recover a backup
+
+CloudNativePG supports point-in-time recovery (PITR).
+A backup is recovered by creating a new database instance and restoring the data into it.
+
+Create a new PostgreSQL application with a different name, but identical configuration.
+Set `bootstrap.enabled` to `true` and fill in the name of the database instance to recover from and the recovery time:
+
+```yaml
+## @param bootstrap.enabled Restore database cluster from a backup
+## @param bootstrap.recoveryTime Timestamp (PITR) up to which recovery will proceed, expressed in RFC 3339 format. If left empty, will restore latest
+## @param bootstrap.oldName Name of database cluster before deleting
+##
+bootstrap:
+  enabled: false
+  recoveryTime: "" # leave empty for latest or exact timestamp; example: 2020-11-26 15:22:00.00000+00
+  oldName: ""
+```
+
+### How to switch primary/secondary replica
+
+See:
+
+-
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------ | ---------- | ------- |
+| `replicas` | Number of Postgres replicas. | `int` | `2` |
+| `resources` | Explicit CPU and memory configuration for each PostgreSQL replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `micro` |
+| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+| `version` | PostgreSQL major version to deploy. | `string` | `v18` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| --------------------------------------- | ---------------------------------------------------------------- | -------- | ----- |
+| `postgresql` | PostgreSQL server configuration. | `object` | `{}` |
+| `postgresql.parameters` | PostgreSQL server parameters. | `object` | `{}` |
+| `postgresql.parameters.max_connections` | Maximum number of concurrent connections to the database server. | `int` | `100` |
+
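+For example, raising the connection limit would look like this (the value is illustrative):
+
+```yaml
+postgresql:
+  parameters:
+    max_connections: 200
+```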
+
+### Quorum-based synchronous replication
+
+| Name | Description | Type | Value |
+| ------------------------ | ---------------------------------------------------------------------------------- | -------- | ----- |
+| `quorum` | Quorum configuration for synchronous replication. | `object` | `{}` |
+| `quorum.minSyncReplicas` | Minimum number of synchronous replicas required for commit. | `int` | `0` |
+| `quorum.maxSyncReplicas` | Maximum number of synchronous replicas allowed (must be less than total replicas). | `int` | `0` |
+
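+For a three-replica cluster, quorum-based synchronous replication could be configured as follows; the exact values depend on your durability requirements:
+
+```yaml
+replicas: 3
+quorum:
+  minSyncReplicas: 1
+  maxSyncReplicas: 2
+```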
+
+### Users configuration
+
+| Name | Description | Type | Value |
+| ------------------------- | -------------------------------------------- | ------------------- | ------- |
+| `users` | Users configuration map. | `map[string]object` | `{}` |
+| `users[name].password` | Password for the user. | `string` | `""` |
+| `users[name].replication` | Whether the user has replication privileges. | `bool` | `false` |
+
+
+### Databases configuration
+
+| Name | Description | Type | Value |
+| -------------------------------- | ---------------------------------------- | ------------------- | ----- |
+| `databases` | Databases configuration map. | `map[string]object` | `{}` |
+| `databases[name].roles` | Roles assigned to users. | `object` | `{}` |
+| `databases[name].roles.admin` | List of users with admin privileges. | `[]string` | `[]` |
+| `databases[name].roles.readonly` | List of users with read-only privileges. | `[]string` | `[]` |
+| `databases[name].extensions` | List of enabled PostgreSQL extensions. | `[]string` | `[]` |
+
+
+### Backup parameters
+
+| Name | Description | Type | Value |
+| ------------------------ | ------------------------------------------------------ | -------- | ----------------------------------- |
+| `backup` | Backup configuration. | `object` | `{}` |
+| `backup.enabled` | Enable regular backups. | `bool` | `false` |
+| `backup.schedule` | Cron schedule for automated backups. | `string` | `0 2 * * * *` |
+| `backup.retentionPolicy` | Retention policy (e.g. "30d"). | `string` | `30d` |
+| `backup.destinationPath` | Destination path for backups (e.g. s3://bucket/path/). | `string` | `s3://bucket/path/to/folder/` |
+| `backup.endpointURL` | S3 endpoint URL for uploads. | `string` | `http://minio-gateway-service:9000` |
+| `backup.s3AccessKey` | Access key for S3 authentication. | `string` | `` |
+| `backup.s3SecretKey` | Secret key for S3 authentication. | `string` | `` |
+
+
+### Bootstrap (recovery) parameters
+
+| Name | Description | Type | Value |
+| ------------------------ | ------------------------------------------------------------------- | -------- | ------- |
+| `bootstrap` | Bootstrap configuration. | `object` | `{}` |
+| `bootstrap.enabled` | Whether to restore from a backup. | `bool` | `false` |
+| `bootstrap.recoveryTime` | Timestamp (RFC3339) for point-in-time recovery; empty means latest. | `string` | `""` |
+| `bootstrap.oldName` | Previous cluster name before deletion. | `string` | `""` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+  cpu: 4000m
+  memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
+
+### users
+
+```yaml
+users:
+  user1:
+    password: strongpassword
+  user2:
+    password: hackme
+  airflow:
+    password: qwerty123
+  debezium:
+    replication: true
+```
+
+### databases
+
+```yaml
+databases:
+  myapp:
+    roles:
+      admin:
+        - user1
+        - debezium
+      readonly:
+        - user2
+  airflow:
+    roles:
+      admin:
+        - airflow
+    extensions:
+      - hstore
+```
diff --git a/content/en/docs/v1.3/applications/qdrant.md b/content/en/docs/v1.3/applications/qdrant.md
new file mode 100644
index 00000000..faed70d7
--- /dev/null
+++ b/content/en/docs/v1.3/applications/qdrant.md
@@ -0,0 +1,63 @@
+---
+title: "Managed Qdrant Service"
+linkTitle: "Qdrant"
+weight: 50
+---
+
+
+
+
+Qdrant is a high-performance vector database and similarity search engine designed for AI and machine learning applications. It provides efficient storage and retrieval of high-dimensional vectors with advanced filtering capabilities, making it ideal for recommendation systems, semantic search, and RAG (Retrieval-Augmented Generation) applications.
+
+## Deployment Details
+
+The service deploys Qdrant as a StatefulSet, with cluster mode enabled automatically when multiple replicas are configured.
+
+- Docs: https://qdrant.tech/documentation/
+- GitHub: https://github.com/qdrant/qdrant
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | -------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `replicas` | Number of Qdrant replicas. Cluster mode is automatically enabled when replicas > 1. | `int` | `1` |
+| `resources` | Explicit CPU and memory configuration for each Qdrant replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `small` |
+| `size` | Persistent Volume Claim size available for vector data storage. | `quantity` | `10Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+  cpu: 4000m
+  memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
diff --git a/content/en/docs/v1.3/applications/rabbitmq.md b/content/en/docs/v1.3/applications/rabbitmq.md
new file mode 100644
index 00000000..589524fe
--- /dev/null
+++ b/content/en/docs/v1.3/applications/rabbitmq.md
@@ -0,0 +1,80 @@
+---
+title: "Managed RabbitMQ Service"
+linkTitle: "RabbitMQ"
+weight: 50
+aliases:
+ - /docs/reference/applications/rabbitmq
+ - /docs/v1.3/reference/applications/rabbitmq
+---
+
+
+
+
+RabbitMQ is a robust message broker that plays a crucial role in modern distributed systems. Our Managed RabbitMQ Service simplifies the deployment and management of RabbitMQ clusters, ensuring reliability and scalability for your messaging needs.
+
+## Deployment Details
+
+The service utilizes the official RabbitMQ operator, ensuring reliable and seamless operation of your RabbitMQ instances.
+
+- Github: https://github.com/rabbitmq/cluster-operator/
+- Docs: https://www.rabbitmq.com/kubernetes/operator/operator-overview.html
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | ---------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `replicas` | Number of RabbitMQ replicas. | `int` | `3` |
+| `resources` | Explicit CPU and memory configuration for each RabbitMQ replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+| `version` | RabbitMQ major.minor version to deploy. | `string` | `v4.2` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| ----------------------------- | -------------------------------- | ------------------- | ----- |
+| `users` | Users configuration map. | `map[string]object` | `{}` |
+| `users[name].password` | Password for the user. | `string` | `""` |
+| `vhosts` | Virtual hosts configuration map. | `map[string]object` | `{}` |
+| `vhosts[name].roles` | Virtual host roles list. | `object` | `{}` |
+| `vhosts[name].roles.admin` | List of admin users. | `[]string` | `[]` |
+| `vhosts[name].roles.readonly` | List of readonly users. | `[]string` | `[]` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+  cpu: 4000m
+  memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `100m` | `128Mi` |
+| `micro` | `250m` | `256Mi` |
+| `small` | `500m` | `512Mi` |
+| `medium` | `500m` | `1Gi` |
+| `large` | `1` | `2Gi` |
+| `xlarge` | `2` | `4Gi` |
+| `2xlarge` | `4` | `8Gi` |
+
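+### users and vhosts
+
+An example combining the application-specific parameters above; the user names and vhost name are illustrative:
+
+```yaml
+users:
+  user1:
+    password: hackme
+  user2:
+    password: hackme
+vhosts:
+  myapp:
+    roles:
+      admin:
+        - user1
+      readonly:
+        - user2
+```
+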
diff --git a/content/en/docs/v1.3/applications/redis.md b/content/en/docs/v1.3/applications/redis.md
new file mode 100644
index 00000000..4cbc69ab
--- /dev/null
+++ b/content/en/docs/v1.3/applications/redis.md
@@ -0,0 +1,74 @@
+---
+title: "Managed Redis Service"
+linkTitle: "Redis"
+weight: 50
+aliases:
+ - /docs/reference/applications/redis
+ - /docs/v1.3/reference/applications/redis
+---
+
+
+
+
+Redis is a highly versatile and blazing-fast in-memory data store and cache that can significantly boost the performance of your applications. Managed Redis Service offers a hassle-free solution for deploying and managing Redis clusters, ensuring that your data is always available and responsive.
+
+## Deployment Details
+
+The service utilizes the Spotahome Redis Operator for efficient management and orchestration of Redis clusters.
+
+- Docs: https://redis.io/docs/
+- GitHub: https://github.com/spotahome/redis-operator
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `replicas` | Number of Redis replicas. | `int` | `2` |
+| `resources` | Explicit CPU and memory configuration for each Redis replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+| `size` | Persistent Volume Claim size available for application data. | `quantity` | `1Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+| `version` | Redis major version to deploy. | `string` | `v8` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| ------------- | --------------------------- | ------ | ------ |
+| `authEnabled` | Enable password generation. | `bool` | `true` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+  cpu: 4000m
+  memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
diff --git a/content/en/docs/v1.3/applications/tenant.md b/content/en/docs/v1.3/applications/tenant.md
new file mode 100644
index 00000000..db87d35d
--- /dev/null
+++ b/content/en/docs/v1.3/applications/tenant.md
@@ -0,0 +1,131 @@
+---
+title: "Tenant Application Reference"
+linkTitle: "Tenant"
+weight: 50
+aliases:
+ - /docs/reference/applications/tenant
+ - /docs/v1.3/reference/applications/tenant
+---
+
+
+
+
+A tenant is the main unit of security on the platform. The closest analogy would be Linux kernel namespaces.
+
+Tenants can be created recursively and are subject to the following rules:
+
+### Tenant naming
+
+Tenant names must be alphanumeric:
+
+- Lowercase letters (`a-z`) and digits (`0-9`) only
+- Must start with a lowercase letter
+- Dashes (`-`) are **not allowed**, unlike with other services
+- Maximum length depends on the cluster configuration (Helm release prefix and root domain)
+
+This restriction exists to keep consistent naming in tenants, nested tenants, and services deployed in them.
+A tenant cannot be named `foo-bar` because parsing internal resource names like `tenant-foo-bar` would be ambiguous.
+
+For example:
+
+- The root tenant is named `root`, but internally it's referenced as `tenant-root`.
+- A nested tenant could be named `foo`, which would result in `tenant-foo` in service names and URLs.
+
+### Unique domains
+
+Each tenant has its own domain.
+By default (unless otherwise specified), a tenant inherits the domain of its parent, prefixed with its own name.
+For example, if the parent had the domain `example.org`, then `tenant-foo` would get the domain `foo.example.org` by default.
+
+Kubernetes clusters created in this tenant namespace would get domains like `kubernetes-cluster.foo.example.org`.
+
+Example:
+```text
+tenant-root (example.org)
+└── tenant-foo (foo.example.org)
+    └── kubernetes-cluster1 (kubernetes-cluster1.foo.example.org)
+```
+
+### Nesting tenants and reusing parent services
+
+Tenants can be nested.
+A tenant administrator can create nested tenants using the "Tenant" application from the catalogue.
+
+Higher-level tenants can view and manage the applications of all their child tenants.
+If a tenant does not run its own cluster services, it can use those of its parent.
+
+For example, you create:
+- Tenant `tenant-u1` with a set of services like `etcd`, `ingress`, `monitoring`.
+- Tenant `tenant-u2` nested in `tenant-u1`.
+
+Let's see what happens when you run Kubernetes and Postgres in the `tenant-u2` namespace.
+
+Since `tenant-u2` does not have its own cluster services like `etcd`, `ingress`, and `monitoring`,
+the applications running in `tenant-u2` will use the cluster services of the parent tenant.
+
+This in turn means:
+
+- The Kubernetes cluster data will be stored in `etcd` for `tenant-u1`.
+- Access to the cluster will be through the common `ingress` of `tenant-u1`.
+- All metrics will be collected in the `monitoring` stack of `tenant-u1`, and only that tenant will have access to them.
+
+Example:
+```text
+tenant-u1
+├── etcd
+├── ingress
+├── monitoring
+└── tenant-u2
+    ├── kubernetes-cluster1
+    └── postgres-db1
+```
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ----------------- | -------------------------------------------------------------------------------------------------------------------------- | --------------------- | ------- |
+| `host` | The hostname used to access tenant services (defaults to using the tenant name as a subdomain for its parent tenant host). | `string` | `""` |
+| `etcd` | Deploy own Etcd cluster. | `bool` | `false` |
+| `monitoring` | Deploy own Monitoring Stack. | `bool` | `false` |
+| `ingress` | Deploy own Ingress Controller. | `bool` | `false` |
+| `seaweedfs` | Deploy own SeaweedFS. | `bool` | `false` |
+| `schedulingClass` | The name of a SchedulingClass CR to apply scheduling constraints for this tenant's workloads. | `string` | `""` |
+| `resourceQuotas` | Define resource quotas for the tenant. | `map[string]quantity` | `{}` |
+
+
+## Configuration
+
+### Resource Quotas
+
+The `resourceQuotas` parameter allows you to limit resources available to the tenant. Supported keys include:
+
+**Compute resources** (converted to `requests.X` and `limits.X`):
+- `cpu` - Total CPU cores (e.g., `"4"` or `"500m"`)
+- `memory` - Total memory (e.g., `"4Gi"` or `"512Mi"`)
+- `ephemeral-storage` - Ephemeral storage limit (e.g., `"10Gi"`)
+- `storage` - Total persistent storage (e.g., `"100Gi"`)
+
+**Object count quotas** (passed as-is):
+- `pods` - Maximum number of pods
+- `services` - Maximum number of services
+- `services.loadbalancers` - Maximum number of LoadBalancer services
+- `services.nodeports` - Maximum number of NodePort services
+- `configmaps` - Maximum number of ConfigMaps
+- `secrets` - Maximum number of Secrets
+- `persistentvolumeclaims` - Maximum number of PVCs
+
+**Example:**
+```yaml
+resourceQuotas:
+  cpu: 4
+  memory: 4Gi
+  storage: 10Gi
+  services.loadbalancers: "3"
+  pods: "50"
+```
diff --git a/content/en/docs/v1.3/cozystack-api/_index.md b/content/en/docs/v1.3/cozystack-api/_index.md
new file mode 100644
index 00000000..5bc37409
--- /dev/null
+++ b/content/en/docs/v1.3/cozystack-api/_index.md
@@ -0,0 +1,140 @@
+---
+title: Cozystack API
+description: Cozystack API for managing services and resources
+weight: 70
+aliases:
+ - /docs/v1.3/development/cozystack-api
+---
+
+## Cozystack API
+
+Cozystack provides a powerful API that allows you to deploy services using various tools. You can manage resources through kubectl, Terraform, or programmatically using Go.
+
+**The best way to learn the Cozystack API is to:**
+
+1. Use the dashboard to deploy an application.
+2. Examine the deployed resource in the Cozystack API and use it as a reference.
+3. Parameterize and replicate the example resource to create your own resources through the API.
+
+## Discovering Resources
+
+You can list all available resources using `kubectl`:
+
+```bash
+# kubectl api-resources | grep apps.cozystack
+buckets apps.cozystack.io/v1alpha1 true Bucket
+clickhouses apps.cozystack.io/v1alpha1 true ClickHouse
+etcds apps.cozystack.io/v1alpha1 true Etcd
+foundationdbs apps.cozystack.io/v1alpha1 true FoundationDB
+harbors apps.cozystack.io/v1alpha1 true Harbor
+httpcaches apps.cozystack.io/v1alpha1 true HTTPCache
+infos apps.cozystack.io/v1alpha1 true Info
+ingresses apps.cozystack.io/v1alpha1 true Ingress
+kafkas apps.cozystack.io/v1alpha1 true Kafka
+kuberneteses apps.cozystack.io/v1alpha1 true Kubernetes
+mariadbs apps.cozystack.io/v1alpha1 true MariaDB
+mongodbs apps.cozystack.io/v1alpha1 true MongoDB
+monitorings apps.cozystack.io/v1alpha1 true Monitoring
+natses apps.cozystack.io/v1alpha1 true NATS
+openbaos apps.cozystack.io/v1alpha1 true OpenBAO
+postgreses apps.cozystack.io/v1alpha1 true Postgres
+qdrants apps.cozystack.io/v1alpha1 true Qdrant
+rabbitmqs apps.cozystack.io/v1alpha1 true RabbitMQ
+redises apps.cozystack.io/v1alpha1 true Redis
+seaweedfses apps.cozystack.io/v1alpha1 true SeaweedFS
+tcpbalancers apps.cozystack.io/v1alpha1 true TCPBalancer
+tenants apps.cozystack.io/v1alpha1 true Tenant
+virtualprivate apps.cozystack.io/v1alpha1 true VirtualPrivateCloud
+vmdisks apps.cozystack.io/v1alpha1 true VMDisk
+vminstances apps.cozystack.io/v1alpha1 true VMInstance
+vpns apps.cozystack.io/v1alpha1 true VPN
+
+```
+
+## Using kubectl
+
+Request a specific resource type in your tenant namespace:
+
+```bash
+# kubectl get postgreses -n tenant-test
+NAME READY AGE VERSION
+test True 46s 0.7.1
+```
+
+View the YAML output:
+
+```yaml
+# kubectl get postgreses -n tenant-test test -o yaml
+apiVersion: apps.cozystack.io/v1alpha1
+appVersion: 0.7.1
+kind: Postgres
+metadata:
+ name: test
+ namespace: tenant-test
+spec:
+ databases: {}
+ replicas: 2
+ size: 10Gi
+ storageClass: ""
+ users: {}
+status:
+ conditions:
+ - lastTransitionTime: "2024-12-10T09:53:32Z"
+ message: Helm install succeeded for release tenant-test/postgres-test.v1 with chart postgres@0.7.1
+ reason: InstallSucceeded
+ status: "True"
+ type: Ready
+ - lastTransitionTime: "2024-12-10T09:53:32Z"
+ message: Helm install succeeded for release tenant-test/postgres-test.v1 with chart postgres@0.7.1
+ reason: InstallSucceeded
+ status: "True"
+ type: Released
+ version: 0.7.1
+```
+
+You can use this resource as an example to create a similar service via the API. Just save the output to a file, update the `name` and any parameters you need, then use `kubectl` to create a new Postgres instance:
+
+```bash
+kubectl apply -f postgres.yaml
+```
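+
+For reference, a trimmed `postgres.yaml` derived from the output above might look like this, with the `status` section removed and a new `name` set (everything else is copied from the example resource):
+
+```yaml
+apiVersion: apps.cozystack.io/v1alpha1
+appVersion: 0.7.1
+kind: Postgres
+metadata:
+  name: test-copy
+  namespace: tenant-test
+spec:
+  databases: {}
+  replicas: 2
+  size: 10Gi
+  storageClass: ""
+  users: {}
+```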
+
+## Using Terraform
+
+Cozystack integrates with Terraform. You can use the default `kubernetes` provider to create resources in the Cozystack API.
+
+**Example:**
+
+```hcl
+provider "kubernetes" {
+ config_path = "~/.kube/config"
+}
+
+resource "kubernetes_manifest" "vm_disk_iso" {
+ manifest = {
+ "apiVersion" = "apps.cozystack.io/v1alpha1"
+ "appVersion" = "0.7.1"
+ "kind" = "Postgres"
+ "metadata" = {
+ "name" = "test2"
+ "namespace" = "tenant-test"
+ }
+ "spec" = {
+ "replicas" = 2
+ "size" = "10Gi"
+ }
+ }
+}
+```
+
+Then run:
+
+```bash
+terraform plan
+terraform apply
+```
+
+Your new Postgres cluster will be deployed.
+
+## Using Go code
+
+Cozystack publishes its custom Kubernetes resource types as a Go module, enabling management of Cozystack resources from any Go code. For details and examples, see the [Go Types]({{< relref "go-types.md" >}}) page.
diff --git a/content/en/docs/v1.3/cozystack-api/application-definitions.md b/content/en/docs/v1.3/cozystack-api/application-definitions.md
new file mode 100644
index 00000000..f3a77133
--- /dev/null
+++ b/content/en/docs/v1.3/cozystack-api/application-definitions.md
@@ -0,0 +1,188 @@
+---
+title: ApplicationDefinition reference
+linkTitle: ApplicationDefinition
+description: How ApplicationDefinition resources describe application types and how to look them up from client code
+weight: 15
+---
+
+## Overview
+
+`ApplicationDefinition` (`applicationdefinitions.cozystack.io/v1alpha1`) is a
+cluster-scoped CRD that describes every application type the platform
+exposes. Each definition declares the Kubernetes kind that tenants use in
+the aggregated API (`spec.application.kind`), the OpenAPI schema used to
+render the dashboard form and validate user input
+(`spec.application.openAPISchema`), and dashboard metadata such as
+category, icon, and display names (`spec.dashboard`).
+
+The aggregated API server (`cozystack-api`) lists every `ApplicationDefinition`
+**once at startup** and registers a matching resource under
+`apps.cozystack.io/v1alpha1`. The set of tenant-facing kinds does not change
+while the API server is running — adding, removing, or renaming an
+`ApplicationDefinition` takes effect only after `cozystack-api` restarts.
+
+A dedicated controller (`applicationdefinition-controller`, shipped with
+Cozystack) watches `ApplicationDefinition` and triggers that restart
+automatically: on any change to the set it computes a SHA-256 checksum over
+the sorted definitions and writes it to the `cozystack.io/config-hash`
+annotation on the `cozy-system/cozystack-api` Deployment's pod template,
+which Kubernetes then reconciles as a rolling restart. Events are debounced
+over a short window, and if the checksum is unchanged the restart is
+skipped. Operators do not need to `kubectl rollout restart` by hand.
+
+When a user creates a `Postgres` CR through the dashboard, `kubectl`, or a Go
+client, the aggregated layer translates it into a Flux `HelmRelease` that uses
+the chart referenced by the definition.
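+
+As an illustration of how these pieces fit together, a definition might look roughly like the sketch below. Only the fields discussed on this page are shown, and the exact layout of `spec.dashboard` and the schema encoding are assumptions for illustration; inspect a real object with `kubectl get applicationdefinition http-cache -o yaml` for the authoritative shape.
+
+```yaml
+# Illustrative sketch, not a verbatim resource from a cluster.
+apiVersion: cozystack.io/v1alpha1
+kind: ApplicationDefinition
+metadata:
+  name: http-cache                # lowercase-with-hyphens CRD object name
+spec:
+  application:
+    kind: HTTPCache               # tenant-facing kind under apps.cozystack.io/v1alpha1
+    singular: httpcache           # lowercase, no hyphens
+    plural: httpcaches            # lowercase, no hyphens
+    openAPISchema: |              # schema used to render the dashboard form and validate input
+      {"type": "object", "properties": {"size": {"type": "string"}}}
+  dashboard:
+    category: Networking          # display metadata; these field names are illustrative
+    icon: http-cache.svg
+```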
+
+## Naming convention
+
+`ApplicationDefinition` uses two independent naming styles. Each definition
+sets them explicitly, and the relationship between them is **not derivable
+by any string transform**:
+
+| Field | Style | Example (HTTP cache) | Example (VM disk) | Example (TCP balancer) |
+| --- | --- | --- | --- | --- |
+| `metadata.name` | lowercase-with-hyphens | `http-cache` | `vm-disk` | `tcp-balancer` |
+| `spec.application.kind` | CamelCase, preserves acronyms | `HTTPCache` | `VMDisk` | `TCPBalancer` |
+| `spec.application.singular` | lowercase, no hyphens | `httpcache` | `vmdisk` | `tcpbalancer` |
+| `spec.application.plural` | lowercase, no hyphens | `httpcaches` | `vmdisks` | `tcpbalancers` |
+
+Note that `metadata.name` is not a function of `spec.application.kind`. The
+hyphen positions (`tcp-balancer`, `vm-disk`, `http-cache`) and the absence of
+hyphens in `singular`/`plural` (`tcpbalancer`, `vmdisk`, `httpcache`) are
+conventions chosen per application, not outputs of a shared algorithm.
+`strings.ToLower(kind)` yields `httpcache`, which matches
+`spec.application.singular` but **not** `metadata.name`. A direct lookup by
+the lowercased kind therefore fails:
+
+```bash
+# The aggregated API resource uses the lowercased plural:
+$ kubectl get httpcaches --namespace tenant-demo
+NAME READY AGE VERSION
+frontend True 2m 1.2.0
+
+# But the ApplicationDefinition that backs it is stored under a different name:
+$ kubectl get applicationdefinition httpcache
+Error from server (NotFound): applicationdefinitions.cozystack.io "httpcache" not found
+
+$ kubectl get applicationdefinition http-cache
+NAME AGE
+http-cache 14d
+```
+
+Acronyms make this more visible: `TCPBalancer`, `HTTPCache`, and `VMDisk` all
+lose their capitalisation in the aggregated resource name (`tcpbalancers`,
+`httpcaches`, `vmdisks`) but keep hyphens in the CRD name (`tcp-balancer`,
+`http-cache`, `vm-disk`).
+
+## Recommended lookup pattern
+
+Client code that needs to resolve a Cozystack kind — for example a dashboard
+that receives `HTTPCache` from a HelmRelease label and wants to render the
+matching form — should **list all `ApplicationDefinition`s and filter by
+`spec.application.kind`** instead of attempting a direct `Get` by the lowercased
+kind. The set of definitions is small (tens of items) and changes rarely, so
+this pattern is cheap and stable. Return the whole matched object so that
+downstream callers can read `spec.application.openAPISchema`,
+`spec.dashboard`, or any other field without issuing a second API request.
+
+Before relying on the group and resource names below, confirm them against
+your cluster with:
+
+```bash
+$ kubectl api-resources | grep applicationdefinition
+applicationdefinitions cozystack.io/v1alpha1 false ApplicationDefinition
+```
+
+The row should list `applicationdefinitions` in the `NAME` column,
+`cozystack.io/v1alpha1` in the `APIVERSION` column, `false` under
+`NAMESPACED` (the resource is cluster-scoped), and `ApplicationDefinition`
+in the `KIND` column. If the group differs on your cluster, adjust
+`GroupVersionResource` in the example accordingly.
+
+```go
+import (
+ "context"
+ "fmt"
+
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
+ "k8s.io/apimachinery/pkg/runtime/schema"
+ "k8s.io/client-go/dynamic"
+)
+
+// findByKind returns the ApplicationDefinition whose spec.application.kind
+// matches the requested kind, or an error if no match is found. The caller
+// gets the full object, so fields such as spec.application.openAPISchema
+// are available without a second API round trip.
+func findByKind(ctx context.Context, client dynamic.Interface, kind string) (*unstructured.Unstructured, error) {
+ if kind == "" {
+ return nil, fmt.Errorf("kind must not be empty")
+ }
+
+ gvr := schema.GroupVersionResource{
+ Group: "cozystack.io",
+ Version: "v1alpha1",
+ Resource: "applicationdefinitions",
+ }
+
+ // The set of ApplicationDefinitions on a Cozystack cluster is small
+ // (on the order of tens), so a single unpaginated List is sufficient.
+ // If you adapt this helper for a larger catalog, set ListOptions.Limit
+ // and loop on the continue token to avoid silent truncation.
+ list, err := client.Resource(gvr).List(ctx, metav1.ListOptions{})
+ if err != nil {
+ return nil, fmt.Errorf("list %s/%s/%s: %w",
+ gvr.Group, gvr.Version, gvr.Resource, err)
+ }
+ for i := range list.Items {
+ specKind, found, err := unstructured.NestedString(
+ list.Items[i].Object, "spec", "application", "kind")
+ if err != nil || !found {
+ // Skip definitions with missing or non-string kind so the
+ // iteration does not match a malformed entry.
+ continue
+ }
+ if specKind == kind {
+ return &list.Items[i], nil
+ }
+ }
+ // Include the GVR in the error so a wrong group (for example after a
+ // CRD rename) is distinguishable from a genuine "no such kind".
+ return nil, fmt.Errorf("no ApplicationDefinition with spec.application.kind %q found under %s/%s/%s",
+ kind, gvr.Group, gvr.Version, gvr.Resource)
+}
+```
+
+The set of `ApplicationDefinition`s served via the aggregated API is frozen
+at `cozystack-api` startup (see [Overview](#overview)), but the backing
+CRDs can still be edited at runtime: an administrator can tweak
+`spec.application.openAPISchema` or `spec.dashboard` on an existing
+definition, or add a new kind — `applicationdefinition-controller` then
+triggers a rolling restart of `cozystack-api` so the change becomes
+reachable through the aggregated API without manual intervention. How
+aggressively a client should cache therefore depends on its own lifetime:
+
+- **Short-lived processes** (CLI tools, one-shot scripts, serverless
+ functions) can safely cache the result of `findByKind` for the entire
+ process lifetime.
+- **Long-running processes** (dashboards, controllers, operators) should
+ re-list `ApplicationDefinition`s on a cadence that matches how often
+ their operators edit schemas — once every few minutes is usually
+ enough. Definitions change rarely, so a watch is not worth the
+ complexity. A new `ApplicationDefinition` will become reachable through
+  the aggregated API shortly after it is created, once the
+  controller-driven rolling restart of `cozystack-api` completes.
+
+{{% alert color="info" %}}
+The lowercased plural (`httpcaches`, `vmdisks`) **is** the correct name for
+tenant-facing resources under `apps.cozystack.io/v1alpha1`. It is only the
+`applicationdefinitions.cozystack.io` CRD that uses the hyphenated form.
+{{% /alert %}}
+
+## See also
+
+- [Cozystack API overview]({{% ref "/docs/v1.3/cozystack-api" %}}) — kubectl,
+ Terraform, and Go client usage for tenant-facing resources.
+- [Go Types]({{% ref "/docs/v1.3/cozystack-api/go-types" %}}) — typed Go clients
+ for `apps.cozystack.io/v1alpha1` resources.
diff --git a/content/en/docs/v1.3/cozystack-api/go-types.md b/content/en/docs/v1.3/cozystack-api/go-types.md
new file mode 100644
index 00000000..f9df675d
--- /dev/null
+++ b/content/en/docs/v1.3/cozystack-api/go-types.md
@@ -0,0 +1,213 @@
+---
+title: Go Types
+description: Programmatic management of Cozystack resources using Go types
+weight: 2
+---
+
+## Go Types
+
+Cozystack publishes its Kubernetes resource types as a Go module, enabling management of Cozystack resources from any Go code. The types are available at [pkg.go.dev/github.com/cozystack/cozystack/api/apps/v1alpha1](https://pkg.go.dev/github.com/cozystack/cozystack/api/apps/v1alpha1).
+
+## Installation
+
+Add the dependency to your Go module:
+
+```bash
+go get github.com/cozystack/cozystack/api/apps/v1alpha1@{{< version-pin "cozystack_tag" >}}
+```
+
+## Use Cases
+
+The Go types are useful for:
+
+- **Building custom automation tools** - Create scripts or applications that programmatically deploy and manage Cozystack resources
+- **Integrating with external systems** - Connect Cozystack with your own CI/CD pipelines, monitoring systems, or orchestration tools
+- **Validating configurations** - Use the types to validate resource specifications before applying them to the cluster
+- **Generating documentation** - Parse and analyze existing Cozystack resources
+- **Building dashboards** - Create custom UIs for Cozystack management
+
+## Available Packages
+
+The module contains a package for each resource type. You can explore the packages for your specific version at [pkg.go.dev/github.com/cozystack/cozystack/api/apps/v1alpha1](https://pkg.go.dev/github.com/cozystack/cozystack/api/apps/v1alpha1).
+
+### Simple Example
+
+For basic usage, importing a specific package is straightforward:
+
+```go
+package main
+
+import (
+ "fmt"
+
+ "github.com/cozystack/cozystack/api/apps/v1alpha1/vmdisk"
+)
+
+func main() {
+ // Create a VMDisk source from a named image
+ image := vmdisk.SourceImage{Name: "ubuntu"}
+ fmt.Printf("Source: %+v\n", image)
+}
+```
+
+## Complex Example
+
+This example demonstrates creating and marshaling several Cozystack resource types:
+
+```go
+package main
+
+import (
+ "encoding/json"
+ "fmt"
+
+ "github.com/cozystack/cozystack/api/apps/v1alpha1/postgresql"
+ "github.com/cozystack/cozystack/api/apps/v1alpha1/vminstance"
+ "github.com/cozystack/cozystack/api/apps/v1alpha1/redis"
+ metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+ "k8s.io/apimachinery/pkg/api/resource"
+)
+
+func main() {
+ // Create a PostgreSQL config with users and databases
+ pgConfig := postgresql.Config{
+ TypeMeta: metav1.TypeMeta{
+ APIVersion: "apps.cozystack.io/v1alpha1",
+ Kind: "Postgres",
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "my-app-db",
+ Namespace: "tenant-myapp",
+ },
+ Spec: postgresql.ConfigSpec{
+ Replicas: 3,
+ Size: resource.MustParse("50Gi"),
+ Version: postgresql.Version("v18"),
+ Users: map[string]postgresql.User{
+ "appuser": {
+ Password: "secretpassword",
+ Replication: false,
+ },
+ "readonly": {
+ Password: "readonlypass",
+ },
+ },
+ Databases: map[string]postgresql.Database{
+ "myapp": {
+ Extensions: []string{"pg_trgm", "uuid-ossp"},
+ Roles: postgresql.DatabaseRoles{
+ Admin: []string{"appuser"},
+ Readonly: []string{"readonly"},
+ },
+ },
+ },
+ Backup: postgresql.Backup{
+ Enabled: true,
+ DestinationPath: "s3://mybackups/postgres/",
+ EndpointURL: "http://minio:9000",
+ RetentionPolicy: "30d",
+ S3AccessKey: "myaccesskey",
+ S3SecretKey: "mysecretkey",
+ Schedule: "0 2 * * * *",
+ },
+ Quorum: postgresql.Quorum{
+ MinSyncReplicas: 1,
+ MaxSyncReplicas: 1,
+ },
+ Postgresql: postgresql.PostgreSQL{
+ Parameters: postgresql.PostgreSQLParameters{
+ MaxConnections: 200,
+ },
+ },
+ },
+ }
+
+ // Marshal to JSON for kubectl apply
+ pgJSON, _ := json.MarshalIndent(pgConfig, "", " ")
+ fmt.Println(string(pgJSON))
+
+ // Create a Redis config
+ redisConfig := redis.Config{
+ TypeMeta: metav1.TypeMeta{
+ APIVersion: "apps.cozystack.io/v1alpha1",
+ Kind: "Redis",
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "cache",
+ Namespace: "tenant-myapp",
+ },
+ Spec: redis.ConfigSpec{
+ Replicas: 2,
+ Size: resource.MustParse("5Gi"),
+ Version: redis.Version("v8"),
+ AuthEnabled: true,
+ ResourcesPreset: redis.ResourcesPreset("medium"),
+ },
+ }
+
+ // Create a VMInstance with disks
+ vmConfig := vminstance.Config{
+ TypeMeta: metav1.TypeMeta{
+ APIVersion: "apps.cozystack.io/v1alpha1",
+ Kind: "VMInstance",
+ },
+ ObjectMeta: metav1.ObjectMeta{
+ Name: "my-vm",
+ Namespace: "tenant-myapp",
+ },
+ Spec: vminstance.ConfigSpec{
+ InstanceType: "u1.medium",
+ InstanceProfile: "ubuntu",
+ RunStrategy: vminstance.RunStrategy("Always"),
+ External: true,
+ ExternalMethod: vminstance.ExternalMethod("PortList"),
+ ExternalPorts: []int{22, 80, 443},
+ Resources: vminstance.Resources{
+ Cpu: resource.MustParse("2"),
+ Memory: resource.MustParse("4Gi"),
+ Sockets: resource.MustParse("1"),
+ },
+ Disks: []vminstance.Disk{
+ {Bus: "sata", Name: "rootdisk"},
+ {Bus: "sata", Name: "datadisk"},
+ },
+ Subnets: []vminstance.Subnet{
+ {Name: "default"},
+ },
+ SshKeys: []string{
+ "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQ...",
+ },
+ CloudInit: `#cloud-config
+packages:
+ - nginx`,
+ },
+ }
+
+	// Marshal the remaining configs as well; otherwise the Go compiler rejects
+	// redisConfig and vmConfig as declared and not used.
+	redisJSON, _ := json.MarshalIndent(redisConfig, "", "  ")
+	fmt.Println(string(redisJSON))
+
+	vmJSON, _ := json.MarshalIndent(vmConfig, "", "  ")
+	fmt.Println(string(vmJSON))
+}
+```
+
+## Deploying Resources
+
+After creating your resource configurations, you can deploy them using:
+
+1. **kubectl** - Marshal the config and apply it with `kubectl apply`. JSON is valid YAML,
+   so the `json.Marshal` output can be applied directly, or converted with a YAML library
+   such as `sigs.k8s.io/yaml`:
+   ```go
+   jsonData, _ := json.Marshal(yourConfig)
+   // Write jsonData to a file and `kubectl apply -f` it, or convert it to YAML
+   // with a marshaling library (for example, sigs.k8s.io/yaml).
+   ```
+
+2. **Direct Kubernetes client** - Use client-go:
+ ```go
+ import (
+ "k8s.io/client-go/kubernetes"
+ "k8s.io/apimachinery/pkg/runtime"
+ )
+
+ scheme := runtime.NewScheme()
+ // Register your types with the scheme
+ ```
+
+## Additional Resources
+
+- [Go Package Documentation](https://pkg.go.dev/github.com/cozystack/cozystack/api/apps/v1alpha1)
+- [Cozystack GitHub Repository](https://github.com/cozystack/cozystack)
+- [Kubernetes API Reference](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.35/)
diff --git a/content/en/docs/v1.3/cozystack-api/rest.md b/content/en/docs/v1.3/cozystack-api/rest.md
new file mode 100644
index 00000000..189676e8
--- /dev/null
+++ b/content/en/docs/v1.3/cozystack-api/rest.md
@@ -0,0 +1,9 @@
+---
+title: REST API Reference
+linkTitle: REST API
+description: "Cozystack REST API Reference"
+type: swagger
+weight: 10
+---
+
+{{< swaggerui src="/docs/v1.3/cozystack-api/api.json" >}}
diff --git a/content/en/docs/v1.3/development.md b/content/en/docs/v1.3/development.md
new file mode 100644
index 00000000..00fd0005
--- /dev/null
+++ b/content/en/docs/v1.3/development.md
@@ -0,0 +1,383 @@
+---
+linkTitle: Developer Guide
+title: Cozystack Internals and Developer Guide
+description: Cozystack Internals and Development
+weight: 100
+aliases:
+ - /docs/v1.3/development/development
+---
+
+## How it works
+
+Cozystack is an operator-driven platform. The bootstrap and ongoing management are
+handled by a set of controllers that run inside the cluster. The high-level flow is:
+
+1. **Installer chart** (`packages/core/installer`) is applied via `helm install`.
+ It deploys the `cozystack-operator` Deployment into the `cozy-system` namespace.
+
+2. **cozystack-operator** starts and performs one-time bootstrap:
+ - Installs Cozystack CRDs (`Package`, `PackageSource`) from embedded manifests
+ (`internal/crdinstall`).
+ - Installs Flux components (source-controller, helm-controller,
+ source-watcher) from embedded manifests (`internal/fluxinstall`).
+ - Creates the **initial OCIRepository** (`cozystack-platform`) from the
+ `platformSourceUrl` and `platformSourceRef` values configured in the installer.
+ - Creates a `PackageSource` that references the initial OCIRepository.
+
+3. **Reconciliation loop** takes over. The operator watches `PackageSource` and
+ `Package` CRDs and translates them into Flux `HelmRelease` objects. Flux
+ then installs and manages the actual Helm charts.
+
+4. **Platform chart** (`packages/core/platform`) is deployed as a regular
+ Package. It reads the cluster configuration from the
+ `cozystack.cozystack-platform`
+ [Package]({{% ref "/docs/v1.3/operations/configuration/platform-package" %}})
+ resource and templates bundle manifests that define which system components
+ should be installed.
+
+ The platform chart also creates the **secondary OCIRepository** (`cozystack-packages`)
+ by copying the spec from the initial OCIRepository. All PackageSources reference
+ this secondary repository. During upgrades, the platform chart runs migrations
+ as `pre-upgrade` hooks before creating or updating component HelmReleases.
+
+5. **FluxCD** is the execution engine — it reconciles `HelmRelease` objects
+ created by the operator, pulling chart artifacts from `ExternalArtifact`
+ resources and applying them to the cluster.
+
+For the full reconciliation chain (PackageSource → ArtifactGenerator → ExternalArtifact → Package → HelmRelease → Pods), dependency resolution, update and rollback flows, and the cozypkg CLI, see [Key Concepts]({{% ref "/docs/v1.3/guides/concepts" %}}).
+
+### OCIRepositories and Migration Flow
+
+Cozystack uses two OCIRepository resources to manage platform updates:
+
+| OCIRepository | Created By | References |
+|---|---|---|
+| `cozystack-platform` | cozystack-operator | Configured via installer values (`platformSourceUrl`, `platformSourceRef`) |
+| `cozystack-packages` | Platform chart (`repository.yaml`) | Copies spec from `cozystack-platform` |
+
+All PackageSources in `packages/core/platform/sources/` reference `cozystack-packages`.
+
+#### Migration Execution
+
+Migrations run as Helm `pre-upgrade` hooks in the platform chart:
+
+```yaml
+# packages/core/platform/templates/migration-hook.yaml
+metadata:
+ name: cozystack-migration-hook
+ annotations:
+ helm.sh/hook: pre-upgrade,pre-install
+ helm.sh/hook-weight: "1"
+```
+
+The migration container reads the current version from the `cozystack-version` ConfigMap and executes migration scripts sequentially from `CURRENT_VERSION` to `TARGET_VERSION - 1`. Each migration updates the ConfigMap on success, ensuring migrations are idempotent and can resume after failures.
+
+#### Why Two Repositories?
+
+The separation ensures that:
+
+1. The initial OCIRepository is managed by the operator (via installer values).
+2. All PackageSources have a consistent reference (`cozystack-packages`) rather than pointing to the operator-managed source directly.
+3. The platform chart can run migrations before creating the secondary OCIRepository, guaranteeing migrations execute before component updates.
+
+### Key binaries
+
+| Binary | Source | Role |
+|---|---|---|
+| **cozystack-operator** | `cmd/cozystack-operator` | Bootstrap (CRDs, Flux, platform source), `PackageSource` and `Package` reconciliation, `cozystack-values` secret replication. |
+| **cozystack-controller** | `cmd/cozystack-controller` | Workload and ApplicationDefinition reconciliation, dashboard management. |
+| **cozystack-api** | `cmd/cozystack-api` | Kubernetes API aggregation layer for `apps.cozystack.io` and `core.cozystack.io` API groups. |
+| **cozypkg** | `cmd/cozypkg` | CLI tool for managing packages — dependency visualization, interactive installation, deletion. |
+
+## Repository Structure
+
+The main structure of the [cozystack](https://github.com/cozystack/cozystack) repository is:
+
+```shell
+.
+├── api # Go types for Cozystack CRDs (Package, PackageSource, etc.)
+├── cmd # Entry points for all binaries
+│ ├── cozystack-operator # Main platform operator
+│ ├── cozystack-controller # Workload and application controllers
+│ ├── cozystack-api # Aggregated API server
+│ └── cozypkg # Package management CLI
+├── internal # Controller and reconciler implementations
+│ ├── operator # PackageSource and Package reconcilers
+│ ├── controller # Workload, ApplicationDefinition controllers
+│ ├── fluxinstall # Embedded Flux manifests and installer
+│ ├── crdinstall # Embedded CRD manifests and installer
+│ └── cozyvaluesreplicator # Secret replication logic
+├── packages # Helm charts organized by layer
+│ ├── core # Bootstrap and platform configuration
+│ ├── system # Infrastructure operators and upstream charts
+│ ├── apps # User-facing application charts
+│ └── extra # Tenant-specific application charts
+├── pkg # Shared Go libraries
+├── dashboards # Grafana dashboards
+├── hack # Helper scripts for local development
+└── docs # Changelogs and release notes
+```
+
+Development can be done locally by modifying and updating files in this repository.
+
+## Packages
+
+### [core](https://github.com/cozystack/cozystack/tree/main/packages/core)
+
+Core packages handle bootstrap and platform-level configuration.
+
+#### installer
+
+A Helm chart that deploys the `cozystack-operator` Deployment. It creates the
+`cozy-system` namespace, a ServiceAccount with cluster-admin privileges, and the
+operator Deployment with flags that trigger CRD and Flux installation on startup.
+The operator image and platform source URL are injected at build time.
+
+#### platform
+
+A Helm chart deployed as a regular `Package` (not applied directly). It reads the
+cluster configuration from the `cozystack.cozystack-platform`
+[Package]({{% ref "/docs/v1.3/operations/configuration/platform-package" %}})
+resource and templates manifests according to the specified
+[variant]({{% ref "/docs/v1.3/operations/configuration/variants" %}}) and
+component settings, defining which system components should be installed.
+
+#### flux-aio
+
+Flux components packaged for deployment by the operator.
+
+#### talos
+
+Talos OS configuration assets.
+
+{{% alert color="info" %}}
+Core packages do not use Helm to apply manifests; they are intended to be used only as `helm template . | kubectl apply -f -`.
+{{% /alert %}}
+
+### [system](https://github.com/cozystack/cozystack/tree/main/packages/system)
+
+System packages configure the system to manage and deploy user applications. The
+necessary system components are specified in the bundle configuration.
+
+System packages include two kinds of components:
+
+- **Operators** (e.g., `postgres-operator`, `kafka-operator`, `redis-operator`): Controllers
+ that know how to manage the full lifecycle of a specific application, including day-2 operations.
+- **Upstream Helm charts** for applications without a dedicated operator (e.g., `nats`, `ingress-nginx`):
+ These charts are placed in system so that apps and extra packages can deploy them
+ via Flux `HelmRelease` CRs, effectively using FluxCD as the operator.
+
+{{% alert color="info" %}}
+System packages use Helm to install and are managed by FluxCD.
+{{% /alert %}}
+
+### [apps](https://github.com/cozystack/cozystack/tree/main/packages/apps)
+
+These user-facing applications appear in the dashboard and include manifests to be applied to the cluster.
+
+Apps charts serve as a high-level API for users. They define only the parameters that
+should be exposed and validated through `values.schema.json`, keeping the interface
+minimal and secure. Apps charts should not contain business logic for deploying the
+application itself — instead they delegate to an operator or to FluxCD.
+
+Depending on whether the application has a dedicated operator, apps follow one of two patterns:
+
+#### Operator-based pattern
+
+When an application has a dedicated operator (e.g., PostgreSQL, MongoDB, Redis, Kafka),
+the app chart creates **CRD instances** that the operator manages:
+
+```
+packages/system/postgres-operator/ # Operator Helm chart
+packages/apps/postgres/ # App chart creates postgresql.cnpg.io/v1.Cluster CRs
+```
+
+The operator handles all deployment details and day-2 operations (scaling, backups, failover).
+The app chart simply creates the appropriate CRD with values derived from user input.
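+
+As a minimal sketch of what such an app chart renders, assuming the CloudNativePG operator referenced above, the template output might look roughly like this (not the actual `packages/apps/postgres` template):
+
+```yaml
+# Simplified sketch of an operator-based app chart's rendered output.
+apiVersion: postgresql.cnpg.io/v1
+kind: Cluster
+metadata:
+  name: postgres-example          # derived from the release name
+spec:
+  instances: 2                    # filled from the user-facing "replicas" value
+  storage:
+    size: 10Gi                    # filled from the user-facing "size" value
+```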
+
+#### HelmRelease-based pattern
+
+When an application has no dedicated operator and a Helm chart is the standard deployment
+method, the upstream chart is placed in `system/` and the app chart creates a
+**Flux `HelmRelease` CR** pointing to it:
+
+```
+packages/system/nats/ # Upstream NATS Helm chart
+packages/apps/nats/ # App chart creates helm.toolkit.fluxcd.io/v2.HelmRelease
+```
+
+In this case FluxCD acts as the operator, managing the Helm release lifecycle. The app
+chart controls which upstream values are exposed to the user, providing an additional layer
+of security — users cannot bypass validation to deploy the chart with arbitrary values.
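+
+A minimal sketch of the rendered output, assuming the NATS example above, might look like this; the source reference kind and name are illustrative assumptions, not the actual `packages/apps/nats` template:
+
+```yaml
+# Simplified sketch of a HelmRelease-based app chart's rendered output.
+apiVersion: helm.toolkit.fluxcd.io/v2
+kind: HelmRelease
+metadata:
+  name: nats-example              # derived from the release name
+spec:
+  chart:
+    spec:
+      chart: nats                 # upstream chart vendored under packages/system/nats
+      sourceRef:
+        kind: HelmRepository      # kind, name, and namespace here are assumptions
+        name: cozystack-system
+        namespace: cozy-public
+  values:
+    replicas: 2                   # only values exposed by the app chart reach the upstream chart
+```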
+
+Other examples of this pattern: `extra/ingress`, `extra/seaweedfs`, `extra/monitoring`.
+
+### [extra](https://github.com/cozystack/cozystack/tree/main/packages/extra)
+
+Extra packages are similar to `apps`, but they are not shown in the application catalog and can only be installed as part of a tenant.
+Extra packages installed in a tenant namespace can also be used by its child tenants.
+
+Read more about the [Tenant System]({{% ref "/docs/v1.3/guides/concepts#tenant-system" %}}) on the Core Concepts page.
+
+Only one instance of each extra application type can be used within a single tenant namespace.
+
+Extra packages follow the same two architectural patterns as apps (operator-based or HelmRelease-based).
+
+{{% alert color="info" %}}
+Apps and extra packages are installed as Helm releases from the dashboard and managed by FluxCD.
+{{% /alert %}}
+
+## Package Structure
+
+Every package is a typical Helm chart containing all necessary images and manifests
+for the platform. We follow an umbrella chart logic to keep upstream charts in the
+`./charts` directory and override values.yaml in the application's root.
+This structure simplifies upstream chart updates.
+
+```shell
+.
+├── Chart.yaml # Helm chart definition and parameter description
+├── Makefile # Common targets for simplifying local development
+├── charts # Directory for upstream charts
+├── images # Directory for Docker images
+├── patches # Optional directory for upstream chart patches
+├── templates # Additional manifests for the upstream Helm chart
+├── templates/dashboard-resourcemap.yaml # Role used to display k8s resources in dashboard
+├── values.yaml # Override values for the upstream Helm chart
+└── values.schema.json # JSON schema used for input values validation and to render UI elements in dashboard
+```
+
+You can use bitnami's [readme-generator](https://github.com/bitnami/readme-generator-for-helm) for generating `README.md` and `values.schema.json` files.
+
+Install it as the `readme-generator` binary on your system and run generation with the `make generate` command.
+
+## Helm Chart Development Principles
+
+The package structure and development workflow in Cozystack are guided by the following principles:
+
+### Easy to update upstream charts
+
+The original upstream chart must be easy to update, override, and modify. We use the umbrella chart pattern — upstream charts live in the `./charts` directory and are vendored as-is. Customizations go into `values.yaml` overrides and additional `templates/`, while structural changes to the upstream chart are applied via `patches/`. This separation ensures that updating to a new upstream version is straightforward: run `make update`, review the diff, and re-apply patches if needed.
+
+### Local-first artifacts
+
+Patches and container images are stored locally and are part of the package. The `patches/` directory holds any modifications to the upstream chart, and the `images/` directory contains Dockerfiles for building all required images. This ensures full reproducibility — everything needed to build and deploy the package is self-contained within the repository.
+
+{{% alert color="info" %}}
+Currently, not all packages build their images locally — some still reference externally-built images. We are actively working toward fully local image builds to achieve complete self-containment and reproducibility.
+{{% /alert %}}
+
+### Local development and testing workflow
+
+Every package must be easy to update and test locally against a real cluster, without relying on CI. The standard `make` targets (`make image`, `make diff`, `make apply`) provide a fast feedback loop: build images, compare rendered manifests against the live cluster, and apply changes — all from a developer's workstation.
+
+### No external dependencies
+
+Packages must not depend on external resources at runtime. All charts, images, and patches are vendored into the repository. This guarantees that builds and deployments are deterministic and do not break due to upstream registry outages, removed tags, or network issues.
+
+{{% alert color="info" %}}
+As noted above, full image self-containment is a work in progress. Some packages still pull images from external registries — this is a known gap that we plan to close as capacity allows.
+{{% /alert %}}
+
+## Development
+
+### Buildx configuration
+
+To build images, you need to install and configure the [`docker buildx`](https://github.com/docker/buildx) plugin.
+
+Instead of the built-in builder, you can [configure additional ones](https://docs.docker.com/build/builders/), which may be remote or support multiple architectures.
+This example shows how to create a builder with `kubernetes` driver, which allows you to build images directly in a Kubernetes cluster:
+
+```bash
+docker buildx create \
+ --bootstrap \
+ --name=buildkit \
+ --driver=kubernetes \
+ --driver-opt=namespace=tenant-kvaps,replicas=2 \
+ --platform=linux/amd64 \
+ --platform=linux/arm64 \
+ --use
+```
+
+Alternatively, omit the `--driver*` options to set up the builder in your local Docker environment.
+
+### Packages management
+
+Each application includes a Makefile to simplify the development process. We follow this logic for every package:
+
+```shell
+make update # Update Helm chart and versions from the upstream source
+make image # Build Docker images used in the package
+make show # Show output of rendered templates
+make diff # Diff Helm release against objects in a Kubernetes cluster
+make apply # Apply Helm release to a Kubernetes cluster
+```
+
+For example, to update cilium:
+
+```shell
+cd packages/system/cilium # Go to application directory
+make update # Download new version from upstream
+make image # Build cilium image
+git diff . # Show diff with changed manifests
+make diff # Show diff with applied cluster manifests
+make apply # Apply changed manifests to the cluster
+kubectl get pod -n cozy-cilium # Check if everything works as expected
+git commit -m "Update cilium" # Commit changes to the branch
+```
+
+To build the cozystack container with an updated chart:
+
+```shell
+cd packages/core/installer # Go to the cozystack package
+make image-packages # Build packages image
+make apply # Apply to the cluster
+kubectl get pod -n cozy-system # Check if everything works as expected
+kubectl get hr -A # Check HelmRelease objects
+```
+
+{{% alert color="info" %}}
+When rebuilding images, specify the `REGISTRY` environment variable to point to your Docker registry.
+
+Feel free to look inside each Makefile to better understand the logic.
+{{% /alert %}}
+
+### Testing
+
+The platform includes an [`e2e.sh`](https://github.com/cozystack/cozystack/blob/main/hack/e2e.sh) script that performs the following tasks:
+
+- Runs three QEMU virtual machines
+- Configures Talos Linux
+- Installs Cozystack
+- Waits for all HelmReleases to be installed
+- Performs additional checks to ensure that components are up and running
+
+You can run e2e.sh either locally or directly within a Kubernetes container.
+
+To run tests in a Kubernetes cluster, navigate to the `packages/core/testing` directory and execute the following commands:
+
+```shell
+make apply # Create testing sandbox in Kubernetes cluster
+make test # Run the end-to-end tests in existing sandbox
+make delete # Remove testing sandbox from Kubernetes cluster
+```
+
+{{% alert color="warning" %}}
+:warning: To run e2e tests in a Kubernetes cluster, your nodes must have sufficient free resources to create 3 VMs and store the data for the deployed applications.
+
+It is recommended to use bare-metal nodes of the parent Cozystack cluster.
+{{% /alert %}}
+
+### Dynamic Development Environment
+
+If you prefer to develop Cozystack in virtual machines instead of modifying the existing cluster, you can use the same sandbox as the testing environment. The Makefile in the `packages/core/testing` directory includes additional targets:
+
+```shell
+make exec # Opens an interactive shell in the sandbox container.
+make login # Downloads the kubeconfig into a temporary directory and runs a shell with the sandbox environment; mirrord must be installed.
+make proxy # Enable a SOCKS5 proxy server; mirrord and gost must be installed.
+```
+
+The SOCKS5 proxy can be configured in a browser to access services of a cluster running in the sandbox. Firefox has a handy extension for toggling the proxy on and off:
+
+- [Proxy Toggle](https://addons.mozilla.org/en-US/firefox/addon/proxy-toggle/)
diff --git a/content/en/docs/v1.3/getting-started/_index.md b/content/en/docs/v1.3/getting-started/_index.md
new file mode 100644
index 00000000..e7842cb2
--- /dev/null
+++ b/content/en/docs/v1.3/getting-started/_index.md
@@ -0,0 +1,27 @@
+---
+title: "Getting Started with Cozystack: Deploying Private Cloud from Scratch"
+linkTitle: "Getting Started"
+description: "Make your first steps, run a home lab, build a POC with Cozystack."
+weight: 10
+aliases:
+ - /docs/v1.3/get-started
+---
+
+This tutorial will guide you through your first deployment of a Cozystack cluster.
+Along the way, you will get to know about key concepts, learn to use Cozystack via dashboard and CLI,
+and get a working proof-of-concept.
+
+The tutorial is divided into several steps.
+Make sure to complete each step before starting the next one:
+
+| Step | Description |
+|-----------------------------------------------------------------------------------|--------------------------------------------------------------------------------------------------------------------------------|
+| [Requirements: prepare infrastructure and tools]({{% ref "requirements" %}}) | Prepare infrastructure and install required CLI tools on your machine before running this tutorial. |
+| 1. [Install Talos Linux]({{% ref "install-talos" %}}) | Install a Cozystack-specific distribution of Talos Linux using [`boot-to-talos`][btt], likely the easiest installation method. |
+| 2. [Install and bootstrap a Kubernetes cluster]({{% ref "install-kubernetes" %}}) | Bootstrap a Kubernetes cluster using [Talm][talm], the Talos configuration management tool made for Cozystack. |
+| 3. [Install and configure Cozystack]({{% ref "install-cozystack" %}}) | Install Cozystack, get administrative access, perform basic configuration, and access the Cozystack dashboard. |
+| 4. [Create a tenant for users and teams]({{% ref "create-tenant" %}}) | Create a user tenant, the foundation of RBAC in Cozystack, and get access to it via dashboard and Cozystack API. |
+| 5. [Deploy managed applications]({{% ref "deploy-app" %}}) | Start using Cozystack: deploy a virtual machine, managed application, and a tenant Kubernetes cluster. |
+
+[btt]: https://github.com/cozystack/boot-to-talos
+[talm]: https://github.com/cozystack/talm
\ No newline at end of file
diff --git a/content/en/docs/v1.3/getting-started/create-tenant.md b/content/en/docs/v1.3/getting-started/create-tenant.md
new file mode 100644
index 00000000..e3889538
--- /dev/null
+++ b/content/en/docs/v1.3/getting-started/create-tenant.md
@@ -0,0 +1,176 @@
+---
+title: "4. Create a User Tenant and Configure Access"
+linkTitle: "4. Create User Tenant"
+description: "Create a user tenant, the foundation of RBAC in Cozystack, and get access to it via dashboard and Cozystack API."
+weight: 40
+---
+
+## Objectives
+
+At this step of the tutorial, you will create a user tenant — a space for users to deploy applications and VMs.
+You will also get tenant credentials and log in as a user with access to this tenant.
+
+## Prerequisites
+
+Before you begin:
+
+
+- Complete the previous steps of the tutorial to get
+ a [Cozystack cluster]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}) running,
+ with storage, networking, and management dashboard configured.
+
+- Make sure you can access the dashboard, as described in the
+ [previous step of the tutorial]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}).
+
+- If you're using OIDC, users and roles must be configured.
+ See the [OIDC guide]({{% ref "/docs/v1.3/operations/oidc" %}}) for details on how to work with the built-in OIDC server.
+
+During [Kubernetes installation]({{% ref "/docs/v1.3/getting-started/install-kubernetes" %}}) for Cozystack,
+you should have obtained the administrative `kubeconfig` file for your new cluster.
+Keep it at hand — it may be useful later for troubleshooting.
+However, for day-to-day operations, you'll want to create user-specific credentials.
+
+
+## Introduction
+
+Tenants are the isolation mechanism in Cozystack.
+They are used to separate clients, teams, or environments.
+Each tenant has its own set of applications and one or more nested Kubernetes clusters.
+Tenant users have full access to their clusters.
+Optionally, you can configure quotas for each tenant to limit resource usage and prevent overconsumption.
+
+To learn more about tenants, read the [Core Concepts]({{% ref "/docs/v1.3/guides/concepts#tenant-system" %}}) guide.
+
+
+## Create a Tenant
+
+Tenants are created using the Cozystack application named `Tenant`.
+After installation, Cozystack includes a built-in tenant called `tenant-root`.
+This root tenant is reserved for platform administrators and should only be used to create child tenants.
+Although it’s technically possible to install applications in `tenant-root`,
+doing so is **not recommended** for production environments.
+
+{{< tabs name="create_tenant" >}}
+{{% tab name="Using Dashboard" %}}
+
+1. Open the dashboard as a `tenant-root` user.
+1. Ensure the current context is set to `tenant-root`.
+ Switch context and reload the page if needed.
+1. Navigate to the **Catalog** tab.
+1. Search for the **Tenant** application and open it.
+1. Review the documentation, then click the **Deploy** button to proceed to the parameters page.
+1. Fill in the tenant `name`.
+ It is the only parameter that can't be changed later.
+1. (Optional) Fill in the domain name in `host`.
+ This domain name must already exist.
+ Ensure that the tenant user has enough control over the domain to configure DNS records.
+   If left blank, the host defaults to the tenant name as a subdomain of the parent tenant's host.
+1. Select the checkboxes to install system-level apps: `etcd`, `monitoring`, `ingress`, and `seaweedfs`.
+ Tenant users will **not** be able to install or uninstall these apps — only administrators can.
+
+ The `etcd` option is required for nested Kubernetes.
+ Select it before installing the **Kubernetes** application in the tenant.
+ Only disable it if you're certain the tenant won’t use nested Kubernetes.
+1. By default, no resource quotas are set.
+ This means no usage limits.
+ You can define quotas to prevent resource overuse.
+1. Click **Deploy** to install the tenant application into the root tenant.
+
+{{% /tab %}}
+
+{{% tab name="Using kubectl" %}}
+
+Create a HelmRelease manifest for the tenant. You can use a manifest created via the dashboard as a starting point:
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2
+kind: HelmRelease
+metadata:
+ name: tenant-team1
+ namespace: tenant-root
+spec:
+ chart:
+ spec:
+ chart: tenant
+ reconcileStrategy: Revision
+ sourceRef:
+ kind: HelmRepository
+ name: cozystack-apps
+ namespace: cozy-public
+ version: 1.9.1
+ interval: 0s
+ values:
+ etcd: true
+ host: team1.example.org
+ ingress: true
+ monitoring: false
+ resourceQuotas: {}
+ seaweedfs: false
+```
+
+Apply the manifest:
+
+```bash
+# Use the kubeconfig for the root tenant
+export KUBECONFIG=./kubeconfig-tenant-root
+# Apply the manifest
+kubectl -n tenant-root apply -f hr-tenant-team1.yaml
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
+{{% alert color="info" %}}
+Cilium network policies in Cozystack v1.0+ always isolate sibling tenants from
+each other — there is no `isolated` field in either the Dashboard form or
+the HelmRelease values. Pods inside a tenant namespace also cannot reach
+`kube-apiserver` by default, or the tenant's own `etcd` when the tenant was
+created with `etcd: true`. To opt a pod into one of those paths, label it
+with `policy.cozystack.io/allow-to-apiserver: "true"` or
+`policy.cozystack.io/allow-to-etcd: "true"` respectively. See
+[Tenant `isolated` flag removed]({{% ref "/docs/v1.3/operations/upgrades#tenant-isolated-flag-removed" %}})
+in the upgrade notes for the full table and a worked example.
+{{% /alert %}}
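+
+For example, a pod inside the tenant namespace that needs to reach the parent cluster's `kube-apiserver` could be labeled as in the following minimal sketch; the pod name and image are placeholders, and the label key is the one listed in the note above:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: api-client                                   # placeholder name
+  namespace: tenant-team1
+  labels:
+    policy.cozystack.io/allow-to-apiserver: "true"
+spec:
+  containers:
+    - name: client
+      image: registry.example.org/api-client:latest   # placeholder image
+```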
+
+You can assist tenant users with installing database applications or nested Kubernetes clusters.
+As an administrator, you can switch context in the dashboard to access any tenant.
+Tenant users, however, can only access their own tenant and any child tenants.
+
+
+## Get Tenant Kubeconfig
+
+Tenant users need a kubeconfig file to access their Kubernetes cluster.
+The method to retrieve it depends on whether OIDC is enabled in your Cozystack setup.
+
+### With OIDC Enabled
+
+You can retrieve the kubeconfig file directly from the dashboard, as described in the
+[OIDC guide]({{% ref "/docs/v1.3/operations/oidc/enable_oidc#step-4-retrieve-kubeconfig" %}}).
+
+### Without OIDC
+
+As an administrator, you'll need to retrieve a service account token from the tenant namespace.
+The secret holding the token has the same name as the tenant.
+
+To retrieve the token for a tenant named `team1`, run:
+
+```bash
+kubectl -n tenant-team1 get secret tenant-team1 -o json | jq -r '.data.token | @base64d'
+```
+
+Next, insert this token into a kubeconfig template, and save the file as `kubeconfig-tenant-.yaml`.
+
+Make sure to also set the default namespace to the tenant name.
+Many GUI clients will display permission errors if the namespace is not explicitly defined.
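+
+A minimal kubeconfig template might look like the sketch below. The server address and certificate data are placeholders; copy the real values from your administrative kubeconfig and paste the token retrieved above:
+
+```yaml
+apiVersion: v1
+kind: Config
+clusters:
+  - name: cozystack
+    cluster:
+      server: https://192.168.100.10:6443        # placeholder: copy from the admin kubeconfig
+      certificate-authority-data: <base64-encoded-CA>
+contexts:
+  - name: tenant-team1
+    context:
+      cluster: cozystack
+      user: tenant-team1
+      namespace: tenant-team1                    # default namespace must be the tenant namespace
+current-context: tenant-team1
+users:
+  - name: tenant-team1
+    user:
+      token: <token-retrieved-above>
+```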
+
+The same token can also be used by the tenant user to log into the Cozystack dashboard if OIDC is disabled.
+
+### Get Nested Kubernetes Kubeconfig
+
+In general, administrators do **not** need to retrieve kubeconfig files for nested Kubernetes clusters.
+
+These clusters are installed by the tenant user, within their own tenant namespace.
+Tenant users have full control over their nested Kubernetes environments.
+
+To access a nested Kubernetes cluster, the tenant user can download the kubeconfig file
+directly from the corresponding application's page in the dashboard.
diff --git a/content/en/docs/v1.3/getting-started/deploy-app.md b/content/en/docs/v1.3/getting-started/deploy-app.md
new file mode 100644
index 00000000..e22fa8b9
--- /dev/null
+++ b/content/en/docs/v1.3/getting-started/deploy-app.md
@@ -0,0 +1,334 @@
+---
+title: "5. Deploy Managed Applications, VMs, and tenant Kubernetes cluster"
+linkTitle: "5. Deploy Applications"
+description: "Start using Cozystack: deploy a virtual machine, managed application, and a tenant Kubernetes cluster."
+weight: 50
+---
+
+## Objectives
+
+This guide will walk you through setting up the environment needed to run a typical web application with common service
+dependencies—PostgreSQL and Redis—on Cozystack, a Kubernetes-based PaaS framework.
+
+You’ll learn how to:
+
+- Deploy managed applications in your tenant: a PostgreSQL database and Redis cache.
+- Create a managed Kubernetes cluster, configure DNS, and access the cluster.
+- Deploy a containerized application to the new cluster.
+
+You don’t need in-depth Kubernetes knowledge to complete this tutorial—most steps are done through the Cozystack web interface.
+
+This is your fast track to a successful first deployment on Cozystack.
+Once you're done, you’ll have a working setup ready for your own applications—and a solid foundation to build upon and showcase to your team.
+
+## Prerequisites
+
+Before you begin:
+
+- **Cozystack cluster** should already be [installed and running]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}).
+ You won’t need to install or configure anything on the infrastructure level—this
+ guide assumes that part is already done, possibly by you or someone else on your team.
+- **Tenant and credentials:** You must have access to your tenant in Cozystack.
+ This can be either through a `kubeconfig` file or OIDC login for the dashboard.
+ If you don’t have access, ask your Ops team or refer to the guide on creating a tenant.
+- **DNS for dev/testing:** To access the deployed app over HTTPS you need a DNS record set up.
+ A wildcard DNS record is preferred, as it's more convenient to use.
+
+> 🛠️ **CLI is optional.**
+> You don’t need to use `kubectl` or `helm` unless you want to.
+> All major steps (like creating the Kubernetes cluster and managed services) can be done entirely in the Cozystack Dashboard.
+> The only point where you’ll need the CLI is when deploying the app to a Kubernetes cluster.
+
+## 1. Access the Cozystack Dashboard
+
+Open the Cozystack dashboard in your browser.
+The link usually looks like `https://dashboard.<root-host>`, where `<root-host>` is the domain of your root tenant.
+
+Depending on how authentication is configured in your Cozystack cluster, you'll see one of the following:
+
+- An **OIDC login screen** with a button that redirects you to Keycloak.
+- A **Token login screen**, where you manually paste a token from your kubeconfig file.
+
+Choose your login method below:
+
+{{< tabs name="access_dashboard" >}}
+{{% tab name="OIDC" %}}
+Click the `OIDC Login` button.
+This will take you to the Keycloak login page.
+
+Enter your credentials and click `Login`.
+If everything is configured correctly, you'll be logged in and redirected back to the dashboard.
+{{% /tab %}}
+
+{{% tab name="kubeconfig" %}}
+This login form doesn’t have a `username` field—only a `token` input.
+You can get this token from your kubeconfig file.
+
+1. Open your kubeconfig file and copy the token value (it’s a long string).
+ Make sure you copy it without extra spaces or line breaks.
+1. Paste it into the form and click `Submit`.
+
+{{% /tab %}}
+{{< /tabs >}}
+
+Once you're logged in, the dashboard will automatically show your tenant context.
+
+You may see system-level applications like `ingress` or `monitoring` already running—these are managed by your cluster admin.
+As a tenant user, you can’t install or modify them, but your own apps will run alongside them in your isolated tenant environment.
+
+## 2. Create a Managed PostgreSQL
+
+Cozystack lets you provision managed databases directly on the hardware layer for maximum performance.
+Each database is created inside your tenant namespace and is automatically accessible from your nested Kubernetes cluster.
+
+If you're familiar with services like AWS RDS or GCP Cloud SQL, the experience is similar—
+except it's fully integrated with Cozystack and isolated within your own tenant.
+
+> Throughout this tutorial, you’ll have the option to use either the Cozystack dashboard (UI) or `kubectl`:
+>
+> - **Cozystack Dashboard** offers the quickest and most straightforward experience—recommended if this is your first time using Cozystack.
+> - **`kubectl`** provides in-depth visibility into how managed services are deployed behind the scenes.
+>
+> While neither approach reflects how services are typically deployed in production,
+> both are well-suited for learning and experimentation—making them ideal for this tutorial.
+
+### 2.1 Deploy PostgreSQL
+
+{{< tabs name="create_database" >}}
+{{% tab name="Cozystack Dashboard" %}}
+
+1. Open the Cozystack dashboard and go to the **Catalog** tab.
+1. Search for the **Postgres** application badge and click it to open its built-in documentation.
+1. Click the **Deploy** button to open the deployment configuration page.
+1. Fill in `instaphoto-postgres` in the **`name`** field. Application name must be unique within your tenant and **cannot be changed after deployment**.
+1. Review the other parameters. They come pre-filled with sensible defaults, so you can keep them unchanged.
+ - Try using both the **Visual editor** and the **YAML editor**. You can switch between editors at any time.
+ - The YAML editor includes inline comments to guide you.
+ - Don’t worry if you’re unsure about some settings. Most of them can be updated later.
+1. Click **Deploy** again. The database will be installed in your tenant’s namespace.
+
+
+{{% /tab %}}
+
+{{% tab name="kubectl" %}}
+Create a manifest `postgres.yaml` with the following content:
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2
+kind: HelmRelease
+metadata:
+ name: postgres-instaphoto-dev
+ namespace: tenant-team1
+spec:
+ chart:
+ spec:
+ chart: postgres
+ reconcileStrategy: Revision
+ sourceRef:
+ kind: HelmRepository
+ name: cozystack-apps
+ namespace: cozy-public
+ version: 0.10.0
+ interval: 0s
+ values:
+ databases:
+ myapp:
+ roles:
+ admin:
+ - user1
+ external: true
+ replicas: 2
+ resourcesPreset: nano
+ size: 5Gi
+ users:
+ user1:
+ password: strongpassword
+```
+
+Apply the manifest using:
+
+```bash
+kubectl apply -f postgres.yaml
+```
+
+> 💡 Tip: You can generate a similar manifest by deploying the Postgres app through the dashboard first.
+> Then, export the configuration and edit it as needed.
+> It's useful if you’re trying to reproduce or automate the setup.
+
+{{% /tab %}}
+{{< /tabs >}}
+
+
+### 2.2 Get the Connection Credentials
+
+Navigate to the **Applications** tab, then find and open the `instaphoto-postgres` application.
+Once the application is installed and ready, you’ll find connection details in the **Application Resources** section of the dashboard.
+
+- The **Secrets** tab contains the database password for each user you defined.
+- The **Services** tab lists the internal service endpoints:
+ - Use `postgres--ro` to connect to the **read-only replica**.
+ - Use `postgres--rw` to connect to the **primary (read-write)** instance.
+
+These service names are resolvable from within the nested Kubernetes cluster and can be used in your app’s configuration.
+
+If you need to connect to the database from outside the cluster, you can expose it externally by setting the `external` parameter to `true`.
+This will create a service named `postgres--external-write` with a public IP address.
+
+> ⚠️ **Only enable external access if absolutely necessary.** Exposing databases to the internet introduces security risks and should be avoided in most cases.
+
+## 3. Create a Cache Service
+
+From this point on, you'll use your tenant credentials to access the platform.
+Use the tenant's kubeconfig for `kubectl`, and the token from it to access the dashboard.
+
+{{< tabs name="create_redis" >}}
+{{% tab name="Cozystack Dashboard" %}}
+
+1. Open the dashboard.
+1. Follow the same steps as with PostgreSQL, but for Redis application.
+1. The Redis application has an `authEnabled` parameter, which will create a default user. That’s sufficient for our application.
+1. Once you're done configuring the parameters, click the **Deploy** button. The application will be installed in your tenant.
+
+{{% /tab %}}
+{{% tab name="kubectl" %}}
+
+Create a manifest file named `redis.yaml` with the following content:
+
+```yaml
+apiVersion: helm.toolkit.fluxcd.io/v2
+kind: HelmRelease
+metadata:
+ name: redis-instaphoto
+ namespace: tenant-team1
+spec:
+ chart:
+ spec:
+ chart: redis
+ reconcileStrategy: Revision
+ sourceRef:
+ kind: HelmRepository
+ name: cozystack-apps
+ namespace: cozy-public
+ version: 0.6.0
+ interval: 0s
+ values:
+ authEnabled: true
+ external: false
+ replicas: 2
+ resources: {}
+ resourcesPreset: nano
+ size: 1Gi
+```
+
+Then apply it:
+
+```bash
+kubectl apply -f redis.yaml
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+After a short time, the Redis application will be installed in the `team1` tenant.
+The generated password can be found in the dashboard.
+
+{{< tabs name="redis_password" >}}
+{{% tab name="Cozystack Dashboard" %}}
+
+1. Open the dashboard as the `tenant-team1` user.
+1. Click on the **Applications** tab in the left menu.
+1. Find the `redis-instaphoto` application and click on it.
+1. The password is shown in the **Secrets** section, with buttons to copy or reveal it.
+
+{{% /tab %}}
+{{% tab name="kubectl" %}}
+
+```bash
+# Use the tenant kubeconfig
+export KUBECONFIG=./kubeconfig-tenant-team1
+# Read the secret; the password is stored base64-encoded in its data
+kubectl -n tenant-team1 get secret redis-instaphoto-auth -o yaml
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
+## 4. Deploy a Nested Kubernetes Cluster
+
+The nested Kubernetes cluster is created in the same way as the database and cache.
+However, there are a few important additional points to consider:
+
+- **`etcd` must be enabled in the tenant**
+ The `etcd` service is required to run a nested Kubernetes cluster and can only be enabled by a Cozystack administrator.
+- **Verify your quota.**
+ Ensure your tenant has enough CPU, RAM, and disk resources to create and run a cluster.
+- **Choose an appropriate instance preset.**
+ Avoid selecting presets that are too small. A Kubernetes node consumes approximately 2.5 GB of RAM just for system components.
+ For example, if you select a 4 GB RAM preset, only about 1.5 GB will be available for your actual workloads.
+ 4 GB is sufficient for testing, but in general, it’s better to provision **fewer nodes with more RAM** than many nodes with minimal RAM.
+- **Enable `ingress` and `cert-manager` if needed.**
+ If you're deploying web applications, you will likely need ingress and certificate management.
+ Both can be enabled with a checkbox when configuring the nested Kubernetes application in Cozystack.
+
+Once the nested Kubernetes cluster is ready, you'll find its kubeconfig files in the **Secrets** tab of the application page in the dashboard.
+Several options are provided:
+
+- **`admin.conf`** — The standard kubeconfig for accessing your new cluster.
+ You can create additional Kubernetes users using this configuration.
+- **`admin.svc`** — Same token as `admin.conf`, but with the API server address set to the internal service name.
+ Use it for applications running inside the cluster that need API access.
+- **`super-admin.conf`** — Similar to `admin.conf`, but with extended administrative permissions.
+ Intended for troubleshooting and cluster maintenance tasks.
+- **`super-admin.svc`** — Same as `super-admin.conf`, but pointing to the internal API server address.
+
+## 5. Update DNS and Access the Cluster
+
+After deployment, the nested Kubernetes cluster will automatically claim one of the floating IP addresses from the main cluster.
+
+You can find the assigned DNS name and IP address in one of two ways:
+- Open the application page for the cluster in the dashboard.
+- Check the ingress status using `kubectl`, as shown in the example below.
+
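+A hedged example of the `kubectl` route; depending on your setup, the relevant resources live either in your tenant namespace (use the tenant kubeconfig) or inside the nested cluster (use its `admin.conf`):
+
+```bash
+# Services of type LoadBalancer show the external IP claimed from the main cluster
+kubectl get svc --all-namespaces | grep LoadBalancer
+# Ingress resources show the assigned host names and addresses
+kubectl get ingress --all-namespaces
+```
+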
+Once you have the correct DNS name and IP address, update your DNS settings to point your domain or subdomain to the assigned IP.
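+
+For illustration, the resulting DNS records could look like this (domain and IP are placeholders):
+
+```text
+team1.example.org.     IN  A  192.168.100.201
+*.team1.example.org.   IN  A  192.168.100.201
+```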
+
+After the DNS records are updated and propagated, you can access your nested Kubernetes cluster using the downloaded kubeconfig file.
+
+Here’s an example of how to configure and use it:
+
+1. Save the contents of `admin.conf` in a file, for example, `~/.kube/kubeconfig-team1.example.org`:
+
+ ```console
+ $ cat ~/.kube/kubeconfig-team1.example.org
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: LS0tL
+ ...
+ ```
+
+1. Set the `KUBECONFIG` environment variable to point to this file and check that the nodes are ready:
+
+ ```console
+ $ export KUBECONFIG=~/.kube/kubeconfig-team1.example.org
+ $ kubectl get nodes
+ NAME STATUS ROLES AGE VERSION
+ kubernetes-dev-md0-vn8dh-jjbm9 Ready ingress-nginx 29m v1.30.11
+ kubernetes-dev-md0-vn8dh-xhsvl Ready ingress-nginx 25m v1.30.11
+ ```
+
+## 6. Deploy an Application with Helm
+
+From this point, working with your cluster is the same as working with any standard Kubernetes environment.
+
+You can use `kubectl`, `helm`, or your CI/CD pipeline to deploy Kubernetes-native applications.
+
+To deploy your application:
+
+1. Update your Helm chart values to include the correct credentials for the database and cache (see the sketch after this list).
+1. Run a standard Helm deployment command, for example:
+
+ ```bash
+   helm upgrade --install <release-name> <chart> -f values.yaml
+ ```
+
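+A minimal `values.yaml` sketch, assuming your chart reads connection settings from keys like these (all names are illustrative; take the real service and secret names from each application's resources page):
+
+```yaml
+database:
+  host: postgres-instaphoto-rw               # hypothetical read-write service name
+  readOnlyHost: postgres-instaphoto-ro       # hypothetical read-only replica service name
+  existingSecret: postgres-instaphoto-credentials
+cache:
+  host: redis-instaphoto                     # hypothetical Redis service name
+  existingSecret: redis-instaphoto-auth
+```
+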
+Services such as the database and cache do not need fully qualified DNS names:
+they are accessible within the same namespace by their short service names.
\ No newline at end of file
diff --git a/content/en/docs/v1.3/getting-started/install-cozystack.md b/content/en/docs/v1.3/getting-started/install-cozystack.md
new file mode 100644
index 00000000..7ed5f761
--- /dev/null
+++ b/content/en/docs/v1.3/getting-started/install-cozystack.md
@@ -0,0 +1,706 @@
+---
+title: "3. Install and Configure Cozystack"
+linkTitle: "3. Install Cozystack"
+description: "Install Cozystack, get administrative access, perform basic configuration, and enable the UI dashboard."
+weight: 20
+---
+
+## Objectives
+
+{{% alert color="info" %}}
+This tutorial covers installing Cozystack as a **ready-to-use platform**.
+If you want to build your own platform by installing only specific components,
+see the [BYOP (Build Your Own Platform) guide]({{% ref "/docs/v1.3/install/cozystack/kubernetes-distribution" %}}).
+{{% /alert %}}
+
+In this step of the tutorial, we'll install Cozystack on top of a [Kubernetes cluster, prepared in the previous step]({{% ref "./install-kubernetes" %}}).
+
+The tutorial will guide you through the following stages:
+
+1. Install the Cozystack operator
+1. Prepare a Cozystack configuration file and apply it
+1. Configure storage
+1. Configure networking
+1. Deploy etcd, ingress and monitoring stack in the root tenant
+1. Finalize deployment and access Cozystack dashboard
+
+## 1. Install the Cozystack Operator
+
+Install the Cozystack operator using the Helm chart from the OCI registry.
+The operator manages all Cozystack components and handles the Platform Package lifecycle.
+
+```bash
+helm upgrade --install cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --version X.Y.Z \
+ --namespace cozy-system \
+ --create-namespace
+```
+
+Replace `X.Y.Z` with the desired Cozystack version.
+You can find available versions on the [Cozystack releases page](https://github.com/cozystack/cozystack/releases).
+
+{{% alert color="info" %}}
+**If the install aborts because `cozy-system` already exists.** Helm refuses
+to take over a namespace it did not create and prints an `invalid ownership
+metadata` error (or `namespaces "cozy-system" already exists`, depending on
+the Helm version) when `cozy-system` was left over from an earlier aborted
+install or was created manually for this purpose.
+
+If the namespace is **not** managed by another tool (Terraform, Argo CD, a
+different Helm release, etc.), rerun the command with `--take-ownership`
+(requires Helm 3.17+) to let Helm adopt it:
+
+```bash
+helm upgrade --install cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --version X.Y.Z \
+ --namespace cozy-system \
+ --create-namespace \
+ --take-ownership
+```
+
+Do not use `--take-ownership` if `cozy-system` is owned by another system —
+Helm will silently become the new owner and subsequent upgrades or an
+uninstall of the Cozystack release may mutate or delete the namespace (and
+anything else the flag adopted) against the wishes of that other system.
+{{% /alert %}}
+
+## 2. Prepare and Apply the Platform Package
+
+### 2.1. Prepare a Configuration File
+
+Now that the operator is running, we will prepare a configuration file for it.
+Take the example below and save it to a file named **cozystack-platform.yaml**:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://api.example.org:443"
+ exposedServices:
+ - dashboard
+ - api
+ networking:
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+```
+
+Action points:
+
+1. Replace `example.org` in `publishing.host` and `publishing.apiServerEndpoint` with a routable fully-qualified domain name (FQDN) that you control.
+ If you only have a public IP, but no FQDN, use [nip.io](https://nip.io/) with dash notation.
+2. Use the same values for `networking.*` as in the previous step, where you bootstrapped a Kubernetes cluster with Talm or `talosctl`.
+   The settings provided in the example are sane defaults that work in most cases.
+
+The remaining values in this config don't need to be changed for this tutorial.
+Still, let's review what each of them means:
+
+- `metadata.name` must be `cozystack.cozystack-platform` to match the PackageSource created by the installer.
+- `publishing.host` is used as the main domain for all services created under Cozystack, such as the dashboard, Grafana, Keycloak, etc.
+- `publishing.apiServerEndpoint` is the Kubernetes API server endpoint of the cluster. It's used for generating kubeconfig files for your users. Use a routable address rather than a local one.
+- `spec.variant: "isp-full"` means that we're using the most complete set of Cozystack components.
+ Learn more about variants in the [Cozystack Variants reference]({{% ref "/docs/v1.3/operations/configuration/variants" %}}).
+- `publishing.exposedServices` lists the services to expose to users, in this case the dashboard (UI) and API.
+- `networking.*` are internal networking configurations for the underlying Kubernetes cluster:
+ - `networking.podCIDR` — CIDR range from which Kube-OVN allocates pod IPs. Must not overlap with
+ any network your nodes already route.
+ - `networking.podGateway` — gateway address Kube-OVN assigns to the default pod subnet. Use the
+ `.1` address of the `podCIDR` network (for example, `10.244.0.1` for `10.244.0.0/16`).
+ - `networking.serviceCIDR` — CIDR range for `ClusterIP` Services. This **must** match the
+ `cluster.network.serviceSubnets` value you used when bootstrapping the Kubernetes cluster:
+ the value is baked into the kube-apiserver at bootstrap time and cannot be changed without
+ rebuilding the cluster, so a mismatch here silently breaks DNS and service routing.
+ - `networking.joinCIDR` — CIDR range for the Kube-OVN *join* subnet, the internal network that carries
+ traffic between cluster nodes and pods. The default `100.64.0.0/16` is part of the
+ [RFC 6598](https://datatracker.ietf.org/doc/html/rfc6598) shared address space (`100.64.0.0/10`)
+ that is reserved for this kind of internal-only use. Change it only if it overlaps with a network
+ your nodes already route; see the
+ [Kube-OVN join subnet reference](https://kubeovn.github.io/docs/stable/en/guide/subnet/#join-subnet)
+ for background on what this subnet does.
+
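+As a quick sanity check, the ClusterIP of the default `kubernetes` Service is always allocated from the running cluster's service CIDR, so you can compare it against your `networking.serviceCIDR` value:
+
+```bash
+# Should print an address inside the CIDR you set in networking.serviceCIDR
+kubectl get svc kubernetes -n default -o jsonpath='{.spec.clusterIP}{"\n"}'
+```
+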
+You can learn more about this configuration file in the [Platform Package reference]({{% ref "/docs/v1.3/operations/configuration/platform-package" %}}).
+
+{{% alert color="info" %}}
+Cozystack gathers anonymous usage statistics by default. Learn more about what data is collected and how to opt out in the [Telemetry Documentation]({{% ref "/docs/v1.3/operations/configuration/telemetry" %}}).
+{{% /alert %}}
+
+
+### 2.2. Apply the Platform Package
+
+Apply the configuration file:
+
+```bash
+kubectl apply -f cozystack-platform.yaml
+```
+
+As the installation goes on, you can track the logs of the operator:
+
+```bash
+kubectl logs -n cozy-system deploy/cozystack-operator -f
+```
+
+
+### 2.3. Check Installation Status
+
+Wait for a while, then check the status of installation:
+
+```bash
+kubectl get hr -A
+```
+
+Wait and check again until you see `True` on each line, as in this example:
+
+```console
+NAMESPACE NAME AGE READY STATUS
+cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
+cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
+cozy-cilium cilium 4m1s True Release reconciliation succeeded
+cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
+cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
+cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
+cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
+cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
+cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
+cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
+cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
+cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
+cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
+cozy-linstor linstor 4m1s True Release reconciliation succeeded
+cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
+cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
+cozy-metallb metallb 4m1s True Release reconciliation succeeded
+cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
+cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
+cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
+cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
+cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
+cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
+tenant-root tenant-root 4m1s True Release reconciliation succeeded
+```
+
+The list of components in your installation may be different from the example above,
+as it depends on your configuration and Cozystack version.
+
+Once every component shows `READY: True`, we're ready to proceed by configuring subsystems.
+
+
+## 3. Configure Storage
+
+Kubernetes needs a storage subsystem to provide persistent volumes to applications, but it doesn't include one of its own.
+Cozystack provides [LINSTOR](https://github.com/LINBIT/linstor-server) as a storage subsystem.
+
+In the following steps, we'll access the LINSTOR interface, create storage pools, and define storage classes.
+
+
+### 3.1. Check Storage Devices
+
+1. Set up an alias to access LINSTOR:
+
+ ```bash
+ alias linstor='kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor'
+ ```
+
+1. List your nodes and check their readiness:
+
+ ```bash
+ linstor node list
+ ```
+
+ Example output shows node names and state:
+
+ ```console
+ +-------------------------------------------------------+
+ | Node | NodeType | Addresses | State |
+ |=======================================================|
+ | srv1 | SATELLITE | 192.168.100.11:3367 (SSL) | Online |
+ | srv2 | SATELLITE | 192.168.100.12:3367 (SSL) | Online |
+ | srv3 | SATELLITE | 192.168.100.13:3367 (SSL) | Online |
+ +-------------------------------------------------------+
+ ```
+
+1. List available empty devices:
+
+ ```bash
+ linstor physical-storage list
+ ```
+
+ Example output shows the same node names:
+
+ ```console
+ +--------------------------------------------+
+ | Size | Rotational | Nodes |
+ |============================================|
+ | 107374182400 | True | srv3[/dev/sdb] |
+ | | | srv1[/dev/sdb] |
+ | | | srv2[/dev/sdb] |
+ +--------------------------------------------+
+ ```
+
+### 3.2. Create Storage Pools
+
+1. Create storage pools using ZFS:
+
+ ```bash
+ linstor ps cdp zfs srv1 /dev/sdb --pool-name data --storage-pool data
+ linstor ps cdp zfs srv2 /dev/sdb --pool-name data --storage-pool data
+ linstor ps cdp zfs srv3 /dev/sdb --pool-name data --storage-pool data
+ ```
+
+ It is [recommended](https://github.com/LINBIT/linstor-server/issues/463#issuecomment-3401472020)
+ to set `failmode=continue` on ZFS storage pools to allow DRBD to handle disk failures instead of ZFS.
+
+ ```bash
+ kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv1 -- zpool set failmode=continue data
+ kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv2 -- zpool set failmode=continue data
+ kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv3 -- zpool set failmode=continue data
+ ```
+
+1. Check the results by listing the storage pools:
+
+ ```bash
+ linstor sp l
+ ```
+
+ Example output:
+
+ ```console
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ | StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
+ |=====================================================================================================================================|
+ | DfltDisklessStorPool | srv1 | DISKLESS | | | | False | Ok | srv1;DfltDisklessStorPool |
+ | DfltDisklessStorPool | srv2 | DISKLESS | | | | False | Ok | srv2;DfltDisklessStorPool |
+ | DfltDisklessStorPool | srv3 | DISKLESS | | | | False | Ok | srv3;DfltDisklessStorPool |
+ | data | srv1 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv1;data |
+ | data | srv2 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv2;data |
+ | data | srv3 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv3;data |
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ ```
+
+### 3.3. Create Storage Classes
+
+Finally, we can create a couple of storage classes, one of which will be the default class.
+
+
+1. Create a file with storage class definitions.
+ Below is a sane default example providing two classes: `local` (default) and `replicated`.
+
+ **storageclasses.yaml:**
+
+ ```yaml
+ ---
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: local
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+ provisioner: linstor.csi.linbit.com
+ parameters:
+ linstor.csi.linbit.com/storagePool: "data"
+ linstor.csi.linbit.com/layerList: "storage"
+ linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
+ volumeBindingMode: WaitForFirstConsumer
+ allowVolumeExpansion: true
+ ---
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: replicated
+ provisioner: linstor.csi.linbit.com
+ parameters:
+ linstor.csi.linbit.com/storagePool: "data"
+ linstor.csi.linbit.com/autoPlace: "3"
+ linstor.csi.linbit.com/layerList: "drbd storage"
+ linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
+ property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
+ property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
+ property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
+ property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
+ volumeBindingMode: Immediate
+ allowVolumeExpansion: true
+ ```
+
+1. Apply the storage class configuration:
+
+ ```bash
+ kubectl apply -f storageclasses.yaml
+ ```
+
+1. Check that the storage classes were successfully created:
+
+ ```bash
+ kubectl get storageclasses
+ ```
+
+ Example output:
+
+ ```console
+ NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+ local (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
+ replicated linstor.csi.linbit.com Delete Immediate true 11m
+ ```
+
+
+## 4. Configure Networking
+
+Next, we will configure how the Cozystack cluster can be accessed.
+This step has two options depending on your available infrastructure:
+
+- For your own bare metal or self-hosted VMs, choose the MetalLB option.
+ MetalLB is Cozystack's default load balancer.
+- For VMs and dedicated servers from cloud providers, choose the public IP setup.
+ [Most cloud providers don't support MetalLB](https://metallb.universe.tf/installation/clouds/).
+
+ Check out the [provider-specific installation]({{% ref "/docs/v1.3/install/providers" %}}) section.
+ It may have instructions for your provider, which you can use to deploy a production-ready cluster.
+
+### 4.a MetalLB Setup
+
+Cozystack uses three types of IP addresses:
+
+- Node IPs: persistent and valid only within the cluster.
+- Virtual floating IP: used to access one of the nodes in the cluster and valid only within the cluster.
+- External access IPs: used by LoadBalancers to expose services outside the cluster.
+
+Services with external IPs can be exposed in two modes: L2 and BGP.
+L2 mode is simpler, but it requires all nodes to belong to a single L2 domain and does not provide true load balancing.
+BGP mode requires a more complex setup (you need BGP peers ready to accept announcements), but it enables proper load balancing and offers more options for choosing IP address ranges.
+
+Select a range of unused IPs for the services; here we will use the `192.168.100.200-192.168.100.250` range.
+If you use L2 mode, these IPs must either be from the same network as the nodes or have all the necessary routes to them.
+
+For BGP mode, you will also need the BGP peer IP addresses and the local and remote AS numbers. Here we will use `192.168.20.254` as the peer IP, with AS number 65000 as local and 65001 as remote.
+
+Create and apply a file describing an address pool.
+
+**metallb-ip-address-pool.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ addresses:
+ # used to expose services outside the cluster
+ - 192.168.100.200-192.168.100.250
+ autoAssign: true
+ avoidBuggyIPs: false
+```
+
+```bash
+kubectl apply -f metallb-ip-address-pool.yml
+```
+
+Create and apply resources needed for an L2 or a BGP advertisement.
+
+{{< tabs name="metallb_announce" >}}
+{{% tab name="L2 mode" %}}
+L2Advertisement uses the name of the IPAddressPool resource we created previously.
+
+**metallb-l2-advertisement.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ ipAddressPools:
+ - cozystack
+```
+
+
+Apply changes.
+
+```bash
+kubectl apply -f metallb-l2-advertisement.yml
+```
+{{% /tab %}}
+{{% tab name="BGP mode" %}}
+First, create a separate BGPPeer resource for **each** peer.
+
+**metallb-bgp-peer.yml**
+```yaml
+apiVersion: metallb.io/v1beta2
+kind: BGPPeer
+metadata:
+ name: peer1
+ namespace: cozy-metallb
+spec:
+ myASN: 65000
+ peerASN: 65001
+ peerAddress: 192.168.20.254
+```
+
+
+Next, create a single BGPAdvertisement resource.
+
+**metallb-bgp-advertisement.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: BGPAdvertisement
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ ipAddressPools:
+ - cozystack
+```
+
+Apply changes.
+
+```bash
+kubectl apply -f metallb-bgp-peer.yml
+kubectl apply -f metallb-bgp-advertisement.yml
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+
+Now that MetalLB is configured, enable `ingress` in the `tenant-root`:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '
+{"spec":{
+ "ingress": true
+}}'
+```
+
+To confirm successful configuration, check the HelmReleases `ingress` and `ingress-nginx-system`:
+
+```bash
+kubectl -n tenant-root get hr ingress ingress-nginx-system
+```
+
+Example of correct output:
+```console
+NAME AGE READY STATUS
+ingress 47m True Helm upgrade succeeded for release tenant-root/ingress.v3 with chart ingress@1.8.0
+ingress-nginx-system 47m True Helm upgrade succeeded for release tenant-root/ingress-nginx-system.v2 with chart cozy-ingress-nginx@0.35.1
+```
+
+Next, check the state of service `root-ingress-controller`:
+
+```bash
+kubectl -n tenant-root get svc root-ingress-controller
+```
+
+The service should be deployed as `TYPE: LoadBalancer` and have the correct external IP:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller LoadBalancer 10.96.91.83 192.168.100.200 80/TCP,443/TCP 48m
+```
+
+### 4.b. Node Public IP Setup
+
+If your cloud provider does not support MetalLB, you can expose the ingress controller using external IPs on your nodes.
+
+If public IPs are attached directly to the nodes, specify those IPs.
+If public IPs are provided via 1:1 NAT, as some clouds do, use the addresses of the **external** network interfaces.
+
+Here we will use `192.168.100.11`, `192.168.100.12`, and `192.168.100.13`.
+
+First, patch the Platform Package with the external IPs:
+
+```bash
+kubectl patch packages.cozystack.io cozystack.cozystack-platform --type=merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "publishing": {
+ "externalIPs": [
+ "192.168.100.11",
+ "192.168.100.12",
+ "192.168.100.13"
+ ]
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+Next, enable `ingress` for the root tenant:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '{
+ "spec":{
+ "ingress": true
+ }
+}'
+```
+
+Finally, add external IPs to the `externalIPs` list in the Ingress configuration:
+
+```bash
+kubectl patch -n tenant-root ingresses.apps.cozystack.io ingress --type=merge -p '{
+ "spec":{
+ "externalIPs": [
+ "192.168.100.11",
+ "192.168.100.12",
+ "192.168.100.13"
+ ]
+ }
+}'
+```
+
+After that, your Ingress will be available on the specified IPs.
+Check it in the following way:
+
+```bash
+kubectl get svc -n tenant-root root-ingress-controller
+```
+
+The service should be deployed as `TYPE: ClusterIP` and have the full range of external IPs:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller ClusterIP 10.96.91.83 192.168.100.11,192.168.100.12,192.168.100.13 80/TCP,443/TCP 48m
+```
+
+## 5. Finalize Installation
+
+### 5.1. Set Up Root Tenant Services
+
+Enable `etcd` and `monitoring` for the root tenant:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '
+{"spec":{
+ "monitoring": true,
+ "etcd": true
+}}'
+```
+
+### 5.2. Check the Cluster State and Composition
+
+Check the provisioned persistent volumes:
+
+```bash
+kubectl get pvc -n tenant-root
+```
+
+Example output:
+
+```console
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+data-etcd-0 Bound pvc-4cbd29cc-a29f-453d-b412-451647cd04bf 10Gi RWO local 2m10s
+data-etcd-1 Bound pvc-1579f95a-a69d-4a26-bcc2-b15ccdbede0d 10Gi RWO local 115s
+data-etcd-2 Bound pvc-907009e5-88bf-4d18-91e7-b56b0dbfb97e 10Gi RWO local 91s
+grafana-db-1 Bound pvc-7b3f4e23-228a-46fd-b820-d033ef4679af 10Gi RWO local 2m41s
+grafana-db-2 Bound pvc-ac9b72a4-f40e-47e8-ad24-f50d843b55e4 10Gi RWO local 113s
+vmselect-cachedir-vmselect-longterm-0 Bound pvc-622fa398-2104-459f-8744-565eee0a13f1 2Gi RWO local 2m21s
+vmselect-cachedir-vmselect-longterm-1 Bound pvc-fc9349f5-02b2-4e25-8bef-6cbc5cc6d690 2Gi RWO local 2m21s
+vmselect-cachedir-vmselect-shortterm-0 Bound pvc-7acc7ff6-6b9b-4676-bd1f-6867ea7165e2 2Gi RWO local 2m41s
+vmselect-cachedir-vmselect-shortterm-1 Bound pvc-e514f12b-f1f6-40ff-9838-a6bda3580eb7 2Gi RWO local 2m40s
+vmstorage-db-vmstorage-longterm-0 Bound pvc-e8ac7fc3-df0d-4692-aebf-9f66f72f9fef 10Gi RWO local 2m21s
+vmstorage-db-vmstorage-longterm-1 Bound pvc-68b5ceaf-3ed1-4e5a-9568-6b95911c7c3a 10Gi RWO local 2m21s
+vmstorage-db-vmstorage-shortterm-0 Bound pvc-cee3a2a4-5680-4880-bc2a-85c14dba9380 10Gi RWO local 2m41s
+vmstorage-db-vmstorage-shortterm-1 Bound pvc-d55c235d-cada-4c4a-8299-e5fc3f161789 10Gi RWO local 2m41s
+```
+
+Check that all pods are running:
+
+```bash
+kubectl get pod -n tenant-root
+```
+
+Example output:
+
+```console
+NAME READY STATUS RESTARTS AGE
+etcd-0 1/1 Running 0 2m1s
+etcd-1 1/1 Running 0 106s
+etcd-2 1/1 Running 0 82s
+grafana-db-1 1/1 Running 0 119s
+grafana-db-2 1/1 Running 0 13s
+grafana-deployment-74b5656d6-5dcvn 1/1 Running 0 90s
+grafana-deployment-74b5656d6-q5589 1/1 Running 1 (105s ago) 111s
+root-ingress-controller-6ccf55bc6d-pg79l 2/2 Running 0 2m27s
+root-ingress-controller-6ccf55bc6d-xbs6x 2/2 Running 0 2m29s
+root-ingress-defaultbackend-686bcbbd6c-5zbvp 1/1 Running 0 2m29s
+vmalert-vmalert-644986d5c-7hvwk 2/2 Running 0 2m30s
+vmalertmanager-alertmanager-0 2/2 Running 0 2m32s
+vmalertmanager-alertmanager-1 2/2 Running 0 2m31s
+vminsert-longterm-75789465f-hc6cz 1/1 Running 0 2m10s
+vminsert-longterm-75789465f-m2v4t 1/1 Running 0 2m12s
+vminsert-shortterm-78456f8fd9-wlwww 1/1 Running 0 2m29s
+vminsert-shortterm-78456f8fd9-xg7cw 1/1 Running 0 2m28s
+vmselect-longterm-0 1/1 Running 0 2m12s
+vmselect-longterm-1 1/1 Running 0 2m12s
+vmselect-shortterm-0 1/1 Running 0 2m31s
+vmselect-shortterm-1 1/1 Running 0 2m30s
+vmstorage-longterm-0 1/1 Running 0 2m12s
+vmstorage-longterm-1 1/1 Running 0 2m12s
+vmstorage-shortterm-0 1/1 Running 0 2m32s
+vmstorage-shortterm-1 1/1 Running 0 2m31s
+```
+
+Get the public IP of ingress controller:
+
+```bash
+kubectl get svc -n tenant-root root-ingress-controller
+```
+
+Example output:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller LoadBalancer 10.96.16.141 192.168.100.200 80:31632/TCP,443:30113/TCP 3m33s
+```
+
+### 5.3. Access the Cozystack Dashboard
+
+If you included `dashboard` in the `publishing.exposedServices` list of your Platform Package (as shown in step 2), the Cozystack Dashboard is already available.
+
+If the initial configuration did not include it, patch the Platform Package:
+
+```bash
+kubectl patch packages.cozystack.io cozystack.cozystack-platform --type=json \
+ -p '[{"op": "add", "path": "/spec/components/platform/values/publishing/exposedServices/-", "value": "dashboard"}]'
+```
+
+Open `dashboard.example.org` to access the system dashboard, where `example.org` is the domain you specified in `publishing.host`.
+There you will see a login window that expects an authentication token.
+
+Get the authentication token for `tenant-root`:
+
+```bash
+kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'
+```
+
+Log in using the token.
+Now you can use the dashboard as an administrator.
+
+Further on, you will be able to:
+
+- Set up OIDC to authenticate with it instead of tokens.
+- Create user tenants and grant users access to them via tokens or OIDC.
+
+### 5.4. Access Metrics in Grafana
+
+Open `grafana.example.org` to access the system monitoring, where `example.org` is the domain you specified in `publishing.host`.
+In this example, `grafana.example.org` resolves to 192.168.100.200.
+
+- Login: `admin`
+- Password: retrieve it with the following command:
+
+ ```bash
+ kubectl get secret -n tenant-root grafana-admin-password -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
+ ```
+
+## Next Step
+
+Continue the Cozystack tutorial by [creating a user tenant]({{% ref "/docs/v1.3/getting-started/create-tenant" %}}).
diff --git a/content/en/docs/v1.3/getting-started/install-kubernetes.md b/content/en/docs/v1.3/getting-started/install-kubernetes.md
new file mode 100644
index 00000000..ab9733f6
--- /dev/null
+++ b/content/en/docs/v1.3/getting-started/install-kubernetes.md
@@ -0,0 +1,31 @@
+---
+title: "2. Install and Bootstrap a Kubernetes cluster"
+linkTitle: "2. Install Kubernetes"
+description: "Use Talm CLI to bootstrap a Kubernetes cluster, ready for Cozystack."
+weight: 15
+---
+
+## Objectives
+
+We start this step of the tutorial, having [three nodes with Talos Linux installed on them]({{% ref "/docs/v1.3/getting-started/install-talos" %}}).
+
+As a result of this step, we will have a Kubernetes cluster installed, configured, and ready to install Cozystack.
+We will also have a `kubeconfig` for this cluster, and will have performed basic checks on the cluster.
+
+## Installing Kubernetes
+
+Install and bootstrap a Kubernetes cluster using [Talm]({{% ref "/docs/v1.3/install/kubernetes/talm" %}}), a declarative CLI configuration tool with ready configuration presets for Cozystack.
+
+{{% alert color="info" %}}
+This part of the tutorial is being reworked.
+It will include simplified instructions for Talm installation, without all the extra options and corner cases covered in the main Talm guide.
+{{% /alert %}}
+
+
+## Next Step
+
+Continue the Cozystack tutorial by [installing and configuring Cozystack]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}).
+
+Extra tasks:
+
+- Check out [github.com/cozystack/talm](https://github.com/cozystack/talm) and give it a star!
diff --git a/content/en/docs/v1.3/getting-started/install-talos.md b/content/en/docs/v1.3/getting-started/install-talos.md
new file mode 100644
index 00000000..31be8c3f
--- /dev/null
+++ b/content/en/docs/v1.3/getting-started/install-talos.md
@@ -0,0 +1,98 @@
+---
+title: "1. Install Talos Linux"
+linkTitle: "1. Install Talos"
+description: "Install Talos Linux on any machine using cozystack/boot-to-talos."
+weight: 10
+aliases:
+ - /docs/v1.3/getting-started/first-deployment
+ - /docs/v1.3/getting-started/deploy-cluster
+---
+
+## Before you begin
+
+Make sure that you have nodes (bare-metal servers or VMs) that fit the
+[hardware requirements]({{% ref "/docs/v1.3/getting-started/requirements" %}}).
+
+## Objectives
+
+In this step of the tutorial, you will install Talos Linux on bare-metal servers or VMs currently running another Linux distribution.
+
+This tutorial uses `boot-to-talos`, a simple CLI application made by the Cozystack team for users and teams adopting Cozystack.
+There are other ways to [install Talos Linux for Cozystack]({{% ref "/docs/v1.3/install/talos" %}}); they are not used here and are covered in separate guides.
+
+## Installation
+
+### 1. Install `boot-to-talos`
+
+Install `boot-to-talos` using the installer script:
+
+```bash
+curl -sSL https://github.com/cozystack/boot-to-talos/raw/refs/heads/main/hack/install.sh | sh -s
+```
+
+### 2. Run `boot-to-talos` to Install Talos
+
+Run `boot-to-talos` and provide the configuration values.
+Make sure to use Cozystack's own Talos build, found at [ghcr.io/cozystack/cozystack/talos](https://github.com/cozystack/cozystack/pkgs/container/cozystack%2Ftalos).
+
+For Cozystack {{< version-pin "cozystack_tag" >}} the pinned Talos version is **{{< version-pin "talos" >}}** — override the installer's default when prompted:
+
+```console
+$ boot-to-talos
+Target disk [/dev/sda]:
+Talos installer image [ghcr.io/cozystack/cozystack/talos:v1.11.6]: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+Add networking configuration? [yes]:
+Interface [eth0]:
+IP address [10.0.2.15]:
+Netmask [255.255.255.0]:
+Gateway (or 'none') [10.0.2.2]:
+Configure serial console? (or 'no') [ttyS0]:
+
+Summary:
+ Image: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+ Disk: /dev/sda
+ Extra kernel args: ip=10.0.2.15::10.0.2.2:255.255.255.0::eth0::::: console=ttyS0
+
+WARNING: ALL DATA ON /dev/sda WILL BE ERASED!
+
+Continue? [yes]:
+
+2025/08/03 00:11:03 created temporary directory /tmp/installer-3221603450
+2025/08/03 00:11:03 pulling image ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+2025/08/03 00:11:03 extracting image layers
+2025/08/03 00:11:07 creating raw disk /tmp/installer-3221603450/image.raw (2 GiB)
+2025/08/03 00:11:07 attached /tmp/installer-3221603450/image.raw to /dev/loop0
+2025/08/03 00:11:07 starting Talos installer
+2025/08/03 00:11:07 running Talos installer {{< version-pin "talos" >}}
+2025/08/03 00:11:07 WARNING: config validation:
+2025/08/03 00:11:07 use "worker" instead of "" for machine type
+2025/08/03 00:11:07 created EFI (C12A7328-F81F-11D2-BA4B-00A0C93EC93B) size 104857600 bytes
+2025/08/03 00:11:07 created BIOS (21686148-6449-6E6F-744E-656564454649) size 1048576 bytes
+2025/08/03 00:11:07 created BOOT (0FC63DAF-8483-4772-8E79-3D69D8477DE4) size 1048576000 bytes
+2025/08/03 00:11:07 created META (0FC63DAF-8483-4772-8E79-3D69D8477DE4) size 1048576 bytes
+2025/08/03 00:11:07 formatting the partition "/dev/loop0p1" as "vfat" with label "EFI"
+2025/08/03 00:11:07 formatting the partition "/dev/loop0p2" as "zeroes" with label "BIOS"
+2025/08/03 00:11:07 formatting the partition "/dev/loop0p3" as "xfs" with label "BOOT"
+2025/08/03 00:11:07 formatting the partition "/dev/loop0p4" as "zeroes" with label "META"
+2025/08/03 00:11:07 copying from io reader to /boot/A/vmlinuz
+2025/08/03 00:11:07 copying from io reader to /boot/A/initramfs.xz
+2025/08/03 00:11:08 writing /boot/grub/grub.cfg to disk
+2025/08/03 00:11:08 executing: grub-install --boot-directory=/boot --removable --efi-directory=/boot/EFI /dev/loop0
+2025/08/03 00:11:08 installation of {{< version-pin "talos" >}} complete
+2025/08/03 00:11:08 Talos installer finished successfully
+2025/08/03 00:11:08 remounting all filesystems read-only
+2025/08/03 00:11:08 copy /tmp/installer-3221603450/image.raw → /dev/sda
+2025/08/03 00:11:19 installation image copied to /dev/sda
+2025/08/03 00:11:19 rebooting system
+```
+
+## Next Step
+
+Continue the Cozystack tutorial by [installing and bootstrapping a Kubernetes cluster using Talm]({{% ref "/docs/v1.3/getting-started/install-kubernetes" %}}).
+
+Extra tasks:
+
+- Read the [Talos Linux overview]({{% ref "/docs/v1.3/guides/talos" %}}) to learn why Talos Linux is the optimal OS choice for Cozystack
+ and what it brings to the platform.
+- Learn more about [`boot-to-talos`]({{% ref "/docs/v1.3/install/talos/boot-to-talos#about-the-application" %}}).
+- Check out [github.com/cozystack/boot-to-talos](https://github.com/cozystack/boot-to-talos) and give it a star!
\ No newline at end of file
diff --git a/content/en/docs/v1.3/getting-started/requirements.md b/content/en/docs/v1.3/getting-started/requirements.md
new file mode 100644
index 00000000..a9baeb21
--- /dev/null
+++ b/content/en/docs/v1.3/getting-started/requirements.md
@@ -0,0 +1,52 @@
+---
+title: "Requirements and Toolchain"
+linkTitle: "Requirements"
+description: "Prepare infrastructure and install the toolchain."
+weight: 1
+---
+
+## Toolchain
+
+You will need the following tools installed on your workstation:
+
+- [talosctl](https://www.talos.dev/{{< version-pin "talos_minor" >}}/talos-guides/install/talosctl/), the command line client for Talos Linux (use the {{< version-pin "talos_minor" >}}.x series that matches Cozystack {{< version-pin "cozystack_version" >}}).
+- [kubectl](https://kubernetes.io/docs/tasks/tools/#kubectl), the command line client for Kubernetes.
+- [Talm](https://github.com/cozystack/talm?tab=readme-ov-file#installation), Cozystack's own configuration manager for Talos Linux:
+
+ ```bash
+ curl -sSL https://github.com/cozystack/talm/raw/refs/heads/main/hack/install.sh | sh -s
+ ```
+
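+After installation, you can confirm that `talosctl` and `kubectl` are available on your `PATH`:
+
+```bash
+talosctl version --client
+kubectl version --client
+```
+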
+## Hardware Requirements
+
+To run this tutorial, you will need the following setup:
+
+**Cluster nodes:** three bare-metal servers or virtual machines. Hardware requirements depend on your usage scenario:
+
+{{< include "docs/v1.3/install/_include/hardware-config-tabs.md" >}}
+
+**Storage:**
+- **Primary Disk**: Used for Talos Linux, etcd storage, and downloaded images. Low latency is required.
+- **Secondary Disk**: Used for user application data (ZFS pool).
+
+**OS:**
+- Any Linux distribution, for example, Ubuntu.
+- There are [other installation methods]({{% ref "/docs/v1.3/install/talos" %}}) which require either any Linux or no OS at all to start.
+
+**BIOS/UEFI Settings:**
+- **Secure Boot must be disabled.**
+ Secure Boot is currently not supported and must be disabled in the BIOS/UEFI settings before installation.
+
+**Networking:**
+- A routable FQDN. If you don't have one, you can use [nip.io](https://nip.io/) with dash notation.
+- All nodes located in the same L2 network segment.
+- Anti-spoofing disabled.
+  This is required for MetalLB, the load balancer used in Cozystack.
+
+**Virtual machines:**
+- CPU passthrough enabled and CPU model set to `host` in the hypervisor settings.
+- Nested virtualization enabled.
+  Required for virtual machines and tenant Kubernetes clusters.
+
+For a more detailed explanation of hardware requirements for different setups, refer to the [Hardware Requirements]({{% ref "/docs/v1.3/install/hardware-requirements" %}}) reference.
+
diff --git a/content/en/docs/v1.3/guides/_index.md b/content/en/docs/v1.3/guides/_index.md
new file mode 100644
index 00000000..119fb6dc
--- /dev/null
+++ b/content/en/docs/v1.3/guides/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Learn Cozystack"
+linkTitle: "Learn Cozystack"
+description: "Learn to use Cozystack as a cluster administrator and tenant owner."
+weight: 20
+---
diff --git a/content/en/docs/v1.3/guides/concepts.md b/content/en/docs/v1.3/guides/concepts.md
new file mode 100644
index 00000000..a08ef4c7
--- /dev/null
+++ b/content/en/docs/v1.3/guides/concepts.md
@@ -0,0 +1,383 @@
+---
+title: Key Concepts
+linkTitle: Key Concepts
+description: "Learn about the key concepts of Cozystack, such as management cluster, tenants, variants, and the PackageSource/Package lifecycle."
+weight: 10
+aliases:
+ - /docs/v1.3/concepts
+---
+
+Cozystack is an open-source, Kubernetes-native platform that turns bare-metal or virtual infrastructure into a fully featured, multi-tenant cloud.
+At its core are a few foundational building blocks:
+
+- the **management cluster** that runs the platform itself;
+- **tenants** that provide strict, hierarchical isolation;
+- **tenant clusters** that give users their own Kubernetes control planes;
+- a rich catalog of **managed applications** and virtual machines;
+- **variants** that assemble these components into a turnkey stack.
+
+Understanding how these concepts fit together will help you plan, deploy, and operate Cozystack effectively,
+whether you are building an internal developer platform or a public cloud service.
+
+## Management Cluster
+
+Cozystack is a system of services working on a Kubernetes cluster, usually deployed on top of Talos Linux on bare metal or virtual machines.
+This Kubernetes cluster is called the **management cluster** to highlight its role and distinguish it from tenant Kubernetes clusters.
+Only Cozystack administrators have full access to the management cluster.
+
+The management cluster is used to deploy preconfigured applications, such as tenants, system components, managed apps, VMs, and tenant clusters.
+Cozystack users can interact with the management cluster through the dashboard and API and deploy managed applications.
+However, they don't have administrative rights and cannot deploy custom applications in the management cluster; for that, they use tenant clusters instead.
+
+## Tenant
+
+A **tenant** in Cozystack is the primary unit of isolation and security, analogous to a Kubernetes namespace but with enhanced scope.
+Each tenant represents an isolated environment with its own resources, networking, and RBAC (role-based access control).
+Some cloud providers use the term "projects" for a similar entity.
+
+When Cozystack is used to build a private cloud and an internal development platform, a tenant usually belongs to a team or subteam.
+In a hosting business, where Cozystack is the foundation of a public cloud, a tenant can belong to a customer.
+
+Read more: [Tenant System]({{% ref "/docs/v1.3/guides/tenants" %}}).
+
+## Tenant Cluster
+
+Users can deploy separate Kubernetes clusters in their own tenants.
+These are not namespaces of the management cluster, but complete Kubernetes-in-Kubernetes clusters.
+
+Tenant clusters are what many cloud providers call "managed Kubernetes".
+They are used as development, testing, and production environments.
+
+Read more: [tenant Kubernetes clusters]({{% ref "/docs/v1.3/kubernetes" %}}).
+
+## Managed Applications
+
+Cozystack comes with a catalog of **managed applications** (services) that can be deployed on the platform with minimal effort.
+These include relational databases (PostgreSQL, MySQL/MariaDB), NoSQL/queues (Redis, NATS, Kafka, RabbitMQ), HTTP cache, load balancer, and others.
+
+Tenants, tenant Kubernetes clusters, and VMs are also managed applications in Cozystack terms.
+They are created with the same user workflow and are managed with Helm and Flux, just as other applications.
+
+Read more: [managed applications]({{% ref "/docs/v1.3/applications" %}}).
+
+## Cozystack API
+
+Instead of a proprietary API or UI-only management, Cozystack exposes its functionality through
+[Kubernetes Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/)
+and the standard Kubernetes API, accessible via REST API, `kubectl` client, and the Cozystack dashboard.
+
+This approach combines well with role-based access control.
+Non-administrative users can use `kubectl` to access the management cluster,
+but their kubeconfig will authorize them only to create custom resources in their tenants.
+
+Read more: [Cozystack API]({{% ref "/docs/v1.3/cozystack-api" %}}).
+
+## Variants
+
+Variants are pre-defined configurations of Cozystack that determine which bundles and components are enabled.
+Each variant is tested, versioned, and guaranteed to work as a unit.
+They simplify installation, reduce the risk of misconfiguration, and make it easier to choose the right set of features for your deployment.
+
+Read more: [Variants]({{% ref "/docs/v1.3/operations/configuration/variants" %}}).
+
+## PackageSource and Package
+
+`PackageSource` and `Package` are the two Custom Resource Definitions (CRDs) that drive the entire application lifecycle in Cozystack.
+
+- **PackageSource** (cluster-scoped) defines what is available: it references a Flux source (OCIRepository or GitRepository) that polls an external registry, lists variants, declares dependencies, and specifies the components that make up each application.
+- **Package** (cluster-scoped) defines what is deployed: it selects a variant from a PackageSource, provides per-component value overrides, and triggers the creation of a HelmRelease that manages the actual Kubernetes resources.
+
+Together, they form a declarative pipeline: external charts flow through Flux sources and artifact generators into ready-to-install Helm charts, which Packages then instantiate as running workloads.
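+
+A minimal Package sketch, following the same structure as the platform Package shown in the installation guide; the name, variant, and values below are illustrative:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+  name: cozystack.keycloak        # must match an existing PackageSource
+spec:
+  variant: default                # one of the variants declared by that PackageSource
+  components:
+    keycloak:
+      values:
+        replicas: 2               # per-component value overrides (illustrative)
+```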
+
+### OCIRepositories: Platform and Packages
+
+Cozystack uses two OCIRepository resources to manage the update flow and ensure migrations run before any component upgrades.
+
+#### Initial OCIRepository (`cozystack-platform`)
+
+Created by the cozystack-operator during bootstrap. The operator receives a platform source URL (e.g., `oci://ghcr.io/cozystack/cozystack/cozystack-packages`) and creates an OCIRepository named `cozystack-platform`. This repository points to the platform chart artifact, is configured via installer values (`platformSourceUrl`, `platformSourceRef`), and provides the platform chart that will create migrations and the secondary OCIRepository.
+
+#### Secondary OCIRepository (`cozystack-packages`)
+
+Created by the platform Helm chart (`packages/core/platform/templates/repository.yaml`). It copies the spec from `cozystack-platform` and creates a new OCIRepository named `cozystack-packages`. This repository is referenced by all PackageSources (networking, monitoring, postgres-operator, etc.), contains all system and application charts, and decouples the platform source from component PackageSources.
+
+#### Migration Ordering
+
+The two-repository design ensures that system migrations execute before any component updates:
+
+```mermaid
+flowchart TD
+ A["Installer Chart (helm install)"]
+ B["cozystack-operator starts"]
+ C["Initial OCIRepository (cozystack-platform) Created by operator"]
+ D["Platform Chart from initial OCIRepository"]
+ E["Pre-upgrade Hooks: Run migrations sequentially Update cozystack-version ConfigMap"]
+ F["Secondary OCIRepository (cozystack-packages) Created by platform chart"]
+ G["PackageSources reference cozystack-packages"]
+ H["System Components HelmReleases deploy"]
+
+ A --> B --> C --> D --> E --> F --> G --> H
+```
+
+When a new platform version is released and the cluster is upgraded:
+
+1. The initial OCIRepository (`cozystack-platform`) provides the new platform chart.
+2. During `helm upgrade`, the platform chart's `pre-upgrade` hooks execute migrations sequentially (from current to target version).
+3. Each migration script performs necessary transformations and updates the `cozystack-version` ConfigMap.
+4. After migrations complete, the platform chart creates or updates the `cozystack-packages` OCIRepository.
+5. PackageSources reference `cozystack-packages` and trigger reconciliation of system components.
+
+This guarantees migrations run before component upgrades, and the migration scripts come from the same chart version being deployed.
+
+### Reconciliation Flow
+
+The full reconciliation chain from an external registry to running Kubernetes resources:
+
+```mermaid
+flowchart TD
+ REG["External Helm Registry (OCI Registry or Git Repo)"]
+ SRC["Flux Source (OCIRepository / GitRepository) Periodically polls the registry"]
+ PS["PackageSource (cluster-scoped) Defines variants, dependencies, libraries, and components"]
+ AG["ArtifactGenerator (in cozy-system namespace) Builds an ExternalArtifact for each component"]
+ EA["ExternalArtifact Assembled Helm chart ready for installation"]
+ PKG["Package (cluster-scoped) Selects variant + per-component values"]
+ HR["HelmRelease (namespace-scoped) References ExternalArtifact via chartRef"]
+ K8S["Kubernetes Resources (Pods, Services, ConfigMaps, Secrets, ...)"]
+
+ REG --> SRC
+ PS -->|"references via sourceRef"| SRC
+ PS -->|"reconciler creates"| AG
+ AG -->|"reads charts from"| SRC
+ AG --> EA
+ PKG -->|"triggers creation of"| HR
+ HR -->|"references via chartRef"| EA
+ HR --> K8S
+```
+
+The naming convention for ExternalArtifacts follows the pattern `<package-source>-<variant>-<component>`, with dots replaced by dashes to comply with Kubernetes naming rules. For example, a PackageSource named `cozystack.keycloak` with variant `default` and component `keycloak` produces `cozystack-keycloak-default-keycloak`.
+
+### Package Dependencies
+
+PackageSource variants can declare `dependsOn` to gate HelmRelease creation until all dependencies are ready:
+
+```mermaid
+flowchart LR
+ A["Package A"]
+ B["Package B"]
+ CHECK{"All dependencies Ready?"}
+ C["Package C"]
+ HR["HelmRelease for C"]
+
+ A -->|"status: Ready"| CHECK
+ B -->|"status: Ready"| CHECK
+ CHECK -->|"Yes"| HR
+ CHECK -->|"No"| WAIT["Package C waits"]
+```
+
+If all dependencies report a `Ready` status, the dependent Package proceeds to create its HelmRelease. Otherwise, the Package remains in a waiting state until the conditions are met.
+
+Dependencies in a PackageSource are used at two levels:
+
+- **Variant-level** (`spec.variants.dependsOn[]`): references other Package names. The PackageReconciler checks that all dependencies are ready before creating any HelmReleases. This ensures infrastructure packages (e.g., CNI, storage) are fully running before dependent packages attempt installation. The `spec.ignoreDependencies` field on a Package can override this check for specific dependencies.
+- **Component-level** (`spec.variants.components.install.dependsOn[]`): translated into the `spec.dependsOn[]` field on a HelmRelease resource. These dependencies enforce correct ordering of components during the installation of a package.
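+
+A rough sketch of how these two levels could look inside a PackageSource, based only on the field paths named above; the exact schema is not shown here and may differ between releases:
+
+```yaml
+# Hypothetical excerpt of a PackageSource spec; field layout follows the paths
+# spec.variants.dependsOn[] and spec.variants.components.install.dependsOn[]
+spec:
+  variants:
+    default:
+      dependsOn:
+        - cozystack.networking          # wait until this Package reports Ready
+      components:
+        keycloak:
+          install:
+            dependsOn:
+              - keycloak-operator       # becomes spec.dependsOn[] on the HelmRelease
+```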
+
+### Namespace and Values Management
+
+When the PackageReconciler creates HelmReleases for a Package, it also:
+
+- **Creates namespaces** declared in component `Install.namespace` fields, setting labels such as `cozystack.io/system=true` and `pod-security.kubernetes.io/enforce=privileged` where needed.
+- **Injects cluster-wide configuration** via the `cozystack-values` Secret. The **CozyValuesReplicator** watches this Secret in `cozy-system` and replicates it to every namespace labeled `cozystack.io/system=true`. Each HelmRelease references this Secret through `valuesFrom`, ensuring all components receive consistent platform configuration.
+
+### Update Flow
+
+When a new chart version is pushed to the registry, updates propagate automatically through the reconciliation chain:
+
+```mermaid
+flowchart TD
+ PUSH["Push new chart version to OCI registry"]
+ FLUX["Flux detects digest change (periodic polling)"]
+ REBUILD["ArtifactGenerator rebuilds ExternalArtifact"]
+ UPGRADE["HelmRelease triggers helm upgrade"]
+ DEPLOY["New version deployed"]
+
+ PUSH --> FLUX
+ FLUX --> REBUILD
+ REBUILD --> UPGRADE
+ UPGRADE --> DEPLOY
+```
+
+To speed up synchronization without waiting for the next polling interval (Flux sources live in the `cozy-system` namespace):
+
+```text
+flux reconcile source oci <name> --namespace cozy-system
+```
+
+To update application values without changing the chart version, patch the Package CR directly.
+Values are scoped per component under `spec.components.<component>.values`:
+
+```text
+kubectl patch package <name> --type merge --patch '{"spec":{"components":{"<component>":{"values":{"key":"value"}}}}}'
+```
+
+### Rollback Strategies
+
+There are three approaches to rolling back a Package, listed from most to least recommended:
+
+**GitOps rollback (recommended):** Push the previous chart version to the OCI registry. Flux detects the change and triggers an upgrade to the "old" version through the standard reconciliation flow.
+
+```mermaid
+flowchart LR
+ PUSH["Push previous chart to registry"]
+ FLUX["Flux detects change"]
+ UP["Helm upgrade to previous version"]
+ OK["Rollback complete"]
+
+ PUSH --> FLUX --> UP --> OK
+```
+
+**Emergency rollback:** Run `helm rollback` directly and suspend the HelmRelease to prevent Flux from re-applying the newer version. This bypasses GitOps and should only be used in emergencies.
+
+```mermaid
+flowchart LR
+ ROLLBACK["helm rollback <release> <revision>"]
+ SUSPEND["flux suspend helmrelease <name>"]
+ NOTE["Flux will NOT re-apply while suspended"]
+
+ ROLLBACK --> SUSPEND --> NOTE
+```
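+
+As a hedged example, the emergency procedure could look like this (release name, namespace, and revision are placeholders):
+
+```text
+helm -n cozy-keycloak rollback keycloak 2
+flux suspend helmrelease keycloak -n cozy-keycloak
+```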
+
+**Controlled rollback:** Suspend the HelmRelease first, then run `helm rollback`, fix the chart in the registry, and resume the HelmRelease.
+
+```mermaid
+flowchart TD
+ S1["Suspend HelmRelease"]
+ S2["helm rollback"]
+ S3["Fix chart in registry"]
+ S4["Resume HelmRelease"]
+ S5["Flux reconciles with fixed chart"]
+
+ S1 --> S2 --> S3 --> S4 --> S5
+```
+
+### FluxPlunger Auto-Recovery
+
+FluxPlunger is an automatic recovery component that handles the common "has no deployed releases" HelmRelease error. This error occurs when Helm's release state becomes inconsistent.
+
+```mermaid
+flowchart TD
+ ERR["HelmRelease enters error state 'has no deployed releases'"]
+ DETECT["FluxPlunger detects the error"]
+ FIND["Finds the last release Secret"]
+ SUSPEND["Suspends HelmRelease"]
+ DEL["Deletes the stale Secret"]
+ ANNOTATE["Records processed version in annotation (crash recovery)"]
+ RESUME["Resumes HelmRelease"]
+ REINSTALL["Flux performs a clean reinstall"]
+
+ ERR --> DETECT
+ DETECT --> FIND
+ FIND --> SUSPEND
+ SUSPEND --> DEL
+ DEL --> ANNOTATE
+ ANNOTATE --> RESUME
+ RESUME --> REINSTALL
+```
+
+If FluxPlunger crashes mid-process, the `flux-plunger.cozystack.io/last-processed-version` annotation ensures it can resume correctly on the next reconciliation.
+
+### Lifecycle Operations Summary
+
+| Action | What to do | Handled by |
+| --- | --- | --- |
+| Update chart version | Push new chart to OCI registry | Flux + ArtifactGenerator |
+| Update values | Patch the Package CR | Package controller + HelmRelease |
+| Speed up sync | `flux reconcile source oci <name>` | Manual trigger |
+| GitOps rollback | Push previous chart version to registry | Flux (standard flow) |
+| Emergency rollback | `helm rollback` + suspend HelmRelease | Manual intervention |
+| Recovery from error | Automatic via FluxPlunger | FluxPlunger controller |
+
+## cozypkg CLI
+
+`cozypkg` is a command-line tool for managing Package and PackageSource resources interactively.
+It handles dependency resolution, variant selection, and safe deletion with cascade analysis, so you don't have to craft YAML manifests by hand.
+
+### Installation
+
+Pre-built binaries are available for Linux, macOS, and Windows (amd64 and arm64) as part of each Cozystack release.
+
+### Commands
+
+#### `cozypkg add` --- Install Packages
+
+Installs one or more packages with automatic dependency resolution:
+
+```text
+cozypkg add cozystack.keycloak cozystack.monitoring
+cozypkg add --file packages.yaml
+```
+
+For each package, `cozypkg add`:
+
+1. Finds the corresponding PackageSource in the cluster.
+2. Prompts you to select a variant if multiple are available.
+3. Resolves all transitive dependencies (topological sort).
+4. Creates Package resources in dependency-first order, skipping already-installed packages.
+
+#### `cozypkg list` --- List Packages
+
+```text
+cozypkg list # Available PackageSources
+cozypkg list --installed # Installed Packages
+cozypkg list --installed --components # Installed Packages with component details
+```
+
+Example output:
+
+```text
+NAME VARIANT READY STATUS
+cozystack.networking cilium True reconciliation succeeded, generated 2 helmrelease(s)
+cozystack.keycloak default False DependenciesNotReady
+```
+
+#### `cozypkg del` --- Delete Packages
+
+Safely removes packages with reverse-dependency analysis:
+
+```text
+cozypkg del cozystack.keycloak
+```
+
+Before deletion, `cozypkg del` shows which other installed packages depend on the target and asks for confirmation. Packages are deleted in reverse topological order (dependents first).
+
+#### `cozypkg dot` --- Visualize Dependencies
+
+Generates a dependency graph in GraphViz DOT format:
+
+```text
+cozypkg dot | dot -Tpng > dependencies.png
+cozypkg dot --installed --components # Component-level graph of installed packages
+```
+
+Missing dependencies are highlighted in red, making it easy to spot incomplete installations.
+
+### How cozypkg Fits into the Lifecycle
+
+```mermaid
+flowchart LR
+ USER["User"]
+ CLI["cozypkg add"]
+ PKG["Package CR"]
+ CTRL["Package Controller"]
+ HR["HelmRelease"]
+
+ USER -->|"selects variant"| CLI
+ CLI -->|"creates"| PKG
+ PKG -->|"reconciled by"| CTRL
+ CTRL -->|"creates"| HR
+```
+
+`cozypkg` operates exclusively on `Package` and `PackageSource` custom resources.
+It does not interact with HelmReleases, ArtifactGenerators, or Flux sources directly --- those are managed by the controllers described above.
+
+You can always manage Package resources with `kubectl` instead of `cozypkg`.
+The CLI simply automates variant selection, dependency ordering, and cascade analysis.
diff --git a/content/en/docs/v1.3/guides/platform-stack/_index.md b/content/en/docs/v1.3/guides/platform-stack/_index.md
new file mode 100644
index 00000000..7d393c9c
--- /dev/null
+++ b/content/en/docs/v1.3/guides/platform-stack/_index.md
@@ -0,0 +1,314 @@
+---
+title: "Cozystack Architecture and Platform Stack"
+linkTitle: "Platform Stack"
+description: "Learn of the core components that power the functionality and flexibility of Cozystack"
+weight: 15
+---
+
+This article explains Cozystack composition through its four layers, and shows the role and value of each component in the platform stack.
+
+## Overview
+
+To understand Cozystack composition, it's helpful to view it as sub-systems, layered from hardware to user-facing:
+
+![Cozystack layers: OS and hardware, infrastructure services, platform services, and user-side services](cozystack-layers.png)
+
+## Layer 1: OS and Hardware
+
+This is a foundation layer, providing cluster functionality on bare metal.
+It consists of Talos Linux and a Kubernetes cluster installed on Talos.
+
+### Talos Linux
+
+Talos Linux is a Linux distribution made and optimized for a single purpose: to run Kubernetes.
+It provides the foundation for reliability and security in a Cozystack cluster.
+Its use allows Cozystack to limit the technology stack, improving stability and security.
+
+Read more about it in the [Talos Linux]({{% ref "/docs/v1.3/guides/talos" %}}) section.
+
+### Kubernetes
+
+Kubernetes has become the de facto standard for managing server workloads.
+
+One of its key features is a convenient and unified API that everyone can understand (everything is YAML).
+Kubernetes also embodies proven software design patterns: continuous recovery in any situation (the reconciliation method) and efficient scaling to a large number of servers.
+
+This solves the integration problem: traditional virtualization platforms have outdated and rather complex APIs that cannot be extended without modifying the source code, so there is always a need to create custom solutions, which requires additional effort.
+
+## Layer 2: Infrastructure Services
+
+The second layer contains the key components that provide major capabilities such as storage, networking, and virtualization.
+Adding these components to the base Kubernetes cluster makes it much more functional.
+
+### Flux CD
+
+FluxCD provides a simple and uniform interface for both installing all platform components and managing their lifecycle.
+Cozystack developers have adopted FluxCD as the core element of the platform, believing it sets a new industry standard for platform engineering.
+
+### KubeVirt
+
+KubeVirt brings virtualization capability to Cozystack.
+It enables creating virtual machines and worker nodes for tenant Kubernetes clusters.
+
+KubeVirt is a project started by industry leaders with a shared vision: to bring Kubernetes to the world of virtualization.
+It extends the capabilities of Kubernetes by providing convenient abstractions for launching and managing virtual machines,
+as well as all related entities such as snapshots, presets, virtual volumes, and more.
+
+Today the KubeVirt project is jointly developed by companies such as Red Hat, NVIDIA, and Arm.
+
+### DRBD and LINSTOR
+
+DRBD and LINSTOR are the foundation of replicated storage in Cozystack.
+
+DRBD is a fast block-level replication technology running right in the Linux kernel.
+While DRBD only handles data replication, time-tested technologies such as LVM or ZFS are used to store the data securely.
+The DRBD kernel module is included in the mainline Linux kernel and has been used to build fault-tolerant systems for over a decade.
+
+DRBD is managed using LINSTOR, a system integrated with Kubernetes.
+LINSTOR is a management layer for creating virtual volumes based on DRBD.
+It enables managing hundreds or thousands of virtual volumes in the Cozystack cluster.
+
+### Kube-OVN
+
+The networking functionality in Cozystack is based on Kube-OVN and Cilium.
+
+OVN is a free implementation of a virtual network fabric for Kubernetes and OpenStack, based on the Open vSwitch technology.
+With Kube-OVN, you get a robust and functional virtual network that ensures reliable isolation between tenants and provides floating addresses for virtual machines.
+
+In the future, this will enable seamless integration with other clusters and customer network services.
+
+### Cilium
+
+Using Cilium in conjunction with OVN enables efficient and flexible network policies,
+along with a performant service network in Kubernetes, leveraging a Linux network stack offloaded with eBPF technology.
+
+Cilium is a highly promising project, widely adopted and supported by numerous cloud providers worldwide.
+
+## Layer 3: Platform Services
+
+These are components that provide the user-side functionality to Cozystack and its managed applications.
+
+### OpenAPI UI
+
+OpenAPI UI provides the main web interface for deploying and managing applications in Cozystack.
+It serves as the primary dashboard that allows users to interact with the Cozystack API through a user-friendly interface.
+
+The interface is built on top of the Cozystack OpenAPI specifications, automatically generating forms and documentation
+for all available managed applications. Users can deploy databases, Kubernetes clusters, virtual machines, and other services
+directly through the dashboard without needing to write YAML manifests manually.
+
+The dashboard also integrates with OIDC authentication via Keycloak, providing secure single sign-on access to the platform.
+
+### Kamaji
+
+Cozystack uses Kamaji Control Plane to deploy tenant Kubernetes clusters.
+Kamaji provides a straightforward and convenient method for launching all the necessary Kubernetes control-plane components in containers.
+Worker nodes are then connected to these control planes and handle user workloads.
+
+The approach developed by the Kamaji project is modeled after the design of modern clouds and ensures security by design:
+end users do not have access to the control plane nodes of their clusters.
+
+### Grafana
+
+Grafana with Grafana Loki and the OnCall extension provides a single interface to Observability.
+It allows you to conveniently view charts and logs, and to manage alerts for your infrastructure and applications.
+
+### VictoriaMetrics
+
+VictoriaMetrics collects, stores, and processes metrics in the Open Metrics format,
+doing it more efficiently than Prometheus in the same setup.
+
+### MetalLB
+
+MetalLB is a load balancer for bare-metal Kubernetes clusters and the default one in Cozystack.
+With its help, your services can obtain public addresses that are accessible not only from inside,
+but also from outside your cluster network.
+
+### HAProxy
+
+HAProxy is an advanced and widely known TCP balancer.
+It continuously checks the availability of backend services and carefully balances production traffic across them in real time.
+
+See the application reference: [TCP Balancer]({{% ref "/docs/v1.3/networking/tcp-balancer" %}})
+
+### SeaweedFS
+
+SeaweedFS is a simple and highly scalable distributed file system designed for two main objectives:
+to store billions of files and to serve them fast. File access is O(1): usually just one disk read operation.
+
+### Kubernetes Operators
+
+Cozystack includes a set of Kubernetes operators used for managing system services and managed applications.
+
+## Layer 4: User-side services
+
+Cozystack is shipped with a number of user-side applications, pre-configured for reliability and resource efficiency,
+coming with monitoring and observability included:
+
+- [Tenant Kubernetes clusters]({{% ref "/docs/v1.3/kubernetes" %}}), fully-functional managed Kubernetes clusters for development and production workloads.
+- [Managed applications]({{% ref "/docs/v1.3/applications" %}}), such as databases and queues.
+- [Virtual machines]({{% ref "/docs/v1.3/virtualization" %}}), supporting Linux and Windows OS.
+- [Networking appliances]({{% ref "/docs/v1.3/networking" %}}), including VPN, HTTP cache, TCP load balancer, and virtual routers.
+
+### Managed Kubernetes
+
+Cozystack deploys and manages tenant Kubernetes clusters as standalone applications within each tenant’s isolated environment.
+These clusters are fully separate from the root management cluster and are intended for deploying tenant-specific or customer-developed applications.
+
+Deployment involves the following components:
+
+- **Kamaji Control Plane**: [Kamaji](https://kamaji.clastix.io/) is an open-source project that facilitates the deployment
+ of Kubernetes control planes as pods within a root cluster.
+ Each control plane pod includes essential components like `kube-apiserver`, `controller-manager`, and `scheduler`,
+ allowing for efficient multi-tenancy and resource utilization.
+
+- **Etcd Cluster**: A dedicated etcd cluster is deployed using Ænix's [aenix-io/etcd-operator](https://github.com/aenix-io/etcd-operator).
+ It provides reliable and scalable key-value storage for the Kubernetes control plane.
+
+- **Worker Nodes**: Virtual Machines are provisioned to serve as worker nodes.
+ These nodes are configured to join the tenant Kubernetes cluster, enabling the deployment and management of workloads.
+
+This architecture ensures isolated, scalable, and efficient Kubernetes environments tailored for each tenant.
+
+- Supported version: Kubernetes v1.32.4
+- Operator: [aenix-io/etcd-operator](https://github.com/aenix-io/etcd-operator) v0.4.2
+- Managed application reference: [Kubernetes]({{% ref "/docs/v1.3/kubernetes" %}})
+
+
+### Virtual Machines
+
+In Cozystack, virtualization features are powered by [KubeVirt]({{% ref "/docs/v1.3/guides/platform-stack#kubevirt" %}}).
+Cozystack has a number of applications providing virtualization functionality:
+
+- [Virtual machine instance]({{% ref "/docs/v1.3/virtualization/vm-instance" %}}) with more advanced configuration.
+- [Virtual machine disk]({{% ref "/docs/v1.3/virtualization/vm-disk" %}}), offering a choice of image sources.
+- [VM image (Golden Disk)]({{% ref "/docs/v1.3/virtualization/vm-image" %}}), which makes OS images locally available, improving VM creation time and saving network traffic.
+
+
+### ClickHouse
+
+ClickHouse is an open-source, high-performance, column-oriented SQL database management system (DBMS).
+It is used for online analytical processing (OLAP).
+In the Cozystack platform, we use the Altinity operator to provide ClickHouse.
+
+- Supported version: 24.9.2.42
+- Kubernetes operator: [Altinity/clickhouse-operator](https://github.com/Altinity/clickhouse-operator) v0.25.0
+- Website: [clickhouse.com](https://clickhouse.com/)
+- Managed application reference: [ClickHouse]({{% ref "/docs/v1.3/applications/clickhouse" %}})
+
+
+### Kafka
+
+Apache Kafka is an open-source distributed event streaming platform.
+It aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds.
+Cozystack is using [Strimzi](https://github.com/cozystack/cozystack/blob/main/packages/system/kafka-operator/charts/strimzi-kafka-operator/README.md)
+to run an Apache Kafka cluster on Kubernetes in various deployment configurations.
+
+- Supported version: Apache Kafka 3.9.0
+- Kubernetes operator: [strimzi/strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-operator) v0.45.0
+- Website: [kafka.apache.org](https://kafka.apache.org/)
+- Managed application reference: [Kafka]({{% ref "/docs/v1.3/applications/kafka" %}})
+
+
+### MariaDB (MySQL fork)
+
+MySQL is a widely used and well-known relational database.
+The implementation in the platform provides the ability to create a replicated MariaDB cluster.
+This cluster is managed using the increasingly popular mariadb-operator.
+
+For each database, there is an interface for configuring users, their permissions,
+as well as schedules for creating backups using [Restic](https://restic.net/), one of the most efficient tools currently available.
+
+- Supported version: MariaDB 11.4.3
+- Kubernetes operator: [mariadb-operator/mariadb-operator](https://github.com/mariadb-operator/mariadb-operator) v0.18.0
+- Website: [mariadb.com](https://mariadb.com/)
+- Managed application reference: [MySQL]({{% ref "/docs/v1.3/applications/mariadb" %}})
+
+
+### NATS Messaging
+
+NATS is an open-source, simple, secure, and high-performance messaging system.
+It provides a data layer for cloud native applications, IoT messaging, and microservices architectures.
+
+- Supported version: NATS 2.10.17
+- Website: [nats.io](https://nats.io/)
+- Managed application reference: [NATS]({{% ref "/docs/v1.3/applications/nats" %}})
+
+
+### PostgreSQL
+
+PostgreSQL is nowadays the most popular relational database.
+Its platform-side implementation involves a self-healing replicated cluster.
+This cluster is managed by CloudNativePG, an operator that is increasingly popular within the community.
+
+
+- Supported version: PostgreSQL 17
+- Kubernetes operator: [cloudnative-pg/cloudnative-pg](https://github.com/cloudnative-pg/cloudnative-pg) v1.24.0
+- Website: [cloudnative-pg.io](https://cloudnative-pg.io/)
+- Managed application reference: [PostgreSQL]({{% ref "/docs/v1.3/applications/postgres" %}})
+
+
+### RabbitMQ
+
+RabbitMQ is a widely known message broker.
+The platform-side implementation allows you to create failover clusters managed by the official RabbitMQ operator.
+
+- Supported version: RabbitMQ 4.1.0+ (latest stable version)
+- Kubernetes operator: [rabbitmq/cluster-operator](https://github.com/rabbitmq/cluster-operator) v1.10.0
+- Website: [rabbitmq.com](https://www.rabbitmq.com/)
+- Managed application reference: [RabbitMQ]({{% ref "/docs/v1.3/applications/rabbitmq" %}})
+
+
+### Redis
+
+Redis is the most commonly used key-value in-memory data store.
+It is most often used as a cache, as storage for user sessions, or as a message broker.
+The platform-side implementation involves a replicated failover Redis cluster with Sentinel.
+This is managed by the spotahome/redis-operator.
+
+- Supported version: Redis 6.2.6+ (based on `alpine`)
+- Kubernetes operator: [spotahome/redis-operator](https://github.com/spotahome/redis-operator) v1.3.0-rc1
+- Website: [redis.io](https://redis.io/)
+- Managed application reference: [Redis]({{% ref "/docs/v1.3/applications/redis" %}})
+
+
+### VPN Service
+
+The VPN Service is powered by the Outline Server (internally known as "Shadowbox"), an advanced and user-friendly VPN solution.
+It simplifies setting up and sharing Shadowsocks servers,
+and it operates by launching Shadowsocks instances on demand.
+
+The Shadowsocks protocol uses symmetric encryption algorithms.
+This enables fast internet access while complicating traffic analysis and blocking through DPI (Deep Packet Inspection).
+
+- Supported version: Outline Server, v1.12.3+ (stable)
+- Website: [getoutline.org](https://getoutline.org/)
+- Managed application reference: [VPN]({{% ref "/docs/v1.3/networking/vpn" %}})
+
+### HTTP Cache
+
+The Nginx-based HTTP caching service helps protect your applications from overload.
+Nginx is traditionally used to build CDNs and caching servers.
+
+The platform-side implementation features efficient caching without using a clustered file system.
+It also supports horizontal scaling without duplicating data on multiple servers.
+
+- Included versions: Nginx 1.25.3, HAProxy latest stable.
+- Website: [nginx.org](https://nginx.org/)
+- Managed application reference: [HTTP Cache]({{% ref "/docs/v1.3/networking/http-cache" %}})
+
+
+### TCP Balancer
+
+The Managed TCP Load Balancer service provides deployment and management of load balancers.
+It efficiently distributes incoming TCP traffic across multiple backend servers, ensuring high availability and optimal resource utilization.
+
+TCP Load Balancer service is powered by [HAProxy](https://www.haproxy.org/), a mature and reliable TCP load balancer.
+
+- Managed application reference: [TCP balancer]({{% ref "/docs/v1.3/networking/tcp-balancer" %}})
+- Docs: [HAProxy Documentation](https://www.haproxy.com/documentation/)
+
+
+### Tenants
+
+Tenants in Cozystack are implemented as managed applications.
+Learn more about tenants in [Tenant System]({{% ref "/docs/v1.3/guides/tenants" %}}).
diff --git a/content/en/docs/v1.3/guides/platform-stack/cozystack-layers.png b/content/en/docs/v1.3/guides/platform-stack/cozystack-layers.png
new file mode 100644
index 00000000..ff82049e
Binary files /dev/null and b/content/en/docs/v1.3/guides/platform-stack/cozystack-layers.png differ
diff --git a/content/en/docs/v1.3/guides/resource-management/_index.md b/content/en/docs/v1.3/guides/resource-management/_index.md
new file mode 100644
index 00000000..6aa6a655
--- /dev/null
+++ b/content/en/docs/v1.3/guides/resource-management/_index.md
@@ -0,0 +1,184 @@
+---
+title: Resource Management in Cozystack
+linkTitle: Resource Management
+description: >
+ How CPU, memory, and presets work across VMs, Kubernetes clusters, and managed
+ workloads in Cozystack; and how to reconfigure resources via the UI, CLI, or API.
+weight: 25
+---
+
+## Introduction
+
+Cozystack runs everything, including system components and user-side applications, as services in a Kubernetes cluster,
+which has a finite pool of CPU and memory.
+
+This guide explains how users can configure available resources for an application, and how Cozystack handles this configuration.
+
+
+## Service Resource Configuration
+
+The resources available to each service (managed application, VM, or tenant cluster) are defined in its configuration.
+There are two ways to specify CPU time and memory available for a service in Cozystack:
+
+- Using resource presets.
+- Using explicit resource configurations.
+
+
+### Using Resource Presets
+
+Cozystack provides a number of named resource presets.
+Each user-side service, including managed applications, tenant Kubernetes clusters and virtual machines, has a default preset value.
+
+When deploying a service, a preset is selected with the `resourcesPreset` configuration variable, for example:
+
+```yaml
+## @param resourcesPreset Default sizing preset used when `resources` is omitted.
+## Allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge.
+resourcesPreset: "small"
+```
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `100m` | `128Mi` |
+| `micro` | `250m` | `256Mi` |
+| `small` | `500m` | `512Mi` |
+| `medium` | `500m` | `1Gi` |
+| `large` | `1` | `2Gi` |
+| `xlarge` | `2` | `4Gi` |
+| `2xlarge` | `4` | `8Gi` |
+
+In CPU values, the `m` unit (millicore) equals 1/1000th of a full CPU core.
+
+Cozystack presets are defined in an internal library
+[`cozy-lib`](https://github.com/cozystack/cozystack/tree/main/packages/library/cozy-lib).
+
+
+### Defining Resources Explicitly
+
+A service configuration can define available CPU and memory explicitly, using the `resources` variable.
+Cozystack has a simple resource configuration format for `cpu` and `memory`:
+
+```yaml
+## @param resources Explicit CPU and memory configuration for each ClickHouse replica.
+## When left empty, the preset defined in `resourcesPreset` is applied.
+resources:
+ cpu: 1
+ memory: 2Gi
+```
+
+If both `resources` and `resourcesPreset` are defined, `resources` is used and `resourcesPreset` is ignored.
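+
+For example, in the following configuration the preset name is present but has no effect, because the explicit values take precedence:
+
+```yaml
+resourcesPreset: "small"  # ignored, because `resources` is set
+resources:
+  cpu: 1       # applied
+  memory: 2Gi  # applied
+```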
+
+
+## Resource Requests and Limits
+
+Everything in Cozystack runs as Kubernetes services, and Kubernetes uses two important mechanisms in resource management:
+requests and limits.
+First, let's understand what they are.
+
+- **Resource request** defines the amount of a resource that is reserved for a service and always guaranteed to it.
+  If there are not enough free resources to fulfill a request, the service will not run at all.
+
+- **Resource limit** defines the maximum amount of a resource that a service can consume from the shared free pool.
+
+{{% alert color="info" %}}
+For a detailed explanation of how requests and limits work in Kubernetes, read [Resource Management for Pods and Containers](
+https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/).
+{{% /alert %}}
+
+CPU time is easily shared between multiple services with uneven CPU load.
+For this reason, it's a common practice to set low CPU requests with much higher limits.
+For services that are CPU-intensive, the optimal ratio can be 1:2 or 1:4.
+For less CPU-intensive services, as much as 1:10 can provide great resource efficiency and still be enough.
+
+On the other hand, memory is a resource that, once given to a service, usually can't be taken back without OOM-killing the service.
+For this reason, it's usually best to set memory requests at a level that guarantees service operation.
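+
+As an illustration of these practices, here is a plain Kubernetes container spec (not a Cozystack configuration file) with a 1:4 CPU request-to-limit ratio and a fully guaranteed memory reservation:
+
+```yaml
+# Fragment of a Kubernetes Pod spec, shown for illustration only.
+containers:
+  - name: app
+    resources:
+      requests:
+        cpu: 250m     # reserved: the pod is scheduled only if this much CPU is free
+        memory: 2Gi   # memory request equals the limit, so it is always guaranteed
+      limits:
+        cpu: 1000m    # 1:4 ratio: may burst up to a full core when spare CPU exists
+        memory: 2Gi
+```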
+
+
+## CPU Allocation Ratio
+
+Cozystack has a single-point-of-truth configuration variable `cpuAllocationRatio`.
+It defines the ratio between CPU requests and limits for all services.
+
+CPU allocation ratio is defined in the Platform Package:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ # ...
+ resources:
+ cpuAllocationRatio: 4
+```
+
+By default, `cpuAllocationRatio` equals 10, which means that CPU requests will be 1/10th of CPU limits.
+Cozystack borrows this default value from [KubeVirt](https://kubevirt.io/user-guide/compute/resources_requests_and_limits/#cpu).
+
+### How Cozystack Derives CPU Requests and Limits
+
+```yaml
+## @param resources Explicit CPU and memory configuration for each ClickHouse replica.
+## When left empty, the preset defined in `resourcesPreset` is applied.
+resources:
+  cpu: 1
+  ## actual cpu limit: 1
+  ## actual cpu request: (cpu / cpuAllocationRatio)
+  memory: 2Gi
+```
+
+### Example 1, default setting: `cpuAllocationRatio: 10`
+
+| Preset name | `resources.cpu` | actual CPU request | actual CPU limit |
+|-------------|-----------------|--------------------|------------------|
+| `nano` | `100m` | `10m` | `100m` |
+| `micro` | `250m` | `25m` | `250m` |
+| `small` | `500m` | `50m` | `500m` |
+| `medium` | `500m` | `50m` | `500m` |
+| `large` | `1` | `100m` | `1` |
+| `xlarge` | `2` | `200m` | `2` |
+| `2xlarge` | `4` | `400m` | `4` |
+
+### Example 2: `cpuAllocationRatio: 4`
+
+| Preset name | `resources.cpu` | actual CPU request | actual CPU limit |
+|-------------|-----------------|--------------------|------------------|
+| `nano` | `100m` | `25m` | `100m` |
+| `micro` | `250m` | `63m` | `250m` |
+| `small` | `500m` | `125m` | `500m` |
+| `medium` | `500m` | `125m` | `500m` |
+| `large` | `1` | `250m` | `1` |
+| `xlarge` | `2` | `500m` | `2` |
+| `2xlarge` | `4` | `1` | `4` |
+
+## Configuration Format Before v0.31.0
+
+Before Cozystack v0.31.0, service configurations allowed users to define requests and limits explicitly, as in the old-format example below.
+After updating Cozystack from an earlier version to v0.31.0 or later, such services require no immediate action.
+
+However, the next time such an application is updated, its configuration must be converted to the new format described above.
+
+```yaml
+resources:
+ requests:
+ cpu: 250m
+ memory: 512Mi
+ limits:
+ cpu: 1
+ memory: 2Gi
+```
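+
+For reference, a rough equivalent in the new format keeps only the limit-level values; the CPU request is then derived automatically via `cpuAllocationRatio`, and the separate 512Mi memory request can no longer be expressed:
+
+```yaml
+resources:
+  cpu: 1       # former CPU limit; the request is derived as cpu / cpuAllocationRatio
+  memory: 2Gi  # former memory limit
+```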
+
+There were several reasons for this change.
+
+Managed applications assume that the user doesn't need in-depth knowledge of Kubernetes.
+However, explicit request/limit configuration was a “leaky abstraction”, confusing users and leading to misconfigurations.
+
+For hosting companies that run public clouds on Cozystack, a unified ratio across the cloud is crucial.
+This approach helps ensure a stable level of service and simplifies billing.
+
+Users who deploy their own applications to tenant Kubernetes clusters still have the freedom to define precise resource requests and limits.
+
diff --git a/content/en/docs/v1.3/guides/talos.md b/content/en/docs/v1.3/guides/talos.md
new file mode 100644
index 00000000..efa3f27f
--- /dev/null
+++ b/content/en/docs/v1.3/guides/talos.md
@@ -0,0 +1,45 @@
+---
+title: "Talos Linux in Cozystack"
+linkTitle: "Talos Linux"
+description: "Learn why Cozystack uses Talos Linux as the foundation for its Kubernetes clusters. Discover the benefits of Talos Linux, including reliability, scalability, and Kubernetes optimization."
+weight: 30
+---
+
+## Why Cozystack is Using Talos Linux
+
+Talos Linux is a Linux distribution made and optimized for one job: to run Kubernetes.
+It is the foundation of reliability and security in a Cozystack cluster.
+Selecting it enables Cozystack to strictly limit the technology stack and make the system rock-solid.
+
+Let's see why Cozystack developers chose Talos as the foundation of a Kubernetes cluster and what it brings to Cozystack.
+
+### Reliable and Straightforward
+
+Talos Linux is an immutable OS that's managed through an API.
+It has no moving parts: no traditional package manager, no mutable file system, and no ability to run anything except Kubernetes containers.
+
+The base layer of the platform includes the latest kernel version, all the necessary kernel modules,
+a container runtime, and a Kubernetes-like API for interacting with the system.
+The system is updated by writing the whole Talos image "as is" onto the disk.
+
+
+### Scalable and Reproducible
+
+Talos Linux implements the infrastructure-as-code principle.
+Talos is configured via an external, declarative manifest that can be version‑controlled in Git and
+reused for all operations, such as re-deploying the same cluster or adding extra nodes.
+
+When you discover an optimal configuration or solve an operational problem,
+you apply it once in the manifest and instantly propagate the change to any number of nodes, making scale‑out trivial.
+All nodes automatically converge to exactly the same configuration, eliminating configuration drift and making troubleshooting deterministic.
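+
+For illustration, a fragment of such a declarative manifest might look like the following sketch (the values are placeholders, not a working configuration; see the Talos documentation for the full schema):
+
+```yaml
+# Fragment of a Talos machine configuration, for illustration only.
+version: v1alpha1
+machine:
+  type: controlplane
+  install:
+    disk: /dev/sda                  # disk that Talos is installed onto
+  network:
+    hostname: node1
+cluster:
+  clusterName: cozystack
+  controlPlane:
+    endpoint: https://192.168.100.10:6443
+```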
+
+### Tailored for Kubernetes
+
+Talos contains built‑in logic to bootstrap and maintain a Kubernetes cluster, reducing the cognitive load of the first cluster installation.
+It provides full lifecycle management of both the operating system and Kubernetes itself through a single `talosctl` command set,
+covering upgrades, node replacement, and disaster recovery.
+
+### Fine‑tuned for Cozystack
+
+Cozystack ships a curated Talos build that already includes the extensions and kernel modules required by its storage,
+networking, and observability stack, so clusters come up production‑ready out of the box.
\ No newline at end of file
diff --git a/content/en/docs/v1.3/guides/tenants/_index.md b/content/en/docs/v1.3/guides/tenants/_index.md
new file mode 100644
index 00000000..14690bbf
--- /dev/null
+++ b/content/en/docs/v1.3/guides/tenants/_index.md
@@ -0,0 +1,234 @@
+---
+title: Tenant System
+description: "Learn about tenants, the way Cozystack helps manage resources and improve security."
+weight: 17
+---
+
+## Introduction
+
+A **tenant** in Cozystack is the primary unit of isolation and security, analogous to a Kubernetes namespace but with enhanced scope.
+Each tenant represents an isolated environment with its own resources, networking, and RBAC (role-based access control).
+Some cloud providers use the term "projects" for a similar entity.
+
+Cozystack administrators and users create tenants using the [Tenant application]({{% ref "/docs/v1.3/applications/tenant" %}})
+from the application catalog.
+Tenants can be created via the Cozystack dashboard (UI), `kubectl`, or directly via Cozystack API.
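+
+For example, a tenant manifest applied with `kubectl` could look roughly like the following sketch. The flag fields shown are assumptions modeled on the tenant parameters described later on this page; see the Tenant application reference for the exact schema.
+
+```yaml
+apiVersion: apps.cozystack.io/v1alpha1
+kind: Tenant
+metadata:
+  name: foo
+  namespace: tenant-root    # tenants are created in the parent tenant's namespace
+spec:
+  # Optional cluster services for this tenant; field names are assumptions,
+  # consult the Tenant application reference for the exact schema.
+  etcd: true
+  monitoring: true
+```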
+
+
+### Tenant Nesting
+
+All user tenants belong to the base `root` tenant.
+This `root` tenant is used only to deploy user tenants and system components.
+All user-side applications are deployed in their respective tenants.
+
+Tenants can be nested further: an administrator of a tenant can create sub-tenants as applications in the Cozystack catalog.
+Parent tenants can share their resources with their children and oversee their applications.
+In turn, children can use their parent's services.
+
+
+![Tenant nesting in Cozystack](tenants1.png)
+
+### Sharing Cluster Services
+
+Tenants may have [cluster services]({{% ref "/docs/v1.3/operations/services" %}}) deployed in them.
+Cluster services are middleware services providing core functionality to the tenants and user-facing applications.
+
+The `root` tenant has a set of services like `etcd`, `ingress`, and `monitoring` by default.
+Lower-level tenants can run their own cluster services or access ones of their parent.
+
+For example, a Cozystack user creates the following tenants and services:
+
+- Tenant `foo` inside of tenant `root`, having its own instances of `etcd` and `monitoring` running.
+- Tenant `bar` inside of tenant `foo`, having its own instance of `etcd`.
+- [Tenant Kubernetes cluster]({{% ref "/docs/v1.3/kubernetes" %}}) and a
+ [Postgres database]({{% ref "/docs/v1.3/applications/postgres" %}}) in the tenant `bar`.
+
+All applications need services like `ingress` and `monitoring`.
+Since tenant `bar` does not have these services, the applications will use the parent tenant's services.
+
+Here's how this configuration will be resolved:
+
+- The tenant Kubernetes cluster will store its data in the `bar` tenant's own `etcd` service.
+- All metrics will be collected in the monitoring stack of the parent tenant `foo`.
+- Access to the applications will be through the common `ingress` deployed in the tenant `root`.
+
+
+![Sharing cluster services between tenants](tenants2.png)
+
+### Network Isolation Between Tenants
+
+Every tenant namespace is isolated from its siblings by Cilium network
+policies installed automatically by the `tenant` chart. There is no
+per-tenant opt-out: the previous `isolated` field was removed in
+Cozystack v1.0. By default, pods inside a tenant namespace also cannot
+reach `kube-apiserver`, nor the tenant's own `etcd` (when the tenant was
+created with `etcd: true`). To allow this traffic, opt in with one of two
+pod labels:
+
+- `policy.cozystack.io/allow-to-apiserver: "true"` — reach the
+ in-cluster Kubernetes API (for operators, dashboards, etc.).
+- `policy.cozystack.io/allow-to-etcd: "true"` — reach the tenant's
+ own etcd (only applicable when the tenant was created with
+ `etcd: true`).
+
+See [Tenant `isolated` flag removed]({{% ref "/docs/v1.3/operations/upgrades#tenant-isolated-flag-removed" %}})
+in the upgrade notes for a full worked example.
+
+
+### Customizing Tenant Services
+
+The tenant flags `etcd`, `monitoring`, `ingress`, and `seaweedfs` install a
+*default* configuration of each service. After the service is running, you
+can change its spec — add storage pools, tune resource quotas, switch a
+SeaweedFS topology to `MultiZone`, etc. — by editing the underlying
+application CR. Those manual edits are **not** overwritten when the parent
+`Tenant` reconciles.
+
+The workflow has two steps:
+
+1. Turn on the flag on the tenant (checkbox in the Dashboard, or `etcd: true` /
+ `seaweedfs: true` / ... under `spec.values` in the Tenant `HelmRelease`
+ manifest you apply with `kubectl`). Cozystack creates the matching
+ application CR with defaults.
+2. Edit the application CR in place. For example, to add a pool to the
+ tenant-root SeaweedFS instance:
+
+ ```bash
+ kubectl edit -n tenant-root seaweedfses.apps.cozystack.io seaweedfs
+ ```
+
+ Or patch it non-interactively:
+
+ ```bash
+ kubectl patch -n tenant-root seaweedfses.apps.cozystack.io seaweedfs \
+ --type=merge -p '{"spec":{"volume":{"pools":{"ssd":{"diskType":"ssd","size":"50Gi"}}}}}'
+ ```
+
+The same pattern applies to every tenant-level application CR: `etcd`,
+`monitoring`, `ingress`, `seaweedfs`. See
+[SeaweedFS storage pools]({{% ref "/docs/v1.3/operations/services/object-storage/storage-pools" %}})
+for a worked example that walks the full flow — enabling SeaweedFS on the
+tenant and then customizing the resulting CR.
+
+{{% alert color="warning" %}}
+Do not try to preconfigure a tenant-level service by applying its CR manifest
+*before* the tenant is created — you will hit "namespace not found". And
+editing the `Tenant` resource itself to nest service-specific fields (like
+SeaweedFS `pools`) under the `Tenant` spec does not work either: tenant-level
+flags are booleans, the per-service spec is a separate resource. Enable the
+flag first, edit the downstream CR second.
+{{% /alert %}}
+
+
+### Unique Domain Names
+
+Each tenant has its own domain.
+Unless otherwise specified, it inherits the domain of its parent, prefixed with the tenant's own name.
+For example, if the `root` tenant has domain `example.org`, then tenant `foo` gets the domain `foo.example.org` by default.
+However, it can be redefined to have another domain, such as `example.com`.
+
+Kubernetes clusters created in this tenant namespace get domains like `kubernetes-cluster.foo.example.org`.
+
+
+### Tenant Naming Limitations
+
+Tenant names must be alphanumeric.
+Using dashes (`-`) in tenant names is not allowed, unlike with other services.
+This limitation exists to keep consistent naming in tenants, nested tenants, and services deployed in them.
+
+For example:
+
+- The root tenant is named `root`, but internally it's referenced as `tenant-root`.
+- A user tenant is named `foo`, which results in `tenant-foo`.
+- However, a tenant cannot be named `foo-bar`, because parsing names like `tenant-foo-bar` can be ambiguous.
+
+### Tenant Namespace Layout
+
+Each tenant corresponds to a Kubernetes workload namespace. The `root`
+tenant is a special case: its namespace is hardcoded to `tenant-root`.
+For every nested tenant, the namespace is derived from its parent's
+workload namespace and its own name using two rules:
+
+- A tenant created directly inside `tenant-root` gets the namespace
+  `tenant-<name>`. The parent's `tenant-root-` prefix is **not** included.
+- A tenant created at any deeper level gets the namespace
+  `<parent-namespace>-<name>`, appending the child's name to the
+  parent's full namespace.
+
+For example, starting from `tenant-root`:
+
+| Tenant path | Workload namespace |
+| --- | --- |
+| `root` | `tenant-root` |
+| `root/alpha` | `tenant-alpha` |
+| `root/alpha/beta` | `tenant-alpha-beta` |
+| `root/alpha/beta/gamma` | `tenant-alpha-beta-gamma` |
+
+Both the `tenant` Helm chart and the aggregated API implement these rules
+when a new tenant is created:
+
+- the Helm chart helper in
+ [`packages/apps/tenant/templates/_helpers.tpl`](https://github.com/cozystack/cozystack/blob/main/packages/apps/tenant/templates/_helpers.tpl)
+ computes the namespace for the child release being installed,
+- the `computeTenantNamespace` function in
+ [`pkg/registry/apps/application/rest.go`](https://github.com/cozystack/cozystack/blob/main/pkg/registry/apps/application/rest.go)
+ publishes the same value as `status.namespace` on the `Tenant` CR.
+
+Because tenant names themselves are constrained to be alphanumeric (see
+*Tenant Naming Limitations* above), namespace fragments never contain
+tenant-internal dashes.
+
+{{% alert color="warning" %}}
+Kubernetes namespace names are RFC 1123 labels and cannot exceed **63
+characters**. Because deeper tenants accumulate the full ancestor chain
+into their workload namespace name (`tenant-alpha-beta-gamma`), long tenant
+names combined with deep nesting can bump into this limit. Plan the
+hierarchy accordingly: short tenant names at deeper levels, or shallower
+trees when long names are unavoidable. Kubernetes will reject the namespace
+creation if the computed name exceeds 63 characters, and the containing
+`tenant` Helm release will surface that rejection as a reconcile failure.
+{{% /alert %}}
+
+### Deriving Parent and Child Relationships
+
+Downstream integrations — custom dashboards, audit tooling, cost-allocation
+jobs, policy engines — sometimes need to walk the tenant tree to render
+breadcrumbs, compute inherited settings, or scope queries. A tempting
+shortcut is to derive the parent namespace by splitting the workload
+namespace on `-` and rebuilding it minus the last segment. That works
+today only because tenant names are constrained to be alphanumeric, so
+the `-` character unambiguously separates ancestor segments; it also
+assumes the current namespace-generation rules never change. Both
+assumptions are implementation details, not a stable contract.
+
+The stable contract is the `Tenant` custom resource itself. Cozystack
+stores every `Tenant` CR in its parent's workload namespace, so:
+
+- **`metadata.namespace`** of a `Tenant` CR equals the **parent's** workload
+ namespace. This is the reliable pointer to the parent — no string parsing
+ required.
+- **`status.namespace`** of a `Tenant` CR equals the tenant's **own** workload
+ namespace (the one where the tenant's applications, nested tenants, and
+ `HelmRelease`s live).
+- To list the direct children of a tenant with workload namespace `N`, list
+ `Tenant` CRs whose `metadata.namespace == N`. With `kubectl`, this is a
+ single command against the parent's workload namespace:
+
+ ```bash
+  kubectl get tenants --namespace <parent-namespace>
+ ```
+
+ The `tenants` resource is served by the Cozystack aggregated API
+ (`apps.cozystack.io/v1alpha1`), so `cozystack-api` must be running and
+ reachable from the client. Run `kubectl api-resources --api-group apps.cozystack.io`
+ to confirm the resource is visible from your kubeconfig context.
+
+This approach is stable regardless of whether the tenant is a direct child of
+`tenant-root` or a deeper descendant, and it survives any future adjustments
+to the namespace layout because it does not depend on the layout at all.
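+
+Putting the two fields together, the tenant `beta` from the namespace table above would be represented roughly as follows (only the relevant fields are shown):
+
+```yaml
+apiVersion: apps.cozystack.io/v1alpha1
+kind: Tenant
+metadata:
+  name: beta
+  namespace: tenant-alpha        # the parent's workload namespace
+status:
+  namespace: tenant-alpha-beta   # this tenant's own workload namespace
+```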
+
+
+### Reference
+
+See the reference for the application implementing tenant management: [`tenant`]({{% ref "/docs/v1.3/applications/tenant#parameters" %}})
+
diff --git a/content/en/docs/v1.3/guides/tenants/tenants1.png b/content/en/docs/v1.3/guides/tenants/tenants1.png
new file mode 100644
index 00000000..3a024287
Binary files /dev/null and b/content/en/docs/v1.3/guides/tenants/tenants1.png differ
diff --git a/content/en/docs/v1.3/guides/tenants/tenants2.png b/content/en/docs/v1.3/guides/tenants/tenants2.png
new file mode 100644
index 00000000..81369d2f
Binary files /dev/null and b/content/en/docs/v1.3/guides/tenants/tenants2.png differ
diff --git a/content/en/docs/v1.3/guides/use-cases/_index.md b/content/en/docs/v1.3/guides/use-cases/_index.md
new file mode 100644
index 00000000..2d123821
--- /dev/null
+++ b/content/en/docs/v1.3/guides/use-cases/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Use Cases"
+linkTitle: "Use Cases"
+description: "Cozystack use cases."
+weight: 30
+aliases:
+ - /docs/v1.3/use-cases
+---
diff --git a/content/en/docs/v1.3/guides/use-cases/kubernetes-distribution.md b/content/en/docs/v1.3/guides/use-cases/kubernetes-distribution.md
new file mode 100644
index 00000000..8449be33
--- /dev/null
+++ b/content/en/docs/v1.3/guides/use-cases/kubernetes-distribution.md
@@ -0,0 +1,30 @@
+---
+title: "Build Your Own Platform (BYOP)"
+linkTitle: "Build Your Own Platform"
+description: "How to build your own platform with Cozystack by installing only the components you need"
+weight: 30
+aliases:
+ - /docs/v1.3/use-cases/kubernetes-distribution
+---
+
+Cozystack can be used in BYOP (Build Your Own Platform) mode — installing only the components you need from the Cozystack package repository,
+rather than deploying the full platform.
+
+### Overview
+
+Cozystack provides a package management system inspired by Linux distribution package managers.
+The Cozystack Operator manages `PackageSource` and `Package` resources, while the `cozypkg` CLI tool
+provides an interactive interface for listing available packages, resolving dependencies, and installing them selectively.
+
+This approach is useful when:
+
+- You have an existing Kubernetes cluster and only need specific components.
+- Your cluster already has networking and storage configured.
+- You want full control over which components are installed.
+
+The `default` variant of `cozystack-platform` installs no components — it only registers PackageSources.
+From there, you use `cozypkg` to install individual packages like networking, storage, ingress, database operators, and more.
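+
+For illustration, the Platform Package in this mode is minimal. Based on the Package format used elsewhere in these docs, it might look like this sketch:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+  name: cozystack.cozystack-platform
+spec:
+  variant: default   # registers PackageSources only; installs no components
+```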
+
+For a step-by-step guide, see the [BYOP installation guide]({{% ref "/docs/v1.3/install/cozystack/kubernetes-distribution" %}}).
+
+
diff --git a/content/en/docs/v1.3/guides/use-cases/private-cloud.md b/content/en/docs/v1.3/guides/use-cases/private-cloud.md
new file mode 100644
index 00000000..7eb2eea4
--- /dev/null
+++ b/content/en/docs/v1.3/guides/use-cases/private-cloud.md
@@ -0,0 +1,24 @@
+---
+title: Using Cozystack to build private cloud
+linkTitle: Private Cloud
+description: "How to use Cozystack to build private cloud"
+weight: 20
+aliases:
+ - /docs/v1.3/use-cases/private-cloud
+---
+
+You can use Cozystack as a platform to build a private cloud powered by Infrastructure-as-Code.
+
+### Overview
+
+One of the use cases is a self-service portal for users within your company, where they can order the services they need, such as a Kubernetes cluster or a managed database.
+
+You can implement GitOps best practices, where users launch their own Kubernetes clusters and databases with a simple commit of configuration to your infrastructure Git repository.
+
+Thanks to the standardized approach to deploying applications, you can extend the platform's capabilities using standard Helm charts.
+
+
+
+Here is a reference repository showing how to configure Cozystack services using the GitOps approach:
+
+- https://github.com/aenix-io/cozystack-gitops-example
diff --git a/content/en/docs/v1.3/guides/use-cases/public-cloud.md b/content/en/docs/v1.3/guides/use-cases/public-cloud.md
new file mode 100644
index 00000000..de92461a
--- /dev/null
+++ b/content/en/docs/v1.3/guides/use-cases/public-cloud.md
@@ -0,0 +1,22 @@
+---
+title: Using Cozystack to build public cloud
+linkTitle: Public Cloud
+description: "How to use Cozystack to build public cloud"
+weight: 10
+aliases:
+ - /docs/v1.3/use-cases/public-cloud
+---
+
+You can use Cozystack as a backend for a public cloud.
+
+### Overview
+
+Cozystack positions itself as a framework for building public clouds. The key word here is *framework*: Cozystack is made for cloud providers, not for end users.
+
+Although Cozystack has a graphical interface, the current security model does not allow public user access to your management cluster.
+
+Instead, end users get access to their own Kubernetes clusters and can order load balancers and additional services from them, but they have no access to, and know nothing about, the management cluster powered by Cozystack.
+
+Thus, to integrate with your billing system, it's enough to teach it to apply a YAML manifest describing the ordered service to the management Kubernetes cluster. Cozystack will do the rest of the work for you.
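+
+As a sketch of what such a YAML manifest might look like (the kind and fields are assumptions made for illustration; consult the managed application reference for the real schema), ordering a PostgreSQL database for a customer tenant could be as simple as:
+
+```yaml
+apiVersion: apps.cozystack.io/v1alpha1
+kind: Postgres
+metadata:
+  name: customer-db
+  namespace: tenant-customer   # the customer's tenant namespace
+spec:
+  resourcesPreset: small       # sizing, see the Resource Management guide
+```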
+
+
diff --git a/content/en/docs/v1.3/install/_include/hardware-config-tabs.md b/content/en/docs/v1.3/install/_include/hardware-config-tabs.md
new file mode 100644
index 00000000..db030a87
--- /dev/null
+++ b/content/en/docs/v1.3/install/_include/hardware-config-tabs.md
@@ -0,0 +1,67 @@
+{{< tabs name="hardware_config" >}}
+{{% tab name="Minimal" %}}
+
+Here are the baseline requirements for running a small installation.
+The minimum recommended configuration for each node is as follows:
+
+| Component | Requirement |
+|------------------|--------------|
+| Hosts | 3x Physical hosts (or VMs with host CPU passthrough) |
+| Architecture | x86_64 |
+| CPU | 8 cores |
+| RAM | 24 GB |
+| Primary Disk | 50 GB SSD (or RAW for VMs) |
+| Secondary Disk | 256 GB SSD (raw) |
+
+**Suitable for:**
+- Dev/Test environments
+- Small demonstration setups
+- 1-2 Tenants
+- Up to 3 Kubernetes clusters
+- Few VMs or Databases
+
+{{% /tab %}}
+{{% tab name="Recommended" %}}
+
+For small production environments, the recommended configuration for each node is as follows:
+
+| Component | Requirement |
+|------------------|--------------|
+| Hosts | 3x Physical hosts |
+| Architecture | x86_64 |
+| CPU | 16-32 cores |
+| RAM | 64 GB |
+| Primary Disk | 100 GB SSD or NVMe |
+| Secondary Disk | 1-2 TB SSD or NVMe |
+
+**Suitable for:**
+- Small to medium production environments
+- 5-10 Tenants
+- 5+ Kubernetes clusters
+- Dozens of Virtual Machines or Databases
+- S3-compatible storage
+
+{{% /tab %}}
+{{% tab name="Optimal" %}}
+
+For medium to large production environments, the optimal configuration for each node is as follows:
+
+| Component | Requirement |
+|------------------|--------------|
+| Hosts | 6x+ Physical hosts |
+| Architecture | x86_64 |
+| CPU | 32-64 cores |
+| RAM | 128-256 GB |
+| Primary Disk | 200 GB SSD or NVMe |
+| Secondary Disk | 4-10 TB NVMe |
+
+**Suitable for:**
+- Large production environments
+- 20+ Tenants
+- Dozens of Kubernetes clusters
+- Hundreds of Virtual Machines and Databases
+- S3-compatible storage
+
+{{% /tab %}}
+{{< /tabs >}}
+
diff --git a/content/en/docs/v1.3/install/_index.md b/content/en/docs/v1.3/install/_index.md
new file mode 100644
index 00000000..304fd6e3
--- /dev/null
+++ b/content/en/docs/v1.3/install/_index.md
@@ -0,0 +1,55 @@
+---
+title: "Cozystack Deployment Guide: from Infrastructure to a Ready Cluster"
+linkTitle: "Deploying Cozystack"
+description: "Learn how to deploy a Cozystack cluster using Talos Linux and Kubernetes. This guide covers installation, configuration, and best practices for a reliable and secure Cozystack deployment."
+weight: 30
+aliases:
+ - /docs/v1.3/talos
+ - /docs/v1.3/operations/talos
+---
+
+## Cozystack Tutorial
+
+If this is your first time installing Cozystack, consider [going through the Cozystack tutorial]({{% ref "/docs/v1.3/getting-started" %}}).
+It shows the shortest path to a proof-of-concept Cozystack cluster.
+
+## Generic Installation Path
+
+Installing Cozystack on bare-metal servers or VMs involves three consecutive steps.
+Each of them has a variety of options, and while there is a recommended option, we provide alternatives to make the installation process flexible:
+
+1. [Install Talos Linux]({{% ref "./talos" %}}) on bare metal or VMs running Linux or having no OS at all.
+1. [Install and bootstrap a Kubernetes cluster]({{% ref "./kubernetes" %}}) on top of Talos Linux.
+1. [Install and configure Cozystack]({{% ref "./cozystack" %}}) on the Kubernetes cluster.
+
+## Air-gapped Environment
+
+Cozystack can be installed in an isolated environment without direct Internet access.
+The key difference in such an installation is the use of proxy registries for images:
+
+1. [Install Talos Linux]({{% ref "./talos" %}}) on bare metal or VMs running Linux or having no OS at all.
+1. [Configure Talos nodes for air-gap and bootstrap a Kubernetes cluster]({{% ref "./kubernetes/air-gapped" %}}).
+1. [Install and configure Cozystack]({{% ref "./cozystack" %}}) on the Kubernetes cluster.
+
+## Automated Installation with Ansible
+
+For generic Linux deployments (Ubuntu, Debian, RHEL, Rocky, openSUSE), the [Ansible collection]({{% ref "/docs/v1.3/install/ansible" %}}) automates the full pipeline: OS preparation, k3s cluster bootstrap, and Cozystack installation.
+
+## Provider-specific Installation
+
+There are specific guides for cloud providers, covering all the steps from preparing infrastructure to installing and configuring Cozystack.
+If that's your case, we recommend using the guides below:
+
+- [Hetzner]({{% ref "/docs/v1.3/install/providers/hetzner" %}})
+- [Oracle Cloud Infrastructure (OCI)]({{% ref "/docs/v1.3/install/providers/oracle-cloud" %}})
+- [Servers.com]({{% ref "/docs/v1.3/install/providers/servers-com" %}})
+
+
+## Upgrading and Post-deployment Configuration
+
+After you've deployed a cluster, proceed to the [Cluster Administration]({{% ref "/docs/v1.3/operations" %}}) section for
+the next actions:
+
+- [Configure OIDC]({{% ref "/docs/v1.3/operations/oidc" %}})
+- [Deploy Cozystack in a Multi-Datacenter Setup]({{% ref "/docs/v1.3/operations/stretched" %}})
+- [Upgrading Cozystack]({{% ref "/docs/v1.3/operations/cluster/upgrade" %}})
diff --git a/content/en/docs/v1.3/install/ansible.md b/content/en/docs/v1.3/install/ansible.md
new file mode 100644
index 00000000..904c9e72
--- /dev/null
+++ b/content/en/docs/v1.3/install/ansible.md
@@ -0,0 +1,236 @@
+---
+title: "Automated Installation with Ansible"
+linkTitle: "Ansible"
+description: "Deploy Cozystack on generic Kubernetes using the cozystack.installer Ansible collection"
+weight: 45
+---
+
+The [`cozystack.installer`](https://github.com/cozystack/ansible-cozystack) Ansible collection automates the full deployment pipeline: OS preparation, k3s cluster bootstrap, and Cozystack installation. It is suited for deploying Cozystack on bare-metal servers or VMs running a standard Linux distribution.
+
+## When to Use Ansible
+
+Consider this approach when:
+
+- You want a fully automated, repeatable deployment from bare OS to a running Cozystack
+- You are deploying on generic Linux (Ubuntu, Debian, RHEL, Rocky, openSUSE) rather than Talos Linux
+- You want to manage multiple nodes with a single inventory file
+
+For manual installation steps without Ansible, see the [Generic Kubernetes]({{% ref "/docs/v1.3/install/kubernetes/generic" %}}) guide.
+
+## Prerequisites
+
+### Controller Machine
+
+- Python >= 3.9
+- Ansible >= 2.15
+
+### Target Nodes
+
+- **Operating System**: Ubuntu/Debian, RHEL 8+/CentOS Stream 8+/Rocky/Alma, or openSUSE/SLE
+- **Architecture**: amd64 or arm64
+- **SSH access** with passwordless sudo
+- See [hardware requirements]({{% ref "/docs/v1.3/install/hardware-requirements" %}}) for CPU, RAM, and disk sizing
+
+## Installation
+
+### 1. Install the Ansible Collection
+
+```bash
+ansible-galaxy collection install git+https://github.com/cozystack/ansible-cozystack.git
+```
+
+Install required dependency collections. The `requirements.yml` file is not included in the packaged collection, so download it from the repository:
+
+```bash
+curl --silent --location --output /tmp/requirements.yml \
+ https://raw.githubusercontent.com/cozystack/ansible-cozystack/main/requirements.yml
+ansible-galaxy collection install --requirements-file /tmp/requirements.yml
+```
+
+This installs the following dependencies:
+
+- `ansible.posix`, `community.general`, `kubernetes.core` — from Ansible Galaxy
+- [`k3s.orchestration`](https://github.com/k3s-io/k3s-ansible) — k3s deployment collection, installed from Git
+
+### 2. Create an Inventory
+
+Create an `inventory.yml` file. The **internal (private) IP** of each node must be used as the host key, because KubeOVN validates host IPs through `NODE_IPS`. The public IP (if different) goes in `ansible_host`.
+
+```yaml
+cluster:
+ children:
+ server:
+ hosts:
+ 10.0.0.10:
+ ansible_host: 203.0.113.10
+ agent:
+ hosts:
+ 10.0.0.11:
+ ansible_host: 203.0.113.11
+ 10.0.0.12:
+ ansible_host: 203.0.113.12
+
+ vars:
+ ansible_port: 22
+ ansible_user: ubuntu
+
+ # k3s settings — check https://github.com/k3s-io/k3s/releases for available versions
+ k3s_version: v1.35.0+k3s3
+ token: "CHANGE_ME" # REPLACE with a strong random secret
+ api_endpoint: "10.0.0.10"
+ cluster_context: my-cluster
+
+ # Cozystack settings
+ cozystack_api_server_host: "10.0.0.10"
+ cozystack_root_host: "cozy.example.com"
+ cozystack_platform_variant: "isp-full-generic"
+ # cozystack_k3s_extra_args: "--tls-san=203.0.113.10" # add public IP if nodes are behind NAT
+```
+
+{{% alert color="warning" %}}
+**Replace `token` with a strong random secret.** This token is used for k3s node joining and grants full cluster access. Generate one with `openssl rand -hex 32`.
+{{% /alert %}}
+
+{{% alert color="warning" %}}
+**Always pin `cozystack_chart_version` explicitly.** The collection ships with a default version that may not match the release you intend to deploy. Set it in your inventory to avoid unexpected upgrades:
+
+```yaml
+cozystack_chart_version: "{{< version-pin "cozystack_version" >}}"
+```
+
+Check [Cozystack releases](https://github.com/cozystack/cozystack/releases) for available versions.
+{{% /alert %}}
+
+### 3. Create a Playbook
+
+Create a `site.yml` file that chains OS preparation, k3s deployment, and Cozystack installation.
+
+The collection repository includes example prepare playbooks for each supported OS family in the [`examples/`](https://github.com/cozystack/ansible-cozystack/tree/main/examples) directory. Copy the one matching your target OS into your project directory, then reference it as a local file:
+
+{{< tabs name="prepare_playbook" >}}
+{{% tab name="Ubuntu / Debian" %}}
+
+Copy `prepare-ubuntu.yml` from [examples/ubuntu/](https://github.com/cozystack/ansible-cozystack/tree/main/examples/ubuntu), then create `site.yml`:
+
+```yaml
+- name: Prepare nodes
+ ansible.builtin.import_playbook: prepare-ubuntu.yml
+
+- name: Deploy k3s cluster
+ ansible.builtin.import_playbook: k3s.orchestration.site
+
+- name: Install Cozystack
+ ansible.builtin.import_playbook: cozystack.installer.site
+```
+
+{{% /tab %}}
+{{% tab name="RHEL / Rocky / Alma" %}}
+
+Copy `prepare-rhel.yml` from [examples/rhel/](https://github.com/cozystack/ansible-cozystack/tree/main/examples/rhel), then create `site.yml`:
+
+```yaml
+- name: Prepare nodes
+ ansible.builtin.import_playbook: prepare-rhel.yml
+
+- name: Deploy k3s cluster
+ ansible.builtin.import_playbook: k3s.orchestration.site
+
+- name: Install Cozystack
+ ansible.builtin.import_playbook: cozystack.installer.site
+```
+
+{{% /tab %}}
+{{% tab name="openSUSE / SLE" %}}
+
+Copy `prepare-suse.yml` from [examples/suse/](https://github.com/cozystack/ansible-cozystack/tree/main/examples/suse), then create `site.yml`:
+
+```yaml
+- name: Prepare nodes
+ ansible.builtin.import_playbook: prepare-suse.yml
+
+- name: Deploy k3s cluster
+ ansible.builtin.import_playbook: k3s.orchestration.site
+
+- name: Install Cozystack
+ ansible.builtin.import_playbook: cozystack.installer.site
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
+### 4. Run the Playbook
+
+```bash
+ansible-playbook --inventory inventory.yml site.yml
+```
+
+The playbook performs the following steps automatically:
+
+1. **Prepare nodes** — installs required packages (`nfs-common`, `open-iscsi`, `multipath-tools`), configures sysctl, enables storage services
+2. **Deploy k3s** — bootstraps a k3s cluster with Cozystack-compatible settings (disables built-in Traefik, ServiceLB, kube-proxy, Flannel; sets `cluster-domain=cozy.local`)
+3. **Install Cozystack** — installs Helm and the helm-diff plugin (used for idempotent upgrades), deploys the `cozy-installer` chart, waits for the operator and CRDs, then creates the Platform Package
+
+## Configuration Reference
+
+### Core Variables
+
+| Variable | Default | Description |
+| --- | --- | --- |
+| `cozystack_api_server_host` | *(required)* | Internal IP of the control-plane node. |
+| `cozystack_chart_version` | `{{< version-pin "cozystack_version" >}}` | Version of the Cozystack Helm chart. **Pin this explicitly.** |
+| `cozystack_platform_variant` | `isp-full-generic` | Platform variant: `default`, `isp-full`, `isp-hosted`, `isp-full-generic`. |
+| `cozystack_root_host` | `""` | Domain for Cozystack services. Leave empty to skip publishing configuration. |
+
+### Networking
+
+| Variable | Default | Description |
+| --- | --- | --- |
+| `cozystack_pod_cidr` | `10.42.0.0/16` | Pod CIDR range. |
+| `cozystack_pod_gateway` | `10.42.0.1` | Pod network gateway. |
+| `cozystack_svc_cidr` | `10.43.0.0/16` | Service CIDR range. |
+| `cozystack_join_cidr` | `100.64.0.0/16` | Join CIDR for inter-node communication. |
+| `cozystack_api_server_port` | `6443` | Kubernetes API server port. |
+
+### Advanced
+
+| Variable | Default | Description |
+| --- | --- | --- |
+| `cozystack_chart_ref` | `oci://ghcr.io/cozystack/cozystack/cozy-installer` | OCI reference for the Helm chart. |
+| `cozystack_operator_variant` | `generic` | Operator variant: `generic`, `talos`, `hosted`. |
+| `cozystack_namespace` | `cozy-system` | Namespace for Cozystack operator and resources. |
+| `cozystack_release_name` | `cozy-installer` | Helm release name. |
+| `cozystack_release_namespace` | `kube-system` | Namespace where Helm release secret is stored (not the operator namespace). |
+| `cozystack_kubeconfig` | `/etc/rancher/k3s/k3s.yaml` | Path to kubeconfig on the target node. |
+| `cozystack_create_platform_package` | `true` | Whether to create the Platform Package after chart installation. |
+| `cozystack_helm_version` | `3.17.3` | Helm version to install on target nodes. |
+| `cozystack_helm_binary` | `/usr/local/bin/helm` | Path to the Helm binary on target nodes. |
+| `cozystack_helm_diff_version` | `3.12.5` | Version of the helm-diff plugin. |
+| `cozystack_operator_wait_timeout` | `300` | Timeout in seconds for operator readiness. |
+
+### Prepare Playbook Variables
+
+The example prepare playbooks (copied from the `examples/` directory) support additional variables:
+
+| Variable | Default | Description |
+| --- | --- | --- |
+| `cozystack_flush_iptables` | `false` | Flush iptables INPUT chain before installation. Useful on cloud providers with restrictive default rules. |
+| `cozystack_k3s_extra_args` | `""` | Extra arguments passed to the k3s server (e.g., `--tls-san=<public-ip>` for nodes behind NAT). |
+
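+For illustration, here is a minimal `group_vars/all.yml` sketch that pins the required and most commonly adjusted variables.
+All values below are example assumptions; adjust them to your environment.
+
+```yaml
+# group_vars/all.yml -- example values only
+cozystack_api_server_host: "192.168.100.11"          # internal IP of the control-plane node
+cozystack_chart_version: "1.3.0"                     # pin the chart version explicitly
+cozystack_platform_variant: "isp-full-generic"
+cozystack_root_host: "example.org"                   # leave empty to skip publishing configuration
+cozystack_k3s_extra_args: "--tls-san=203.0.113.10"   # only needed for nodes behind NAT
+```
+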
+## Verification
+
+After the playbook completes, verify the deployment from the first server node:
+
+```bash
+# Check operator
+kubectl get deployment cozystack-operator --namespace cozy-system
+
+# Check Platform Package
+kubectl get packages.cozystack.io cozystack.cozystack-platform
+
+# Check all pods
+kubectl get pods --all-namespaces
+```
+
+## Idempotency
+
+The playbook is idempotent — running it again will not re-apply resources that haven't changed. The Platform Package is only applied when a diff is detected via `kubectl diff`.
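+
+If you maintain a local copy of the Platform Package manifest, you can preview pending changes yourself with `kubectl diff` before re-running the playbook (the file name below is an assumption; use whichever manifest you keep):
+
+```bash
+# Show what would change in the Platform Package without applying it
+kubectl diff -f cozystack-platform.yaml
+```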
diff --git a/content/en/docs/v1.3/install/cozystack/_index.md b/content/en/docs/v1.3/install/cozystack/_index.md
new file mode 100644
index 00000000..6d855b1c
--- /dev/null
+++ b/content/en/docs/v1.3/install/cozystack/_index.md
@@ -0,0 +1,32 @@
+---
+title: "Installing and Configuring Cozystack"
+linkTitle: "3. Install Cozystack"
+description: "Step 3: Installing Cozystack on a Kubernetes Cluster — as a ready-to-use platform or in BYOP (Build Your Own Platform) mode."
+weight: 30
+---
+
+**The third step** in deploying a Cozystack cluster is to install Cozystack on a Kubernetes cluster that has been previously installed and configured.
+A prerequisite to this step is having [installed a Kubernetes cluster]({{% ref "/docs/v1.3/install/kubernetes" %}}).
+
+Cozystack can be installed in two modes, depending on how much control you need over the installed components:
+
+## As a Platform
+
+Install Cozystack as a ready-to-use platform with all components managed automatically.
+You choose a [variant]({{% ref "/docs/v1.3/operations/configuration/variants" %}}) (such as `isp-full`), and Cozystack installs and configures
+all necessary components — networking, storage, monitoring, dashboard, operators, and managed applications.
+
+This is the recommended approach for most users who want a fully functional platform out of the box.
+
+**[Install Cozystack as a Platform]({{% ref "./platform" %}})**
+
+## Build Your Own Platform (BYOP)
+
+Use Cozystack to build your own platform by installing only the components you need.
+You install the operator with the `default` variant, which only provides the package registry (PackageSources).
+Then you use the `cozypkg` CLI tool to selectively install individual packages — networking, storage, ingress, operators, and anything else available in the Cozystack repository.
+
+This approach is ideal when you already have an existing Kubernetes cluster with some infrastructure in place,
+or when you only need specific components from the Cozystack ecosystem.
+
+**[Build Your Own Platform with Cozystack]({{% ref "./kubernetes-distribution" %}})**
diff --git a/content/en/docs/v1.3/install/cozystack/kubernetes-distribution.md b/content/en/docs/v1.3/install/cozystack/kubernetes-distribution.md
new file mode 100644
index 00000000..c01a30e1
--- /dev/null
+++ b/content/en/docs/v1.3/install/cozystack/kubernetes-distribution.md
@@ -0,0 +1,216 @@
+---
+title: "Build Your Own Platform (BYOP)"
+linkTitle: "Build Your Own Platform"
+description: "Build your own platform with Cozystack by installing only the components you need using the cozypkg CLI tool."
+weight: 20
+---
+
+## Overview
+
+Cozystack can be used in BYOP (Build Your Own Platform) mode — similar to how Linux distributions let you install only the packages you need.
+Instead of deploying the full platform with all components, you selectively install only what you need from the Cozystack package repository.
+
+This approach is useful when:
+
+- You have an existing Kubernetes cluster and only need specific components (e.g., a Postgres operator or monitoring).
+- Your cluster already has networking (CNI) and storage configured, and you don't want Cozystack to manage them.
+- You want full control over which components are installed and how they are configured.
+
+The workflow relies on two Kubernetes resources managed by the Cozystack Operator:
+
+- **PackageSource** — describes a package repository and the available variants for each package.
+- **Package** — declares that a specific package should be installed in a chosen variant, optionally with custom values.
+
+The `cozypkg` CLI tool provides a convenient interface for working with these resources: listing available packages, resolving dependencies, and installing packages interactively.
+
+
+## 1. Install the Cozystack Operator
+
+Install the Cozystack operator using Helm from the OCI registry:
+
+```bash
+helm upgrade --install cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --version X.Y.Z \
+ --namespace cozy-system \
+ --create-namespace
+```
+
+Replace `X.Y.Z` with the desired Cozystack version.
+You can find available versions on the [Cozystack releases page](https://github.com/cozystack/cozystack/releases).
+
+If you're installing on a non-Talos Kubernetes distribution (k3s, kubeadm, RKE2, etc.), set the operator variant:
+
+```bash
+helm upgrade --install cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --version X.Y.Z \
+ --namespace cozy-system \
+ --create-namespace \
+ --set cozystackOperator.variant=generic \
+  --set cozystack.apiServerHost=<api-server-host> \
+ --set cozystack.apiServerPort=6443
+```
+
+Replace `<api-server-host>` with the internal IP address of your Kubernetes API server.
+
+The operator installs FluxCD (in all-in-one mode, which works without a CNI) and creates the initial `cozystack.cozystack-platform` PackageSource.
+
+At this point, only one PackageSource exists:
+
+```bash
+kubectl get packagesource
+```
+
+```console
+NAME VARIANTS READY STATUS
+cozystack.cozystack-platform default,isp-full,isp-full... True ...
+```
+
+
+## 2. Install cozypkg
+
+Install the `cozypkg` CLI tool using Homebrew:
+
+```bash
+brew tap cozystack/tap
+brew install cozypkg
+```
+
+Pre-built binaries for other platforms are available on the [GitHub releases page](https://github.com/cozystack/cozystack/releases).
+
+
+## 3. Install the Platform Package
+
+Start by installing the `cozystack-platform` package with the `default` variant.
+This variant does not install any components — it only registers PackageSources for all packages available in the Cozystack repository.
+
+```bash
+cozypkg add cozystack.cozystack-platform
+```
+
+The tool will prompt you to select a variant. Choose `default`:
+
+```console
+PackageSource: cozystack.cozystack-platform
+Available variants:
+ 1. default
+ 2. isp-full
+ 3. isp-full-generic
+ 4. isp-hosted
+Select variant (1-4): 1
+```
+
+After the platform package is installed, all other PackageSources become available:
+
+```bash
+cozypkg list
+```
+
+```console
+NAME VARIANTS READY STATUS
+cozystack.cert-manager default True ...
+cozystack.cozystack-platform default,isp-full,isp-full... True ...
+cozystack.ingress-nginx default True ...
+cozystack.linstor default True ...
+cozystack.metallb default True ...
+cozystack.monitoring default True ...
+cozystack.networking noop,cilium,cilium-kilo,... True ...
+cozystack.postgres-operator default True ...
+...
+```
+
+
+## 4. Install Packages
+
+Use `cozypkg add` to install any available package. The tool automatically resolves dependencies and prompts you to select a variant for each package that needs to be installed.
+
+```bash
+cozypkg add <package-name>
+```
+
+For example, when installing a package that depends on networking, `cozypkg` will detect the dependency, show which packages are already installed, and ask you to choose a variant for each missing dependency.
+
+### Networking Variants
+
+The `cozystack.networking` package has several variants to accommodate different environments:
+
+| Variant | Description |
+|:--------|:------------|
+| `noop` | Installs nothing. Use when networking is already configured in your cluster (e.g., existing CNI and kube-proxy). |
+| `cilium` | Cilium CNI for Talos Linux clusters. |
+| `cilium-generic` | Cilium CNI for generic Kubernetes distributions (k3s, kubeadm, RKE2). |
+| `kubeovn-cilium` | Cilium + KubeOVN for Talos Linux. Required for full virtualization features (live migration). |
+| `kubeovn-cilium-generic` | Cilium + KubeOVN for generic Kubernetes distributions. |
+| `cilium-kilo` | Cilium + Kilo for WireGuard-based cluster mesh. |
+
+If your cluster already has a CNI plugin configured, choose `noop`.
+Since networking is a dependency of most other packages, the `noop` variant satisfies the dependency without installing anything.
+
+### Viewing Installed Packages
+
+To see which packages are currently installed and their variants:
+
+```bash
+cozypkg list --installed
+```
+
+```console
+NAME VARIANT READY STATUS
+cozystack.cozystack-platform default True ...
+cozystack.networking noop True ...
+cozystack.cert-manager default True ...
+```
+
+
+## 5. Override Component Values
+
+Each package consists of one or more components (Helm charts). You can override values for specific components by editing the Package resource directly.
+
+The Package spec supports a `components` map where you can specify values for each component:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.metallb
+spec:
+ variant: default
+ components:
+ metallb:
+ values:
+ metallb:
+ frrk8s:
+ enabled: true
+```
+
+Apply the resource:
+
+```bash
+kubectl apply -f metallb-package.yaml
+```
+
+To find available values for a component, refer to the corresponding `values.yaml` in the [Cozystack repository](https://github.com/cozystack/cozystack/tree/main/packages/system).
+
+You can also enable or disable individual components within a package:
+
+```yaml
+spec:
+ components:
+ some-component:
+ enabled: false
+```
+
+
+## 6. Remove Packages
+
+To remove an installed package:
+
+```bash
+cozypkg del <package-name>
+```
+
+The tool checks for reverse dependencies — if other installed packages depend on the one you're removing, it will list them and ask for confirmation before deleting all affected packages.
+
+
+## Next Steps
+
+- Learn about [Cozystack variants]({{% ref "/docs/v1.3/operations/configuration/variants" %}}) and how they define package composition.
+- See the [Components reference]({{% ref "/docs/v1.3/operations/configuration/components" %}}) for details on overriding component parameters.
+- For a full platform installation, see the [Platform installation guide]({{% ref "./platform" %}}).
diff --git a/content/en/docs/v1.3/install/cozystack/platform.md b/content/en/docs/v1.3/install/cozystack/platform.md
new file mode 100644
index 00000000..29cfb552
--- /dev/null
+++ b/content/en/docs/v1.3/install/cozystack/platform.md
@@ -0,0 +1,727 @@
+---
+title: "Installing Cozystack as a Platform"
+linkTitle: "As a Platform"
+description: "Install Cozystack as a ready-to-use platform with all components managed automatically."
+weight: 10
+---
+
+**The third step** in deploying a Cozystack cluster is to install Cozystack on a Kubernetes cluster that has been previously installed and configured on Talos Linux nodes.
+A prerequisite to this step is having [installed a Kubernetes cluster]({{% ref "/docs/v1.3/install/kubernetes" %}}).
+
+If this is your first time installing Cozystack, consider starting with the [Cozystack tutorial]({{% ref "/docs/v1.3/getting-started" %}}).
+
+To plan a production-ready installation, follow the guide below.
+It mirrors the tutorial in structure, but gives much more detail and explains various installation options.
+
+## 1. Install Cozystack Operator
+
+Install the Cozystack operator using Helm from the OCI registry:
+
+```bash
+helm upgrade --install cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --version X.Y.Z \
+ --namespace cozy-system \
+ --create-namespace
+```
+
+Replace `X.Y.Z` with the desired Cozystack version.
+You can find available versions on the [Cozystack releases page](https://github.com/cozystack/cozystack/releases).
+
+This installs the operator, CRDs, and creates the `PackageSource` resource.
+
+### Installing on non-Talos OS
+
+By default, the Cozystack operator is configured to use the [KubePrism](https://www.talos.dev/{{< version-pin "talos_minor" >}}/kubernetes-guides/configuration/kubeprism/)
+feature of Talos Linux, which allows access to the Kubernetes API via a local address on the node.
+
+If you're installing Cozystack on a system other than Talos Linux, set the operator variant during installation:
+
+```bash
+helm upgrade --install cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --version X.Y.Z \
+ --namespace cozy-system \
+ --create-namespace \
+ --set cozystackOperator.variant=generic \
+  --set cozystack.apiServerHost=<api-server-host> \
+ --set cozystack.apiServerPort=6443
+```
+
+Replace `<api-server-host>` with the internal IP address of your Kubernetes API server (IP only, without protocol or port).
+
+For a complete guide on deploying Cozystack on generic Kubernetes distributions, see [Deploying Cozystack on Generic Kubernetes]({{% ref "/docs/v1.3/install/kubernetes/generic" %}}).
+
+## 2. Define and Apply Platform Package
+
+Now that the operator is running, the next step is to define a Platform Package and apply it.
+The Platform Package is a `Package` resource that defines the [Cozystack variant]({{% ref "/docs/v1.3/operations/configuration/variants" %}}), [component settings]({{% ref "/docs/v1.3/operations/configuration/components" %}}),
+key network settings, exposed services, and other options.
+
+Cozystack's configuration can be updated after installation.
+However, some values, shown in the example below, are required at install time.
+
+Here's a minimal example of **cozystack-platform.yaml**:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://api.example.org:443"
+ exposedServices:
+ - dashboard
+ - api
+ networking:
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+```
+
+{{% alert color="info" %}}
+The Package name **must** be `cozystack.cozystack-platform` to match the PackageSource created by the installer.
+You can verify available PackageSources with `kubectl get packagesource`.
+{{% /alert %}}
+
+
+### 2.1. Choose a Variant
+
+The composition of Cozystack is defined by a variant.
+Variant `isp-full` is the most complete one, as it covers all layers from hardware to managed applications.
+Choose it if you deploy Cozystack on bare metal or VMs and if you want to use its full power.
+
+If you deploy Cozystack on a provided Kubernetes cluster, or if you only want to deploy a Kubernetes cluster without services,
+refer to the [variants overview and comparison]({{% ref "/docs/v1.3/operations/configuration/variants" %}}).
+
+### 2.2. Fine-tune the Components
+
+You can add some optional components or remove ones that are included by default.
+Refer to the [components reference]({{% ref "/docs/v1.3/operations/configuration/components" %}}).
+
+If you deploy on VMs or dedicated servers of a cloud provider, you'll likely need to disable MetalLB and
+enable a provider-specific load balancer, or use a different network setup.
+Check out the [provider-specific installation]({{% ref "/docs/v1.3/install/providers" %}}) section.
+It may include a complete guide for your provider that you can use to deploy a production-ready cluster.
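+
+As a sketch, disabling a bundled component (here MetalLB, as mentioned above) uses the same `components` map as in the Platform Package example. The component key `metallb` is an assumption; check the components reference for the exact name.
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+  name: cozystack.cozystack-platform
+spec:
+  variant: isp-full
+  components:
+    metallb:
+      enabled: false
+```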
+
+### 2.3. Define Network Configuration
+
+Replace `example.org` in `publishing.host` and `publishing.apiServerEndpoint` with a routable fully-qualified domain name (FQDN) that you control.
+If you only have a public IP, but no routable FQDN, use [nip.io](https://nip.io/) with dash notation.
+
+The following section contains sane defaults.
+Check that they match Talos node settings that you used in the previous steps.
+If you were using Talm to install Kubernetes, they should be the same.
+
+```yaml
+networking:
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+```
+
+{{% alert color="info" %}}
+Cozystack gathers anonymous usage statistics by default. Learn more about what data is collected and how to opt out in the [Telemetry Documentation]({{% ref "/docs/v1.3/operations/configuration/telemetry" %}}).
+{{% /alert %}}
+
+### 2.4. Apply Platform Package
+
+Once the configuration file is ready, apply it:
+
+```bash
+kubectl apply -f cozystack-platform.yaml
+```
+
+As the installation goes on, you can track the logs of the operator:
+
+```bash
+kubectl logs -n cozy-system deploy/cozystack-operator -f
+```
+
+Wait for a while, then check the status of installation:
+
+```bash
+kubectl get hr -A
+```
+
+Wait until all releases reach the `Ready` state:
+
+```console
+NAMESPACE NAME AGE READY STATUS
+cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
+cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
+cozy-cilium cilium 4m1s True Release reconciliation succeeded
+cozy-cluster-api capi-operator 4m1s True Release reconciliation succeeded
+cozy-cluster-api capi-providers 4m1s True Release reconciliation succeeded
+cozy-dashboard dashboard 4m1s True Release reconciliation succeeded
+cozy-grafana-operator grafana-operator 4m1s True Release reconciliation succeeded
+cozy-kamaji kamaji 4m1s True Release reconciliation succeeded
+cozy-kubeovn kubeovn 4m1s True Release reconciliation succeeded
+cozy-kubevirt-cdi kubevirt-cdi 4m1s True Release reconciliation succeeded
+cozy-kubevirt-cdi kubevirt-cdi-operator 4m1s True Release reconciliation succeeded
+cozy-kubevirt kubevirt 4m1s True Release reconciliation succeeded
+cozy-kubevirt kubevirt-operator 4m1s True Release reconciliation succeeded
+cozy-linstor linstor 4m1s True Release reconciliation succeeded
+cozy-linstor piraeus-operator 4m1s True Release reconciliation succeeded
+cozy-mariadb-operator mariadb-operator 4m1s True Release reconciliation succeeded
+cozy-metallb metallb 4m1s True Release reconciliation succeeded
+cozy-monitoring monitoring 4m1s True Release reconciliation succeeded
+cozy-postgres-operator postgres-operator 4m1s True Release reconciliation succeeded
+cozy-rabbitmq-operator rabbitmq-operator 4m1s True Release reconciliation succeeded
+cozy-redis-operator redis-operator 4m1s True Release reconciliation succeeded
+cozy-telepresence telepresence 4m1s True Release reconciliation succeeded
+cozy-victoria-metrics-operator victoria-metrics-operator 4m1s True Release reconciliation succeeded
+tenant-root tenant-root 4m1s True Release reconciliation succeeded
+```
+
+### Dividing Control Plane and Worker Nodes
+
+Normally, Cozystack requires at least three worker nodes to run workloads in HA mode. Cozystack components have no
+tolerations that would allow them to run on control-plane nodes.
+
+However, it's common to have only three nodes for testing purposes, or to have only large hardware nodes that you want
+to use for both control-plane and worker workloads. In this case, remove the control-plane taint from the nodes.
+
+Example of removing control-plane taint from the nodes:
+
+```bash
+kubectl taint nodes --all node-role.kubernetes.io/control-plane-
+```
+
+## 3. Configure Storage
+
+Kubernetes needs a storage subsystem to provide persistent volumes to applications, but it doesn't include one of its own.
+Cozystack provides [LINSTOR](https://github.com/LINBIT/linstor-server) as a storage subsystem.
+
+In the following steps, we'll access the LINSTOR interface, create storage pools, and define storage classes.
+
+
+### 3.1. Check Storage Devices
+
+1. Set up an alias to access LINSTOR:
+
+ ```bash
+ alias linstor='kubectl exec -n cozy-linstor deploy/linstor-controller -- linstor'
+ ```
+
+1. List your nodes and check their readiness:
+
+ ```bash
+ linstor node list
+ ```
+
+ Example output shows node names and state:
+
+ ```console
+ +-------------------------------------------------------+
+ | Node | NodeType | Addresses | State |
+ |=======================================================|
+ | srv1 | SATELLITE | 192.168.100.11:3367 (SSL) | Online |
+ | srv2 | SATELLITE | 192.168.100.12:3367 (SSL) | Online |
+ | srv3 | SATELLITE | 192.168.100.13:3367 (SSL) | Online |
+ +-------------------------------------------------------+
+ ```
+
+1. List available empty devices:
+
+ ```bash
+ linstor physical-storage list
+ ```
+
+ Example output shows the same node names:
+
+ ```console
+ +--------------------------------------------+
+ | Size | Rotational | Nodes |
+ |============================================|
+ | 107374182400 | True | srv3[/dev/sdb] |
+ | | | srv1[/dev/sdb] |
+ | | | srv2[/dev/sdb] |
+ +--------------------------------------------+
+ ```
+
+
+
+### 3.2. Create Storage Pools
+
+1. Create storage pools using ZFS or LVM.
+
+ You can also restore previously created storage pools after a node reset.
+
+ {{< tabs name="create_storage_pools" >}}
+ {{% tab name="ZFS" %}}
+
+```bash
+linstor ps cdp zfs srv1 /dev/sdb --pool-name data --storage-pool data
+linstor ps cdp zfs srv2 /dev/sdb --pool-name data --storage-pool data
+linstor ps cdp zfs srv3 /dev/sdb --pool-name data --storage-pool data
+```
+
+It is [recommended](https://github.com/LINBIT/linstor-server/issues/463#issuecomment-3401472020)
+to set `failmode=continue` on ZFS storage pools to allow DRBD to handle disk failures instead of ZFS.
+
+```bash
+kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv1 -- zpool set failmode=continue data
+kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv2 -- zpool set failmode=continue data
+kubectl exec -ti -n cozy-linstor pod/linstor-satellite.srv3 -- zpool set failmode=continue data
+```
+
+ {{% /tab %}}
+ {{% tab name="LVM" %}}
+
+```bash
+linstor ps cdp lvm srv1 /dev/sdb --pool-name data --storage-pool data
+linstor ps cdp lvm srv2 /dev/sdb --pool-name data --storage-pool data
+linstor ps cdp lvm srv3 /dev/sdb --pool-name data --storage-pool data
+```
+
+ {{% /tab %}}
+ {{% tab name="Restore ZFS/LVM storage-pool on nodes after reset" %}}
+
+```bash
+# Print the restore command for each node; review the output, then run the commands
+for node in $(kubectl get nodes --no-headers -o custom-columns=":metadata.name"); do
+  echo "linstor storage-pool create zfs $node data data"
+done
+```
+
+ {{% /tab %}}
+ {{< /tabs >}}
+
+1. Check the results by listing the storage pools:
+
+ ```bash
+ linstor sp l
+ ```
+
+ Example output:
+
+ ```console
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ | StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
+ |=====================================================================================================================================|
+ | DfltDisklessStorPool | srv1 | DISKLESS | | | | False | Ok | srv1;DfltDisklessStorPool |
+ | DfltDisklessStorPool | srv2 | DISKLESS | | | | False | Ok | srv2;DfltDisklessStorPool |
+ | DfltDisklessStorPool | srv3 | DISKLESS | | | | False | Ok | srv3;DfltDisklessStorPool |
+ | data | srv1 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv1;data |
+ | data | srv2 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv2;data |
+ | data | srv3 | ZFS | data | 96.41 GiB | 99.50 GiB | True | Ok | srv3;data |
+ +-------------------------------------------------------------------------------------------------------------------------------------+
+ ```
+
+
+### 3.3. Create Storage Classes
+
+Create storage classes, one of which should be the default class.
+
+
+1. Create a file with storage class definitions.
+ Below is a sane default example providing two classes: `local` (default) and `replicated`.
+
+ **storageclasses.yaml:**
+
+ ```yaml
+ ---
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: local
+ annotations:
+ storageclass.kubernetes.io/is-default-class: "true"
+ provisioner: linstor.csi.linbit.com
+ parameters:
+ linstor.csi.linbit.com/storagePool: "data"
+ linstor.csi.linbit.com/layerList: "storage"
+ linstor.csi.linbit.com/allowRemoteVolumeAccess: "false"
+ volumeBindingMode: WaitForFirstConsumer
+ allowVolumeExpansion: true
+ ---
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: replicated
+ provisioner: linstor.csi.linbit.com
+ parameters:
+ linstor.csi.linbit.com/storagePool: "data"
+ linstor.csi.linbit.com/autoPlace: "3"
+ linstor.csi.linbit.com/layerList: "drbd storage"
+ linstor.csi.linbit.com/allowRemoteVolumeAccess: "true"
+ property.linstor.csi.linbit.com/DrbdOptions/auto-quorum: suspend-io
+ property.linstor.csi.linbit.com/DrbdOptions/Resource/on-no-data-accessible: suspend-io
+ property.linstor.csi.linbit.com/DrbdOptions/Resource/on-suspended-primary-outdated: force-secondary
+ property.linstor.csi.linbit.com/DrbdOptions/Net/rr-conflict: retry-connect
+ volumeBindingMode: Immediate
+ allowVolumeExpansion: true
+ ```
+
+1. Apply the storage class configuration:
+
+ ```bash
+ kubectl apply -f storageclasses.yaml
+ ```
+
+1. Check that the storage classes were successfully created:
+
+ ```bash
+ kubectl get storageclasses
+ ```
+
+ Example output:
+
+ ```console
+ NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
+ local (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 11m
+ replicated linstor.csi.linbit.com Delete Immediate true 11m
+ ```
+
+
+
+## 4. Configure Networking
+
+Next, we will configure how the Cozystack cluster can be accessed.
+This step has two options depending on your available infrastructure:
+
+- For your own bare metal or self-hosted VMs, choose the MetalLB option.
+ MetalLB is Cozystack's default load balancer.
+- For VMs and dedicated servers from cloud providers, choose the public IP setup.
+ [Most cloud providers don't support MetalLB](https://metallb.universe.tf/installation/clouds/).
+
+ Check out the [provider-specific installation]({{% ref "/docs/v1.3/install/providers" %}}) section.
+ It may have instructions for your provider, which you can use to deploy a production-ready cluster.
+
+### 4.a. MetalLB Setup
+
+Cozystack uses three types of IP addresses:
+
+- Node IPs: persistent and valid only within the cluster.
+- Virtual floating IP: used to access one of the nodes in the cluster and valid only within the cluster.
+- External access IPs: used by LoadBalancers to expose services outside the cluster.
+
+Services with external IPs can be exposed in two modes: L2 and BGP.
+L2 mode is simpler, but it requires all nodes to be in a single L2 domain and does not load-balance well.
+BGP mode requires a more complex setup -- you need BGP peers ready to accept announcements -- but it enables proper load balancing and provides more options for choosing IP address ranges.
+
+Select a range of unused IPs for the services; in this guide we will use the `192.168.100.200-192.168.100.250` range.
+If you use L2 mode, these IPs should either be from the same network as the nodes or have all necessary routes to them.
+
+For BGP mode, you will also need BGP peer IP addresses and the local and remote AS numbers. Here we will use `192.168.20.254` as the peer IP, with AS numbers 65000 (local) and 65001 (remote).
+
+Create and apply a file describing an address pool.
+
+**metallb-ip-address-pool.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ addresses:
+ # used to expose services outside the cluster
+ - 192.168.100.200-192.168.100.250
+ autoAssign: true
+ avoidBuggyIPs: false
+```
+
+```bash
+kubectl apply -f metallb-ip-address-pool.yml
+```
+
+Create and apply resources needed for an L2 or a BGP advertisement.
+
+{{< tabs name="metallb_announce" >}}
+{{% tab name="L2 mode" %}}
+L2Advertisement uses the name of the IPAddressPool resource we created previously.
+
+**metallb-l2-advertisement.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ ipAddressPools:
+ - cozystack
+```
+
+
+Apply changes.
+
+```bash
+kubectl apply -f metallb-l2-advertisement.yml
+```
+{{% /tab %}}
+{{% tab name="BGP mode" %}}
+First, create a separate BGPPeer resource for **each** peer.
+
+**metallb-bgp-peer.yml**
+```yaml
+apiVersion: metallb.io/v1beta2
+kind: BGPPeer
+metadata:
+ name: peer1
+ namespace: cozy-metallb
+spec:
+ myASN: 65000
+ peerASN: 65001
+ peerAddress: 192.168.20.254
+```
+
+
+Next, create a single BGPAdvertisement resource.
+
+**metallb-bgp-advertisement.yml**
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: BGPAdvertisement
+metadata:
+ name: cozystack
+ namespace: cozy-metallb
+spec:
+ ipAddressPools:
+ - cozystack
+```
+
+Apply changes.
+
+```bash
+kubectl apply -f metallb-bgp-peer.yml
+kubectl apply -f metallb-bgp-advertisement.yml
+```
+{{% /tab %}}
+{{< /tabs >}}
+
+
+Now that MetalLB is configured, enable `ingress` in the `tenant-root`:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '
+{"spec":{
+ "ingress": true
+}}'
+```
+
+To confirm successful configuration, check the HelmReleases `ingress` and `ingress-nginx-system`:
+
+```bash
+kubectl -n tenant-root get hr ingress ingress-nginx-system
+```
+
+Example of correct output:
+```console
+NAME AGE READY STATUS
+ingress 47m True Helm upgrade succeeded for release tenant-root/ingress.v3 with chart ingress@1.8.0
+ingress-nginx-system 47m True Helm upgrade succeeded for release tenant-root/ingress-nginx-system.v2 with chart cozy-ingress-nginx@0.35.1
+```
+
+Next, check the state of the `root-ingress-controller` service:
+
+```bash
+kubectl -n tenant-root get svc root-ingress-controller
+```
+
+The service should be deployed as `TYPE: LoadBalancer` and have a correct external IP:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller LoadBalancer 10.96.91.83 192.168.100.200 80/TCP,443/TCP 48m
+```
+
+### 4.b. Node Public IP Setup
+
+If your cloud provider does not support MetalLB, you can expose the ingress controller using external IPs on your nodes.
+
+If public IPs are attached directly to the nodes, specify them.
+If public IPs are provided via 1:1 NAT, as some clouds do, use the IP addresses of the **external** network interfaces.
+
+Here we will use `192.168.100.11`, `192.168.100.12`, and `192.168.100.13`.
+
+First, patch the Platform Package with IPs to expose:
+
+```bash
+kubectl patch packages.cozystack.io cozystack.cozystack-platform --type=merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "publishing": {
+ "externalIPs": [
+ "192.168.100.11",
+ "192.168.100.12",
+ "192.168.100.13"
+ ]
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+Next, enable `ingress` for the root tenant:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '{
+ "spec":{
+ "ingress": true
+ }
+}'
+```
+
+After that, your Ingress will be available on the specified IPs.
+Check it in the following way:
+
+```bash
+kubectl get svc -n tenant-root root-ingress-controller
+```
+
+The service should be deployed as `TYPE: ClusterIP` and list all the specified external IPs:
+
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller ClusterIP 10.96.91.83 192.168.100.11,192.168.100.12,192.168.100.13 80/TCP,443/TCP 48m
+```
+
+## 5. Finalize Installation
+
+### 5.1. Set Up Root Tenant Services
+
+Enable `etcd` and `monitoring` for the root tenant:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '
+{"spec":{
+ "ingress": true,
+ "monitoring": true,
+ "etcd": true
+}}'
+```
+
+### 5.2. Check the Cluster State and Composition
+
+Check the provisioned persistent volumes:
+
+```bash
+kubectl get pvc -n tenant-root
+```
+
+Example output:
+```console
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
+data-etcd-0 Bound pvc-4cbd29cc-a29f-453d-b412-451647cd04bf 10Gi RWO local 2m10s
+data-etcd-1 Bound pvc-1579f95a-a69d-4a26-bcc2-b15ccdbede0d 10Gi RWO local 115s
+data-etcd-2 Bound pvc-907009e5-88bf-4d18-91e7-b56b0dbfb97e 10Gi RWO local 91s
+grafana-db-1 Bound pvc-7b3f4e23-228a-46fd-b820-d033ef4679af 10Gi RWO local 2m41s
+grafana-db-2 Bound pvc-ac9b72a4-f40e-47e8-ad24-f50d843b55e4 10Gi RWO local 113s
+vmselect-cachedir-vmselect-longterm-0 Bound pvc-622fa398-2104-459f-8744-565eee0a13f1 2Gi RWO local 2m21s
+vmselect-cachedir-vmselect-longterm-1 Bound pvc-fc9349f5-02b2-4e25-8bef-6cbc5cc6d690 2Gi RWO local 2m21s
+vmselect-cachedir-vmselect-shortterm-0 Bound pvc-7acc7ff6-6b9b-4676-bd1f-6867ea7165e2 2Gi RWO local 2m41s
+vmselect-cachedir-vmselect-shortterm-1 Bound pvc-e514f12b-f1f6-40ff-9838-a6bda3580eb7 2Gi RWO local 2m40s
+vmstorage-db-vmstorage-longterm-0 Bound pvc-e8ac7fc3-df0d-4692-aebf-9f66f72f9fef 10Gi RWO local 2m21s
+vmstorage-db-vmstorage-longterm-1 Bound pvc-68b5ceaf-3ed1-4e5a-9568-6b95911c7c3a 10Gi RWO local 2m21s
+vmstorage-db-vmstorage-shortterm-0 Bound pvc-cee3a2a4-5680-4880-bc2a-85c14dba9380 10Gi RWO local 2m41s
+vmstorage-db-vmstorage-shortterm-1 Bound pvc-d55c235d-cada-4c4a-8299-e5fc3f161789 10Gi RWO local 2m41s
+```
+
+Check that all pods are running:
+
+
+```bash
+kubectl get pod -n tenant-root
+```
+
+Example output:
+
+```console
+NAME READY STATUS RESTARTS AGE
+etcd-0 1/1 Running 0 2m1s
+etcd-1 1/1 Running 0 106s
+etcd-2 1/1 Running 0 82s
+grafana-db-1 1/1 Running 0 119s
+grafana-db-2 1/1 Running 0 13s
+grafana-deployment-74b5656d6-5dcvn 1/1 Running 0 90s
+grafana-deployment-74b5656d6-q5589 1/1 Running 1 (105s ago) 111s
+root-ingress-controller-6ccf55bc6d-pg79l 2/2 Running 0 2m27s
+root-ingress-controller-6ccf55bc6d-xbs6x 2/2 Running 0 2m29s
+root-ingress-defaultbackend-686bcbbd6c-5zbvp 1/1 Running 0 2m29s
+vmalert-vmalert-644986d5c-7hvwk 2/2 Running 0 2m30s
+vmalertmanager-alertmanager-0 2/2 Running 0 2m32s
+vmalertmanager-alertmanager-1 2/2 Running 0 2m31s
+vminsert-longterm-75789465f-hc6cz 1/1 Running 0 2m10s
+vminsert-longterm-75789465f-m2v4t 1/1 Running 0 2m12s
+vminsert-shortterm-78456f8fd9-wlwww 1/1 Running 0 2m29s
+vminsert-shortterm-78456f8fd9-xg7cw 1/1 Running 0 2m28s
+vmselect-longterm-0 1/1 Running 0 2m12s
+vmselect-longterm-1 1/1 Running 0 2m12s
+vmselect-shortterm-0 1/1 Running 0 2m31s
+vmselect-shortterm-1 1/1 Running 0 2m30s
+vmstorage-longterm-0 1/1 Running 0 2m12s
+vmstorage-longterm-1 1/1 Running 0 2m12s
+vmstorage-shortterm-0 1/1 Running 0 2m32s
+vmstorage-shortterm-1 1/1 Running 0 2m31s
+```
+
+Now you can get the public IP of the ingress controller:
+
+```bash
+kubectl get svc -n tenant-root root-ingress-controller
+```
+
+Example output:
+```console
+NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
+root-ingress-controller LoadBalancer 10.96.16.141 192.168.100.200 80:31632/TCP,443:30113/TCP 3m33s
+```
+
+### 5.3. Access the Cozystack Dashboard
+
+If you included `dashboard` in `publishing.exposedServices` of your Platform Package, the Cozystack Dashboard should already be available.
+
+If the initial Package did not include it, patch the Platform Package:
+
+```bash
+kubectl patch packages.cozystack.io cozystack.cozystack-platform --type=json \
+ -p '[{"op": "add", "path": "/spec/components/platform/values/publishing/exposedServices/-", "value": "dashboard"}]'
+```
+
+Open `dashboard.example.org` to access the system dashboard, where `example.org` is your domain specified for `tenant-root`.
+There you will see a login window which expects an authentication token.
+
+Get the authentication token for `tenant-root`:
+
+```bash
+kubectl get secret -n tenant-root tenant-root -o go-template='{{ printf "%s\n" (index .data "token" | base64decode) }}'
+```
+
+Log in using the token.
+Now you can use the dashboard as an administrator.
+
+Further on, you will be able to:
+
+- Set up OIDC to authenticate with it instead of tokens.
+- Create user tenants and grant users access to them via tokens or OIDC.
+
+### 5.4. Access Metrics in Grafana
+
+Use `grafana.example.org` to access the system monitoring, where `example.org` is your domain specified for `tenant-root`.
+In this example, `grafana.example.org` is located at 192.168.100.200.
+
+- Login: `admin`
+- Password: retrieve it with the following command:
+ ```bash
+ kubectl get secret -n tenant-root grafana-admin-password -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
+ ```
+
+
+## Next Steps
+
+- [Configure OIDC]({{% ref "/docs/v1.3/operations/oidc/" %}}).
+- [Create a user tenant]({{% ref "/docs/v1.3/getting-started/create-tenant" %}}).
diff --git a/content/en/docs/v1.3/install/hardware-requirements.md b/content/en/docs/v1.3/install/hardware-requirements.md
new file mode 100644
index 00000000..df252099
--- /dev/null
+++ b/content/en/docs/v1.3/install/hardware-requirements.md
@@ -0,0 +1,121 @@
+---
+title: "Hardware requirements"
+linkTitle: "Hardware Requirements"
+description: "Define the hardware requirements for your Cozystack use case."
+weight: 5
+aliases:
+ - /docs/v1.3/getting-started/hardware-requirements
+ - /docs/v1.3/talos/hardware-requirements
+---
+
+Cozystack utilizes [Talos Linux]({{% ref "/docs/v1.3/guides/talos" %}}), a minimalistic Linux distribution designed solely to run Kubernetes.
+Usually, this means you cannot share a server with any services other than those run by Cozystack.
+The good news is that whichever service you need, Cozystack will run it perfectly: securely, efficiently, and
+in a fully containerized or virtualized environment.
+
+Hardware requirements depend on your usage scenario.
+Below are several common deployment options; review them to determine which setup fits your needs best.
+
+{{< include "docs/v1.3/install/_include/hardware-config-tabs.md" >}}
+
+**Compute:**
+
+- Three or more physical or virtual servers with amd64/x86_64 architecture, with the specifications shown in the table above.
+- Virtualized servers need nested virtualization enabled and the CPU model set to `host` (without emulation).
+- PXE installation requires an extra management instance connected to the same network, running any Linux system able to run a Docker container.
+  It must also support the `x86-64-v2` microarchitecture level; for a VM, this is usually achieved by setting the CPU model to `host`.
+
+**Storage:**
+
+Storage in a Cozystack cluster is used both by the system and by user workloads.
+There are two options: having a dedicated disk for each role, or allocating space on the system disk for user storage.
+Low latency is critical for control-plane node storage, so local SSDs are recommended.
+
+**Using two disks**
+
+Separating disks by role is the primary and more reliable option.
+
+- **Primary Disk**: This disk contains the Talos Linux operating system, essential kernel modules,
+  Cozystack system base pods, logs, and base container images. The etcd cluster also runs on this disk, so use a low-latency volume, preferably a local SSD.
+
+ Minimum sizes vary by configuration (see table above). Talos installation expects `/dev/sda` as the system disk (virtio drives usually appear as `/dev/vda`).
+
+- **Secondary Disk**: Dedicated to workload data and can be increased based on workload requirements.
+ Used for provisioning volumes via PersistentVolumeClaims (PVCs).
+
+ Minimum sizes vary by configuration (see table above). Disk path (usually `/dev/sdb`) will be defined in the storage configuration.
+ It does not affect the Talos installation.
+
+  Learn more about configuring a LINSTOR StorageClass in the
+  [Deploy Cozystack tutorial]({{% ref "/docs/v1.3/getting-started/install-cozystack#3-configure-storage" %}}).
+
+**Using a single disk**
+
+It's possible to use a single disk with space allocated for user storage;
+see [How to install Talos on a single-disk machine]({{% ref "/docs/v1.3/install/how-to/single-disk" %}}).
+Using a local SSD is recommended.
+
+**Networking:**
+
+- Machines must be allowed to use additional IPs, or an external load balancer must be available.
+ Using additional IPs is disabled by default and must be enabled explicitly in most public clouds.
+- Additional public IPs for ingress and virtual machines may be needed. Check if your public cloud provider supports floating IPs.
+- Routable FQDN domain (or use [nip.io](https://nip.io/) with dash notation)
+- Located in the same L2 network segment
+- Anti-spoofing disabled (required for MetalLB)
+- Minimum 1 Gbps (10 Gbps recommended for production)
+- Low latency between cluster nodes
+
+## Production Cluster
+
+For a production environment, consider the following:
+
+**Compute:**
+
+- Having at least **three worker nodes** is mandatory for running highly available applications.
+ If one of the three nodes becomes unavailable due to hardware failure or maintenance, you’ll be operating in a degraded state.
+ While database clusters and replicated storage will continue functioning, starting new database instances or creating replicated volumes won’t be possible.
+- Having separate servers for Kubernetes master nodes is highly recommended, although not required.
+  It’s much easier to take a pure worker node offline for maintenance or upgrades than if it also serves as a management node.
+
+**Networking:**
+
+- In a setup with multiple data centers, it’s ideal to have direct, dedicated optical links between them.
+- Servers must support out-of-band management (IPMI, iLO, iDRAC, etc.) to allow remote monitoring, recovery, and management.
+
+## Distributed Cluster
+
+You can build a [distributed cluster]({{% ref "/docs/v1.3/operations/stretched/" %}}) with Cozystack.
+
+**Networking:**
+
+- A distributed cluster requires a fast and reliable network, and it **must** have a low RTT (Round-Trip Time), as
+  Kubernetes is not designed to operate efficiently over high-latency networks.
+
+  Data centers in the same city typically have less than 1 ms latency, which is ideal.
+  The *maximum acceptable* RTT is 10 ms.
+  Running Kubernetes or replicated storage over a network with RTT above 20 ms is strongly discouraged.
+  To measure actual RTT, you can use the `ping` command (see the example after this list).
+
+- It's also recommended to have at least 2–3 nodes per data center in a distributed cluster.
+  This ensures that the cluster can survive the loss of one data center without major disruption.
+
+- If it's hard to maintain a single address space between data centers, you can enable **KubeSpan**,
+  a Talos Linux feature that creates a WireGuard-backed full-mesh VPN between nodes, instead of relying on an external VPN.
+
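+A quick sketch of that RTT check (the node address is a placeholder; run it from a node in one data center towards a node in another):
+
+```bash
+# Send 10 probes and compare the avg value in the summary line against the 10 ms limit
+ping -c 10 192.168.200.11
+```
+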
+## Highly Available Applications
+
+Achieving high availability adds to the basic production environment requirements.
+
+**Networking:**
+
+- It is recommended to have multiple 10 Gbps (or faster) network cards.
+ You can separate storage and application traffic by assigning them to different network interfaces.
+
+- Expect a significant amount of horizontal, inter-node traffic inside clusters.
+ It is usually caused by multiple replicas of services and databases deployed across different nodes exchanging data.
+ Also, virtual machines with live migration require replicated volumes, which further increases the amount of traffic.
+
+## System Resource Planning
+
+For detailed recommendations on system resource allocation (CPU and memory) per node, based on cluster scale and number of tenants, refer to [System Resource Planning Recommendations]({{% ref "/docs/v1.3/install/resource-planning" %}}).
diff --git a/content/en/docs/v1.3/install/how-to/_index.md b/content/en/docs/v1.3/install/how-to/_index.md
new file mode 100644
index 00000000..e95727d2
--- /dev/null
+++ b/content/en/docs/v1.3/install/how-to/_index.md
@@ -0,0 +1,6 @@
+---
+title: "Guides for Specific Cases in Cozystack Deployment"
+linkTitle: "How-Tos"
+description: ""
+weight: 50
+---
diff --git a/content/en/docs/v1.3/install/how-to/bonding.md b/content/en/docs/v1.3/install/how-to/bonding.md
new file mode 100644
index 00000000..99f60772
--- /dev/null
+++ b/content/en/docs/v1.3/install/how-to/bonding.md
@@ -0,0 +1,207 @@
+---
+title: "How to configure network bonding (LACP)"
+linkTitle: "Configure bonding (LACP)"
+description: "How to configure LACP (802.3ad) network bonding for link aggregation and redundancy"
+weight: 120
+---
+
+Network bonding allows you to combine multiple physical network interfaces into a single logical interface.
+This provides increased bandwidth and link redundancy.
+
+LACP (Link Aggregation Control Protocol, IEEE 802.3ad) is the most common bonding mode,
+which dynamically negotiates link aggregation with the network switch.
+
+{{% alert color="warning" %}}
+LACP requires configuration on both the server and the network switch.
+Make sure your switch has a corresponding LACP port-channel configured for the server's ports.
+{{% /alert %}}
+
+## Identify network interfaces
+
+After running `talm template`, the generated node configuration file will contain
+a comment block with discovered network interfaces:
+
+```yaml
+machine:
+ network:
+ # -- Discovered interfaces:
+ # eno1:
+ # hardwareAddr: aa:bb:cc:dd:ee:f0
+ # busPath: 0000:02:00.0
+ # driver: tg3
+ # vendor: Broadcom Inc. and subsidiaries
+ # product: NetXtreme BCM5719 Gigabit Ethernet PCIe
+ # eno2:
+ # hardwareAddr: aa:bb:cc:dd:ee:f1
+ # busPath: 0000:02:00.1
+ # driver: tg3
+ # vendor: Broadcom Inc. and subsidiaries
+ # product: NetXtreme BCM5719 Gigabit Ethernet PCIe
+ # eth0:
+ # hardwareAddr: aa:bb:cc:dd:ee:f2
+ # busPath: 0000:04:00.0
+ # driver: bnx2x
+ # vendor: Broadcom Inc. and subsidiaries
+ # product: NetXtreme II BCM57810 10 Gigabit Ethernet
+ # eth1:
+ # hardwareAddr: aa:bb:cc:dd:ee:f3
+ # busPath: 0000:04:00.1
+ # driver: bnx2x
+ # vendor: Broadcom Inc. and subsidiaries
+ # product: NetXtreme II BCM57810 10 Gigabit Ethernet
+```
+
+Choose the interfaces you want to bond. Typically these are ports of the same speed
+connected to the same switch or switch stack. Note the `busPath` values — you will need them.
+
+## Configure bonding
+
+Edit the generated node configuration file (e.g. `nodes/node1.yaml`) and replace the default
+`machine.network.interfaces` section with a bond configuration:
+
+```yaml
+machine:
+ network:
+ interfaces:
+ - interface: bond0
+ dhcp: false
+ bond:
+ mode: 802.3ad
+ adSelect: bandwidth
+ miimon: 100
+ updelay: 200
+ downdelay: 200
+ minLinks: 1
+ xmitHashPolicy: encap3+4
+ deviceSelectors:
+ - busPath: "0000:04:00.0"
+ - busPath: "0000:04:00.1"
+ addresses:
+ - 192.168.100.11/24
+ routes:
+ - network: 0.0.0.0/0
+ gateway: 192.168.100.1
+```
+
+### Bond parameters explained
+
+| Parameter | Value | Description |
+| --- | --- | --- |
+| `mode` | `802.3ad` | LACP — dynamic link aggregation with switch negotiation |
+| `adSelect` | `bandwidth` | Selects the active aggregator by highest total bandwidth |
+| `miimon` | `100` | Link monitoring interval in milliseconds |
+| `updelay` | `200` | Delay (ms) before a recovered link becomes active |
+| `downdelay` | `200` | Delay (ms) before a failed link is declared down |
+| `minLinks` | `1` | Minimum number of active links to keep the bond up |
+| `xmitHashPolicy` | `encap3+4` | Hash by IP and TCP/UDP port for load distribution across links |
+
+### Selecting interfaces
+
+The recommended way to select bond members is by PCI bus path using `deviceSelectors`.
+This is more reliable than interface names, which may change across reboots:
+
+```yaml
+bond:
+ deviceSelectors:
+ - busPath: "0000:04:00.0"
+ - busPath: "0000:04:00.1"
+```
+
+Alternatively, you can select by interface name:
+
+```yaml
+bond:
+ interfaces:
+ - eth0
+ - eth1
+```
+
+Or by hardware address:
+
+```yaml
+bond:
+ deviceSelectors:
+ - hardwareAddr: "aa:bb:cc:dd:ee:f2"
+ - hardwareAddr: "aa:bb:cc:dd:ee:f3"
+```
+
+## VLAN on top of bond
+
+You can create VLAN interfaces on top of the bond.
+This is useful for separating traffic (e.g. management, storage, tenant networks):
+
+```yaml
+machine:
+ network:
+ interfaces:
+ - interface: bond0
+ dhcp: false
+ bond:
+ mode: 802.3ad
+ adSelect: bandwidth
+ miimon: 100
+ updelay: 200
+ downdelay: 200
+ minLinks: 1
+ xmitHashPolicy: encap3+4
+ deviceSelectors:
+ - busPath: "0000:04:00.0"
+ - busPath: "0000:04:00.1"
+ addresses:
+ - 192.168.100.11/24
+ routes:
+ - network: 0.0.0.0/0
+ gateway: 192.168.100.1
+ vlans:
+ - vlanId: 100
+ addresses:
+ - 10.0.0.11/24
+```
+
+## Floating IP (VIP) with bonding
+
+For control plane nodes, place the `vip` section on the interface (or VLAN)
+that is used for the cluster API endpoint:
+
+```yaml
+machine:
+ network:
+ interfaces:
+ - interface: bond0
+ dhcp: false
+ bond:
+ mode: 802.3ad
+ adSelect: bandwidth
+ miimon: 100
+ updelay: 200
+ downdelay: 200
+ minLinks: 1
+ xmitHashPolicy: encap3+4
+ deviceSelectors:
+ - busPath: "0000:04:00.0"
+ - busPath: "0000:04:00.1"
+ addresses:
+ - 192.168.100.11/24
+ routes:
+ - network: 0.0.0.0/0
+ gateway: 192.168.100.1
+ vip:
+ ip: 192.168.100.10
+```
+
+Make sure the floating IP matches the one configured in `values.yaml`.
+
+## Apply configuration
+
+After editing all node files, apply the configuration as usual:
+
+```bash
+talm apply -f nodes/node1.yaml -i
+talm apply -f nodes/node2.yaml -i
+talm apply -f nodes/node3.yaml -i
+```
+
+{{% alert color="info" %}}
+The `-i` (`--insecure`) flag is only needed for the first apply, when nodes are in maintenance mode.
+For already initialized nodes, omit the flag: `talm apply -f nodes/node1.yaml`.
+{{% /alert %}}
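+
+To verify that the bond came up after applying the configuration, you can query the node's link statuses (a sketch; the node IP is a placeholder, and it assumes you have `talosctl` access to the node):
+
+```bash
+# bond0 should appear in the list of links, with the selected NICs as its members
+talosctl -n 192.168.100.11 get links
+```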
diff --git a/content/en/docs/v1.3/install/how-to/hugepages.md b/content/en/docs/v1.3/install/how-to/hugepages.md
new file mode 100644
index 00000000..3e22702a
--- /dev/null
+++ b/content/en/docs/v1.3/install/how-to/hugepages.md
@@ -0,0 +1,62 @@
+---
+title: "How to enable Hugepages"
+linkTitle: "Enable Hugepages"
+description: "How to enable Hugepages"
+weight: 130
+---
+
+Hugepages for Cozystack can be enabled during the initial installation or at any time afterwards.
+Applying this configuration after installation requires a full node reboot.
+
+Read more in the Linux Kernel documentation: [HugeTLB Pages](https://docs.kernel.org/admin-guide/mm/hugetlbpage.html).
+
+
+## Using Talm
+
+Requires Talm `v0.16.0` or later.
+
+1. Add the following lines to `values.yaml`:
+
+ ```yaml
+ ...
+ certSANs: []
+ nr_hugepages: 3000
+ ```
+
+   `vm.nr_hugepages` sets the number of 2 MiB huge pages.
+
+1. Apply the configuration:
+
+ ```bash
+ talm apply -f nodes/node0.yaml
+ ```
+
+1. Finally, reboot the nodes:
+
+ ```bash
+ talm -f nodes/node0.yaml reboot
+ ```
+
+## Using talosctl
+
+1. Add the following lines to your node template:
+
+ ```yaml
+ machine:
+ sysctls:
+ vm.nr_hugepages: "3000"
+ ```
+
+   `vm.nr_hugepages` sets the number of 2 MiB huge pages.
+
+1. Apply the configuration:
+
+ ```bash
+ talosctl apply -f nodetemplate.yaml -n 192.168.123.11 -e 192.168.123.11
+ ```
+
+1. Reboot the nodes:
+
+ ```bash
+ talosctl reboot -n 192.168.123.11 -e 192.168.123.11
+ ```
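+
+To confirm the allocation after the nodes come back, check the nodes' huge page capacity (a quick sketch):
+
+```bash
+# Each node should report 2 MiB huge pages matching vm.nr_hugepages under Capacity and Allocatable
+kubectl describe nodes | grep hugepages-2Mi
+```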
diff --git a/content/en/docs/v1.3/install/how-to/kubespan.md b/content/en/docs/v1.3/install/how-to/kubespan.md
new file mode 100644
index 00000000..eb93eca0
--- /dev/null
+++ b/content/en/docs/v1.3/install/how-to/kubespan.md
@@ -0,0 +1,38 @@
+---
+title: How to Enable KubeSpan
+linkTitle: Enable KubeSpan
+description: "How to Enable KubeSpan."
+weight: 120
+---
+
+Talos Linux provides a full mesh WireGuard network for your cluster.
+
+To enable this functionality, you need to configure [KubeSpan](https://www.talos.dev/{{< version-pin "talos_minor" >}}/talos-guides/network/kubespan/) and [Cluster Discovery](https://www.talos.dev/{{< version-pin "talos_minor" >}}/kubernetes-guides/configuration/discovery/) in your Talos Linux configuration:
+
+```yaml
+machine:
+ network:
+ kubespan:
+ enabled: true
+cluster:
+ discovery:
+ enabled: true
+```
+
+Since KubeSpan encapsulates traffic into a WireGuard tunnel, Kube-OVN should also be configured with a lower MTU value.
+
+To achieve this, add the following to the `networking` component of your Platform Package:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ # ...
+ components:
+ networking:
+ values:
+ kube-ovn:
+ mtu: 1222
+```
diff --git a/content/en/docs/v1.3/install/how-to/public-ip.md b/content/en/docs/v1.3/install/how-to/public-ip.md
new file mode 100644
index 00000000..a6cbaf7b
--- /dev/null
+++ b/content/en/docs/v1.3/install/how-to/public-ip.md
@@ -0,0 +1,24 @@
+---
+title: "Public-network Kubernetes deployment"
+linkTitle: "Deploy with public networks"
+description: ""
+weight: 110
+---
+
+A Kubernetes cluster for Cozystack can be deployed using only public networks:
+
+- Both management and worker nodes have public IP addresses.
+- Worker nodes connect to the management nodes over the public Internet, without a private internal network or VPN.
+
+Such a setup is not recommended for production, but it can be used for research and testing
+when hosting limitations prevent provisioning a private network.
+
+To enable this setup when deploying with `talosctl`, add the following data in the node configuration files:
+
+```yaml
+cluster:
+  controlPlane:
+    endpoint: https://<public-ip>:6443
+```
+
+Replace `<public-ip>` with the public IP address used as the cluster API endpoint.
+
+For `talm`, append the same lines at the end of the first node's configuration file, such as `nodes/node1.yaml`.
diff --git a/content/en/docs/v1.3/install/how-to/single-disk.md b/content/en/docs/v1.3/install/how-to/single-disk.md
new file mode 100644
index 00000000..c726fcf0
--- /dev/null
+++ b/content/en/docs/v1.3/install/how-to/single-disk.md
@@ -0,0 +1,74 @@
+---
+title: "How to install Talos on a single-disk machine"
+linkTitle: "Install on a single disk"
+description: "How to install Talos on a single-disk machine, allocating space on system disk for user storage"
+weight: 100
+aliases:
+ - /docs/v1.3/operations/faq/single-disk-installation
+---
+
+The default Talos setup assumes that each node has primary and secondary disks, used for system and user storage, respectively.
+However, it's possible to use a single disk and allocate part of it for user storage.
+
+This configuration must be applied with the first [`talosctl apply`]({{% ref "/docs/v1.3/install/kubernetes/talosctl#3-apply-node-configuration" %}})
+or [`talm apply`]({{% ref "/docs/v1.3/install/kubernetes/talm#3-apply-node-configuration" %}})
+— the one with the `-i` (`--insecure`) flag.
+Applying changes after initialization will not have any effect.
+
+For `talosctl`, append the following lines to `patch.yaml`:
+
+```yaml
+---
+apiVersion: v1alpha1
+kind: VolumeConfig
+name: EPHEMERAL
+provisioning:
+ minSize: 70GiB
+
+---
+apiVersion: v1alpha1
+kind: UserVolumeConfig
+name: data-storage
+provisioning:
+ diskSelector:
+ match: disk.transport == 'nvme'
+ minSize: 400GiB
+```
+
+For `talm`, append the same lines at the end of the first node's configuration file, such as `nodes/node1.yaml`.
+
+Read more in the Talos documentation: https://www.talos.dev/{{< version-pin "talos_minor" >}}/talos-guides/configuration/disk-management/.
+
+After applying the configuration, wipe the `data-storage` partition:
+
+```bash
+# Start a privileged debug container on the node
+kubectl -n kube-system debug -it --profile sysadmin --image=alpine node/node1
+
+# The following commands run inside the debug container
+apk add util-linux
+
+umount /dev/nvme0n1p6   # the partition allocated for user storage
+rm -rf /host/var/mnt/data-storage
+wipefs -a /dev/nvme0n1p6
+exit
+```
+
+When the storage is configured, add the new partition to LINSTOR:
+```bash
+linstor ps cdp zfs node1 /dev/nvme0n1p6 --pool-name data --storage-pool data1
+```
+
+Check the result:
+```bash
+linstor sp l
+```
+
+Output will be similar to this example:
+
+```text
++---------------------------------------------------------------------------------------------------------------------------------------+
+| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
+|=======================================================================================================================================|
+| DfltDisklessStorPool | node1 | DISKLESS | | | | False | Ok | node1;DfltDisklessStorPool |
+| data | node1 | ZFS | data | 351.46 GiB | 476 GiB | True | Ok | node1;data |
+| data1 | node1 | ZFS | data | 378.93 GiB | 412 GiB | True | Ok | node1;data1 |
+```
diff --git a/content/en/docs/v1.3/install/kubernetes/_index.md b/content/en/docs/v1.3/install/kubernetes/_index.md
new file mode 100644
index 00000000..47df7f0a
--- /dev/null
+++ b/content/en/docs/v1.3/install/kubernetes/_index.md
@@ -0,0 +1,43 @@
+---
+title: "Installing and Configuring Kubernetes Cluster"
+linkTitle: "2. Install Kubernetes"
+description: "Step 2: Installing and configuring a Kubernetes cluster ready for Cozystack installation."
+weight: 20
+aliases:
+ - /docs/v1.3/operations/talos/configuration
+ - /docs/v1.3/talos/bootstrap
+ - /docs/v1.3/talos/configuration
+---
+
+
+**The second step** in deploying a Cozystack cluster is to install and configure a Kubernetes cluster.
+The result is a Kubernetes cluster installed, configured, and ready to install Cozystack.
+
+If this is your first time installing Cozystack, [start with the Cozystack tutorial]({{% ref "/docs/v1.3/getting-started" %}}).
+
+## Installation Options
+
+### Talos Linux (Recommended)
+
+For production deployments, Cozystack recommends [Talos Linux]({{% ref "/docs/v1.3/guides/talos" %}}) as the underlying operating system.
+Before using any of the methods below, you must have [installed Talos Linux]({{% ref "/docs/v1.3/install/talos" %}}).
+
+There are several methods to configure Talos nodes and bootstrap a Kubernetes cluster:
+
+- **Recommended**: [using Talm]({{% ref "./talm" %}}), a declarative CLI tool with ready-made presets for Cozystack, which uses the Talos API under the hood.
+- [Using `talos-bootstrap`]({{% ref "./talos-bootstrap" %}}), an interactive script for bootstrapping Kubernetes clusters on Talos OS.
+- [Using talosctl]({{% ref "./talosctl" %}}), a specialized command-line tool for managing Talos.
+- [Air-gapped installation]({{% ref "./air-gapped" %}}) is possible with Talm or talosctl.
+
+### Generic Kubernetes
+
+Cozystack can also be deployed on other Kubernetes distributions:
+
+- [Generic Kubernetes]({{% ref "./generic" %}}) — deploy Cozystack on k3s, kubeadm, RKE2, or other distributions.
+
+If you encounter problems with installation, refer to the [Troubleshooting section]({{% ref "./troubleshooting" %}}).
+
+## Further Steps
+
+- After installing and configuring a Kubernetes cluster, you are ready to
+ [install and configure Cozystack]({{% ref "/docs/v1.3/install/cozystack" %}}).
diff --git a/content/en/docs/v1.3/install/kubernetes/air-gapped.md b/content/en/docs/v1.3/install/kubernetes/air-gapped.md
new file mode 100644
index 00000000..d45723b6
--- /dev/null
+++ b/content/en/docs/v1.3/install/kubernetes/air-gapped.md
@@ -0,0 +1,374 @@
+---
+title: Bootstrap an Air-Gapped Cluster
+linkTitle: Air-Gapped
+description: "Bootstrap a Cozystack cluster in an isolated (air-gapped) environment with container registry mirrors."
+weight: 20
+aliases:
+ - /docs/v1.3/operations/talos/configuration/air-gapped
+ - /docs/v1.3/talos/bootstrap/air-gapped
+---
+
+## Introduction
+
+This guide outlines the steps to bootstrap a Cozystack cluster in an **air-gapped environment**.
+
+**Air-gapped** installation means that the cluster has no direct access to the Internet.
+All necessary resources, such as images and metadata, must be available on the private network.
+
+## Configuring Talos Nodes
+
+{{% alert color="info" %}}
+When installing with Talm, it's enough to make all of the changes below once in `./templates/_helpers.tpl` and then build the node configuration files with `talm template`.
+
+When installing with `talosctl`, make the changes in `patch.yaml` and `patch-controlplane.yaml`.
+{{% /alert %}}
+
+## 1. Configure NTP Servers
+
+Accurate time synchronization is critical for the cluster.
+In your Talos machine configuration, set **local NTP servers** that are accessible inside your private network:
+
+```yaml
+machine:
+ time:
+ servers:
+ # example values
+ - 192.168.0.4
+ - 10.10.0.5
+```
+
+Ensure that these NTP servers are reachable from the first Talos node.
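+
+Once the configuration is applied, you can verify time synchronization on a node.
+A minimal check (the node IP is a placeholder):
+
+```bash
+talosctl time -n <node-ip>
+```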
+
+## 2. Configure Container Registry Mirrors
+
+Since the cluster cannot access public container registries, it needs to use local mirrors of them.
+Setting up such mirrors is out of scope for this guide.
+
+Update your machine configuration in the following way,
+providing the IP addresses and ports of your local mirrors for each registry:
+
+```yaml
+machine:
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - http://10.0.0.1:8082
+ ghcr.io:
+ endpoints:
+ - http://10.0.0.1:8083
+ gcr.io:
+ endpoints:
+ - http://10.0.0.1:8084
+ registry.k8s.io:
+ endpoints:
+ - http://10.0.0.1:8085
+ quay.io:
+ endpoints:
+ - http://10.0.0.1:8086
+ cr.fluentbit.io:
+ endpoints:
+ - http://10.0.0.1:8087
+ docker-registry3.mariadb.com:
+ endpoints:
+ - http://10.0.0.1:8088
+ config:
+ "10.0.0.1:8082":
+ tls:
+ insecureSkipVerify: true
+ auth:
+ username: myuser
+ password: mypass
+```
+
+The values under `config` (mirror address, TLS settings, and `auth` credentials) are examples; replace them with the real credentials for your mirrors.
+Make sure your local registry proxies mirror all required images for Talos and Kubernetes components.
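+
+As a quick sanity check that a mirror responds, you can query the registry API directly.
+This sketch uses the example address and credentials from above; the exact endpoints available depend on your registry software:
+
+```bash
+curl -u myuser:mypass http://10.0.0.1:8082/v2/_catalog
+```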
+
+## 3. Add CA Certificate
+
+To use a private Certificate Authority, add its certificate to the nodes by appending a `TrustedRootsConfig` document to the machine configuration:
+
+```yaml
+# talm: nodes=["10.10.10.10"], endpoints=["10.10.10.10"], templates=["templates/controlplane.yaml"]
+# THIS FILE IS AUTOGENERATED. PREFER TEMPLATE EDITS OVER MANUAL ONES.
+machine:
+  # ... existing machine configuration ...
+cluster:
+  # ... existing cluster configuration ...
+---
+apiVersion: v1alpha1
+kind: TrustedRootsConfig
+name: my-enterprise-ca
+certificates: |
+ -----BEGIN CERTIFICATE-----
+ ...
+ -----END CERTIFICATE-----
+```
+
+## 4. Apply Changes
+
+After you have made the changes above, you can apply the configuration and bootstrap a cluster:
+
+### Using Talm
+
+Rebuild the node configuration files and apply them to each node:
+
+```bash
+talm template -e <node-ip> -n <node-ip> -t templates/controlplane.yaml -i > nodes/node1.yaml
+talm apply -f nodes/node1.yaml
+# repeat for each node
+```
+
+Finally, bootstrap the cluster as usual:
+
+```bash
+talm bootstrap -f nodes/node1.yaml
+```
+
+Read the [Talm configuration guide]({{% ref "/docs/v1.3/install/kubernetes/talm" %}}) to learn more.
+
+### Using talosctl
+
+Apply the configuration to each node:
+
+```bash
+talosctl apply -f controlplane.yaml -n <node-ip> -e <node-ip> -i
+```
+
+Finally, bootstrap the cluster using one of the nodes:
+
+```bash
+talosctl bootstrap -n <node-ip> -e <node-ip>
+```
+
+Read the [`talosctl` configuration guide]({{% ref "/docs/v1.3/install/kubernetes/talosctl" %}}) to learn more.
+
+## 5. Configure Container Registry Mirrors for Tenant Kubernetes
+
+Tenant Kubernetes clusters in Cozystack use [Kamaji](https://kamaji.clastix.io/) for the control plane.
+The control plane components run as pods on the management cluster nodes,
+so they automatically use the registry mirrors configured in [step 2](#2-configure-container-registry-mirrors) for Talos.
+
+However, tenant **worker nodes** run as separate virtual machines with their own containerd instance.
+These worker nodes need a separate registry mirror configuration.
+
+To perform this configuration, your Cozystack cluster must be running version v0.32.0 or later
+(deploy a new cluster or upgrade an existing one).
+Check your current cluster version with:
+
+```bash
+kubectl get deploy -n cozy-system cozystack -oyaml | grep installer
+```
+
+### Option A: Configure via platform package
+
+The platform package can automatically generate the `patch-containerd` secret
+from the `registries` section in the platform values.
+
+Add the `registries` section to your **cozystack-platform.yaml**:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ # ... your existing publishing, networking, etc. ...
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - http://10.0.0.1:8082
+ ghcr.io:
+ endpoints:
+ - http://10.0.0.1:8083
+ gcr.io:
+ endpoints:
+ - http://10.0.0.1:8084
+ registry.k8s.io:
+ endpoints:
+ - http://10.0.0.1:8085
+ quay.io:
+ endpoints:
+ - http://10.0.0.1:8086
+ cr.fluentbit.io:
+ endpoints:
+ - http://10.0.0.1:8087
+ docker-registry3.mariadb.com:
+ endpoints:
+ - http://10.0.0.1:8088
+ config:
+ "10.0.0.1:8082":
+ tls:
+ insecureSkipVerify: true
+ auth:
+ username: myuser
+ password: mypass
+```
+
+Then apply it:
+
+```bash
+kubectl apply -f cozystack-platform.yaml
+```
+
+This will create a `patch-containerd` secret in the `cozy-system` namespace,
+which is automatically copied to every tenant Kubernetes cluster.
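+
+You can verify that the secret has been created:
+
+```bash
+kubectl get secret patch-containerd -n cozy-system
+```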
+
+#### Alternative: patch an existing platform package
+
+If the platform package is already deployed, you can add registry mirrors with a patch:
+
+```bash
+kubectl patch packages.cozystack.io cozystack.cozystack-platform --type=merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "registries": {
+ "mirrors": {
+ "docker.io": {
+ "endpoints": ["http://10.0.0.1:8082"]
+ },
+ "ghcr.io": {
+ "endpoints": ["http://10.0.0.1:8083"]
+ },
+ "gcr.io": {
+ "endpoints": ["http://10.0.0.1:8084"]
+ },
+ "registry.k8s.io": {
+ "endpoints": ["http://10.0.0.1:8085"]
+ },
+ "quay.io": {
+ "endpoints": ["http://10.0.0.1:8086"]
+ },
+ "cr.fluentbit.io": {
+ "endpoints": ["http://10.0.0.1:8087"]
+ },
+ "docker-registry3.mariadb.com": {
+ "endpoints": ["http://10.0.0.1:8088"]
+ }
+ },
+ "config": {
+ "10.0.0.1:8082": {
+ "tls": {
+ "insecureSkipVerify": true
+ },
+ "auth": {
+ "username": "myuser",
+ "password": "mypass"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+### Option B: Create the secret manually
+
+Alternatively, create a [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) named `patch-containerd` directly:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+ name: patch-containerd
+ namespace: cozy-system
+type: Opaque
+stringData:
+ docker.io.toml: |
+ server = "https://registry-1.docker.io"
+ [host."http://10.0.0.1:8082"]
+ capabilities = ["pull", "resolve"]
+ skip_verify = true
+ ghcr.io.toml: |
+ server = "https://ghcr.io"
+ [host."http://10.0.0.1:8083"]
+ capabilities = ["pull", "resolve"]
+ skip_verify = true
+ gcr.io.toml: |
+ server = "https://gcr.io"
+ [host."http://10.0.0.1:8084"]
+ capabilities = ["pull", "resolve"]
+ skip_verify = true
+ registry.k8s.io.toml: |
+ server = "https://registry.k8s.io"
+ [host."http://10.0.0.1:8085"]
+ capabilities = ["pull", "resolve"]
+ skip_verify = true
+ quay.io.toml: |
+ server = "https://quay.io"
+ [host."http://10.0.0.1:8086"]
+ capabilities = ["pull", "resolve"]
+ skip_verify = true
+ cr.fluentbit.io.toml: |
+ server = "https://cr.fluentbit.io"
+ [host."http://10.0.0.1:8087"]
+ capabilities = ["pull", "resolve"]
+ skip_verify = true
+ docker-registry3.mariadb.com.toml: |
+ server = "https://docker-registry3.mariadb.com"
+ [host."http://10.0.0.1:8088"]
+ capabilities = ["pull", "resolve"]
+ skip_verify = true
+```
+
+If your registry mirrors require authentication, add a custom `Authorization` header
+with Base64-encoded credentials:
+
+```toml
+server = "https://registry-1.docker.io"
+[host."http://10.0.0.1:8082"]
+ capabilities = ["pull", "resolve"]
+ skip_verify = true
+ [host."http://10.0.0.1:8082".header]
+ Authorization = "Basic bXl1c2VyOm15cGFzcw=="
+```
+
+To generate the Base64-encoded value, run:
+
+```bash
+echo -n 'myuser:mypass' | base64
+```
+
+For dynamic or token-based authentication (e.g., Docker Hub), use
+[Kubernetes image pull secrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/)
+instead of plaintext credentials.
+
+### How it works
+
+The `patch-containerd` secret from the `cozy-system` namespace is automatically copied
+to every tenant Kubernetes cluster namespace during deployment.
+The secret data is mounted into worker node VMs as containerd registry configuration files
+at `/etc/containerd/certs.d/<registry>/hosts.toml`.
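+
+Assuming each `<registry>.toml` key is mapped to the corresponding registry directory, the resulting layout on a worker node looks roughly like this:
+
+```text
+/etc/containerd/certs.d/
+├── docker.io/hosts.toml
+├── ghcr.io/hosts.toml
+├── gcr.io/hosts.toml
+└── registry.k8s.io/hosts.toml
+```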
+
+### Per-cluster configuration
+
+It is possible to configure registry mirrors for a particular tenant Kubernetes cluster
+instead of using the global `patch-containerd` secret:
+
+- The tenant cluster must be deployed with a Kubernetes package version 0.23.1 or later, which is available since Cozystack 0.32.1.
+- Before deploying the tenant cluster, create a Kubernetes Secret named `kubernetes-<cluster-name>-patch-containerd` in the tenant cluster namespace, using the same format as the examples above.
+
+{{% alert color="warning" %}}
+**Important:** If both the global `patch-containerd` secret and a per-cluster secret exist, the global secret takes precedence and the per-cluster secret is ignored. To use a per-cluster configuration, ensure that the global `patch-containerd` secret in the `cozy-system` namespace is not present.
+{{% /alert %}}
+
+To learn more about registry configuration values, read the [CRI Plugin configuration guide](https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration).
diff --git a/content/en/docs/v1.3/install/kubernetes/generic.md b/content/en/docs/v1.3/install/kubernetes/generic.md
new file mode 100644
index 00000000..f438457b
--- /dev/null
+++ b/content/en/docs/v1.3/install/kubernetes/generic.md
@@ -0,0 +1,496 @@
+---
+title: "Deploying Cozystack on Generic Kubernetes"
+linkTitle: "Generic Kubernetes"
+description: "How to deploy Cozystack on k3s, kubeadm, RKE2, or other Kubernetes distributions without Talos Linux"
+weight: 50
+---
+
+This guide explains how to deploy Cozystack on generic Kubernetes distributions such as k3s, kubeadm, or RKE2.
+While Talos Linux remains the recommended platform for production deployments, Cozystack supports deployment on other Kubernetes distributions using the `isp-full-generic` bundle.
+
+## When to Use Generic Kubernetes
+
+Consider using generic Kubernetes instead of Talos Linux when:
+
+- You have an existing Kubernetes cluster you want to enhance with Cozystack
+- Your infrastructure doesn't support Talos Linux (certain cloud providers, embedded systems)
+- You need specific Linux features or packages not available in Talos
+
+For new production deployments, [Talos Linux]({{% ref "/docs/v1.3/guides/talos" %}}) is recommended due to its security and operational benefits.
+
+## Prerequisites
+
+### Supported Distributions
+
+Cozystack has been tested on:
+
+- **k3s** v1.32+ (recommended for single-node and edge deployments)
+- **kubeadm** v1.28+
+- **RKE2** v1.28+
+
+### Host Requirements
+
+- **Operating System**: Ubuntu 22.04+ or Debian 12+ (kernel 5.x+ with systemd)
+- **Architecture**: amd64 or arm64
+- **Hardware**: See [hardware requirements]({{% ref "/docs/v1.3/install/hardware-requirements" %}})
+
+### Required Packages
+
+Install the following packages on all nodes:
+
+```bash
+apt-get update
+apt-get install -y nfs-common open-iscsi multipath-tools
+```
+
+### Required Kernel Modules
+
+Load the `br_netfilter` module (required for bridge netfilter sysctl settings):
+
+```bash
+modprobe br_netfilter
+echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
+```
+
+### Required Services
+
+Enable and start required services:
+
+```bash
+systemctl enable --now iscsid
+systemctl enable --now multipathd
+```
+
+## Sysctl Configuration
+
+{{% alert color="warning" %}}
+:warning: **Critical**: The sysctl settings below are mandatory for Cozystack to function properly.
+Without these settings, Kubernetes components will fail due to insufficient inotify watches.
+{{% /alert %}}
+
+Create `/etc/sysctl.d/99-cozystack.conf` with the following content:
+
+```ini
+# Inotify limits (critical for Cozystack)
+fs.inotify.max_user_watches = 524288
+fs.inotify.max_user_instances = 8192
+fs.inotify.max_queued_events = 65536
+
+# Filesystem limits
+fs.file-max = 2097152
+fs.aio-max-nr = 1048576
+
+# Network forwarding (required for Kubernetes)
+net.ipv4.ip_forward = 1
+net.ipv4.conf.all.forwarding = 1
+net.bridge.bridge-nf-call-iptables = 1
+net.bridge.bridge-nf-call-ip6tables = 1
+
+# VM tuning
+vm.swappiness = 1
+```
+
+Apply the settings:
+
+```bash
+sysctl --system
+```
+
+## Kubernetes Configuration
+
+Cozystack manages its own networking (Cilium/KubeOVN), storage (LINSTOR), and ingress (NGINX).
+Your Kubernetes distribution must be configured to **not** install these components.
+
+### Required Configuration
+
+| Component | Requirement |
+| ----------- | ------------- |
+| CNI | **Disabled** — Cozystack deploys Cilium or KubeOVN |
+| Ingress Controller | **Disabled** — Cozystack deploys NGINX |
+| Storage Provisioner | **Disabled** — Cozystack deploys LINSTOR |
+| kube-proxy | **Disabled** — Cilium replaces it |
+| Cluster Domain | Must be `cozy.local` |
+
+{{< tabs name="kubernetes_distributions" >}}
+{{% tab name="k3s" %}}
+
+When installing k3s, use the following flags:
+
+```bash
+curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
+ --disable=traefik \
+ --disable=servicelb \
+ --disable=local-storage \
+ --disable=metrics-server \
+ --disable-network-policy \
+ --disable-kube-proxy \
+ --flannel-backend=none \
+ --cluster-domain=cozy.local \
+  --tls-san=<node-ip> \
+ --kubelet-arg=max-pods=220" sh -
+```
+
+Replace `<node-ip>` with your node's IP address.
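+
+After installation, point `kubectl` at the kubeconfig that k3s writes to its default path:
+
+```bash
+export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
+kubectl get nodes
+```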
+
+{{% /tab %}}
+{{% tab name="kubeadm" %}}
+
+Create a kubeadm configuration file:
+
+```yaml
+apiVersion: kubeadm.k8s.io/v1beta3
+kind: ClusterConfiguration
+networking:
+ podSubnet: "10.244.0.0/16"
+ serviceSubnet: "10.96.0.0/16"
+ dnsDomain: "cozy.local"
+---
+apiVersion: kubeproxy.config.k8s.io/v1alpha1
+kind: KubeProxyConfiguration
+mode: "none" # Cilium will replace kube-proxy
+```
+
+Initialize the cluster without the default CNI:
+
+```bash
+kubeadm init --config kubeadm-config.yaml --skip-phases=addon/kube-proxy
+```
+
+Do not install a CNI plugin after `kubeadm init` — Cozystack will deploy KubeOVN and Cilium automatically.
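+
+To use `kubectl` on the control-plane node afterwards, point it at the admin kubeconfig generated by kubeadm:
+
+```bash
+export KUBECONFIG=/etc/kubernetes/admin.conf
+kubectl get nodes
+```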
+
+{{% /tab %}}
+{{% tab name="RKE2" %}}
+
+Create `/etc/rancher/rke2/config.yaml`:
+
+```yaml
+cni: none
+disable:
+ - rke2-ingress-nginx
+ - rke2-metrics-server
+cluster-domain: cozy.local
+disable-kube-proxy: true
+```
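+
+Then install and start RKE2 on the server node as usual, for example:
+
+```bash
+# Install RKE2 using the official install script
+curl -sfL https://get.rke2.io | sh -
+# Start the server; the kubeconfig is written to /etc/rancher/rke2/rke2.yaml
+systemctl enable --now rke2-server
+```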
+
+{{% /tab %}}
+{{< /tabs >}}
+
+## Installing Cozystack
+
+### 1. Apply CRDs
+
+Download and apply Custom Resource Definitions:
+
+```bash
+kubectl apply -f https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/cozystack-crds.yaml
+```
+
+### 2. Deploy Cozystack Operator
+
+Download the generic operator manifest, replace the API server address placeholder, and apply:
+
+```bash
+curl -fsSL https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/cozystack-operator-generic.yaml \
+  | sed 's/REPLACE_ME/<api-server-ip>/' \
+ | kubectl apply -f -
+```
+
+Replace `<api-server-ip>` with the IP address of your Kubernetes API server (IP only, without protocol or port).
+
+The manifest includes the operator deployment, the `cozystack-operator-config` ConfigMap with the API server address, and the `PackageSource` resource.
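+
+You can check that the operator has started and that the `PackageSource` has been registered:
+
+```bash
+kubectl get pods -n cozy-system
+kubectl get packagesource
+```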
+
+### 3. Create Platform Package
+
+After the operator starts and reconciles the `PackageSource`, create a `Package` resource to trigger the platform installation.
+
+{{% alert color="warning" %}}
+:warning: **Important**: The `podCIDR` and `serviceCIDR` values **must match** your Kubernetes cluster configuration.
+Different distributions use different defaults:
+
+- **k3s**: `10.42.0.0/16` (pods), `10.43.0.0/16` (services)
+- **kubeadm**: `10.244.0.0/16` (pods), `10.96.0.0/16` (services)
+- **RKE2**: `10.42.0.0/16` (pods), `10.43.0.0/16` (services)
+{{% /alert %}}
+
+Example for **k3s** (adjust CIDRs for other distributions):
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+ # Package is cluster-scoped — no namespace needed
+spec:
+ variant: isp-full-generic
+ components:
+ platform:
+ values:
+ publishing:
+ host: "example.com"
+        apiServerEndpoint: "https://<api-server-ip>:6443"
+ networking:
+ podCIDR: "10.42.0.0/16"
+ podGateway: "10.42.0.1"
+ serviceCIDR: "10.43.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+```
+
+Adjust the values:
+
+| Field | Description |
+| ------- | ------------- |
+| `publishing.host` | Your domain for Cozystack services |
+| `publishing.apiServerEndpoint` | Kubernetes API endpoint URL |
+| `networking.podCIDR` | Pod network CIDR (must match your k8s config) |
+| `networking.podGateway` | First IP in pod CIDR (e.g., `10.42.0.1` for `10.42.0.0/16`) |
+| `networking.serviceCIDR` | Service network CIDR (must match your k8s config) |
+| `networking.joinCIDR` | Network for nested cluster communication |
+
+Apply it:
+
+```bash
+kubectl apply -f cozystack-platform-package.yaml
+```
+
+{{% alert color="info" %}}
+The Package name **must** match the PackageSource name (`cozystack.cozystack-platform`).
+You can verify available PackageSources with `kubectl get packagesource`.
+{{% /alert %}}
+
+### 4. Monitor Installation
+
+Watch the installation progress:
+
+```bash
+kubectl logs -n cozy-system deploy/cozystack-operator -f
+```
+
+Check HelmRelease status:
+
+```bash
+kubectl get hr -A
+```
+
+{{% alert color="info" %}}
+During initial deployment, HelmReleases may show errors such as `ExternalArtifact not found` or `dependency is not ready` for the first few minutes while Cilium and other core components are being reconciled. This is expected — wait a few minutes and check again.
+{{% /alert %}}
+
+You can verify that Cilium has been deployed and nodes are networked by waiting for them to become Ready:
+
+```bash
+kubectl wait --for=condition=Ready nodes --all --timeout=300s
+```
+
+## Example: Ansible Playbook
+
+Below is a minimal Ansible playbook for preparing nodes and deploying Cozystack.
+
+Install the required Ansible collections first:
+
+```bash
+ansible-galaxy collection install ansible.posix community.general kubernetes.core ansible.utils
+```
+
+### Node Preparation Playbook
+
+```yaml
+---
+- name: Prepare nodes for Cozystack
+ hosts: all
+ become: true
+ tasks:
+ - name: Load br_netfilter module
+ community.general.modprobe:
+ name: br_netfilter
+ persistent: present
+
+ - name: Install required packages
+ ansible.builtin.apt:
+ name:
+ - nfs-common
+ - open-iscsi
+ - multipath-tools
+ state: present
+ update_cache: true
+
+ - name: Configure sysctl for Cozystack
+ ansible.posix.sysctl:
+ name: "{{ item.name }}"
+ value: "{{ item.value }}"
+ sysctl_set: true
+ state: present
+ reload: true
+ loop:
+ - { name: fs.inotify.max_user_watches, value: "524288" }
+ - { name: fs.inotify.max_user_instances, value: "8192" }
+ - { name: fs.inotify.max_queued_events, value: "65536" }
+ - { name: fs.file-max, value: "2097152" }
+ - { name: fs.aio-max-nr, value: "1048576" }
+ - { name: net.ipv4.ip_forward, value: "1" }
+ - { name: net.ipv4.conf.all.forwarding, value: "1" }
+ - { name: net.bridge.bridge-nf-call-iptables, value: "1" }
+ - { name: net.bridge.bridge-nf-call-ip6tables, value: "1" }
+ - { name: vm.swappiness, value: "1" }
+
+ - name: Enable iscsid service
+ ansible.builtin.systemd:
+ name: iscsid
+ enabled: true
+ state: started
+
+ - name: Enable multipathd service
+ ansible.builtin.systemd:
+ name: multipathd
+ enabled: true
+ state: started
+```
+
+### Cozystack Deployment Playbook
+
+This example uses k3s default CIDRs. Adjust for kubeadm (`10.244.0.0/16`, `10.96.0.0/16`) or your custom configuration.
+
+```yaml
+---
+- name: Deploy Cozystack
+ hosts: localhost
+ connection: local
+ vars:
+ cozystack_root_host: "example.com"
+ cozystack_api_host: "10.0.0.1"
+ cozystack_api_port: "6443"
+ # k3s defaults - adjust for kubeadm (10.244.0.0/16, 10.96.0.0/16)
+ cozystack_pod_cidr: "10.42.0.0/16"
+ cozystack_svc_cidr: "10.43.0.0/16"
+ tasks:
+ - name: Apply Cozystack CRDs
+ ansible.builtin.command:
+ cmd: kubectl apply -f https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/cozystack-crds.yaml
+ changed_when: true
+
+ - name: Download and apply Cozystack operator manifest
+ ansible.builtin.shell:
+ cmd: >
+ curl -fsSL https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/cozystack-operator-generic.yaml
+ | sed 's/REPLACE_ME/{{ cozystack_api_host }}/'
+ | kubectl apply -f -
+ changed_when: true
+
+ - name: Wait for PackageSource to be ready
+ kubernetes.core.k8s_info:
+ api_version: cozystack.io/v1alpha1
+ kind: PackageSource
+ name: cozystack.cozystack-platform
+ register: pkg_source
+ until: >
+ pkg_source.resources | length > 0 and
+ (
+ pkg_source.resources[0].status.conditions
+ | selectattr('type', 'equalto', 'Ready')
+ | map(attribute='status')
+ | first
+ | default('False')
+ ) == "True"
+ retries: 30
+ delay: 10
+
+ - name: Create Platform Package
+ kubernetes.core.k8s:
+ state: present
+ definition:
+ apiVersion: cozystack.io/v1alpha1
+ kind: Package
+ metadata:
+ name: cozystack.cozystack-platform
+ spec:
+ variant: isp-full-generic
+ components:
+ platform:
+ values:
+ publishing:
+ host: "{{ cozystack_root_host }}"
+ apiServerEndpoint: "https://{{ cozystack_api_host }}:{{ cozystack_api_port }}"
+ networking:
+ podCIDR: "{{ cozystack_pod_cidr }}"
+ podGateway: "{{ cozystack_pod_cidr | ansible.utils.ipaddr('1') | ansible.utils.ipaddr('address') }}"
+ serviceCIDR: "{{ cozystack_svc_cidr }}"
+ joinCIDR: "100.64.0.0/16"
+```
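+
+Run the playbooks with `ansible-playbook`; the inventory and playbook file names below are placeholders for your own layout:
+
+```bash
+ansible-playbook -i inventory.ini prepare-nodes.yaml
+ansible-playbook deploy-cozystack.yaml
+```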
+
+## Troubleshooting
+
+### linstor-scheduler Image Tag Invalid
+
+**Symptom**: `InvalidImageName` error for linstor-scheduler pod.
+
+**Cause**: k3s version format (e.g., `v1.35.0+k3s1`) contains `+` which is invalid in Docker image tags.
+
+**Solution**: This is fixed in Cozystack v1.0.0+. Ensure you're using the latest release.
+
+### KubeOVN Not Scheduling
+
+**Symptom**: ovn-central pods stuck in Pending state.
+
+**Cause**: KubeOVN uses Helm `lookup` to find control-plane nodes, which may fail on fresh clusters.
+
+**Solution**: Ensure your Platform Package includes explicit `MASTER_NODES` configuration:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full-generic
+ components:
+ platform:
+ values:
+ networking:
+ kubeovn:
+            MASTER_NODES: "<comma-separated control-plane node IPs>"
+```
+
+The key is `kubeovn` (no dash), matching the field in
+`packages/core/platform/values.yaml` — see also
+[`networking.kubeovn.MASTER_NODES`]({{% ref "/docs/v1.3/operations/configuration/platform-package" %}})
+in the Platform Package reference.
+
+### Cilium Cannot Reach API Server
+
+**Symptom**: Cilium pods in CrashLoopBackOff with API connection errors.
+
+**Cause**: Single-node clusters or non-standard API endpoints require explicit configuration.
+
+**Solution**: Verify your Platform Package includes correct API server settings:
+
+```yaml
+spec:
+ components:
+ networking:
+ values:
+ cilium:
+        k8sServiceHost: "<api-server-ip>"
+ k8sServicePort: "6443"
+```
+
+### Inotify Limit Errors
+
+**Symptom**: Pods failing with "too many open files" or inotify errors.
+
+**Cause**: Default Linux inotify limits are too low for Kubernetes.
+
+**Solution**: Apply sysctl settings from the [Sysctl Configuration](#sysctl-configuration) section and reboot the node.
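+
+You can confirm the effective values on a node with:
+
+```bash
+sysctl fs.inotify.max_user_watches fs.inotify.max_user_instances fs.inotify.max_queued_events
+```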
+
+## Further Steps
+
+After Cozystack installation completes:
+
+1. [Configure storage with LINSTOR]({{% ref "/docs/v1.3/getting-started/install-cozystack#3-configure-storage" %}})
+2. [Set up the root tenant]({{% ref "/docs/v1.3/getting-started/install-cozystack#51-setup-root-tenant-services" %}})
+3. [Deploy your first application]({{% ref "/docs/v1.3/applications" %}})
+
+## References
+
+- [PR #1939: Non-Talos Kubernetes Support](https://github.com/cozystack/cozystack/pull/1939)
+- [Issue #1950: Complete non-Talos Support](https://github.com/cozystack/cozystack/issues/1950)
+- [k3s Documentation](https://docs.k3s.io/)
+- [kubeadm Documentation](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/)
diff --git a/content/en/docs/v1.3/install/kubernetes/talm.md b/content/en/docs/v1.3/install/kubernetes/talm.md
new file mode 100644
index 00000000..8b3ab2d2
--- /dev/null
+++ b/content/en/docs/v1.3/install/kubernetes/talm.md
@@ -0,0 +1,331 @@
+---
+title: Use Talm to bootstrap a Cozystack cluster
+linkTitle: Talm
+description: "`talm` is a declarative CLI tool made by Cozystack devs and optimized for deploying Cozystack. Recommended for infrastructure-as-code and GitOps."
+weight: 5
+aliases:
+ - /docs/v1.3/operations/talos/configuration/talm
+ - /docs/v1.3/talos/bootstrap/talm
+ - /docs/v1.3/talos/configuration/talm
+---
+
+This guide explains how to install and configure Kubernetes on a Talos Linux cluster using Talm.
+By the end of this guide, you will have a Kubernetes cluster ready for installing Cozystack.
+
+[Talm](https://github.com/cozystack/talm) is a Helm-like utility for declarative configuration management of Talos Linux.
+Talm was created by Ænix to allow more declarative and customizable configurations for cluster management.
+Talm comes with pre-built presets for Cozystack.
+
+## Prerequisites
+
+Before starting this guide, you should have [Talos Linux installed]({{% ref "/docs/v1.3/install/talos" %}}), but not yet initialized (bootstrapped), on several nodes.
+These nodes should belong to one subnet or have public IPs.
+
+This guide uses an example where the nodes of a cluster are located in the subnet `192.168.123.0/24`, having the following IP addresses:
+
+- `node1`: private `192.168.123.11` or public `12.34.56.101`.
+- `node2`: private `192.168.123.12` or public `12.34.56.102`.
+- `node3`: private `192.168.123.13` or public `12.34.56.103`.
+
+Public IPs are optional.
+All you need for an installation with Talm is to have access to the nodes: directly, through VPN, bastion host, or other means.
+This guide uses private IPs by default in its examples, and public IPs only in instructions specific to the public IP setup.
+
+If you are using DHCP, you might not be aware of the IP addresses assigned to your nodes in the private subnet.
+Nodes with Talos Linux [expose Talos API on port `50000`](https://www.talos.dev/{{< version-pin "talos_minor" >}}/learn-more/talos-network-connectivity/).
+You can use `nmap` to find them, providing your network mask (`192.168.123.0/24` in the example):
+
+```bash
+nmap -Pn -n -p 50000 192.168.123.0/24 -vv | grep 'Discovered'
+```
+
+Example output:
+
+```console
+Discovered open port 50000/tcp on 192.168.123.11
+Discovered open port 50000/tcp on 192.168.123.12
+Discovered open port 50000/tcp on 192.168.123.13
+```
+
+
+## 1. Install Dependencies
+
+For this guide, you need a couple of tools installed:
+
+- **Talm**.
+ To install the latest build for your platform, download and run the installer script:
+
+ ```bash
+ curl -sSL https://github.com/cozystack/talm/raw/refs/heads/main/hack/install.sh | sh -s
+ ```
+  Talm provides binaries for Linux, macOS, and Windows, for both AMD64 and ARM architectures.
+ You can also [download a binary from GitHub](https://github.com/cozystack/talm/releases)
+ or [build Talm from the source](https://github.com/cozystack/talm).
+
+
+- **talosctl** is distributed as a brew package:
+
+ ```bash
+ brew install siderolabs/tap/talosctl
+ ```
+
+  For more installation options, see the [`talosctl` installation guide](https://www.talos.dev/{{< version-pin "talos_minor" >}}/talos-guides/install/talosctl/).
+
+## 2. Initialize Cluster Configuration
+
+The first step is to initialize configuration templates and provide configuration values for templating.
+
+
+### 2.1 Initialize Configuration
+
+Start by initializing configuration for a new cluster, using the `cozystack` preset:
+
+```bash
+mkdir -p cozystack-cluster
+cd cozystack-cluster
+talm init --preset cozystack --name mycluster
+```
+
+The structure of the project mostly mirrors an ordinary Helm chart:
+
+- `charts` - a directory that includes a common library chart with functions used for querying information from Talos Linux.
+- `Chart.yaml` - a file containing the common information about your project; the name of the chart is used as the name for the newly created cluster.
+- `templates` - a directory used to describe templates for the configuration generation.
+- `secrets.yaml` - a file containing secrets for your cluster.
+- `values.yaml` - a common values file used to provide parameters for the templating.
+- `nodes` - an optional directory used to describe and store generated configuration for nodes.
+
+
+### 2.2. Edit Configuration Values and Templates
+
+The power of Talm is in templating.
+There are several files with source values and templates which you can edit: `Chart.yaml`, `values.yaml`, and `templates/*`.
+Talm uses these values and templates to generate Talos configuration for all nodes in the cluster, both control plane and workers.
+
+All frequently changed configuration values are placed in `values.yaml`:
+
+```yaml
+## Used to access the cluster's control plane
+endpoint: "https://192.168.100.10:6443"
+## Cozystack API cluster domain — used by services and tenant K8s clusters to access the management cluster
+clusterDomain: cozy.local
+## Floating IP — should be an unused IP in the same subnet as nodes
+floatingIP: 192.168.100.10
+## Talos source image: pinned to the version that ships with the current Cozystack release
+## https://github.com/cozystack/cozystack/pkgs/container/cozystack%2Ftalos
+image: "ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}"
+## Pod subnet — used to assign IPs to pods
+podSubnets:
+- 10.244.0.0/16
+## Service subnet — used to assign IPs to services
+serviceSubnets:
+- 10.96.0.0/16
+## Subnet with node IPs
+advertisedSubnets:
+- 192.168.100.0/24
+## Add OIDC issuer URL to enable OIDC — see comments below.
+oidcIssuerUrl: ""
+certSANs: []
+```
+
+You don't need to fill in the node IPs at this step.
+Instead, you will provide them later, when you generate node configurations.
+
+
+### 2.3 Add Keycloak Configuration
+
+By default, the cluster will be accessible only by authentication with a token.
+However, you can configure an OIDC provider to use account-based authentication.
+This configuration starts at this step and continues later, after installing Cozystack.
+
+To configure Keycloak as an OIDC provider, apply the following changes to the templates:
+
+- For Talm v0.6.6 or later: in `./templates/_helpers.tpl`, replace `keycloak.example.com` with `keycloak.<your-domain>`, using your own domain.
+
+- For Talm earlier than v0.6.6, update `./templates/_helpers.tpl` in the following way:
+
+ ```yaml
+ cluster:
+ apiServer:
+ extraArgs:
+ oidc-issuer-url: "https://keycloak.example.com/realms/cozy"
+ oidc-client-id: "kubernetes"
+ oidc-username-claim: "preferred_username"
+ oidc-groups-claim: "groups"
+ ```
+
+
+## 3. Generate Node Configuration Files
+
+The next step is to generate node configuration files from the templates.
+Create a `nodes` directory and collect the information from each node into a node-specific file:
+
+```bash
+mkdir nodes
+talm template -e 192.168.123.11 -n 192.168.123.11 -t templates/controlplane.yaml -i > nodes/node1.yaml
+talm template -e 192.168.123.12 -n 192.168.123.12 -t templates/controlplane.yaml -i > nodes/node2.yaml
+talm template -e 192.168.123.13 -n 192.168.123.13 -t templates/controlplane.yaml -i > nodes/node3.yaml
+```
+
+The `--insecure` (`-i`) parameter is required because Talm must retrieve configuration data
+from Talos nodes that are not yet initialized and are waiting in maintenance mode, and therefore cannot accept an authenticated connection.
+The nodes will be initialized only in the next step, with `talm apply`.
+
+The generated files include a comment block with discovered network interfaces and disks.
+You can edit these files before applying to customize the network configuration.
+For example, if you need to configure network bonding (LACP), see
+[Configure bonding (LACP)]({{% ref "/docs/v1.3/install/how-to/bonding" %}}).
+
+
+## 4. Apply Configuration and Bootstrap a Cluster
+
+At this point, the configuration files in `nodes/*.yaml` are ready to be applied to the nodes.
+
+
+### 4.1 Apply Configuration Files
+
+Use `talm apply` to apply the configuration files to the corresponding nodes:
+
+```bash
+talm apply -f nodes/node1.yaml -i
+talm apply -f nodes/node2.yaml -i
+talm apply -f nodes/node3.yaml -i
+```
+
+This command initializes the nodes and sets up an authenticated connection, so the `-i` (`--insecure`) flag won't be required afterwards.
+If the command succeeds, it returns the node's IP:
+
+```console
+$ talm apply -f nodes/node1.yaml -i
+- talm: file=nodes/node1.yaml, nodes=[192.168.123.11], endpoints=[192.168.123.11]
+```
+
+Later on, you can also use the following options with `talm apply`:
+
+- `--dry-run` - dry run mode will show a diff with the existing configuration without making changes.
+- `-m try` - try mode will roll back the configuration in 1 minute.
+
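+For example, to preview changes without applying them:
+
+```bash
+talm apply -f nodes/node1.yaml --dry-run
+```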
+
+### 4.2 Wait for Reboot
+
+Wait until all nodes have rebooted.
+If installation media was used, such as a USB stick, remove it to ensure that the nodes boot from the internal disk.
+
+When nodes are ready, they will expose port `50000`, which is a sign that the node has completed Talos configuration and rebooted.
+If you need to automate the node readiness check, consider this example:
+
+```bash
+timeout 60 sh -c 'until \
+ nc -nzv 192.168.123.11 50000 && \
+ nc -nzv 192.168.123.12 50000 && \
+ nc -nzv 192.168.123.13 50000; \
+ do sleep 1; done'
+```
+
+
+### 4.3. Bootstrap Kubernetes
+
+Bootstrap the Kubernetes cluster by running `talm bootstrap` against one of the control plane nodes:
+
+```bash
+talm bootstrap -f nodes/node1.yaml
+```
+
+
+## 5. Access the Kubernetes Cluster
+
+At this point, the Kubernetes cluster is ready to install Cozystack.
+
+Before this step, you were interacting with the cluster using Talos API and `talosctl`.
+Further steps require Kubernetes API and `kubectl`, which require a `kubeconfig`.
+
+
+### 5.1. Get a kubeconfig
+
+Use Talm to generate an administrative `kubeconfig`:
+
+```bash
+talm kubeconfig -f nodes/node1.yaml
+```
+
+This command will produce a `kubeconfig` file in the current directory.
+
+
+### 5.2. Change Cluster API URL
+
+The generated `kubeconfig` has the cluster API URL set to the floating IP (VIP) in the private subnet.
+
+If you are accessing the cluster via a public IP instead of the floating IP, update the endpoint accordingly.
+Edit the `kubeconfig` — change the cluster URL to the public IP of one of the nodes:
+
+```diff
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: ...
+- server: https://10.0.1.101:6443
++ server: https://12.34.56.101:6443
+```
+
+
+### 5.3. Activate kubeconfig
+
+Finally, set up the `KUBECONFIG` variable or use other tools to make this kubeconfig
+accessible to your `kubectl` client:
+
+```bash
+export KUBECONFIG=$PWD/kubeconfig
+```
+
+{{% alert color="info" %}}
+To make this `kubeconfig` permanently available, you can make it the default one (`~/.kube/config`),
+use `kubectl config use-context`, or employ a variety of other methods.
+Check out the [Kubernetes documentation on cluster access](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
+{{% /alert %}}
+
+
+### 5.4. Check Cluster Availability
+
+Check that the cluster is available:
+
+```bash
+kubectl get ns
+```
+
+Example output:
+
+```console
+NAME STATUS AGE
+default Active 7m56s
+kube-node-lease Active 7m56s
+kube-public Active 7m56s
+kube-system Active 7m56s
+```
+
+### 5.5. Check Node State
+
+Check the state of cluster nodes:
+
+```bash
+kubectl get nodes
+```
+
+Output shows node status and Kubernetes version:
+
+```console
+NAME STATUS ROLES AGE VERSION
+node1 NotReady control-plane 7m56s v1.33.1
+node2 NotReady control-plane 7m56s v1.33.1
+node3 NotReady control-plane 7m56s v1.33.1
+```
+
+Note that all nodes show `STATUS: NotReady`, which is normal at this step.
+This happens because the default [Kubernetes CNI plugin](https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/)
+was disabled in the Talos configuration so that Cozystack can install its own CNI plugin.
+
+
+## Further Steps
+
+Now you have a Kubernetes cluster bootstrapped and ready for installing Cozystack.
+To complete the installation, follow the deployment guide, starting with the
+[Install Cozystack]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}) section.
diff --git a/content/en/docs/v1.3/install/kubernetes/talos-bootstrap.md b/content/en/docs/v1.3/install/kubernetes/talos-bootstrap.md
new file mode 100644
index 00000000..bc7b4743
--- /dev/null
+++ b/content/en/docs/v1.3/install/kubernetes/talos-bootstrap.md
@@ -0,0 +1,209 @@
+---
+title: Use talos-bootstrap script to bootstrap a Cozystack cluster
+linkTitle: talos-bootstrap
+description: "`talos-bootstrap` is a CLI for step-by-step cluster bootstrapping, made by Cozystack devs. Recommended for first deployments."
+weight: 10
+aliases:
+ - /docs/v1.3/talos/bootstrap/talos-bootstrap
+ - /docs/v1.3/talos/configuration/talos-bootstrap
+ - /docs/v1.3/operations/talos/configuration/talos-bootstrap
+---
+
+[talos-bootstrap](https://github.com/cozystack/talos-bootstrap/) is an interactive script for bootstrapping Kubernetes clusters on Talos OS.
+
+It was created by Cozystack developers to simplify the installation of Talos Linux on bare-metal nodes in a user-friendly manner.
+
+## 1. Install Dependencies
+
+Install the following dependencies:
+
+- `talosctl`
+- `dialog`
+- `nmap`
+
+Download the latest version of `talos-bootstrap` from the [releases page](https://github.com/cozystack/talos-bootstrap/releases) or directly from the master branch:
+
+```bash
+curl -fsSL -o /usr/local/bin/talos-bootstrap \
+ https://github.com/cozystack/talos-bootstrap/raw/master/talos-bootstrap
+chmod +x /usr/local/bin/talos-bootstrap
+talos-bootstrap --help
+```
+
+## 2. Prepare Configuration Files
+
+1. Start by making a configuration directory for the new cluster:
+
+ ```bash
+ mkdir -p cluster1
+ cd cluster1
+ ```
+
+1. Make a configuration patch file `patch.yaml` with common node settings, using the following example:
+
+ ```yaml
+ machine:
+ kubelet:
+ nodeIP:
+ validSubnets:
+ - 192.168.100.0/24
+ extraConfig:
+ maxPods: 512
+ sysctls:
+ net.ipv4.neigh.default.gc_thresh1: "4096"
+ net.ipv4.neigh.default.gc_thresh2: "8192"
+ net.ipv4.neigh.default.gc_thresh3: "16384"
+ kernel:
+ modules:
+ - name: openvswitch
+ - name: drbd
+ parameters:
+ - usermode_helper=disabled
+ - name: zfs
+ - name: spl
+ - name: vfio_pci
+ - name: vfio_iommu_type1
+ install:
+ image: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - https://mirror.gcr.io
+ files:
+ - content: |
+ [plugins]
+ [plugins."io.containerd.grpc.v1.cri"]
+ device_ownership_from_security_context = true
+ [plugins."io.containerd.cri.v1.runtime"]
+ device_ownership_from_security_context = true
+ path: /etc/cri/conf.d/20-customization.part
+ op: create
+ - op: overwrite
+ path: /etc/lvm/lvm.conf
+ permissions: 0o644
+ content: |
+ backup {
+ backup = 0
+ archive = 0
+ }
+ devices {
+ global_filter = [ "r|^/dev/drbd.*|", "r|^/dev/dm-.*|", "r|^/dev/zd.*|" ]
+ }
+
+ cluster:
+ network:
+ cni:
+ name: none
+ dnsDomain: cozy.local
+ podSubnets:
+ - 10.244.0.0/16
+ serviceSubnets:
+ - 10.96.0.0/16
+ ```
+
+1. Make another configuration patch file `patch-controlplane.yaml` with settings exclusive to control plane nodes:
+
+ ```yaml
+ machine:
+ nodeLabels:
+ node.kubernetes.io/exclude-from-external-load-balancers:
+ $patch: delete
+ cluster:
+ allowSchedulingOnControlPlanes: true
+ controllerManager:
+ extraArgs:
+ bind-address: 0.0.0.0
+ scheduler:
+ extraArgs:
+ bind-address: 0.0.0.0
+ apiServer:
+ certSANs:
+ - 127.0.0.1
+ proxy:
+ disabled: true
+ discovery:
+ enabled: false
+ etcd:
+ advertisedSubnets:
+ - 192.168.100.0/24
+ ```
+
+1. To configure Keycloak as an OIDC provider, add the following section to `patch-controlplane.yaml`, replacing `example.com` with your domain:
+
+ ```yaml
+ cluster:
+ apiServer:
+ extraArgs:
+ oidc-issuer-url: "https://keycloak.example.com/realms/cozy"
+ oidc-client-id: "kubernetes"
+ oidc-username-claim: "preferred_username"
+ oidc-groups-claim: "groups"
+ ```
+
+## 3. Bootstrap and Access the Cluster
+
+Once you have the configuration files ready, run `talos-bootstrap` on each node of a cluster:
+
+```bash
+# in the cluster config directory
+talos-bootstrap install
+```
+
+{{% alert color="warning" %}}
+:warning: If your nodes are running on an external network, you must specify each node explicitly in the argument:
+```bash
+talos-bootstrap install -n 1.2.3.4
+```
+
+Here, `1.2.3.4` is the IP address of your remote node.
+{{% /alert %}}
+
+{{% alert color="info" %}}
+`talos-bootstrap` will enable bootstrap on the first configured node in a cluster.
+If you want to re-bootstrap the etcd cluster, remove the line `BOOTSTRAP_ETCD=false` from your `cluster.conf` file.
+{{% /alert %}}
+
+Repeat this step for the other nodes in the cluster.
+
+After the `install` command completes, `talos-bootstrap` saves the cluster's kubeconfig as `./kubeconfig`.
+
+Set up `kubectl` to use this new config by exporting the `KUBECONFIG` variable:
+
+```bash
+export KUBECONFIG=$PWD/kubeconfig
+```
+
+{{% alert color="info" %}}
+To make this `kubeconfig` permanently available, you can make it the default one (`~/.kube/config`),
+use `kubectl config use-context`, or employ a variety of other methods.
+Check out the [Kubernetes documentation on cluster access](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
+{{% /alert %}}
+
+Check that the cluster is available with this new `kubeconfig`:
+
+```bash
+kubectl get ns
+```
+
+Example output:
+
+```console
+NAME STATUS AGE
+default Active 7m56s
+kube-node-lease Active 7m56s
+kube-public Active 7m56s
+kube-system Active 7m56s
+```
+
+{{% alert color="info" %}}
+:warning: All nodes will show `STATUS: NotReady` in `kubectl get nodes`, which is normal at this step.
+This happens because the default CNI plugin was disabled in the previous step so that Cozystack can install its own CNI plugin.
+{{% /alert %}}
+
+
+## Further Steps
+
+Now you have a Kubernetes cluster bootstrapped and ready for installing Cozystack.
+To complete the installation, follow the deployment guide, starting with the
+[Install Cozystack]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}) section.
diff --git a/content/en/docs/v1.3/install/kubernetes/talosctl.md b/content/en/docs/v1.3/install/kubernetes/talosctl.md
new file mode 100644
index 00000000..c3782f42
--- /dev/null
+++ b/content/en/docs/v1.3/install/kubernetes/talosctl.md
@@ -0,0 +1,266 @@
+---
+title: Use talosctl to bootstrap a Cozystack cluster
+linkTitle: talosctl
+description: "`talosctl` is the default CLI of Talos Linux, requiring more boilerplate code, but giving full flexibility in configuration."
+weight: 15
+aliases:
+ - /docs/v1.3/talos/bootstrap/talosctl
+ - /docs/v1.3/talos/configuration/talosctl
+ - /docs/v1.3/operations/talos/configuration/talosctl
+---
+
+This guide explains how to prepare a Talos Linux cluster for deploying Cozystack using `talosctl`,
+a specialized command line tool for managing Talos.
+
+## Prerequisites
+
+Before starting this guide, you should have Talos Linux booted from the ISO, but not yet initialized (bootstrapped), on several nodes.
+These nodes should belong to one subnet or have public IPs.
+
+This guide uses an example where the nodes of a cluster are located in the subnet `192.168.123.0/24`, having the following IP addresses:
+
+- `192.168.123.11`
+- `192.168.123.12`
+- `192.168.123.13`
+
+The IP `192.168.123.10` is an additional address that does not belong to any single node; it is managed by Talos
+and used as the floating virtual IP (VIP) for the control plane.
+
+{{% alert color="info" %}}
+If you are using DHCP, you might not be aware of the IP addresses assigned to your nodes.
+You can use `nmap` to find them, providing your network mask (`192.168.123.0/24` in the example):
+
+```bash
+nmap -Pn -n -p 50000 192.168.123.0/24 -vv | grep 'Discovered'
+```
+
+Example output:
+
+```console
+Discovered open port 50000/tcp on 192.168.123.11
+Discovered open port 50000/tcp on 192.168.123.12
+Discovered open port 50000/tcp on 192.168.123.13
+```
+{{% /alert %}}
+
+## 1. Prepare Configuration Files
+
+1. Start by making a configuration directory for the new cluster:
+
+ ```bash
+ mkdir -p cluster1
+ cd cluster1
+ ```
+
+1. Generate a secrets file.
+ These secrets will later be injected in the configuration and used to establish authenticated connections to Talos nodes:
+
+ ```bash
+ talosctl gen secrets
+ ```
+
+1. Make a configuration patch file `patch.yaml`:
+
+ ```yaml
+ machine:
+ kubelet:
+ nodeIP:
+ validSubnets:
+ - 192.168.123.0/24
+ extraConfig:
+ maxPods: 512
+ sysctls:
+ net.ipv4.neigh.default.gc_thresh1: "4096"
+ net.ipv4.neigh.default.gc_thresh2: "8192"
+ net.ipv4.neigh.default.gc_thresh3: "16384"
+ kernel:
+ modules:
+ - name: openvswitch
+ - name: drbd
+ parameters:
+ - usermode_helper=disabled
+ - name: zfs
+ - name: spl
+ - name: vfio_pci
+ - name: vfio_iommu_type1
+ install:
+ image: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - https://mirror.gcr.io
+ files:
+ - content: |
+ [plugins]
+ [plugins."io.containerd.cri.v1.runtime"]
+ device_ownership_from_security_context = true
+ path: /etc/cri/conf.d/20-customization.part
+ op: create
+ - op: overwrite
+ path: /etc/lvm/lvm.conf
+ permissions: 0o644
+ content: |
+ backup {
+ backup = 0
+ archive = 0
+ }
+ devices {
+ global_filter = [ "r|^/dev/drbd.*|", "r|^/dev/dm-.*|", "r|^/dev/zd.*|" ]
+ }
+
+ cluster:
+ apiServer:
+ extraArgs:
+ oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
+ oidc-client-id: "kubernetes"
+ oidc-username-claim: "preferred_username"
+ oidc-groups-claim: "groups"
+ network:
+ cni:
+ name: none
+ dnsDomain: cozy.local
+ podSubnets:
+ - 10.244.0.0/16
+ serviceSubnets:
+ - 10.96.0.0/16
+ ```
+
+1. Make another configuration patch file `patch-controlplane.yaml` with settings exclusive to control plane nodes:
+
+   Note that the VIP address is used for `machine.network.interfaces[0].vip.ip`:
+
+ ```yaml
+ machine:
+ nodeLabels:
+ node.kubernetes.io/exclude-from-external-load-balancers:
+ $patch: delete
+ network:
+ interfaces:
+ - interface: eth0
+ vip:
+ ip: 192.168.123.10
+ cluster:
+ allowSchedulingOnControlPlanes: true
+ controllerManager:
+ extraArgs:
+ bind-address: 0.0.0.0
+ scheduler:
+ extraArgs:
+ bind-address: 0.0.0.0
+ apiServer:
+ certSANs:
+ - 127.0.0.1
+ proxy:
+ disabled: true
+ discovery:
+ enabled: false
+ etcd:
+ advertisedSubnets:
+ - 192.168.123.0/24
+ ```
+
+
+## 2. Generate Node Configuration Files
+
+Once you have the patch files ready, generate the node configuration files.
+The command below uses the three files created in the previous steps: `secrets.yaml`, `patch.yaml`, and `patch-controlplane.yaml`.
+
+The URL `https://192.168.123.10:6443` uses the VIP mentioned above, and port `6443` is the standard Kubernetes API port.
+
+```bash
+talosctl gen config \
+ cozystack https://192.168.123.10:6443 \
+ --with-secrets secrets.yaml \
+ --config-patch=@patch.yaml \
+ --config-patch-control-plane @patch-controlplane.yaml
+export TALOSCONFIG=$PWD/talosconfig
+```
+
+In this example, `192.168.123.11`, `192.168.123.12`, and `192.168.123.13` are the cluster nodes.
+In this setup, all of them act as control-plane (management) nodes.
+
+## 3. Apply Node Configuration
+
+Apply the configuration to all nodes, not only the control-plane nodes:
+
+```bash
+talosctl apply -f controlplane.yaml -n 192.168.123.11 -e 192.168.123.11 -i
+talosctl apply -f controlplane.yaml -n 192.168.123.12 -e 192.168.123.12 -i
+talosctl apply -f controlplane.yaml -n 192.168.123.13 -e 192.168.123.13 -i
+```
+
+Further on, you can also use the following options:
+
+- `--dry-run` - dry run mode will show a diff with the existing configuration.
+- `-m try` - try mode will roll back the configuration in 1 minute.
+
+### 3.1. Wait for Nodes to Reboot
+
+Wait until all nodes have rebooted.
+Remove the installation media (e.g., USB stick) to ensure that the nodes boot from the internal disk.
+
+Ready nodes expose port `50000`, which is a sign that the node has completed Talos configuration and rebooted.
+
+If you need to wait for node readiness in a script, consider this example:
+
+```bash
+timeout 60 sh -c 'until nc -nzv 192.168.123.11 50000 && \
+ nc -nzv 192.168.123.12 50000 && \
+ nc -nzv 192.168.123.13 50000; \
+ do sleep 1; done'
+```
+
+## 4. Bootstrap and Access the Cluster
+
+Run `talosctl bootstrap` on a single control-plane node — it is enough to bootstrap the whole cluster:
+
+```bash
+talosctl bootstrap -n 192.168.123.11 -e 192.168.123.11
+```
+
+To access the cluster, generate an administrative `kubeconfig`:
+
+```bash
+talosctl kubeconfig -n 192.168.123.11 -e 192.168.123.11 kubeconfig
+```
+
+Set up `kubectl` to use this new config by exporting the `KUBECONFIG` variable:
+
+```bash
+export KUBECONFIG=$PWD/kubeconfig
+```
+
+{{% alert color="info" %}}
+To make this `kubeconfig` permanently available, you can make it the default one (`~/.kube/config`),
+use `kubectl config use-context`, or employ a variety of other methods.
+Check out the [Kubernetes documentation on cluster access](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
+{{% /alert %}}
+
+Check that the cluster is available with this new `kubeconfig`:
+
+```bash
+kubectl get ns
+```
+
+Example output:
+
+```console
+NAME STATUS AGE
+default Active 7m56s
+kube-node-lease Active 7m56s
+kube-public Active 7m56s
+kube-system Active 7m56s
+```
+
+{{% alert color="info" %}}
+:warning: All nodes will show `STATUS: NotReady` in `kubectl get nodes`, which is normal at this step.
+This happens because the default CNI plugin was disabled in the previous step so that Cozystack can install its own CNI plugin.
+{{% /alert %}}
+
+
+## Further Steps
+
+Now you have a Kubernetes cluster bootstrapped and ready for installing Cozystack.
+To complete the installation, follow the deployment guide, starting with the
+[Install Cozystack]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}) section.
diff --git a/content/en/docs/v1.3/install/kubernetes/troubleshooting.md b/content/en/docs/v1.3/install/kubernetes/troubleshooting.md
new file mode 100644
index 00000000..614c53a0
--- /dev/null
+++ b/content/en/docs/v1.3/install/kubernetes/troubleshooting.md
@@ -0,0 +1,91 @@
+---
+title: Troubleshooting Kubernetes Installation
+linkTitle: Troubleshooting
+description: "Instructions for resolving typical problems that can occur when installing Kubernetes with `talm`, `talos-bootstrap`, or `talosctl`."
+weight: 40
+---
+
+This page has instructions for resolving typical problems that can occur when installing Kubernetes with `talm`, `talos-bootstrap`, or `talosctl`.
+
+## No Talos nodes in maintenance mode found!
+
+If you encounter issues with the `talos-bootstrap` script not detecting any nodes, follow these steps to diagnose and resolve the issue:
+
+1. Verify Network Segment
+
+ Ensure that you are running the script within the same network segment as the nodes. This is crucial for the script to be able to communicate with the nodes.
+
+1. Use Nmap to Discover Nodes
+
+ Check if `nmap` can discover your node by running the following command:
+
+ ```bash
+ nmap -Pn -n -p 50000 192.168.0.0/24
+ ```
+
+ This command scans for nodes in the network that are listening on port `50000`.
+ The output should list all the nodes in the network segment that are listening on this port, indicating that they are reachable.
+
+1. Verify talosctl Connectivity
+
+ Next, verify that `talosctl` can connect to a specific node, especially if the node is in maintenance mode:
+
+ ```bash
+ talosctl -e "${node}" -n "${node}" get machinestatus -i
+ ```
+
+ Receiving an error like the following usually means your local `talosctl` binary is outdated:
+
+ ```console
+ rpc error: code = Unimplemented desc = unknown service resource.ResourceService
+ ```
+
+ Updating `talosctl` to the latest version should resolve this issue.
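+
+   You can check the installed client version with:
+
+   ```bash
+   talosctl version --client
+   ```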
+
+1. Run talos-bootstrap in debug mode
+
+ If the previous steps don’t help, run `talos-bootstrap` in debug mode to gain more insight.
+
+ Execute the script with the `-x` option to enable debug mode:
+
+ ```bash
+ bash -x talos-bootstrap
+ ```
+
+ Pay attention to the last command displayed before the error; it often indicates the command that failed and can provide clues for further troubleshooting.
+
+## Fix ext-lldpd on Talos Nodes
+
+A runtime service stuck in the waiting state keeps a node on the booting screen in the Talos console.
+If you want to use `lldpd`, you can patch the nodes once you have connectivity with `talosctl`.
+
+Write your lldpd machine configuration patch to `lldpd.patch.yaml`, then apply it to a node:
+
+```bash
+talosctl patch mc -p @lldpd.patch.yaml -n <node-ip> -e <node-ip>
+```
+
+Verify which nodes have the lldpd extension installed:
+```bash
+node_net='192.168.100.0/24'
+nmap -Pn -n -T4 -p50000 --open -oG - $node_net | awk '/50000\/open/ { system("talosctl get extensions -n "$2" -e "$2" | grep lldpd") }'
+```
+
+To print the patch command for every discovered node:
+```bash
+nmap -Pn -n -T4 -p50000 --open -oG - $node_net | awk '/50000\/open/ {print "talosctl patch mc -p @lldpd.patch.yaml -n "$2" -e "$2" "}'
+```
+
+Verify the state in the Talos dashboard:
+```bash
+talosctl dashboard -n $(nmap -Pn -n -T4 -p50000 --open -oG - $node_net | awk '/50000\/open/ {print $2}' | paste -sd,)
+```
\ No newline at end of file
diff --git a/content/en/docs/v1.3/install/providers/_index.md b/content/en/docs/v1.3/install/providers/_index.md
new file mode 100644
index 00000000..ff35826c
--- /dev/null
+++ b/content/en/docs/v1.3/install/providers/_index.md
@@ -0,0 +1,16 @@
+---
+title: "Deploying Cozystack Cluster on Clouds and Hosting Providers"
+linkTitle: "Provider-Specific Guides"
+description: "Guides for deploying Cozystack clusters on specific cloud and hosting providers."
+weight: 40
+aliases:
+ - /docs/v1.3/talos/install
+---
+
+This section has guides for deploying Cozystack clusters on specific cloud and hosting providers.
+They explain all steps and details of the deployment process, including:
+
+- Specifics of the provider's infrastructure and networking
+- Installation of Talos Linux with a method that suits the provider
+- Configuration of the Kubernetes cluster including provider-specific settings for networking, storage, and other resources
+- Installation of Cozystack on the Kubernetes cluster with provider-specific components and configurations
\ No newline at end of file
diff --git a/content/en/docs/v1.3/install/providers/hetzner.md b/content/en/docs/v1.3/install/providers/hetzner.md
new file mode 100644
index 00000000..d62ff501
--- /dev/null
+++ b/content/en/docs/v1.3/install/providers/hetzner.md
@@ -0,0 +1,459 @@
+---
+title: How to install Cozystack in Hetzner
+linkTitle: Hetzner.com
+description: "How to install Cozystack in Hetzner"
+weight: 30
+aliases:
+ - /docs/v1.3/operations/talos/installation/hetzner
+ - /docs/v1.3/talos/installation/hetzner
+ - /docs/v1.3/talos/install/hetzner
+---
+
+This guide will help you install Cozystack on dedicated servers from [Hetzner](https://www.hetzner.com/).
+There are several steps to follow, including preparing the infrastructure, installing Talos Linux, configuring cloud-init, and bootstrapping the cluster.
+
+
+## Prepare Infrastructure and Networking
+
+Installation on Hetzner includes the common [hardware requirements]({{% ref "/docs/v1.3/install/hardware-requirements" %}}) with several additions.
+
+### Networking Options
+
+There are two options for network connectivity between Cozystack nodes in the cluster:
+
+- **Creating a subnet using vSwitch.**
+ This option is recommended for production environments.
+
+ For this option, dedicated servers must be deployed on [Hetzner robot](https://robot.hetzner.com/).
+ Hetzner also requires using its own load balancer, RobotLB, in place of Cozystack's default MetalLB.
+ Cozystack includes RobotLB as an optional component since release v0.35.0.
+
+- **Using only dedicated servers' public IPs.**
+ This option is valid for a proof-of-concept installation, but not recommended for production.
+
+
+### Configure Subnet with vSwitch
+
+Complete the following steps to prepare your servers for installing Cozystack:
+
+1. Configure networking in Hetzner (only for the **vSwitch subnet** option).
+
+ Complete the steps from the [Prerequisites section](https://github.com/Intreecom/robotlb/blob/master/README.md#prerequisites)
+ of RobotLB's README:
+
+ 1. Create a [vSwitch](https://docs.hetzner.com/cloud/networks/connect-dedi-vswitch/).
+ 2. Use it to assign IPs to your dedicated servers on Hetzner.
+ 3. Create a subnet to [connect your dedicated servers](https://docs.hetzner.com/cloud/networks/connect-dedi-vswitch/).
+
+ Note that you don't need to deploy RobotLB manually.
+   Instead, you will configure Cozystack to install it as an optional component in step 3, "Install Cozystack", of this guide.
+
+### Disable Secure Boot
+
+1. Make sure that Secure Boot is disabled.
+
+ Secure Boot is currently not supported in Talos Linux.
+ If your server is configured to use Secure Boot, you need to disable this feature in your BIOS.
+ Otherwise, it will block the server from booting after Talos Linux installation.
+
+ Check it with the following command:
+
+ ```console
+ # mokutil --sb-state
+ SecureBoot disabled
+ Platform is in Setup Mode
+ ```
+
+For the rest of this guide, let's assume the following network configuration:
+
+- Hetzner cloud network is `10.0.0.0/16`, named `network-1`.
+- vSwitch subnet with dedicated servers is `10.0.1.0/24`.
+- vSwitch VLAN ID is `4000`.
+- There are three dedicated servers with the following public and private IPs:
+ - `node1`, public IP `12.34.56.101`, vSwitch subnet IP `10.0.1.101`
+ - `node2`, public IP `12.34.56.102`, vSwitch subnet IP `10.0.1.102`
+ - `node3`, public IP `12.34.56.103`, vSwitch subnet IP `10.0.1.103`
+
+## 1. Install Talos Linux
+
+The first stage of deploying Cozystack is to install Talos Linux on the dedicated servers.
+
+Talos is a Linux distribution made for running Kubernetes in the most secure and efficient way.
+To learn why Cozystack adopted Talos as the foundation of the cluster,
+read [Talos Linux in Cozystack]({{% ref "/docs/v1.3/guides/talos" %}}).
+
+### 1.1. Install boot-to-talos in Rescue Mode
+
+Talos will be booted from the Hetzner rescue system using the [`boot-to-talos`](https://github.com/cozystack/boot-to-talos) utility.
+Later, when you apply Talm configuration, Talos will be installed to disk.
+Run these steps on each dedicated server.
+
+1. Switch your server into rescue mode and log in to the server using SSH.
+
+1. Identify the disk that will be used for Talos later (for example, `/dev/nvme0n1`).
+
+1. Download and install `boot-to-talos`:
+
+ ```bash
+ curl -sSL https://github.com/cozystack/boot-to-talos/raw/refs/heads/main/hack/install.sh | sh -s
+ ```
+
+ After this, the `boot-to-talos` binary should be available in your `PATH`:
+
+ ```bash
+ boot-to-talos -h
+ ```
+
+### 1.2. Install Talos Linux with boot-to-talos
+
+1. Start the installer:
+
+ ```bash
+ boot-to-talos
+ ```
+
+ When prompted:
+
+ - Select mode `1. boot`.
+ - Confirm or change the Talos installer image.
+     The default value points to the Cozystack Talos image, which is suitable for this installation.
+ - Provide network settings (interface name, IP address, netmask, gateway) matching the configuration you prepared earlier
+ (vSwitch subnet or public IPs).
+ - Optionally configure a serial console if you use it for remote access.
+
+ The utility will download the Talos installer image, extract the kernel and initramfs, and boot the node into Talos Linux
+ (using the kexec mechanism) without modifying the disks.
+
+### 1.3. Boot into Talos Linux
+
+After `boot-to-talos` finishes, the server reboots automatically into Talos Linux in maintenance mode.
+
+Repeat the same procedure for all dedicated servers in the cluster.
+Once all nodes are booted into Talos, proceed to the next section and configure them using Talm.
+
+## 2. Install Kubernetes Cluster
+
+Now that Talos is booted in maintenance mode, the nodes need to receive configuration to set up a Kubernetes cluster.
+There are [several options]({{% ref "/docs/v1.3/install/kubernetes" %}}) to write and apply Talos configuration.
+This guide will focus on [Talm](https://github.com/cozystack/talm), Cozystack's own Talos configuration management tool.
+
+This part of the guide is based on the generic [Talm guide]({{% ref "/docs/v1.3/install/kubernetes/talm" %}}),
+but has instructions and examples specific to Hetzner.
+
+### 2.1. Prepare Node Configuration with Talm
+
+1. Start by installing the latest version of Talm for your OS, if you don't have it yet:
+
+ ```bash
+ curl -sSL https://github.com/cozystack/talm/raw/refs/heads/main/hack/install.sh | sh -s
+ ```
+
+1. Make a directory for cluster configuration and initialize a Talm project in it.
+
+ Note that Talm has a built-in preset for Cozystack, which we use with `--preset cozystack`:
+
+ ```bash
+ mkdir -p hetzner-cluster
+ cd hetzner-cluster
+ talm init --preset cozystack --name hetzner
+ ```
+
+   A number of configuration files are now created in the `hetzner-cluster` directory.
+ To learn more about the role of each file, refer to the
+ [Talm guide]({{% ref "/docs/v1.3/install/kubernetes/talm#2-initialize-cluster-configuration" %}}).
+
+1. Edit `values.yaml`, modifying the following values:
+
+ - `advertisedSubnets` list should have the vSwitch subnet as an item.
+ - `endpoint` and `floatingIP` should use an unassigned IP from this subnet.
+ This IP will be used to access the cluster API with `talosctl` and `kubectl`.
+   - `podSubnets` and `serviceSubnets` should use other subnets from the Hetzner cloud network,
+     which don't overlap with each other or with the vSwitch subnet.
+
+ ```yaml
+ endpoint: "https://10.0.1.100:6443"
+ clusterDomain: cozy.local
+ # floatingIP points to the primary etcd node
+ floatingIP: 10.0.1.100
+ image: "ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}"
+ podSubnets:
+ - 10.244.0.0/16
+ serviceSubnets:
+ - 10.96.0.0/16
+ advertisedSubnets:
+ # vSwitch subnet
+ - 10.0.1.0/24
+ oidcIssuerUrl: ""
+ certSANs: []
+ ```
+
+1. Create node configuration files from templates and values:
+
+ ```bash
+ mkdir -p nodes
+ talm template -e 12.34.56.101 -n 12.34.56.101 -t templates/controlplane.yaml -i > nodes/node1.yaml
+ talm template -e 12.34.56.102 -n 12.34.56.102 -t templates/controlplane.yaml -i > nodes/node2.yaml
+ talm template -e 12.34.56.103 -n 12.34.56.103 -t templates/controlplane.yaml -i > nodes/node3.yaml
+ ```
+
+ This guide assumes that you have only three dedicated servers, so they all must be control plane nodes.
+ If you have more and want to separate control plane and worker nodes, use `templates/worker.yaml` to produce worker configs:
+
+ ```bash
+   talm template -e 12.34.56.104 -n 12.34.56.104 -t templates/worker.yaml -i > nodes/worker1.yaml
+ ```
+
+1. Edit each node's configuration file, adding the VLAN configuration.
+
+ Use the following diff as an example and note that for each node its subnet IP should be used:
+
+ ```diff
+ machine:
+ network:
+ interfaces:
+ - deviceSelector:
+ # ...
+ - vip:
+ - ip: 10.0.1.100
+ + vlans:
+ + - addresses:
+ + # different for each node
+ + - 10.0.1.101/24
+ + routes:
+ + - network: 10.0.0.0/16
+ + gateway: 10.0.1.1
+ + vlanId: 4000
+ + vip:
+ + ip: 10.0.1.100
+ ```
+
+### 2.2. Apply Node Configuration
+
+1. Once the configuration files are ready, apply configuration to each node:
+
+ ```bash
+ talm apply -f nodes/node1.yaml -i
+ talm apply -f nodes/node2.yaml -i
+ talm apply -f nodes/node3.yaml -i
+ ```
+
+   This command initializes each node and sets up an authenticated connection, so `-i` (`--insecure`) won't be required later on.
+   If the command succeeds, it prints the node's IP and endpoint:
+
+ ```console
+ $ talm apply -f nodes/node1.yaml -i
+ - talm: file=nodes/node1.yaml, nodes=[12.34.56.101], endpoints=[12.34.56.101]
+ ```
+
+1. Wait until all nodes have rebooted and proceed to the next step.
+   When the nodes are ready, they will expose port `50000`, a sign that the node has completed the Talos installation and rebooted.
+
+ If you need to automate the node readiness check, consider this example:
+
+ ```bash
+ timeout 60 sh -c 'until \
+ nc -nzv 12.34.56.101 50000 && \
+ nc -nzv 12.34.56.102 50000 && \
+ nc -nzv 12.34.56.103 50000; \
+ do sleep 1; done'
+ ```
+
+1. Bootstrap the Kubernetes cluster from one of the control plane nodes:
+
+ ```bash
+ talm bootstrap -f nodes/node1.yaml
+ ```
+
+1. Generate an administrative `kubeconfig` to access the cluster using the same control plane node:
+
+ ```bash
+ talm kubeconfig -f nodes/node1.yaml
+ ```
+
+1. Edit the server URL in the `kubeconfig`, replacing the private IP with a public one:
+
+ ```diff
+ apiVersion: v1
+ clusters:
+ - cluster:
+ - server: https://10.0.1.101:6443
+ + server: https://12.34.56.101:6443
+ ```
+
+1. Finally, set the `KUBECONFIG` variable, or use another way of making this config
+   accessible to your `kubectl` client:
+
+ ```bash
+ export KUBECONFIG=$PWD/kubeconfig
+ ```
+
+1. Check that the cluster is available with this new `kubeconfig`:
+
+ ```bash
+ kubectl get ns
+ ```
+
+ Example output:
+
+ ```console
+ NAME STATUS AGE
+ default Active 7m56s
+ kube-node-lease Active 7m56s
+ kube-public Active 7m56s
+ kube-system Active 7m56s
+ ```
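+
+   At this stage the nodes will report `NotReady`, because the CNI plugin is installed later as part of Cozystack. You can check them with:
+
+   ```bash
+   kubectl get nodes
+   ```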
+
+At this point you have dedicated servers with Talos Linux and a Kubernetes cluster deployed on them.
+You also have a `kubeconfig` which you will use to access the cluster using `kubectl` and install Cozystack.
+
+## 3. Install Cozystack
+
+The final stage of deploying a Cozystack cluster on Hetzner is to install Cozystack on a prepared Kubernetes cluster.
+
+### 3.1. Start Cozystack Installer
+
+1. Install the Cozystack operator:
+
+ ```bash
+ helm upgrade --install cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --version {{< version-pin "cozystack_version" >}} \
+ --namespace cozy-system \
+ --create-namespace
+ ```
+
+ The example pins the installer to Cozystack {{< version-pin "cozystack_tag" >}}. For a newer patch in the same minor series, pick the desired tag from the [releases page](https://github.com/cozystack/cozystack/releases).
+
+1. Create a Platform Package file, **cozystack-platform.yaml**.
+
+   Note that this file reuses the pod and service subnets that were set in `values.yaml` before producing the Talos configuration with Talm.
+ Also note how Cozystack's default load balancer MetalLB is replaced with RobotLB using `disabledPackages` and `enabledPackages`.
+
+ Replace `example.org` with a routable fully-qualified domain name (FQDN) that you're going to use for your Cozystack-based platform.
+   If you don't have one ready, you can use [nip.io](https://nip.io/) with dash notation: for example, `192-168-1-1.nip.io` resolves to `192.168.1.1`.
+
+ ```yaml
+ apiVersion: cozystack.io/v1alpha1
+ kind: Package
+ metadata:
+ name: cozystack.cozystack-platform
+ spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ bundles:
+ disabledPackages:
+ - cozystack.metallb
+ enabledPackages:
+ - cozystack.hetzner-robotlb
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://api.example.org:443"
+ exposedServices:
+ - dashboard
+ - api
+ networking:
+ ## podSubnets from the node config
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ ## serviceSubnets from the node config
+ serviceCIDR: "10.96.0.0/16"
+ ```
+
+1. Apply the Platform Package:
+
+ ```bash
+ kubectl apply -f cozystack-platform.yaml
+ ```
+
+ The operator starts the installation, which will last for some time.
+ You can track the logs of the operator, if you wish:
+
+ ```bash
+ kubectl logs -n cozy-system deploy/cozystack-operator -f
+ ```
+
+1. Check the status of installation:
+
+ ```bash
+ kubectl get hr -A
+ ```
+
+ When installation is complete, all services will switch their state to `READY: True`:
+ ```console
+ NAMESPACE NAME AGE READY STATUS
+ cozy-cert-manager cert-manager 4m1s True Release reconciliation succeeded
+ cozy-cert-manager cert-manager-issuers 4m1s True Release reconciliation succeeded
+ cozy-cilium cilium 4m1s True Release reconciliation succeeded
+ ...
+ ```
+
+### 3.2. Create a Load Balancer with RobotLB
+
+Hetzner requires using its own RobotLB instead of Cozystack's default MetalLB.
+RobotLB is already installed as a Cozystack component and runs as a service within the platform.
+Now it needs a token to create a load balancer resource in Hetzner.
+
+1. Create a Hetzner API token for RobotLB.
+
+ Navigate to the Hetzner console, open Security, and create a token with `Read` and `Write` permissions.
+
+1. Pass the token to RobotLB to create a load balancer in Hetzner.
+
+ Use the Hetzner API token to create a Kubernetes secret in Cozystack.
+
+ - If you're using a **private network** (vSwitch), specify the network name:
+
+ ```bash
+ export ROBOTLB_HCLOUD_TOKEN=""
+ export ROBOTLB_DEFAULT_NETWORK=""
+
+ kubectl create secret generic hetzner-robotlb-credentials \
+ --namespace=cozy-hetzner-robotlb \
+ --from-literal=ROBOTLB_HCLOUD_TOKEN="$ROBOTLB_HCLOUD_TOKEN" \
+ --from-literal=ROBOTLB_DEFAULT_NETWORK="$ROBOTLB_DEFAULT_NETWORK"
+ ```
+
+ - If you're using **public IPs only** (no vSwitch), omit `ROBOTLB_DEFAULT_NETWORK`:
+
+ ```bash
+ export ROBOTLB_HCLOUD_TOKEN=""
+
+ kubectl create secret generic hetzner-robotlb-credentials \
+ --namespace=cozy-hetzner-robotlb \
+ --from-literal=ROBOTLB_HCLOUD_TOKEN="$ROBOTLB_HCLOUD_TOKEN"
+ ```
+
+ In this case, RobotLB will use nodes' public IPs (ExternalIP) as load balancer targets.
+ For this to work, the nodes must have ExternalIP addresses configured.
+ The simplest way to achieve this is by installing [local-ccm](https://github.com/cozystack/local-ccm),
+ which automatically assigns public IPs to nodes' `.status.addresses` field.
+
+   Upon receiving the token, the RobotLB service in Cozystack will create a load balancer in Hetzner.
+
+### 3.3. Configure Storage with LINSTOR
+
+Configuring LINSTOR on Hetzner is no different from other infrastructure setups.
+Follow the [Storage configuration guide]({{% ref "/docs/v1.3/getting-started/install-cozystack#3-configure-storage" %}}) from the Cozystack tutorial.
+
+### 3.4. Start Services in the Root Tenant
+
+Set up the basic services (`etcd`, `monitoring`, and `ingress`) in the root tenant:
+
+```bash
+kubectl patch -n tenant-root tenants.apps.cozystack.io root --type=merge -p '
+{"spec":{
+ "ingress": true,
+ "monitoring": true,
+ "etcd": true
+}}'
+```
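+
+To verify that these services are being deployed, list the HelmReleases in the root tenant, similar to the installation status check above:
+
+```bash
+# HelmReleases for etcd, monitoring, and ingress appear in tenant-root and should become READY: True
+kubectl get hr -n tenant-root
+```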
+
+## Notes and Troubleshooting
+
+{{% alert color="warning" %}}
+:warning: If you encounter issues booting Talos Linux on your node, it might be related to the serial console options in your GRUB configuration,
+`console=tty1 console=ttyS0`.
+Try rebooting into rescue mode and removing these options from the GRUB configuration on the third partition of your system's primary disk (`$DISK1`).
+{{% /alert %}}
diff --git a/content/en/docs/v1.3/install/providers/oracle-cloud.md b/content/en/docs/v1.3/install/providers/oracle-cloud.md
new file mode 100644
index 00000000..69ed4226
--- /dev/null
+++ b/content/en/docs/v1.3/install/providers/oracle-cloud.md
@@ -0,0 +1,384 @@
+---
+title: How to install Cozystack in Oracle Cloud Infrastructure
+linkTitle: Oracle Cloud
+description: "How to install Cozystack in Oracle Cloud Infrastructure"
+weight: 25
+aliases:
+ - /docs/v1.3/operations/talos/installation/oracle-cloud
+ - /docs/v1.3/talos/install/oracle-cloud
+---
+
+## Introduction
+
+This guide explains how to install Talos on Oracle Cloud Infrastructure and deploy a Kubernetes cluster that is ready for Cozystack.
+After completing the guide, you will be ready to proceed with
+[installing Cozystack itself]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}).
+
+{{% alert color="info" %}}
+This guide was created to support deployment of development clusters by the Cozystack team.
+If you face any problems while going through the guide, please raise an issue in [cozystack/website](https://github.com/cozystack/website/issues)
+or come and share your experience in the [Cozystack community](https://t.me/cozystack).
+{{% /alert %}}
+
+## 1. Upload Talos Image to Oracle Cloud
+
+The first step is to make a Talos Linux installation image available for use in Oracle Cloud as a custom image.
+
+1. Download the Talos Linux image archive for Cozystack {{< version-pin "cozystack_tag" >}} from the [releases page](https://github.com/cozystack/cozystack/releases/tag/{{< version-pin "cozystack_tag" >}}) and unpack it:
+
+ ```bash
+ wget https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/metal-amd64.raw.xz
+ xz -d metal-amd64.raw.xz
+ ```
+
+ As a result, you will get the file `metal-amd64.raw`, which you can then upload to OCI.
+
+1. Follow the OCI documentation to [upload the image to a bucket in OCI Object Storage](https://docs.oracle.com/iaas/Content/Object/Tasks/managingobjects_topic-To_upload_objects_to_a_bucket.htm).
+
+1. Proceed with the documentation to [import this image as a custom image](https://docs.oracle.com/en-us/iaas/Content/Compute/Tasks/importingcustomimagelinux.htm#linux).
+ Use the following settings:
+
+ - **Image type**: QCOW2
+ - **Launch mode**: Paravirtualized mode
+
+1. Finally, get the image's [OCID](https://docs.oracle.com/en-us/iaas/Content/libraries/glossary/ocid.htm) and save it for use in the next steps.
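+
+   If you plan to use the Terraform configuration from the next section, a convenient way to keep the OCID at hand is a `TF_VAR_` environment variable matching the `talos_image_id` variable defined there (the OCID value below is a placeholder):
+
+   ```bash
+   # Terraform reads TF_VAR_<name> as the value of variable "<name>"
+   export TF_VAR_talos_image_id="ocid1.image.oc1..aaaaexampleuniqueid"
+   ```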
+
+## 2. Create Infrastructure
+
+The goal of this step is to prepare the infrastructure according to the
+[Cozystack cluster requirements]({{% ref "/docs/v1.3/install/hardware-requirements" %}}).
+
+This can be done manually using the Oracle Cloud dashboard or with Terraform.
+
+### 2.1 Prepare Terraform Configuration
+
+If you choose to use Terraform, the first step is to build the configuration.
+
+{{% alert color="info" %}}
+Check out [the complete example of Terraform configuration](https://github.com/cozystack/examples/tree/main/001-deploy-cozystack-oci)
+for deploying several Talos nodes in Oracle Cloud Infrastructure.
+{{% /alert %}}
+
+Below is a shorter example of Terraform configuration creating three virtual machines with the following private IPs:
+
+- `192.168.1.11`
+- `192.168.1.12`
+- `192.168.1.13`
+
+These VMs will also have a VLAN interface with subnet `192.168.100.0/24` used for the internal cluster communication.
+
+Note the part that references the Talos image OCID from the previous step:
+
+```hcl
+ source_details {
+ source_type = "image"
+ source_id = var.talos_image_id
+ }
+```
+
+Full configuration example:
+
+```hcl
+terraform {
+ backend "local" {}
+ required_providers {
+ oci = { source = "oracle/oci", version = "~> 6.35" }
+ }
+}
+
+resource "oci_core_vcn" "cozy_dev1" {
+ display_name = "cozy-dev1"
+ cidr_blocks = ["192.168.0.0/16"]
+ compartment_id = var.compartment_id
+}
+
+resource "oci_core_network_security_group" "cozy_dev1_allow_all" {
+ display_name = "allow-all"
+ compartment_id = var.compartment_id
+ vcn_id = oci_core_vcn.cozy_dev1.id
+}
+
+resource "oci_core_subnet" "test_subnet" {
+ display_name = "cozy-dev1"
+ cidr_block = "192.168.1.0/24"
+ compartment_id = var.compartment_id
+ vcn_id = oci_core_vcn.cozy_dev1.id
+}
+
+resource "oci_core_network_security_group_security_rule" "cozy_dev1_ingress" {
+ network_security_group_id = oci_core_network_security_group.cozy_dev1_allow_all.id
+ direction = "INGRESS"
+ protocol = "all"
+ source = "0.0.0.0/0"
+ source_type = "CIDR_BLOCK"
+}
+
+resource "oci_core_network_security_group_security_rule" "cozy_dev1_egress" {
+ network_security_group_id = oci_core_network_security_group.cozy_dev1_allow_all.id
+ direction = "EGRESS"
+ protocol = "all"
+ destination = "0.0.0.0/0"
+ destination_type = "CIDR_BLOCK"
+}
+
+resource "oci_core_internet_gateway" "cozy_dev1" {
+ display_name = "cozy-dev1"
+ compartment_id = var.compartment_id
+ vcn_id = oci_core_vcn.cozy_dev1.id
+}
+
+resource "oci_core_default_route_table" "cozy_dev1_default_rt" {
+ manage_default_resource_id = oci_core_vcn.cozy_dev1.default_route_table_id
+
+ compartment_id = var.compartment_id
+  display_name = "cozy-dev1-default"
+
+ route_rules {
+ destination = "0.0.0.0/0"
+ destination_type = "CIDR_BLOCK"
+ network_entity_id = oci_core_internet_gateway.cozy_dev1.id
+ }
+}
+
+resource "oci_core_vlan" "cozy_dev1_vlan" {
+ display_name = "cozy-dev1-vlan"
+ compartment_id = var.compartment_id
+ vcn_id = oci_core_vcn.cozy_dev1.id
+
+ cidr_block = "192.168.100.0/24"
+ nsg_ids = [oci_core_network_security_group.cozy_dev1_allow_all.id]
+}
+
+variable "node_private_ips" {
+ type = list(string)
+ default = ["192.168.1.11", "192.168.1.12", "192.168.1.13"]
+}
+
+variable "compartment_id" {
+ description = "OCID of the OCI compartment"
+ type = string
+}
+
+variable "availability_domain" {
+ description = "Availability domain for the instances"
+ type = string
+}
+
+variable "talos_image_id" {
+ description = "OCID of the imported Talos Linux image"
+ type = string
+}
+
+resource "oci_core_instance" "cozy_dev1_nodes" {
+ count = length(var.node_private_ips)
+ display_name = "cozy-dev1-node-${count.index + 1}"
+ compartment_id = var.compartment_id
+ availability_domain = var.availability_domain
+ shape = "VM.Standard3.Flex"
+ preserve_boot_volume = false
+ preserve_data_volumes_created_at_launch = false
+
+ create_vnic_details {
+ subnet_id = oci_core_subnet.test_subnet.id
+ nsg_ids = [oci_core_network_security_group.cozy_dev1_allow_all.id]
+ private_ip = var.node_private_ips[count.index]
+ }
+
+ source_details {
+ source_type = "image"
+ source_id = var.talos_image_id
+ }
+
+ launch_volume_attachments {
+ display_name = "cozy-dev1-node${count.index + 1}-data"
+ launch_create_volume_details {
+ display_name = "cozy-dev1-node${count.index + 1}-data"
+ compartment_id = var.compartment_id
+ size_in_gbs = "512"
+ volume_creation_type = "ATTRIBUTES"
+ vpus_per_gb = "10"
+ }
+ type = "paravirtualized"
+ }
+
+ shape_config {
+ memory_in_gbs = "32"
+ ocpus = "4"
+ }
+}
+
+resource "oci_core_vnic_attachment" "cozy_dev1_vlan_vnic" {
+ count = length(var.node_private_ips)
+ instance_id = oci_core_instance.cozy_dev1_nodes[count.index].id
+
+ create_vnic_details {
+ vlan_id = oci_core_vlan.cozy_dev1_vlan.id
+ }
+}
+```
+
+### 2.2 Apply Configuration
+
+When the configuration is ready, authenticate to OCI and apply it with Terraform:
+
+```bash
+oci session authenticate --region us-ashburn-1 --profile-name=DEFAULT
+terraform init
+terraform apply
+```
+
+As a result of these commands, the virtual machines will be deployed and configured.
+
+Save the public IP addresses assigned to the VMs for the next step. In this example, the addresses are:
+
+- `1.2.3.4`
+- `1.2.3.5`
+- `1.2.3.6`
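+
+If you prefer Terraform to print these addresses for you, you can add an output block to the configuration above (a small sketch; `public_ip` is an attribute exported by `oci_core_instance`):
+
+```hcl
+# Prints the public IPs of all nodes after `terraform apply`
+output "node_public_ips" {
+  value = oci_core_instance.cozy_dev1_nodes[*].public_ip
+}
+```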
+
+## 3. Configure Talos and Initialize Kubernetes Cluster
+
+The next step is to apply the configurations and install Talos Linux.
+There are several ways to do that.
+
+This guide uses [Talm](https://github.com/cozystack/talm), a command‑line tool for declarative management of Talos Linux.
+Talm has configuration templates specialized for deploying Cozystack, which is why we will use it.
+
+If you do not have Talm installed, [download the latest binary](https://github.com/cozystack/talm/releases/latest) for your OS and architecture.
+Make it executable and save it to `/usr/local/bin/talm`:
+
+```bash
+# pick your preferred architecture from the release artifacts
+wget -O talm https://github.com/cozystack/talm/releases/latest/download/talm-darwin-arm64
+chmod +x talm
+mv talm /usr/local/bin/talm
+```
+
+### 3.1 Prepare Talm Configuration
+
+1. Create a directory for the new cluster's configuration files:
+ ```bash
+ mkdir -p cozystack-cluster
+ cd cozystack-cluster
+ ```
+
+1. Initialize Talm configuration for Cozystack:
+
+ ```bash
+ talm init --preset cozystack --name mycluster
+ ```
+
+1. Generate a configuration template for each node, providing the node's IP address:
+
+ ```bash
+ # Use the node's public IP assigned by OCI
+ talm template \
+ --nodes 1.2.3.4 \
+ --endpoints 1.2.3.4 \
+ --template templates/controlplane.yaml \
+ --insecure \
+ > nodes/node0.yaml
+ ```
+
+ Repeat the same for each node using its public IP:
+
+ ```bash
+ talm template ... > nodes/node1.yaml
+ talm template ... > nodes/node2.yaml
+ ```
+
+ Using `templates/controlplane.yaml` means the node will act as both control plane and worker.
+ Having three combined nodes is the preferred setup for a small PoC cluster.
+
+ The `--insecure` (`-i`) parameter is required because Talm must retrieve configuration data from a node that is not yet initialized and therefore cannot accept an authenticated connection.
+ The node will be initialized only a few steps later, with `talm apply`.
+
+ The node's public IP must be specified for both the `--nodes` (`-n`) and `--endpoints` (`-e`) parameters.
+ To learn more about Talos node configuration and endpoints, refer to the
+   [Talos documentation](https://www.talos.dev/{{< version-pin "talos_minor" >}}/learn-more/talosctl/#endpoints-and-nodes).
+
+1. Edit the node configuration file as needed.
+
+ - Update `hostname` to the desired name:
+
+ ```yaml
+ machine:
+ network:
+ hostname: node1
+ ```
+
+ - Add the private interface configuration to the `machine.network.interfaces` section, and move `vip` to this configuration.
+ This part of the configuration is not generated automatically, so you need to fill in the values:
+
+   - `interface`: obtained from the "Discovered interfaces" list by finding the private (VLAN) interface.
+ - `addresses`: use the address specified for Layer 2 (L2).
+
+ Example:
+
+ ```yaml
+ machine:
+ network:
+ interfaces:
+ - interface: eth0
+ addresses:
+ - 1.2.3.4/29
+ routes:
+ - network: 0.0.0.0/0
+ gateway: 1.2.3.1
+ - interface: eth1
+ addresses:
+ - 192.168.100.11/24
+ vip:
+ ip: 192.168.100.10
+ ```
+
+After these steps, the node configuration files are ready to be applied.
+
+### 3.2 Initialize Talos and Run Kubernetes Cluster
+
+The next stage is to initialize Talos nodes and bootstrap a Kubernetes cluster.
+
+1. Run `talm apply` for all nodes to apply the configurations:
+
+ ```bash
+ talm apply -f nodes/node0.yaml --insecure
+ talm apply -f nodes/node1.yaml --insecure
+ talm apply -f nodes/node2.yaml --insecure
+ ```
+
+ The nodes will reboot, and Talos will be installed to disk.
+ The parameter `--insecure` (`-i`) is required the first time you run `talm apply` on each node.
+
+1. Execute `talm bootstrap` on the first node in the cluster. For example:
+ ```bash
+ talm bootstrap -f nodes/node0.yaml
+ ```
+
+1. Get the `kubeconfig` from any control‑plane node using Talm. In this example, all three nodes are control‑plane nodes:
+
+ ```bash
+ talm kubeconfig -f nodes/node0.yaml
+ ```
+
+1. Edit the `kubeconfig` to set the server IP address to one of the control‑plane nodes, for example:
+ ```yaml
+ server: https://1.2.3.4:6443
+ ```
+
+1. Export the `KUBECONFIG` variable to use the kubeconfig, and check the connection to the cluster:
+ ```bash
+ export KUBECONFIG=${PWD}/kubeconfig
+ kubectl get nodes
+ ```
+
+   You should see that the nodes are accessible and in the `NotReady` state. This is expected at this stage, because the CNI plugin is installed later as part of Cozystack:
+
+ ```console
+ NAME STATUS ROLES AGE VERSION
+ node0 NotReady control-plane 2m21s v1.32.0
+ node1 NotReady control-plane 1m47s v1.32.0
+ node2 NotReady control-plane 1m43s v1.32.0
+ ```
+
+Now you have a Kubernetes cluster prepared for installing Cozystack.
+To complete the installation, follow the deployment guide, starting with the
+[Install Cozystack]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}) section.
diff --git a/content/en/docs/v1.3/install/providers/servers-com/_index.md b/content/en/docs/v1.3/install/providers/servers-com/_index.md
new file mode 100644
index 00000000..d2c3dd29
--- /dev/null
+++ b/content/en/docs/v1.3/install/providers/servers-com/_index.md
@@ -0,0 +1,234 @@
+---
+title: Install Cozystack in Servers.com
+linkTitle: Servers.com
+description: "Install Cozystack in the Servers.com infrastructure."
+weight: 40
+aliases:
+ - /docs/v1.3/operations/talos/installation/servers_com
+ - /docs/v1.3/talos/installation/servers_com
+ - /docs/v1.3/talos/install/servers_com
+---
+
+## Before Installation
+
+### 1. Network
+
+1. **Set Up L2 Network**
+
+ 1. Navigate to **Networks > L2 Segment** and click **Add Segment**.
+
+ 
+
+ 
+
+ 
+
+ First, select **Private**, choose the region, add the servers, assign a name, and save it.
+
+ 1. Set the type to **Native**.
+ Do the same for **Public**.
+
+ 
+
+### 2. Access
+
+1. Create SSH keys for server access.
+
+1. Go to **Identity and Access > SSH and Keys**.
+
+ 
+
+1. Create new keys or add your own.
+
+ 
+ 
+
+## Set Up OS
+
+### 1. Operating System and Access
+
+{{% alert color="warning" %}}
+:warning: In rescue mode only the public network is available; the private L2 network is not reachable.
+For Talos installation use a regular OS (for example Ubuntu) instead of rescue mode.
+{{% /alert %}}
+
+1. In the Servers.com control panel, install Ubuntu on the server (for example via **Dedicated Servers > Server Details > OS install**) and make sure your SSH key is selected.
+
+1. After the installation is complete, connect via SSH using the external IP of the server (**Details** > **Public IP**).
+
+ 
+
+### 2. Install Talos with boot-to-talos
+
+Talos will be booted from the installed Ubuntu using the [`boot-to-talos`](https://github.com/cozystack/boot-to-talos) utility.
+Later, when you apply Talm configuration, Talos will be installed to disk.
+Run these steps on each server.
+
+1. Check the information on block devices to find the disk that will be used for Talos later (for example, `/dev/sda`).
+
+ ```console
+ # lsblk
+ NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
+ sda 259:4 0 476.9G 0 disk
+ sdb 259:0 0 476.9G 0 disk
+ ```
+
+1. Download and install `boot-to-talos`:
+
+ ```bash
+ curl -sSL https://github.com/cozystack/boot-to-talos/raw/refs/heads/main/hack/install.sh | sudo sh -s
+ ```
+
+ After installation, verify that the binary is available:
+
+ ```bash
+ boot-to-talos -h
+ ```
+
+1. Run the installer:
+
+ ```bash
+ sudo boot-to-talos
+ ```
+
+ When prompted:
+
+ - Select mode `1. boot`.
+ - Confirm or change the Talos installer image (the default Cozystack image is suitable).
+ - Provide network settings matching the public interface (`bond0`) and default gateway.
+
+ The utility will download the Talos installer image and boot the node into Talos Linux (using the kexec mechanism) without modifying the disks.
+
+ For fully automated installations you can use non-interactive mode:
+
+ ```bash
+ sudo boot-to-talos -yes
+ ```
+
+### 3. Boot into Talos
+
+After `boot-to-talos` finishes, the server reboots automatically into Talos Linux in maintenance mode.
+Repeat the same procedure for all servers, then proceed to configure them with Talm.
+
+## Talos Configuration
+
+Use [Talm](https://github.com/cozystack/talm) to apply the configuration and install Talos Linux to disk.
+
+1. [Download the latest Talm binary](https://github.com/cozystack/talm/releases/latest) and save it to `/usr/local/bin/talm`.
+
+1. Make it executable:
+
+ ```bash
+ chmod +x /usr/local/bin/talm
+ ```
+
+### Installation with Talm
+
+1. Create a directory for the new cluster:
+
+ ```bash
+ mkdir -p cozystack-cluster
+ cd cozystack-cluster
+ ```
+
+1. Run the following command to initialize Talm for Cozystack:
+
+ ```bash
+ talm init --preset cozystack --name mycluster
+ ```
+
+ After initializing, generate a configuration template with the command:
+
+ ```bash
+ talm -n 1.2.3.4 -e 1.2.3.4 template -t templates/controlplane.yaml -i > nodes/nodeN.yaml
+ ```
+
+1. Edit the node configuration file as needed:
+
+ 1. Update `hostname` to the desired name.
+ ```yaml
+ machine:
+ network:
+ hostname: node1
+ ```
+
+   1. Update `nameservers` to public ones, because the internal Servers.com DNS is not reachable from the private network.
+ ```yaml
+ machine:
+ network:
+ nameservers:
+ - 8.8.8.8
+ - 1.1.1.1
+ ```
+
+   1. Add the private interface configuration and move `vip` to this section. This section isn't generated automatically:
+      - `interface` - Obtained from the "Discovered interfaces" list by matching the MAC address of the private interface specified in the provider's email.
+        (Out of the two interfaces, select the one with the uplink.)
+      - `addresses` - Use the address specified for Layer 2 (L2).
+
+ ```yaml
+ machine:
+ network:
+ interfaces:
+ - interface: bond0
+ addresses:
+ - 1.2.3.4/29
+ routes:
+ - network: 0.0.0.0/0
+ gateway: 1.2.3.1
+ bond:
+ interfaces:
+ - enp1s0f1
+ - enp3s0f1
+ mode: 802.3ad
+ xmitHashPolicy: layer3+4
+ lacpRate: slow
+ miimon: 100
+ - interface: bond1
+ addresses:
+ - 192.168.102.11/23
+ bond:
+ interfaces:
+ - enp1s0f0
+ - enp3s0f0
+ mode: 802.3ad
+ xmitHashPolicy: layer3+4
+ lacpRate: slow
+ miimon: 100
+ vip:
+ ip: 192.168.102.10
+ ```
+
+**Execution steps:**
+
+1. Run `talm apply -f nodes/nodeN.yaml -i` for all nodes to apply the configurations. The nodes will reboot, and Talos will be installed on the disk. The `-i` (`--insecure`) flag is required only for the first apply on each node.
+
+1. Make sure that Talos got installed to disk by executing `talm get systemdisk -f nodes/nodeN.yaml` for each node. The output should be similar to:
+   ```console
+   NODE      NAMESPACE   TYPE         ID            VERSION   DISK
+   1.2.3.4   runtime     SystemDisk   system-disk   1         sda
+   ```
+
+ If the output is empty, it means that Talos still runs in RAM and hasn't been installed on the disk yet.
+1. Execute the bootstrap command on the first node in the cluster, for example:
+   ```bash
+   talm bootstrap -f nodes/node1.yaml
+ ```
+
+1. Get the `kubeconfig` from the first node, for example:
+   ```bash
+   talm kubeconfig -f nodes/node1.yaml
+ ```
+
+1. Edit the `kubeconfig` to set the server address to one of the control-plane nodes, for example:
+ ```yaml
+ server: https://1.2.3.4:6443
+ ```
+
+1. Export the `KUBECONFIG` variable and check the connection to the Kubernetes cluster:
+ ```bash
+ export KUBECONFIG=${PWD}/kubeconfig
+ kubectl get nodes
+ ```
+
+Now follow the **Getting Started** guide, starting from the [**Install Cozystack**]({{% ref "/docs/v1.3/getting-started/install-cozystack" %}}) section, to continue the installation.
diff --git a/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments1.png b/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments1.png
new file mode 100644
index 00000000..0ff562d2
Binary files /dev/null and b/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments1.png differ
diff --git a/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments2.png b/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments2.png
new file mode 100644
index 00000000..3a0a4b31
Binary files /dev/null and b/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments2.png differ
diff --git a/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments3.png b/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments3.png
new file mode 100644
index 00000000..88f51c1e
Binary files /dev/null and b/content/en/docs/v1.3/install/providers/servers-com/img/l2_segments3.png differ
diff --git a/content/en/docs/v1.3/install/providers/servers-com/img/public_ip.png b/content/en/docs/v1.3/install/providers/servers-com/img/public_ip.png
new file mode 100644
index 00000000..7931cbd5
Binary files /dev/null and b/content/en/docs/v1.3/install/providers/servers-com/img/public_ip.png differ
diff --git a/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys1.png b/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys1.png
new file mode 100644
index 00000000..93141fe0
Binary files /dev/null and b/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys1.png differ
diff --git a/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys2.png b/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys2.png
new file mode 100644
index 00000000..7abbfcc0
Binary files /dev/null and b/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys2.png differ
diff --git a/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys3.png b/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys3.png
new file mode 100644
index 00000000..e648037c
Binary files /dev/null and b/content/en/docs/v1.3/install/providers/servers-com/img/ssh_gpg_keys3.png differ
diff --git a/content/en/docs/v1.3/install/providers/servers-com/img/type_native.png b/content/en/docs/v1.3/install/providers/servers-com/img/type_native.png
new file mode 100644
index 00000000..2c698563
Binary files /dev/null and b/content/en/docs/v1.3/install/providers/servers-com/img/type_native.png differ
diff --git a/content/en/docs/v1.3/install/resource-planning.md b/content/en/docs/v1.3/install/resource-planning.md
new file mode 100644
index 00000000..97c1c656
--- /dev/null
+++ b/content/en/docs/v1.3/install/resource-planning.md
@@ -0,0 +1,41 @@
+---
+title: "System Resource Planning Recommendations"
+linkTitle: "Resource Planning"
+description: "How much system resources to allocate per node depending on cluster scale."
+weight: 6
+---
+
+This guide helps you plan system resource allocation per node based on cluster size and tenant count. Recommendations are based on production deployments and provide reasonably accurate estimates for planning purposes.
+
+{{% alert color="warning" %}}
+**Important:** Values shown are only for system components. Add your tenant workload requirements (applications, databases, Kubernetes clusters, VMs, etc.) on top of these.
+{{% /alert %}}
+
+**Quick start**: Allocate at least **2 CPU cores** and **6 GB RAM** per node for system components. For precise requirements based on your cluster size and tenant count, use the table or calculator below.
+
+**Note on allocation**: These values represent expected consumption during normal operation, not hard resource reservations. Kubernetes dynamically schedules workloads, and system components will consume approximately these amounts while remaining capacity stays available for tenant workloads.
+
+## Resource Requirements
+
+Requirements depend on both cluster size (number of nodes) and number of tenants. With many active services per tenant (5+), consider using values from the next tenant category.
+
+| Cluster Size | Nodes | Up to 5 tenants | 6-14 tenants | 15-30 tenants | 31+ tenants |
+|--------------|-------|-----------------|---------------|---------------|-------------|
+| **Small** | 3-5 | CPU: 2 cores RAM: 6 GB | CPU: 2 cores RAM: 6 GB | CPU: 3 cores RAM: 10 GB | CPU: 3 cores RAM: 15 GB |
+| **Medium** | 6-10 | CPU: 3 cores RAM: 7 GB | CPU: 3 cores RAM: 7 GB | CPU: 3 cores RAM: 12 GB | CPU: 4 cores RAM: 18 GB |
+| **Large** | 11+ | CPU: 3 cores RAM: 9 GB | CPU: 3 cores RAM: 10 GB | CPU: 4 cores RAM: 15 GB | CPU: 4 cores RAM: 22 GB |
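+
+For example, a **Medium** cluster of 8 nodes serving 20 tenants falls into the 15-30 tenant column: allocate about 3 CPU cores and 12 GB RAM per node for system components, and add your tenant workload requirements on top of that.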
+
+**Planning tips:**
+- Monitor actual resource consumption and adjust as needed
+- Plan for 20-30% growth buffer
+- With high tenant activity, consider increasing CPU by 50-100% and memory by 100-300%
+
+### Calculate Your Requirements
+
+Use the calculator below to find requirements for your specific configuration:
+
+{{< system-resource-calculator >}}
+
+### Why Resource Requirements Scale
+
+System resource consumption increases with cluster size and tenant count because system components must handle more Kubernetes objects to monitor, more network policies to enforce, and more logs to collect and process.
diff --git a/content/en/docs/v1.3/install/talos/_index.md b/content/en/docs/v1.3/install/talos/_index.md
new file mode 100644
index 00000000..927a9b29
--- /dev/null
+++ b/content/en/docs/v1.3/install/talos/_index.md
@@ -0,0 +1,38 @@
+---
+title: "Installing Talos Linux on Bare Metal or Virtual Machines"
+linkTitle: "1. Install Talos"
+description: "Step 1: Installing Talos Linux on virtual machines or bare metal, ready to bootstrap Cozystack cluster."
+weight: 10
+aliases:
+ - /docs/v1.3/talos/installation
+ - /docs/v1.3/talos/install
+ - /docs/v1.3/operations/talos/installation
+ - /docs/v1.3/operations/talos
+---
+
+**The first step** in deploying a Cozystack cluster is to install Talos Linux on your bare-metal servers or virtual machines.
+Ensure the VMs or bare-metal servers are provisioned before you begin.
+To plan the installation, see the [hardware requirements]({{% ref "/docs/v1.3/install/hardware-requirements" %}}).
+
+If this is your first time installing Cozystack, consider [starting with the Cozystack tutorial]({{% ref "/docs/v1.3/getting-started" %}}).
+
+## Installation Options
+
+There are several methods to install Talos on any bare metal server or virtual machine.
+Each method has its own limitations and optimal use cases:
+
+- **Recommended:** [Boot to Talos Linux from another Linux OS using `boot-to-talos`]({{% ref "/docs/v1.3/install/talos/boot-to-talos" %}}) —
+ a simple installation method, which can be used completely from userspace, and with no external dependencies except the Talos image.
+
+ Choose this option if you are new to Talos or if you have VMs with pre-installed OS from a cloud provider.
+- [Install using temporary DHCP and PXE servers running in Docker containers]({{% ref "/docs/v1.3/install/talos/pxe" %}}) —
+ requires an extra management machine, but allows for installing on multiple hosts at once.
+- [Install using ISO image]({{% ref "/docs/v1.3/install/talos/iso" %}}) — optimal for systems which can automate ISO installation.
+
+## Further Steps
+
+- After installing Talos Linux, you will have a number of nodes ready for the next step, which is to
+ [install and bootstrap a Kubernetes cluster]({{% ref "/docs/v1.3/install/kubernetes" %}}).
+
+- Read the [Talos Linux overview]({{% ref "/docs/v1.3/guides/talos" %}}) to learn why Talos Linux is the optimal OS choice for Cozystack
+ and what it brings to the platform.
diff --git a/content/en/docs/v1.3/install/talos/boot-to-talos.md b/content/en/docs/v1.3/install/talos/boot-to-talos.md
new file mode 100644
index 00000000..a8ed1d31
--- /dev/null
+++ b/content/en/docs/v1.3/install/talos/boot-to-talos.md
@@ -0,0 +1,174 @@
+---
+title: "Install Talos Linux using boot-to-talos"
+linkTitle: boot-to-talos
+description: "Install Talos Linux using boot-to-talos, a convenient CLI application requiring nothing but a Talos image."
+weight: 5
+aliases:
+ - /docs/v1.3/talos/install/kexec
+---
+
+This guide explains how to install Talos Linux on a host running any other Linux distribution using `boot-to-talos`.
+
+`boot-to-talos` was made by the Cozystack team to help users and teams adopting Cozystack install Talos, which is the most complex step in the process.
+It works entirely from userspace and has no external dependencies except the Talos installer image.
+
+Note that Cozystack provides its own Talos builds, which are tested and optimized for running a Cozystack cluster.
+
+## Version Compatibility
+
+Three versions need to line up when you install Cozystack on Talos:
+
+| Component | Where it comes from | Must match |
+| --- | --- | --- |
+| **Talos** on the node | `-image` flag passed to `boot-to-talos` | the Talos version that ships with the Cozystack release you are installing |
+| **`talosctl`** on your workstation | downloaded separately from [siderolabs/talos releases](https://github.com/siderolabs/talos/releases) | the major.minor of the Talos version you wrote to the node |
+| **Cozystack** | `--version` flag passed to `helm upgrade --install cozy-installer` | — (the anchor; everything else follows) |
+
+For **Cozystack {{< version-pin "cozystack_version" >}}** the pinned Talos version is **{{< version-pin "talos" >}}**
+([`packages/core/talos/images/talos/profiles/installer.yaml`](https://github.com/cozystack/cozystack/blob/{{< version-pin "cozystack_tag" >}}/packages/core/talos/images/talos/profiles/installer.yaml)).
+Use `ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}` as the `boot-to-talos` image and download `talosctl` {{< version-pin "talos_minor" >}}.x.
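+
+To confirm that the versions line up, you can check both sides with standard `talosctl` commands:
+
+```bash
+# Version of talosctl on your workstation
+talosctl version --client
+
+# Talos version running on a node (once the node is reachable)
+talosctl -n <node-ip> -e <node-ip> version
+```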
+
+{{% alert color="warning" %}}
+`boot-to-talos` v0.7.x carries its own hardcoded default image
+(`ghcr.io/cozystack/cozystack/talos:v1.11.6` as of v0.7.1, see
+[`cmd/boot-to-talos/main.go`](https://github.com/cozystack/boot-to-talos/blob/v0.7.1/cmd/boot-to-talos/main.go)).
+If you let the interactive prompt fall through to that default on a cluster
+where you intend to run Cozystack v1.3.0, you will end up with a Talos v1.11 node
+while the Cozystack installer and Talm templates target Talos v1.12, and you
+will hit a mismatch at bootstrap time. Always type in the image matching
+your target Cozystack release (or pass `-image` on the command line).
+{{% /alert %}}
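+
+For example, a run that pins the image explicitly might look like this (a sketch using the `-image` flag referenced above; run `boot-to-talos -h` to confirm the available flags):
+
+```bash
+# Pin the Talos installer image to the version matching your Cozystack release
+sudo boot-to-talos -image ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+```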
+
+## Modes
+
+`boot-to-talos` supports two installation modes:
+
+1. **boot** – Extract the kernel and initrd from the Talos installer and boot them directly using the kexec mechanism.
+2. **install** – Prepare the environment, run the Talos installer, and then overwrite the system disk with the installed image.
+
+{{< note >}}
+If one mode doesn't work on your system, try the other. Different methods may work better on different operating systems.
+{{< /note >}}
+
+## Installation
+
+### 1. Install `boot-to-talos`
+
+Install it in one of two ways:
+
+- Use the installer script:
+
+ ```bash
+ curl -sSL https://github.com/cozystack/boot-to-talos/raw/refs/heads/main/hack/install.sh | sh -s
+ ```
+
+- Alternatively, download the binary from the [GitHub releases page](https://github.com/cozystack/boot-to-talos/releases/latest):
+
+ ```bash
+ wget https://github.com/cozystack/boot-to-talos/releases/latest/download/boot-to-talos-linux-amd64.tar.gz
+ ```
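+
+If you downloaded the archive, unpack it and move the binary into your `PATH` (a sketch; adjust the file name if the archive layout differs):
+
+```bash
+tar -xzf boot-to-talos-linux-amd64.tar.gz
+sudo install -m 0755 boot-to-talos /usr/local/bin/boot-to-talos
+```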
+
+### 2. Run to install Talos
+
+Run `boot-to-talos` and provide configuration values.
+Make sure to use Cozystack's own Talos build, found at [ghcr.io/cozystack/cozystack/talos](https://github.com/cozystack/cozystack/pkgs/container/cozystack%2Ftalos).
+
+
+```console
+Mode:
+ 1. boot – extract the kernel and initrd from the Talos installer and boot them directly using the kexec mechanism.
+ 2. install – prepare the environment, run the Talos installer, and then overwrite the system disk with the installed image.
+Mode [1]: 2
+Target disk [/dev/sda]:
+Talos installer image [ghcr.io/cozystack/cozystack/talos:v1.11.6]: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+Add networking configuration? [yes]:
+Interface [eth0]:
+IP address [10.0.2.15]:
+Netmask [255.255.255.0]:
+Gateway (or 'none') [10.0.2.2]:
+Configure serial console? (or 'no') [ttyS0]:
+
+Summary:
+ Image: ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+ Disk: /dev/sda
+ Extra kernel args: ip=10.0.2.15::10.0.2.2:255.255.255.0::eth0::::: console=ttyS0
+
+WARNING: ALL DATA ON /dev/sda WILL BE ERASED!
+
+Continue? [yes]:
+
+2025/08/03 00:11:03 created temporary directory /tmp/installer-3221603450
+2025/08/03 00:11:03 pulling image ghcr.io/cozystack/cozystack/talos:{{< version-pin "talos" >}}
+2025/08/03 00:11:03 extracting image layers
+2025/08/03 00:11:07 creating raw disk /tmp/installer-3221603450/image.raw (2 GiB)
+2025/08/03 00:11:07 attached /tmp/installer-3221603450/image.raw to /dev/loop0
+2025/08/03 00:11:07 starting Talos installer
+2025/08/03 00:11:07 running Talos installer {{< version-pin "talos" >}}
+2025/08/03 00:11:07 WARNING: config validation:
+2025/08/03 00:11:07 use "worker" instead of "" for machine type
+2025/08/03 00:11:07 created EFI (C12A7328-F81F-11D2-BA4B-00A0C93EC93B) size 104857600 bytes
+2025/08/03 00:11:07 created BIOS (21686148-6449-6E6F-744E-656564454649) size 1048576 bytes
+2025/08/03 00:11:07 created BOOT (0FC63DAF-8483-4772-8E79-3D69D8477DE4) size 1048576000 bytes
+2025/08/03 00:11:07 created META (0FC63DAF-8483-4772-8E79-3D69D8477DE4) size 1048576 bytes
+2025/08/03 00:11:07 formatting the partition "/dev/loop0p1" as "vfat" with label "EFI"
+2025/08/03 00:11:07 formatting the partition "/dev/loop0p2" as "zeroes" with label "BIOS"
+2025/08/03 00:11:07 formatting the partition "/dev/loop0p3" as "xfs" with label "BOOT"
+2025/08/03 00:11:07 formatting the partition "/dev/loop0p4" as "zeroes" with label "META"
+2025/08/03 00:11:07 copying from io reader to /boot/A/vmlinuz
+2025/08/03 00:11:07 copying from io reader to /boot/A/initramfs.xz
+2025/08/03 00:11:08 writing /boot/grub/grub.cfg to disk
+2025/08/03 00:11:08 executing: grub-install --boot-directory=/boot --removable --efi-directory=/boot/EFI /dev/loop0
+2025/08/03 00:11:08 installation of {{< version-pin "talos" >}} complete
+2025/08/03 00:11:08 Talos installer finished successfully
+2025/08/03 00:11:08 remounting all filesystems read-only
+2025/08/03 00:11:08 copy /tmp/installer-3221603450/image.raw → /dev/sda
+2025/08/03 00:11:19 installation image copied to /dev/sda
+2025/08/03 00:11:19 rebooting system
+```
+
+## About the Application
+
+`boot-to-talos` is open source and hosted on [github.com/cozystack/boot-to-talos](https://github.com/cozystack/boot-to-talos).
+It includes a CLI written in Go and an installer script in Bash.
+There are builds for several architectures:
+
+- `linux-amd64`
+- `linux-arm64`
+- `linux-i386`
+
+### How it Works
+
+Understanding these steps is not required to install Talos Linux.
+
+The workflow depends on the selected mode:
+
+#### Boot Mode
+
+When using the **boot** mode, `boot-to-talos` performs the following steps:
+
+1. **Unpacks Talos installer in RAM**
+ Extracts layers from the Talos‑installer container into a throw‑away `tmpfs`.
+ Note that Docker is not needed during this step.
+2. **Extracts kernel and initrd**
+ Extracts the kernel (`vmlinuz`) and initial ramdisk (`initramfs.xz`) from the Talos installer image.
+3. **Loads kernel via kexec**
+ Uses the `kexec` system call to load the Talos kernel and initrd into memory with the provided kernel command line parameters.
+4. **Reboots into Talos**
+ Executes `kexec --exec` to switch to the Talos kernel without a physical reboot. After booting, you can apply Talos configuration to complete the installation.
+
+#### Install Mode
+
+When using the **install** mode, `boot-to-talos` performs the following steps:
+
+1. **Unpacks Talos installer in RAM**
+ Extracts layers from the Talos‑installer container into a throw‑away `tmpfs`.
+ Note that Docker is not needed during this step.
+2. **Builds system image**
+ Creates a sparse `image.raw`, exposed via a loop device, and executes the Talos *installer* inside a chroot.
+ The installer then partitions, formats, and lays down GRUB and system files.
+3. **Streams to disk**
+ Copies `image.raw` to the chosen block device in chunks of 4 MiB and runs `fsync` after every write, so that data is fully committed before reboot.
+4. **Reboots**
+ Command `echo b > /proc/sysrq-trigger` performs an immediate reboot into the freshly installed Talos Linux.
+
+## Next Steps
+
+Once you have installed Talos, proceed by [installing and bootstrapping a Kubernetes cluster]({{% ref "/docs/v1.3/install/kubernetes" %}}).
diff --git a/content/en/docs/v1.3/install/talos/iso.md b/content/en/docs/v1.3/install/talos/iso.md
new file mode 100644
index 00000000..c6b8b6cd
--- /dev/null
+++ b/content/en/docs/v1.3/install/talos/iso.md
@@ -0,0 +1,31 @@
+---
+title: Install Talos Linux using ISO
+linkTitle: ISO
+description: "How to install Talos Linux using ISO"
+weight: 20
+aliases:
+ - /docs/v1.3/talos/installation/iso
+ - /docs/v1.3/talos/install/iso
+ - /docs/v1.3/operations/talos/installation/iso
+---
+
+This guide explains how to install Talos Linux on bare metal servers or virtual machines.
+Note that Cozystack provides its own Talos builds, which are tested and optimized for running a Cozystack cluster.
+
+## Installation
+
+1. Download the Talos Linux ISO for Cozystack {{< version-pin "cozystack_tag" >}} from the [releases page](https://github.com/cozystack/cozystack/releases/tag/{{< version-pin "cozystack_tag" >}}).
+
+ ```bash
+ wget https://github.com/cozystack/cozystack/releases/download/{{< version-pin "cozystack_tag" >}}/metal-amd64.iso
+ ```
+
+1. Boot your machine with ISO attached.
+
+1. Click **** and fill in your network settings:
+
+ 
+
+## Next steps
+
+Once you have installed Talos, proceed by [installing and bootstrapping a Kubernetes cluster]({{% ref "/docs/v1.3/install/kubernetes" %}}).
diff --git a/content/en/docs/v1.3/install/talos/pxe.md b/content/en/docs/v1.3/install/talos/pxe.md
new file mode 100644
index 00000000..b14189bd
--- /dev/null
+++ b/content/en/docs/v1.3/install/talos/pxe.md
@@ -0,0 +1,91 @@
+---
+title: Install Talos Linux using PXE
+linkTitle: PXE
+description: "How to install Talos Linux using temporary DHCP and PXE servers running in Docker containers."
+weight: 15
+aliases:
+ - /docs/v1.3/talos/installation/pxe
+ - /docs/v1.3/talos/install/pxe
+ - /docs/v1.3/operations/talos/installation/pxe
+---
+
+This guide explains how to install Talos Linux on bare metal servers or virtual machines
+using temporary DHCP and PXE servers running in Docker containers.
+This method requires an extra management machine, but allows for installing on multiple hosts at once.
+
+Note that Cozystack provides its own Talos builds, which are tested and optimized for running a Cozystack cluster.
+
+## Dependencies
+
+To install Talos using this method, you will need the following dependencies on the management host:
+
+- `docker`
+- `kubectl`
+
+## Infrastructure Overview
+
+
+
+## Installation
+
+Start matchbox with prebuilt Talos image for Cozystack:
+
+```bash
+sudo docker run --name=matchbox -d --net=host ghcr.io/cozystack/cozystack/matchbox:v0.30.0 \
+ -address=:8080 \
+ -log-level=debug
+```
+
+Start the DHCP server:
+
+```bash
+sudo docker run --name=dnsmasq -d --cap-add=NET_ADMIN --net=host quay.io/poseidon/dnsmasq:v0.5.0-32-g4327d60-amd64 \
+ -d -q -p0 \
+ --dhcp-range=192.168.100.3,192.168.100.199 \
+ --dhcp-option=option:router,192.168.100.1 \
+ --enable-tftp \
+ --tftp-root=/var/lib/tftpboot \
+ --dhcp-match=set:bios,option:client-arch,0 \
+ --dhcp-boot=tag:bios,undionly.kpxe \
+ --dhcp-match=set:efi32,option:client-arch,6 \
+ --dhcp-boot=tag:efi32,ipxe.efi \
+ --dhcp-match=set:efibc,option:client-arch,7 \
+ --dhcp-boot=tag:efibc,ipxe.efi \
+ --dhcp-match=set:efi64,option:client-arch,9 \
+ --dhcp-boot=tag:efi64,ipxe.efi \
+ --dhcp-userclass=set:ipxe,iPXE \
+ --dhcp-boot=tag:ipxe,http://192.168.100.254:8080/boot.ipxe \
+ --log-queries \
+ --log-dhcp
+```
+
+For an air-gapped installation, add NTP and DNS server options to the `dnsmasq` command above:
+
+```bash
+ --dhcp-option=option:ntp-server,10.100.1.1 \
+ --dhcp-option=option:dns-server,10.100.25.253,10.100.25.254 \
+```
+
+Where:
+
+- `192.168.100.3,192.168.100.199` is the range to allocate IPs from.
+- `192.168.100.1` is your gateway.
+- `192.168.100.254` is the address of your management server.
+
+Check the status of the containers:
+
+```bash
+docker ps
+```
+
+Example output:
+
+```console
+CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
+06115f09e689 quay.io/poseidon/dnsmasq:v0.5.0-32-g4327d60-amd64 "/usr/sbin/dnsmasq -…" 47 seconds ago Up 46 seconds dnsmasq
+6bf638f0808e ghcr.io/cozystack/cozystack/matchbox:v0.30.0 "/matchbox -address=…" 3 minutes ago Up 3 minutes matchbox
+```
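+
+Optionally, verify that matchbox serves the iPXE boot script referenced in the dnsmasq configuration (using the management server address from the example above):
+
+```bash
+# Should return the iPXE script used by the PXE-booted machines
+curl http://192.168.100.254:8080/boot.ipxe
+```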
+
+Start your servers.
+Now they should automatically boot from your PXE server.
+
+## Next Steps
+
+Once you have installed Talos, proceed by [installing and bootstrapping a Kubernetes cluster]({{% ref "/docs/v1.3/install/kubernetes" %}}).
diff --git a/content/en/docs/v1.3/introduction/_index.md b/content/en/docs/v1.3/introduction/_index.md
new file mode 100644
index 00000000..e665a9e7
--- /dev/null
+++ b/content/en/docs/v1.3/introduction/_index.md
@@ -0,0 +1,77 @@
+---
+title: "Introduction to Cozystack"
+linkTitle: "Introduction"
+description: "Learn what Cozystack is and what you can build with it."
+weight: 9
+---
+
+## What is Cozystack
+
+Cozystack is a Kubernetes-based framework for building a private cloud environment.
+It can be used by a single company to run its own [private cloud]({{% ref "/docs/v1.3/guides/use-cases/private-cloud" %}}) or by a service provider to offer a
+[platform-as-a-service]({{% ref "/docs/v1.3/guides/use-cases/public-cloud" %}}) to multiple customers.
+
+Cozystack covers the most critical needs of a development team:
+
+- [Kubernetes clusters]({{% ref "/docs/v1.3/cozystack-api" %}}) for running applications in development and production
+- Standard [managed applications]({{% ref "/docs/v1.3/applications" %}}): databases, queue managers, caches, and more
+- [Virtual machines]({{% ref "/docs/v1.3/virtualization" %}})
+- Reliable distributed storage
+
+The [Cozystack platform stack]({{% ref "/docs/v1.3/guides/platform-stack" %}}) includes reliable components that are typically installed
+in Kubernetes clusters separately.
+Here they're bundled and tested to work together seamlessly.
+The virtualization platform is also built-in and does not require additional hardware.
+Instead, virtual machines run directly inside Kubernetes.
+
+Another powerful feature is the [tenant system]({{% ref "/docs/v1.3/guides/concepts#tenant-system" %}}).
+It allows you to isolate individual developers, teams, or even entire companies in their own fully functional spaces—all on the same hardware.
+
+## Key features
+
+### Multi-User, Multi-Tenant, with SSO included
+
+Cozystack is designed for use by multiple teams, departments, or even companies.
+The traditional approach of assigning each team a dedicated namespace can be too limiting.
+Teams may need multiple environments with identical namespace names,
+or they may lack the root permissions required to manage their own access models.
+
+Cozystack's [tenant system]({{% ref "/docs/v1.3/guides/concepts#tenant-system" %}}) solves these issues
+by allowing users to deploy a Kubernetes-in-Kubernetes environment with a single app.
+Users of nested Kubernetes clusters have full access and control.
+The quota system ensures optimal hardware utilization while isolating resources to prevent the “noisy neighbor” problem.
+Platform users can also generate detailed usage reports for each tenant.
+
+The [single sign-on system]({{% ref "/docs/v1.3/operations/oidc" %}}) in Cozystack is powered by Keycloak.
+The Kubernetes API—both in the root tenant and in nested tenants—supports SSO out of the box.
+
+### Replicated Storage System
+
+Not all businesses can afford dedicated hardware SAN or NAS devices.
+Cozystack includes a reliable distributed storage system that enables the creation of disaster-resilient, replicated volumes.
+It's even possible to replicate volumes across multiple data centers.
+
+### Virtualization System
+
+Typically, you must choose between virtualization and containerization.
+Cozystack combines both in a single platform.
+There's no need to maintain a separate virtualization infrastructure.
+In Cozystack, [virtual machines]({{% ref "/docs/v1.3/virtualization" %}})
+run directly in Kubernetes and consume CPU, memory, GPU, and storage from the same Kubernetes resource pool.
+
+### Managed Databases Without Overhead
+
+Even though Linux-on-Linux virtualization is highly efficient, it still introduces some overhead.
+Cozystack avoids this by running [managed databases]({{% ref "/docs/v1.3/applications" %}})
+directly in containers on the host hardware.
+You can spin up multiple high-availability databases with dedicated IP addresses,
+all on limited hardware—yet each runs with direct access to CPU and storage.
+
+### Kubernetes ecosystem
+
+We’re not aware of any other Kubernetes distribution with more built-in infrastructure components.
+(Seriously—send us a link if you find one!)
+Rather than manually installing components and controllers, you simply choose
+a [Cozystack variant]({{% ref "/docs/v1.3/operations/configuration/variants" %}}) that fits your needs.
+All components are pre-configured, tested for compatibility, and updated alongside the Cozystack framework.
+
diff --git a/content/en/docs/v1.3/kubernetes/_include/_index.md b/content/en/docs/v1.3/kubernetes/_include/_index.md
new file mode 100644
index 00000000..51271444
--- /dev/null
+++ b/content/en/docs/v1.3/kubernetes/_include/_index.md
@@ -0,0 +1,9 @@
+---
+title: "Managed (Tenant) Kubernetes"
+linkTitle: "Managed Kubernetes"
+description: "Learn to deploy and use isolated managed Kubernetes clusters in Cozystack."
+weight: 40
+aliases:
+ - /docs/reference/applications/kubernetes
+ - /docs/v1.3/reference/applications/kubernetes
+---
diff --git a/content/en/docs/v1.3/kubernetes/_index.md b/content/en/docs/v1.3/kubernetes/_index.md
new file mode 100644
index 00000000..f6038b90
--- /dev/null
+++ b/content/en/docs/v1.3/kubernetes/_index.md
@@ -0,0 +1,376 @@
+---
+title: "Managed (Tenant) Kubernetes"
+linkTitle: "Managed Kubernetes"
+description: "Learn to deploy and use isolated managed Kubernetes clusters in Cozystack."
+weight: 40
+aliases:
+ - /docs/reference/applications/kubernetes
+ - /docs/v1.3/reference/applications/kubernetes
+---
+
+
+
+## Managed Kubernetes in Cozystack
+
+Whenever you want to deploy a custom containerized application in Cozystack, it's best to deploy it to a managed Kubernetes cluster.
+
+Cozystack deploys and manages Kubernetes-as-a-service as standalone applications within each tenant’s isolated environment.
+In Cozystack, such clusters are called tenant Kubernetes clusters, while the base Cozystack cluster is called the management or root cluster.
+Tenant clusters are fully separated from the management cluster and are intended for deploying tenant-specific or customer-developed applications.
+
+Within a tenant cluster, users can take advantage of LoadBalancer services and easily provision physical volumes as needed.
+The control plane runs in containers, while the worker nodes are deployed as virtual machines, all seamlessly managed by the application.
+
+The Kubernetes version in tenant clusters is independent of the version used in the management cluster.
+Users can select the latest patch versions from 1.28 to 1.33.
+
+## Why Use a Managed Kubernetes Cluster?
+
+Kubernetes has emerged as the industry standard, providing a unified and accessible API, primarily utilizing YAML for configuration.
+This means that teams can easily understand and work with Kubernetes, streamlining infrastructure management.
+
+Kubernetes leverages robust software design patterns, enabling continuous recovery in any scenario through the reconciliation method.
+Additionally, it ensures seamless scaling across a multitude of servers,
+addressing the challenges posed by complex and outdated APIs found in traditional virtualization platforms.
+This managed service eliminates the need for developing custom solutions or modifying source code, saving valuable time and effort.
+
+The Managed Kubernetes Service in Cozystack offers a streamlined solution for efficiently managing server workloads.
+
+## Starting Work
+
+Once the tenant Kubernetes cluster is ready, you can get a kubeconfig file to work with it.
+This can be done via the UI or with a `kubectl` request:
+
+- Open the Cozystack dashboard, switch to your tenant, find and open the application page. Copy one of the config files from the **Secrets** section.
+- Run the following command (using the management cluster kubeconfig):
+
+  ```bash
+  kubectl get secret -n tenant-<tenant> kubernetes-<cluster-name>-admin-kubeconfig -o go-template='{{ printf "%s\n" (index .data "admin.conf" | base64decode) }}' > admin.conf
+  ```
+
+There are several kubeconfig options available:
+
+- `admin.conf` — The standard kubeconfig for accessing your new cluster.
+ You can create additional Kubernetes users using this configuration.
+- `admin.svc` — Same token as `admin.conf`, but with the API server address set to the internal service name.
+ Use it for applications running inside the cluster that need API access.
+- `super-admin.conf` — Similar to `admin.conf`, but with extended administrative permissions.
+ Intended for troubleshooting and cluster maintenance tasks.
+- `super-admin.svc` — Same as `super-admin.conf`, but pointing to the internal API server address.
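+
+For example, you can verify access to the tenant cluster with the downloaded kubeconfig:
+
+```bash
+kubectl --kubeconfig=admin.conf get nodes
+```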
+
+## Implementation Details
+
+A tenant Kubernetes cluster in Cozystack is essentially Kubernetes-in-Kubernetes.
+Deploying it involves the following components:
+
+- **Kamaji Control Plane**: [Kamaji](https://kamaji.clastix.io/) is an open-source project that facilitates the deployment
+ of Kubernetes control planes as pods within a root cluster.
+ Each control plane pod includes essential components like `kube-apiserver`, `controller-manager`, and `scheduler`,
+ allowing for efficient multi-tenancy and resource utilization.
+
+- **Etcd Cluster**: A dedicated etcd cluster is deployed using Ænix's [etcd-operator](https://github.com/aenix-io/etcd-operator).
+ It provides reliable and scalable key-value storage for the Kubernetes control plane.
+
+- **Worker Nodes**: Virtual Machines are provisioned to serve as worker nodes using KubeVirt.
+ These nodes are configured to join the tenant Kubernetes cluster, enabling the deployment and management of workloads.
+
+- **Cluster API**: Cozystack uses the [Kubernetes Cluster API](https://cluster-api.sigs.k8s.io/) to provision the components of a cluster.
+
+This architecture ensures isolated, scalable, and efficient tenant Kubernetes environments.
+
+See the reference for components utilized in this service:
+
+- [Kamaji Control Plane](https://kamaji.clastix.io)
+- [Kamaji — Cluster API](https://kamaji.clastix.io/cluster-api/)
+- [github.com/clastix/kamaji](https://github.com/clastix/kamaji)
+- [KubeVirt](https://kubevirt.io/)
+- [github.com/kubevirt/kubevirt](https://github.com/kubevirt/kubevirt)
+- [github.com/aenix-io/etcd-operator](https://github.com/aenix-io/etcd-operator)
+- [Kubernetes Cluster API](https://cluster-api.sigs.k8s.io/)
+- [github.com/kubernetes-sigs/cluster-api-provider-kubevirt](https://github.com/kubernetes-sigs/cluster-api-provider-kubevirt)
+- [github.com/kubevirt/csi-driver](https://github.com/kubevirt/csi-driver)
+
+## Parameters
+
+### Common Parameters
+
+| Name | Description | Type | Value |
+| -------------- | ------------------------------------ | -------- | ------------ |
+| `storageClass` | StorageClass used to store the data. | `string` | `replicated` |
+
+
+### Application-specific Parameters
+
+| Name | Description | Type | Value |
+| ----------------------------------- | ---------------------------------------------------------------------------------------------- | ------------------- | ----------- |
+| `nodeGroups` | Worker nodes configuration map. | `map[string]object` | `{...}` |
+| `nodeGroups[name].minReplicas` | Minimum number of replicas. | `int` | `0` |
+| `nodeGroups[name].maxReplicas` | Maximum number of replicas. | `int` | `10` |
+| `nodeGroups[name].instanceType` | Virtual machine instance type. | `string` | `u1.medium` |
+| `nodeGroups[name].ephemeralStorage` | Ephemeral storage size. | `quantity` | `20Gi` |
+| `nodeGroups[name].roles` | List of node roles. | `[]string` | `[]` |
+| `nodeGroups[name].resources` | CPU and memory resources for each worker node. | `object` | `{}` |
+| `nodeGroups[name].resources.cpu` | CPU available. | `quantity` | `""` |
+| `nodeGroups[name].resources.memory` | Memory (RAM) available. | `quantity` | `""` |
+| `nodeGroups[name].gpus` | List of GPUs to attach (NVIDIA driver requires at least 4 GiB RAM). | `[]object` | `[]` |
+| `nodeGroups[name].gpus[i].name` | Name of GPU, such as "nvidia.com/AD102GL_L40S". | `string` | `""` |
+| `version` | Kubernetes major.minor version to deploy | `string` | `v1.35` |
+| `host` | External hostname for Kubernetes cluster. Defaults to `.` if empty. | `string` | `""` |
+
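+Below is a minimal sketch of a worker node group configuration using the parameters above.
+The group name `md0` and the values are illustrative, not defaults:
+
+```yaml
+nodeGroups:
+  md0:
+    minReplicas: 2
+    maxReplicas: 5
+    instanceType: u1.large
+    ephemeralStorage: 30Gi
+    roles: []
+```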
+
+### Cluster Addons
+
+| Name | Description | Type | Value |
+| --------------------------------------------- | --------------------------------------------------------------------------- | ---------- | --------- |
+| `addons` | Cluster addons configuration. | `object` | `{}` |
+| `addons.certManager` | Cert-manager addon. | `object` | `{}` |
+| `addons.certManager.enabled` | Enable cert-manager. | `bool` | `false` |
+| `addons.certManager.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+| `addons.cilium` | Cilium CNI plugin. | `object` | `{}` |
+| `addons.cilium.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+| `addons.gatewayAPI` | Gateway API addon. | `object` | `{}` |
+| `addons.gatewayAPI.enabled` | Enable Gateway API. | `bool` | `false` |
+| `addons.ingressNginx` | Ingress-NGINX controller. | `object` | `{}` |
+| `addons.ingressNginx.enabled` | Enable the controller (requires nodes labeled `ingress-nginx`). | `bool` | `false` |
+| `addons.ingressNginx.exposeMethod` | Method to expose the controller. Allowed values: `Proxied`, `LoadBalancer`. | `string` | `Proxied` |
+| `addons.ingressNginx.hosts` | Domains routed to this tenant cluster when `exposeMethod` is `Proxied`. | `[]string` | `[]` |
+| `addons.ingressNginx.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+| `addons.gpuOperator` | NVIDIA GPU Operator. | `object` | `{}` |
+| `addons.gpuOperator.enabled` | Enable GPU Operator. | `bool` | `false` |
+| `addons.gpuOperator.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+| `addons.fluxcd` | FluxCD GitOps operator. | `object` | `{}` |
+| `addons.fluxcd.enabled` | Enable FluxCD. | `bool` | `false` |
+| `addons.fluxcd.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+| `addons.monitoringAgents` | Monitoring agents. | `object` | `{}` |
+| `addons.monitoringAgents.enabled` | Enable monitoring agents. | `bool` | `false` |
+| `addons.monitoringAgents.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+| `addons.verticalPodAutoscaler` | Vertical Pod Autoscaler. | `object` | `{}` |
+| `addons.verticalPodAutoscaler.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+| `addons.velero` | Velero backup/restore addon. | `object` | `{}` |
+| `addons.velero.enabled` | Enable Velero. | `bool` | `false` |
+| `addons.velero.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+| `addons.coredns` | CoreDNS addon. | `object` | `{}` |
+| `addons.coredns.valuesOverride` | Custom Helm values overrides. | `object` | `{}` |
+
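+As an illustration, the following values enable a few of the addons described above
+(the selection is arbitrary, not a recommended default):
+
+```yaml
+addons:
+  certManager:
+    enabled: true
+  ingressNginx:
+    enabled: true
+    exposeMethod: LoadBalancer
+  monitoringAgents:
+    enabled: true
+```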
+
+### Kubernetes Control Plane Configuration
+
+| Name | Description | Type | Value |
+| --------------------------------------------------- | ------------------------------------------------ | ---------- | ------- |
+| `controlPlane` | Kubernetes control-plane configuration. | `object` | `{}` |
+| `controlPlane.replicas` | Number of control-plane replicas. | `int` | `2` |
+| `controlPlane.apiServer` | API Server configuration. | `object` | `{}` |
+| `controlPlane.apiServer.resources` | CPU and memory resources for API Server. | `object` | `{}` |
+| `controlPlane.apiServer.resources.cpu` | CPU available. | `quantity` | `""` |
+| `controlPlane.apiServer.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
+| `controlPlane.apiServer.resourcesPreset` | Preset if `resources` omitted. | `string` | `large` |
+| `controlPlane.controllerManager` | Controller Manager configuration. | `object` | `{}` |
+| `controlPlane.controllerManager.resources` | CPU and memory resources for Controller Manager. | `object` | `{}` |
+| `controlPlane.controllerManager.resources.cpu` | CPU available. | `quantity` | `""` |
+| `controlPlane.controllerManager.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
+| `controlPlane.controllerManager.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
+| `controlPlane.scheduler` | Scheduler configuration. | `object` | `{}` |
+| `controlPlane.scheduler.resources` | CPU and memory resources for Scheduler. | `object` | `{}` |
+| `controlPlane.scheduler.resources.cpu` | CPU available. | `quantity` | `""` |
+| `controlPlane.scheduler.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
+| `controlPlane.scheduler.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
+| `controlPlane.konnectivity` | Konnectivity configuration. | `object` | `{}` |
+| `controlPlane.konnectivity.server` | Konnectivity Server configuration. | `object` | `{}` |
+| `controlPlane.konnectivity.server.resources` | CPU and memory resources for Konnectivity. | `object` | `{}` |
+| `controlPlane.konnectivity.server.resources.cpu` | CPU available. | `quantity` | `""` |
+| `controlPlane.konnectivity.server.resources.memory` | Memory (RAM) available. | `quantity` | `""` |
+| `controlPlane.konnectivity.server.resourcesPreset` | Preset if `resources` omitted. | `string` | `micro` |
+
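+For example, the control plane can be sized with explicit resources or presets (the values below are illustrative):
+
+```yaml
+controlPlane:
+  replicas: 3
+  apiServer:
+    resources:
+      cpu: 2000m
+      memory: 4Gi
+  scheduler:
+    resourcesPreset: small
+```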
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+ cpu: 4000m
+ memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
+
+### instanceType Resources
+
+The following instanceType resources are provided by Cozystack:
+
+| Name | vCPUs | Memory |
+|---------------|-------|--------|
+| `cx1.2xlarge` | 8 | 16Gi |
+| `cx1.4xlarge` | 16 | 32Gi |
+| `cx1.8xlarge` | 32 | 64Gi |
+| `cx1.large` | 2 | 4Gi |
+| `cx1.medium` | 1 | 2Gi |
+| `cx1.xlarge` | 4 | 8Gi |
+| `gn1.2xlarge` | 8 | 32Gi |
+| `gn1.4xlarge` | 16 | 64Gi |
+| `gn1.8xlarge` | 32 | 128Gi |
+| `gn1.xlarge` | 4 | 16Gi |
+| `m1.2xlarge` | 8 | 64Gi |
+| `m1.4xlarge` | 16 | 128Gi |
+| `m1.8xlarge` | 32 | 256Gi |
+| `m1.large` | 2 | 16Gi |
+| `m1.xlarge` | 4 | 32Gi |
+| `n1.2xlarge` | 16 | 32Gi |
+| `n1.4xlarge` | 32 | 64Gi |
+| `n1.8xlarge` | 64 | 128Gi |
+| `n1.large` | 4 | 8Gi |
+| `n1.medium` | 4 | 4Gi |
+| `n1.xlarge` | 8 | 16Gi |
+| `o1.2xlarge` | 8 | 32Gi |
+| `o1.4xlarge` | 16 | 64Gi |
+| `o1.8xlarge` | 32 | 128Gi |
+| `o1.large` | 2 | 8Gi |
+| `o1.medium` | 1 | 4Gi |
+| `o1.micro` | 1 | 1Gi |
+| `o1.nano` | 1 | 512Mi |
+| `o1.small` | 1 | 2Gi |
+| `o1.xlarge` | 4 | 16Gi |
+| `rt1.2xlarge` | 8 | 32Gi |
+| `rt1.4xlarge` | 16 | 64Gi |
+| `rt1.8xlarge` | 32 | 128Gi |
+| `rt1.large` | 2 | 8Gi |
+| `rt1.medium` | 1 | 4Gi |
+| `rt1.micro` | 1 | 1Gi |
+| `rt1.small` | 1 | 2Gi |
+| `rt1.xlarge` | 4 | 16Gi |
+| `u1.2xlarge` | 8 | 32Gi |
+| `u1.2xmedium` | 2 | 4Gi |
+| `u1.4xlarge` | 16 | 64Gi |
+| `u1.8xlarge` | 32 | 128Gi |
+| `u1.large` | 2 | 8Gi |
+| `u1.medium` | 1 | 4Gi |
+| `u1.micro` | 1 | 1Gi |
+| `u1.nano` | 1 | 512Mi |
+| `u1.small` | 1 | 2Gi |
+| `u1.xlarge` | 4 | 16Gi |
+
+### U Series: Universal
+
+The U Series is quite neutral and provides resources for
+general purpose applications.
+
+*U* is the abbreviation for "Universal", hinting at the universal
+attitude towards workloads.
+
+VMs of these instance types share physical CPU cores with other VMs
+on a time-slice basis.
+
+#### U Series Characteristics
+
+Specific characteristics of this series are:
+- *Burstable CPU performance* - The workload has a baseline compute
+ performance but is permitted to burst beyond this baseline, if
+ excess compute resources are available.
+- *vCPU-To-Memory Ratio (1:4)* - A vCPU-to-Memory ratio of 1:4, for less
+ noise per node.
+
+### O Series: Overcommitted
+
+The O Series is based on the U Series, with the only difference
+being that memory is overcommitted.
+
+*O* is the abbreviation for "Overcommitted".
+
+#### O Series Characteristics
+
+Specific characteristics of this series are:
+- *Burstable CPU performance* - The workload has a baseline compute
+ performance but is permitted to burst beyond this baseline, if
+ excess compute resources are available.
+- *Overcommitted Memory* - Memory is over-committed in order to achieve
+ a higher workload density.
+- *vCPU-To-Memory Ratio (1:4)* - A vCPU-to-Memory ratio of 1:4, for less
+ noise per node.
+
+### CX Series: Compute Exclusive
+
+The CX Series provides exclusive compute resources for compute
+intensive applications.
+
+*CX* is the abbreviation of "Compute Exclusive".
+
+The exclusive resources are given to the compute threads of the
+VM. In order to ensure this, some additional cores (depending
+on the number of disks and NICs) will be requested to offload
+the IO threading from cores dedicated to the workload.
+In addition, in this series, the NUMA topology of the used
+cores is provided to the VM.
+
+#### CX Series Characteristics
+
+Specific characteristics of this series are:
+- *Hugepages* - Hugepages are used in order to improve memory
+ performance.
+- *Dedicated CPU* - Physical cores are exclusively assigned to every
+ vCPU in order to provide fixed and high compute guarantees to the
+ workload.
+- *Isolated emulator threads* - Hypervisor emulator threads are isolated
+  from the vCPUs in order to reduce emulation-related impact on the
+ workload.
+- *vNUMA* - Physical NUMA topology is reflected in the guest in order to
+ optimize guest sided cache utilization.
+- *vCPU-To-Memory Ratio (1:2)* - A vCPU-to-Memory ratio of 1:2.
+
+### M Series: Memory
+
+The M Series provides resources for memory intensive
+applications.
+
+*M* is the abbreviation of "Memory".
+
+#### M Series Characteristics
+
+Specific characteristics of this series are:
+- *Hugepages* - Hugepages are used in order to improve memory
+ performance.
+- *Burstable CPU performance* - The workload has a baseline compute
+ performance but is permitted to burst beyond this baseline, if
+ excess compute resources are available.
+- *vCPU-To-Memory Ratio (1:8)* - A vCPU-to-Memory ratio of 1:8, for much
+ less noise per node.
+
+### RT Series: RealTime
+
+The RT Series provides resources for realtime applications, like Oslat.
+
+*RT* is the abbreviation for "realtime".
+
+This series of instance types requires nodes capable of running
+realtime applications.
+
+#### RT Series Characteristics
+
+Specific characteristics of this series are:
+- *Hugepages* - Hugepages are used in order to improve memory
+ performance.
+- *Dedicated CPU* - Physical cores are exclusively assigned to every
+ vCPU in order to provide fixed and high compute guarantees to the
+ workload.
+- *Isolated emulator threads* - Hypervisor emulator threads are isolated
+  from the vCPUs in order to reduce emulation-related impact on the
+ workload.
+- *vCPU-To-Memory Ratio (1:4)* - A vCPU-to-Memory ratio of 1:4 starting from
+ the medium size.
diff --git a/content/en/docs/v1.3/kubernetes/relocate-etcd.md b/content/en/docs/v1.3/kubernetes/relocate-etcd.md
new file mode 100644
index 00000000..33ad98ff
--- /dev/null
+++ b/content/en/docs/v1.3/kubernetes/relocate-etcd.md
@@ -0,0 +1,46 @@
+---
+title: How to relocate etcd replicas in tenant clusters
+linkTitle: How to relocate etcd replicas
+description: "Learn how to relocate replicas of tenant etcd clusters, which are used by tenant Kubernetes clusters."
+weight: 100
+---
+
+Tenant Kubernetes clusters use their own etcd clusters, rather than the one used by the management cluster.
+Such etcd clusters are deployed in tenants and are available to managed Kubernetes clusters deployed in the tenant and its sub-tenants.
+
+Replicas of a tenant etcd cluster can be relocated between nodes for maintenance reasons.
+Currently, management operations for tenant etcd clusters are not automated,
+but such a task can be done manually.
+
+First, you need to install the `kubectl-etcd` plugin for `kubectl`:
+
+```bash
+go install github.com/aenix-io/etcd-operator/cmd/kubectl-etcd@latest
+```
+
+Now you can manage etcd replicas.
+The example script shown below removes the `etcd-2` replica from the etcd cluster and then adds it back.
+
+```bash
+# tenant which the etcd cluster belongs to
+NAMESPACE=tenant-demo
+# etcd replica
+RM=etcd-2
+POD=$(kubectl get pod -n "$NAMESPACE" -l app.kubernetes.io/name=etcd --no-headers | awk '$2 == "1/1" && $1 != "'$RM'" {print $1; exit;}')
+RMID=$(kubectl etcd -n $NAMESPACE -p $POD members | awk '$2 == "'$RM'" {print $1}')
+
+# delete the replica
+kubectl delete -n $NAMESPACE pvc/data-$RM pod/$RM
+if [ -n "$RMID" ]; then
+ kubectl etcd -n $NAMESPACE -p $POD remove-member "$RMID"
+fi
+
+# add the replica back
+kubectl etcd -n $NAMESPACE -p $POD add-member "https://$RM.etcd-headless.$NAMESPACE.svc:2380"
+
+kubectl wait --for=condition=ready pod -n $NAMESPACE $RM --timeout=2m
+
+kubectl etcd -n $NAMESPACE -p $RM members
+```
+
+To learn more about tenant nesting and shared services, read the [Tenants guide]({{% ref "/docs/v1.3/guides/tenants" %}}).
diff --git a/content/en/docs/v1.3/networking/_include/http-cache.md b/content/en/docs/v1.3/networking/_include/http-cache.md
new file mode 100644
index 00000000..a68b0fe0
--- /dev/null
+++ b/content/en/docs/v1.3/networking/_include/http-cache.md
@@ -0,0 +1,10 @@
+---
+title: "Managed Nginx-based HTTP Cache Service"
+linkTitle: "HTTP Cache"
+description: "The Nginx-based HTTP caching service is designed to optimize web traffic and enhance web application performance."
+weight: 20
+aliases:
+ - /docs/reference/applications/http-cache
+ - /docs/v1.3/reference/applications/http-cache
+---
+
diff --git a/content/en/docs/v1.3/networking/_include/tcp-balancer.md b/content/en/docs/v1.3/networking/_include/tcp-balancer.md
new file mode 100644
index 00000000..275c6cb6
--- /dev/null
+++ b/content/en/docs/v1.3/networking/_include/tcp-balancer.md
@@ -0,0 +1,10 @@
+---
+title: "Managed TCP Load Balancer Service"
+linkTitle: "TCP Load Balancer"
+description: "The Managed TCP Load Balancer Service simplifies the deployment and management of load balancers."
+weight: 30
+aliases:
+ - /docs/reference/applications/tcp-balancer
+ - /docs/v1.3/reference/applications/tcp-balancer
+---
+
diff --git a/content/en/docs/v1.3/networking/_include/vpc.md b/content/en/docs/v1.3/networking/_include/vpc.md
new file mode 100644
index 00000000..f929cfa5
--- /dev/null
+++ b/content/en/docs/v1.3/networking/_include/vpc.md
@@ -0,0 +1,10 @@
+---
+title: "VPC"
+linkTitle: "VPC"
+description: "Dedicated subnets"
+weight: 10
+aliases:
+ - /docs/reference/applications/vpc
+ - /docs/v1.3/reference/applications/vpc
+---
+
diff --git a/content/en/docs/v1.3/networking/_include/vpn.md b/content/en/docs/v1.3/networking/_include/vpn.md
new file mode 100644
index 00000000..1ffa00b1
--- /dev/null
+++ b/content/en/docs/v1.3/networking/_include/vpn.md
@@ -0,0 +1,10 @@
+---
+title: "Managed VPN Service"
+linkTitle: "VPN"
+description: "Managed VPN Service simplifies the deployment and management of a VPN server, enabling you to establish secure connections with ease."
+weight: 10
+aliases:
+ - /docs/reference/applications/vpn
+ - /docs/v1.3/reference/applications/vpn
+---
+
diff --git a/content/en/docs/v1.3/networking/_index.md b/content/en/docs/v1.3/networking/_index.md
new file mode 100644
index 00000000..3e15e8fb
--- /dev/null
+++ b/content/en/docs/v1.3/networking/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Networking Capabilities"
+linkTitle: "Networking"
+description: "Network configuration, virtual routers, load balancers, and other networking capabilities in Cozystack."
+weight: 60
+---
+
+This documentation section explains network configuration, virtual routers, load balancers, and other networking capabilities in Cozystack.
diff --git a/content/en/docs/v1.3/networking/architecture.md b/content/en/docs/v1.3/networking/architecture.md
new file mode 100644
index 00000000..658005e3
--- /dev/null
+++ b/content/en/docs/v1.3/networking/architecture.md
@@ -0,0 +1,445 @@
+---
+title: "Network Architecture"
+linkTitle: "Architecture"
+description: "Overview of Cozystack cluster network architecture: MetalLB load balancing, Cilium eBPF networking, and tenant isolation with Kube-OVN."
+weight: 5
+aliases:
+ - /docs/v1.3/reference/applications/architecture
+ - /docs/reference/applications/architecture
+---
+
+## Overview
+
+Cozystack uses a multi-layered networking stack designed for bare-metal Kubernetes clusters. The architecture combines several components, each responsible for a specific layer of the network:
+
+| Layer | Component | Purpose |
+| --- | --- | --- |
+| External load balancing | MetalLB | Publishing services to external networks |
+| Service load balancing | Cilium eBPF | kube-proxy replacement, in-kernel DNAT |
+| Network policies | Cilium eBPF | Tenant isolation and security enforcement |
+| Pod networking (CNI) | Kube-OVN | Centralized IPAM, overlay networking |
+| VM IP passthrough | [cozy-proxy](https://github.com/cozystack/cozy-proxy/) | Passing through external IPs into virtual machines |
+| VM secondary interfaces | [Multus CNI](https://github.com/k8snetworkplumbingwg/multus-cni) | Attaching secondary L2 interfaces to virtual machines |
+| Observability | Hubble (optional) | Network traffic visibility (disabled by default) |
+
+```mermaid
+flowchart TD
+ EXT["External Clients"]
+ RTR["Upstream Router / Gateway"]
+ MLB["MetalLB (L2 ARP / BGP)"]
+ CIL["Cilium eBPF (Service Load Balancing + Network Policies)"]
+ OVN["Kube-OVN (Pod Networking + IPAM)"]
+ PODS["Pods"]
+
+ EXT --> RTR
+ RTR --> MLB
+ MLB --> CIL
+ CIL --> OVN
+ OVN --> PODS
+```
+
+## Cluster Network Configuration
+
+| Parameter | Default Value |
+| --- | --- |
+| Pod CIDR | 10.244.0.0/16 |
+| Service CIDR | 10.96.0.0/16 |
+| Join CIDR | 100.64.0.0/16 |
+| Cluster domain | cozy.local |
+| Overlay type | GENEVE |
+| CNI | Kube-OVN |
+| kube-proxy replacement | Cilium eBPF |
+
+### Networking Stack Variants
+
+Cozystack supports several networking stack variants to accommodate different
+cluster types. The variant is selected via `bundles.system.variant` in the
+platform configuration.
+
+| Variant | Components | Target Platform |
+| --- | --- | --- |
+| `kubeovn-cilium` | Kube-OVN + Cilium (default) | Talos Linux |
+| `kubeovn-cilium-generic` | Kube-OVN + Cilium | kubeadm, k3s, RKE2 |
+| `cilium` | Cilium only | Talos Linux |
+| `cilium-generic` | Cilium only | kubeadm, k3s, RKE2 |
+| `cilium-kilo` | Cilium + Kilo | Talos Linux |
+| `noop` | None (bring your own CNI) | Any |
+
+In Kube-OVN variants, Cilium operates as a chained CNI (`generic-veth` mode):
+Kube-OVN handles pod networking and IPAM, while Cilium provides service load
+balancing, network policy enforcement, and optional observability via Hubble.
+
+In Cilium-only variants, Cilium serves as both the CNI and the service load
+balancer.
+
+{{% alert color="info" %}}
+The rest of this document describes the default `kubeovn-cilium` variant.
+{{% /alert %}}
+
+### Pod CIDR Allocation (Kube-OVN)
+
+Kube-OVN uses a **shared Pod CIDR** model:
+
+- All pods draw from a single shared IP pool (10.244.0.0/16)
+- IP addresses are allocated centrally through Kube-OVN's IPAM
+- There is no per-node CIDR splitting (unlike Calico or Flannel)
+- Because IPs are not tied to node-specific CIDR blocks, pods can be rescheduled to different nodes while retaining their addresses
+- Inter-node pod communication uses GENEVE tunnels (Join CIDR: 100.64.0.0/16)
+
+## External Traffic Ingress with MetalLB
+
+MetalLB is a load balancer implementation for bare-metal Kubernetes clusters. It assigns external IP addresses to Services of type `LoadBalancer`, allowing external traffic to reach the cluster.
+
+```mermaid
+flowchart TD
+ CLIENT["External Client"]
+ RTR["Upstream Router"]
+
+ subgraph CLUSTER["Kubernetes Cluster"]
+ S1["Node 1 MetalLB Speaker"]
+ S2["Node 2 MetalLB Speaker"]
+ S3["Node 3 MetalLB Speaker"]
+ CIL["Cilium (eBPF) Service Load Balancing DNAT to Pod IP"]
+ POD["Target Pod (Pod CIDR)"]
+ end
+
+ CLIENT -->|"Traffic to external IP (e.g. 10.x.x.20)"| RTR
+ RTR -->|"L2 (ARP) or BGP"| S1
+ RTR -->|"L2 (ARP) or BGP"| S2
+ RTR -->|"L2 (ARP) or BGP"| S3
+ S1 --> CIL
+ S2 --> CIL
+ S3 --> CIL
+ CIL --> POD
+```
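+
+For reference, a workload only needs a standard Kubernetes Service of type `LoadBalancer` to receive an
+external IP from MetalLB; the manifest below is a generic example, not a Cozystack-specific resource:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: example-app
+spec:
+  type: LoadBalancer
+  selector:
+    app: example-app
+  ports:
+    - port: 80
+      targetPort: 8080
+```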
+
+### Layer 2 Mode (ARP)
+
+In L2 mode, MetalLB responds to ARP requests for the Service's external IP. A single node becomes the "leader" for that IP and receives all traffic.
+
+How it works:
+
+1. A MetalLB speaker on one node claims the external IP
+2. The speaker responds to ARP requests: "IP X is at MAC aa:bb:cc:dd:ee:ff"
+3. All traffic for that IP goes to the leader node
+4. Cilium on the node performs DNAT to the actual pod
+
+```mermaid
+sequenceDiagram
+ participant C as Client
+ participant L as Node (MetalLB Leader)
+ participant CIL as Cilium (eBPF)
+ participant P as Pod
+
+ C->>L: ARP: Who has 10.x.x.20?
+ L-->>C: ARP Reply: 10.x.x.20 is at aa:bb:cc:dd:ee:ff
+ C->>L: Send traffic to 10.x.x.20
+ L->>CIL: Packet enters kernel
+ CIL->>P: DNAT → Pod 10.244.x.x:8080
+```
+
+{{% alert color="info" %}}
+In L2 mode, only one node handles traffic for a given Service IP. Failover occurs if the leader node goes down, but there is no true load balancing across nodes for a single Service.
+{{% /alert %}}
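+
+For illustration, L2 mode is typically configured with MetalLB's `IPAddressPool` and `L2Advertisement`
+resources; the names and address range below are placeholders, and in Cozystack these objects may be
+managed through the platform configuration rather than created by hand:
+
+```yaml
+apiVersion: metallb.io/v1beta1
+kind: IPAddressPool
+metadata:
+  name: external-pool
+  namespace: metallb-system
+spec:
+  addresses:
+    - 192.168.100.200-192.168.100.250
+---
+apiVersion: metallb.io/v1beta1
+kind: L2Advertisement
+metadata:
+  name: external-l2
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+    - external-pool
+```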
+
+### BGP Mode
+
+In BGP mode, MetalLB establishes BGP sessions with upstream routers and announces /32 routes for Service IPs. This enables true ECMP load balancing across nodes.
+
+How it works:
+
+1. MetalLB speakers establish BGP sessions with the upstream router(s)
+2. Each speaker announces the Service IP as a /32 route
+3. The router has multiple next-hops for the same prefix
+4. ECMP distributes traffic across nodes
+5. Cilium on the receiving node performs DNAT to the actual pod
+
+```mermaid
+sequenceDiagram
+ participant S1 as Node 1 (Speaker)
+ participant S2 as Node 2 (Speaker)
+ participant S3 as Node 3 (Speaker)
+ participant R as Upstream Router
+ participant CIL as Cilium (eBPF)
+ participant P as Pod
+
+ S1->>R: BGP UPDATE: 10.x.x.20/32 via Node 1
+ S2->>R: BGP UPDATE: 10.x.x.20/32 via Node 2
+ S3->>R: BGP UPDATE: 10.x.x.20/32 via Node 3
+ Note over R: ECMP: 3 next-hops for 10.x.x.20/32
+ R->>S1: Traffic (1/3)
+ R->>S2: Traffic (1/3)
+ R->>S3: Traffic (1/3)
+ S1->>CIL: Packet enters kernel
+ CIL->>P: DNAT → Pod
+```
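+
+A minimal sketch of BGP mode configuration, using MetalLB's `BGPPeer` and `BGPAdvertisement` resources;
+the ASNs, peer address, and names are placeholders for your own environment:
+
+```yaml
+apiVersion: metallb.io/v1beta2
+kind: BGPPeer
+metadata:
+  name: upstream-router
+  namespace: metallb-system
+spec:
+  myASN: 64512
+  peerASN: 64513
+  peerAddress: 192.168.100.1
+---
+apiVersion: metallb.io/v1beta1
+kind: BGPAdvertisement
+metadata:
+  name: external-bgp
+  namespace: metallb-system
+spec:
+  ipAddressPools:
+    - external-pool
+```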
+
+### VLAN Integration for External Traffic
+
+External traffic can be delivered to the cluster through additional VLANs (client VLANs, DMZ, public networks, etc.) which are then routed to services via MetalLB and Cilium.
+
+```mermaid
+flowchart TD
+ EXT["External Traffic"]
+
+ subgraph VLANs["Additional VLANs (Client, DMZ, Public, etc.)"]
+ V1["VLAN A"]
+ V2["VLAN B"]
+ end
+
+ subgraph LB["MetalLB"]
+ L2["L2 Mode → Service → Pod"]
+ BGP["BGP Mode → Service → Pod"]
+ end
+
+ EXT --> VLANs
+ V1 --> L2
+ V2 --> BGP
+```
+
+## Cilium as kube-proxy Replacement
+
+Cilium replaces kube-proxy by attaching eBPF programs directly in the Linux kernel. This provides more efficient packet processing and advanced capabilities.
+
+### Traditional kube-proxy (iptables) vs Cilium eBPF
+
+```mermaid
+flowchart LR
+ subgraph IPTABLES["kube-proxy (iptables)"]
+ direction LR
+ P1["Packet"] --> IPT["iptables PREROUTING"]
+ IPT --> NAT["NAT chains O(n) rule traversal"]
+ NAT --> DNAT1["DNAT to Pod"]
+ DNAT1 --> POD1["Pod"]
+ end
+
+ subgraph EBPF["Cilium (eBPF)"]
+ direction LR
+ P2["Packet"] --> BPF["eBPF program (TC/XDP)"]
+ BPF --> MAP["eBPF map lookup O(1) hash"]
+ MAP --> DNAT2["DNAT"]
+ DNAT2 --> POD2["Pod"]
+ end
+```
+
+Key differences:
+
+| Aspect | kube-proxy (iptables) | Cilium (eBPF) |
+| --- | --- | --- |
+| Lookup complexity | O(n) rule traversal | O(1) hash-based lookup |
+| Execution context | Userspace overhead | Native in-kernel |
+| Context switches | Required | None |
+| Scalability | Degrades with service count | Constant performance |
+
+### eBPF Architecture
+
+```mermaid
+flowchart TD
+ subgraph KERNEL["Kernel Space"]
+ subgraph BPF["eBPF Programs"]
+ TC["TC (ingress/egress)"]
+ XDP["XDP (fastest path)"]
+ SOCK["Socket-level (connect, sendmsg)"]
+ end
+
+ subgraph MAPS["eBPF Maps"]
+ SVC["Service Tables"]
+ EP["Endpoint Maps"]
+ POL["Policy Maps"]
+ end
+
+ TC --> MAPS
+ XDP --> MAPS
+ SOCK --> MAPS
+ end
+```
+
+## Tenant Isolation with Kube-OVN and Cilium
+
+In a multi-tenant Cozystack cluster, all tenants share the same Pod CIDR. This is secure because isolation is enforced by Cilium eBPF policies at the kernel level, not by network segmentation. Tenants cannot communicate with each other even though they share the same IP pool. Kube-OVN allocates IPs from this shared pool centrally, without per-node CIDR splitting.
+
+### CNI Architecture
+
+```mermaid
+flowchart TD
+ subgraph KO["Kube-OVN"]
+ IPAM["Centralized IPAM — Shared pool 10.244.0.0/16"]
+ OVN["OVN/OVS Overlay Network (GENEVE)"]
+ SUBNET["Subnet management per namespace/tenant"]
+ end
+
+ subgraph CIL["Cilium"]
+ POLICY["eBPF Network Policies"]
+ SVCBAL["Service Load Balancing (kube-proxy replacement)"]
+ IDENT["Identity-based Security"]
+ HUB["Observability via Hubble"]
+ end
+
+ KO --> CIL
+```
+
+Kube-OVN provides the primary CNI plugin for pod networking and IPAM. Kube-OVN's
+own network policy engine is disabled (`ENABLE_NP: false`), and all policy
+enforcement is delegated to Cilium. Cilium operates as a chained CNI component
+(`generic-veth` mode) that enforces network policies via eBPF and replaces
+kube-proxy for service load balancing.
+
+### Tenant Isolation Model
+
+```mermaid
+flowchart TD
+ TA["Tenant A — Namespace app-a Pods: 10.244.0.10, 10.244.0.11"]
+ TB["Tenant B — Namespace app-b Pods: 10.244.1.20, 10.244.1.21"]
+ TC["Tenant C — Namespace app-c Pods: 10.244.2.30, 10.244.2.31"]
+
+ ENGINE{"Cilium eBPF Policy Engine"}
+
+ TA --> ENGINE
+ TB --> ENGINE
+ TC --> ENGINE
+
+ ENGINE -->|"A ↔ A — ALLOWED"| ALLOW["Same-tenant traffic passes"]
+ ENGINE -->|"A ↔ B — DENIED"| DENY["Cross-tenant traffic dropped"]
+```
+
+### Identity-based Security
+
+Cilium assigns each endpoint (pod) a **security identity** based on its labels. Policies are enforced using these identities rather than IP addresses.
+
+```mermaid
+flowchart LR
+ POD["Pod: frontend-abc123 Labels: app=frontend, tenant=acme, env=prod"]
+ AGENT["Cilium Agent Hash(labels) → Identity: 12345"]
+ BPFMAP["eBPF Map 10.244.0.10 → Identity 12345"]
+
+ POD --> AGENT
+ AGENT --> BPFMAP
+```
+
+### Policy Enforcement in Kernel
+
+When a packet is sent between pods, Cilium enforces policies entirely within kernel space:
+
+```mermaid
+flowchart TD
+ PKT["Packet: 10.244.0.10 → 10.244.1.20"]
+ STEP1["1. Lookup source identity: 10.244.0.10 → ID 12345 (tenant-a)"]
+ STEP2["2. Lookup destination identity: 10.244.1.20 → ID 67890 (tenant-b)"]
+ STEP3["3. Check policy map: (12345, 67890, TCP, 80) → DENY"]
+ DROP["4. DROP packet"]
+
+ PKT --> STEP1 --> STEP2 --> STEP3 --> DROP
+```
+
+All of this happens in kernel space in approximately 100 nanoseconds.
+
+### Why eBPF Enforcement is Secure
+
+| Property | Description |
+| --- | --- |
+| **Verifier** | eBPF programs are verified before loading — no crashes, no infinite loops |
+| **Isolation** | Programs run in a restricted kernel context |
+| **No userspace bypass** | All network traffic must pass through eBPF hooks |
+| **Atomic updates** | Policy changes are atomic — no race conditions |
+| **In-kernel** | No context switches needed, faster than userspace |
+
+### Kernel-level Enforcement
+
+```mermaid
+flowchart TD
+ subgraph US["User Space"]
+ PODA["Pod A (Tenant A)"]
+ PODB["Pod B (Tenant B)"]
+ NOTE["Cannot bypass policy — traffic MUST go through kernel"]
+ end
+
+ subgraph KS["Kernel Space"]
+ EBPF["eBPF Programs • Attached to network interfaces • Run in privileged kernel context • Verified by kernel • Cannot be bypassed by userspace • Atomic policy updates"]
+ end
+
+ US -->|"all traffic"| KS
+```
+
+### Default Deny with Namespace Isolation
+
+{{% alert color="warning" %}}
+By default, Kubernetes allows all pod-to-pod traffic. Cozystack applies
+CiliumNetworkPolicy and CiliumClusterwideNetworkPolicy resources automatically
+when a tenant is created. These policies enforce namespace-level isolation and
+restrict access to system ports (etcd, kubelet, controllers).
+{{% /alert %}}
+
+Cozystack uses hierarchical tenant labels for isolation. Policies match on
+`tenant.cozystack.io/*` namespace labels, which allows parent tenants to
+include sub-tenant namespaces. Example:
+
+```yaml
+apiVersion: cilium.io/v2
+kind: CiliumNetworkPolicy
+metadata:
+ name: allow-internal-communication
+ namespace: tenant-example
+spec:
+ endpointSelector: {}
+ ingress:
+ - fromEndpoints:
+ - matchLabels:
+ k8s:io.cilium.k8s.namespace.labels.tenant.cozystack.io/tenant-example: ""
+ egress:
+ - toEndpoints:
+ - matchLabels:
+ k8s:io.cilium.k8s.namespace.labels.tenant.cozystack.io/tenant-example: ""
+ - toEntities:
+ - kube-apiserver
+ - cluster
+```
+
+## Observability with Hubble
+
+Hubble provides network traffic visibility for the Cilium data plane. It is
+included in the Cozystack networking stack but **disabled by default** to
+minimize resource usage.
+
+When enabled, Hubble provides:
+
+- Real-time flow logs for all pod-to-pod and external traffic
+- DNS query visibility
+- HTTP/gRPC request-level metrics
+- Prometheus metrics integration
+- Web UI for traffic visualization
+
+To enable Hubble, set the following in the Cilium configuration:
+
+```yaml
+cilium:
+ hubble:
+ enabled: true
+ relay:
+ enabled: true
+ ui:
+ enabled: true
+```
+
+See [Enabling Hubble](https://docs.cilium.io/en/stable/observability/hubble/) for full configuration details.
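+
+Once enabled, flows can be inspected with the Hubble CLI, assuming it is installed and has access to
+Hubble Relay (for example via `cilium hubble port-forward`); the namespace below is a placeholder:
+
+```bash
+hubble observe --namespace tenant-example --last 20
+```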
+
+## Traffic Flow Summary
+
+### External Access
+
+```mermaid
+flowchart LR
+ C["Client"] --> R["Router"]
+ R --> M["MetalLB (L2/BGP)"]
+ M --> N["Node"]
+ N --> E["Cilium eBPF"]
+ E --> P["Pod"]
+```
+
+### Tenant Isolation
+
+```mermaid
+flowchart LR
+ A["Pod A"] --> CHECK{"eBPF Policy Check"}
+ CHECK -->|"Cross-tenant"| DENY["DENY"]
+ CHECK -->|"Same tenant"| ALLOW["ALLOW → Pod A'"]
+```
diff --git a/content/en/docs/v1.3/networking/http-cache.md b/content/en/docs/v1.3/networking/http-cache.md
new file mode 100644
index 00000000..41a0588f
--- /dev/null
+++ b/content/en/docs/v1.3/networking/http-cache.md
@@ -0,0 +1,155 @@
+---
+title: "Managed Nginx-based HTTP Cache Service"
+linkTitle: "HTTP Cache"
+description: "The Nginx-based HTTP caching service is designed to optimize web traffic and enhance web application performance."
+weight: 20
+aliases:
+ - /docs/reference/applications/http-cache
+ - /docs/v1.3/reference/applications/http-cache
+---
+
+
+
+
+The Nginx-based HTTP caching service is designed to optimize web traffic and enhance web application performance.
+This service combines custom-built Nginx instances with HAProxy for efficient caching and load balancing.
+
+## Deployment information
+
+The Nginx instances include the following modules and features:
+
+- VTS module for statistics
+- Integration with ip2location
+- Integration with ip2proxy
+- Support for 51Degrees
+- Cache purge functionality
+
+HAProxy plays a vital role in this setup by directing incoming traffic to specific Nginx instances based on a consistent hash calculated from the URL. Each Nginx instance includes a Persistent Volume Claim (PVC) for storing cached content, ensuring fast and reliable access to frequently used resources.
+
+## Deployment Details
+
+The deployment architecture is illustrated in the diagram below:
+
+```
+
+ ┌─────────┐
+ │ metallb │ arp announce
+ └────┬────┘
+ │
+ │
+ ┌───────▼───────────────────────────┐
+ │ kubernetes service │ node
+ │ (externalTrafficPolicy: Local) │ level
+ └──────────┬────────────────────────┘
+ │
+ │
+ ┌────▼────┐ ┌─────────┐
+ │ haproxy │ │ haproxy │ loadbalancer
+ │ (active)│ │ (backup)│ layer
+ └────┬────┘ └─────────┘
+ │
+ │ balance uri whole
+ │ hash-type consistent
+ ┌──────┴──────┬──────────────┐
+ ┌───▼───┐ ┌───▼───┐ ┌───▼───┐ caching
+ │ nginx │ │ nginx │ │ nginx │ layer
+ └───┬───┘ └───┬───┘ └───┬───┘
+ │ │ │
+ ┌────┴───────┬─────┴────┬─────────┴──┐
+ │ │ │ │
+ ┌───▼────┐ ┌────▼───┐ ┌───▼────┐ ┌────▼───┐
+ │ origin │ │ origin │ │ origin │ │ origin │
+ └────────┘ └────────┘ └────────┘ └────────┘
+
+```
+
+## Known issues
+
+- VTS module shows wrong upstream response time, [github.com/vozlt/nginx-module-vts#198](https://github.com/vozlt/nginx-module-vts/issues/198)
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| -------------- | ------------------------------------------------------------ | ---------- | ------- |
+| `size` | Persistent Volume Claim size available for application data. | `quantity` | `10Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| ----------- | ------------------------------------------------ | ---------- | ----- |
+| `endpoints` | Endpoints configuration, as a list of `address:port` entries. | `[]string` | `[]`  |
+
+
+### HAProxy parameters
+
+| Name | Description | Type | Value |
+| -------------------------- | -------------------------------------------------------------------------------------------------------- | ---------- | ------ |
+| `haproxy` | HAProxy configuration. | `object` | `{}` |
+| `haproxy.replicas` | Number of HAProxy replicas. | `int` | `2` |
+| `haproxy.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `haproxy.resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `haproxy.resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `haproxy.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+
+
+### Nginx parameters
+
+| Name | Description | Type | Value |
+| ------------------------ | -------------------------------------------------------------------------------------------------------- | ---------- | ------ |
+| `nginx` | Nginx configuration. | `object` | `{}` |
+| `nginx.replicas` | Number of Nginx replicas. | `int` | `2` |
+| `nginx.resources` | Explicit CPU and memory configuration. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `nginx.resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `nginx.resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `nginx.resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+ cpu: 4000m
+ memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
+
+
+### endpoints
+
+`endpoints` is a flat list of `address:port` entries:
+
+```yaml
+endpoints:
+ - 10.100.3.1:80
+ - 10.100.3.11:80
+ - 10.100.3.2:80
+ - 10.100.3.12:80
+ - 10.100.3.3:80
+ - 10.100.3.13:80
+```
diff --git a/content/en/docs/v1.3/networking/tcp-balancer.md b/content/en/docs/v1.3/networking/tcp-balancer.md
new file mode 100644
index 00000000..2f1d2b6d
--- /dev/null
+++ b/content/en/docs/v1.3/networking/tcp-balancer.md
@@ -0,0 +1,78 @@
+---
+title: "Managed TCP Load Balancer Service"
+linkTitle: "TCP Load Balancer"
+description: "The Managed TCP Load Balancer Service simplifies the deployment and management of load balancers."
+weight: 30
+aliases:
+ - /docs/reference/applications/tcp-balancer
+ - /docs/v1.3/reference/applications/tcp-balancer
+---
+
+
+
+
+The Managed TCP Load Balancer Service simplifies the deployment and management of load balancers. It efficiently distributes incoming TCP traffic across multiple backend servers, ensuring high availability and optimal resource utilization.
+
+## Deployment Details
+
+Managed TCP Load Balancer Service efficiently utilizes HAProxy for load balancing purposes. HAProxy is a well-established and reliable solution for distributing incoming TCP traffic across multiple backend servers, ensuring high availability and efficient resource utilization. This deployment choice guarantees the seamless and dependable operation of your load balancing infrastructure.
+
+- Docs: https://www.haproxy.com/documentation/
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | -------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `replicas` | Number of HAProxy replicas. | `int` | `2` |
+| `resources` | Explicit CPU and memory configuration for each TCP Balancer replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| -------------------------------- | ------------------------------------------------------------- | ---------- | ------- |
+| `httpAndHttps` | HTTP and HTTPS configuration. | `object` | `{}` |
+| `httpAndHttps.mode` | Mode for balancer. | `string` | `tcp` |
+| `httpAndHttps.targetPorts` | Target ports configuration. | `object` | `{}` |
+| `httpAndHttps.targetPorts.http` | HTTP port number. | `int` | `80` |
+| `httpAndHttps.targetPorts.https` | HTTPS port number. | `int` | `443` |
+| `httpAndHttps.endpoints` | Endpoint addresses list. | `[]string` | `[]` |
+| `whitelistHTTP` | Secure HTTP by whitelisting client networks (default: false). | `bool` | `false` |
+| `whitelist` | List of allowed client networks. | `[]string` | `[]` |
+
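+A minimal sketch of the application-specific values described above; the endpoint addresses and whitelist
+networks are placeholders:
+
+```yaml
+httpAndHttps:
+  mode: tcp
+  targetPorts:
+    http: 80
+    https: 443
+  endpoints:
+    - 10.100.3.1
+    - 10.100.3.2
+whitelistHTTP: true
+whitelist:
+  - 192.168.0.0/24
+```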
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+ cpu: 4000m
+ memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
diff --git a/content/en/docs/v1.3/networking/virtual-router.md b/content/en/docs/v1.3/networking/virtual-router.md
new file mode 100644
index 00000000..c1dd8a0a
--- /dev/null
+++ b/content/en/docs/v1.3/networking/virtual-router.md
@@ -0,0 +1,68 @@
+---
+title: "Virtual Routers"
+linkTitle: "Virtual Routers"
+description: "Deploy a virtual router in a VM"
+weight: 40
+aliases:
+ - /docs/v1.3/operations/virtualization/virtual-router
+---
+
+Starting with version [v0.27.0](https://github.com/cozystack/cozystack/releases/tag/v0.27.0),
+Cozystack can deploy virtual routers (also known as "router appliances" or "middlebox appliances").
+This feature allows you to create a virtual router based on a virtual machine instance.
+The virtual router can route traffic between different networks.
+
+## Creating a Virtual Router
+
+Creating a virtual router requires a Cozystack administrator account.
+
+1. **Create a VM Instance**
+ Use the standard `vm-instance` and `virtual-machine` packages to create a virtual machine instance.
+
+1. **Disable Anti-Spoofing Protection**
+ To act as a virtual router, the VM instance should have anti-spoofing protection disabled:
+
+ ```bash
+ kubectl patch virtualmachines.kubevirt.io virtual-machine-example --type=merge \
+ -p '{"spec":{"template":{"metadata":{"annotations":{"ovn.kubernetes.io/port_security": "false"}}}}}'
+ ```
+
+1. **Restart the Virtual Machine**
+
+ ```bash
+ virtctl stop virtual-machine-example
+ virtctl start virtual-machine-example
+ ```
+
+1. **Retrieve the IP Address of the VM**
+
+ ```bash
+ kubectl get vmi
+ ```
+
+ The output will have a line with the new VM's IP address:
+
+ ```console
+ NAME AGE PHASE IP NODENAME READY
+ virtual-machine-example 3d4h Running 10.244.8.56 gld-csxhk-003 True
+ ```
+
+1. **Configure Custom Routes for a Tenant**
+ Edit the tenant namespace:
+
+ ```bash
+ kubectl edit namespace tenant-example
+ ```
+
+ Add the following annotation using the router IP you found earlier as `gw`
+   and the destination subnet for the router to handle as `dst`:
+
+ ```yaml
+ ovn.kubernetes.io/routes: |
+ [{
+ "gw": "10.244.8.56",
+ "dst": "10.10.13.0/24"
+ }]
+ ```
+
+These custom routes will now be applied to all pods within the tenant namespace.
diff --git a/content/en/docs/v1.3/networking/vpc.md b/content/en/docs/v1.3/networking/vpc.md
new file mode 100644
index 00000000..58ab71aa
--- /dev/null
+++ b/content/en/docs/v1.3/networking/vpc.md
@@ -0,0 +1,65 @@
+---
+title: "VPC"
+linkTitle: "VPC"
+description: "Dedicated subnets"
+weight: 10
+aliases:
+ - /docs/reference/applications/vpc
+ - /docs/v1.3/reference/applications/vpc
+---
+
+
+
+
+VPC offers a set of dedicated subnets with related networking services.
+As the service evolves, it will provide more ways to isolate your workloads.
+
+## Service details
+
+The service requires Kube-OVN and Multus CNI to be present, so by default it only works in the `paas-full` bundle.
+Kube-OVN provides the VPC and Subnet resources and handles isolation and networking services such as DHCP. Under the hood, it uses OVN virtual routers and virtual switches.
+Multus enables multi-NIC capability, so a pod or a VM can have two or more network interfaces.
+
+Currently, every workload is connected to the default management network, which also provides the default gateway, and most traffic goes through it.
+For now, VPC subnets are additional dedicated network spaces.
+
+## Deployment notes
+
+The VPC name must be unique within a tenant.
+The subnet name and IP address range must be unique within a VPC.
+The subnet IP address space must not overlap with the IP address range of the default management network; subnets of 172.16.0.0/12 are recommended.
+Currently there are no fail-safe checks, but they are planned for the future.
+
+Different VPCs may have subnets with overlapping IP address ranges.
+
+A VM or a pod may be connected to multiple secondary Subnets at once. Each secondary connection will be represented as an additional network interface.
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| -------------------- | -------------------------------- | ------------------- | ------- |
+| `subnets` | Subnets of a VPC | `map[string]object` | `{...}` |
+| `subnets[name].cidr` | Subnet CIDR, e.g. 192.168.0.0/24 | `cidr` | `{}` |
+
+
+## Examples
+
+```yaml
+apiVersion: apps.cozystack.io/v1alpha1
+kind: VirtualPrivateCloud
+metadata:
+ name: vpc00
+spec:
+ subnets:
+ sub00:
+ cidr: 172.16.0.0/24
+ sub01:
+ cidr: 172.16.1.0/24
+ sub02:
+ cidr: 172.16.2.0/24
+```
diff --git a/content/en/docs/v1.3/networking/vpn.md b/content/en/docs/v1.3/networking/vpn.md
new file mode 100644
index 00000000..d613e1cb
--- /dev/null
+++ b/content/en/docs/v1.3/networking/vpn.md
@@ -0,0 +1,101 @@
+---
+title: "Managed VPN Service"
+linkTitle: "VPN"
+description: "Managed VPN Service simplifies the deployment and management of a VPN server, enabling you to establish secure connections with ease."
+weight: 10
+aliases:
+ - /docs/reference/applications/vpn
+ - /docs/v1.3/reference/applications/vpn
+---
+
+
+
+
+A Virtual Private Network (VPN) is a critical tool for ensuring secure and private communication over the internet.
+Managed VPN Service simplifies the deployment and management of a VPN server, enabling you to establish secure connections with ease.
+
+- VPN client applications: https://shadowsocks5.github.io/en/download/clients.html
+
+## Deployment Details
+
+The VPN Service is powered by the Outline Server, an advanced and user-friendly VPN solution.
+Internally known as "Shadowbox", it simplifies the process of setting up and sharing Shadowsocks servers and operates by launching Shadowsocks instances on demand.
+Shadowbox is also compatible with standard Shadowsocks clients, providing flexibility and ease of use for your VPN requirements.
+
+- Docs: https://shadowsocks.org/
+- Docs: https://github.com/Jigsaw-Code/outline-server/tree/master/src/shadowbox
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | ------------------------------------------------------------------------------------------------------------------------------------ | ---------- | ------- |
+| `replicas` | Number of VPN server replicas. | `int` | `2` |
+| `resources` | Explicit CPU and memory configuration for each VPN server replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `nano` |
+| `external` | Enable external access from outside the cluster. | `bool` | `false` |
+
+
+### Application-specific parameters
+
+| Name | Description | Type | Value |
+| ---------------------- | ------------------------------------------------------------------------------------------------------ | ------------------- | ----- |
+| `host` | Host used to substitute into generated URLs. | `string` | `""` |
+| `users` | Users configuration map. | `map[string]object` | `{}` |
+| `users[name].password` | Password for the user (autogenerated if not provided). | `string` | `""` |
+| `externalIPs` | List of externalIPs for service. Optional. If not specified, will use LoadBalancer service by default. | `[]string` | `[]` |
+
+
+## Parameter examples and reference
+
+### resources and resourcesPreset
+
+`resources` sets explicit CPU and memory configurations for each replica.
+When left empty, the preset defined in `resourcesPreset` is applied.
+
+```yaml
+resources:
+ cpu: 4000m
+ memory: 4Gi
+```
+
+`resourcesPreset` sets named CPU and memory configurations for each replica.
+This setting is ignored if the corresponding `resources` value is set.
+
+| Preset name | CPU | memory |
+|-------------|--------|---------|
+| `nano` | `250m` | `128Mi` |
+| `micro` | `500m` | `256Mi` |
+| `small` | `1` | `512Mi` |
+| `medium` | `1` | `1Gi` |
+| `large` | `2` | `2Gi` |
+| `xlarge` | `4` | `4Gi` |
+| `2xlarge` | `8` | `8Gi` |
+
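+For example, to rely on a named preset instead of explicit values:
+
+```yaml
+resourcesPreset: small   # ignored if `resources` is set
+```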
+
+### users
+
+```yaml
+users:
+ user1:
+ password: hackme
+ user2: {} # autogenerated password
+```
+
+
+### externalIPs
+
+```yaml
+externalIPs:
+ - "11.22.33.44"
+ - "11.22.33.45"
+ - "11.22.33.46"
+```
diff --git a/content/en/docs/v1.3/operations/_index.md b/content/en/docs/v1.3/operations/_index.md
new file mode 100644
index 00000000..de4a8fdc
--- /dev/null
+++ b/content/en/docs/v1.3/operations/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Cluster Configuration and Management Guide"
+linkTitle: "Operations Guide"
+description: "Configure, monitor, secure, and upgrade a Cozystack cluster."
+weight: 35
+---
+
+Configure, monitor, secure, and upgrade a Cozystack cluster.
\ No newline at end of file
diff --git a/content/en/docs/v1.3/operations/cluster/_index.md b/content/en/docs/v1.3/operations/cluster/_index.md
new file mode 100644
index 00000000..0dfb568e
--- /dev/null
+++ b/content/en/docs/v1.3/operations/cluster/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Cluster Maintenance Guides"
+linkTitle: "Cluster Maintenance"
+description: "Guides for the regular cluster operations: adding and removing nodes, upgrading Talos, etc."
+weight: 20
+---
+
+Guides for the regular cluster operations: adding and removing nodes, upgrading Talos, etc.
diff --git a/content/en/docs/v1.3/operations/cluster/rotate-ca.md b/content/en/docs/v1.3/operations/cluster/rotate-ca.md
new file mode 100644
index 00000000..9ba728de
--- /dev/null
+++ b/content/en/docs/v1.3/operations/cluster/rotate-ca.md
@@ -0,0 +1,80 @@
+---
+title: "How to Rotate Certificate Authority"
+linkTitle: "How to rotate CA"
+description: "How to Rotate Certificate Authority"
+weight: 110
+---
+
+
+Talos sets up root certificate authorities with a lifetime of 10 years,
+and all Talos and Kubernetes API certificates are issued by these root CAs.
+In general, you almost never need to rotate the root CA certificate and key for the Talos API and Kubernetes API.
+
+Rotation of the root CA is only needed:
+
+- when you suspect that the private key has been compromised;
+- when you want to revoke access to the cluster for a leaked `talosconfig` or `kubeconfig`;
+- once in 10 years.
+
+### Rotate CA for Talos API
+
+To rotate the Talos CA for the management cluster, run the following commands.
+
+First, run in dry-run mode to preview the changes:
+
+```bash
+talm -f nodes/node.yaml rotate-ca --talos=true --kubernetes=false
+```
+
+Then, execute the actual rotation:
+
+```bash
+talm -f nodes/node.yaml rotate-ca --talos=true --kubernetes=false --dry-run=false
+```
+
+After the rotation is complete, download the new `talosconfig` from the secrets.
+
+### Rotate CA for the Management Kubernetes Cluster
+
+To rotate the Kubernetes CA for the management cluster, run the following commands.
+
+First, run in dry-run mode to preview the changes:
+
+```bash
+talm -f nodes/node.yaml rotate-ca --talos=false --kubernetes=true
+```
+
+Then, execute the actual rotation:
+
+```bash
+talm -f nodes/node.yaml rotate-ca --talos=false --kubernetes=true --dry-run=false
+```
+
+### Rotate CA for a Tenant Kubernetes Cluster
+
+See: https://kamaji.clastix.io/guides/certs-lifecycle/
+
+```bash
+export NAME=k8s-cluster-name
+export NAMESPACE=k8s-cluster-namespace
+
+kubectl -n ${NAMESPACE} delete secret ${NAME}-ca
+kubectl -n ${NAMESPACE} delete secret ${NAME}-sa-certificate
+
+kubectl -n ${NAMESPACE} delete secret ${NAME}-api-server-certificate
+kubectl -n ${NAMESPACE} delete secret ${NAME}-api-server-kubelet-client-certificate
+kubectl -n ${NAMESPACE} delete secret ${NAME}-datastore-certificate
+kubectl -n ${NAMESPACE} delete secret ${NAME}-front-proxy-client-certificate
+kubectl -n ${NAMESPACE} delete secret ${NAME}-konnectivity-certificate
+
+kubectl -n ${NAMESPACE} delete secret ${NAME}-admin-kubeconfig
+kubectl -n ${NAMESPACE} delete secret ${NAME}-controller-manager-kubeconfig
+kubectl -n ${NAMESPACE} delete secret ${NAME}-konnectivity-kubeconfig
+kubectl -n ${NAMESPACE} delete secret ${NAME}-scheduler-kubeconfig
+
+kubectl delete po -l app.kubernetes.io/name=kamaji -n cozy-kamaji
+kubectl delete po -l app=${NAME}-kcsi-driver
+```
+
+Wait for the `virt-launcher-kubernetes-*` pods to restart.
+After that, download the new Kubernetes certificate.
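+
+As a sketch (assuming the admin kubeconfig is stored under the `admin.conf` key, which may differ in your setup), the new kubeconfig can be fetched from the regenerated secret:
+
+```bash
+# Inspect the regenerated secret, then extract the new admin kubeconfig
+kubectl -n ${NAMESPACE} get secret ${NAME}-admin-kubeconfig -o yaml
+kubectl -n ${NAMESPACE} get secret ${NAME}-admin-kubeconfig \
+  -o go-template='{{ index .data "admin.conf" | base64decode }}' > kubeconfig
+```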
diff --git a/content/en/docs/v1.3/operations/cluster/scaling.md b/content/en/docs/v1.3/operations/cluster/scaling.md
new file mode 100644
index 00000000..59864689
--- /dev/null
+++ b/content/en/docs/v1.3/operations/cluster/scaling.md
@@ -0,0 +1,124 @@
+---
+title: "Cluster Scaling: Adding and Removing Nodes"
+linkTitle: "Cluster Scaling"
+description: "Adding and removing nodes in a Cozystack cluster."
+weight: 20
+---
+
+## How to add a node to a Cozystack cluster
+
+Adding a node follows steps similar to a regular Cozystack installation.
+
+1. [Install Talos on the node]({{% ref "/docs/v1.3/install/talos" %}}), using Cozystack's custom-built Talos image.
+
+1. Generate the configuration for the new node, using the [Talm]({{% ref "/docs/v1.3/install/kubernetes/talm#3-generate-node-configuration-files" %}})
+ or [talosctl]({{% ref "/docs/v1.3/install/kubernetes/talosctl#2-generate-node-configuration-files" %}}) guide.
+
+ For example, configuring a control plane node:
+
+ ```bash
+ talm template -e 192.168.123.20 -n 192.168.123.20 -t templates/controlplane.yaml -i > nodes/nodeN.yaml
+ ```
+
+ and for a worker node:
+ ```bash
+ talm template -e 192.168.123.20 -n 192.168.123.20 -t templates/worker.yaml -i > nodes/nodeN.yaml
+ ```
+
+1. Apply the generated configuration to the node, using the [Talm]({{% ref "/docs/v1.3/install/kubernetes/talm#41-apply-configuration-files" %}})
+ or [talosctl]({{% ref "/docs/v1.3/install/kubernetes/talosctl#3-apply-node-configuration" %}}) guide.
+ For example:
+
+ ```bash
+ talm apply -f nodes/nodeN.yaml -i
+ ```
+
+1. Wait for the node to reboot and bootstrap itself to the cluster.
+ You don't need to bootstrap it manually or to install Cozystack on it, as it is all done automatically.
+
+ You can check the result with `kubectl get nodes`.
+
+
+## How to remove a node from a Cozystack cluster
+
+When a cluster node fails, Cozystack automatically handles high availability by recreating replicated PVCs and workloads on other nodes.
+However, some issues can only be resolved by removing the failed node:
+
+- Local storage PVs may remain bound to the failed node, which can cause issues with new pods.
+ These need to be cleaned up manually.
+
+- The failed node will still exist in the cluster, which can lead to inconsistencies in the cluster state and affect pod scheduling.
+
+
+### Step 1: Remove the Node from the Cluster
+
+Run the following command to remove the failed node (replace `mynode` with the actual node name):
+
+```bash
+kubectl delete node mynode
+```
+
+If the failed node is a control-plane node, you must also remove its etcd member from the etcd cluster:
+
+```bash
+talm -f nodes/node1.yaml etcd member list
+```
+
+Example output:
+
+```console
+NODE ID HOSTNAME PEER URLS CLIENT URLS LEARNER
+37.27.60.28 2ba6e48b8cf1a0c1 node1 https://192.168.100.11:2380 https://192.168.100.11:2379 false
+37.27.60.28 b82e2194fb76ee42 node2 https://192.168.100.12:2380 https://192.168.100.12:2379 false
+37.27.60.28 f24f4de3d01e5e88 node3 https://192.168.100.13:2380 https://192.168.100.13:2379 false
+```
+
+Then remove the corresponding member (replace the ID with the one for your failed node):
+
+```bash
+talm -f nodes/node1.yaml etcd remove-member f24f4de3d01e5e88
+```
+
+### Step 2: Remove PVCs and Pods Bound to the Failed Node
+
+Here are a few commands to help you clean up after the failed node:
+
+- **Delete PVCs** bound to the failed node:
+ (Replace `mynode` with the name of your failed node)
+
+ ```bash
+ kubectl get pv -o json | jq -r '.items[] | select(.spec.nodeAffinity.required.nodeSelectorTerms[0].matchExpressions[0].values[0] == "mynode").spec.claimRef | "kubectl delete pvc -n \(.namespace) \(.name)"' | sh -x
+ ```
+
+- **Delete pods** stuck in `Pending` state across all namespaces:
+
+ ```bash
+ kubectl get pod -A | awk '/Pending/ {print "kubectl delete pod -n " $1 " " $2}' | sh -x
+ ```
+
+### Step 3: Check Resource Status
+
+After cleanup, check for any resource issues using `linstor advise`:
+
+```console
+# linstor advise resource
+╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
+┊ Resource ┊ Issue ┊ Possible fix ┊
+╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
+┊ pvc-02b0c0a1-e0b6-4e98-9384-60ff24f3b3b6 ┊ Resource expected to have 3 replicas, got only 2. ┊ linstor rd ap --place-count 3 pvc-02b0c0a1-e0b6-4e98-9384-60ff24f3b3b6 ┊
+┊ pvc-06e3b406-23f0-4f10-8b03-84063c1b2a12 ┊ Resource expected to have 3 replicas, got only 2. ┊ linstor rd ap --place-count 3 pvc-06e3b406-23f0-4f10-8b03-84063c1b2a12 ┊
+┊ pvc-a0b8aeaf-076e-4bd9-93ed-c4db09c04d0b ┊ Resource expected to have 3 replicas, got only 2. ┊ linstor rd ap --place-count 3 pvc-a0b8aeaf-076e-4bd9-93ed-c4db09c04d0b ┊
+┊ pvc-a523ebeb-c3b6-468d-abe5-f6afbbf31081 ┊ Resource expected to have 3 replicas, got only 2. ┊ linstor rd ap --place-count 3 pvc-a523ebeb-c3b6-468d-abe5-f6afbbf31081 ┊
+┊ pvc-cf7e87b5-3e6d-4034-903d-4625830fb5b4 ┊ Resource expected to have 1 replicas, got only 0. ┊ linstor rd ap --place-count 1 pvc-cf7e87b5-3e6d-4034-903d-4625830fb5b4 ┊
+┊ pvc-d344bc83-97fd-4489-bbe7-5399eea57165 ┊ Resource expected to have 3 replicas, got only 2. ┊ linstor rd ap --place-count 3 pvc-d344bc83-97fd-4489-bbe7-5399eea57165 ┊
+┊ pvc-d39345a9-5446-4c64-a5ba-957ff7c7a31f ┊ Resource expected to have 3 replicas, got only 2. ┊ linstor rd ap --place-count 3 pvc-d39345a9-5446-4c64-a5ba-957ff7c7a31f ┊
+┊ pvc-db6d4236-93bd-4268-9dcc-0ed275b17067 ┊ Resource expected to have 1 replicas, got only 0. ┊ linstor rd ap --place-count 1 pvc-db6d4236-93bd-4268-9dcc-0ed275b17067 ┊
+┊ pvc-ebb412c3-083c-4eee-93dc-70917ea6d87e ┊ Resource expected to have 1 replicas, got only 0. ┊ linstor rd ap --place-count 1 pvc-ebb412c3-083c-4eee-93dc-70917ea6d87e ┊
+┊ pvc-f107aacb-78d7-4ac6-97f8-8ed529a9c292 ┊ Resource expected to have 3 replicas, got only 2. ┊ linstor rd ap --place-count 3 pvc-f107aacb-78d7-4ac6-97f8-8ed529a9c292 ┊
+┊ pvc-f347d71a-b646-45e5-a717-f0a745061beb ┊ Resource expected to have 1 replicas, got only 0. ┊ linstor rd ap --place-count 1 pvc-f347d71a-b646-45e5-a717-f0a745061beb ┊
+┊ pvc-f6e96c83-6144-4510-b0ab-61936db52391 ┊ Resource expected to have 3 replicas, got only 2. ┊ linstor rd ap --place-count 3 pvc-f6e96c83-6144-4510-b0ab-61936db52391 ┊
+╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
+```
+
+Run the `linstor rd ap` commands suggested in the "Possible fix" column to restore the desired replica count.
+
diff --git a/content/en/docs/v1.3/operations/cluster/upgrade.md b/content/en/docs/v1.3/operations/cluster/upgrade.md
new file mode 100644
index 00000000..86401cff
--- /dev/null
+++ b/content/en/docs/v1.3/operations/cluster/upgrade.md
@@ -0,0 +1,91 @@
+---
+title: "Upgrading Cozystack and Post-upgrade Checks"
+linkTitle: "Upgrading Cozystack"
+description: "Upgrade Cozystack system components."
+weight: 10
+aliases:
+ - /docs/v1.3/upgrade
+ - /docs/v1.3/operations/upgrade
+---
+
+## About Cozystack Versions
+
+Cozystack uses a staged release process to ensure stability and flexibility during development.
+
+There are three types of releases:
+
+- **Alpha, Beta, and Release Candidates (RC)** – Preview versions (such as `v0.42.0-alpha.1` or `v0.42.0-rc.1`) used for final testing and validation.
+- **Stable Releases** – Regular versions (e.g., `v0.42.0`) that are feature-complete and thoroughly tested.
+ Such versions usually introduce new features, update dependencies, and may have API changes.
+- **Patch Releases** – Bugfix-only updates (e.g., `v0.42.1`) made after a stable release, based on a dedicated release branch.
+
+It's highly recommended to install only stable and patch releases in production environments.
+
+For a full list of releases, see the [Releases page](https://github.com/cozystack/cozystack/releases) on GitHub.
+
+To learn more about the Cozystack release process, read the [Cozystack Release Workflow](https://github.com/cozystack/cozystack/blob/main/docs/release.md).
+
+## Upgrading Cozystack
+
+### 1. Check the cluster status
+
+Before upgrading, check the current status of your Cozystack cluster by following the steps from the
+[Troubleshooting Checklist]({{% ref "/docs/v1.3/operations/troubleshooting/#troubleshooting-checklist" %}}).
+
+Make sure that the Platform Package is healthy and contains the expected configuration:
+
+```bash
+kubectl get packages.cozystack.io cozystack.cozystack-platform -o yaml
+```
+
+### 2. Protect critical resources
+
+Before upgrading, annotate the `cozy-system` namespace and the `cozystack-version` ConfigMap
+with `helm.sh/resource-policy=keep` to prevent Helm from deleting them during the upgrade:
+
+```bash
+kubectl annotate namespace cozy-system helm.sh/resource-policy=keep --overwrite
+kubectl annotate configmap -n cozy-system cozystack-version helm.sh/resource-policy=keep --overwrite
+```
+
+{{% alert color="warning" %}}
+**This step is required.** Without these annotations, removing or upgrading the Helm installer
+release could delete the `cozy-system` namespace and all resources within it.
+{{% /alert %}}
+
+### 3. Upgrade the Cozystack Operator
+
+Upgrade the Cozystack operator Helm release to the target version:
+
+{{< reuse-values-warning >}}
+
+```bash
+helm upgrade cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --version X.Y.Z \
+ --namespace cozy-system
+```
+
+You can follow the operator logs:
+
+```bash
+kubectl logs -n cozy-system deploy/cozystack-operator -f
+```
+
+### 4. Check the cluster status after upgrading
+
+```bash
+kubectl get pods -n cozy-system
+kubectl get hr -A | grep -v "True"
+```
+
+If any pod shows a failure, check its logs:
+
+```bash
+kubectl logs -n cozy-system deploy/cozystack-operator --previous
+```
+
+To make sure everything works as expected, repeat the steps from the
+[Troubleshooting Checklist]({{% ref "/docs/v1.3/operations/troubleshooting/#troubleshooting-checklist" %}}).
+
diff --git a/content/en/docs/v1.3/operations/configuration/_index.md b/content/en/docs/v1.3/operations/configuration/_index.md
new file mode 100644
index 00000000..a8d936f6
--- /dev/null
+++ b/content/en/docs/v1.3/operations/configuration/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Cozystack Cluster Configuration"
+linkTitle: "Configuration"
+description: "Learn how to configure your Cozystack cluster, including variants, components, and other key settings"
+weight: 10
+---
+
+This section of the documentation explains everything about Cozystack cluster configuration.
diff --git a/content/en/docs/v1.3/operations/configuration/components.md b/content/en/docs/v1.3/operations/configuration/components.md
new file mode 100644
index 00000000..c9fb3eb3
--- /dev/null
+++ b/content/en/docs/v1.3/operations/configuration/components.md
@@ -0,0 +1,77 @@
+---
+title: "Cozystack Components Reference"
+linkTitle: "Components"
+description: "Full reference for Cozystack components."
+weight: 30
+aliases:
+ - /docs/v1.3/install/cozystack/components
+---
+
+### Overwriting Component Parameters
+
+You might want to override specific options for the components.
+To achieve this, modify the corresponding Package resource and specify values
+in the `spec.components` section. The values structure follows the
+[values.yaml](https://github.com/cozystack/cozystack/tree/main/packages/system)
+of the respective system chart in the Cozystack repository.
+
+For example, if you want to enable FRR-K8s mode for MetalLB, look at its
+[values.yaml](https://github.com/cozystack/cozystack/blob/main/packages/system/metallb/values.yaml)
+to understand the available parameters, then modify the `cozystack.metallb` Package:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.metallb
+ namespace: cozy-system
+spec:
+ variant: default
+ components:
+ metallb:
+ values:
+ metallb:
+ frrk8s:
+ enabled: true
+```
+
+### Enabling and Disabling Components
+
+Bundles have optional components that need to be explicitly enabled (included) in the installation.
+Regular bundle components, on the other hand, can be disabled (excluded) from the installation when you don't need them.
+
+Use `bundles.enabledPackages` and `bundles.disabledPackages` in the Platform Package values.
+Every entry in those lists is a fully-qualified Package name — the same name you see with
+`kubectl get package`. All platform packages live under the `cozystack.` prefix (for example,
+`cozystack.metallb`, `cozystack.hetzner-robotlb`, `cozystack.nfs-driver`). Run
+`kubectl get package` to see the exact names available on your cluster before editing
+the Platform Package.
+
+For example, [installing Cozystack in Hetzner]({{% ref "/docs/v1.3/install/providers/hetzner" %}})
+requires swapping the default load balancer, MetalLB, with one made specifically for Hetzner, called RobotLB:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ bundles:
+ disabledPackages:
+ - cozystack.metallb
+ enabledPackages:
+ - cozystack.hetzner-robotlb
+ # rest of the config
+```
+
+Disabling components must be done before installing Cozystack.
+Applying updated configuration with `disabledPackages` will not remove components that are already installed.
+To remove already installed components, delete the Helm release manually using this command:
+
+```bash
+kubectl delete hr -n <namespace> <helmrelease-name>
+```
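+
+For example, assuming MetalLB was installed as a HelmRelease named `metallb` in the `cozy-metallb` namespace (check the actual names with `kubectl get hr -A`):
+
+```bash
+kubectl delete hr -n cozy-metallb metallb
+```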
diff --git a/content/en/docs/v1.3/operations/configuration/platform-package.md b/content/en/docs/v1.3/operations/configuration/platform-package.md
new file mode 100644
index 00000000..38de929c
--- /dev/null
+++ b/content/en/docs/v1.3/operations/configuration/platform-package.md
@@ -0,0 +1,166 @@
+---
+title: "Platform Package Reference"
+linkTitle: "Platform Package"
+description: "Reference for the Cozystack Platform Package, which defines key configuration values for a Cozystack installation and operations."
+weight: 10
+aliases:
+ - /docs/v1.3/install/cozystack/configmap
+ - /docs/v1.3/operations/configuration/configmap
+---
+
+This page explains the role of the Cozystack Platform Package and provides a full reference for its values.
+
+Cozystack's main configuration is defined by a `Package` custom resource.
+This Package includes the [Cozystack variant]({{% ref "/docs/v1.3/operations/configuration/variants" %}}) and [component settings]({{% ref "/docs/v1.3/operations/configuration/components" %}}),
+key network settings, exposed services, and other options.
+
+
+## Example
+
+Here's an example configuration that installs Cozystack with the `isp-full` variant, the root host `example.org`,
+and the Cozystack Dashboard and API exposed to users:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://api.example.org:443"
+ exposedServices:
+ - dashboard
+ - api
+ networking:
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+```
+
+
+## Reference
+
+### Package-level fields
+
+| Field | Description |
+| --- | --- |
+| `spec.variant` | Variant to use for installation (e.g., `isp-full`, `isp-full-generic`, `isp-hosted`, `distro-full`). |
+
+### Platform values (`spec.components.platform.values.*`)
+
+#### Publishing
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `publishing.host` | `"example.org"` | The main domain for all services created under Cozystack, such as the dashboard, Grafana, Keycloak, etc. |
+| `publishing.apiServerEndpoint` | `""` | Used for generating kubeconfig files for your users. It is recommended to use a routable FQDN or IP address instead of local-only addresses. Example: `"https://api.example.org"`. |
+| `publishing.exposedServices` | `[api, dashboard, vm-exportproxy, cdi-uploadproxy]` | List of services to expose. Possible values: `api`, `dashboard`, `cdi-uploadproxy`, `vm-exportproxy`. |
+| `publishing.ingressName` | `"tenant-root"` | Ingress controller to use for exposing services. |
+| `publishing.externalIPs` | `[]` | List of external IPs used for the specified ingress controller. If not specified, a LoadBalancer service is used by default. |
+| `publishing.certificates.solver` | `"http01"` | ACME challenge solver type for default letsencrypt issuer. Possible values: `http01`, `dns01`. |
+| `publishing.certificates.issuerName` | `"letsencrypt-prod"` | `ClusterIssuer` name for TLS certificates used in system Helm releases. |
+
+#### Networking
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `networking.clusterDomain` | `"cozy.local"` | Internal cluster domain name. |
+| `networking.podCIDR` | `"10.244.0.0/16"` | The pod subnet used by Pods to assign IPs. |
+| `networking.podGateway` | `"10.244.0.1"` | The gateway address for the pod subnet. |
+| `networking.serviceCIDR` | `"10.96.0.0/16"` | The service subnet used by Services to assign IPs. |
+| `networking.joinCIDR` | `"100.64.0.0/16"` | The `join` subnet for network communication between the Node and Pod. Follow the [kube-ovn] documentation to learn more. |
+| `networking.kubeovn.MASTER_NODES` | `""` | Comma-separated list of KubeOVN master node IPs. By default, KubeOVN uses `lookup` to find control-plane nodes by label `node-role.kubernetes.io/control-plane`. On fresh clusters, lookup may return empty results. Set this to override. |
+
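+For example, to pin the KubeOVN master nodes explicitly (the IP addresses below are placeholders):
+
+```yaml
+networking:
+  kubeovn:
+    MASTER_NODES: "192.168.100.11,192.168.100.12,192.168.100.13"
+```
+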
+#### Bundles
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `bundles.system.enabled` | `false` | Enable the system bundle. Managed by the operator based on `spec.variant`. |
+| `bundles.system.variant` | `"isp-full"` | System bundle variant. Options: `isp-full`, `isp-full-generic`, `isp-hosted`. Managed by the operator based on `spec.variant`. |
+| `bundles.iaas.enabled` | `false` | Enable the IaaS bundle. Managed by the operator based on `spec.variant`. |
+| `bundles.paas.enabled` | `false` | Enable the PaaS bundle. Managed by the operator based on `spec.variant`. |
+| `bundles.naas.enabled` | `false` | Enable the NaaS bundle. Managed by the operator based on `spec.variant`. |
+| `bundles.enabledPackages` | `[]` | List of optional bundle components to include in the installation. Read more in ["How to enable and disable bundle components"][enable-disable]. |
+| `bundles.disabledPackages` | `[]` | List of bundle components to exclude from the installation. Read more in ["How to enable and disable bundle components"][enable-disable]. |
+
+#### Authentication
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `authentication.oidc.enabled` | `false` | Enable [OIDC][oidc] feature in Cozystack. |
+| `authentication.oidc.insecureSkipVerify` | `false` | Skip TLS certificate verification for the OIDC provider. |
+| `authentication.oidc.keycloakExtraRedirectUri` | `""` | Additional redirect URI for Keycloak OIDC client. |
+| `authentication.oidc.keycloakInternalUrl` | `""` | Internal URL for backend-to-backend requests to Keycloak. When set, the dashboard's oauth2-proxy skips OIDC discovery and routes token, JWKS, userinfo, and logout requests through this URL while keeping browser redirects on the external URL. Example: `http://keycloak-http.cozy-keycloak.svc:8080/realms/cozy`. |
+
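+For example, a minimal snippet enabling OIDC with the remaining settings left at their defaults (see the [OIDC guide][oidc] for the full setup):
+
+```yaml
+authentication:
+  oidc:
+    enabled: true
+```
+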
+#### Scheduling
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `scheduling.globalAppTopologySpreadConstraints` | `""` | Global pod topology spread constraints applied to all managed applications. |
+
+#### Branding
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `branding` | `{}` | UI branding configuration object. See the [White Labeling]({{% ref "/docs/v1.3/operations/configuration/white-labeling" %}}) guide for available fields and usage. Individual fields (e.g., `titleText`, `logoSvg`) have their own defaults when not specified. |
+
+#### Registries
+
+Container registry mirrors configuration. Allows routing image pulls through local mirrors.
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `registries.mirrors` | `{}` | Map of registry hostnames to mirror endpoints. Each entry maps a registry (e.g., `docker.io`) to a list of mirror endpoints. |
+| `registries.config` | `{}` | Per-endpoint configuration, such as TLS settings. |
+
+Example:
+
+```yaml
+registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - http://10.0.0.1:8082
+ ghcr.io:
+ endpoints:
+ - http://10.0.0.1:8083
+ config:
+ "10.0.0.1:8082":
+ tls:
+ insecureSkipVerify: true
+```
+
+#### Resources
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `resources.cpuAllocationRatio` | `10` | CPU allocation ratio: `1/cpuAllocationRatio` CPU requested per 1 vCPU. See [Resource Management] for detailed explanation and examples. |
+| `resources.memoryAllocationRatio` | `1` | Memory allocation ratio: `1/memoryAllocationRatio` memory requested per unit of configured memory. |
+| `resources.ephemeralStorageAllocationRatio` | `40` | Ephemeral storage allocation ratio: `1/ephemeralStorageAllocationRatio` ephemeral storage requested per unit of configured storage. |
+
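+For example, with the default ratios shown below, an application configured with 1 vCPU and 1Gi of memory requests 100m CPU (1/10 of a vCPU) and the full 1Gi of memory:
+
+```yaml
+resources:
+  cpuAllocationRatio: 10
+  memoryAllocationRatio: 1
+  ephemeralStorageAllocationRatio: 40
+```
+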
+#### Internal fields
+
+These fields are managed automatically by the Cozystack operator and should not be modified manually.
+
+| Value | Default | Description |
+| --- | --- | --- |
+| `sourceRef.kind` | `"OCIRepository"` | Source reference kind for the platform package. |
+| `sourceRef.name` | `"cozystack-platform"` | Source reference name. |
+| `sourceRef.namespace` | `"cozy-system"` | Source reference namespace. |
+| `sourceRef.path` | `"/"` | Source reference path. |
+| `migrations.enabled` | `false` | Whether platform migrations are enabled. |
+| `migrations.image` | — | Container image used for running platform migrations. |
+| `migrations.targetVersion` | — | Target migration version number. |
+
+[enable-disable]: {{% ref "/docs/v1.3/operations/configuration/components#enabling-and-disabling-components" %}}
+[overwrite-parameters]: {{% ref "/docs/v1.3/operations/configuration/components#overwriting-component-parameters" %}}
+[Resource Management]: {{% ref "/docs/v1.3/guides/resource-management#cpu-allocation-ratio" %}}
+[oidc]: {{% ref "/docs/v1.3/operations/oidc" %}}
+[telemetry]: {{% ref "/docs/v1.3/operations/configuration/telemetry" %}}
+[kube-ovn]: https://kubeovn.github.io/docs/en/guide/subnet/#join-subnet
diff --git a/content/en/docs/v1.3/operations/configuration/telemetry.md b/content/en/docs/v1.3/operations/configuration/telemetry.md
new file mode 100644
index 00000000..db04d0e5
--- /dev/null
+++ b/content/en/docs/v1.3/operations/configuration/telemetry.md
@@ -0,0 +1,86 @@
+---
+title: "Telemetry"
+linkTitle: "Telemetry"
+description: "Cozystack Telemetry"
+weight: 60
+aliases:
+ - /docs/v1.3/telemetry
+ - /docs/v1.3/operations/telemetry
+---
+
+This document outlines the telemetry feature within the Cozystack project, detailing the rationale behind data collection, the nature of the data collected, data handling practices, and instructions for opting out.
+
+## Why We Collect Telemetry
+
+Cozystack, as an open source project, thrives on community feedback and usage insights. Telemetry data allows maintainers to understand how Cozystack is being used in real-world scenarios. This data informs decisions related to feature prioritization, testing strategies, bug fixes, and overall project evolution. Without telemetry, decisions would rely on guesswork or limited feedback, which might slow down improvement cycles or introduce features that don’t align with users’ needs. Telemetry ensures that development is guided by actual usage patterns and community requirements, fostering a more robust and user-centric platform.
+
+## What We Collect and How
+
+Cozystack strives to comply with the [LF Telemetry Data Policy](https://www.linuxfoundation.org/legal/telemetry-data-policy), ensuring responsible data collection practices that respect user privacy and transparency.
+
+Our focus is on gathering non-personal usage metrics about Cozystack components rather than personal user information. We specifically collect information about cluster infrastructure (nodes, storage, networking), installed packages, and application instances. This collected data helps us gain insights into prevalent configurations and usage trends across installations.
+
+Telemetry is collected by two components:
+- **cozystack-operator** — collects cluster-level metrics (nodes, storage, packages)
+- **cozystack-controller** — collects application-level metrics (deployed application instances)
+
+For a detailed view of what data is collected, you can review the telemetry implementation:
+- [Telemetry Client](https://github.com/cozystack/cozystack/tree/main/internal/telemetry)
+- [Telemetry Server](https://github.com/cozystack/cozystack-telemetry-server/)
+
+### Example of a Telemetry Payload
+
+Below is what a typical telemetry payload looks like in Cozystack.
+
+**From cozystack-operator** (cluster infrastructure):
+
+```prometheus
+cozy_cluster_info{cozystack_version="v1.0.0",kubernetes_version="v1.31.4"} 1
+cozy_nodes_count{os="linux (Talos (v1.8.4))",kernel="6.6.64-talos"} 3
+cozy_cluster_capacity{resource="cpu"} 168
+cozy_cluster_capacity{resource="memory"} 811020009472
+cozy_cluster_capacity{resource="nvidia.com/TU104GL_TESLA_T4"} 3
+cozy_loadbalancers_count 1
+cozy_pvs_count{driver="linstor.csi.linbit.com",size="5Gi"} 7
+cozy_pvs_count{driver="linstor.csi.linbit.com",size="10Gi"} 6
+cozy_package_info{name="cozystack.core",variant="default"} 1
+cozy_package_info{name="cozystack.storage",variant="linstor"} 1
+cozy_package_info{name="cozystack.monitoring",variant="default"} 1
+```
+
+**From cozystack-controller** (application instances):
+
+```prometheus
+cozy_application_count{kind="Tenant"} 2
+cozy_application_count{kind="Postgres"} 5
+cozy_application_count{kind="Redis"} 3
+cozy_application_count{kind="Kubernetes"} 2
+cozy_application_count{kind="VirtualMachine"} 0
+```
+
+Data is collected by components running within Cozystack that periodically gather and transmit usage statistics to our secure backend. The telemetry system ensures that data is anonymized, aggregated, and stored securely, with strict controls on access to protect user privacy.
+
+## Telemetry Opt-Out
+
+We respect your privacy and choice regarding telemetry. If you prefer not to participate in telemetry data collection, Cozystack provides a straightforward way to opt out.
+
+To disable telemetry reporting, upgrade the Cozystack operator Helm release with the `disableTelemetry` flag:
+
+```bash
+helm upgrade cozystack oci://ghcr.io/cozystack/cozystack/cozy-installer \
+ --namespace cozy-system \
+ --version X.Y.Z \
+ --set cozystackOperator.disableTelemetry=true
+```
+
+Replace `X.Y.Z` with your currently installed Cozystack version.
+
+{{< reuse-values-warning >}}
+
+This command updates the operator to disable telemetry data collection. If you wish to re-enable telemetry in the future, run the same command with `disableTelemetry=false`.
+
+## Conclusion
+
+Telemetry in Cozystack is designed to support a data-informed development process that responds to the community’s needs and ensures continuous improvement. Your participation—or choice to opt out—helps shape the future of Cozystack, making it a more effective and user-focused platform for everyone.
diff --git a/content/en/docs/v1.3/operations/configuration/variants.md b/content/en/docs/v1.3/operations/configuration/variants.md
new file mode 100644
index 00000000..8f6d3be9
--- /dev/null
+++ b/content/en/docs/v1.3/operations/configuration/variants.md
@@ -0,0 +1,203 @@
+---
+title: "Cozystack Variants: Overview and Comparison"
+linkTitle: "Variants"
+description: "Cozystack variants reference: composition, configuration, and comparison."
+weight: 20
+aliases:
+ - /docs/v1.3/guides/bundles
+ - /docs/v1.3/operations/bundles/
+ - /docs/v1.3/operations/bundles/isp-full
+ - /docs/v1.3/operations/bundles/isp-hosted
+ - /docs/v1.3/operations/bundles/paas-full
+ - /docs/v1.3/operations/bundles/paas-hosted
+ - /docs/v1.3/operations/bundles/distro-full
+ - /docs/v1.3/operations/bundles/distro-hosted
+ - /docs/v1.3/install/cozystack/bundles
+ - /docs/v1.3/operations/configuration/bundles
+---
+
+## Introduction
+
+**Variants** are pre-defined configurations of Cozystack that determine which bundles and components are enabled.
+Each variant is tested, versioned, and guaranteed to work as a unit.
+They simplify installation, reduce the risk of misconfiguration, and make it easier to choose the right set of features for your deployment.
+
+This guide is for infrastructure engineers, DevOps teams, and platform architects planning to deploy Cozystack in different environments.
+It explains how Cozystack variants help tailor the installation to specific needs—whether you're building a fully featured platform-as-a-service
+or need full manual control over installed packages.
+
+
+## Variants Overview
+
+| Component | [default] | [isp-full] | [isp-full-generic] | [isp-hosted] |
+|:------------------------------|:-----------------------|:-----------------------|:-----------------------|:-----------------------|
+| [Managed Kubernetes][k8s] | | ✔ | ✔ | |
+| [Managed Applications][apps] | | ✔ | ✔ | ✔ |
+| [Virtual Machines][vm] | | ✔ | ✔ | |
+| Cozystack Dashboard (UI) | | ✔ | ✔ | ✔ |
+| [Cozystack API][api] | | ✔ | ✔ | ✔ |
+| [Kubernetes Operators] | | ✔ | ✔ | ✔ |
+| [Monitoring subsystem] | | ✔ | ✔ | ✔ |
+| Storage subsystem | | [LINSTOR] | [LINSTOR] | |
+| Networking subsystem | | [Kube-OVN] + [Cilium] | [Kube-OVN] + [Cilium] | |
+| Virtualization subsystem | | [KubeVirt] | [KubeVirt] | |
+| OS and [Kubernetes] subsystem | | [Talos Linux] | | |
+
+[apps]: {{% ref "/docs/v1.3/applications" %}}
+[vm]: {{% ref "/docs/v1.3/virtualization" %}}
+[k8s]: {{% ref "/docs/v1.3/kubernetes" %}}
+[api]: {{% ref "/docs/v1.3/cozystack-api" %}}
+[monitoring subsystem]: {{% ref "/docs/v1.3/guides/platform-stack#victoria-metrics" %}}
+[linstor]: {{% ref "/docs/v1.3/guides/platform-stack#drbd" %}}
+[kube-ovn]: {{% ref "/docs/v1.3/guides/platform-stack#kube-ovn" %}}
+[cilium]: {{% ref "/docs/v1.3/guides/platform-stack#cilium" %}}
+[kubevirt]: {{% ref "/docs/v1.3/guides/platform-stack#kubevirt" %}}
+[talos linux]: {{% ref "/docs/v1.3/guides/platform-stack#talos-linux" %}}
+[kubernetes]: {{% ref "/docs/v1.3/guides/platform-stack#kubernetes" %}}
+[kubernetes operators]: https://github.com/cozystack/cozystack/blob/main/packages/core/platform/templates/bundles/paas.yaml
+
+[default]: {{% ref "/docs/v1.3/operations/configuration/variants#default" %}}
+[isp-full]: {{% ref "/docs/v1.3/operations/configuration/variants#isp-full" %}}
+[isp-full-generic]: {{% ref "/docs/v1.3/operations/configuration/variants#isp-full-generic" %}}
+[isp-hosted]: {{% ref "/docs/v1.3/operations/configuration/variants#isp-hosted" %}}
+
+
+## Choosing the Right Variant
+
+Variants combine bundles from different layers to match particular needs.
+Some are designed for full platform scenarios, others for cloud-hosted workloads or fully manual package management.
+
+### `default`
+
+`default` is a minimal variant that only provides the set of PackageSources (package registry references).
+No bundles or components are pre-configured—all packages are managed manually through [cozypkg](https://github.com/cozystack/cozystack/tree/main/cmd/cozypkg).
+Use this variant when you need full control over which packages are installed and configured.
+This is the variant used in the [Build Your Own Platform (BYOP)]({{% ref "/docs/v1.3/install/cozystack/kubernetes-distribution" %}}) workflow.
+
+Example configuration:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: default
+```
+
+### `isp-full`
+
+`isp-full` is a full-featured PaaS and IaaS variant, designed for installation on Talos Linux.
+It includes all bundles and provides the full set of Cozystack components, enabling a comprehensive PaaS experience.
+Some higher-layer components are optional and can be excluded during installation.
+
+`isp-full` is intended for installation on bare-metal servers or VMs.
+
+Example configuration:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full
+ components:
+ platform:
+ values:
+ networking:
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://192.168.100.10:6443"
+ exposedServices:
+ - api
+ - dashboard
+ - cdi-uploadproxy
+ - vm-exportproxy
+```
+
+### `isp-full-generic`
+
+`isp-full-generic` provides the same full-featured PaaS and IaaS experience as `isp-full`, but is designed for generic Kubernetes distributions such as k3s, kubeadm, or RKE2.
+Use this variant when you want the full Cozystack feature set without requiring Talos Linux.
+
+For detailed installation instructions, see the [Generic Kubernetes guide]({{% ref "/docs/v1.3/install/kubernetes/generic" %}}).
+
+Example configuration:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full-generic
+ components:
+ platform:
+ values:
+ networking:
+ podCIDR: "10.244.0.0/16"
+ podGateway: "10.244.0.1"
+ serviceCIDR: "10.96.0.0/16"
+ joinCIDR: "100.64.0.0/16"
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://192.168.100.10:6443"
+ exposedServices:
+ - api
+ - dashboard
+ - cdi-uploadproxy
+ - vm-exportproxy
+```
+
+### `isp-hosted`
+
+Cozystack can be installed as platform-as-a-service (PaaS) on top of an existing managed Kubernetes cluster,
+typically provisioned from a cloud provider.
+Variant `isp-hosted` is made for this use case.
+It can be used with [kind](https://kind.sigs.k8s.io/) and any cloud-based Kubernetes clusters.
+
+`isp-hosted` includes the PaaS and NaaS bundles, providing Cozystack API and UI, managed applications, and tenant Kubernetes clusters.
+It does not include CNI plugins, virtualization, or storage.
+
+The Kubernetes cluster used to deploy Cozystack must conform to the following requirements:
+
+- The listening address of some Kubernetes components must be changed from `localhost` to a routable address.
+- The Kubernetes API server must be reachable on `localhost`.
+
+Example configuration:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-hosted
+ components:
+ platform:
+ values:
+ publishing:
+ host: "example.org"
+ apiServerEndpoint: "https://192.168.100.10:6443"
+ exposedServices:
+ - api
+ - dashboard
+```
+
+## Learn More
+
+For a full list of configuration options for each variant, refer to the
+[configuration reference]({{% ref "/docs/v1.3/operations/configuration" %}}).
+
+To see the full list of components and how to enable or disable them, refer to the
+[Components reference]({{% ref "/docs/v1.3/operations/configuration/components" %}}).
+
+To deploy a selected variant, follow the [Cozystack installation guide]({{% ref "/docs/v1.3/install/cozystack" %}})
+or [provider-specific guides]({{% ref "/docs/v1.3/install/providers" %}}).
+However, if this is your first time installing Cozystack, it's best to use the `isp-full` variant and
+go through the [Cozystack tutorial]({{% ref "/docs/v1.3/getting-started" %}}).
diff --git a/content/en/docs/v1.3/operations/configuration/white-labeling.md b/content/en/docs/v1.3/operations/configuration/white-labeling.md
new file mode 100644
index 00000000..82d06068
--- /dev/null
+++ b/content/en/docs/v1.3/operations/configuration/white-labeling.md
@@ -0,0 +1,231 @@
+---
+title: "White Labeling"
+linkTitle: "White Labeling"
+description: "Customize branding elements in the Cozystack Dashboard and Keycloak authentication pages, including custom Keycloak themes"
+weight: 50
+---
+
+White labeling allows you to replace default Cozystack branding with your own logos and text across the Dashboard UI and Keycloak authentication pages.
+
+## Overview
+
+Branding is configured through the `branding` field in the Platform Package (`spec.components.platform.values.branding`). The configuration propagates automatically to:
+
+- **Dashboard**: logo, page title, footer text, favicon, and tenant identifier
+- **Keycloak**: realm display name on authentication pages
+
+## Configuration
+
+Edit your Platform Package to add or update the `branding` section:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cozystack-platform
+spec:
+ variant: isp-full # use your variant
+ components:
+ platform:
+ values:
+ branding:
+ # Dashboard branding
+ titleText: "My Company Dashboard"
+ footerText: "My Company Platform"
+ tenantText: "Production v1.0"
+ logoText: ""
+ logoSvg: ""
+ iconSvg: ""
+ # Keycloak branding
+ brandName: "My Company"
+          brandHtmlName: "My Company"
+```
+
+Apply the changes:
+
+```bash
+kubectl apply --server-side --filename platform-package.yaml
+```
+
+## Configuration Fields
+
+### Dashboard Fields
+
+| Field | Default | Description |
+| --- | --- | --- |
+| `titleText` | `Cozystack Dashboard` | Browser tab title and Dashboard header text. |
+| `footerText` | `Cozystack` | Text displayed in the Dashboard footer. |
+| `tenantText` | Platform version string | Version or tenant identifier displayed in the Dashboard. |
+| `logoText` | `""` (empty) | Alternative text-based logo. Used when SVG logo is not provided. |
+| `logoSvg` | Cozystack logo (base64) | Base64-encoded SVG logo displayed in the Dashboard header. |
+| `iconSvg` | Cozystack icon (base64) | Base64-encoded SVG icon used as the browser favicon. |
+
+### Keycloak Fields
+
+| Field | Default | Description |
+| --- | --- | --- |
+| `brandName` | Not set | Plain text realm name displayed in the Keycloak browser tab. |
+| `brandHtmlName` | Not set | HTML-formatted realm name displayed on Keycloak login pages. Supports inline HTML/CSS for styled branding. |
+
+## Preparing SVG Logos
+
+### Theme-Aware SVG Variables
+
+The Dashboard supports template variables in SVG content that adapt to light and dark themes:
+
+- `{token.colorText}` — replaced at runtime with the current theme's text color
+
+{{< note >}}
+The `{token.colorText}` syntax is **not valid XML**. The attribute value is intentionally unquoted because the Dashboard performs raw string substitution on the SVG source before rendering — it replaces `{token.colorText}` with the actual color value. This means SVG files with these placeholders cannot be opened directly in a browser or validated with an XML parser. This is expected and matches the upstream Dashboard implementation.
+{{< /note >}}
+
+Example SVG using a theme-aware variable:
+
+```text
+<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 160 40">
+  <!-- illustrative logo: the fill value is intentionally unquoted so the Dashboard can substitute the theme color -->
+  <text x="0" y="28" font-size="24" fill={token.colorText}>My Company</text>
+</svg>
+```
+
+### Converting SVG to Base64
+
+Encode your SVG files to base64 strings:
+
+```bash
+base64 < logo.svg | tr -d '\n'
+```
+
+### Example Workflow
+
+```bash
+# Encode logos
+LOGO_B64=$(base64 < logo.svg | tr -d '\n')
+ICON_B64=$(base64 < icon.svg | tr -d '\n')
+
+# Patch the Platform Package
+kubectl patch packages.cozystack.io cozystack.cozystack-platform \
+  --type merge \
+ --patch "{
+ \"spec\": {
+ \"components\": {
+ \"platform\": {
+ \"values\": {
+ \"branding\": {
+ \"logoSvg\": \"$LOGO_B64\",
+ \"iconSvg\": \"$ICON_B64\"
+ }
+ }
+ }
+ }
+ }
+ }"
+```
+
+## Verification
+
+After applying changes, verify that branding is correctly configured:
+
+1. **Check the Platform Package**:
+
+ ```bash
+ kubectl get packages.cozystack.io cozystack.cozystack-platform \
+ --output jsonpath='{.spec.components.platform.values.branding}' | jq .
+ ```
+
+2. **Dashboard**: open the Dashboard URL and verify the logo, title, footer, and favicon.
+
+3. **Keycloak**: open the Keycloak login page and verify the realm display name.
+
+{{< note >}}
+You may need to hard-refresh (Ctrl+Shift+R / Cmd+Shift+R) or clear browser cache to see updated branding.
+{{< /note >}}
+
+## Custom Keycloak Themes
+
+For deeper visual customization of Keycloak authentication pages (login, registration, account management), you can inject custom themes built as container images.
+
+### Theme Image Contract
+
+A theme image must contain theme files under the `/themes/` directory. The directory structure should follow the standard [Keycloak theme format](https://www.keycloak.org/docs/latest/server_development/index.html#_themes):
+
+```text
+/themes/
+ my-brand/
+ login/
+ theme.properties
+ resources/
+ css/
+ img/
+ account/
+ theme.properties
+```
+
+At pod startup, init containers copy files from each theme image into Keycloak's `/opt/keycloak/themes/` directory. Built-in Keycloak themes (bundled in JAR files) are not affected.
+
+If multiple theme images contain files at the same path, later entries in the list take precedence.
+
+### Configuration
+
+Custom themes are configured on the Keycloak system component. Edit the `cozystack.keycloak` Package:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.keycloak
+ namespace: cozy-system
+spec:
+ variant: default
+ components:
+ keycloak:
+ values:
+ themes:
+ - name: my-brand
+ image: registry.example.com/my-keycloak-theme:v1.0
+```
+
+Apply the changes:
+
+```bash
+kubectl apply --server-side --filename keycloak-package.yaml
+```
+
+### Theme Fields
+
+| Field | Required | Description |
+| --- | --- | --- |
+| `name` | Yes | Theme identifier. Used as init container name (sanitized to DNS-1123 format). |
+| `image` | Yes | Container image containing theme files under `/themes/`. |
+
+### Private Registries
+
+If your theme images are stored in a private registry, add `imagePullSecrets`:
+
+```yaml
+keycloak:
+ values:
+ themes:
+ - name: my-brand
+ image: private-registry.example.com/my-keycloak-theme:v1.0
+ imagePullSecrets:
+ - name: my-registry-secret
+```
+
+The referenced Secret must exist in the `cozy-keycloak` namespace.
+
+### Activating a Custom Theme
+
+After deploying a theme image, activate it in Keycloak:
+
+1. Open the Keycloak admin console.
+2. Navigate to **Realm Settings** > **Themes**.
+3. Select your custom theme from the dropdown for the desired theme type (login, account, email, or admin).
+4. Save the changes.
+
+## Migration from v0
+
+In Cozystack v0, branding was configured via a standalone `cozystack-branding` ConfigMap in the `cozy-system` namespace. In v1, this ConfigMap is no longer used. The [migration script]({{% ref "/docs/v1.3/operations/upgrades#step-3-generate-the-platform-package" %}}) automatically converts the old ConfigMap values into the Platform Package `branding` field.
+
+If you previously used the ConfigMap approach, no manual migration is needed — the upgrade process handles it automatically.
diff --git a/content/en/docs/v1.3/operations/faq/_index.md b/content/en/docs/v1.3/operations/faq/_index.md
new file mode 100644
index 00000000..0497dce4
--- /dev/null
+++ b/content/en/docs/v1.3/operations/faq/_index.md
@@ -0,0 +1,128 @@
+---
+title: "Frequently asked questions and How-to guides"
+linkTitle: "FAQ / How-tos"
+description: "Knowledge base with FAQ and advanced configurations"
+weight: 100
+aliases:
+ - /docs/v1.3/faq
+ - /docs/v1.3/guides/faq
+---
+
+{{% alert title="Troubleshooting" %}}
+Troubleshooting advice can be found on our [Troubleshooting Cheatsheet]({{% ref "/docs/v1.3/operations/troubleshooting" %}}).
+{{% /alert %}}
+
+
+## Deploying Cozystack
+
+
+**How to allocate space on system disk for user storage**
+
+See Deploying Cozystack, [How to install Talos on a single-disk machine]({{% ref "/docs/v1.3/install/how-to/single-disk" %}}).
+
+
+
+
+
+**How to Enable KubeSpan**
+
+See Deploying Cozystack, [How to Enable KubeSpan]({{% ref "/docs/v1.3/install/how-to/kubespan" %}}).
+
+
+
+
+
+**How to enable Hugepages**
+
+See Deploying Cozystack, [How to enable Hugepages]({{% ref "/docs/v1.3/install/how-to/hugepages" %}}).
+
+
+
+
+
+**What if my cloud provider does not support MetalLB**
+
+Most cloud providers don't support MetalLB.
+Instead of using it, you can expose the main ingress controller using the external IPs method.
+
+For deploying on Hetzner, follow the specialized [Hetzner installation guide]({{% ref "/docs/v1.3/install/providers/hetzner" %}}).
+For other providers, follow the [Cozystack installation guide, Public IP Setup]({{% ref "/docs/v1.3/install/cozystack#4b-public-ip-setup" %}}).
+
+
+
+
+
+**Public-network Kubernetes deployment**
+
+See Deploying Cozystack, [Deploy with public networks]({{% ref "/docs/v1.3/install/how-to/public-ip" %}}).
+
+
+
+## Operations
+
+
+**How to enable access to the dashboard via the ingress controller**
+
+Update your `ingress` application and enable the `dashboard: true` option in it.
+The dashboard will become available at `https://dashboard.<host>`.
+
+
+
+
+
+**How to configure Cozystack using FluxCD or ArgoCD**
+
+Here you can find a reference repository to learn how to configure Cozystack services using the GitOps approach:
+
+- https://github.com/aenix-io/cozystack-gitops-example
+
+
+
+
+
+**How to generate kubeconfig for tenant users**
+
+Moved to [How to generate kubeconfig for tenant users]({{% ref "/docs/v1.3/operations/faq/generate-kubeconfig" %}}).
+
+
+
+
+
+**How to use ServiceAccount tokens for API access**
+
+See [ServiceAccount Tokens for API Access]({{% ref "/docs/v1.3/operations/faq/serviceaccount-api-access" %}}).
+
+
+
+
+
+**How to Rotate Certificate Authority**
+
+Moved to Cluster Maintenance, [How to Rotate Certificate Authority]({{% ref "/docs/v1.3/operations/cluster/rotate-ca" %}}).
+
+
+
+
+
+**How to clean up etcd state**
+
+Moved to Troubleshooting: [How to clean up etcd state]({{% ref "/docs/v1.3/operations/troubleshooting/etcd#how-to-clean-up-etcd-state" %}}).
+
+
+
+## Bundles
+
+
+**How to overwrite parameters for specific components**
+
+Moved to Cluster configuration, [Components reference]({{% ref "/docs/v1.3/operations/configuration/components#overwriting-component-parameters" %}}).
+
+
+
+
+
+**How to disable some components from a bundle**
+
+Moved to Cluster configuration, [Components reference]({{% ref "/docs/v1.3/operations/configuration/components#enabling-and-disabling-components" %}}).
+
+
diff --git a/content/en/docs/v1.3/operations/faq/generate-kubeconfig.md b/content/en/docs/v1.3/operations/faq/generate-kubeconfig.md
new file mode 100644
index 00000000..426e36e0
--- /dev/null
+++ b/content/en/docs/v1.3/operations/faq/generate-kubeconfig.md
@@ -0,0 +1,38 @@
+---
+title: "How to generate kubeconfig for tenant users"
+linkTitle: "Generate tenant kubeconfig"
+description: "A guide on how to generate a kubeconfig file for tenant users in Cozystack."
+weight: 30
+aliases:
+ - /docs/v1.3/operations/faq/generate-kubeconfig
+---
+
+To generate a `kubeconfig` for tenant users, use the following script.
+As a result, you'll get the `tenant-root.kubeconfig` file, which you can provide to the user.
+
+
+```bash
+SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
+kubectl get secret tenant-root -n tenant-root -o go-template='
+apiVersion: v1
+kind: Config
+clusters:
+- name: tenant-root
+ cluster:
+ server: '"$SERVER"'
+ certificate-authority-data: {{ index .data "ca.crt" }}
+contexts:
+- name: tenant-root
+ context:
+ cluster: tenant-root
+ namespace: {{ index .data "namespace" | base64decode }}
+ user: tenant-root
+current-context: tenant-root
+users:
+- name: tenant-root
+ user:
+ token: {{ index .data "token" | base64decode }}
+' \
+> tenant-root.kubeconfig
+```
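+
+To verify the generated file before handing it over (assuming the tenant user is allowed to list pods in its namespace):
+
+```bash
+kubectl --kubeconfig tenant-root.kubeconfig get pods
+```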
+
diff --git a/content/en/docs/v1.3/operations/faq/serviceaccount-api-access.md b/content/en/docs/v1.3/operations/faq/serviceaccount-api-access.md
new file mode 100644
index 00000000..9d7c29fb
--- /dev/null
+++ b/content/en/docs/v1.3/operations/faq/serviceaccount-api-access.md
@@ -0,0 +1,89 @@
+---
+title: "ServiceAccount Tokens for API Access"
+linkTitle: "ServiceAccount API Access"
+description: "How to retrieve and use ServiceAccount tokens in Cozystack."
+weight: 20
+aliases:
+ - /docs/v1.3/operations/api-access
+---
+
+## Prerequisites
+
+Before you begin, make sure that:
+- A tenant already exists in Cozystack.
+ See [Create a User Tenant]({{% ref "/docs/v1.3/getting-started/create-tenant" %}}) if you haven't created one yet.
+- You have access to the tenant namespace — either via OIDC credentials or an administrative kubeconfig.
+- `kubectl` is installed and configured.
+- (Optional) `jq` is installed.
+
+## Retrieving the ServiceAccount Token
+
+Each tenant in Cozystack has a Secret that contains a ServiceAccount token.
+The Secret has the same name as the tenant and is located in the tenant's namespace.
+
+{{< tabs name="get_token" >}}
+{{% tab name="Dashboard" %}}
+
+1. Log in to the Dashboard as a user with access to the tenant.
+1. Switch context to the target tenant if needed.
+1. On the left sidebar, navigate to the **Administration** → **Info** page and open the **Secrets** tab.
+1. Find the secret named `tenant-<name>` (e.g. `tenant-team1`), where the **Key** is **token**.
+1. Click the eye icon to reveal the **Value** field, then click the revealed data. The text will be copied to the clipboard automatically.
+
+{{% /tab %}}
+
+{{% tab name="kubectl" %}}
+
+Retrieve the token for a tenant named `<name>`:
+
+```bash
+kubectl -n tenant-<name> get tenantsecret tenant-<name> -o json | jq -r '.data.token | @base64d'
+```
+
+To store the token in a variable for subsequent commands:
+
+```bash
+export TOKEN=$(kubectl -n tenant- get tenantsecret tenant- -o json | jq -r '.data.token | @base64d')
+```
+
+{{% /tab %}}
+{{< /tabs >}}
+
+## Using the Token for API Access
+
+Once you have the token, you can [generate a kubeconfig]({{% ref "/docs/v1.3/operations/faq/generate-kubeconfig" %}}) for kubectl access, or use it directly with `curl` as shown below.
+
+{{% alert color="warning" %}}
+**Token Security**
+
+ServiceAccount tokens in Cozystack **do not expire** by default. Handle them with the same care as passwords.
+{{% /alert %}}
+
+### Test the Connection
+
+First, verify your kubectl context points to the correct Cozystack cluster:
+
+```bash
+kubectl config current-context
+kubectl cluster-info
+```
+
+Next, get the API server address:
+
+```bash
+export API_SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
+```
+
+Then, extract the CA certificate from the tenant secret:
+
+```bash
+kubectl -n tenant-<name> get secret tenant-<name> -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
+```
+
+Now, test the connection:
+
+```bash
+curl --cacert ca.crt -H "Authorization: Bearer ${TOKEN}" ${API_SERVER}/api
+```
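+
+You can also use the token with `kubectl` directly, without writing a kubeconfig. A minimal sketch (the tenant namespace is a placeholder):
+
+```bash
+kubectl --server="${API_SERVER}" \
+  --certificate-authority=ca.crt \
+  --token="${TOKEN}" \
+  get pods -n tenant-<name>
+```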
+
+> You can remove `ca.crt` after testing.
diff --git a/content/en/docs/v1.3/operations/multi-location/_index.md b/content/en/docs/v1.3/operations/multi-location/_index.md
new file mode 100644
index 00000000..3680c93d
--- /dev/null
+++ b/content/en/docs/v1.3/operations/multi-location/_index.md
@@ -0,0 +1,15 @@
+---
+title: "Multi-Location Clusters"
+linkTitle: "Multi-Location"
+description: "Extend Cozystack management clusters across multiple locations using Kilo WireGuard mesh, cloud autoscaling, and local cloud controller manager."
+weight: 40
+---
+
+This section covers extending a Cozystack management cluster across multiple physical locations
+(on-premises + cloud, multi-cloud, etc.) using WireGuard mesh networking.
+
+The setup consists of three components:
+
+- [Networking Mesh]({{% ref "networking-mesh" %}}) -- Kilo WireGuard mesh with Cilium IPIP encapsulation
+- [Local CCM]({{% ref "local-ccm" %}}) -- cloud controller manager for node IP detection and lifecycle
+- [Cluster Autoscaling]({{% ref "autoscaling" %}}) -- automatic node provisioning in cloud providers
diff --git a/content/en/docs/v1.3/operations/multi-location/autoscaling/_index.md b/content/en/docs/v1.3/operations/multi-location/autoscaling/_index.md
new file mode 100644
index 00000000..2a03ed0d
--- /dev/null
+++ b/content/en/docs/v1.3/operations/multi-location/autoscaling/_index.md
@@ -0,0 +1,19 @@
+---
+title: "Cluster Autoscaling"
+linkTitle: "Autoscaling"
+description: "Automatic node scaling for Cozystack management clusters using Kubernetes Cluster Autoscaler."
+weight: 20
+---
+
+The `cluster-autoscaler` system package enables automatic node scaling for Cozystack management clusters.
+It monitors pending pods and automatically provisions or removes cloud nodes based on demand.
+
+Before configuring autoscaling, complete the [Networking Mesh]({{% ref "../networking-mesh" %}})
+and [Local CCM]({{% ref "../local-ccm" %}}) setup.
+
+Cozystack provides pre-configured variants for different cloud providers:
+
+- [Hetzner Cloud]({{% ref "hetzner" %}}) -- scale using Hetzner Cloud servers
+- [Azure]({{% ref "azure" %}}) -- scale using Azure Virtual Machine Scale Sets
+
+Each variant is deployed as a separate Cozystack Package with provider-specific configuration.
diff --git a/content/en/docs/v1.3/operations/multi-location/autoscaling/azure.md b/content/en/docs/v1.3/operations/multi-location/autoscaling/azure.md
new file mode 100644
index 00000000..7134d6ce
--- /dev/null
+++ b/content/en/docs/v1.3/operations/multi-location/autoscaling/azure.md
@@ -0,0 +1,474 @@
+---
+title: "Cluster Autoscaler for Azure"
+linkTitle: "Azure"
+description: "Configure automatic node scaling in Azure with Talos Linux and VMSS."
+weight: 20
+---
+
+This guide explains how to configure cluster-autoscaler for automatic node scaling in Azure with Talos Linux.
+
+## Prerequisites
+
+- Azure subscription with Contributor Service Principal
+- `az` CLI installed
+- Existing Talos Kubernetes cluster
+- [Networking Mesh]({{% ref "../networking-mesh" %}}) and [Local CCM]({{% ref "../local-ccm" %}}) configured
+
+## Step 1: Create Azure Infrastructure
+
+### 1.1 Login with Service Principal
+
+```bash
+az login --service-principal \
+  --username "<app-id>" \
+  --password "<client-secret>" \
+  --tenant "<tenant-id>"
+```
+
+### 1.2 Create Resource Group
+
+```bash
+az group create \
+  --name <resource-group> \
+  --location <location>
+```
+
+### 1.3 Create VNet and Subnet
+
+```bash
+az network vnet create \
+  --resource-group <resource-group> \
+  --name cozystack-vnet \
+  --address-prefix 10.2.0.0/16 \
+  --subnet-name workers \
+  --subnet-prefix 10.2.0.0/24 \
+  --location <location>
+```
+
+### 1.4 Create Network Security Group
+
+```bash
+az network nsg create \
+  --resource-group <resource-group> \
+  --name cozystack-nsg \
+  --location <location>
+
+# Allow WireGuard
+az network nsg rule create \
+  --resource-group <resource-group> \
+ --nsg-name cozystack-nsg \
+ --name AllowWireGuard \
+ --priority 100 \
+ --direction Inbound \
+ --access Allow \
+ --protocol Udp \
+ --destination-port-ranges 51820
+
+# Allow Talos API
+az network nsg rule create \
+  --resource-group <resource-group> \
+ --nsg-name cozystack-nsg \
+ --name AllowTalosAPI \
+ --priority 110 \
+ --direction Inbound \
+ --access Allow \
+ --protocol Tcp \
+ --destination-port-ranges 50000
+
+# Associate NSG with subnet
+az network vnet subnet update \
+  --resource-group <resource-group> \
+ --vnet-name cozystack-vnet \
+ --name workers \
+ --network-security-group cozystack-nsg
+```
+
+## Step 2: Create Talos Image
+
+### 2.1 Generate Schematic ID
+
+Create a schematic at [factory.talos.dev](https://factory.talos.dev) with required extensions:
+
+```bash
+curl -s -X POST https://factory.talos.dev/schematics \
+ -H "Content-Type: application/json" \
+ -d '{
+ "customization": {
+ "systemExtensions": {
+ "officialExtensions": [
+ "siderolabs/amd-ucode",
+ "siderolabs/amdgpu-firmware",
+ "siderolabs/bnx2-bnx2x",
+ "siderolabs/drbd",
+ "siderolabs/i915-ucode",
+ "siderolabs/intel-ice-firmware",
+ "siderolabs/intel-ucode",
+ "siderolabs/qlogic-firmware",
+ "siderolabs/zfs"
+ ]
+ }
+ }
+ }'
+```
+
+Save the returned `id` as `SCHEMATIC_ID`.
+
+### 2.2 Create Managed Image from VHD
+
+```bash
+# Download Talos Azure image
+curl -L -o azure-amd64.raw.xz \
+  "https://factory.talos.dev/image/${SCHEMATIC_ID}/<talos-version>/azure-amd64.raw.xz"
+
+# Decompress
+xz -d azure-amd64.raw.xz
+
+# Convert to VHD
+qemu-img convert -f raw -o subformat=fixed,force_size -O vpc \
+ azure-amd64.raw azure-amd64.vhd
+
+# Get VHD size
+VHD_SIZE=$(stat -f%z azure-amd64.vhd) # macOS
+# VHD_SIZE=$(stat -c%s azure-amd64.vhd) # Linux
+
+# Create managed disk for upload
+az disk create \
+  --resource-group <resource-group> \
+  --name talos-<version> \
+  --location <location> \
+ --upload-type Upload \
+ --upload-size-bytes $VHD_SIZE \
+ --sku Standard_LRS \
+ --os-type Linux \
+ --hyper-v-generation V2
+
+# Get SAS URL for upload
+SAS_URL=$(az disk grant-access \
+  --resource-group <resource-group> \
+  --name talos-<version> \
+ --access-level Write \
+ --duration-in-seconds 3600 \
+ --query accessSAS --output tsv)
+
+# Upload VHD
+azcopy copy azure-amd64.vhd "$SAS_URL" --blob-type PageBlob
+
+# Revoke access
+az disk revoke-access \
+  --resource-group <resource-group> \
+  --name talos-<version>
+
+# Create managed image from disk
+az image create \
+  --resource-group <resource-group> \
+  --name talos-<version> \
+  --location <location> \
+ --os-type Linux \
+ --hyper-v-generation V2 \
+  --source $(az disk show --resource-group <resource-group> \
+    --name talos-<version> --query id --output tsv)
+```
+
+## Step 3: Create Talos Machine Config for Azure
+
+From your cluster repository, generate a worker config file:
+
+```bash
+talm template -t templates/worker.yaml --offline --full > nodes/azure.yaml
+```
+
+Then edit `nodes/azure.yaml` for Azure workers:
+
+1. Add Azure location metadata (see [Networking Mesh]({{% ref "../networking-mesh" %}})):
+ ```yaml
+ machine:
+ nodeAnnotations:
+ kilo.squat.ai/location: azure
+ kilo.squat.ai/persistent-keepalive: "20"
+ nodeLabels:
+ topology.kubernetes.io/zone: azure
+ ```
+2. Set public Kubernetes API endpoint:
+   Change `cluster.controlPlane.endpoint` to the **public** API server address (for example `https://<public-endpoint>:6443`). You can find this address in your kubeconfig or publish it via ingress.
+3. Remove discovered installer/network sections:
+ Delete `machine.install` and `machine.network` sections from this file.
+4. Set external cloud provider for kubelet (see [Local CCM]({{% ref "../local-ccm" %}})):
+ ```yaml
+ machine:
+ kubelet:
+ extraArgs:
+ cloud-provider: external
+ ```
+5. Fix node IP subnet detection:
+ Set `machine.kubelet.nodeIP.validSubnets` to the actual Azure subnet where autoscaled nodes run (for example `192.168.102.0/23`).
+6. (Optional) Add registry mirrors to avoid Docker Hub rate limiting:
+ ```yaml
+ machine:
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - https://mirror.gcr.io
+ ```
+
+Result should include at least:
+
+```yaml
+machine:
+ nodeAnnotations:
+ kilo.squat.ai/location: azure
+ kilo.squat.ai/persistent-keepalive: "20"
+ nodeLabels:
+ topology.kubernetes.io/zone: azure
+ kubelet:
+ nodeIP:
+ validSubnets:
+ - 192.168.102.0/23 # replace with your Azure workers subnet
+ extraArgs:
+ cloud-provider: external
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - https://mirror.gcr.io
+cluster:
+ controlPlane:
+    endpoint: https://<public-endpoint>:6443
+```
+
+All other settings (cluster tokens, CA, extensions, etc.) remain the same as the generated template.
+
+## Step 4: Create VMSS (Virtual Machine Scale Set)
+
+```bash
+IMAGE_ID=$(az image show \
+  --resource-group <resource-group> \
+  --name talos-<version> \
+ --query id --output tsv)
+
+az vmss create \
+  --resource-group <resource-group> \
+  --name workers \
+  --location <location> \
+ --orchestration-mode Uniform \
+ --image "$IMAGE_ID" \
+ --vm-sku Standard_D2s_v3 \
+ --instance-count 0 \
+ --vnet-name cozystack-vnet \
+ --subnet workers \
+ --public-ip-per-vm \
+ --custom-data nodes/azure.yaml \
+ --security-type Standard \
+ --admin-username talos \
+ --authentication-type ssh \
+ --generate-ssh-keys \
+ --upgrade-policy-mode Manual
+
+# Enable IP forwarding on VMSS NICs (required for Kilo leader to forward traffic)
+az vmss update \
+  --resource-group <resource-group> \
+ --name workers \
+ --set virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableIPForwarding=true
+```
+
+{{% alert title="Important" color="warning" %}}
+- Must use `--orchestration-mode Uniform` (cluster-autoscaler requires Uniform mode)
+- Must use `--public-ip-per-vm` for WireGuard connectivity
+- IP forwarding must be enabled on VMSS NICs so the Kilo leader can forward traffic between the WireGuard mesh and non-leader nodes in the same subnet
+- Check VM quota in your region: `az vm list-usage --location <location>`
+- `--custom-data` passes the Talos machine config to new instances
+{{% /alert %}}
+
+## Step 5: Deploy Cluster Autoscaler
+
+Create the Package resource:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cluster-autoscaler-azure
+spec:
+ variant: default
+ components:
+ cluster-autoscaler-azure:
+ values:
+ cluster-autoscaler:
+          azureClientID: "<client-id>"
+          azureClientSecret: "<client-secret>"
+          azureTenantID: "<tenant-id>"
+          azureSubscriptionID: "<subscription-id>"
+          azureResourceGroup: "<resource-group>"
+ azureVMType: "vmss"
+ autoscalingGroups:
+ - name: workers
+ minSize: 0
+ maxSize: 10
+```
+
+Apply:
+```bash
+kubectl apply -f package.yaml
+```
+
+## Step 6: Kilo WireGuard Connectivity
+
+Azure nodes are behind NAT, so their initial WireGuard endpoint will be a private IP. Kilo handles this automatically through WireGuard's built-in NAT traversal when `persistent-keepalive` is configured (already included in the machine config from Step 3).
+
+The flow works as follows:
+1. The Azure node initiates a WireGuard handshake to the on-premises leader (which has a public IP)
+2. `persistent-keepalive` sends periodic keepalive packets, maintaining the NAT mapping
+3. The on-premises Kilo leader discovers the real public endpoint of the Azure node through WireGuard
+4. Kilo stores the discovered endpoint and uses it for subsequent connections
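+
+To check that the endpoint has been discovered, inspect the Kilo annotations on the Azure node. This is a rough check; the exact annotation names (for example `kilo.squat.ai/endpoint` or `kilo.squat.ai/discovered-endpoints`) depend on your Kilo version:
+
+```bash
+# Show the Kilo annotations on the node, including the discovered WireGuard endpoint
+kubectl describe node <azure-node-name> | grep kilo.squat.ai
+```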
+
+{{% alert title="Note" color="info" %}}
+No manual `force-endpoint` annotation is needed. The `kilo.squat.ai/persistent-keepalive: "20"` annotation in the machine config is sufficient for Kilo to discover NAT endpoints automatically. Without this annotation, Kilo's NAT traversal mechanism is disabled and the tunnel will not stabilize.
+{{% /alert %}}
+
+## Testing
+
+### Manual scale test
+
+```bash
+# Scale up
+az vmss scale --resource-group <resource-group> --name workers --new-capacity 1
+
+# Check node joined
+kubectl get nodes -o wide
+
+# Check WireGuard tunnel
+kubectl logs -n cozy-kilo <kilo-pod-name>
+
+# Scale down
+az vmss scale --resource-group <resource-group> --name workers --new-capacity 0
+```
+
+### Autoscaler test
+
+Deploy a workload to trigger autoscaling:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: test-azure-autoscale
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: test-azure
+ template:
+ metadata:
+ labels:
+ app: test-azure
+ spec:
+ nodeSelector:
+ topology.kubernetes.io/zone: azure
+ containers:
+ - name: pause
+ image: registry.k8s.io/pause:3.9
+ resources:
+ requests:
+ cpu: "500m"
+ memory: "512Mi"
+```
+
+## Troubleshooting
+
+### Connecting to remote workers for diagnostics
+
+You can debug Azure worker nodes using the **Serial console** in the Azure portal:
+navigate to your VMSS instance → **Support + troubleshooting** → **Serial console**.
+This gives you direct access to the node's console output without requiring network connectivity.
+
+Alternatively, use `talm dashboard` to connect through the control plane:
+
+```bash
+talm dashboard -f nodes/<control-plane-node>.yaml -n <worker-internal-ip>
+```
+
+Where `nodes/<control-plane-node>.yaml` is your control plane node config and `<worker-internal-ip>` is
+the Kubernetes internal IP of the remote worker.
+
+### Node stuck in maintenance mode
+
+If you see the following messages in the serial console:
+
+```
+[talos] talosctl apply-config --insecure --nodes 10.2.0.5 --file
+[talos] or apply configuration using talosctl interactive installer:
+[talos] talosctl apply-config --insecure --nodes 10.2.0.5 --mode=interactive
+```
+
+This means the machine config was not picked up or is invalid. Common causes:
+
+- **Unsupported Kubernetes version**: the `kubelet` image version in the config is not compatible with the current Talos version
+- **Malformed config**: YAML syntax errors or invalid field values
+- **customData not applied**: the VMSS instance was created before the config was updated
+
+To debug, apply the config manually via Talos API (port 50000 must be open in the NSG):
+
+```bash
+talosctl apply-config --insecure --nodes <node-ip> --file nodes/azure.yaml
+```
+
+If the config is rejected, the error message will indicate what needs to be fixed.
+
+To update the machine config for new VMSS instances:
+
+```bash
+az vmss update \
+  --resource-group <resource-group> \
+ --name workers \
+ --custom-data @nodes/azure.yaml
+```
+
+After updating, delete existing instances so they are recreated with the new config:
+
+```bash
+az vmss delete-instances \
+  --resource-group <resource-group> \
+ --name workers \
+ --instance-ids "*"
+```
+
+{{% alert title="Warning" color="warning" %}}
+Azure does not provide a way to read back the `customData` from a VMSS — you can only set it. Always keep your machine config file (`nodes/azure.yaml`) in version control as the single source of truth.
+{{% /alert %}}
+
+### Node doesn't join cluster
+- Check that the Talos machine config control plane endpoint is reachable from Azure
+- Verify NSG rules allow outbound traffic to port 6443
+- Verify NSG rules allow inbound traffic to port 50000 (Talos API) for debugging
+- Check VMSS instance provisioning state: `az vmss list-instances --resource-group <resource-group> --name workers`
+
+### Non-leader nodes unreachable (kubectl logs/exec timeout)
+
+If `kubectl logs` or `kubectl exec` works for the Kilo leader node but times out for all other nodes in the same Azure subnet:
+
+1. **Verify IP forwarding** is enabled on the VMSS:
+ ```bash
+   az vmss show --resource-group <resource-group> --name workers \
+ --query "virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableIPForwarding"
+ ```
+ If `false`, enable it and apply to existing instances:
+ ```bash
+   az vmss update --resource-group <resource-group> --name workers \
+ --set virtualMachineProfile.networkProfile.networkInterfaceConfigurations[0].enableIPForwarding=true
+   az vmss update-instances --resource-group <resource-group> --name workers --instance-ids "*"
+ ```
+
+2. **Test the return path** from the leader node:
+ ```bash
+ # This should work (same subnet, direct)
+   kubectl exec -n cozy-kilo <kilo-leader-pod> -- ping -c 2 <non-leader-node-ip>
+ ```
+
+### VM quota errors
+- Check quota: `az vm list-usage --location <location>`
+- Request quota increase via Azure portal
+- Try a different VM family that has available quota
+
+### SkuNotAvailable errors
+- Some VM sizes may have capacity restrictions in certain regions
+- Try a different VM size: `az vm list-skus --location <location> --size <vm-size>`
diff --git a/content/en/docs/v1.3/operations/multi-location/autoscaling/hetzner.md b/content/en/docs/v1.3/operations/multi-location/autoscaling/hetzner.md
new file mode 100644
index 00000000..4f28ec1b
--- /dev/null
+++ b/content/en/docs/v1.3/operations/multi-location/autoscaling/hetzner.md
@@ -0,0 +1,420 @@
+---
+title: "Cluster Autoscaler for Hetzner Cloud"
+linkTitle: "Hetzner"
+description: "Configure automatic node scaling in Hetzner Cloud with Talos Linux."
+weight: 10
+---
+
+This guide explains how to configure cluster-autoscaler for automatic node scaling in Hetzner Cloud with Talos Linux.
+
+## Prerequisites
+
+- Hetzner Cloud account with API token
+- `hcloud` CLI installed
+- Existing Talos Kubernetes cluster
+- [Networking Mesh]({{% ref "../networking-mesh" %}}) and [Local CCM]({{% ref "../local-ccm" %}}) configured
+
+## Step 1: Create Talos Image in Hetzner Cloud
+
+Hetzner doesn't support direct image uploads, so we need to create a snapshot via a temporary server.
+
+### 1.1 Generate Schematic ID
+
+Create a schematic at [factory.talos.dev](https://factory.talos.dev) with required extensions:
+
+```bash
+curl -s -X POST https://factory.talos.dev/schematics \
+ -H "Content-Type: application/json" \
+ -d '{
+ "customization": {
+ "systemExtensions": {
+ "officialExtensions": [
+ "siderolabs/qemu-guest-agent",
+ "siderolabs/amd-ucode",
+ "siderolabs/amdgpu-firmware",
+ "siderolabs/bnx2-bnx2x",
+ "siderolabs/drbd",
+ "siderolabs/i915-ucode",
+ "siderolabs/intel-ice-firmware",
+ "siderolabs/intel-ucode",
+ "siderolabs/qlogic-firmware",
+ "siderolabs/zfs"
+ ]
+ }
+ }
+ }'
+```
+
+Save the returned `id` as `SCHEMATIC_ID`.
+
+{{% alert title="Note" color="info" %}}
+`siderolabs/qemu-guest-agent` is required for Hetzner Cloud. Add other extensions
+(zfs, drbd, etc.) as needed for your workloads.
+{{% /alert %}}
+
+### 1.2 Configure hcloud CLI
+
+```bash
+export HCLOUD_TOKEN="<hetzner-api-token>"
+```
+
+### 1.3 Create temporary server in rescue mode
+
+```bash
+# Create server (without starting)
+hcloud server create \
+ --name talos-image-builder \
+ --type cpx22 \
+ --image ubuntu-24.04 \
+ --location fsn1 \
+  --ssh-key <ssh-key-name> \
+ --start-after-create=false
+
+# Enable rescue mode and start
+hcloud server enable-rescue --type linux64 --ssh-key <ssh-key-name> talos-image-builder
+hcloud server poweron talos-image-builder
+```
+
+### 1.4 Write Talos image to disk
+
+```bash
+# Get server IP
+SERVER_IP=$(hcloud server ip talos-image-builder)
+
+# SSH into rescue mode and write image
+ssh root@$SERVER_IP
+
+# Inside rescue mode:
+wget -O- "https://factory.talos.dev/image/${SCHEMATIC_ID}/<talos-version>/hcloud-amd64.raw.xz" \
+ | xz -d \
+ | dd of=/dev/sda bs=4M status=progress
+sync
+exit
+```
+
+### 1.5 Create snapshot and cleanup
+
+```bash
+# Power off and create snapshot
+hcloud server poweroff talos-image-builder
+hcloud server create-image --type snapshot --description "Talos <version>" talos-image-builder
+
+# Get snapshot ID (save this for later)
+hcloud image list --type snapshot
+
+# Delete temporary server
+hcloud server delete talos-image-builder
+```
+
+## Step 2: Create Hetzner vSwitch (Optional but Recommended)
+
+Create a private network for communication between nodes:
+
+```bash
+# Create network
+hcloud network create --name cozystack-vswitch --ip-range 10.100.0.0/16
+
+# Add subnet for your region (eu-central covers FSN1, NBG1)
+hcloud network add-subnet cozystack-vswitch \
+ --type cloud \
+ --network-zone eu-central \
+ --ip-range 10.100.0.0/24
+```
+
+## Step 3: Create Talos Machine Config
+
+From your cluster repository, generate a worker config file:
+
+```bash
+talm template -t templates/worker.yaml --offline --full > nodes/hetzner.yaml
+```
+
+Then edit `nodes/hetzner.yaml` for Hetzner workers:
+
+1. Add Hetzner location metadata (see [Networking Mesh]({{% ref "../networking-mesh" %}})):
+ ```yaml
+ machine:
+ nodeAnnotations:
+ kilo.squat.ai/location: hetzner-cloud
+ kilo.squat.ai/persistent-keepalive: "20"
+ nodeLabels:
+ topology.kubernetes.io/zone: hetzner-cloud
+ ```
+2. Set public Kubernetes API endpoint:
+   Change `cluster.controlPlane.endpoint` to the **public** API server address (for example `https://<public-endpoint>:6443`). You can find this address in your kubeconfig or publish it via ingress.
+3. Remove discovered installer/network sections:
+ Delete `machine.install` and `machine.network` sections from this file.
+4. Set external cloud provider for kubelet (see [Local CCM]({{% ref "../local-ccm" %}})):
+ ```yaml
+ machine:
+ kubelet:
+ extraArgs:
+ cloud-provider: external
+ ```
+5. Fix node IP subnet detection:
+ Set `machine.kubelet.nodeIP.validSubnets` to your vSwitch subnet (for example `10.100.0.0/24`).
+6. (Optional) Add registry mirrors to avoid Docker Hub rate limiting:
+ ```yaml
+ machine:
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - https://mirror.gcr.io
+ ```
+
+Result should include at least:
+
+```yaml
+machine:
+ nodeAnnotations:
+ kilo.squat.ai/location: hetzner-cloud
+ kilo.squat.ai/persistent-keepalive: "20"
+ nodeLabels:
+ topology.kubernetes.io/zone: hetzner-cloud
+ kubelet:
+ nodeIP:
+ validSubnets:
+ - 10.100.0.0/24 # replace with your vSwitch subnet
+ extraArgs:
+ cloud-provider: external
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - https://mirror.gcr.io
+cluster:
+ controlPlane:
+    endpoint: https://<public-endpoint>:6443
+```
+
+All other settings (cluster tokens, CA, extensions, etc.) remain the same as the generated template.
+
+## Step 4: Create Kubernetes Secrets
+
+### 4.1 Create secret with Hetzner API token
+
+```bash
+kubectl -n cozy-cluster-autoscaler-hetzner create secret generic hetzner-credentials \
+  --from-literal=token=<hetzner-api-token>
+```
+
+### 4.2 Create secret with Talos machine config
+
+The machine config must be base64-encoded:
+
+```bash
+# Encode the machine config from Step 3 as single-line base64 (GNU coreutils; on macOS use `base64 -i nodes/hetzner.yaml -o hetzner.b64`)
+base64 -w 0 nodes/hetzner.yaml > hetzner.b64
+
+# Create secret
+kubectl -n cozy-cluster-autoscaler-hetzner create secret generic talos-config \
+  --from-file=cloud-init=hetzner.b64
+```
+
+## Step 5: Deploy Cluster Autoscaler
+
+Create the Package resource:
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: Package
+metadata:
+ name: cozystack.cluster-autoscaler-hetzner
+spec:
+ variant: default
+ components:
+ cluster-autoscaler-hetzner:
+ values:
+ cluster-autoscaler:
+ autoscalingGroups:
+ - name: workers-fsn1
+ minSize: 0
+ maxSize: 10
+ instanceType: cpx22
+ region: FSN1
+ extraEnv:
+            HCLOUD_IMAGE: "<snapshot-id>"
+            HCLOUD_SSH_KEY: "<ssh-key-name>"
+ HCLOUD_NETWORK: "cozystack-vswitch"
+ HCLOUD_PUBLIC_IPV4: "true"
+ HCLOUD_PUBLIC_IPV6: "false"
+ extraEnvSecrets:
+ HCLOUD_TOKEN:
+ name: hetzner-credentials
+ key: token
+ HCLOUD_CLOUD_INIT:
+ name: talos-config
+ key: cloud-init
+```
+
+Apply:
+```bash
+kubectl apply -f package.yaml
+```
+
+## Step 6: Test Autoscaling
+
+Create a deployment with pod anti-affinity to force scale-up:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: test-autoscaler
+spec:
+ replicas: 5
+ selector:
+ matchLabels:
+ app: test-autoscaler
+ template:
+ metadata:
+ labels:
+ app: test-autoscaler
+ spec:
+ affinity:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - labelSelector:
+ matchLabels:
+ app: test-autoscaler
+ topologyKey: kubernetes.io/hostname
+ containers:
+ - name: nginx
+ image: nginx
+ resources:
+ requests:
+ cpu: "100m"
+ memory: "128Mi"
+```
+
+If you have fewer nodes than replicas, the autoscaler will create new Hetzner servers.
+
+## Step 7: Verify
+
+```bash
+# Check autoscaler logs
+kubectl -n cozy-cluster-autoscaler-hetzner logs \
+ deployment/cluster-autoscaler-hetzner-hetzner-cluster-autoscaler -f
+
+# Check nodes
+kubectl get nodes -o wide
+
+# Verify node labels and internal IP
+kubectl get node <node-name> --show-labels
+```
+
+Expected result for autoscaled nodes:
+- Internal IP from vSwitch range (e.g., 10.100.0.2)
+- Label `kilo.squat.ai/location=hetzner-cloud`
+
+## Configuration Reference
+
+### Environment Variables
+
+| Variable | Description | Required |
+|----------|-------------|----------|
+| `HCLOUD_TOKEN` | Hetzner API token | Yes |
+| `HCLOUD_IMAGE` | Talos snapshot ID | Yes |
+| `HCLOUD_CLOUD_INIT` | Base64-encoded machine config | Yes |
+| `HCLOUD_NETWORK` | vSwitch network name/ID | No |
+| `HCLOUD_SSH_KEY` | SSH key name/ID | No |
+| `HCLOUD_FIREWALL` | Firewall name/ID | No |
+| `HCLOUD_PUBLIC_IPV4` | Assign public IPv4 | No (default: true) |
+| `HCLOUD_PUBLIC_IPV6` | Assign public IPv6 | No (default: false) |
+
+### Hetzner Server Types
+
+| Type | vCPU | RAM | Good for |
+|------|------|-----|----------|
+| cpx22 | 2 | 4GB | Small workloads |
+| cpx32 | 4 | 8GB | General purpose |
+| cpx42 | 8 | 16GB | Medium workloads |
+| cpx52 | 16 | 32GB | Large workloads |
+| ccx13 | 2 dedicated | 8GB | CPU-intensive |
+| ccx23 | 4 dedicated | 16GB | CPU-intensive |
+| ccx33 | 8 dedicated | 32GB | CPU-intensive |
+| cax11 | 2 ARM | 4GB | ARM workloads |
+| cax21 | 4 ARM | 8GB | ARM workloads |
+
+{{% alert title="Note" color="info" %}}
+Some older server types (cpx11, cpx21, etc.) may be unavailable in certain regions.
+{{% /alert %}}
+
+### Hetzner Regions
+
+| Code | Location |
+|------|----------|
+| FSN1 | Falkenstein, Germany |
+| NBG1 | Nuremberg, Germany |
+| HEL1 | Helsinki, Finland |
+| ASH | Ashburn, USA |
+| HIL | Hillsboro, USA |
+
+## Troubleshooting
+
+### Connecting to remote workers for diagnostics
+
+Talos does not allow opening a dashboard directly to worker nodes. Use `talm dashboard`
+to connect through the control plane:
+
+```bash
+talm dashboard -f nodes/<control-plane-node>.yaml -n <worker-internal-ip>
+```
+
+Where `nodes/<control-plane-node>.yaml` is your control plane node config and `<worker-internal-ip>` is
+the Kubernetes internal IP of the remote worker.
+
+### Nodes not joining cluster
+
+1. Check VNC console via Hetzner Cloud Console or:
+ ```bash
+   hcloud server request-console <server-name>
+ ```
+2. Common errors:
+ - **"unknown keys found during decoding"**: Check Talos config format. `nodeLabels` goes under `machine`, `nodeIP` goes under `machine.kubelet`
+ - **"kubelet image is not valid"**: Kubernetes version mismatch. Use kubelet version compatible with your Talos version
+ - **"failed to load config"**: Machine config syntax error
+
+### Nodes have wrong Internal IP
+
+Ensure `machine.kubelet.nodeIP.validSubnets` is set to your vSwitch subnet:
+```yaml
+machine:
+ kubelet:
+ nodeIP:
+ validSubnets:
+ - 10.100.0.0/24
+```
+
+### Scale-up not triggered
+
+1. Check autoscaler logs for errors
+2. Verify RBAC permissions (leases access required)
+3. Check if pods are actually pending:
+ ```bash
+ kubectl get pods --field-selector=status.phase=Pending
+ ```
+
+### Registry rate limiting (403 errors)
+
+Add registry mirrors to Talos config:
+```yaml
+machine:
+ registries:
+ mirrors:
+ docker.io:
+ endpoints:
+ - https://mirror.gcr.io
+ registry.k8s.io:
+ endpoints:
+ - https://registry.k8s.io
+```
+
+### Scale-down not working
+
+The autoscaler caches node information for up to 30 minutes. Wait or restart autoscaler:
+```bash
+kubectl -n cozy-cluster-autoscaler-hetzner rollout restart \
+ deployment cluster-autoscaler-hetzner-hetzner-cluster-autoscaler
+```
diff --git a/content/en/docs/v1.3/operations/multi-location/local-ccm.md b/content/en/docs/v1.3/operations/multi-location/local-ccm.md
new file mode 100644
index 00000000..e633ee6b
--- /dev/null
+++ b/content/en/docs/v1.3/operations/multi-location/local-ccm.md
@@ -0,0 +1,39 @@
+---
+title: "Local Cloud Controller Manager"
+linkTitle: "Local CCM"
+description: "Node IP detection and lifecycle management for multi-location clusters."
+weight: 15
+---
+
+The `local-ccm` package provides a lightweight cloud controller manager for self-managed clusters.
+It handles node IP detection and node lifecycle without requiring an external cloud provider.
+
+## What it does
+
+- **External IP detection**: Detects each node's external IP via `ip route get` (default target: `8.8.8.8`)
+- **Node initialization**: Removes the `node.cloudprovider.kubernetes.io/uninitialized` taint so pods can be scheduled
+- **Node lifecycle controller** (optional): Monitors NotReady nodes via ICMP ping and removes them after a configurable timeout
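+
+For reference, the detection is roughly equivalent to reading the source address the kernel would pick for the probe target, as in this sketch:
+
+```bash
+# Print the source address of the route towards the probe target (8.8.8.8 by default)
+ip route get 8.8.8.8 | awk '{for (i = 1; i <= NF; i++) if ($i == "src") print $(i+1)}'
+```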
+
+## Install
+
+```bash
+cozypkg add cozystack.local-ccm
+```
+
+## Talos machine config
+
+All nodes in the cluster (including control plane) must have `cloud-provider: external` set
+so that kubelet defers node initialization to the cloud controller manager:
+
+```yaml
+machine:
+ kubelet:
+ extraArgs:
+ cloud-provider: external
+```
+
+{{% alert title="Important" color="warning" %}}
+The `cloud-provider: external` setting must be present on **all** nodes in the cluster,
+including control plane nodes. Without it, the cluster-autoscaler cannot match Kubernetes
+nodes to cloud provider instances (e.g. Azure VMSS).
+{{% /alert %}}
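+
+To check that the CCM has picked up a node, verify that the `node.cloudprovider.kubernetes.io/uninitialized` taint has been removed, for example:
+
+```bash
+# An initialized node should no longer carry the uninitialized taint
+kubectl get node <node-name> -o jsonpath='{.spec.taints}'
+```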
diff --git a/content/en/docs/v1.3/operations/multi-location/networking-mesh.md b/content/en/docs/v1.3/operations/multi-location/networking-mesh.md
new file mode 100644
index 00000000..94b68fdb
--- /dev/null
+++ b/content/en/docs/v1.3/operations/multi-location/networking-mesh.md
@@ -0,0 +1,81 @@
+---
+title: "Networking Mesh"
+linkTitle: "Networking Mesh"
+description: "Configure Kilo WireGuard mesh with Cilium for multi-location cluster connectivity."
+weight: 10
+---
+
+Kilo creates a WireGuard mesh between cluster locations. When running with Cilium, it uses
+IPIP encapsulation routed through Cilium's VxLAN overlay so that traffic between locations
+works even when the cloud network blocks raw IPIP (protocol 4) packets.
+
+## Select the cilium-kilo networking variant
+
+During platform setup, select the **cilium-kilo** networking variant. This deploys both Cilium
+and Kilo as an integrated stack with the required configuration.
+
+## How it works
+
+1. Kilo runs in `--local=false` mode -- it does not manage routes within a location (Cilium handles that)
+2. Kilo creates a WireGuard tunnel (`kilo0`) between location leaders
+3. Non-leader nodes in each location reach remote locations through IPIP encapsulation to their location leader, routed via Cilium's VxLAN overlay
+4. The leader decapsulates IPIP and forwards traffic through the WireGuard tunnel
+5. Cilium's `enable-ipip-termination` option creates the `cilium_tunl` interface (kernel's `tunl0` renamed) that Kilo uses for IPIP TX/RX -- without it, the kernel detects TX recursion on the tunnel device
+
+## Talos machine config for cloud nodes
+
+Cloud worker nodes must include Kilo annotations in their Talos machine config:
+
+```yaml
+machine:
+ nodeAnnotations:
+    kilo.squat.ai/location: <location-name>
+ kilo.squat.ai/persistent-keepalive: "20"
+ nodeLabels:
+    topology.kubernetes.io/zone: <location-name>
+```
+
+{{% alert title="Note" color="info" %}}
+Kilo reads `kilo.squat.ai/location` from **node annotations**, not labels. The
+`persistent-keepalive` annotation is critical for cloud nodes behind NAT -- it enables
+WireGuard NAT traversal, allowing Kilo to discover the real public endpoint automatically.
+{{% /alert %}}
+
+## Allowed location IPs
+
+By default, Kilo only routes pod CIDRs and individual node internal IPs through the WireGuard mesh. If nodes in a
+location use a private subnet that other locations need to reach (e.g. for kubelet communication
+or NodePort access), annotate the nodes **in that location** with `kilo.squat.ai/allowed-location-ips`:
+
+```bash
+# On all on-premise nodes (using a label selector) — expose the on-premise subnet to cloud nodes
+kubectl annotate nodes -l topology.kubernetes.io/zone=on-prem kilo.squat.ai/allowed-location-ips=192.168.100.0/24
+```
+
+This tells Kilo to include the specified CIDRs in the WireGuard allowed IPs for that location,
+making those subnets routable through the tunnel from all other locations.
+
+{{% alert title="Warning" color="warning" %}}
+Set this annotation on nodes **that own the subnet you want to expose** (i.e. nodes in the
+location where that network exists), **not** on remote nodes that want to reach it. If you
+set it on the wrong location, Kilo will create a route that sends traffic for that CIDR
+through the WireGuard tunnel on all other nodes -- including nodes that are directly connected
+to that subnet via L2. This breaks local connectivity between co-located nodes.
+
+For example, if your cloud nodes use `10.2.0.0/24`, add the annotation to the **cloud** nodes.
+Do **not** add the on-premise subnet (e.g. `192.168.100.0/23`) to cloud nodes -- this would
+hijack all local traffic between on-premise nodes through the WireGuard tunnel.
+{{% /alert %}}
+
+## Troubleshooting
+
+### WireGuard tunnel not established
+- Verify the node has `kilo.squat.ai/persistent-keepalive: "20"` annotation
+- Verify the node has `kilo.squat.ai/location` annotation (not just as a label)
+- Check that the cloud firewall allows inbound UDP 51820
+- Inspect Kilo logs: `kubectl logs -n cozy-kilo <kilo-pod-name>`
+- Repeating "WireGuard configurations are different" messages every 30 seconds indicate a missing `persistent-keepalive` annotation
+
+### Non-leader nodes unreachable (kubectl logs/exec timeout)
+- Verify IP forwarding is enabled on the cloud network interfaces (required for the Kilo leader to forward traffic)
+- Check kilo pod logs for `cilium_tunl interface not found` errors -- this means Cilium is not running with `enable-ipip-termination=true` (the cilium-kilo variant configures this automatically)
diff --git a/content/en/docs/v1.3/operations/oidc/_index.md b/content/en/docs/v1.3/operations/oidc/_index.md
new file mode 100644
index 00000000..f19afa3c
--- /dev/null
+++ b/content/en/docs/v1.3/operations/oidc/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Using OpenID Connect with Cozystack"
+linkTitle: "OpenID Connect"
+description: "OIDC in Cozystack"
+weight: 36
+aliases:
+ - /docs/v1.3/oidc
+---
diff --git a/content/en/docs/v1.3/operations/oidc/enable_oidc.md b/content/en/docs/v1.3/operations/oidc/enable_oidc.md
new file mode 100644
index 00000000..f2e9c8ab
--- /dev/null
+++ b/content/en/docs/v1.3/operations/oidc/enable_oidc.md
@@ -0,0 +1,160 @@
+---
+title: "Enable OIDC Server"
+linkTitle: "OIDC Server"
+description: "How to enable OIDC Server"
+weight: 36
+aliases:
+ - /docs/v1.3/oidc/enable_oidc
+---
+
+## Prerequisites
+
+1. **OIDC Configuration**
+ Your API server must be configured to use OIDC. If you are using Talos Linux, your machine configuration should include the following parameters:
+
+ ```yaml
+ cluster:
+ apiServer:
+ extraArgs:
+ oidc-issuer-url: "https://keycloak.example.org/realms/cozy"
+ oidc-client-id: "kubernetes"
+ oidc-username-claim: "preferred_username"
+ oidc-groups-claim: "groups"
+ ```
+
+ **For Talm**
+   Add to your `values.yaml` in the Talm repository:
+   ```yaml
+   oidcIssuerUrl: "https://keycloak.<your-domain>/realms/cozy"
+ ```
+
+2. **Domain Reachability**
+ Ensure that the domain `keycloak.example.org` is accessible from the cluster and resolves to your root ingress controller.
+
+3. **Storage Configuration**
+ Storage must be properly configured.
+
+## Configuration
+
+If all prerequisites are met, you can proceed with the configuration steps.
+
+### Step 1: Enable OIDC in Cozystack
+
+Patch the Platform Package to enable OIDC. This also exposes the Keycloak service automatically:
+
+```bash
+kubectl patch packages.cozystack.io cozystack.cozystack-platform --type=merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "authentication": {
+ "oidc": {
+ "enabled": true
+ }
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+If you need to add extra redirect URLs for the dashboard client (for example, when accessing the dashboard via port-forwarding),
+patch the Platform Package as shown below. Separate multiple redirect URLs with commas.
+
+```bash
+kubectl patch packages.cozystack.io cozystack.cozystack-platform --type=merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "authentication": {
+ "oidc": {
+ "keycloakExtraRedirectUri": "http://127.0.0.1:8080/oauth2/callback/*,http://localhost:8080/oauth2/callback/*"
+ }
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+{{% alert color="info" %}}
+**Optional**: If you want the dashboard to reach Keycloak via the internal cluster network instead of the external ingress, set `keycloakInternalUrl`. This is useful in environments with self-signed certificates or restricted external access. See [Self-Signed Certificates]({{% ref "/docs/v1.3/operations/oidc/self-signed-certificates" %}}) for details.
+{{% /alert %}}
+
+Within a minute, Cozystack reconciles the change and creates three new `HelmRelease` resources:
+
+```bash
+# kubectl get hr -n cozy-keycloak
+cozy-keycloak keycloak 26s Unknown Running 'install' action with a timeout of 5m0s
+cozy-keycloak keycloak-configure 26s False dependency 'cozy-keycloak/keycloak-operator' is not ready
+cozy-keycloak keycloak-operator 26s False dependency 'cozy-keycloak/keycloak' is not ready
+```
+
+### Step 2: Wait for Installation Completion
+
+Wait until all resources are successfully installed and reach the `Ready` state:
+
+```bash
+# kubectl get hr -n cozy-keycloak
+NAME AGE READY STATUS
+keycloak 2m19s True Release reconciliation succeeded
+keycloak-configure 2m19s True Release reconciliation succeeded
+keycloak-operator 2m19s True Release reconciliation succeeded
+```
+
+Reconcile tenants:
+
+```bash
+kubectl annotate -n tenant-root hr/tenant-root reconcile.fluxcd.io/forceAt=$(date +"%Y-%m-%dT%H:%M:%SZ") --overwrite
+```
+
+### Step 3: Access Keycloak
+
+You can now access Keycloak at `https://keycloak.example.org` (replace `example.org` with your infrastructure domain).
+
+To get the Keycloak credentials for default user `admin`, run the following command:
+
+```bash
+kubectl get secret -n cozy-keycloak keycloak-credentials -o go-template='{{ printf "%s\n" (index .data "password" | base64decode) }}'
+```
+
+1. Switch realm to `cozy`.
+2. Create a user in the realm `cozy`.
+
+ Follow the [Keycloak documentation](https://www.keycloak.org/docs/latest/server_admin/index.html#proc-creating-user_server_administration_guide) to create a user in the realm `cozy`.
+
+3. After the user is created, go to the user details in the Keycloak admin console and turn on the **Email verified** toggle. This is needed for OIDC authentication to work properly.
+
+4. Add the user to the `cozystack-cluster-admin` group.
+
+5. Now you should be able to login to the dashboard using your OIDC credentials.
+
+ {{% alert color="warning" %}}
+ If the dashboard is still requesting a token instead of login/password, manually reconcile it:
+
+ ```bash
+ kubectl annotate -n cozy-dashboard hr/dashboard reconcile.fluxcd.io/forceAt=$(date +"%Y-%m-%dT%H:%M:%SZ") --overwrite
+ ```
+ {{% /alert %}}
+
+### Step 4: Retrieve Kubeconfig
+
+To access the cluster through the Dashboard, download your kubeconfig by selecting the deployed tenant and copying the secret from the resource map.
+
+This kubeconfig will be automatically configured to use OIDC authentication and the namespace dedicated to the tenant.
+
+Set up [kubelogin](https://github.com/int128/kubelogin), which is required to use an OIDC-enabled kubeconfig:
+```bash
+# Homebrew (macOS and Linux)
+brew install int128/kubelogin/kubelogin
+
+# Krew (macOS, Linux, Windows and ARM)
+kubectl krew install oidc-login
+
+# Chocolatey (Windows)
+choco install kubelogin
+```
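+
+Once kubelogin is installed, running any `kubectl` command with the downloaded kubeconfig should open a browser window for the OIDC login. A minimal check, assuming the file was saved as `tenant.kubeconfig`:
+
+```bash
+# The first request triggers the OIDC browser login via kubelogin
+kubectl --kubeconfig tenant.kubeconfig get pods
+```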
diff --git a/content/en/docs/v1.3/operations/oidc/identity_providers/_index.md b/content/en/docs/v1.3/operations/oidc/identity_providers/_index.md
new file mode 100644
index 00000000..aa551efd
--- /dev/null
+++ b/content/en/docs/v1.3/operations/oidc/identity_providers/_index.md
@@ -0,0 +1,8 @@
+---
+title: "Identity providers"
+linkTitle: "Identity providers"
+description: "Identity providers managment."
+weight: 70
+aliases:
+ - /docs/v1.3/oidc/identity_providers
+---
diff --git a/content/en/docs/v1.3/operations/oidc/identity_providers/gitlab.md b/content/en/docs/v1.3/operations/oidc/identity_providers/gitlab.md
new file mode 100644
index 00000000..ecd21cbf
--- /dev/null
+++ b/content/en/docs/v1.3/operations/oidc/identity_providers/gitlab.md
@@ -0,0 +1,51 @@
+---
+title: How to configure GitLab as an Identity Provider
+linkTitle: Gitlab
+description: "How to configure GitLab as an Identity Provider"
+weight: 30
+aliases:
+ - /docs/v1.3/oidc/identity_providers/gitlab
+---
+
+You can use GitLab as an identity provider for Keycloak.
+
+## Create an Application in GitLab
+
+- Open `https://gitlab.com/groups/<your-group>/-/settings/applications`
+- Click `Add new application`
+- Set Name to `cozy` and Redirect URI to `https://keycloak.<your-domain>/realms/cozy/broker/gitlab/endpoint`
+- Enable **Confidential** and select the scopes `api`, `read_api`, `read_user`, `openid`, `profile`, and `email`
+- Copy and save the application Secret
+
+
+## Configure Keycloak Identity Provider
+Create a `KeycloakRealmIdentityProvider` resource with the following configuration:
+
+```yaml
+apiVersion: v1.edp.epam.com/v1
+kind: KeycloakRealmIdentityProvider
+metadata:
+ name: gitlab
+spec:
+ realmRef:
+ name: keycloakrealm-cozy
+ kind: ClusterKeycloakRealm
+ alias: gitlab
+ authenticateByDefault: false
+ enabled: true
+ providerId: "gitlab"
+ config:
+ clientId: "YOUR GITLAB APP ID"
+ clientSecret: "YOUR GITLAB APP SECRET"
+ syncMode: "IMPORT"
+ mappers:
+ - name: "username"
+ identityProviderMapper: "oidc-username-idp-mapper"
+ identityProviderAlias: "gitlab"
+ config:
+ target: "LOCAL"
+ syncMode: "INHERIT"
+ template: "${ALIAS}---${CLAIM.preferred_username}"
+```
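+
+Save the manifest and apply it with `kubectl`. The namespace below is an assumption; apply the resource wherever your Keycloak operator watches for it (typically `cozy-keycloak`):
+
+```bash
+# Apply in the namespace watched by the Keycloak operator (assumed to be cozy-keycloak)
+kubectl apply -n cozy-keycloak -f gitlab-idp.yaml
+```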
diff --git a/content/en/docs/v1.3/operations/oidc/identity_providers/google.md b/content/en/docs/v1.3/operations/oidc/identity_providers/google.md
new file mode 100644
index 00000000..23ee6ff4
--- /dev/null
+++ b/content/en/docs/v1.3/operations/oidc/identity_providers/google.md
@@ -0,0 +1,74 @@
+---
+title: How to configure Google as an Identity Provider
+linkTitle: Google
+description: "How to configure Google as an Identity Provider"
+weight: 30
+aliases:
+ - /docs/v1.3/oidc/identity_providers/google
+---
+
+## Configure Google
+
+- Head over to the [Google Console](https://console.cloud.google.com/apis/dashboard) and log in using your Google account; you will see the Google Developer Console. Once logged in, open the top-left drop-down to create a new project.
+
+
+- Click on "New Project" to proceed.
+
+
+- Enter a project name of your choice and select the Organisation if you have multiple organisations. Once done, click "Create".
+
+
+- Once the project is created, you will get a pop-up suggesting that you configure the consent screen. If not, go to the Dashboard, open the "Explore and enable APIs" option, click "Credentials" > "Configure Consent Screen", and proceed to the next step.
+
+
+- Click "External", since we want to allow any Google account to sign in to our application, and hit "Create".
+
+
+- After this, you will be redirected to pages where you need to configure the following:
+ - Application type: Public
+ - Application name: Your application name
+ - Authorised domains: Your application top-level domain name
+ - Application Homepage link: Your application homepage
+ - Application Privacy Policy link: Your application privacy policy link
+
+- Now go to the "Create Credentials" option in the navbar and click "OAuth Client ID".
+
+
+- Select "Web application" as the Application type and name the application as you like. Next, add the link provided in the Keycloak tab under "Authorized Redirect URIs" and click "Create". The link should look something like this:
+```bash
+https://YOUR_KEYCLOAK_DOMAIN/auth/realms/cozy/broker/google/endpoint
+```
+
+
+- Once it is done, you will see a pop-up with the information required in the next step. You will need the "Client ID" and "Client secret", so make sure to keep a safe copy of them.
+
+
+## Configure Keycloak Identity Provider
+Create a `KeycloakRealmIdentityProvider` resource with the following configuration:
+
+```yaml
+apiVersion: v1.edp.epam.com/v1
+kind: KeycloakRealmIdentityProvider
+metadata:
+ name: google
+spec:
+ realmRef:
+ name: keycloakrealm-cozy
+ kind: ClusterKeycloakRealm
+ alias: google
+ authenticateByDefault: false
+ enabled: true
+ providerId: "google"
+ config:
+ clientId: "YOUR GOOGLE APP ID"
+ clientSecret: "YOUR GOOGLE APP SECRET"
+ syncMode: "IMPORT"
+ mappers:
+ - name: "username"
+ identityProviderMapper: "oidc-username-idp-mapper"
+ identityProviderAlias: "google"
+ config:
+ target: "LOCAL"
+ syncMode: "INHERIT"
+ template: "${ALIAS}---${CLAIM.email}"
+```
diff --git a/content/en/docs/v1.3/operations/oidc/self-signed-certificates.md b/content/en/docs/v1.3/operations/oidc/self-signed-certificates.md
new file mode 100644
index 00000000..02536e07
--- /dev/null
+++ b/content/en/docs/v1.3/operations/oidc/self-signed-certificates.md
@@ -0,0 +1,187 @@
+---
+title: "Self-Signed Certificates"
+linkTitle: "Self-Signed Certificates"
+description: "How to configure OIDC with self-signed certificates"
+weight: 60
+aliases:
+ - /docs/oidc/self-signed-certificates
+ - /docs/operations/oidc/self-signed-certificates
+---
+
+This guide explains how to configure Kubernetes API server for OIDC authentication with Keycloak when using self-signed certificates. By default, Cozystack issues certificates via LetsEncrypt, but some environments (e.g., air-gapped or private enterprise networks) may use a custom CA instead.
+
+## Prerequisites
+
+- Cozystack cluster with OIDC enabled (see [Enable OIDC Server]({{% ref "/docs/v1.3/operations/oidc/enable_oidc" %}}))
+- Talos Linux control plane nodes
+- `talosctl` configured for your cluster
+- `kubelogin` installed
+
+## Step 1: Retrieve the Keycloak Certificate
+
+Get the certificate from the ingress controller:
+
+```bash
+echo | openssl s_client -connect <ingress-ip>:443 \
+ -servername keycloak.example.org 2>/dev/null | openssl x509
+```
+
+Replace `<ingress-ip>` with your ingress controller IP address, and `keycloak.example.org` with your actual Keycloak domain.
+
+Save the output (the certificate between `-----BEGIN CERTIFICATE-----` and `-----END CERTIFICATE-----`) for the next step.
+
+## Step 2: Configure Talos Control Plane Nodes
+
+For each control plane node, add the following to your machine configuration:
+
+```yaml
+machine:
+ network:
+ extraHostEntries:
+      - ip: <ingress-ip>
+ aliases:
+ - keycloak.example.org
+ files:
+ - content: |
+ -----BEGIN CERTIFICATE-----
+        <certificate from Step 1>
+ -----END CERTIFICATE-----
+ permissions: 0o644
+ path: /var/oidc-ca.crt
+ op: create
+
+cluster:
+ apiServer:
+ extraArgs:
+ oidc-issuer-url: https://keycloak.example.org/realms/cozy
+ oidc-client-id: kubernetes
+ oidc-username-claim: preferred_username
+ oidc-groups-claim: groups
+ oidc-ca-file: /etc/kubernetes/oidc/ca.crt
+ extraVolumes:
+ - hostPath: /var/oidc-ca.crt
+ mountPath: /etc/kubernetes/oidc/ca.crt
+```
+
+Apply the configuration to each control plane node:
+
+```bash
+talosctl apply-config -n <node-ip> -f nodes/<node-name>.yaml
+```
+
+{{% alert color="info" %}}
+The `extraHostEntries` configuration ensures that the Keycloak domain resolves correctly within the cluster, which is essential when using internal ingress IPs.
+{{% /alert %}}
+
+## Optional: Configure Internal Keycloak URL for Dashboard
+
+By default, the Cozystack Dashboard's oauth2-proxy connects to Keycloak through the external ingress URL. In environments with self-signed certificates or restricted external access, you can configure the dashboard to use Keycloak's internal cluster service for backend requests (token exchange, JWKS validation, userinfo, logout) while keeping browser redirects on the external URL.
+
+Patch the Platform Package:
+
+```bash
+kubectl patch packages.cozystack.io cozystack.cozystack-platform --type=merge -p '{
+ "spec": {
+ "components": {
+ "platform": {
+ "values": {
+ "authentication": {
+ "oidc": {
+ "keycloakInternalUrl": "http://keycloak-http.cozy-keycloak.svc:8080/realms/cozy"
+ }
+ }
+ }
+ }
+ }
+ }
+}'
+```
+
+{{% alert color="info" %}}
+This only affects the dashboard's oauth2-proxy (pod-to-pod communication). The Kubernetes API server still requires `extraHostEntries` to reach Keycloak, since `kube-apiserver` uses host-level DNS and cannot resolve cluster service names.
+{{% /alert %}}
+
+## Step 3: Configure kubelogin
+
+Install kubelogin if you haven't already:
+
+```bash
+# Homebrew (macOS and Linux)
+brew install int128/kubelogin/kubelogin
+
+# Krew (macOS, Linux, Windows and ARM)
+kubectl krew install oidc-login
+
+# Chocolatey (Windows)
+choco install kubelogin
+```
+
+Save the CA certificate from Step 1 to a file on your local machine:
+
+```bash
+# Save the certificate to a file (e.g., ~/.kube/oidc-ca.pem)
+cat > ~/.kube/oidc-ca.pem <<EOF
+-----BEGIN CERTIFICATE-----
+<certificate from Step 1>
+-----END CERTIFICATE-----
+EOF
+```
+
+Set up OIDC login (this will open a browser for authentication):
+
+```bash
+kubectl oidc-login setup \
+ --oidc-issuer-url=https://keycloak.example.org/realms/cozy \
+ --oidc-client-id=kubernetes \
+ --certificate-authority=~/.kube/oidc-ca.pem
+```
+
+Configure kubectl credentials:
+
+```bash
+kubectl config set-credentials oidc \
+ --exec-api-version=client.authentication.k8s.io/v1 \
+ --exec-interactive-mode=IfAvailable \
+ --exec-command=kubectl \
+ --exec-arg=oidc-login \
+ --exec-arg=get-token \
+ --exec-arg="--oidc-issuer-url=https://keycloak.example.org/realms/cozy" \
+ --exec-arg="--oidc-client-id=kubernetes" \
+ --exec-arg="--certificate-authority=~/.kube/oidc-ca.pem"
+```
+
+Switch to the OIDC user and verify:
+
+```bash
+kubectl config set-context --current --user=oidc
+kubectl get nodes
+```
+
+{{% alert color="info" %}}
+If your organization's CA is already installed in the system trust store (common in enterprise environments), you can omit the `--certificate-authority` flag entirely — kubelogin will use the system CA bundle automatically.
+{{% /alert %}}
+
+{{% alert color="warning" %}}
+Avoid using `--insecure-skip-tls-verify`. If you cannot install the CA certificate on your machine or pass it via `--certificate-authority`, you can use `--insecure-skip-tls-verify` as a temporary workaround, but this disables TLS verification and is not recommended for production use.
+{{% /alert %}}
+
+## Troubleshooting
+
+### Check API Server OIDC Logs
+
+```bash
+kubectl logs -n kube-system -l component=kube-apiserver --tail=50 | grep oidc
+```
+
+### Verify OIDC Flags Are Applied
+
+```bash
+kubectl get pods -n kube-system -l component=kube-apiserver \
+ -o jsonpath='{.items[0].spec.containers[0].command}' | tr ',' '\n' | grep oidc
+```
+
+### Common Issues
+
+- **Certificate not found**: Ensure the certificate file path in `extraVolumes` matches the path specified in `oidc-ca-file`.
+- **Domain resolution fails**: Verify that `extraHostEntries` is correctly configured on all control plane nodes.
+- **Authentication fails**: Check that the user exists in Keycloak and has the required group memberships (see [Users and Roles]({{% ref "/docs/v1.3/operations/oidc/users_and_roles" %}})).
diff --git a/content/en/docs/v1.3/operations/oidc/users_and_roles.md b/content/en/docs/v1.3/operations/oidc/users_and_roles.md
new file mode 100644
index 00000000..297ca4b5
--- /dev/null
+++ b/content/en/docs/v1.3/operations/oidc/users_and_roles.md
@@ -0,0 +1,71 @@
+---
+title: Creating users and assigning roles
+linkTitle: Users and roles
+description: "How to create users and assign roles to them"
+weight: 50
+aliases:
+ - /docs/v1.3/oidc/users_and_roles
+---
+
+This guide explains how to create users and assign roles to them.
+
+### Overview
+
+When a tenant is created in Cozystack (starting with version 1.6.0), roles, RoleBindings, and Keycloak groups are automatically created in the Kubernetes cluster.
+
+To create a user, refer to the following documentation:
+[Keycloak Admin Console Documentation](https://www.keycloak.org/docs/latest/server_admin/#using-the-admin-console)
+
+## Assigning a Role to a User for a Tenant
+
+1. **Access Keycloak**:
+ To retrieve login credentials, check the secret by running the following command:
+ ```bash
+ kubectl get secret keycloak-credentials -n cozy-keycloak -o yaml
+ ```
+ **Keycloak Address**:
+ The Keycloak address will match the `publishing.host` value specified in your Platform Package. For example, if your Package includes:
+
+ ```yaml
+ spec:
+ components:
+ platform:
+ values:
+ publishing:
+ host: "infra.example.org"
+ ```
+
+ Then Keycloak will be available at: `keycloak.infra.example.org`
+
+ {{% alert color="warning" %}}
+ If you are planning to integrate with external services either as clients or as IdPs, your Keycloak address needs to be publicly accessible and reachable by these services.
+ {{% /alert %}}
+
+
+## Configure Roles for Each Tenant in Cozystack
+
+### Cluster wide
+- **`cozystack-cluster-admin`**
+ - Allow all.
+
+- **`cozystack-cluster-admin`**
+  - Allow all in the `""` (core) API group
+  - Allow all for HelmReleases in `helm.toolkit.fluxcd.io` and `apps.cozystack.io`
+
+### Tenant wide
+- **`tenant-abc-view`**
+ - Read-only access to resources from our API.
+ - Ability to view logs.
+
+- **`tenant-abc-use`**
+ - All previous permissions
+ - VNC access for virtual machines.
+
+- **`tenant-abc-admin`**
+ - All previous permissions
+  - Ability to delete pods.
+ - Ability to create, update, and delete resources from our API (excluding `tenant`, `monitoring`, `etcd`, `ingress`).
+
+- **`tenant-abc-super-admin`**
+ - All previous permissions
+ - Ability to create, update, and delete `tenant`, `monitoring`, `etcd`, and `ingress`.
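+
+To see which of these roles are available in a given tenant namespace, you can list the Roles and RoleBindings there (a rough check; `tenant-abc` is a placeholder):
+
+```bash
+kubectl get roles,rolebindings -n tenant-abc
+```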
diff --git a/content/en/docs/v1.3/operations/scheduling-classes.md b/content/en/docs/v1.3/operations/scheduling-classes.md
new file mode 100644
index 00000000..1f9cfe88
--- /dev/null
+++ b/content/en/docs/v1.3/operations/scheduling-classes.md
@@ -0,0 +1,217 @@
+---
+title: "Scheduling Classes"
+linkTitle: "Scheduling Classes"
+description: "Restrict tenant workloads to specific nodes or failure domains using SchedulingClass resources and the Cozystack scheduler."
+weight: 150
+---
+
+SchedulingClass is a cluster-scoped custom resource that lets administrators define
+placement policies for tenant workloads. When a tenant is assigned a scheduling class,
+all of its pods are automatically routed to the Cozystack custom scheduler, which
+merges the class-defined constraints with any constraints already present on the pod.
+
+This allows platform operators to pin tenants to specific data centers, availability
+zones, or node groups — without modifying individual application charts.
+
+## How it works
+
+The feature has two components:
+
+1. **Lineage-controller webhook** (part of `cozystack`): a mutating admission webhook
+ that intercepts pod creation in tenant namespaces. When a namespace carries the
+ `scheduler.cozystack.io/scheduling-class` label, the webhook sets `schedulerName: cozystack-scheduler`
+ and adds the `scheduler.cozystack.io/scheduling-class` annotation on every pod.
+ If the referenced SchedulingClass CR does not exist (e.g. the scheduler is not installed),
+ pods are left untouched and scheduled normally.
+
+2. **Cozystack scheduler** (the `cozystack-scheduler` package): a custom Kubernetes
+ scheduler that runs alongside the default scheduler. During scheduling, it resolves
+ the SchedulingClass referenced by the pod annotation and merges the CR's constraints
+ (node affinity, pod affinity/anti-affinity, topology spread) with the pod's own spec —
+ entirely in memory, without mutating the pod in the API server.
+
+## Prerequisites
+
+- Cozystack v1.2+
+- The `cozystack-scheduler` system package (v0.2.0+)
+
+## Installing the scheduler
+
+```bash
+cozypkg add cozystack.cozystack-scheduler
+```
+
+## Creating a SchedulingClass
+
+A SchedulingClass CR mirrors familiar Kubernetes scheduling primitives. All fields
+are optional — include only the constraints you need.
+
+### Example: pin workloads to a data center
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: SchedulingClass
+metadata:
+ name: dc-west
+spec:
+ nodeSelector:
+ topology.kubernetes.io/region: us-west-2
+```
+
+Pods assigned to this class will only be scheduled on nodes labeled
+`topology.kubernetes.io/region=us-west-2`.
+
+### Example: spread across availability zones
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: SchedulingClass
+metadata:
+ name: zone-spread
+spec:
+ topologySpreadConstraints:
+ - maxSkew: 1
+ topologyKey: topology.kubernetes.io/zone
+ whenUnsatisfiable: DoNotSchedule
+```
+
+{{% alert title="Note" %}}
+When a `topologySpreadConstraint` or pod affinity/anti-affinity term has a nil
+`labelSelector`, the scheduler automatically populates it with a selector matching
+the workload's Cozystack application identity labels (`apps.cozystack.io/application.group`,
+`.kind`, `.name`). This means you can define generic spreading or anti-affinity
+policies without hard-coding label values per application.
+{{% /alert %}}
+
+### Example: require dedicated nodes with anti-affinity
+
+```yaml
+apiVersion: cozystack.io/v1alpha1
+kind: SchedulingClass
+metadata:
+ name: dedicated-nodes
+spec:
+ nodeAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ nodeSelectorTerms:
+ - matchExpressions:
+ - key: node-pool
+ operator: In
+ values:
+ - dedicated
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - topologyKey: kubernetes.io/hostname
+```
+
+This pins workloads to nodes in the `dedicated` pool and spreads pods across
+hosts. The anti-affinity `labelSelector` is auto-populated per application, so
+pods from different applications of the same tenant can still land on the same node.
+
+## Full SchedulingClass spec reference
+
+| Field | Type | Description |
+|-------|------|-------------|
+| `spec.nodeSelector` | `map[string]string` | Simple key-value node labels that all nodes must match. |
+| `spec.nodeAffinity` | [`NodeAffinity`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#nodeaffinity-v1-core) | Required and preferred node affinity rules. |
+| `spec.podAffinity` | [`PodAffinity`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#podaffinity-v1-core) | Required and preferred pod co-location rules. |
+| `spec.podAntiAffinity` | [`PodAntiAffinity`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#podantiaffinity-v1-core) | Required and preferred pod anti-co-location rules. |
+| `spec.topologySpreadConstraints` | [`[]TopologySpreadConstraint`](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.31/#topologyspreadconstraint-v1-core) | Topology spread constraints for even distribution across failure domains. |
+
+## Assigning a SchedulingClass to a tenant
+
+When creating or editing a tenant, set the `schedulingClass` parameter to the name
+of an existing SchedulingClass CR:
+
+**Via the dashboard:**
+
+Select the scheduling class from the dropdown in the tenant creation form.
+
+**Via Helm values (`values.yaml`):**
+
+```yaml
+schedulingClass: dc-west
+```
+
+**Via the tenant secret (child tenant inheritance):**
+
+When a parent tenant has a scheduling class assigned, all child tenants inherit it
+automatically. A child tenant cannot override the parent's scheduling class — it can
+only set one if the parent has none.
+
+The assignment writes the `scheduler.cozystack.io/scheduling-class` label on the
+tenant's namespace. The webhook reads this label (or resolves it from the owning
+Application CR) to inject the scheduler name and annotation into pods.
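+
+To inspect that label on a tenant namespace (using a hypothetical tenant named `tenant-example`):
+
+```bash
+kubectl get namespace tenant-example --show-labels
+```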
+
+## Auto-populated label selectors
+
+The scheduler (v0.2.0+) automatically fills in nil `labelSelector` fields on
+pod affinity, pod anti-affinity, and topology spread constraint terms. It uses
+the pod's Cozystack application identity labels:
+
+- `apps.cozystack.io/application.group`
+- `apps.cozystack.io/application.kind`
+- `apps.cozystack.io/application.name`
+
+This means that a generic SchedulingClass like:
+
+```yaml
+spec:
+ podAntiAffinity:
+ requiredDuringSchedulingIgnoredDuringExecution:
+ - topologyKey: kubernetes.io/hostname
+```
+
+will automatically scope the anti-affinity to pods of the same application — each
+application gets its own anti-affinity behavior without needing a separate
+SchedulingClass per app.
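+
+For instance, for a pod carrying the identity labels of a hypothetical `Postgres` application named `db`, the term resolved by the scheduler would look roughly like this (label values are illustrative):
+
+```yaml
+podAntiAffinity:
+  requiredDuringSchedulingIgnoredDuringExecution:
+    - topologyKey: kubernetes.io/hostname
+      labelSelector:                                   # auto-populated by the scheduler
+        matchLabels:
+          apps.cozystack.io/application.group: apps.cozystack.io   # illustrative value
+          apps.cozystack.io/application.kind: Postgres
+          apps.cozystack.io/application.name: db
+```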
+
+The default label keys can be overridden in the scheduler's Helm values:
+
+```yaml
+defaultLabelSelectorKeys:
+ - apps.cozystack.io/application.group
+ - apps.cozystack.io/application.kind
+ - apps.cozystack.io/application.name
+```
+
+If a term already has an explicit `labelSelector`, it is preserved as-is.
+
+## Operators without native schedulerName support
+
+Some operators used by Cozystack do not expose `schedulerName` in their CRDs.
+The webhook-based approach handles these transparently because it mutates pods
+directly at admission time, regardless of which operator created them:
+
+- etcd-operator
+- redis-operator (spotahome)
+- mariadb-operator
+- clickhouse-operator (altinity)
+
+No special configuration is needed for workloads managed by these operators.
+
+## Verifying the setup
+
+1. Confirm the scheduler is running:
+
+ ```bash
+ kubectl get pods -n cozy-system -l app.kubernetes.io/name=cozystack-scheduler
+ ```
+
+2. Confirm the SchedulingClass exists:
+
+ ```bash
+ kubectl get schedulingclasses
+ ```
+
+3. Check that a tenant namespace has the label:
+
+ ```bash
+ kubectl get ns tenant-example -o jsonpath='{.metadata.labels.scheduler\.cozystack\.io/scheduling-class}'
+ ```
+
+4. Check that pods in the tenant namespace use the custom scheduler:
+
+ ```bash
+ kubectl get pods -n tenant-example -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.schedulerName}{"\n"}{end}'
+ ```
diff --git a/content/en/docs/v1.3/operations/services/_include/bootbox.md b/content/en/docs/v1.3/operations/services/_include/bootbox.md
new file mode 100644
index 00000000..1648bc6b
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/_include/bootbox.md
@@ -0,0 +1,5 @@
+---
+title: "BootBox Service Reference"
+linkTitle: "BootBox"
+---
+
diff --git a/content/en/docs/v1.3/operations/services/_include/etcd.md b/content/en/docs/v1.3/operations/services/_include/etcd.md
new file mode 100644
index 00000000..79395411
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/_include/etcd.md
@@ -0,0 +1,5 @@
+---
+title: "Etcd Service Reference"
+linkTitle: "Etcd"
+---
+
diff --git a/content/en/docs/v1.3/operations/services/_include/ingress.md b/content/en/docs/v1.3/operations/services/_include/ingress.md
new file mode 100644
index 00000000..0441212f
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/_include/ingress.md
@@ -0,0 +1,5 @@
+---
+title: "Ingress-NGINX Controller Reference"
+linkTitle: "Ingress"
+---
+
diff --git a/content/en/docs/v1.3/operations/services/_include/monitoring-overview.md b/content/en/docs/v1.3/operations/services/_include/monitoring-overview.md
new file mode 100644
index 00000000..5475d5c2
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/_include/monitoring-overview.md
@@ -0,0 +1,119 @@
+## Data Flow Architecture
+
+```mermaid
+flowchart TD
+ A[VMAgent] --> B[VMCluster]
+ B --> C[Grafana]
+ B --> D[Alerta]
+ E[Fluent Bit] --> F[VLogs]
+```
+
+## Component Descriptions
+
+- **VMAgent**: A lightweight agent that collects metrics from various sources and sends them to VictoriaMetrics.
+- **VMCluster**: A VictoriaMetrics cluster that stores and processes time-series data for efficient querying.
+- **Grafana**: An open-source platform for monitoring and observability with customizable dashboards.
+- **Alerta**: An alerting system that processes and manages alerts from monitoring systems.
+- **Fluent Bit**: A fast and lightweight log processor and forwarder.
+- **VLogs**: VictoriaLogs, a high-performance log management system for storing and querying logs.
+
+## Visualization Architecture
+
+```mermaid
+graph TD
+ A[VictoriaMetrics VMCluster] --> B[Grafana]
+ C[VLogs] --> B
+ D[External Prometheus] --> B
+ E[Custom Application Metrics] --> B
+    B --> F[Pre-built Dashboards: Cluster Overview, Node Metrics, etc.]
+    B --> G[Custom Dashboards: Time Series, Stat, Table, etc.]
+    F --> H[Visualization: CPU Usage, Memory, Network]
+ G --> H
+```
+
+### Visualization Component Descriptions
+
+- **VictoriaMetrics VMCluster**: The core metrics storage and querying engine that provides data to Grafana.
+- **VLogs**: VictoriaLogs system for log data integration into visualizations.
+- **External Prometheus**: Additional metrics sources that can be integrated.
+- **Custom Application Metrics**: User-defined metrics from applications.
+- **Grafana**: The visualization platform that renders dashboards.
+- **Pre-built Dashboards**: Standard dashboards for common monitoring views.
+- **Custom Dashboards**: User-created dashboards with various panel types.
+- **Visualization**: The final output showing metrics like CPU, memory, and network usage.
+
+## Monitoring Architecture
+
+```mermaid
+graph TD
+ A[Kubernetes Cluster] --> B[VMAgent]
+ C[Applications] --> B
+ D[Infrastructure] --> B
+ B --> E[VictoriaMetrics VMCluster]
+ E --> F[Grafana]
+ E --> G[Alerta]
+ F --> H[Dashboards & Visualizations]
+ G --> I[Alerts & Notifications]
+```
+
+### Monitoring Architecture Component Descriptions
+
+- **Kubernetes Cluster**: The core platform where workloads run, providing metrics endpoints.
+- **Applications**: User applications that expose custom metrics.
+- **Infrastructure**: Underlying hardware and system metrics.
+- **VMAgent**: Collects metrics from various sources and forwards them to VictoriaMetrics.
+- **VictoriaMetrics VMCluster**: Stores and processes time-series metrics data.
+- **Grafana**: Provides visualization and dashboarding capabilities.
+- **Alerta**: Handles alerting and notification management.
+- **Dashboards & Visualizations**: User interfaces for monitoring data.
+- **Alerts & Notifications**: System for notifying operators of issues.
+
+## Alerting Flow
+
+```mermaid
+flowchart TD
+    A[Metrics Collection: VMAgent] --> B[VictoriaMetrics VMCluster]
+ B --> C[Alert Rules Evaluation]
+ C --> D{Condition Met?}
+ D -->|Yes| E[Generate Alert]
+ D -->|No| A
+ E --> F[Alerta Alert Manager]
+ F --> G[Grouping & Deduplication]
+ G --> H[Routing & Notification]
+ H --> I[Email/SMS/Slack/etc.]
+```
+
+### Alerting Flow Component Descriptions
+
+- **Metrics Collection**: Gathering of metrics by VMAgent from sources.
+- **VictoriaMetrics VMCluster**: Storage and querying of metrics data.
+- **Alert Rules Evaluation**: Checking metrics against predefined thresholds.
+- **Generate Alert**: Creating alert instances when conditions are met.
+- **Alerta Alert Manager**: Processing and managing alerts.
+- **Grouping & Deduplication**: Organizing alerts to avoid duplicates.
+- **Routing & Notification**: Directing alerts to appropriate channels.
+- **Email/SMS/Slack/etc.**: Final delivery methods for notifications.
+
+## Logging Architecture
+
+```mermaid
+graph TD
+ A[Application Logs] --> B[Fluent Bit]
+ C[Kubernetes Container Logs] --> B
+ D[System Logs] --> B
+ B --> E[VictoriaLogs VLogs]
+ E --> F[Grafana Log Panels]
+ F --> G[Log Visualization & Search]
+ E --> H[Log Queries & Analysis]
+```
+
+### Logging Architecture Component Descriptions
+
+- **Application Logs**: Logs generated by user applications.
+- **Kubernetes Container Logs**: Logs from containers running in Kubernetes.
+- **System Logs**: Infrastructure and system-level logs.
+- **Fluent Bit**: Lightweight log processor that collects and forwards logs.
+- **VictoriaLogs VLogs**: High-performance log storage and querying system.
+- **Grafana Log Panels**: Integration for visualizing logs in Grafana dashboards.
+- **Log Visualization & Search**: Interfaces for exploring and searching log data.
+- **Log Queries & Analysis**: Tools for querying and analyzing log information.
\ No newline at end of file
diff --git a/content/en/docs/v1.3/operations/services/_include/monitoring.md b/content/en/docs/v1.3/operations/services/_include/monitoring.md
new file mode 100644
index 00000000..b37fae6c
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/_include/monitoring.md
@@ -0,0 +1,5 @@
+---
+title: "Monitoring Hub"
+linkTitle: "Monitoring Hub"
+---
+
diff --git a/content/en/docs/v1.3/operations/services/_include/parameters.md b/content/en/docs/v1.3/operations/services/_include/parameters.md
new file mode 100644
index 00000000..1ecb0872
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/_include/parameters.md
@@ -0,0 +1,6 @@
+---
+title: "Monitoring Parameters"
+linkTitle: "Parameters"
+description: "Configure and manage monitoring parameters in Cozystack."
+weight: 1
+---
diff --git a/content/en/docs/v1.3/operations/services/_include/seaweedfs.md b/content/en/docs/v1.3/operations/services/_include/seaweedfs.md
new file mode 100644
index 00000000..24524afd
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/_include/seaweedfs.md
@@ -0,0 +1,5 @@
+---
+title: "SeaweedFS Service Reference"
+linkTitle: "SeaweedFS"
+---
+
diff --git a/content/en/docs/v1.3/operations/services/_index.md b/content/en/docs/v1.3/operations/services/_index.md
new file mode 100644
index 00000000..4359fbcc
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/_index.md
@@ -0,0 +1,34 @@
+---
+title: "Cluster Services Reference"
+linkTitle: "Cluster Services"
+description: "Learn about middleware system packages, deployed to tenants and providing major functionality to user apps."
+weight: 35
+---
+
+Cozystack includes a number of cluster services.
+They are deployed through tenant settings, not through the application catalog.
+
+Each tenant can have its own copy of a cluster service or use the parent tenant's services.
+Read more about the services sharing mechanism in [Tenant System]({{% ref "/docs/v1.3/guides/tenants#sharing-cluster-services" %}}).
+
+## Monitoring
+
+The monitoring system in Cozystack provides comprehensive observability for both system-level and tenant-level resources. It operates at two primary levels: system-wide monitoring for infrastructure components and tenant-specific monitoring for user applications and services.
+
+### Architecture Overview
+
+- **System Level**: Monitors core Cozystack components, Kubernetes clusters, and underlying infrastructure.
+- **Tenant Level**: Provides isolated monitoring stacks for each tenant, allowing them to monitor their own applications without interference.
+
+### Key Components
+
+- **VMAgent**: Collects metrics from various sources and forwards them to VictoriaMetrics.
+- **VMCluster**: VictoriaMetrics cluster for storing and querying metrics.
+- **Grafana**: Visualization and dashboarding tool for metrics and logs.
+- **Alerta**: Alerting system for notifications based on metrics thresholds.
+
+### Data Flows
+
+Metrics flow from exporters (e.g., node-exporters, kube-state-metrics) to VMAgent, which then writes to VMCluster. Grafana queries VMCluster for visualization, and Alerta processes alerts from VMCluster or other sources.
+
+For detailed configuration, see [Monitoring Hub Reference]({{% ref "/docs/v1.3/operations/services/monitoring" %}}).
diff --git a/content/en/docs/v1.3/operations/services/bootbox.md b/content/en/docs/v1.3/operations/services/bootbox.md
new file mode 100644
index 00000000..642cd743
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/bootbox.md
@@ -0,0 +1,33 @@
+---
+title: "BootBox Service Reference"
+linkTitle: "BootBox"
+---
+
+
+
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------------- | ----------------------------------------------------- | ---------- | ------- |
+| `whitelistHTTP`           | Secure HTTP by enabling whitelisting of client networks. | `bool`     | `true`  |
+| `whitelist` | List of client networks. | `[]string` | `[]` |
+| `machines` | Configuration of physical machine instances. | `[]object` | `[]` |
+| `machines[i].hostname` | Hostname. | `string` | `""` |
+| `machines[i].arch` | Architecture. | `string` | `""` |
+| `machines[i].ip` | IP address configuration. | `object` | `{}` |
+| `machines[i].ip.address` | IP address. | `string` | `""` |
+| `machines[i].ip.gateway` | IP gateway. | `string` | `""` |
+| `machines[i].ip.netmask` | Netmask. | `string` | `""` |
+| `machines[i].leaseTime` | Lease time. | `int` | `0` |
+| `machines[i].mac` | MAC addresses. | `[]string` | `[]` |
+| `machines[i].nameServers` | Name servers. | `[]string` | `[]` |
+| `machines[i].timeServers` | Time servers. | `[]string` | `[]` |
+| `machines[i].uefi` | UEFI. | `bool` | `false` |
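+
+As an illustration, a BootBox configuration using these parameters might look like the following (all values are placeholders):
+
+```yaml
+whitelistHTTP: true
+whitelist:
+  - 192.168.100.0/24
+machines:
+  - hostname: node-1
+    arch: amd64
+    ip:
+      address: 192.168.100.11
+      gateway: 192.168.100.1
+      netmask: 255.255.255.0
+    leaseTime: 86400
+    mac:
+      - "52:54:00:aa:bb:cc"
+    nameServers:
+      - 192.168.100.1
+    timeServers:
+      - 192.168.100.1
+    uefi: true
+```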
+
diff --git a/content/en/docs/v1.3/operations/services/etcd.md b/content/en/docs/v1.3/operations/services/etcd.md
new file mode 100644
index 00000000..7b6860b1
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/etcd.md
@@ -0,0 +1,25 @@
+---
+title: "Etcd Service Reference"
+linkTitle: "Etcd"
+---
+
+
+
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | ------------------------------------ | ---------- | ------- |
+| `size` | Persistent Volume size. | `quantity` | `4Gi` |
+| `storageClass` | StorageClass used to store the data. | `string` | `""` |
+| `replicas` | Number of etcd replicas. | `int` | `3` |
+| `resources` | Resource configuration for etcd. | `object` | `{}` |
+| `resources.cpu` | Number of CPU cores allocated. | `quantity` | `1000m` |
+| `resources.memory` | Amount of memory allocated. | `quantity` | `512Mi` |
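+
+For example, an etcd configuration with a larger volume might look like this (values are illustrative):
+
+```yaml
+size: 10Gi
+storageClass: replicated
+replicas: 3
+resources:
+  cpu: 1000m
+  memory: 512Mi
+```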
+
diff --git a/content/en/docs/v1.3/operations/services/ingress.md b/content/en/docs/v1.3/operations/services/ingress.md
new file mode 100644
index 00000000..2aafd4d2
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/ingress.md
@@ -0,0 +1,26 @@
+---
+title: "Ingress-NGINX Controller Reference"
+linkTitle: "Ingress"
+---
+
+
+
+
+## Parameters
+
+### Common parameters
+
+| Name | Description | Type | Value |
+| ------------------ | --------------------------------------------------------------------------------------------------------------------------------------- | ---------- | ------- |
+| `replicas` | Number of ingress-nginx replicas. | `int` | `2` |
+| `whitelist` | List of client networks. | `[]string` | `[]` |
+| `cloudflareProxy`  | Restore original visitor IPs when Cloudflare proxying is enabled.                                                                        | `bool`     | `false` |
+| `resources` | Explicit CPU and memory configuration for each ingress-nginx replica. When omitted, the preset defined in `resourcesPreset` is applied. | `object` | `{}` |
+| `resources.cpu` | CPU available to each replica. | `quantity` | `""` |
+| `resources.memory` | Memory (RAM) available to each replica. | `quantity` | `""` |
+| `resourcesPreset` | Default sizing preset used when `resources` is omitted. | `string` | `micro` |
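+
+As a sketch, an ingress configuration with a whitelist and explicit resources might look like this (values are illustrative):
+
+```yaml
+replicas: 2
+whitelist:
+  - 203.0.113.0/24
+cloudflareProxy: false
+resources:
+  cpu: 500m
+  memory: 512Mi
+```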
+
diff --git a/content/en/docs/v1.3/operations/services/monitoring/_index.md b/content/en/docs/v1.3/operations/services/monitoring/_index.md
new file mode 100644
index 00000000..61dd5789
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/monitoring/_index.md
@@ -0,0 +1,5 @@
+---
+title: "Monitoring Hub Reference"
+linkTitle: "Monitoring"
+---
+{{< include "docs/v1.3/operations/services/_include/monitoring-overview.md" >}}
\ No newline at end of file
diff --git a/content/en/docs/v1.3/operations/services/monitoring/alerting.md b/content/en/docs/v1.3/operations/services/monitoring/alerting.md
new file mode 100644
index 00000000..3d91dc73
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/monitoring/alerting.md
@@ -0,0 +1,360 @@
+---
+title: "Monitoring Alerting"
+linkTitle: "Alerting"
+description: "Configure and manage alerts in Cozystack monitoring system using Alerta and Alertmanager."
+weight: 36
+---
+
+## Overview
+
+The alerting system in Cozystack integrates Prometheus, Alertmanager, and Alerta to provide comprehensive monitoring and notification capabilities. Alerts are generated based on metrics collected by VMAgent and stored in VMCluster, then routed through Alertmanager for grouping and deduplication, and finally managed by Alerta for notifications via various channels like Telegram and Slack.
+
+### Alerting Flow
+
+```mermaid
+sequenceDiagram
+ participant P as Prometheus
+ participant AM as Alertmanager
+ participant A as Alerta
+ participant T as Telegram
+ participant S as Slack
+ P->>AM: Send Alert
+ AM->>A: Forward Alert
+ A->>T: Send Notification
+ A->>S: Send Notification
+```
+
+## Configuring Alerts in Alerta
+
+Alerta is the alerting system integrated into Cozystack's monitoring stack. It processes alerts from various sources and provides notifications through multiple channels.
+
+### Alert Rules
+
+Alerts are generated based on Prometheus rules defined in the monitoring configuration. You can configure custom alert rules by modifying the PrometheusRule resources in your tenant's namespace.
+
+To create custom alerts, define PrometheusRule manifests with expressions that evaluate to true when the alert condition is met. Each rule includes:
+
+- **expr**: The PromQL expression to evaluate.
+- **for**: Duration the condition must be true before firing the alert.
+- **labels**: Metadata like severity.
+- **annotations**: Descriptive information for notifications.
+
+Example of a custom alert rule:
+
+```yaml
+apiVersion: monitoring.coreos.com/v1
+kind: PrometheusRule
+metadata:
+ name: custom-alerts
+ namespace: tenant-name
+spec:
+ groups:
+ - name: custom.rules
+ rules:
+ - alert: HighCPUUsage
+ expr: (1 - avg(rate(node_cpu_seconds_total{mode="idle"}[5m]))) * 100 > 80
+ for: 5m
+ labels:
+ severity: warning
+ annotations:
+ summary: "High CPU usage detected"
+ description: "CPU usage is above 80% for more than 5 minutes"
+```
+
+### Severity Levels
+
+Alerta supports the following severity levels:
+
+- **critical**: Urgent issues that need immediate action
+- **major**: Significant problems affecting operations
+- **minor**: Minor issues
+- **warning**: Potential issues that require attention
+- **informational**: Low-priority information
+
+You can configure which severities trigger notifications in the Alerta configuration.
+
+### Integrations
+
+#### Telegram Integration
+
+To enable Telegram notifications, configure the following in your monitoring settings:
+
+```yaml
+alerta:
+ alerts:
+ telegram:
+ token: "your-telegram-bot-token"
+ chatID: "chat-id-1,chat-id-2"
+ disabledSeverity:
+ - informational
+```
+
+#### Slack Integration
+
+For Slack notifications:
+
+```yaml
+alerta:
+ alerts:
+ slack:
+ url: "https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
+ disabledSeverity:
+ - informational
+ - warning
+```
+
+#### Email Integration
+
+To enable email notifications:
+
+```yaml
+alerta:
+ alerts:
+ email:
+ smtpHost: "smtp.example.com"
+ smtpPort: 587
+ smtpUser: "alerts@example.com"
+ smtpPassword: "your-password"
+ fromAddress: "alerts@example.com"
+ toAddress: "team@example.com"
+ disabledSeverity:
+ - informational
+```
+
+#### PagerDuty Integration
+
+For PagerDuty notifications:
+
+```yaml
+alerta:
+ alerts:
+ pagerduty:
+ serviceKey: "YOUR_PAGERDUTY_INTEGRATION_KEY"
+ disabledSeverity:
+ - informational
+ - warning
+```
+
+For detailed configuration options, see [Monitoring Hub Reference]({{% ref "/docs/v1.3/operations/services/monitoring" %}}).
+
+## Alert Examples
+
+Here are common alert examples for system monitoring:
+
+### CPU Usage Alert
+
+```yaml
+- alert: HighCPUUsage
+ expr: 100 - (avg by(instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100) > 80
+ for: 5m
+ labels:
+ severity: warning
+ annotations:
+ summary: "High CPU usage on {{ $labels.instance }}"
+ description: "CPU usage is {{ $value }}% for more than 5 minutes"
+```
+
+### Memory Usage Alert
+
+```yaml
+- alert: HighMemoryUsage
+ expr: (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100 > 90
+ for: 5m
+ labels:
+ severity: critical
+ annotations:
+ summary: "High memory usage on {{ $labels.instance }}"
+ description: "Memory usage is {{ $value }}% for more than 5 minutes"
+```
+
+### Disk Space Alert
+
+```yaml
+- alert: LowDiskSpace
+ expr: (node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 < 10
+ for: 5m
+ labels:
+ severity: critical
+ annotations:
+ summary: "Low disk space on {{ $labels.instance }}"
+ description: "Disk space available is {{ $value }}% for more than 5 minutes"
+```
+
+### WorkloadNotOperational Alert
+
+```yaml
+- alert: WorkloadNotOperational
+ expr: up{job="workload-monitor"} == 0
+ for: 1m
+ labels:
+ severity: critical
+ annotations:
+ summary: "Workload {{ $labels.workload }} is not operational"
+ description: "Workload monitor reports the workload is down"
+```
+
+### Network Interface Down Alert
+
+```yaml
+- alert: NetworkInterfaceDown
+ expr: node_network_up{device!~"lo"} == 0
+ for: 2m
+ labels:
+ severity: critical
+ annotations:
+ summary: "Network interface {{ $labels.device }} is down on {{ $labels.instance }}"
+ description: "Network interface has been down for more than 2 minutes"
+```
+
+### Kubernetes Pod Crash Alert
+
+```yaml
+- alert: KubernetesPodCrashLooping
+  expr: increase(kube_pod_container_status_restarts_total[10m]) > 3
+  for: 5m
+  labels:
+    severity: warning
+  annotations:
+    summary: "Pod {{ $labels.pod }} is crash looping"
+    description: "Pod has restarted more than 3 times in the last 10 minutes"
+```
+
+### High Network Latency Alert
+
+```yaml
+- alert: HighNetworkLatency
+ expr: node_network_receive_bytes_total / node_network_receive_packets_total > 1500
+ for: 5m
+ labels:
+ severity: warning
+ annotations:
+ summary: "High network latency on {{ $labels.instance }}"
+ description: "Average packet size exceeds 1500 bytes, indicating potential latency issues"
+```
+
+## Managing Alerts
+
+### Escalation
+
+Alerts can be escalated based on duration and severity. Configure escalation policies in Alerta to automatically increase severity or notify additional channels if an alert remains unresolved.
+
+Escalation helps ensure that critical issues are addressed promptly. You can define escalation rules based on:
+
+- Time thresholds (e.g., escalate after 15 minutes)
+- Severity levels
+- Alert attributes (e.g., specific services or environments)
+
+Example escalation configuration:
+
+- Warning alerts escalate to critical after 30 minutes
+- Critical alerts trigger immediate notifications to on-call personnel
+- Major alerts notify management after 1 hour
+
+To configure escalation in Alerta, use the web interface or API to set up escalation policies for different alert types.
+
+### Suppression
+
+You can suppress alerts temporarily using Alerta's silencing feature. This is useful during maintenance windows, planned outages, or when investigating known issues without triggering notifications.
+
+Silences can be created for specific alerts or based on filters like environment, resource, or event type. Silenced alerts are still visible in the Alerta dashboard but do not generate notifications.
+
+To create a silence:
+
+1. Go to the Alerta web interface
+2. Navigate to the Alerts section
+3. Select the alert to silence or use filters to silence multiple alerts
+4. Choose "Silence" and set the duration and reason
+
+Alternatively, use the API:
+
+```bash
+curl -X POST https://alerta.example.com/api/v2/silences \
+ -H "Authorization: Bearer YOUR_API_KEY" \
+ -H "Content-Type: application/json" \
+ -d '{
+ "environment": "production",
+ "resource": "server-01",
+ "event": "HighCPUUsage",
+ "startTime": "2023-12-01T00:00:00Z",
+ "duration": 3600,
+ "comment": "Scheduled maintenance"
+ }'
+```
+
+Silences can also be managed via Alertmanager for more advanced routing-based suppression.
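+
+If you route alerts through Alertmanager, a silence can also be created from the command line with `amtool` (a sketch; the matchers and Alertmanager URL are placeholders):
+
+```bash
+amtool silence add alertname="HighCPUUsage" instance="server-01" \
+  --comment="Scheduled maintenance" \
+  --duration="2h" \
+  --alertmanager.url=http://alertmanager.example.com:9093
+```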
+
+## Alertmanager Configuration
+
+Alertmanager handles routing, grouping, and deduplication of alerts before sending notifications. It acts as an intermediary between Prometheus and notification systems like Alerta.
+
+### Grouping
+
+Alerts can be grouped by labels to reduce noise and prevent alert fatigue. Configure grouping in the Alertmanager configuration:
+
+```yaml
+route:
+ group_by: ['alertname', 'cluster', 'namespace']
+ group_wait: 10s
+ group_interval: 10s
+ repeat_interval: 1h
+ receiver: 'default'
+```
+
+- **group_by**: Labels to group alerts by
+- **group_wait**: Time to wait before sending the first notification
+- **group_interval**: Interval between notifications for the same group
+- **repeat_interval**: Minimum time between notifications
+
+### Routing
+
+Route alerts to different receivers based on labels, allowing for targeted notifications:
+
+```yaml
+route:
+ receiver: 'default'
+ routes:
+ - match:
+ severity: critical
+ receiver: 'critical-alerts'
+ - match:
+ team: devops
+ receiver: 'devops-team'
+ - match_re:
+ namespace: 'kube-.*'
+ receiver: 'kubernetes-alerts'
+
+receivers:
+- name: 'default'
+ slack_configs:
+ - api_url: 'https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK'
+ channel: '#alerts'
+- name: 'critical-alerts'
+ pagerduty_configs:
+ - service_key: 'YOUR_PAGERDUTY_KEY'
+- name: 'devops-team'
+ email_configs:
+ - to: 'devops@example.com'
+ from: 'alertmanager@example.com'
+ smarthost: 'smtp.example.com:587'
+ auth_username: 'alertmanager@example.com'
+ auth_password: 'password'
+- name: 'kubernetes-alerts'
+ webhook_configs:
+ - url: 'http://alerta.example.com/api/webhooks/prometheus'
+ send_resolved: true
+```
+
+### Inhibition
+
+Use inhibition rules to suppress certain alerts when other related alerts are firing:
+
+```yaml
+inhibit_rules:
+- source_match:
+ alertname: 'NodeDown'
+ target_match:
+ alertname: 'PodCrashLooping'
+ equal: ['node']
+```
+
+For more information on Alertmanager configuration, refer to the [official documentation](https://prometheus.io/docs/alerting/latest/alertmanager/).
\ No newline at end of file
diff --git a/content/en/docs/v1.3/operations/services/monitoring/custom-metrics.md b/content/en/docs/v1.3/operations/services/monitoring/custom-metrics.md
new file mode 100644
index 00000000..846128a7
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/monitoring/custom-metrics.md
@@ -0,0 +1,137 @@
+---
+title: "Custom Metrics Collection"
+linkTitle: "Custom Metrics"
+description: "Connect your own Prometheus exporters to the Cozystack tenant monitoring stack using VMServiceScrape and VMPodScrape."
+weight: 15
+---
+
+## Overview
+
+Cozystack tenant monitoring supports scraping custom metrics from your own applications and exporters. The tenant VMAgent discovers scrape targets through Kubernetes namespace labels, allowing you to connect any application that exposes a Prometheus-compatible `/metrics` endpoint.
+
+This guide explains how to create `VMServiceScrape` and `VMPodScrape` resources so that the tenant VMAgent collects your custom metrics and makes them available in Grafana.
+
+## Prerequisites
+
+- Monitoring is enabled for your tenant (see [Monitoring Setup]({{< ref "setup" >}}))
+- Your application or exporter is deployed and exposes a Prometheus-compatible `/metrics` endpoint
+- You have `kubectl` access to the cluster
+
+## Using VMServiceScrape
+
+A `VMServiceScrape` tells the tenant VMAgent to scrape metrics from endpoints behind a Kubernetes Service.
+
+### Example
+
+Suppose you have a Service named `my-app` in namespace `my-app-ns` that exposes metrics on port `metrics` at path `/metrics`:
+
+```yaml
+apiVersion: operator.victoriametrics.com/v1beta1
+kind: VMServiceScrape
+metadata:
+ name: my-app-metrics
+ namespace: my-app-ns
+spec:
+ selector:
+ matchLabels:
+ app: my-app
+ endpoints:
+ - port: metrics
+ path: /metrics
+ interval: 30s
+```
+
+Apply the resource:
+
+```bash
+kubectl apply --filename vmservicescrape.yaml --namespace my-app-ns
+```
+
+### Key Fields
+
+| Field | Description |
+| --- | --- |
+| `spec.selector.matchLabels` | Label selector to find the target Service |
+| `spec.endpoints[].port` | Named port on the Service to scrape |
+| `spec.endpoints[].path` | HTTP path for metrics (default: `/metrics`) |
+| `spec.endpoints[].interval` | Scrape interval (default: inherited from VMAgent, typically `30s`) |
+
+## Using VMPodScrape
+
+A `VMPodScrape` scrapes metrics directly from Pods, without requiring a Service. This is useful for sidecar exporters or applications that do not have a corresponding Service.
+
+### Example
+
+Suppose you have Pods labeled `app: my-worker` that expose metrics on port `9090` at path `/metrics`:
+
+```yaml
+apiVersion: operator.victoriametrics.com/v1beta1
+kind: VMPodScrape
+metadata:
+ name: my-worker-metrics
+ namespace: my-app-ns
+spec:
+ selector:
+ matchLabels:
+ app: my-worker
+ podMetricsEndpoints:
+ - port: "9090"
+ path: /metrics
+```
+
+Apply the resource:
+
+```bash
+kubectl apply --filename vmpodscrape.yaml --namespace my-app-ns
+```
+
+### Key Fields
+
+| Field | Description |
+| --- | --- |
+| `spec.selector.matchLabels` | Label selector to find the target Pods |
+| `spec.podMetricsEndpoints[].port` | Port name or number on the Pod to scrape |
+| `spec.podMetricsEndpoints[].path` | HTTP path for metrics (default: `/metrics`) |
+
+## Verifying Metrics Collection
+
+After creating a `VMServiceScrape` or `VMPodScrape`, verify that the tenant VMAgent is scraping your targets.
+
+### Check VMAgent Targets
+
+List the VMAgent pods in your tenant namespace:
+
+```bash
+kubectl get pods --namespace <tenant-namespace> --selector app.kubernetes.io/name=vmagent
+```
+
+Port-forward to the VMAgent UI to inspect active targets:
+
+```bash
+kubectl port-forward --namespace <tenant-namespace> service/vmagent-vmagent 8429:8429
+```
+
+Then open `http://localhost:8429/targets` in your browser. Your new scrape target should appear in the list with status `UP`.
+
+### Query Metrics in Grafana
+
+1. Open Grafana at `https://grafana.<tenant-domain>`
+2. Go to **Explore**
+3. Select the **VictoriaMetrics** datasource
+4. Run a PromQL query for one of your custom metrics, for example:
+
+ ```promql
+ up{job="my-app-ns/my-app-metrics"}
+ ```
+
+A result of `1` confirms that the target is being scraped successfully.
+
+## Troubleshooting
+
+- **Target not appearing in VMAgent**: Verify that the namespace has the `namespace.cozystack.io/monitoring` label and that the `VMServiceScrape`/`VMPodScrape` is created in that namespace
+- **Target shows status DOWN**: Check that the application is running and the metrics endpoint is reachable on the configured port and path
+- **No metrics in Grafana**: Confirm that the VMAgent is writing to the correct VMCluster by checking the VMAgent logs:
+
+ ```bash
+  kubectl logs --namespace <tenant-namespace> --selector app.kubernetes.io/name=vmagent
+ ```
diff --git a/content/en/docs/v1.3/operations/services/monitoring/dashboards.md b/content/en/docs/v1.3/operations/services/monitoring/dashboards.md
new file mode 100644
index 00000000..f5aeee69
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/monitoring/dashboards.md
@@ -0,0 +1,154 @@
+---
+title: "Monitoring Dashboards"
+linkTitle: "Dashboards"
+description: "Learn how to visualize metrics and create custom dashboards in Grafana for monitoring Cozystack clusters and applications."
+weight: 10
+---
+
+## Overview
+
+Cozystack integrates Grafana as the primary visualization tool for monitoring metrics collected by VictoriaMetrics (VM). This section covers accessing pre-built dashboards, creating custom visualizations, and integrating external data sources to provide comprehensive observability for your Cozystack clusters and applications.
+
+## Accessing Grafana
+
+To access Grafana and explore dashboards:
+
+1. Navigate to the Grafana URL: `https://grafana.<tenant-domain>`, where `<tenant-domain>` is your tenant's domain.
+2. Log in using your tenant credentials (OIDC or token-based authentication).
+3. Once logged in, you can browse pre-configured dashboards in the "Dashboards" section.
+
+For initial setup and configuration details, refer to [Monitoring Setup]({{% ref "/docs/v1.3/operations/services/monitoring/setup" %}}).
+
+## Pre-built Dashboards
+
+Cozystack provides a set of pre-configured dashboards in Grafana, automatically deployed and updated via the monitoring stack. These dashboards are defined in the `packages/extra/monitoring/dashboards.list` file and offer out-of-the-box insights into system and application performance.
+
+### Cluster Infrastructure Dashboards
+
+- **Kubernetes Cluster Overview**: Provides a high-level view of the entire Kubernetes cluster, including node status, pod health, cluster-wide CPU/memory/disk utilization, and API server performance. Useful for quick health checks and identifying resource bottlenecks across the cluster.
+- **Node Metrics**: Detailed per-node metrics such as CPU usage, memory consumption, disk I/O, network traffic, and system load. Includes panels for individual nodes and aggregated views. Ideal for diagnosing node-specific issues.
+- **ETCD Metrics**: Monitors ETCD cluster health, including latency, storage usage, leader elections, and database operations. Essential for ensuring the reliability of Kubernetes control plane data.
+- **Storage Metrics**: Insights into storage components like Linstor and SeaweedFS, covering volume usage, I/O operations, replication status, and performance metrics. Helps in managing storage resources and troubleshooting storage-related problems.
+
+### Application and Service Dashboards
+
+- **Tenant Applications**: Customizable dashboards for user-deployed applications, displaying metrics such as request rates, error rates, response times, and throughput. Supports applications like web services, APIs, and microservices running in tenant namespaces.
+- **Service Mesh**: Metrics for networking components, including ingress controllers (e.g., NGINX, Traefik), load balancers, and service mesh proxies. Covers traffic patterns, latency, error rates, and connectivity health.
+- **Database Metrics**: Specialized dashboards for supported databases such as PostgreSQL, MySQL, Redis, and others. Includes query performance, connection counts, cache hit rates, and storage metrics. For example, the PostgreSQL dashboard shows active connections, slow queries, and replication status.
+
+These dashboards are regularly updated with new releases. For screenshots and visual examples, check the [Cozystack blog](https://cozystack.io/blog/) for release notes featuring dashboard previews.
+
+## Creating Custom Dashboards
+
+If the pre-built dashboards do not meet your needs, you can create custom dashboards in Grafana to visualize specific metrics or combine data from multiple sources.
+
+### Steps to Create a Custom Dashboard
+
+1. **Access Grafana**: Log in to Grafana using your tenant credentials.
+2. **Create a New Dashboard**: Click the "+" icon in the sidebar and select "Dashboard".
+3. **Add Panels**: Click "Add new panel" to create visualizations. Choose from various panel types and configure data sources.
+4. **Configure Queries**: Use MetricsQL (VictoriaMetrics query language) to fetch and transform data.
+5. **Customize Layout**: Arrange panels, set time ranges, and add annotations or variables for interactivity.
+6. **Save and Share**: Save the dashboard, set permissions, and optionally export it for reuse.
+
+
+### Example Queries
+
+Here are some common MetricsQL queries for custom panels:
+
+- **Pod CPU Usage**:
+ ```promql
+ rate(container_cpu_usage_seconds_total{pod=~"$pod"}[5m])
+ ```
+ Displays CPU usage rate for selected pods over time.
+
+- **Memory Usage Percentage**:
+ ```promql
+ (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) * 100
+ ```
+ Shows memory utilization as a percentage for nodes.
+
+- **Network Traffic**:
+ ```promql
+ rate(node_network_receive_bytes_total[5m]) + rate(node_network_transmit_bytes_total[5m])
+ ```
+ Monitors incoming and outgoing network traffic.
+
+- **Application Response Time**:
+ ```promql
+ histogram_quantile(0.95, rate(http_request_duration_seconds_bucket{job="my-app"}[5m]))
+ ```
+ Calculates the 95th percentile response time for an application.
+
+### Panel Types and Best Practices
+
+- **Time Series (Graph)**: Ideal for trends over time, such as CPU usage or request rates. Use for historical data visualization.
+- **Stat**: Displays single values, like current CPU percentage or total requests. Good for at-a-glance metrics.
+- **Table**: Shows tabular data, such as top processes or alert summaries. Useful for detailed listings.
+- **Heatmap**: Visualizes density, like error rates across time intervals. Effective for spotting patterns.
+- **Gauge**: Represents values on a scale, such as disk usage percentage.
+
+When creating panels, consider:
+- Use appropriate time ranges and refresh intervals.
+- Add thresholds and alerts directly in panels for proactive monitoring.
+- Leverage variables for dynamic filtering (e.g., by namespace or pod name).
+
+For advanced querying and functions, refer to the [VictoriaMetrics MetricsQL documentation](https://docs.victoriametrics.com/MetricsQL.html).
+
+## Integrating External Data Sources
+
+Cozystack allows integration with external monitoring systems to centralize observability.
+
+### Adding External Prometheus
+
+To integrate an external Prometheus instance:
+
+1. In Grafana, go to "Configuration" > "Data Sources" > "Add data source".
+2. Select "Prometheus" as the type.
+3. Enter the external Prometheus URL, authentication details (if required), and scrape interval.
+4. Test the connection and save.
+5. Use PromQL in your dashboards to query the external data.
+
+### Custom Application Metrics
+
+For applications exposing custom metrics:
+
+- Ensure your application exposes metrics in Prometheus format (e.g., via `/metrics` endpoint).
+- Configure VMAgent in Cozystack to scrape these endpoints by updating the monitoring configuration.
+- Metrics will be ingested into VM and available for querying in Grafana.
+
+Follow Prometheus [metric naming conventions](https://prometheus.io/docs/practices/naming/) to ensure compatibility. For configuration examples, see [Monitoring Hub Reference]({{% ref "/docs/v1.3/operations/services/monitoring" %}}).
+
+### Grafana Data Sources Integration
+
+This diagram shows how external data sources are integrated into Grafana for centralized monitoring.
+
+```mermaid
+graph TD
+    A[VictoriaMetrics VM Data Source: MetricsQL Queries] --> B[Grafana]
+    C[VLogs Data Source: Log Queries] --> B
+    D[External Prometheus: PromQL Queries] --> B
+    E[Custom Application Metrics: Prometheus Format] --> B
+    B --> F[Dashboards: Pre-built and Custom]
+    F --> G[Visualization: Metrics + Logs Correlation]
+```
+
+## Data Sources Configuration
+
+Grafana in Cozystack is pre-configured with optimized data sources for seamless integration.
+
+### VictoriaMetrics (VM) Data Source
+
+- **Type**: Prometheus-compatible (MetricsQL).
+- **URL**: Internal VM cluster endpoint within the tenant namespace.
+- **Authentication**: Automatic via service account tokens.
+- **Usage**: Primary source for time-series metrics. Supports high-performance querying and aggregation.
+
+### VLogs Data Source
+
+- **Type**: Custom plugin for log querying.
+- **Purpose**: Enables log visualization and correlation with metrics.
+- **Configuration**: Automatically set up for tenant-specific log streams.
+- **Usage**: Add log panels to dashboards to combine metrics and logs, e.g., for troubleshooting application issues.
+
+To modify data source settings, access the Grafana admin panel (admin privileges required) or update the monitoring configuration via the Cozystack API. For detailed parameters, refer to [Monitoring Hub Reference]({{% ref "/docs/v1.3/operations/services/monitoring" %}}).
\ No newline at end of file
diff --git a/content/en/docs/v1.3/operations/services/monitoring/logs.md b/content/en/docs/v1.3/operations/services/monitoring/logs.md
new file mode 100644
index 00000000..c4d497c8
--- /dev/null
+++ b/content/en/docs/v1.3/operations/services/monitoring/logs.md
@@ -0,0 +1,210 @@
+---
+title: "Monitoring Logs"
+linkTitle: "Logs"
+description: "Learn how to collect, store, search, and analyze logs in Cozystack using Fluent Bit and VictoriaLogs for comprehensive observability."
+weight: 11
+---
+
+## Collecting and Storing Logs
+
+Cozystack uses Fluent Bit for log collection and VictoriaLogs for log storage and querying. Logs are collected from various sources within the cluster and stored in dedicated log storages configured per tenant.
+
+### Configuring Logs Storages
+
+Log storages are configured through the monitoring hub parameters. Each tenant can have multiple log storage instances with customizable retention periods and storage sizes.
+
+| Parameter | Description | Type | Default |
+|-----------|-------------|------|---------|
+| `logsStorages` | Array of log storage configurations | `[]object` | `[]` |
+| `logsStorages[i].name` | Name of the storage instance | `string` | `""` |
+| `logsStorages[i].retentionPeriod` | Retention period for logs (e.g., "30d") | `string` | `"1"` |
+| `logsStorages[i].storage` | Persistent volume size | `string` | `"10Gi"` |
+| `logsStorages[i].storageClassName` | StorageClass for data persistence | `string` | `"replicated"` |
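+
+For example, a monitoring hub configuration with a single log storage instance might look like this (values are illustrative):
+
+```yaml
+logsStorages:
+  - name: default
+    retentionPeriod: "30d"
+    storage: 50Gi
+    storageClassName: replicated
+```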
+
+For detailed configuration options, see [Monitoring Hub Reference]({{% ref "/docs/v1.3/operations/services/monitoring" %}}).
+
+### Fluent Bit Inputs and Outputs
+
+Fluent Bit is configured to collect logs from:
+
+- **Kubernetes Pods**: Container logs from all namespaces
+- **System Logs**: Node-level logs and system services
+- **Application Logs**: Custom log sources via sidecar containers
+
+#### Example Fluent Bit Input Configuration
+
+```yaml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+ name: fluent-bit-config
+data:
+ fluent-bit.conf: |
+ [INPUT]
+ Name tail
+ Path /var/log/containers/*.log
+ Parser docker
+ Tag kube.*
+ Refresh_Interval 5
+
+ [OUTPUT]
+ Name vlogs
+ Match kube.*
+ Host vlogs-cluster
+ Port 9428
+```
+
+Logs are forwarded to VictoriaLogs for storage and indexing. The output plugin ensures logs are enriched with metadata like pod names, namespaces, and timestamps.
+
+## Logging Architecture
+
+The following diagram illustrates the logging architecture in Cozystack, showing how logs flow from various sources to storage and visualization tools.
+
+```mermaid
+graph TD
+ A[Kubernetes Pods] --> B[Fluent Bit]
+ C[System Logs] --> B
+ D[Application Logs] --> B
+ B --> E[VictoriaLogs]
+ E --> F[Grafana]
+ F --> G[Log Analysis and Dashboards]
+```
+
+## Searching and Analyzing Logs
+
+VictoriaLogs (VLogs) provides powerful querying capabilities for stored logs. Access VLogs through Grafana or directly via API for advanced log analysis.
+
+### Using VictoriaLogs
+
+- **Query Language**: Use VLogs query syntax to filter logs by fields, time ranges, and patterns.
+- **Integration with Grafana**: Visualize logs alongside metrics in dashboards.
+
+#### Example VLogs Query
+
+To search for error logs from a specific pod:
+
+```text
+_level:ERROR AND kubernetes_pod_name: "my-app-pod"
+```
+
+### Filters and Metadata
+
+Logs in Cozystack include rich metadata for effective filtering:
+
+- **Pod Metadata**: `kubernetes_pod_name`, `kubernetes_namespace_name`, `kubernetes_container_name`
+- **Tenant**: `tenant` — identifies which tenant the logs belong to
+- **Log Levels**: `_level` (INFO, WARN, ERROR, etc.)
+- **Timestamps**: Automatic timestamp parsing
+- **Custom Labels**: Application-specific labels added during collection
+
+#### Advanced Filtering
+
+Use complex queries to correlate logs:
+
+```text
+kubernetes_namespace_name: "kube-system" AND _level: "WARN" AND _msg: *timeout*
+```
+
+For more on VLogs querying, refer to the [VictoriaLogs documentation](https://docs.victoriametrics.com/victorialogs/).
+
+## Viewing Tenant Kubernetes Cluster Logs
+
+When you run workloads in a [tenant Kubernetes cluster]({{% ref "/docs/v1.3/kubernetes" %}}), their logs are collected and forwarded to the parent tenant's VictoriaLogs instance. You can then query these logs in Grafana using specific label filters.
+
+### Prerequisites
+
+Enable the `monitoringAgents` addon on the tenant Kubernetes cluster. This deploys agents inside the cluster that collect logs and forward them to VictoriaLogs.
+
+Via the Cozystack dashboard, set `addons.monitoringAgents.enabled: true` in the Kubernetes application parameters, or apply it programmatically:
+
+```yaml
+addons:
+ monitoringAgents:
+ enabled: true
+```
+
+See [Managed Kubernetes parameters]({{% ref "/docs/v1.3/kubernetes#cluster-addons" %}}) for details.
+
+### Log Labels
+
+Logs from tenant Kubernetes clusters are enriched with the following labels:
+
+| Label | Description | Example |
+| --- | --- | --- |
+| `tenant` | Tenant identifier (format: `tenant-<name>`) | `tenant-workload` |
+| `kubernetes_namespace_name` | Namespace within the tenant Kubernetes cluster | `default` |
+| `kubernetes_pod_name` | Pod name | `my-app-6b7b8c9b89-ccqgf` |
+| `kubernetes_container_name` | Container name within the pod | `my-app` |
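+
+For example, to view error logs from the `default` namespace of a hypothetical tenant cluster `tenant-workload`, combine these labels using the filter syntax shown earlier:
+
+```text
+tenant: "tenant-workload" AND kubernetes_namespace_name: "default" AND _level: ERROR
+```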
+
+### Querying Logs in Grafana
+
+1. Open Grafana at `https://grafana.