[docs] Update managed apps reference for v1.3.0 #507

myasnikovdaniil wants to merge 4 commits into main from
Conversation
Signed-off-by: Myasnikov Daniil <myasnikovdaniil2001@gmail.com>
✅ Deploy Preview for cozystack ready!

CodeRabbit review skipped: this PR contains 236 files, 86 over the limit of 150.
Code Review
This pull request introduces comprehensive documentation for Cozystack v1.3, covering managed applications, networking architecture, cluster maintenance, and multi-location deployment guides. The feedback focuses on correcting technical inaccuracies in the documentation, such as invalid cron expressions for PostgreSQL backups, incorrect Kubernetes CLI commands for LINSTOR management, and invalid default values for Kafka and MongoDB replicas. Additionally, several version strings and namespace references in troubleshooting and installation guides were updated to ensure consistency with the v1.3.0 release.
    retentionPolicy: 30d
    destinationPath: s3://bucket/path/to/folder/
    endpointURL: http://minio-gateway-service:9000
    schedule: "0 2 * * * *"
The cron schedule 0 2 * * * * contains 6 fields. Standard cron expressions (and those typically used by CloudNativePG) expect 5 fields (minute hour day month day-of-week). As written, this schedule might be interpreted as triggering at 2 minutes past every hour. If the intention is to run the backup daily at 2:00 AM, it should be 0 2 * * *.
Suggested change: `schedule: "0 2 * * *"`
This file is autogenerated from the upstream packages/apps/postgres/README.md in cozystack/cozystack; edits here would be overwritten on the next release sync. Also, CloudNativePG's ScheduledBackup.spec.schedule uses a 6-field cron with seconds as the first field, so the 5-field suggestion (0 2 * * *) would not parse. If the current value schedules at the wrong time, the fix needs to land in the upstream README (likely 0 0 2 * * * for daily at 02:00).
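The whole disagreement comes down to field count, which is easy to check mechanically. A minimal shell sketch (it only counts whitespace-separated fields, which is all that distinguishes the two formats):

```bash
# Standard cron takes 5 fields (minute hour day month day-of-week);
# CloudNativePG's ScheduledBackup.spec.schedule takes 6, with seconds first.
# read -a splits on whitespace without glob-expanding the asterisks.
for expr in "0 2 * * * *" "0 2 * * *" "0 0 2 * * *"; do
  read -ra fields <<<"$expr"
  echo "\"$expr\": ${#fields[@]} fields"
done
```

Read as 6 fields, `0 2 * * * *` fires at second 0 of minute 2 every hour; `0 0 2 * * *` fires at 02:00:00 daily.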
| Name | Description | Type | Default |
| ---- | ----------- | ---- | ------- |
| `backup` | Backup configuration. | `object` | `{}` |
| `backup.enabled` | Enable regular backups. | `bool` | `false` |
| `backup.schedule` | Cron schedule for automated backups. | `string` | `0 2 * * * *` |
The cron schedule 0 2 * * * * is inconsistent with the 5-field format used in other managed application references (like MariaDB or ClickHouse) for daily backups at 2:00 AM.
Suggested change: default `0 2 * * *` (was `0 2 * * * *`).
This file is autogenerated from the upstream packages/apps/postgres/README.md in cozystack/cozystack; edits here would be overwritten on the next release sync. Also, CloudNativePG's ScheduledBackup.spec.schedule uses a 6-field cron with seconds as the first field, so the 5-field suggestion (0 2 * * *) would not parse. If the current value schedules at the wrong time, the fix needs to land in the upstream README (likely 0 0 2 * * * for daily at 02:00).
Add the dependency to your Go module:

```bash
go get github.com/cozystack/cozystack/api/apps/v1alpha1@v1.2.0
```
This documentation is for Cozystack v1.3.0, but the installation command references version v1.2.0. It should be updated to point to the current release version.
Suggested change: `go get github.com/cozystack/cozystack/api/apps/v1alpha1@v1.3.0`
Fixed in 800256a. Switched the install command to {{< version-pin "cozystack_tag" >}}, which reads from data/versions/<version>.yaml so the version tracks the release cut automatically (v1.3/ renders as @v1.3.0; next/ tracks the upcoming release). Applied to both v1.3/ and next/.
kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv1 -- zpool set failmode=continue data
kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv2 -- zpool set failmode=continue data
kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv3 -- zpool set failmode=continue data
The command kubectl exec -ti -n cozy-linstor ds/linstor-satellite.srv1 is invalid. kubectl exec requires a specific pod name as a target. You cannot target a specific node's pod by appending the node name to the DaemonSet name in standard Kubernetes. A more reliable way to target the pod on a specific node is to use a label selector combined with a field selector for the node name.
Suggested change:

kubectl exec -ti -n cozy-linstor $(kubectl get pod -n cozy-linstor -l app.kubernetes.io/name=linstor-satellite --field-selector spec.nodeName=srv1 -o name) -- zpool set failmode=continue data
kubectl exec -ti -n cozy-linstor $(kubectl get pod -n cozy-linstor -l app.kubernetes.io/name=linstor-satellite --field-selector spec.nodeName=srv2 -o name) -- zpool set failmode=continue data
kubectl exec -ti -n cozy-linstor $(kubectl get pod -n cozy-linstor -l app.kubernetes.io/name=linstor-satellite --field-selector spec.nodeName=srv3 -o name) -- zpool set failmode=continue data
Fixed in 800256a. ds/<name>.<node> is not a valid kubectl selector; in Piraeus Operator v2 the satellites are individual Pods literally named linstor-satellite.<node>, so changing ds/ -> pod/ targets them correctly. Also applied the same fix to install/cozystack/platform.md, which had the identical pattern, in both v1.3/ and next/.
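Since the three commands differ only in the node name, they can also be generated in a loop. A sketch using the `pod/linstor-satellite.<node>` naming described above (`echo` prints the commands for a dry run instead of executing them against a cluster):

```bash
# Build the per-node satellite commands; swap echo for eval "$cmd"
# (or run the command directly) to execute against a live cluster.
for node in srv1 srv2 srv3; do
  cmd="kubectl exec -ti -n cozy-linstor pod/linstor-satellite.${node} -- zpool set failmode=continue data"
  echo "$cmd"
done
```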
| `topics[i].partitions` | Number of partitions. | `int` | `0` |
| `topics[i].replicas` | Number of replicas. | `int` | `0` |
Default values of 0 for partitions and replicas are invalid for Kafka topics. It is recommended to document sensible defaults, such as 1 partition and 3 replicas for high availability.
Suggested change:

| `topics[i].partitions` | Number of partitions. | `int` | `1` |
| `topics[i].replicas` | Number of replicas. | `int` | `3` |
This file is autogenerated from the upstream packages/apps/kafka/README.md in cozystack/cozystack; edits here would be overwritten on the next release sync. The shown 0 values come from the upstream chart's values.yaml. If the defaults should change, please open a PR against that file upstream.
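For illustration, a values snippet using the recommended defaults might look as follows (the `name` key is an assumption for the example; only `partitions` and `replicas` appear in the table above):

```yaml
topics:
  - name: events      # hypothetical topic name, for illustration only
    partitions: 1     # suggested minimum; 0 is not a valid partition count
    replicas: 3       # replication factor of 3 tolerates a single broker failure
```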
| `shardingConfig.mongos` | Number of mongos router replicas. | `int` | `2` |
| `shardingConfig.shards` | List of shard configurations. | `[]object` | `[...]` |
| `shardingConfig.shards[i].name` | Shard name. | `string` | `""` |
| `shardingConfig.shards[i].replicas` | Number of replicas in this shard. | `int` | `0` |
A default value of 0 replicas for a MongoDB shard is invalid. It should be at least 1, or 3 for a standard production-ready replica set configuration.
Suggested change:

| `shardingConfig.shards[i].replicas` | Number of replicas in this shard. | `int` | `3` |
Same as kafka - this file is autogenerated from packages/apps/mongodb/README.md upstream. Edits here would be overwritten on the next release sync; please raise the change upstream in cozystack/cozystack.
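As a sketch, a production-leaning `shardingConfig` using the keys documented above would read (the shard name is a placeholder):

```yaml
shardingConfig:
  mongos: 2           # router replicas (documented default)
  shards:
    - name: shard0    # placeholder shard name
      replicas: 3     # 3-member replica set per shard for quorum
```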
- Check the service and ingress:

  ```bash
  kubectl get svc,ingress -n cozy-monitoring -l app.kubernetes.io/name=grafana
  ```
In Cozystack, tenant-specific monitoring components like Grafana are deployed within the tenant's own namespace (e.g., tenant-root or a user tenant namespace), not in the system-wide cozy-monitoring namespace. The troubleshooting command should be updated to reflect this.
Suggested change: `kubectl get svc,ingress -n <tenant-namespace> -l app.kubernetes.io/name=grafana`
Fixed in 800256a. Confirmed against operations/services/monitoring/setup.md which already uses -n <tenant-namespace> for Grafana. Applied to both v1.3/ and next/.
| Variable | Default | Description |
| --- | --- | --- |
| `cozystack_api_server_host` | *(required)* | Internal IP of the control-plane node. |
| `cozystack_chart_version` | `1.0.0-rc.1` | Version of the Cozystack Helm chart. **Pin this explicitly.** |
The default version 1.0.0-rc.1 is outdated for the v1.3.0 documentation. It should be updated to the current release version to ensure users install the correct version by default.
Suggested change: default `1.3.0` (was `1.0.0-rc.1`).
Fixed in 800256a. Switched both the warning example (line 98) and the defaults table (line 180) to {{< version-pin "cozystack_version" >}}, which pulls from data/versions/<version>.yaml and tracks the current release. Applied in v1.3/ and next/.
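For illustration, an inventory variables snippet using the keys from the defaults table might look like this (the host IP is a placeholder):

```yaml
cozystack_api_server_host: "192.168.100.10"  # placeholder: internal IP of the control-plane node
cozystack_chart_version: "1.3.0"             # pin explicitly rather than relying on the default
```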
- - kind
- - metadata
- - metadata
  - name
The two entries are not a duplication: [metadata] sets metadata's position among the top-level keys, and [metadata, name] then sets name's position inside metadata. Real Cozystack application definitions use exactly this pattern - see for example packages/system/info-rd/cozyrds/info.yaml upstream: keysOrder: [["apiVersion"], ["appVersion"], ["kind"], ["metadata"], ["metadata", "name"]]. The docs example matches the convention, so keeping it as-is.
Pins the Cozystack-release-coupled versions in v1.3 docs (Talos image tag,
Cozystack release-asset URLs, talos.dev minor, helm --version example) to
values in data/versions/v1.3.yaml, resolved via a new {{< version-pin >}}
shortcode. Tool installers that stay backward-compatible across minors
(talm, boot-to-talos, talos-bootstrap, kubectl-etcd) keep floating on
main/latest so routine upstream bumps don't require doc edits.
release_next.sh snapshots data/versions/next.yaml to vX.Y.yaml at cut
time; init_version.sh seeds the data file from the source version, so a
fresh docs directory ships with working pins.
hack/release-checklist.md documents what to bump each minor and includes
grep invariants to catch new literal pins that bypass the shortcode.
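Under the scheme described above, a per-release data file would look roughly like this (a sketch; the key names `cozystack_version` and `cozystack_tag` are taken from this PR's commit messages, and any other keys are omitted):

```yaml
# data/versions/v1.3.yaml (sketch)
cozystack_version: "1.3.0"  # rendered by {{< version-pin "cozystack_version" >}}
cozystack_tag: "v1.3.0"     # rendered by {{< version-pin "cozystack_tag" >}}
```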
Signed-off-by: Myasnikov Daniil <myasnikovdaniil2001@gmail.com>
- go-types.md: pin `go get` to `{{< version-pin "cozystack_tag" >}}`
so the release tag tracks data/versions/<version>.yaml (was @v1.2.0).
- install-cozystack.md, install/cozystack/platform.md: change
`ds/linstor-satellite.srv<N>` to `pod/linstor-satellite.srv<N>` —
Piraeus Operator v2 names the satellite pods literally, while
`ds/<name>.<node>` is not valid kubectl selector syntax.
- monitoring-troubleshooting.md: Grafana is tenant-scoped, so the
service/ingress lookup must use the tenant namespace, not
`cozy-monitoring`.
- ansible.md: pin `cozystack_chart_version` default via
`{{< version-pin "cozystack_version" >}}` in both the alert
example and the defaults table (was `1.0.0-rc.1` / `1.0.0-rc.2`).
Same edits applied symmetrically to content/en/docs/next/ so the
fixes carry into future releases.
Signed-off-by: Myasnikov Daniil <myasnikovdaniil2001@gmail.com>
Rebuilds content/en/docs/next/ from the now-current v1.3/ trunk after the v1.3.0 release cut. Updates autogenerated source URLs from release-1.2 to v1.3.0, propagates version-pin shortcode adoption, and picks up upstream README changes (e.g., source.disk field in vm-disk).

Signed-off-by: Myasnikov Daniil <myasnikovdaniil2001@gmail.com>
Automated docs update for release v1.3.0.