
🌱 chore(ci): Add test to check if user changes are preserved#2512

Open

camilamacedo86 wants to merge 1 commit into operator-framework:main from camilamacedo86:test-asked

Conversation

@camilamacedo86
Contributor

@camilamacedo86 camilamacedo86 commented Feb 16, 2026

Add a test to ensure that OLM is not reverting user changes like kubectl rollout restart.

Copilot AI review requested due to automatic review settings February 16, 2026 17:56
@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 16, 2026
@netlify

netlify bot commented Feb 16, 2026

Deploy Preview for olmv1 ready!

Name Link
🔨 Latest commit f32caad
🔍 Latest deploy log https://app.netlify.com/projects/olmv1/deploys/69b9e77daa2d0c000811cfeb
😎 Deploy Preview https://deploy-preview-2512--olmv1.netlify.app

Copilot AI left a comment

Pull request overview

This PR adds an end-to-end test to verify that OLMv1, when using Server-Side Apply, does not revert user-initiated changes to deployed resources. The test specifically validates the scenario where a user runs kubectl rollout restart deployment, which was problematic in OLMv0. The PR is explicitly marked as Work In Progress (WIP).

Changes:

  • Added new feature file rollout-restart.feature with a scenario testing user-initiated deployment changes
  • Implemented three new step functions: UserAddsRestartAnnotation, ResourceHasRestartAnnotation, and ClusterExtensionAnnotationIsSet

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

File Description
test/e2e/features/rollout-restart.feature New feature file defining the test scenario for verifying OLMv1 doesn't revert user-initiated changes to deployments
test/e2e/steps/steps.go Added three new step functions to support the rollout restart test scenario and registered them in the step definitions


Copilot AI left a comment

Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.

Copilot AI left a comment

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

Copilot AI review requested due to automatic review settings February 18, 2026 09:13
@camilamacedo86 camilamacedo86 changed the title WIP: Add test verifies that OLMv1 does not revert user-initiated changes to deployed resources 🐛 fix: Preserve user changes to deployment pod templates Feb 18, 2026
@camilamacedo86 camilamacedo86 changed the title 🐛 fix: Preserve user changes to deployment pod templates 🐛 Preserve user changes to deployment pod templates Feb 18, 2026
@camilamacedo86 camilamacedo86 removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Feb 18, 2026
Copilot AI left a comment

Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.

@camilamacedo86 camilamacedo86 changed the title 🐛 Preserve user changes to deployment pod templates WIP 🐛 Preserve user changes to deployment pod templates Feb 18, 2026
Copilot AI review requested due to automatic review settings February 18, 2026 15:04
Copilot AI left a comment

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

@codecov

codecov bot commented Mar 10, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 68.61%. Comparing base (147cfa8) to head (f32caad).
⚠️ Report is 8 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2512      +/-   ##
==========================================
+ Coverage   64.33%   68.61%   +4.28%     
==========================================
  Files         131      131              
  Lines        9333     9333              
==========================================
+ Hits         6004     6404     +400     
+ Misses       2853     2438     -415     
- Partials      476      491      +15     
Flag Coverage Δ
e2e 39.00% <ø> (-3.21%) ⬇️
experimental-e2e 51.62% <ø> (+39.74%) ⬆️
unit 53.84% <ø> (ø)

Flags with carried forward coverage won't be shown.


Copilot AI review requested due to automatic review settings March 10, 2026 20:44
Copilot AI left a comment

Copilot reviewed 2 out of 2 changed files in this pull request and generated no new comments.

Copilot AI left a comment

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

@@ -0,0 +1,37 @@
@BoxcutterRuntime
Feature: Rollout Restart User Changes
Contributor


Maybe another way to express this might be that non-revision spec field values are preserved? Would that make sense?

Contributor


i.e. it's not just about user changes, but changes from other controllers as well. If a change does not interfere with the revision spec, OLM won't revert it.

Contributor


If this is the case, is it important to test this feature in this way? Would something simpler, like adding an annotation to the deployment and ensuring it doesn't get removed for 5 seconds, be sufficient?

Contributor Author

@camilamacedo86 camilamacedo86 Mar 11, 2026


If this is the case, is it important to test this feature in this way? Would something simpler, like adding an annotation to the deployment and ensuring it doesn't get removed for 5 seconds, be sufficient?

I don't think that covers the scenario we are looking for: operator-framework/operator-lifecycle-manager#3392. The goal here is to ensure that we cover it.

Contributor Author


@perdasilva I improved the tests. Please take a look, wdyt now?

Can we get this one merged?

Contributor


I need to agree here with @perdasilva - by reading operator-framework/operator-lifecycle-manager#3392 I would also say that the issue here is that we should allow users to add custom annotations and labels to the deployed resources, without trying to revert them. Hence, I would even call this feature differently.

Contributor Author


I've addressed the feedback:

  • Renamed the feature to "Preserve Custom Annotations and Labels on Managed Resources" -- framing it as the broader capability rather than a specific rollout restart fix.
  • Added a simplified first scenario that adds a custom annotation and label via kubectl annotate/kubectl label, triggers reconciliation, and verifies they're preserved. No rollout complexity.
  • Kept the rollout restart as a second scenario since it covers the real-world use case from operator-lifecycle-manager#3392.

Note that, as far as I understand, annotations are part of .metadata -- changes there don't bump .metadata.generation. Since we rely on comparing observedGeneration with generation to confirm the controller has processed the change (via the existing step ClusterExtension has been reconciled the latest generation), patching an annotation wouldn't give us that sync signal.

That is why the current approach -- patching spec.install.preflight.crdUpgradeSafety.enforcement -- reliably bumps generation because it's a persisted spec field.

// Patch the spec with a harmless reconcileRequests entry that includes a
// unique timestamp. Any spec change will bump .metadata.generation, which
// the controller should observe and reconcile.
payload := fmt.Sprintf(`{"spec":{"reconcileRequests":[%d]}}`, time.Now().UnixNano())
Contributor


Would adding the reconcileRequests as an annotation also trigger the reconciliation? If so, could it be a better approach?

Contributor Author


An annotation wouldn't work here. Annotation changes don't bump metadata.generation, so the next step (ClusterExtension has been reconciled the latest generation) — which waits for observedGeneration == generation — would pass immediately before the controller actually reconciles. That's a race condition.

The original spec.reconcileRequests approach was also broken: that field doesn't exist in the CRD schema, so the API server silently prunes it. No spec change means no generation bump and no reconciliation is triggered at all.

The fix uses install.preflight.crdUpgradeSafety.enforcement: "None" instead — a real spec field that persists, bumps generation, and gives us a reliable sync signal. It's harmless here since the test doesn't involve CRD upgrades.
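The mechanics above can be sketched in plain Go: the merge-patch body mirrors the payload quoted in the thread, and the sync predicate is the observedGeneration check the existing step relies on. Names here are illustrative, not the PR's actual helpers:

```go
package main

import "fmt"

// buildReconcileTriggerPatch returns a merge-patch body that sets a real,
// persisted spec field. Unknown fields are pruned by structural schemas,
// but a persisted spec change bumps .metadata.generation, giving the test
// a reliable synchronization signal.
func buildReconcileTriggerPatch() string {
	return `{"spec":{"install":{"preflight":{"crdUpgradeSafety":{"enforcement":"None"}}}}}`
}

// reconciledLatestGeneration reports whether the controller has observed
// the most recent spec change: the step waits until this becomes true.
func reconciledLatestGeneration(generation, observedGeneration int64) bool {
	return observedGeneration == generation
}

func main() {
	fmt.Println(buildReconcileTriggerPatch())
	fmt.Println(reconciledLatestGeneration(3, 3))
}
```

An annotation-only patch would leave generation unchanged, so the predicate would pass immediately and the race described above would remain.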

Contributor Author


@pedjak ^ I added an extra one following the suggestions, but see above.

Copilot AI review requested due to automatic review settings March 16, 2026 10:39
Copilot AI left a comment

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

@perdasilva
Contributor

/approve

@openshift-ci

openshift-ci bot commented Mar 16, 2026

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: perdasilva

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 16, 2026

@camilamacedo86
Contributor Author

@pedjak

I made the changes you suggested.
Please let me know wdyt and if we can move forward now.
Cheers.

Copilot AI left a comment

Copilot reviewed 2 out of 2 changed files in this pull request and generated 1 comment.

Comment on lines +13 to +41
Scenario: Bundle-provided and user-added labels and annotations persist after OLM reconciliation
  When ClusterExtension is applied
    """
    apiVersion: olm.operatorframework.io/v1
    kind: ClusterExtension
    metadata:
      name: ${NAME}
    spec:
      namespace: ${TEST_NAMESPACE}
      serviceAccount:
        name: olm-sa
      source:
        sourceType: Catalog
        catalog:
          packageName: test
          selector:
            matchLabels:
              "olm.operatorframework.io/metadata.name": test-catalog
    """
  Then ClusterExtension is rolled out
  And ClusterExtension is available
  And resource "deployment/test-operator" is available
  # Verify bundle-provided labels are present after initial install
  And resource "deployment/test-operator" has label "app.kubernetes.io/name" with value "test-operator"
  # Add user-owned labels and annotations
  When user adds annotation "example.com/custom-annotation=my-value" to "deployment/test-operator"
  And user adds label "example.com/custom-label=my-value" to "deployment/test-operator"
  Then resource "deployment/test-operator" has annotation "example.com/custom-annotation" with value "my-value"
  And resource "deployment/test-operator" has label "example.com/custom-label" with value "my-value"
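A step like `resource … has annotation … with value …` can be backed by a small check over the JSON that `kubectl get -o json` returns. A sketch, assuming a hypothetical helper name (the PR's real steps wrap a k8sClient helper in test/e2e/steps/steps.go):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// hasAnnotation reports whether the resource JSON (as returned by
// `kubectl get <resource> -o json`) carries the annotation key with the
// expected value. Only .metadata.annotations is decoded.
func hasAnnotation(resourceJSON, key, want string) (bool, error) {
	var obj struct {
		Metadata struct {
			Annotations map[string]string `json:"annotations"`
		} `json:"metadata"`
	}
	if err := json.Unmarshal([]byte(resourceJSON), &obj); err != nil {
		return false, err
	}
	got, ok := obj.Metadata.Annotations[key]
	return ok && got == want, nil
}

func main() {
	doc := `{"metadata":{"annotations":{"example.com/custom-annotation":"my-value"}}}`
	ok, err := hasAnnotation(doc, "example.com/custom-annotation", "my-value")
	fmt.Println(ok, err)
}
```

Running the same check after triggering reconciliation is what turns the scenario's final `Then` steps into a preservation test.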
Member


I agree with Copilot on this comment.

@joelanford
Member

@pedjak @camilamacedo86

I would also say that the issue here is that we should allow users to add custom annotations and labels to the deployed resources, without trying to revert them. Hence, I would even call this feature differently.

I see this both ways:

  1. Yes, this feature should be "we do not care about fields that we are not the manager of"
  2. A specific user intent is "make a deployment rollout restart work"

I think it makes sense for us to have two scenarios:

  1. Set an arbitrary field that we are not managing and make sure that field sticks
  2. Perform the deployment rollout and verify that it succeeds.

And if we care about two scenarios being overkill, then I'd call this feature something based on (1), but then test it with (2) as the scenario since that is the specific thing a user cared about AND it would cover the general case.

@camilamacedo86
Contributor Author

camilamacedo86 commented Mar 17, 2026

Hi @joelanford @pedjak

I think it makes sense for us to have two scenarios:

  • Set an arbitrary field that we are not managing and make sure that field sticks
  • Perform the deployment rollout and verify that succeeds.

So that means keeping the PR as it was before the changes.

I will remove the latest commit and restore the earlier version.
I would prefer to have both checks so that we ensure better coverage.

Copilot AI left a comment

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

Comment on lines +1551 to +1563
// First, get the deployment to find its selector labels
deploymentOut, err := k8sClient("get", "deployment", deploymentName, "-n", sc.namespace, "-o", "json")
if err != nil {
logger.V(1).Info("Failed to get deployment", "deployment", deploymentName, "error", err)
return false
}

var deployment appsv1.Deployment
if err := json.Unmarshal([]byte(deploymentOut), &deployment); err != nil {
logger.V(1).Info("Failed to parse deployment", "error", err)
return false
}

Comment on lines +1590 to +1591
if len(rsList) < expectedCount {
logger.V(1).Info("Not enough ReplicaSets yet", "deployment", deploymentName, "current", len(rsList), "expected", expectedCount)
Comment on lines +1478 to +1487
// We set install.preflight.crdUpgradeSafety.enforcement to "None" because it is
// a real spec field that the API server will persist (unlike unknown fields, which
// are pruned by structural schemas). Any persisted spec change bumps
// .metadata.generation, giving us a reliable synchronization signal.
func TriggerClusterExtensionReconciliation(ctx context.Context) error {
	sc := scenarioCtx(ctx)
	payload := `{"spec":{"install":{"preflight":{"crdUpgradeSafety":{"enforcement":"None"}}}}}`
	_, err := k8sClient("patch", "clusterextension", sc.clusterExtensionName,
		"--type=merge",
		"-p", payload)
	return err
}

// DeploymentHasReplicaSets verifies that a deployment has the expected number of ReplicaSets
// and that the latest one is active with pods running.
Add a test to ensure that OLM is not reverting user changes like kubectl rollout restart.

Assisted-by: Cursor/Claude

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

5 participants