UN-151 [FIX] Block workflow deletion at backend when in use by pipelines/APIs#1969
Conversation
Adds a `perform_destroy` guard on `WorkflowViewSet` that calls `can_update_workflow` and raises `WorkflowDeletionError` (400) with the same detailed message used by the frontend. Prevents direct API callers from silently cascade-deleting Pipelines/APIDeployments that reference the workflow.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Summary by CodeRabbit
**Walkthrough**

This PR prevents deletion of workflows that are currently in use. It adds a new REST exception class, implements a deletion guard in the ViewSet, and builds user-facing error messages that detail which pipelines and APIs depend on the workflow.

**Changes:** Workflow Deletion Guard
🎯 2 (Simple) | ⏱️ ~8 minutes
🚥 Pre-merge checks: ✅ 4 passed, ❌ 1 failed (warning)
**Test Results**

- Runner Tests - Full Report
- SDK1 Tests - Full Report
| Filename | Overview |
|---|---|
| backend/workflow_manager/workflow_v2/exceptions.py | Adds WorkflowDeletionError (HTTP 400) — straightforward exception class matching existing patterns. |
| backend/workflow_manager/workflow_v2/views.py | Adds perform_destroy guard that calls can_update_workflow before deletion; has a redundant DB round-trip since the workflow is already loaded as instance. |
| backend/workflow_manager/workflow_v2/workflow_helper.py | Adds build_workflow_in_use_message; the zero-count fallback branch is unreachable from normal use, and pipeline_type raw enum keys (DEFAULT, APP) produce confusing labels in the error message. |
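As a rough illustration of the message format described in the table above (names listed per category, then an `...and N more` overflow summary), here is a simplified sketch. The exact wording and the `DISPLAY_LIMIT` constant are assumptions; the real helper in `workflow_helper.py` may differ in detail:

```python
DISPLAY_LIMIT = 3  # assumed; the PR truncates after 3 names per category


def build_workflow_in_use_message(workflow_name: str, usage: dict) -> str:
    """Simplified sketch: list dependent pipelines/APIs, then summarize overflow."""
    lines = [f"Cannot delete `{workflow_name}` as it is currently in use:"]
    pipelines = usage.get("pipelines", [])
    for p in pipelines[:DISPLAY_LIMIT]:
        lines.append(f"- `{p.get('pipeline_name')}` ({p.get('pipeline_type')} Pipeline)")
    if len(pipelines) > DISPLAY_LIMIT:
        lines.append(f"...and {len(pipelines) - DISPLAY_LIMIT} more")
    api_names = usage.get("api_names", [])
    for name in api_names[:DISPLAY_LIMIT]:
        lines.append(f"- `{name}` (API Deployment)")
    if len(api_names) > DISPLAY_LIMIT:
        lines.append(f"...and {len(api_names) - DISPLAY_LIMIT} more")
    return "\n".join(lines)
```

The two review findings below (raw enum keys in `pipeline_type`, and the unreachable zero-count fallback) both concern this helper.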
Sequence Diagram
```mermaid
sequenceDiagram
    participant Client
    participant WorkflowViewSet
    participant WorkflowHelper
    participant DB
    Client->>WorkflowViewSet: "DELETE /workflow/<id>/"
    WorkflowViewSet->>WorkflowViewSet: get_object() → instance
    WorkflowViewSet->>WorkflowViewSet: perform_destroy(instance)
    WorkflowViewSet->>WorkflowHelper: can_update_workflow(str(instance.id))
    WorkflowHelper->>DB: "Workflow.objects.get(pk=id) [redundant]"
    WorkflowHelper->>DB: "Pipeline.objects.filter(workflow=...).count()"
    WorkflowHelper->>DB: "APIDeployment.objects.filter(workflow=...).count()"
    alt "in use (count > 0)"
        WorkflowHelper-->>WorkflowViewSet: "{can_update: False, pipelines:[...], api_names:[...]}"
        WorkflowViewSet->>WorkflowHelper: build_workflow_in_use_message(name, usage)
        WorkflowHelper-->>WorkflowViewSet: detailed error string
        WorkflowViewSet-->>Client: 400 WorkflowDeletionError
    else "not in use (count == 0)"
        WorkflowHelper-->>WorkflowViewSet: "{can_update: True}"
        WorkflowViewSet->>DB: super().perform_destroy(instance)
        WorkflowViewSet-->>Client: 204 No Content
    end
```
Prompt To Fix All With AI
Fix the following 3 code review issues. Work through them one at a time, proposing concise fixes.
---
### Issue 1 of 3
backend/workflow_manager/workflow_v2/workflow_helper.py:1025-1027
**Raw enum key used as pipeline type label**
`pipeline_type` is stored as the `PipelineType` TextChoices key (`"ETL"`, `"TASK"`, `"DEFAULT"`, `"APP"`), so the message would read `DEFAULT Pipeline` or `APP Pipeline` for those types — labels that may be unclear to API callers. The `.values()` queryset in `can_update_workflow` returns the raw DB value rather than the human-readable label (`"Default"`, `"App"`). Consider mapping the raw key to the display label (e.g. via `PipelineType(p_type).label` where `PipelineType` is imported from `pipeline_v2.models`) before constructing the line.
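One concrete shape for that mapping: in Django code the natural fix is `PipelineType(p_type).label`, but the framework-free sketch below uses a hypothetical label dict with the same effect. The `DEFAULT`/`APP` display labels are assumed from the comment above:

```python
# Hypothetical label map mirroring PipelineType's display labels; in the real
# fix you would use PipelineType(p_type).label from pipeline_v2.models instead.
PIPELINE_TYPE_LABELS = {
    "ETL": "ETL",
    "TASK": "Task",
    "DEFAULT": "Default",
    "APP": "App",
}


def display_pipeline_type(raw: str) -> str:
    """Return a human-readable label for a raw pipeline_type key, falling back to the key."""
    return PIPELINE_TYPE_LABELS.get(raw, raw)
```

With this in place, the message line no longer leaks the raw enum key for `DEFAULT`/`APP` pipelines.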
### Issue 2 of 3
backend/workflow_manager/workflow_v2/views.py:148
**Redundant DB fetch inside `perform_destroy`**
`can_update_workflow` opens with `Workflow.objects.get(pk=workflow_id)` to reload the object, but `instance` passed into `perform_destroy` is already the fully-loaded `Workflow` row (fetched by `get_object()` earlier in the DRF lifecycle). This adds an extra round-trip. If `can_update_workflow` is also called from the existing `can_update` endpoint with only an ID string, consider adding an overload or internal helper that accepts either a `Workflow` instance or an ID so `perform_destroy` can skip the redundant lookup.
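The suggested overload could look like the sketch below. `Workflow` here is a minimal stand-in for the real model (only the `isinstance` dispatch is the point); the actual implementation would call `Workflow.objects.get(pk=...)`:

```python
class Workflow:
    """Minimal stand-in for workflow_v2.models.Workflow (illustrative only)."""

    _store = {}  # pretend database

    def __init__(self, pk):
        self.pk = pk
        Workflow._store[pk] = self

    @classmethod
    def get(cls, pk):
        return cls._store[pk]  # the one "DB round-trip" in this sketch


def resolve_workflow(workflow_or_id):
    """Accept either a loaded Workflow instance or an ID string.

    perform_destroy can pass its already-loaded instance and skip the
    extra query; the can_update endpoint keeps passing an ID string.
    """
    if isinstance(workflow_or_id, str):
        return Workflow.get(workflow_or_id)
    return workflow_or_id
```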
### Issue 3 of 3
backend/workflow_manager/workflow_v2/workflow_helper.py:1008-1009
**Unreachable fallback branch**
`build_workflow_in_use_message` is only called from `perform_destroy` after `can_update_workflow` has already returned `can_update=False`, which only occurs when `pipeline_count + api_count > 0`. The zero-count branch (line 1008–1009) therefore cannot be reached via the current call site and silently returns a generic message if it ever were, which could mask the real cause. Consider either removing this branch (trusting the caller contract) or converting it to an `AssertionError`/log to make the invariant explicit.
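One way to make that invariant explicit, sketched as a hypothetical guard helper (the PR's code inlines this check inside `build_workflow_in_use_message`):

```python
import logging

logger = logging.getLogger(__name__)


def assert_workflow_in_use(pipeline_count: int, api_count: int) -> None:
    """Fail loudly if asked to build an in-use message for a workflow
    with no dependents, instead of silently returning a generic message."""
    if (pipeline_count + api_count) == 0:
        logger.error("in-use message requested for a workflow with zero dependents")
        raise AssertionError("caller must ensure the workflow is in use")
```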
Reviews (1): Last reviewed commit: "UN-151 [FIX] Block workflow deletion at ..."
The flagged lines in `backend/workflow_manager/workflow_v2/workflow_helper.py`:

```python
name = p.get("pipeline_name")
p_type = p.get("pipeline_type")
lines.append(f"- `{name}` ({p_type} Pipeline)")
```
**Raw enum key used as pipeline type label** (see Issue 1 of 3 above)
The flagged call site in `backend/workflow_manager/workflow_v2/views.py` (docstring tail shown for context):

```python
    Pipeline/APIDeployment would silently drop their rows along with
    the workflow.
    """
    usage = WorkflowHelper.can_update_workflow(str(instance.id))
```
**Redundant DB fetch inside `perform_destroy`** (see Issue 2 of 3 above)
The flagged fallback in `backend/workflow_manager/workflow_v2/workflow_helper.py`:

```python
if (pipeline_count + api_count) == 0:
    return f"Cannot delete `{workflow_name}` as it is currently in use."
```
**Unreachable fallback branch** (see Issue 3 of 3 above)
🧹 Nitpick comments (1)
backend/workflow_manager/workflow_v2/workflow_helper.py (1)
Lines 996-1033: ⚡ Quick win: add defensive handling for `None` or empty pipeline fields.

If `pipeline_name` or `pipeline_type` is `None` or an empty string, the formatted message will display `None` or empty backticks, which degrades the user experience.

🛡️ Proposed fix to handle None/empty values gracefully:

```diff
 if pipelines:
     shown = list(pipelines)[:limit]
     for p in shown:
-        name = p.get("pipeline_name")
-        p_type = p.get("pipeline_type")
+        name = p.get("pipeline_name") or "Unknown"
+        p_type = p.get("pipeline_type") or "Unknown"
         lines.append(f"- `{name}` ({p_type} Pipeline)")
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@backend/workflow_manager/workflow_v2/workflow_helper.py` around lines 996 - 1033, The build_workflow_in_use_message function currently inserts raw pipeline_name/pipeline_type (and api_names entries) into the user message, which can render "None" or empty backticks; update build_workflow_in_use_message to defensively coerce missing or empty values to a friendly placeholder (e.g. "<unknown>" or "unnamed") before formatting: for pipelines iterate over p in shown and compute name = (p.get("pipeline_name") or "<unknown>").strip() and p_type = (p.get("pipeline_type") or "Pipeline").strip() (or similar), and for api_names map any None/empty to a placeholder before adding backticked entries; keep usage of WorkflowHelper.USAGE_MESSAGE_DISPLAY_LIMIT and the existing usage dict keys (pipelines, api_names, pipeline_count, api_count) unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 9778eca5-e3ea-414b-b814-ab21fe1bdaa5
📒 Files selected for processing (3)

- backend/workflow_manager/workflow_v2/exceptions.py
- backend/workflow_manager/workflow_v2/views.py
- backend/workflow_manager/workflow_v2/workflow_helper.py



What
- Adds a `perform_destroy` override on `WorkflowViewSet` that calls `WorkflowHelper.can_update_workflow` before deletion and raises a new `WorkflowDeletionError` (HTTP 400) with the same detailed message the UI shows when a workflow is in use.
- Adds `WorkflowHelper.build_workflow_in_use_message` so the rejection detail lists the dependent pipelines and API deployments, matching the frontend format.

Why
- The frontend (`Workflows.jsx`) already gates the delete button via the `can_update` endpoint and shows a descriptive error when a workflow is referenced by pipelines/API deployments (PR UN-3217 [FEAT] Show specific pipeline/API names in workflow deletion error message #1784).
- `WorkflowViewSet.destroy` had no equivalent guard. Because `Pipeline.workflow` and `APIDeployment.workflow` are declared with `on_delete=models.CASCADE`, a direct `DELETE /workflow/<id>/` API call (or a UI bypass / race) would silently cascade-delete every dependent pipeline and API deployment. That was the original concern in UN-151.

How
- New `WorkflowDeletionError` (status 400) in `workflow_manager/workflow_v2/exceptions.py`.
- `build_workflow_in_use_message(workflow_name, usage)` in `workflow_helper.py` mirrors the frontend formatting (truncates after 3 names per category, appends an `...and N more` summary).
- `WorkflowViewSet.perform_destroy` invokes `can_update_workflow`; on `can_update=false` it raises `WorkflowDeletionError(detail=...)`. Otherwise it falls through to the default `ModelViewSet` destroy.

Can this PR break any existing features? If yes, please list possible items. If no, please explain why. (PS: Admins do not merge the PR without this section filled)
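A framework-free sketch of that guard flow. The names follow the PR, but the helper object and dict-style instance are stand-ins so the logic is runnable without DRF; the real override lives on the ViewSet and calls `super().perform_destroy(instance)`:

```python
class WorkflowDeletionError(Exception):
    """Stand-in for the new DRF APIException subclass (HTTP 400 in the real code)."""


def perform_destroy(instance, helper, delete):
    """Guarded delete: check usage first, raise with a detailed message if the
    workflow is in use, otherwise fall through to the normal destroy."""
    usage = helper.can_update_workflow(str(instance["id"]))
    if not usage.get("can_update", True):
        raise WorkflowDeletionError(
            helper.build_workflow_in_use_message(instance["workflow_name"], usage)
        )
    delete(instance)  # stand-in for super().perform_destroy(instance)
```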
No. The frontend already calls `can_update` before issuing `DELETE`, so end users won't notice a change. The new guard only fires for direct API callers and any TOCTOU race between the UI check and the actual delete, both cases where the previous behavior was unsafe (silent cascade delete). Workflows with no pipeline/API references continue to delete as before.

Database Migrations
Env Config
Relevant Docs
Related Issues or PRs
Dependencies Versions
Notes on Testing
- `DELETE /workflow/<id>/` for a workflow that is not referenced by any pipeline/API → should succeed (204).
- `DELETE /workflow/<id>/` for a workflow referenced by 1+ Pipelines and/or APIDeployments via curl or Postman → should return 400 with a body like:
- Verified that `Pipeline`/`APIDeployment` rows are intact after the rejection.
- The existing frontend flow (which calls `can_update` first) continues to work unchanged.

Screenshots
Checklist
I have read and understood the Contribution Guidelines.
🤖 Generated with Claude Code