Merged
1 change: 1 addition & 0 deletions gunicorn.conf
@@ -1,5 +1,6 @@
bind = 'unix:/var/run/cabotage/nginx.sock'
backlog = 1024
workers = 2
Copilot AI Mar 21, 2026

The PR description says setting workers to 2 may help memory, but in this repo Gunicorn is started via gunicorn -c gunicorn.conf ... (see Procfile), so without an explicit value Gunicorn would default to 1 worker. Adding workers = 2 will generally increase the number of processes and can increase memory usage (and could also change throughput/latency characteristics). Consider either keeping workers at 1 for the memory experiment, or making it environment-configurable (e.g., via an env var) so you can tune per deployment without code changes.
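The environment-configurable approach the comment suggests could look like the sketch below. This is not code from the PR; the `GUNICORN_WORKERS` variable name is a hypothetical choice, and the default of 1 matches Gunicorn's own default when `workers` is unset.

```python
# gunicorn.conf -- sketch only; env var name GUNICORN_WORKERS is assumed,
# not something defined in this repo.
import os

bind = 'unix:/var/run/cabotage/nginx.sock'
backlog = 1024

# Tunable per deployment without a code change; falls back to
# Gunicorn's default of a single worker when the variable is unset.
workers = int(os.environ.get('GUNICORN_WORKERS', '1'))

preload_app = True
max_requests = 2048
max_requests_jitter = 128
```

With this in place, the memory experiment becomes a deployment-time knob (`GUNICORN_WORKERS=2`) rather than a committed default.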


P2: Avoid forcing two Gunicorn workers to fix memory pressure

If the cabotage web process is already close to its memory limit, workers = 2 moves it in the wrong direction: Gunicorn will fork a second Django worker on every web start, so request-time heap growth is duplicated even with preload_app = True, and pydotorg/settings/cabotage.py:11-12 also keeps a separate 600-second DB connection per worker. In the exact “help memory” scenario from the PR description, this increases per-instance RSS and connection usage rather than reducing them, making OOMs more likely.
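The per-worker connection cost mentioned above comes from Django's persistent-connection setting. The fragment below is a minimal sketch of the kind of configuration being referenced, not the actual contents of pydotorg/settings/cabotage.py: with `CONN_MAX_AGE = 600`, each worker process keeps its own database connection alive for up to 600 seconds, so doubling workers doubles the steady-state connection count.

```python
# Sketch of a Django DATABASES setting with persistent connections;
# engine and structure are assumptions, only the 600s figure comes
# from the review comment.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        # Connections are per-process: every Gunicorn worker holds its
        # own connection for up to CONN_MAX_AGE seconds.
        'CONN_MAX_AGE': 600,
    }
}
```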


preload_app = True
max_requests = 2048
max_requests_jitter = 128