feat: support per-runner-flavor SQS batch size and window in multi_runner_config #5108

Draft
Copilot wants to merge 3 commits into main from copilot/support-sqs-batch-size-window-config

Conversation

Contributor

Copilot AI commented Apr 14, 2026

Description

lambda_event_source_mapping_batch_size and lambda_event_source_mapping_maximum_batching_window_in_seconds are module-level variables applied identically to every runner flavor. Deployments with mixed load profiles (e.g. 1000-runner high-volume vs 10-runner low-volume) need per-flavor tuning to avoid unnecessary latency, wasted SSM throughput, and GitHub API rate limit pressure on low-volume flavors.

Adds both as optional(number, null) fields in multi_runner_config.runner_config, with coalesce() fallback to the existing module-level variables. Follows the established pattern used by scale_up_reserved_concurrent_executions.
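On the variables side, the two new fields slot into the existing runner_config object type. A minimal sketch (surrounding attributes are elided; only the two new fields come from this PR, the rest of the shape is assumed for illustration):

```hcl
variable "multi_runner_config" {
  type = map(object({
    runner_config = object({
      # ... existing runner_config attributes ...

      # New in this PR: per-flavor overrides; null means "use the module-level value".
      lambda_event_source_mapping_batch_size                         = optional(number, null)
      lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, null)
    })
  }))
}
```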

Changes:

  • modules/multi-runner/variables.tf — new optional fields in the runner_config object type + descriptions
  • modules/multi-runner/runners.tf — coalesce(per_flavor, module_level) for both parameters
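The fallback in runners.tf can be sketched as below. This is a hypothetical illustration, not the exact PR diff: the resource wiring and names such as `aws_sqs_queue.queued_builds` and `module.runners` are assumptions; only the two `lambda_event_source_mapping_*` fields are from this PR. Terraform's `coalesce()` returns its first non-null argument, so a per-flavor value wins and the module-level variable is the default.

```hcl
# Hypothetical sketch of the per-flavor event source mapping wiring.
resource "aws_lambda_event_source_mapping" "scale_up" {
  for_each = var.multi_runner_config

  event_source_arn = aws_sqs_queue.queued_builds[each.key].arn
  function_name    = module.runners[each.key].lambda.scale_up.function_name

  # Per-flavor override first, module-level variable as the fallback.
  batch_size = coalesce(
    each.value.runner_config.lambda_event_source_mapping_batch_size,
    var.lambda_event_source_mapping_batch_size
  )
  maximum_batching_window_in_seconds = coalesce(
    each.value.runner_config.lambda_event_source_mapping_maximum_batching_window_in_seconds,
    var.lambda_event_source_mapping_maximum_batching_window_in_seconds
  )
}
```

Note that `coalesce()` treats only null as "unset", so an explicit per-flavor value of 0 (a valid batching window) is still honored rather than falling through to the module-level default.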

Usage:

multi_runner_config = {
  runner-large = {
    runner_config = {
      lambda_event_source_mapping_batch_size                         = 50   # override
      lambda_event_source_mapping_maximum_batching_window_in_seconds = 10   # override
      # ...
    }
  }
  runner-metal = {
    runner_config = {
      # no override — uses module-level defaults
      # ...
    }
  }
}

Fully backwards compatible. Existing configs require no changes.

Test Plan

  • Verified with terraform fmt (clean)
  • CodeQL security scan passed (no issues)
  • Code review passed (1 pre-existing issue noted: missing colon on enable_jit_config description in the heredoc — not introduced by this PR)

Related Issues

Warning

Firewall rules blocked me from connecting to one or more addresses (expand for details)

I tried to connect to the following addresses, but was blocked by firewall rules:

  • checkpoint-api.hashicorp.com
    • Triggering command: /usr/local/bin/terraform terraform version (dns block)
    • Triggering command: /usr/local/bin/terraform terraform fmt modules/multi-REDACTED/variables.tf modules/multi-REDACTED/REDACTEDs.tf (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

…nner_config

Add lambda_event_source_mapping_batch_size and
lambda_event_source_mapping_maximum_batching_window_in_seconds as
optional fields inside multi_runner_config.runner_config, falling back
to the existing module-level variables when not set via coalesce().

This follows the same pattern used by scale_up_reserved_concurrent_executions.

Agent-Logs-Url: https://github.com/github-aws-runners/terraform-aws-github-runner/sessions/1a1b641a-bbf5-45ab-bd97-73d9501552b9

Co-authored-by: Brend-Smits <15904543+Brend-Smits@users.noreply.github.com>
Copilot AI changed the title [WIP] Support per-runner-flavor SQS batch size and window configuration feat: support per-runner-flavor SQS batch size and window in multi_runner_config Apr 14, 2026
Copilot AI requested a review from Brend-Smits April 14, 2026 08:32
@github-actions
Contributor

Dependency Review

✅ No vulnerabilities, license issues, or OpenSSF Scorecard issues found.

Snapshot Warnings

⚠️: No snapshots were found for the head SHA bea5f2d.
Ensure that dependencies are being submitted on PR branches and consider enabling retry-on-snapshot-warnings. See the documentation for more information and troubleshooting advice.

Scanned Files

None



Development

Successfully merging this pull request may close these issues.

Support per-runner-flavor SQS batch size and window configuration in multi_runner_config