Conversation
Summary of Changes

Hello @bootjp, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request optimizes resource usage within the Vagrant environment by significantly reducing the default memory allocations for virtual machines. This change applies to both VirtualBox and UTM providers, aiming to create a more lightweight and efficient development setup without compromising the core functionality of the Jepsen test environment.
Code Review
This pull request reduces the memory allocated to the Vagrant VMs for the Jepsen test environment. While this might be intended to lower resource consumption, I have significant concerns that the new memory limits (2GB for the control node, 1GB for worker nodes) are too low. This could lead to Out-Of-Memory errors and an unstable test environment, given that the control node runs memory-intensive tasks like a JVM and Go compilation. I've left a comment with details on the risks and a suggestion to use configurable variables for memory settings to improve maintainability and flexibility.
```diff
   vb.memory = name == :ctrl ? 2048 : 1024
   vb.cpus = 2
 end

 # UTM (Apple Silicon) defaults
 node.vm.provider "utm" do |utm|
-  utm.memory = name == :ctrl ? 4096 : 2048
+  utm.memory = name == :ctrl ? 2048 : 1024
```
Halving the memory allocation for both VirtualBox and UTM providers is a significant change that could impact the stability of the Jepsen test environment.
- Control Node (2GB): The control node runs Jepsen (on a JVM) and compiles Go code. Both are memory-intensive operations, and 2GB might be insufficient, leading to Out-Of-Memory errors or slow performance.
- Worker Nodes (1GB): The worker nodes run the `elastickv` service. 1GB of RAM might not be enough, depending on the service's memory footprint, especially under test load.
This change introduces a high risk of making the test environment unreliable.
For better maintainability and to avoid duplicating the memory values, consider defining them as variables at a higher scope. This would also make it easier for other developers to adjust them based on their system's resources.
Example:

```ruby
# At the top of the Vagrantfile
CTRL_MEM = 2048
NODE_MEM = 1024

# ... then in the provider blocks
vb.memory = name == :ctrl ? CTRL_MEM : NODE_MEM
# and
utm.memory = name == :ctrl ? CTRL_MEM : NODE_MEM
```

Even better would be to allow overriding from environment variables (note that `ENV.fetch` returns a String when the variable is set, so coerce with `.to_i`):

```ruby
CTRL_MEM = ENV.fetch('VAGRANT_CTRL_MEM', 2048).to_i
NODE_MEM = ENV.fetch('VAGRANT_NODE_MEM', 1024).to_i
```

Update Vagrantfile
Pull request overview
Updates the Jepsen VM infrastructure to support running the cluster under libvirt/KVM (in addition to existing VirtualBox/UTM defaults), and adjusts the GitHub Actions workflow to bring up Jepsen VMs via the libvirt provider on a self-hosted runner.
Changes:
- Reduce default VM memory for VirtualBox and UTM providers in the Jepsen Vagrantfile.
- Add a libvirt provider configuration to the Jepsen Vagrantfile (with a selectable driver via `LIBVIRT_DRIVER`).
- Update the Jepsen GitHub Actions workflow to install libvirt/Vagrant tooling and run `vagrant up --provider=libvirt` on self-hosted runners.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| `jepsen/Vagrantfile` | Lowers VirtualBox/UTM memory defaults and adds a libvirt provider block with driver selection. |
| `.github/workflows/jepsen.yml` | Switches the Jepsen VM workflow to self-hosted + libvirt setup and uses libvirt as the Vagrant provider. |
```ruby
node.vm.provider "libvirt" do |kvm, override|
  override.vm.box = "generic/debian12"
  kvm.memory = name == :ctrl ? 4096 : 2048
  kvm.cpus = 2
  # 必要に応じてドライバを指定
  kvm.driver = ENV['LIBVIRT_DRIVER'] || 'kvm'
```
In the libvirt provider block, the first block parameter is named `kvm`, but this object represents the libvirt provider config even when `kvm.driver` is set to `qemu`. Renaming the variable to something provider-neutral (e.g. `libvirt`) would avoid confusion and better match its actual meaning.
```diff
-node.vm.provider "libvirt" do |kvm, override|
-  override.vm.box = "generic/debian12"
-  kvm.memory = name == :ctrl ? 4096 : 2048
-  kvm.cpus = 2
-  # 必要に応じてドライバを指定
-  kvm.driver = ENV['LIBVIRT_DRIVER'] || 'kvm'
+node.vm.provider "libvirt" do |libvirt, override|
+  override.vm.box = "generic/debian12"
+  libvirt.memory = name == :ctrl ? 4096 : 2048
+  libvirt.cpus = 2
+  # 必要に応じてドライバを指定
+  libvirt.driver = ENV['LIBVIRT_DRIVER'] || 'kvm'
```
```ruby
  override.vm.box = "generic/debian12"
  kvm.memory = name == :ctrl ? 4096 : 2048
  kvm.cpus = 2
  # 必要に応じてドライバを指定
```
The inline comment `# 必要に応じてドライバを指定` ("specify the driver if needed") is the only non-English comment in this (otherwise English) Vagrantfile. Consider translating it to English for consistency so future maintainers don't miss the purpose of `LIBVIRT_DRIVER`.
```diff
-  # 必要に応じてドライバを指定
+  # Specify libvirt driver via LIBVIRT_DRIVER if needed
```
```ruby
  utm.cpus = 2
end

node.vm.provider "libvirt" do |kvm, override|
```
`override.vm.box = "generic/debian12"` makes the base box differ by provider from the global `config.vm.box` (`bento/debian-12`). If this is required because `bento/debian-12` lacks a libvirt build, please add a short comment explaining that constraint; otherwise, consider using a single multi-provider box to keep environments consistent across providers.
```diff
 node.vm.provider "libvirt" do |kvm, override|
+  # Use a separate box for libvirt because bento/debian-12 does not provide a libvirt build.
```
```yaml
    # Requires a runner with KVM support (e.g. larger GitHub runners or metal) for decent performance.
    # Standard ubuntu-latest might fall back to QEMU TCG (very slow) or fail if kvm driver is forced.
    # runs-on: ubuntu-latest
    runs-on: [self-hosted]
```
This job now uses `runs-on: [self-hosted]`, but the steps assume a Debian/Ubuntu-like host (they use `apt-get`) and require KVM/libvirt support. To avoid the workflow being scheduled onto an incompatible self-hosted runner, consider adding explicit runner labels (e.g. `self-hosted, linux, kvm` / `libvirt`) and/or switching to a known hosted runner image if feasible.
```diff
-    runs-on: [self-hosted]
+    runs-on: [self-hosted, linux, kvm, libvirt]
```
```shell
# Setup Libvirt permissions
sudo usermod -aG libvirt $USER
# Allow current session to access libvirt socket
sudo chmod 666 /var/run/libvirt/libvirt-sock
```
`sudo chmod 666 /var/run/libvirt/libvirt-sock` grants world read/write access to the libvirt control socket, which is a high-impact privilege escalation risk on self-hosted runners. Prefer a least-privilege approach (e.g., ensure the runner user is in the `libvirt` group and run libvirt/vagrant commands under that group via `sg`/`newgrp`, or use an ACL for just the runner user) rather than opening the socket globally.
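The least-privilege alternatives mentioned above could look like the following sketch. This is an illustration, not the workflow's actual steps: it assumes a Debian-based self-hosted runner with the `acl` package available, and that the runner user's session cannot simply be restarted to pick up new group membership (which is why `sg` is used).

```shell
# Add the runner user to the libvirt group (takes effect in new sessions)
sudo usermod -aG libvirt "$USER"

# Run the provider command under the libvirt group for the current session,
# instead of opening the socket to every user with chmod 666
sg libvirt -c "vagrant up --provider=libvirt"

# Alternative: grant only this user read/write on the socket via a POSIX ACL
sudo setfacl -m "u:${USER}:rw" /var/run/libvirt/libvirt-sock
```

Either variant keeps the socket's default permissions for everyone else, which is the point of the review comment.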
No description provided.