
feat: Improve custom tune search space #210

Open

toby-coleman wants to merge 7 commits into main from feat/improved-custom-tune-space
Conversation

@toby-coleman
Contributor

Summary

Improves the custom search space by allowing the ProcessSpec to be passed to the search space function as an optional spec argument. This lets users build more general search spaces that can read, e.g., the process and component parameters.

Changes

  • Implemented an optional spec argument on custom search space functions.
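A spec-aware custom search space function might look like the following sketch. The real tuner passes a plugboard ProcessSpec object; here a plain dict stands in for it, the component and parameter names are hypothetical, and a stdlib sampler is used in place of a ray.tune distribution to keep the sketch dependency-free:

```python
import random

def custom_space(spec: dict) -> dict:
    # Hypothetical example: read a component parameter from the spec and
    # use it to bound the search range for that same parameter. A real
    # implementation would return ray.tune distributions, e.g.
    # ray.tune.randint(1, max_iters).
    max_iters = spec["components"]["a"]["args"]["iters"]
    return {"iters": random.randint(1, max_iters)}

spec = {"components": {"a": {"args": {"iters": 10}}}}
sample = custom_space(spec)
print(1 <= sample["iters"] <= 10)  # True
```

Because spec is a regular positional/keyword argument, existing custom space functions that take no arguments continue to work unchanged.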

@gemini-code-assist
Contributor

Summary of Changes

Hello @toby-coleman, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the flexibility of custom search spaces within the tuning framework. By allowing the ProcessSpec to be passed as an optional argument to custom search space functions, users can now define more sophisticated and context-dependent optimization strategies. This change defers the full configuration of the Ray Tune search algorithm until runtime, ensuring that all necessary process-specific information is available for building the search space.

Highlights

  • Enhanced Custom Search Spaces: Custom tune search space functions can now optionally receive the ProcessSpec object, enabling more dynamic and context-aware search space definitions based on process parameters.
  • Deferred Ray Tune Configuration: The initialization of the Ray Tune search algorithm and its configuration has been refactored to occur within the run method, ensuring the ProcessSpec is available at the point of configuration.
  • Improved Test Coverage: New test cases have been introduced and existing ones parameterized to thoroughly validate the functionality of custom search spaces that utilize the ProcessSpec.
Changelog
  • plugboard/tune/tune.py
    • Imported partial from functools and signature from inspect.
    • Stored _algorithm, _max_concurrent, and _num_samples as instance attributes in the Tuner class.
    • Modified the _build_algorithm method to accept a process_spec argument.
    • Modified the _build_algo_kwargs method to accept a process_spec argument.
    • Updated the _resolve_space_fn method to accept process_spec, check if the custom space function accepts a spec argument using inspect.signature, and if so, use functools.partial to pre-fill the spec argument.
    • Moved the creation of the ray.tune.TuneConfig and the search algorithm (searcher) from the __init__ method to the run method, allowing process_spec to be used in their construction.
    • Modified the _init_search_algorithm method to accept a process_spec argument.
  • tests/integration/test_tuner.py
    • Imported ProcessSpec from plugboard.schemas.
    • Added a new test helper function custom_space_with_process_spec that demonstrates how to use the ProcessSpec within a custom search space.
    • Updated the existing custom_space function to use the correct parameter path: component.a.arg.iters.
    • Increased num_samples in test_multi_objective_tune from 10 to 20.
    • Parameterized the test_custom_space_tune function to execute tests with both the original custom_space and the new custom_space_with_process_spec.
    • Added an assertion to test_custom_space_tune to verify that parameters set using ProcessSpec are correctly applied.
  • tests/unit/test_tuner.py
    • Updated the import for ray.tune.search.optuna.OptunaSearch to be from ray.tune.search.optuna import OptunaSearch.
    • Modified test_optuna_storage_uri_conversion to mock ray.tune.Tuner and assert the type of search_alg after the tuner.run() method is called, aligning with the deferred initialization logic.
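The signature-inspection step described in the tune.py changelog can be sketched as follows. The function and variable names here are illustrative, not the exact plugboard internals: if the user's search space function declares a spec parameter, it is pre-filled with the ProcessSpec via functools.partial; otherwise the function is used as-is, preserving backwards compatibility.

```python
from functools import partial
from inspect import signature
from typing import Any, Callable

def resolve_space_fn(space_fn: Callable[..., dict], process_spec: Any) -> Callable[[], dict]:
    # Check whether the custom space function accepts a `spec` argument;
    # if so, bind the ProcessSpec so the tuner can call it with no args.
    if "spec" in signature(space_fn).parameters:
        return partial(space_fn, spec=process_spec)
    return space_fn

def legacy_space() -> dict:
    # Old-style custom space: no spec argument.
    return {"x": 1}

def spec_aware_space(spec: Any) -> dict:
    # New-style custom space: reads a (hypothetical) field from the spec.
    return {"x": spec["x_max"]}

print(resolve_space_fn(legacy_space, None)())               # {'x': 1}
print(resolve_space_fn(spec_aware_space, {"x_max": 5})())   # {'x': 5}
```

This also motivates the deferred initialization: the partial can only be built once the ProcessSpec exists, which is why the searcher construction moved from __init__ to run.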
Activity
  • No specific activity (comments, reviews, etc.) has been recorded for this pull request yet.

@github-actions

Benchmark comparison for 5cfd64c8 (base) vs 2bdad3b4 (PR)


------------------------------------------------------------------------------------------------------------------ benchmark: 2 tests -----------------------------------------------------------------------------------------------------------------
Name (time in ms)                                                                         Min                 Max                Mean            StdDev              Median               IQR            Outliers     OPS            Rounds  Iterations
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_benchmark_process_run (pr/.benchmarks/Linux-CPython-3.12-64bit/0001_pr)         458.9422 (1.0)      463.7187 (1.0)      461.5645 (1.0)      1.9524 (1.0)      461.4173 (1.0)      3.1839 (1.0)           2;0  2.1665 (1.0)           5           1
test_benchmark_process_run (main/.benchmarks/Linux-CPython-3.12-64bit/0001_base)     462.1709 (1.01)     476.2334 (1.03)     466.8173 (1.01)     6.1373 (3.14)     463.2459 (1.00)     9.0210 (2.83)          1;0  2.1422 (0.99)          5           1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Legend:
  Outliers: 1 Standard Deviation from Mean; 1.5 IQR (InterQuartile Range) from 1st Quartile and 3rd Quartile.
  OPS: Operations Per Second, computed as 1 / Mean

@gemini-code-assist (Contributor) left a comment
Code Review

This pull request enhances the custom search space functionality by allowing the ProcessSpec to be passed as an optional spec argument to the search space function. This is a valuable feature for creating more dynamic and context-aware tuning configurations. The implementation is solid, involving a necessary refactoring to defer search algorithm initialization until the run method, where the ProcessSpec is available. The accompanying test changes are thorough, including parameterization to cover both old and new custom space function signatures, and a fix for an incorrect parameter name. I have one suggestion to improve the robustness of the search space function resolution.

@github-actions

Benchmark comparison for 5cfd64c8 (base) vs 8765595e (PR)


------------------------------------------------------------------------------------------------------------------ benchmark: 2 tests -----------------------------------------------------------------------------------------------------------------
Name (time in ms)                                                                         Min                 Max                Mean            StdDev              Median               IQR            Outliers     OPS            Rounds  Iterations
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_benchmark_process_run (main/.benchmarks/Linux-CPython-3.12-64bit/0001_base)     450.8573 (1.0)      464.0733 (1.0)      459.7597 (1.0)      5.4284 (1.81)     461.6151 (1.0)      7.1033 (1.83)          1;0  2.1750 (1.0)           5           1
test_benchmark_process_run (pr/.benchmarks/Linux-CPython-3.12-64bit/0001_pr)         461.4715 (1.02)     469.3415 (1.01)     464.8317 (1.01)     3.0016 (1.0)      465.1498 (1.01)     3.8737 (1.0)           2;0  2.1513 (0.99)          5           1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


@github-actions

Benchmark comparison for 5cfd64c8 (base) vs 509fc7de (PR)


------------------------------------------------------------------------------------------------------------------ benchmark: 2 tests -----------------------------------------------------------------------------------------------------------------
Name (time in ms)                                                                         Min                 Max                Mean            StdDev              Median               IQR            Outliers     OPS            Rounds  Iterations
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_benchmark_process_run (pr/.benchmarks/Linux-CPython-3.12-64bit/0001_pr)         450.3302 (1.0)      459.7909 (1.0)      455.0645 (1.0)      3.4911 (1.07)     455.5180 (1.0)      4.4033 (1.40)          2;0  2.1975 (1.0)           5           1
test_benchmark_process_run (main/.benchmarks/Linux-CPython-3.12-64bit/0001_base)     451.6554 (1.00)     460.7991 (1.00)     456.3460 (1.00)     3.2623 (1.0)      456.6073 (1.00)     3.1539 (1.0)           2;0  2.1913 (1.00)          5           1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------


@codecov

codecov bot commented Feb 15, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.


@github-actions

Benchmark comparison for 5cfd64c8 (base) vs a19fc8cd (PR)


------------------------------------------------------------------------------------------------------------------ benchmark: 2 tests -----------------------------------------------------------------------------------------------------------------
Name (time in ms)                                                                         Min                 Max                Mean            StdDev              Median               IQR            Outliers     OPS            Rounds  Iterations
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
test_benchmark_process_run (main/.benchmarks/Linux-CPython-3.12-64bit/0001_base)     366.5254 (1.0)      376.8073 (1.01)     371.2060 (1.00)     3.7619 (1.84)     370.2735 (1.00)     4.2326 (1.43)          2;0  2.6939 (1.00)          5           1
test_benchmark_process_run (pr/.benchmarks/Linux-CPython-3.12-64bit/0001_pr)         366.6906 (1.00)     372.0153 (1.0)      369.5729 (1.0)      2.0498 (1.0)      369.4241 (1.0)      2.9530 (1.0)           2;0  2.7058 (1.0)           5           1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

