WIP: Architecture Refactoring POC (Factory, Runner modules, MCP Registry) #233
pi-2r wants to merge 11 commits into GitHubSecurityLab:main
Conversation
…sistency Co-authored-by: Copilot <copilot@github.com>
Pull request overview
Draft POC implementing the architecture proposals from #231 by extracting model/prompt logic out of runner.py, centralizing default-model resolution, and introducing a registry-based MCP transport factory to improve extensibility.
Changes:
- Extracted prompt expansion (`repeat_prompt`) into `prompt_builder.py` and model config/task model resolution into `model_resolver.py`, with tests updated accordingly.
- Reworked retry/backoff logic in `runner.py` using `tenacity`, and added `tenacity` as a dependency.
- Replaced the MCP transport `match`/`case` with a registry + `register_transport` decorator in `mcp_lifecycle.py`.
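The registry + decorator pattern described in the last bullet can be sketched as follows. This is an illustrative reconstruction, not the actual `mcp_lifecycle.py` code: the registry name, factory signatures, and returned transport objects are assumptions.

```python
from collections.abc import Callable

# Hypothetical registry mapping transport names to factory callables,
# replacing the previous match/case dispatch.
TRANSPORT_REGISTRY: dict[str, Callable[..., dict]] = {}

def register_transport(name: str):
    """Decorator that registers a transport factory under a name."""
    def decorator(factory: Callable[..., dict]) -> Callable[..., dict]:
        TRANSPORT_REGISTRY[name] = factory
        return factory
    return decorator

@register_transport("stdio")
def make_stdio_transport(**config) -> dict:
    # Placeholder: the real factory would construct a stdio MCP client.
    return {"type": "stdio", **config}

@register_transport("sse")
def make_sse_transport(**config) -> dict:
    # Placeholder: the real factory would construct an SSE MCP client.
    return {"type": "sse", **config}

def create_transport(name: str, **config) -> dict:
    """Look up and invoke the factory; unknown names fail loudly."""
    try:
        return TRANSPORT_REGISTRY[name](**config)
    except KeyError:
        raise ValueError(f"Unknown MCP transport: {name}") from None
```

The win over `match`/`case` is extensibility: a new transport only needs a decorated factory, with no edits to the dispatch site.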
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| tests/test_runner.py | Updates imports/patch targets to the new prompt_builder + model_resolver exports. |
| tests/test_prompt_parser_edge.py | Patches sys.argv for deterministic argparse defaults in edge-case tests. |
| src/seclab_taskflow_agent/runner.py | Uses the extracted modules; adds Tenacity-based retries; default model now resolved via capi.get_default_model. |
| src/seclab_taskflow_agent/prompt_builder.py | New module containing the build_prompts_to_run prompt expansion logic. |
| src/seclab_taskflow_agent/model_resolver.py | New module containing resolve_model_config / resolve_task_model. |
| src/seclab_taskflow_agent/mcp_lifecycle.py | Adds the MCP transport registry + decorator-based registration for stdio/sse/streamable. |
| src/seclab_taskflow_agent/capi.py | Adds get_default_model and loads dotenv at import time. |
| src/seclab_taskflow_agent/agent.py | Removes the DEFAULT_MODEL constant; resolves the model via get_default_model(endpoint) when model=None. |
| pyproject.toml | Adds the tenacity dependency. |
| example | Adds a new file with unrelated/inappropriate content. |
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated 3 comments.
Comments suppressed due to low confidence (1)
src/seclab_taskflow_agent/runner.py:269
`RateLimitError` can still escape `deploy_task_agents()`: the tenacity `@retry(..., reraise=True)` re-raises the last exception after retries, but the surrounding `try`/`except` only catches `APITimeoutError` (and others). After `MAX_API_RETRY` attempts, a final `RateLimitError` will bubble up and abort the run. Consider catching `RateLimitError` here as well (or setting a `retry_error_callback`, or wrapping it in a timeout-style error) so the function returns `complete=False` consistently and still emits a user-facing message.
```python
try:
    @retry(
        retry=retry_if_exception_type((RateLimitError, APITimeoutError)),
        wait=wait_exponential(multiplier=RATE_LIMIT_BACKOFF, max=MAX_RATE_LIMIT_BACKOFF),
        stop=stop_after_attempt(MAX_API_RETRY),
        reraise=True,
    )
    async def _run_streamed() -> None:
        result = agent0.run_streamed(prompt, max_turns=max_turns)
        async for event in result.stream_events():
            if event.type == "raw_response_event" and isinstance(event.data, ResponseTextDeltaEvent):
                await render_model_output(event.data.delta, async_task=async_task, task_id=task_id)
        await render_model_output("\n\n", async_task=async_task, task_id=task_id)

    await _run_streamed()
    complete = True
except MaxTurnsExceeded as e:
    await render_model_output(f"** 🤖❗ Max Turns Reached: {e}\n", async_task=async_task, task_id=task_id)
    logging.exception(f"Exceeded max_turns: {max_turns}")
except AgentsException as e:
    await render_model_output(f"** 🤖❗ Agent Exception: {e}\n", async_task=async_task, task_id=task_id)
    logging.exception("Agent Exception")
except BadRequestError as e:
    await render_model_output(f"** 🤖❗ Request Error: {e}\n", async_task=async_task, task_id=task_id)
    logging.exception("Bad Request")
except APITimeoutError as e:
    await render_model_output(f"** 🤖❗ Timeout Error: {e}\n", async_task=async_task, task_id=task_id)
    logging.exception("API Timeout")
```
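The reviewer's suggestion can be sketched in isolation: catch the rate-limit failure at the call site so the caller gets `complete=False` plus a user-facing message instead of an aborted run. `RateLimitError` below is a local stand-in for the real exception class, and `run_with_rate_limit_guard` is a hypothetical helper, not code from the PR.

```python
import asyncio

class RateLimitError(Exception):
    """Stand-in for the provider's RateLimitError; illustrative only."""

async def run_with_rate_limit_guard(run_streamed) -> bool:
    """Run a streamed-agent coroutine; report rate-limit exhaustion gracefully.

    Mirrors the suggested fix: after tenacity re-raises the last
    RateLimitError, this except clause keeps the run from aborting.
    """
    complete = False
    try:
        await run_streamed()
        complete = True
    except RateLimitError as e:
        # Emit a user-facing message instead of letting the exception escape.
        print(f"** 🤖❗ Rate Limit Exceeded after retries: {e}")
    return complete
```

The alternative mentioned in the review, a tenacity `retry_error_callback`, would instead swallow the final exception inside the decorator and return a sentinel value, which avoids the extra `except` clause but hides the failure from the surrounding handler chain.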
… test docstring for clarity
Pull request overview
Copilot reviewed 10 out of 10 changed files in this pull request and generated 2 comments.
Hi team,
Following up on my issue #231, I had some free time and ended up coding a full Proof of Concept (POC) for the 4 architectural improvements we discussed.
What this POC does:
- `agent.py`/`capi.py`: Decoupled the LLM provider initialization (Dependency Injection) to easily support local models.
- `runner.py` cleanup: Extracted ~210 lines into dedicated `prompt_builder.py` and `model_resolver.py` modules (Single Responsibility Principle).
- `mcp_lifecycle.py`: Introduced a Registry pattern for MCP transports.

If you like the direction of this architecture, I will close this draft and open small, atomic PRs (e.g., just the LLM decoupling first) to make reviewing easy for you.

Let me know if this modular approach aligns with the team's vision!