feat: add python tool timeout param #7953
Conversation
Hey - I've found 1 issue, and left some high level feedback:
- The timeout computation logic is duplicated in both PythonTool.call and LocalPythonTool.call; consider extracting a small helper (e.g., _compute_effective_timeout(context, timeout)) to keep the behavior consistent and easier to maintain.
- The parameter schema uses a default of 0 while the function signatures use timeout: int | None = None and then special-case None; you could simplify and align these by using timeout: int = 0 and treating <= 0 as "use tool_call_timeout" to remove the None branch.
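The second point's simplification can be sketched as follows. This is an illustrative standalone function, not the actual tool code; the name and the global limit of 30 seconds are assumptions for the sketch:

```python
def effective_timeout(timeout: int = 0, tool_call_timeout: int = 30) -> int:
    """Treat the schema default 0 (or any value <= 0) as 'use tool_call_timeout'.

    With `timeout: int = 0` there is no None branch to special-case.
    """
    if timeout <= 0:
        return tool_call_timeout  # omitted or invalid: fall back to the global limit
    return min(timeout, tool_call_timeout)  # explicit value, capped at the global limit

assert effective_timeout() == 30     # default 0 falls back to the global limit
assert effective_timeout(10) == 10   # below the cap, used as-is
assert effective_timeout(999) == 30  # above the cap, clamped
```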
## Individual Comments
### Comment 1
<location path="astrbot/core/tools/computer_tools/python.py" line_range="97" />
<code_context>
context.context.context,
context.context.event.unified_msg_origin,
)
+ effective_timeout = context.tool_call_timeout
+ if timeout is not None:
+ try:
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting a shared helper to compute the effective timeout and reuse it in both tools to avoid duplicated timeout-handling logic.
You can simplify the timeout handling and remove duplication without changing behavior.
1. Extract a shared helper for the timeout semantics:
```python
def _get_effective_timeout(
context: ContextWrapper[AstrAgentContext],
timeout: int | None,
) -> int:
# None or <= 0 → use tool_call_timeout
    if timeout is None or timeout <= 0:
return context.tool_call_timeout
# cap at tool_call_timeout
return min(timeout, context.tool_call_timeout)
```
2. Use this helper in both tools and drop the `int()` casting / try-except, relying on the schema’s integer type:
```python
async def call(
self,
context: ContextWrapper[AstrAgentContext],
code: str,
silent: bool = False,
timeout: int | None = None,
) -> ToolExecResult:
if permission_error := check_admin_permission(context, "Python execution"):
return permission_error
sb = await get_booter(
context.context.context,
context.context.event.unified_msg_origin,
)
effective_timeout = _get_effective_timeout(context, timeout)
try:
result = await sb.python.exec(
code,
timeout=effective_timeout,
silent=silent,
)
return await handle_result(result, context.context.event)
except Exception as e:
return f"Error executing code: {str(e)}"
```
Apply the same pattern in `LocalPythonTool.call`:
```python
sb = get_local_booter()
effective_timeout = _get_effective_timeout(context, timeout)
```
This keeps all current semantics (schema default `0`, `None` allowed in signature, capping to `tool_call_timeout`) while reducing duplication and cognitive load.
</issue_to_address>
Code Review
This pull request introduces an optional timeout parameter to the PythonTool and LocalPythonTool classes, allowing users to specify an execution time limit that is capped by the system's tool_call_timeout. The reviewer noted that the logic for calculating the effective timeout is duplicated across both tool implementations and recommended refactoring this logic into a shared helper function to improve maintainability and simplify the code.
Force-pushed d910487 to 8b94678 (Compare)

@sourcery-ai review

/gemini review
Hey - I've left some high level feedback:
- The `effective_timeout` calculation is duplicated between `PythonTool` and `LocalPythonTool`; consider extracting this into a small shared helper (e.g., `compute_effective_timeout(timeout, context.tool_call_timeout)`) to keep behavior consistent and reduce future maintenance overhead.
Code Review
This pull request introduces an optional timeout parameter for Python execution tools, allowing for more granular control over code execution duration. The logic for calculating the effective timeout, which ensures the provided value does not exceed the system's default limit, is implemented in both PythonTool and LocalPythonTool. A review comment suggests refactoring this duplicated logic into a shared helper function to improve maintainability.
Commit history (from branch updates):

- feat: add python tool timeout param (#7953); update python.py
- feat: support plugins adding skills; add tests for plugin skill handling and improve skill management
- fix: handle PPIO platform context-length error messages (#7888): extend context-length error detection, with case-insensitive matching, to cover PPIO's "The input is longer than the model's context length" format while keeping the existing "maximum context length" check
- fix: support WeChat Customer Service file messages (#7923); remove WeCom file message placeholder
- fix(provider): fix Anthropic custom headers and system prompt compatibility (#7587): pass custom_headers via AsyncAnthropic's `default_headers` parameter instead of a separate httpx.AsyncClient (avoiding `isinstance` check failures when multiple httpx installations exist on sys.path), and send the `system` parameter in list format (`[{"type": "text", ...}]`) for compatibility with third-party API proxies that reject the string format
- perf: improve logic of adding models
- chore: remove redundant logger messages and improve log clarity
- chore: ruff format
- docs: update knowledge base docs (closes #7962)
- fix(#7907): session fallback for send_message_to_user in cron scenarios (#7911): when the LLM passes only a bare session_id rather than the full three-part form, complete the first two segments from current_session; the fallback is restricted to bare session_ids to avoid masking malformed colon-separated input
- perf: tool rendering in the conversation page (#7937): route conversation-history tool messages through ToolCallCard instead of rendering large tool outputs as markdown bot bubbles, fixing a renderer freeze on large tool outputs, and use a single scrollbar in the history preview (fixes #7929; refs #7372, #7456)
- fix: ruff format
- fix: DingTalk auto-reconnect failure after connection timeout (#7924): improve error handling in run(), add retry logic for unexpected DingTalk SDK task exit, and extract the retry logic into a handle_retry helper

Co-authors on these commits include Soulter, Copilot, Claude Opus 4.6, and others.
Fixes #7952
Refs #7925
This PR implements LLM-controlled timeouts for Python execution tools. The previous local Python execution timeout was effectively hardcoded at 30 seconds: not configurable and not LLM-controlled. Following maintainer @Soulter's guidance, the timeout is now provided as a tool parameter and capped by the global tool_call_timeout, so all tools respect the same upper bound.
Modifications / 改动点
- Expose a `timeout` parameter on `astrbot_execute_python` and `astrbot_execute_ipython`.
- Clamp the per-call timeout to `tool_call_timeout`, falling back to the global limit when the value is omitted or invalid.
- Core file modified: astrbot/core/tools/computer_tools/python.py
This is NOT a breaking change. / 这不是一个破坏性变更。
Screenshots or Test Results / 运行截图或测试结果
Checklist / 检查清单
- 😊 If there are new features added in the PR, I have discussed them with the authors through issues/emails, etc.
- 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
- 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
- 😮 My changes do not introduce malicious code.
Summary by Sourcery
Add LLM-controlled, per-call timeouts to Python execution tools while respecting the global tool_call_timeout cap.