feat: Add evaluations support to ManagedAgent.run() #153
jsonbailey wants to merge 3 commits into jb/aic-2174/langchain-graph-runner from
Conversation
Wire judge evaluations into ManagedAgent.run() via an asyncio.Task, mirroring ManagedModel.run(). Awaiting result.evaluations guarantees both evaluation and tracker.track_judge_result() complete. run() returns immediately; the evaluations task resolves asynchronously. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Mirror the managed_model.py fix in managed_agent.py: wrap tracker.track_judge_result() in try/except so a tracking failure does not destroy successfully computed evaluation results, and log a warning when a judge evaluation fails (r.success is False) so failures are visible rather than silently skipped.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Reviewed by Cursor Bugbot for commit 9f9c880.
    log.warning("Judge evaluation failed: %s", r.error_message)
    return results
    ...
    return asyncio.create_task(_run_and_track(evaluator_task))
Duplicated _track_judge_results logic across managed classes
Low Severity
The _track_judge_results method in ManagedAgent is a character-for-character duplicate of the same method in ManagedModel. Both take tracker, input_text, output_text, call evaluator.evaluate(), wrap it in an async task that iterates results, tracks successful ones, and logs failures. This duplicated logic increases maintenance burden — a bug fix or behavior change in one would need to be manually replicated in the other.
We will consider a refactor in the future if needed. It's light enough that we will leave it as is for the moment.


Summary
- Wire judge evaluations into `ManagedAgent.run()` via `asyncio.Task`, mirroring `ManagedModel.run()` (PR 7 / PR 8)
- `run()` returns immediately; `await result.evaluations` guarantees both evaluation and `tracker.track_judge_result()` complete
- Evaluations call `ai_config.evaluator.evaluate(input, content)`: resolves to an empty list with `Evaluator.noop()`
- Failed results (`success=False`) do NOT call `track_judge_result()`
- Depends on
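From the caller's side, the contract in the summary looks roughly like this; `ManagedResult` and `run` here are bare sketches of the described shapes, not the SDK classes.

```python
import asyncio
from dataclasses import dataclass

# Illustrative caller-side contract: run() returns at once, carrying a task.
@dataclass
class ManagedResult:
    content: str
    evaluations: "asyncio.Task"  # resolves to the list of judge results

async def run() -> ManagedResult:
    async def evaluate():
        return []  # the Evaluator.noop() case resolves to an empty list

    # The task is created and handed back; run() does not wait for it.
    return ManagedResult(content="done",
                         evaluations=asyncio.create_task(evaluate()))

async def main():
    result = await run()                      # returns immediately
    judge_results = await result.evaluations  # evaluation + tracking done here
    return result.content, judge_results
```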
Test plan
- `uv run pytest packages/sdk/server-ai/tests/`
- `TestManagedAgentEvaluations` tests: run returns before evaluations resolve, results are collected, tracking fires on await, noop evaluator returns an empty list, failed results are not tracked

🤖 Generated with Claude Code
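The first three `TestManagedAgentEvaluations` cases can be checked with a standalone script; `FakeAgent` and `JudgeResult` below are illustrative stand-ins for the SDK types, and the checks mirror (not reproduce) the actual test suite.

```python
import asyncio

# Stand-in types to exercise the non-blocking and tracking contracts.
class JudgeResult:
    def __init__(self, success: bool) -> None:
        self.success = success

class FakeAgent:
    def __init__(self, results) -> None:
        self._results = results
        self.tracked = []

    async def run(self) -> "asyncio.Task":
        async def _evaluate():
            for r in self._results:
                if r.success:  # failed results are NOT tracked
                    self.tracked.append(r)
            return self._results
        return asyncio.create_task(_evaluate())

async def check_contract():
    agent = FakeAgent([JudgeResult(True), JudgeResult(False)])
    evaluations = await agent.run()
    ran_before_resolve = not evaluations.done()  # run() returned first
    results = await evaluations                  # tracking fires on await
    return ran_before_resolve, len(results), len(agent.tracked)
```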
Note
Medium Risk
Introduces new async evaluation/telemetry side-effects in `ManagedAgent.run()` via background tasks; risk is moderate due to potential concurrency/lifecycle issues (unawaited tasks, exception handling) affecting tracking reliability rather than core auth/data safety.

Overview
`ManagedAgent.run()` now kicks off judge evaluations via `ai_config.evaluator.evaluate(input, output)` and returns a `ManagedResult` that includes an `evaluations` `asyncio.Task` alongside the normal content/metrics. Awaiting `result.evaluations` runs per-judge tracking (`tracker.track_judge_result`) for successful results, logs failures/exceptions without raising, and returns the collected `JudgeResult` list. Tests were expanded to cover the non-blocking behavior, result collection, tracking-on-await contract, noop evaluator behavior, and failed-result handling.