feat: Updating the Tracing implementation and updating BaseAgent.runLive #837
This update should make the Java tracing consistent with the Python ADK:
`BaseAgent.java`:
- The `runAsync` and `runLive` methods have been modified to create the `InvocationContext` before starting the tracing span.
- The span name has been changed from `"agent_run [<agent name>]"` to `"invoke_agent <agent name>"`.
- The `Tracing.traceAgentInvocation` method is now called to add initial tracing attributes for the agent invocation.
- In `runLive`, the `runLiveImpl` execution is now wrapped with calls to `beforeAgentCallback` and `afterAgentCallback` to ensure proper tracing of these lifecycle events (see the `runLive` sketch after this list).

`Tracing.java`:
- The tracer name has been changed from `"com.google.adk"` to `"gcp.vertex.agent"`.
- A `traceAgentInvocation` method has been added to set standard attributes for agent invocation spans, including `gen_ai.operation.name`, `gen_ai.agent.description`, `gen_ai.agent.name`, and `gen_ai.conversation.id` (see the attribute sketch below).
- `traceToolCall`, `traceToolResponse`, `traceCallLlm`, and `traceSendData` have been updated to use the `"gcp.vertex.agent."` prefix instead of `"adk."` or `"com.google.adk"`.
- Recording of message content in spans is now gated by the `CAPTURE_MESSAGE_CONTENT_IN_SPANS` flag; when it is disabled, empty JSON objects are recorded instead (see the content-capture sketch below).
- `traceToolResponse` now includes logic to extract and trace the `tool_call.id` and the tool response content from `FunctionResponse` objects.
- `traceCallLlm` now captures additional LLM request and response details, such as `gen_ai.request.top_p`, `gen_ai.request.max_tokens`, `gen_ai.usage.input_tokens`, `gen_ai.usage.output_tokens`, and `gen_ai.response.finish_reasons` (see the last sketch below).
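The ordering change in `runLive` can be summarized with a minimal sketch (not the PR's actual code): the invocation context is created before the span is started, the span uses the new `"invoke_agent <agent name>"` name under the `"gcp.vertex.agent"` scope, and the live implementation is wrapped by the lifecycle callbacks. The `InvocationContext` record and the `Runnable` callbacks here are stand-ins for the real ADK types.

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public final class RunLiveSketch {
  record InvocationContext(String sessionId) {}  // stand-in for the ADK type

  static void runLive(String agentName, Runnable beforeAgentCallback,
                      Runnable runLiveImpl, Runnable afterAgentCallback) {
    // 1. Create the invocation context before the tracing span is started,
    //    so its identifiers are available when the span attributes are set.
    InvocationContext ctx = new InvocationContext("session-123");

    // 2. Start the span under the new name and the "gcp.vertex.agent" scope.
    Tracer tracer = GlobalOpenTelemetry.getTracer("gcp.vertex.agent");
    Span span = tracer.spanBuilder("invoke_agent " + agentName).startSpan();
    try (Scope scope = span.makeCurrent()) {
      span.setAttribute("gen_ai.conversation.id", ctx.sessionId());
      // 3. Wrap the live implementation with the lifecycle callbacks so both
      //    are recorded inside the agent invocation span.
      beforeAgentCallback.run();
      runLiveImpl.run();
      afterAgentCallback.run();
    } finally {
      span.end();
    }
  }
}
```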
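For the new standard attributes, a sketch of what a helper like `traceAgentInvocation` could look like is shown below. This is illustrative only; the method signature and how the session id is obtained are assumptions, but the attribute keys match the ones listed above.

```java
import io.opentelemetry.api.trace.Span;

final class AgentInvocationAttributesSketch {
  // Stamp the standard gen_ai.* attributes onto the current span.
  static void traceAgentInvocation(String agentName, String agentDescription, String sessionId) {
    Span span = Span.current();
    span.setAttribute("gen_ai.operation.name", "invoke_agent");
    span.setAttribute("gen_ai.agent.name", agentName);
    span.setAttribute("gen_ai.agent.description", agentDescription);
    span.setAttribute("gen_ai.conversation.id", sessionId);
  }
}
```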
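The content-capture gate described above could look roughly like the following sketch. The flag name matches the description; how it is read (environment variable, system property, or build config) is an assumption for illustration.

```java
import io.opentelemetry.api.trace.Span;

final class ContentCaptureSketch {
  private static final boolean CAPTURE_MESSAGE_CONTENT_IN_SPANS =
      Boolean.parseBoolean(
          System.getenv().getOrDefault("CAPTURE_MESSAGE_CONTENT_IN_SPANS", "false"));

  // Record the payload only when capture is enabled; otherwise keep the
  // attribute but write an empty JSON object so span shapes stay stable.
  static void recordContent(Span span, String attributeKey, String contentJson) {
    span.setAttribute(attributeKey,
        CAPTURE_MESSAGE_CONTENT_IN_SPANS ? contentJson : "{}");
  }
}
```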
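Finally, the additional `traceCallLlm` attributes amount to setting the request and usage keys listed above on the LLM call span. A hedged sketch, with the values passed in directly rather than read from the ADK request/response types:

```java
import io.opentelemetry.api.trace.Span;

final class CallLlmAttributesSketch {
  static void traceCallLlm(Span span, double topP, long maxTokens,
                           long inputTokens, long outputTokens, String finishReasons) {
    span.setAttribute("gen_ai.request.top_p", topP);
    span.setAttribute("gen_ai.request.max_tokens", maxTokens);
    span.setAttribute("gen_ai.usage.input_tokens", inputTokens);
    span.setAttribute("gen_ai.usage.output_tokens", outputTokens);
    span.setAttribute("gen_ai.response.finish_reasons", finishReasons);
  }
}
```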