Merged
73 changes: 73 additions & 0 deletions packages/uipath-llamaindex/template/README.md
@@ -0,0 +1,73 @@
# UiPath LlamaIndex Template Agent

A quickstart UiPath LlamaIndex agent. It answers user questions by calling tools and supports multiple LLM providers.

> **Docs:** [uipath-llamaindex quick start](https://uipath.github.io/uipath-python/llamaindex/quick_start/) — **Samples:** [uipath-llamaindex/samples](https://github.com/UiPath/uipath-integrations-python/tree/main/packages/uipath-llamaindex/samples)

## What it does

1. **Prepares** the conversation — injects a system prompt and the user question into workflow context
2. **Runs a ReAct agent step** that autonomously decides which tools to call and in what order
3. **Postprocesses** — validates and truncates the response if it exceeds the configured max length
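
The postprocess step (3) can be pictured with a short helper. This is a minimal sketch of the truncation logic; `MAX_RESPONSE_LENGTH` is a hypothetical value, and the template's actual limit and validation rules may differ:

```python
# Illustrative postprocess helper; MAX_RESPONSE_LENGTH is a hypothetical
# setting, not the template's real configuration.
MAX_RESPONSE_LENGTH = 2000

def postprocess(response: str, max_length: int = MAX_RESPONSE_LENGTH) -> str:
    """Reject empty responses and truncate those exceeding max_length."""
    if not response.strip():
        raise ValueError("agent returned an empty response")
    if len(response) > max_length:
        # Reserve three characters for the ellipsis so the result fits the budget.
        return response[: max_length - 3] + "..."
    return response
```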

### Tools

| Tool | Description |
| ------------------ | ------------------------------------------------ |
| `get_current_time` | Returns the current UTC date and time (ISO 8601) |
| `get_weather` | Returns weather data for a city (mock data) |
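
The tools can be sketched as plain Python functions. `get_weather`'s signature mirrors `main.py`; the bodies are illustrative stand-ins, and the city table below is invented sample data:

```python
from datetime import datetime, timezone

def get_current_time() -> str:
    """Return the current UTC date and time in ISO 8601 format."""
    return datetime.now(timezone.utc).isoformat()

def get_weather(city: str, utc_time: str) -> str:
    """Return mock weather data for a city at the given UTC time."""
    # Hard-coded sample data stands in for a real weather API call.
    mock_reports = {"London": "12°C, light rain", "Paris": "18°C, partly cloudy"}
    report = mock_reports.get(city, "15°C, clear skies")
    return f"Weather in {city} at {utc_time}: {report}"
```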

### LLM Providers

The template defaults to **Claude Haiku 4.5** via `UiPathChatBedrockConverse`. To switch providers, edit `main.py`:

```python
# Choose your LLM provider by uncommenting one of the following:
llm = UiPathChatBedrockConverse(model=BedrockModel.anthropic_claude_haiku_4_5)
# llm = UiPathOpenAI(model=OpenAIModel.GPT_4_1_MINI_2025_04_14.value)
# llm = UiPathVertex(model=GeminiModel.gemini_2_5_flash)
```

## Workflow

```mermaid
flowchart TD
    START --> prepare
    prepare --> react_agent
    react_agent -->|tool calls| tool_executor
    tool_executor --> react_agent
    react_agent -->|final| postprocess
    postprocess --> END
```
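
The loop in the diagram reduces to a few lines of plain Python. Here `llm_step` and `execute_tool` are hypothetical stand-ins for the real ReAct agent and tool-executor nodes:

```python
def run_agent(question, llm_step, execute_tool, max_iterations=10):
    """Drive the react_agent/tool_executor loop until a final answer appears."""
    messages = [{"role": "user", "content": question}]        # prepare
    for _ in range(max_iterations):
        result = llm_step(messages)                           # react_agent
        tool_calls = result.get("tool_calls")
        if tool_calls:
            for call in tool_calls:                           # tool_executor
                messages.append({"role": "tool", "content": execute_tool(call)})
        else:
            return result["content"]                          # final -> postprocess
    raise RuntimeError("agent did not converge within max_iterations")
```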

## Input / Output

```json
// Input
{
  "question": "What's the weather like in London?"
}

// Output
{
  "response": "..."
}
```
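
The same contract can be sketched as dataclasses. The template itself models input as a pydantic `StartEvent` subclass, so the names here are illustrative:

```python
import json
from dataclasses import asdict, dataclass

@dataclass
class AgentInput:
    question: str

@dataclass
class AgentOutput:
    response: str

def load_input(raw: str) -> AgentInput:
    # TypeError surfaces a missing `question` key or any unexpected extras.
    return AgentInput(**json.loads(raw))

def dump_output(output: AgentOutput) -> str:
    return json.dumps(asdict(output))
```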

## Running locally

```bash
# Run
uv run uipath run agent --input-file input.json --output-file output.json

# Debug with dynamic node breakpoints
uv run uipath debug agent --input-file input.json --output-file output.json
```

## Evaluation

The agent ships with a tool-call-order evaluator, which verifies that the ReAct step calls `get_current_time` **before** `get_weather` for a time-and-weather query, and an LLM-judge evaluator, which scores the weather response for semantic similarity to the expected output.

```bash
uv run uipath eval
```
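
The ordering check the evaluator performs can be sketched as a predicate over the recorded tool-call trace. Representing `trace` as a list of tool names is an assumption for illustration, not the eval harness's real API:

```python
def check_tool_order(trace, first="get_current_time", second="get_weather"):
    """Pass only when `first` appears and its first call precedes any call to `second`."""
    if first not in trace or second not in trace:
        return False
    return trace.index(first) < trace.index(second)
```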
9 changes: 5 additions & 4 deletions packages/uipath-llamaindex/template/entry-points.json
```diff
@@ -4,18 +4,19 @@
   "entryPoints": [
     {
       "filePath": "agent",
-      "uniqueId": "d64050f7-add5-4197-91f2-7b9cf3187751",
+      "uniqueId": "9016cb4a-25b4-44d3-8ace-08c3fea5316e",
       "type": "agent",
       "input": {
         "type": "object",
         "properties": {
-          "query": {
-            "title": "Query",
+          "question": {
+            "description": "Question for the assistant, e.g. 'What's the weather in Paris?'",
+            "title": "Question",
             "type": "string"
           }
         },
         "required": [
-          "query"
+          "question"
         ]
       },
       "output": {
```
```diff
@@ -13,7 +13,7 @@
       "id": "ada5a2c1-976c-470b-964f-eb70a5e61eb4",
       "name": "Weather in Paris",
       "inputs": {
-        "query": "Is it good weather for a walk in Paris?"
+        "question": "Is it good weather for a walk in Paris?"
       },
       "evaluationCriterias": {
         "evaluator-llm-judge-output": {
```
2 changes: 1 addition & 1 deletion packages/uipath-llamaindex/template/input.json
```diff
@@ -1,3 +1,3 @@
 {
-  "query": "What's the weather like in London?"
+  "question": "What's the weather like in London?"
}
```
7 changes: 5 additions & 2 deletions packages/uipath-llamaindex/template/main.py
```diff
@@ -10,6 +10,7 @@
     Workflow,
     step,
 )
+from pydantic import Field

 from uipath_llamaindex.llms import BedrockModel, GeminiModel, OpenAIModel, UiPathOpenAI
 from uipath_llamaindex.llms.bedrock import UiPathChatBedrockConverse
@@ -63,7 +64,9 @@ def get_weather(city: str, utc_time: str) -> str:


 class QueryEvent(StartEvent):
-    query: str
+    question: str = Field(
+        description="Question for the assistant, e.g. 'What's the weather in Paris?'"
+    )


 class LLMInputEvent(Event):
@@ -87,7 +90,7 @@ class TemplateAgent(Workflow):
     async def prepare(self, ctx: Context, ev: QueryEvent) -> LLMInputEvent:
         await ctx.store.set("messages", [
             ChatMessage(role="system", content=SYSTEM_PROMPT),
-            ChatMessage(role="user", content=ev.query),
+            ChatMessage(role="user", content=ev.question),
         ])
         return LLMInputEvent()
```