
[pull] main from FlowiseAI:main#387

Merged
pull[bot] merged 2 commits into code:main from FlowiseAI:main
Feb 27, 2026

Conversation


@pull pull bot commented Feb 27, 2026

See Commits and Changes for more details.


Created by pull[bot] (v2.0.0-alpha.4)

Can you help keep this open source service alive? 💖 Please sponsor : )

aviu16 and others added 2 commits February 27, 2026 19:01
…o matches (#5760)

* fix(agentflow): prevent ConditionAgent from silently dropping when no scenario matches

The ConditionAgent was doing strict exact-string matching against
scenario descriptions, but LLMs often return abbreviated or slightly
different versions of the scenario text. When nothing matched, every
branch was marked as unfulfilled and the flow silently terminated
with no response.

Added fallback matching (startsWith, includes) so partial matches
still route correctly, plus a last-resort else branch so the flow
never just dies silently. Also added a safety net in the execution
engine to catch the case where all conditions are unfulfilled.

fixes #5620
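The matching cascade described above could look roughly like the following sketch. This is an illustration of the technique, not the actual Flowise implementation; the function and type names here are hypothetical, and the real matchScenario.ts may order or combine its fallbacks differently.

```typescript
interface Scenario {
    description: string
}

// Returns the index of the matched scenario, falling back through
// progressively looser matches, and finally to the last ("else") branch.
function matchScenario(llmOutput: string, scenarios: Scenario[]): number {
    // Normalize once up front instead of calling toLowerCase().trim()
    // at every comparison (as the follow-up refactor commit suggests).
    const normalized = llmOutput.toLowerCase().trim()
    const descriptions = scenarios.map((s) => s.description.toLowerCase().trim())

    // 1. Exact match (the original, strict behaviour)
    let idx = descriptions.findIndex((d) => d === normalized)
    if (idx !== -1) return idx

    // 2. Fallback: scenario description starts with the LLM's (often
    //    abbreviated) answer
    idx = descriptions.findIndex((d) => d.startsWith(normalized))
    if (idx !== -1) return idx

    // 3. Fallback: substring containment in either direction
    idx = descriptions.findIndex((d) => d.includes(normalized) || normalized.includes(d))
    if (idx !== -1) return idx

    // 4. Last resort: route to the final branch so the flow never
    //    terminates silently with zero fulfilled conditions
    return scenarios.length - 1
}
```

The key design point is step 4: even a completely unrecognized LLM answer still routes somewhere, which is what prevents the silent-termination bug.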

* refactor: normalize output once and drop unnecessary any casts

- normalize calledOutputName once before all matching steps instead of
  calling toLowerCase().trim() repeatedly
- remove explicit any types where inference handles it

* test(agentflow): cover ConditionAgent scenario matching fallbacks

* Update matchScenario.test.ts

* Update matchScenario.ts

---------

Co-authored-by: Henry Heng <henryheng@flowiseai.com>
…Smith, and other providers (fixes #5763) (#5764)

* fix(analytics): capture token usage and model for Langfuse, LangSmith, and other providers

What changed
------------
- handler.ts: Extended onLLMEnd() to accept string | structured output. When
  structured output is passed, we now extract content, usageMetadata (input/
  output/total tokens), and responseMetadata (model name) and forward them
  to all analytics providers. Added usage/model to Langfuse generation.end(),
  LangSmith llm_output, and token attributes for Lunary, LangWatch, Arize,
  Phoenix, and Opik. Call langfuse.flushAsync() after generation.end() so
  updates are sent before the request completes.
- LLM.ts: Pass full output object from prepareOutputObject() to onLLMEnd
  instead of finalResponse string, so usage and model are available.
- Agent.ts: Same as LLM.ts — pass output object to onLLMEnd.
- ConditionAgent.ts: Build analyticsOutput with content, usageMetadata, and
  responseMetadata from the LLM response and pass to onLLMEnd.
- handler.test.ts: Added unit tests for the extraction logic (string vs
  object, token field normalization, model name sources, missing fields).
  OpenAIAssistant.ts call sites unchanged (Assistants API; no usage data).

Why
---
Fixes #5763. Analytics (Langfuse, LangSmith, etc.) were only receiving
plain text from onLLMEnd; usage_metadata and response_metadata from
AIMessage were dropped, so token counts and model names were missing in
dashboards and cost tracking.

Testing
-------
- pnpm build succeeds with no TypeScript errors.
- Manual: Flowise started, Agentflow with ChatOpenAI run; LangSmith and
  Langfuse both show token usage and model on the LLM generation.
- Backward compatible: call sites that pass a string (e.g. OpenAIAssistant)
  still work; onLLMEnd treats string as content-only.

Co-authored-by: Cursor <cursoragent@cursor.com>

* refactor(analytics): address PR review feedback for token usage handling

- LangSmith: Only include token_usage properties that have defined values
  to avoid passing undefined to the API
- Extract common OpenTelemetry span logic into _endOtelSpan helper method
  used by arize, phoenix, and opik providers
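The "only include defined values" filtering for LangSmith might be sketched as follows. The property names are assumptions taken from common LLM usage payloads; the handler's real field names may differ.

```typescript
interface TokenUsage {
    prompt_tokens?: number
    completion_tokens?: number
    total_tokens?: number
}

// Builds a token_usage object containing only the properties that
// actually have values, so undefined is never sent to the LangSmith API.
function buildTokenUsage(usage: TokenUsage): Record<string, number> {
    const tokenUsage: Record<string, number> = {}
    for (const [key, value] of Object.entries(usage)) {
        if (value !== undefined) tokenUsage[key] = value
    }
    return tokenUsage
}
```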

Co-authored-by: Cursor <cursoragent@cursor.com>

* fix(analytics): LangSmith cost tracking and flow name in traces

- LangSmith: set usage_metadata and ls_model_name/ls_provider on run extra.metadata
  so LangSmith can compute costs from token counts (compatible with langsmith 0.1.6
  which has no end(metadata) param). Infer ls_provider from model name.
- buildAgentflow: use chatflow.name as analytics trace/run name instead of
  hardcoded 'Agentflow' so LangSmith and Langfuse show the Flowise flow name.
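Inferring ls_provider from the model name, including the amazon_bedrock normalization mentioned in the later commit, might look like this hypothetical sketch. The mapping rules here are illustrative; the actual handler presumably covers more providers and model-name patterns.

```typescript
// Infers an ls_provider value from a model name. Bedrock is checked
// first because its model IDs are namespaced (e.g. 'anthropic.claude-...'),
// which would otherwise be misclassified by the plain 'claude' check.
function inferProvider(modelName: string): string | undefined {
    const m = modelName.toLowerCase()
    if (m.startsWith('amazon.') || m.startsWith('anthropic.') || m.includes('bedrock')) {
        return 'amazon_bedrock'
    }
    if (m.startsWith('gpt-') || m.startsWith('o1-') || m.startsWith('o3-')) return 'openai'
    if (m.includes('claude')) return 'anthropic'
    if (m.includes('gemini')) return 'google'
    // Unknown model family: let the caller omit ls_provider entirely
    return undefined
}
```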

Co-authored-by: Cursor <cursoragent@cursor.com>

* update handlers to include model and provider for analytics

* fix: normalize provider names in analytics handler to include 'amazon_bedrock'

---------

Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Henry <hzj94@hotmail.com>
@pull pull bot locked and limited conversation to collaborators Feb 27, 2026
@pull pull bot added the ⤵️ pull label Feb 27, 2026
@pull pull bot merged commit 11261d2 into code:main Feb 27, 2026
