feat(metadata-editor): apply custom logic for multiple confidence scores and target locations #4502
Conversation
Walkthrough
Filter out metadata targetLocation entries that lack bounding boxes when computing highlights; select the lowest confidence score from arrays for AI suggestions; and flatten nested reference arrays into a single targetLocation. Tests were added/updated to cover these behaviors.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/elements/content-sidebar/__tests__/useSidebarMetadataFetcher.test.tsx (1)
818-875: Add a regression test for empty confidence arrays. Please add a case where `confidence_score.field1` is `[]` to ensure extraction does not throw and simply omits `aiSuggestionConfidenceScore`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/elements/content-sidebar/__tests__/useSidebarMetadataFetcher.test.tsx` around lines 818 - 875, Add a regression test that ensures empty confidence arrays are handled gracefully: in the test file use mockAPI.extractStructured.mockResolvedValue to return answer: { field1: 'value1' } and confidence_score: { field1: [] } (and created_at/completion_reason as in other tests), call setupHook('123', true) and await result.current.status === STATUS.SUCCESS, then call result.current.extractSuggestions('templateKey', 'global') and assert that the returned suggestion for field1 contains aiSuggestion: 'value1' but does NOT include aiSuggestionConfidenceScore (or that aiSuggestionConfidenceScore is undefined), referencing the existing tests like "should pick the lowest confidence score when confidence_score is an array" and the extractSuggestions flow to place the new test alongside them.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/elements/content-sidebar/hooks/useSidebarMetadataFetcher.ts`:
- Around line 287-293: The current computation of lowestConfidence in
useSidebarMetadataFetcher.ts uses Array.prototype.reduce without guarding
against an empty array, which throws when fieldConfidence === []; fix by
checking Array.isArray(fieldConfidence) && fieldConfidence.length > 0 before
reducing (or use reduce with an initial value such as fieldConfidence[0]) and
provide a safe fallback when the array is empty or fieldConfidence is
null/undefined so that result.aiSuggestionConfidenceScore is only assigned when
a valid lowestConfidence with .score and .level exists (or assign a safe
default/null). Update the logic around the symbols fieldConfidence,
lowestConfidence, and result.aiSuggestionConfidenceScore to handle empty arrays
and avoid TypeError.
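A minimal sketch of the guard this comment asks for. The types and helper name (`ConfidenceEntry`, `applyLowestConfidence`) are assumptions mirroring the symbols mentioned above, not the hook's actual code:

```typescript
// Assumed shape of one per-value confidence entry.
interface ConfidenceEntry {
    score: number;
    level: string;
}

interface Suggestion {
    aiSuggestion?: unknown;
    aiSuggestionConfidenceScore?: ConfidenceEntry;
}

function applyLowestConfidence(
    result: Suggestion,
    fieldConfidence: ConfidenceEntry[] | ConfidenceEntry | null | undefined,
): Suggestion {
    // Normalize: a single entry is treated as a one-element array.
    const entries = Array.isArray(fieldConfidence)
        ? fieldConfidence
        : fieldConfidence
          ? [fieldConfidence]
          : [];

    // Guard against the empty array that makes an unseeded reduce throw.
    if (entries.length === 0) {
        return result; // aiSuggestionConfidenceScore is simply omitted
    }

    // Seeding reduce with entries[0] is safe now that length > 0.
    const lowest = entries.reduce(
        (min, entry) => (entry.score < min.score ? entry : min),
        entries[0],
    );

    return { ...result, aiSuggestionConfidenceScore: lowest };
}
```

Seeding `reduce` with an initial value is the key change: `[].reduce(fn)` without an initial value throws a TypeError, while `[].reduce(fn, seed)` does not, and the explicit length check makes the "omit the score" fallback obvious.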
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 5e18a21d-3f8e-4863-b9cd-700933d38bfe
📒 Files selected for processing (4)
- src/elements/content-sidebar/__tests__/useMetadataFieldSelection.test.ts
- src/elements/content-sidebar/__tests__/useSidebarMetadataFetcher.test.tsx
- src/elements/content-sidebar/hooks/useMetadataFieldSelection.ts
- src/elements/content-sidebar/hooks/useSidebarMetadataFetcher.ts
Merge Queue Status
This pull request spent 10 minutes 25 seconds in the queue, including 10 minutes 3 seconds running CI.
For multiSelect metadata fields, the Intelligence API returns per-value confidence scores, confidence levels, and bounding boxes, rather than a single set per field. This doesn't align with the current review/approval (HITL) process or meta-metadata schema, which expect one confidence score and one flat list of bounding boxes per field.
This PR applies the agreed-upon workaround:
- Select the lowest confidence score (and its level) from each field's per-value array and store it as the field's single score.
- Flatten the nested per-value target locations into a single targetLocation, filtering out entries that lack bounding boxes.

Some per-value granularity is lost (one score/level stored per field instead of per option), but this is acceptable for the HITL process.
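The workaround above can be sketched as follows. The type names (`BoundingBox`, `TargetLocation`) and the exact API shapes are illustrative assumptions, not the actual types in the codebase:

```typescript
// Assumed shapes for the Intelligence API's per-value results.
interface BoundingBox {
    page: number;
    x: number;
    y: number;
    width: number;
    height: number;
}

interface TargetLocation {
    boundingBoxes?: BoundingBox[];
}

// Flatten the nested per-value target locations into one flat list,
// dropping entries that lack bounding boxes (they cannot be highlighted).
function flattenTargetLocations(nested: TargetLocation[][]): TargetLocation[] {
    return nested
        .flat()
        .filter(loc => Array.isArray(loc.boundingBoxes) && loc.boundingBoxes.length > 0);
}
```

This keeps the HITL-facing shape (one flat list per field) while the per-value grouping is discarded, which is the granularity trade-off the PR description accepts.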
Summary by CodeRabbit
Bug Fixes
- Select the lowest confidence score from per-value arrays for AI suggestions.
- Exclude target locations without bounding boxes from highlight computation and flatten nested reference arrays into a single targetLocation.
Tests
- Added/updated tests covering these behaviors.