feat: Add hybrid inference and structured output support to sample app #998

ryanwilson wants to merge 1 commit into master from
Conversation
- Added InferenceMode controls in MainLayout and ChatView.
- Added JSON Schema textarea in RightSidebar and wired it to responseJsonSchema.
- Updated available model names to the Gemini 2.5 series.
Code Review
This pull request introduces Hybrid Mode, enabling on-device inference via Chrome's Prompt API alongside cloud-based inference. Key updates include a new UI in the sidebar for managing model status and inference preferences, a JSON schema editor for structured outputs, and updated model configurations in the service layer. The review feedback identifies critical issues with the Chrome Prompt API access path, redundant code assignments, and opportunities to improve maintainability through refactoring duplicated logic and enhancing type safety.
```ts
const checkModelAvailability = async () => {
  setModelStatus("checking");
  try {
    const ai = (window as any).LanguageModel;
```
The Chrome Prompt API is typically accessed via window.ai.languageModel. Accessing window.LanguageModel directly is likely to fail in standard Chrome environments, causing the availability check to incorrectly report the model as unavailable.
```diff
- const ai = (window as any).LanguageModel;
+ const ai = (window as any).ai?.languageModel;
```
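Since the Prompt API surface has varied across Chrome releases, one defensive option is to probe both entry points before reporting the model as unavailable. The helper below is a sketch only; neither property name is guaranteed by the snippets in this PR.

```typescript
// Hypothetical helper: probe both known Prompt API entry points.
// Newer Chrome builds have exposed a LanguageModel global, while earlier
// Origin Trial builds exposed window.ai.languageModel.
function resolveLanguageModel(globalObj: any): any {
  return globalObj.LanguageModel ?? globalObj.ai?.languageModel;
}

// In the component, the availability check could then become:
//   const ai = resolveLanguageModel(window);
//   if (!ai) setModelStatus("unavailable");
```

This keeps the availability check working even if only one of the two shapes exists in the user's browser.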
```ts
const handleDownloadModel = async () => {
  setModelStatus("downloading");
  try {
    const ai = (window as any).LanguageModel;
```
```ts
nextState.generationConfig.responseJsonSchema = { type: "object", properties: {} }; // Default schema
nextState.generationConfig.responseSchema = undefined;
```
The assignment to responseSchema on line 214 is redundant because it is performed again on line 217 within the same logical block. Removing the duplicate assignment improves code clarity.
```diff
  nextState.generationConfig.responseJsonSchema = { type: "object", properties: {} }; // Default schema
- nextState.generationConfig.responseSchema = undefined;
```
```ts
try {
  const schema = JSON.parse(e.target.value);
  handleModelParamsUpdate((prev: ModelParams) => ({
    ...prev,
    generationConfig: {
      ...prev.generationConfig,
      responseJsonSchema: schema,
    },
  }));
} catch (err) {
  // Ignore invalid JSON while typing
}
```
The try-catch block silently ignores JSON parsing errors. While this prevents the application from crashing during typing, the user receives no feedback when their schema contains a syntax error. Consider adding a local state to track and display validation errors to the user for a better experience.
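One way to surface that feedback is to extract the parse into a pure helper that returns either the parsed schema or the error message, which the component can store in local state and render under the textarea. This is a sketch of the suggestion, not code from the PR:

```typescript
// Hypothetical helper: parse the textarea value once and return the error
// message instead of swallowing it, so the UI can show validation feedback.
function parseSchemaInput(text: string): { schema?: unknown; error?: string } {
  try {
    return { schema: JSON.parse(text) };
  } catch (err) {
    return { error: err instanceof Error ? err.message : "Invalid JSON" };
  }
}
```

The component could then keep `const [schemaError, setSchemaError] = useState<string | null>(null)`, call `handleModelParamsUpdate` only when `schema` is defined, and otherwise display `error` next to the textarea.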
```ts
const paramsToUse = isHybridMode ? {
  mode: inferenceMode,
  inCloudParams: params,
  onDeviceParams: {}
} : params;
```
```ts
  inCloudParams: params,
  onDeviceParams: {}
} : params;
const model = getGenerativeModel(aiInstance, paramsToUse as any);
```
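The `as any` cast could be avoided by modelling the two parameter shapes as a union type. The interfaces below are hypothetical (the field and mode names mirror the snippet, not a confirmed SDK signature), but they show the type-safety direction the review summary mentions:

```typescript
// Hypothetical shapes: plain cloud params vs. the hybrid wrapper.
interface CloudParams { model: string }
interface HybridParams {
  mode: string;
  inCloudParams: CloudParams;
  onDeviceParams: Record<string, unknown>;
}
type ModelRequestParams = CloudParams | HybridParams;

// Build the params without an `as any` cast at the call site.
function buildRequestParams(
  isHybridMode: boolean,
  inferenceMode: string,
  params: CloudParams
): ModelRequestParams {
  return isHybridMode
    ? { mode: inferenceMode, inCloudParams: params, onDeviceParams: {} }
    : params;
}
```

With a union type, `getGenerativeModel` could accept `ModelRequestParams` directly (assuming the SDK's overloads allow it), and the compiler would catch shape mismatches instead of the cast hiding them.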
This PR updates the AI React sample app to support new features of the Firebase AI SDK, specifically Hybrid Inference and Structured Output.
Changes:
Added sidebar controls (RightSidebar) to toggle Hybrid Mode and select the Inference Mode (Prefer On-Device, Only On-Device, etc.). Updated MainLayout and ChatView to pass these parameters to the SDK.