
feat: add web platform support for expo-ondevice-ai#9

Merged
hyochan merged 14 commits into main from feat/expo-web-support
Feb 24, 2026

Conversation

@hyochan
Member

@hyochan hyochan commented Feb 22, 2026

Summary

  • Add full web platform support for expo-ondevice-ai using Chrome Built-in AI (Gemini Nano) APIs
  • All 8 AI features work on web: summarize, classify, extract, chat, chatStream, translate, rewrite, proofread
  • Add packages/web standalone SDK and ci-web.yml workflow
  • Update example app with web platform detection, fix streaming/UI issues
  • Update documentation and Claude guides with web platform coverage
  • Remove misleading build-web job from ci-react-native.yml (Nitro doesn't support web)

Changes

Core

  • ExpoOndeviceAiModule.web.ts — Chrome Built-in AI integration (Summarizer, LanguageModel, Translator, Rewriter, Writer APIs)
  • Web event emitter polyfill for Expo's native addListener/removeListeners
  • Lenient availability detection with 3s timeout
  • Chat streaming with cumulative/delta auto-detection
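
The "lenient availability detection with 3s timeout" above can be sketched as a race between the API's availability probe and a timer. This is an illustrative sketch, not the PR's actual code; `probeAvailability` is a hypothetical helper, and `check` stands in for a Chrome Built-in AI call such as `LanguageModel.availability()`:

```typescript
// Lenient probe: resolve to 'unavailable' instead of hanging or
// throwing when the availability check is slow or the API is missing.
async function probeAvailability(
  check: () => Promise<string>,
  timeoutMs = 3000,
): Promise<string> {
  const timeout = new Promise<string>((resolve) =>
    setTimeout(() => resolve('unavailable'), timeoutMs),
  );
  try {
    return await Promise.race([check(), timeout]);
  } catch {
    // Lenient: any probe error is treated as "unavailable", not fatal.
    return 'unavailable';
  }
}
```

In the browser this would be called as, e.g., `await probeAvailability(() => LanguageModel.availability())`, so a broken or absent API degrades to an "unavailable" capability rather than an unhandled rejection.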

Example App

  • AIStatusBanner.tsx — Web-specific messages ("Chrome Built-in AI")
  • AppState.tsx — Web platform/provider labels
  • ChatDemo/index.tsx — Fix blue focus outline on web
  • ExtractDemo.tsx — Add organization entity color

CI

  • ci-expo.yml — Already has build-web (Expo web export)
  • ci-react-native.yml — Remove unrelated packages/web build job
  • ci-web.yml — New standalone web SDK CI

Documentation

  • README, Claude guides updated with Web platform info

Test plan

  • Run `bun web` in the expo-ondevice-ai example and verify all 8 AI features work on Chrome 138+
  • Verify CI workflows pass (lint-and-test, build-android, build-ios, build-web)
  • Verify ci-react-native.yml no longer triggers on packages/web/** changes

Summary by CodeRabbit

  • New Features

    • Full Web support via Chrome Built‑in AI (Chrome 138+): summarize, translate, chat (streaming), rewrite, classify, extract, proofread, image description, writer APIs, plus a Web SDK and interactive demo.
  • Documentation

    • Expanded platform matrix, migration notes, setup guide, Quick Start, Web API reference, and Web-specific implementation guidance.
  • UX

    • Platform-aware status banners, device info, and minor web styling tweaks in the demo.
  • Tests & CI

    • Web test suite and CI workflows for Android, iOS, and Web.

Add full web support using Chrome Built-in AI (Gemini Nano) APIs.
All 8 AI features work on web: summarize, classify, extract, chat,
chatStream, translate, rewrite, and proofread.

- Add ExpoOndeviceAiModule.web.ts with Chrome Built-in AI integration
- Add packages/web standalone SDK
- Update example app with web platform detection (AIStatusBanner, AppState)
- Fix chat streaming (cumulative/delta auto-detection), input focus outline
- Add entity colors for organization type in ExtractDemo
- Add ci-web.yml workflow, update ci-expo.yml with build-web job
- Remove misleading build-web job from ci-react-native.yml
- Update README, Claude guides with web platform documentation
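
One way to implement the cumulative/delta auto-detection mentioned above: some streaming backends emit the full text so far on each chunk (cumulative), others emit only the new suffix (delta). A minimal heuristic detector (my sketch under that assumption, not the PR's exact code) checks whether each chunk extends the accumulated text:

```typescript
// Sketch: normalize stream chunks to deltas, whether the source emits
// cumulative snapshots ("He", "Hell", "Hello") or raw deltas ("He", "ll", "o").
function makeDeltaNormalizer() {
  let accumulated = '';
  return (chunk: string): string => {
    // Cumulative style: the chunk repeats everything seen so far.
    // (Heuristic: a delta that happens to start with the accumulated
    // text would be misclassified; acceptable for UI streaming.)
    if (chunk.length > accumulated.length && chunk.startsWith(accumulated)) {
      const delta = chunk.slice(accumulated.length);
      accumulated = chunk;
      return delta;
    }
    // Delta style: the chunk is new text to append.
    accumulated += chunk;
    return chunk;
  };
}
```

Either stream shape then yields the same sequence of UI-ready deltas, which is what lets a single chat UI consume both styles.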
@coderabbitai

coderabbitai bot commented Feb 22, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds full Web support: a new Locanara Web SDK package, a browser Expo module implementation, TypeScript types, docs, examples, tests, and CI workflows to integrate Chrome Built‑in AI (Gemini Nano) into on‑device AI APIs and platform matrices.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Docs & Guides**<br>`.claude/guides/09-expo-ondevice-ai.md`, `.claude/guides/09-platform-differences.md`, `libraries/expo-ondevice-ai/README.md` | Document Web support, Chrome Built‑in AI requirements (Chrome 138+), Web API mappings, availability/streaming notes, migration guidance, and updated platform matrices. |
| **CI Workflows**<br>`.github/workflows/ci-expo.yml`, `.github/workflows/ci-react-native.yml`, `.github/workflows/ci-web.yml` | Add multi-platform CI jobs for Expo (android/ios/web), React Native (android/ios), and a web package job (lint/test/build). |
| **Expo Module & Types**<br>`libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`, `libraries/expo-ondevice-ai/expo-module.config.json`, `libraries/expo-ondevice-ai/src/types.ts` | New Expo Web implementation module exposing the same public API surface for Web; enable the web platform and add WEB to the Platform union. |
| **Expo Example App UI**<br>`libraries/expo-ondevice-ai/example/components/...` | Platform-aware device/provider strings and messages, Web-specific banner for Chrome Built‑in AI, minor style tweak (outline removal), and entity color addition. |
| **Web SDK Core**<br>`packages/web/src/Locanara.ts`, `packages/web/src/Types.ts`, `packages/web/src/Errors.ts`, `packages/web/src/index.ts` | New Locanara Web SDK: singleton API (summarize/translate/chat/rewrite/classify/extract/proofread/describeImage/write) with streaming, capability probing, instance caching, Chrome Built‑in AI bindings, and model-management no-op stubs. |
| **Web SDK Packaging & Tooling**<br>`packages/web/package.json`, `packages/web/tsconfig.json`, `packages/web/biome.json`, `packages/web/vite.config.ts`, `packages/web/vitest.config.ts`, `packages/web/scripts/build.mjs` | Add package manifest, TS/Vite/Vitest configs, linter/formatter settings, and build/minify script. |
| **Web SDK Example & UI**<br>`packages/web/example/index.html`, `packages/web/example/main.ts`, `packages/web/README.md` | Full browser example UI and main.ts orchestrating capability checks, model download flow, streaming demos, and developer README/guides. |
| **Web SDK Tests**<br>`packages/web/tests/Locanara.test.ts` | Unit tests mocking browser AI services, exercising lifecycle, capability detection, feature calls, caching, session management, and error handling. |
| **Misc: Examples & Config**<br>`packages/web/example/*`, `packages/web/dist-example`, `libraries/expo-ondevice-ai/example/*`, `libraries/expo-ondevice-ai/example/app.config.ts`, `libraries/expo-ondevice-ai/android/build.gradle` | Example scaffolding, Expo plugin toggle (enableLlamaCpp), iOS deployment target bump, and resilient Android version lookup in build.gradle. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as App UI
    participant SDK as Locanara SDK
    participant Window as Browser (window / window.ai)
    participant Chrome as Chrome Built‑in AI (Gemini)

    UI->>SDK: initialize()
    SDK->>Window: probe APIs (Summarizer/LanguageModel/Translator/...)
    Window-->>SDK: constructors / availability
    SDK-->>UI: InitializeResult (capabilities)

    UI->>SDK: chatStreaming(message, opts)
    SDK->>Chrome: create session / promptStreaming()
    Chrome-->>SDK: ReadableStream chunks
    loop stream
      SDK-->>UI: yield chunk
    end

    UI->>SDK: downloadModel(id)
    SDK->>Chrome: (noop / managed by Chrome)
    SDK-->>UI: progress/completion (cached/no-op)
```
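
Concretely, the streaming leg of the diagram reduces to iterating a `ReadableStream` returned by `promptStreaming()`. This is a sketch, not the SDK's literal code; the only assumption about the Chrome session is that `promptStreaming` returns a `ReadableStream<string>`:

```typescript
// Sketch of the SDK->UI streaming loop: read chunks off the Chrome
// session's stream and yield each one to the caller as it arrives.
async function* streamChat(
  session: { promptStreaming(msg: string): ReadableStream<string> },
  message: string,
): AsyncGenerator<string> {
  const reader = session.promptStreaming(message).getReader();
  try {
    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
      yield value; // forwarded to the UI immediately
    }
  } finally {
    reader.releaseLock();
  }
}
```

A UI would consume it with `for await (const chunk of streamChat(session, msg)) append(chunk);`, which matches the `loop stream` box above.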

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~55 minutes

Possibly related PRs

Suggested labels

🎯 feature

Poem

🐰
Tiny paws tap keys beneath the light,
Browser winds hum engines through the night.
From burrowed module to Chrome's bright glow,
A rabbit cheers — web AI on the go! 🥕✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 28.13%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title 'feat: add web platform support for expo-ondevice-ai' is concise, specific, and clearly summarizes the main change: adding web platform support to the expo-ondevice-ai library using Chrome Built-in AI. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



Comment @coderabbitai help to get the list of available commands and usage tips.

@gemini-code-assist

Summary of Changes

Hello @hyochan, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the reach of the expo-ondevice-ai library by introducing full support for the web platform. Leveraging Chrome's Built-in AI capabilities, it enables a comprehensive suite of AI features directly within web applications, mirroring the functionality previously available only on iOS and Android. This integration provides developers with a unified API for on-device AI across mobile and web, enhancing cross-platform consistency and accessibility.

Highlights

  • Web Platform Support: Added full web platform support for expo-ondevice-ai by integrating Chrome Built-in AI (Gemini Nano) APIs.
  • Comprehensive AI Features: Enabled all 8 AI features (summarize, classify, extract, chat, chatStream, translate, rewrite, proofread) to function on the web platform.
  • New Web SDK and CI: Introduced a new standalone packages/web SDK and a dedicated ci-web.yml workflow for web-specific continuous integration.
  • Example App Enhancements: Updated the example application with web platform detection, streaming fixes, and UI adjustments to support the new web capabilities.
  • Documentation Updates: Revised documentation, including READMEs and Claude guides, to reflect the new web platform coverage and requirements.


Changelog
  • .claude/guides/09-expo-ondevice-ai.md
    • Updated the module description to include web platform support via Chrome Built-in AI.
    • Added Chrome 138+ (Chrome Built-in AI / Gemini Nano) as a new requirement.
    • Included ExpoOndeviceAiModule.web.ts in the file structure diagram.
    • Expanded the API mapping table to include Web (Chrome Built-in AI) implementations for each TypeScript function.
    • Added a new section detailing the Web Implementation (ExpoOndeviceAiModule.web.ts), outlining Chrome APIs used and key implementation details like availability detection, streaming, event emitter polyfill, instance caching, and no-op model management.
  • .claude/guides/09-platform-differences.md
    • Updated the guide's introduction to include Web as a platform.
    • Expanded the Feature Availability Matrix to include a 'Web' column, detailing which AI features are available via Chrome Built-in AI and which core framework features are native SDK only.
    • Added a new section for 'Web-Only Features (via expo-ondevice-ai)' explaining Chrome Built-in AI integration and APIs used.
    • Included a new 'Web' section under 'Implementation Differences' to list Chrome version and Gemini Nano requirements.
    • Added a 'Web (Expo)' section under 'Testing Platform-Specific Features' with instructions to run the web example.
    • Updated the summary points to reflect web support for AI features and backend differences.
  • libraries/expo-ondevice-ai/README.md
    • Added 'Web: Chrome 138+ (Chrome Built-in AI / Gemini Nano)' to the requirements list.
  • libraries/expo-ondevice-ai/example/bun.lock
    • Updated react-dom dependency from ^19.2.4 to 19.1.0.
    • Removed @isaacs/balanced-match and @isaacs/brace-expansion dependencies.
    • Updated @react-navigation/bottom-tabs from 7.12.0 to 7.14.0.
    • Updated @react-navigation/native-stack from 7.12.0 to 7.13.0.
    • Updated @types/node from 25.2.0 to 25.3.0.
    • Updated acorn from 8.15.0 to 8.16.0.
    • Updated acorn-walk from 8.3.4 to 8.3.5.
    • Updated ajv from 8.17.1 to 8.18.0.
    • Updated baseline-browser-mapping from 2.9.19 to 2.10.0.
    • Updated caniuse-lite from 1.0.30001767 to 1.0.30001770.
    • Updated electron-to-chromium from 1.5.286 to 1.5.302.
    • Updated metro-runtime from 0.83.3 to 0.83.4.
    • Updated metro-source-map from 0.83.3 to 0.83.4.
    • Updated minipass from 7.1.2 to 7.1.3.
    • Updated ob1 from 0.83.3 to 0.83.4.
    • Updated path-scurry from 2.0.1 to 2.0.2.
    • Updated tar from 7.5.7 to 7.5.9.
    • Updated undici-types from 7.16.0 to 7.18.2.
    • Updated various @expo and @react-native related glob and semver dependencies to newer versions.
  • libraries/expo-ondevice-ai/example/components/AppState.tsx
    • Modified device info setting logic to correctly identify 'Web' platform and 'Chrome Built-in AI' provider.
  • libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ChatDemo/index.tsx
    • Added web-specific styling to remove the blue focus outline on text input fields.
  • libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ExtractDemo.tsx
    • Added 'organization' entity type with a specific color to the ENTITY_COLORS map.
  • libraries/expo-ondevice-ai/example/components/shared/AIStatusBanner.tsx
    • Updated the 'Checking AI Status' message to dynamically display 'Checking Chrome Built-in AI...' when on the web platform.
    • Adjusted the engine label display to show 'Chrome Built-in AI' for the web platform.
    • Added a new conditional block to display a web-specific warning message if Chrome Built-in AI is not available, including instructions to enable flags.
    • Modified the 'Device Not Supported' message to provide platform-specific guidance for iOS and Android, and to include web-specific information.
  • libraries/expo-ondevice-ai/expo-module.config.json
    • Added 'web' to the list of supported platforms.
  • libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts
    • Added a new file implementing the web-specific logic for ExpoOndeviceAiModule.
    • Implemented web polyfills for event emitters and availability checks for Chrome Built-in AI APIs.
    • Provided web-specific implementations for initialize, getDeviceCapability, summarize, classify, extract, chat, chatStream, translate, rewrite, and proofread using Chrome's native AI APIs (Summarizer, LanguageModel, Translator, Rewriter, Writer).
    • Included no-op implementations for model management functions as Chrome handles these automatically.
  • packages/web/README.md
    • Added a new README file detailing the @locanara/web SDK, its requirements, setup instructions for Chrome Built-in AI, installation, quick start guide, feature descriptions, streaming support, download progress monitoring, error handling, development instructions, API reference, browser support, and license.
  • packages/web/biome.json
    • Added a new Biome configuration file for code formatting and linting within the packages/web directory.
  • packages/web/example/index.html
    • Added a new HTML file for the web example application, including styling and basic structure for a tab-based UI with feature detail pages.
  • packages/web/example/main.ts
    • Added a new TypeScript file containing the main logic for the web example application, including navigation, feature rendering, AI status updates, device info display, and handlers for various AI features like summarize, classify, extract, chat, translate, rewrite, proofread, and describe image.
  • packages/web/package.json
    • Added a new package.json file for the @locanara/web package, defining its name, version, type, description, main entry points, files, publish configuration, repository, scripts (dev, build, test, lint), keywords, author, license, and development dependencies.
  • packages/web/scripts/build.mjs
    • Added a new build script for the @locanara/web package, which compiles TypeScript, bundles with esbuild, and minifies the output using Terser.
  • packages/web/src/Errors.ts
    • Added a new TypeScript file defining custom error codes and a LocanaraError class for the web SDK, including static helper methods for common error scenarios.
  • packages/web/src/Locanara.ts
    • Added a new TypeScript file implementing the core Locanara class for the web SDK, providing a unified interface to Chrome Built-in AI features.
    • Implemented methods for getDeviceCapability, summarize, summarizeStreaming, translate, translateStreaming, chat, chatStreaming, resetChat, rewrite, rewriteStreaming, classify, extract, proofread, describeImage, detectLanguage, write, writeStreaming, and utility functions for mapping options and monitoring download progress.
    • Included destroy method for cleanup and no-op model management functions.
  • packages/web/src/Types.ts
    • Added a new TypeScript file defining all types and enums for the Locanara Web SDK, including common types, web-specific enums for AI features, device capability interfaces, and Chrome Built-in AI API type declarations.
  • packages/web/src/index.ts
    • Added a new TypeScript entry point file for the @locanara/web package, exporting the main Locanara class, LocanaraError, and all defined types.
  • packages/web/tests/Locanara.test.ts
    • Added a new TypeScript file containing unit tests for the Locanara Web SDK, including tests for singleton instance, device capability, summarize, translate, chat, rewrite, classify, detect language, and error handling.
  • packages/web/tsconfig.json
    • Added a new TypeScript configuration file for the packages/web directory, specifying compiler options for target, modules, strictness, declaration generation, output directory, and file inclusions/exclusions.
  • packages/web/vite.config.ts
    • Added a new Vite configuration file for the packages/web example application, defining the root, server port, build output, and alias for the SDK.
  • packages/web/vitest.config.ts
    • Added a new Vitest configuration file for the packages/web package, specifying test environment, include patterns, coverage reporters, and an alias for the SDK.
Ignored Files
  • Ignored by pattern: .github/workflows/** (3)
    • .github/workflows/ci-expo.yml
    • .github/workflows/ci-react-native.yml
    • .github/workflows/ci-web.yml
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder at the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces web platform support for the expo-ondevice-ai module, leveraging Chrome Built-in AI (Gemini Nano) APIs. Key changes include updating documentation across 09-expo-ondevice-ai.md, 09-platform-differences.md, and libraries/expo-ondevice-ai/README.md to reflect web compatibility, requirements, and feature availability. A new web-specific module, ExpoOndeviceAiModule.web.ts, was added to bridge the Expo API to Chrome's Summarizer, LanguageModel, Translator, Rewriter, and Writer APIs, including handling streaming and availability checks. The example application (libraries/expo-ondevice-ai/example/) was updated to correctly display platform information and AI status for web, and its bun.lock file saw numerous dependency updates. Additionally, a new standalone @locanara/web package was introduced, complete with its own README.md, biome.json configuration, a comprehensive example application (packages/web/example/index.html, packages/web/example/main.ts), build scripts, error handling (Errors.ts), and type definitions (Types.ts).

Review comments highlighted several areas for improvement: the platform property in AppState.tsx and ExpoOndeviceAiModule.web.ts was incorrectly typed or hardcoded to 'IOS' for web, requiring correction to 'WEB'. Multiple instances of direct user input concatenation into LLM prompts in classify, extract, and proofread methods were identified as prompt injection vulnerabilities in both ExpoOndeviceAiModule.web.ts and the standalone web SDK, with suggestions to use delimiters. Hardcoded confidence scores and generic entity types in classify and extract were noted as potential inaccuracies or limitations. Minor formatting issues like unnecessary blank lines in markdown files and a missing language identifier in a code snippet were also pointed out.

Copy link
Copy Markdown

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

🧹 Nitpick comments (14)
libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ChatDemo/index.tsx (1)

311-311: Prefer casting the object rather than the value to avoid any in strict mode.

'none' as any leaks any into the style object. Since outlineStyle is a valid React Native Web property that simply isn't in the RN TextStyle types, a narrower object-level assertion is cleaner and keeps the any footprint minimal:

♻️ Proposed refactor

```diff
-    ...(Platform.OS === 'web' ? {outlineStyle: 'none' as any} : {}),
+    ...(Platform.OS === 'web' ? ({outlineStyle: 'none'} as object) : {}),
```

As per coding guidelines, **/*.ts?(x) files should use strict TypeScript; value-level any casts undermine that.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ChatDemo/index.tsx`
at line 311, The inline value-level cast "'none' as any" in the spread
expression for the Platform.OS === 'web' branch should be replaced with an
object-level, narrow cast to avoid leaking any; update the expression
"(Platform.OS === 'web' ? {outlineStyle: 'none' as any} : {})" to use a single
object cast such as "(Platform.OS === 'web' ? ({ outlineStyle: 'none' } as
Partial<TextStyle>) : {})" (or "as unknown as TextStyle" if TextStyle isn't
imported) so the any is scoped to the whole object and not the string value.
packages/web/biome.json (1)

14-14: noNonNullAssertion: "off" silences all non-null assertions project-wide.

Disabling this rule globally can mask null-safety issues. Consider keeping it at "warn" to maintain visibility of risky ! assertions without blocking the build.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/web/biome.json` at line 14, The project-level Biome config currently
sets "noNonNullAssertion": "off" which mutes all non-null assertion warnings;
change the "noNonNullAssertion" setting in the biome.json config from "off" to
"warn" so non-null assertions (`!`) continue to surface as warnings (not errors)
across the codebase, preserving visibility without breaking builds.
libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ExtractDemo.tsx (1)

29-29: Consider adding organization to the entityTypes option array on Line 47.

The new color entry for organization is unused unless the model spontaneously emits that entity type, since entityTypes on Line 47 doesn't include 'organization'. Either add it to the extraction options or leave a comment explaining that this is intentional defensive coloring.

💡 Option: include `organization` in the requested entity types

```diff
- const options = {entityTypes: ['person', 'email', 'phone', 'date', 'location'], extractKeyValues: true};
+ const options = {entityTypes: ['person', 'email', 'phone', 'date', 'location', 'organization'], extractKeyValues: true};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ExtractDemo.tsx`
at line 29, The color map adds an "organization" entry but the extraction
request's entityTypes array (the entityTypes option in the ExtractDemo
component) doesn't include 'organization', so the color is never used; update
the entityTypes option array to include 'organization' (i.e., add 'organization'
to the array referenced by the entityTypes variable/prop in ExtractDemo) or, if
intentional, add a one-line comment next to the entityTypes declaration
explaining this is defensive and that 'organization' is included only for
potential spontaneous model outputs.
libraries/expo-ondevice-ai/example/components/AppState.tsx (1)

244-255: platformLabel falls through to 'Android' for any unrecognized platform.

Platform.OS can also be 'macos' or 'windows' in some Expo environments. The current else-branch silently displays 'Android' in those cases. Low risk for an example app but worth tightening.

💡 Optional: handle all known Platform.OS values

```diff
- const platformLabel =
-   Platform.OS === 'ios'
-     ? 'iOS'
-     : Platform.OS === 'web'
-       ? 'Web'
-       : 'Android';
+ const platformLabel =
+   Platform.OS === 'ios'
+     ? 'iOS'
+     : Platform.OS === 'web'
+       ? 'Web'
+       : Platform.OS === 'macos'
+         ? 'macOS'
+         : 'Android';
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/expo-ondevice-ai/example/components/AppState.tsx` around lines 244
- 255, platformLabel currently falls through to 'Android' for any unrecognized
Platform.OS; update the logic in the component that computes platformLabel (and
related providerLabel) to explicitly handle known values ('ios', 'android',
'web', 'macos', 'windows') instead of using a final else that assumes
Android—e.g., replace the nested ternary with a switch or explicit if/else
branches that return 'iOS', 'Android', 'Web', 'macOS', 'Windows' (or a clear
'Unknown' fallback) and adjust providerLabel to reference cap.platform
appropriately (checking 'IOS' vs other providers) so macOS/windows aren't
mislabeled as Android.
.github/workflows/ci-web.yml (1)

26-32: Consider enabling npm caching via actions/setup-node.

The setup-node action supports built-in caching which would speed up subsequent runs:

Proposed change

```diff
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
+         cache: 'npm'
+         cache-dependency-path: packages/web/package-lock.json
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-web.yml around lines 26 - 32, Enable npm caching in the
GitHub Actions setup-node step to speed CI: update the existing
actions/setup-node@v4 step (the "Setup Node.js" job) to enable caching by adding
the cache: 'npm' option and set cache-dependency-path to your lockfile (e.g.,
package-lock.json or package.json) so subsequent runs restore node_modules
automatically; keep the "Install dependencies" step (npm install) but rely on
the setup-node cache settings to populate the cache.
.github/workflows/ci-react-native.yml (1)

65-147: Same needs and caching concerns as ci-expo.yml.

The build-android and build-ios jobs here share the same pattern: no needs: lint-and-test dependency and no Bun dependency caching. Consider applying the same improvements suggested for ci-expo.yml.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-react-native.yml around lines 65 - 147, Add the same
safety and caching improvements to the build-android and build-ios jobs: make
both jobs depend on the lint-and-test job by adding needs: lint-and-test to the
build-android and build-ios job definitions (so they don’t run if tests/lint
fail), and add Bun dependency caching around the Bun install steps (use
actions/cache keyed on libraries/react-native-ondevice-ai and the example’s
lockfile or bun.lockb) similar to the changes suggested for ci-expo.yml; update
the Install library dependencies and Install example dependencies steps in both
jobs to restore/save the Bun cache before/after bun install.
libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts (2)

162-181: classify and extract create a new LanguageModel session per call without caching.

Unlike chat, these methods call lm.create({}) on every invocation. Session creation can be slow and resource-intensive. Consider caching a plain (no system-prompt) session for reuse across classify/extract calls, or document why per-call creation is preferred.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 162
- 181, The classify (and similarly extract) method creates a fresh LanguageModel
session via lm.create({}) on every call which is wasteful; modify
ExpoOndeviceAiModule.web to reuse a single plain session: add a module-scoped
cachedSession initialized lazily from getLanguageModelAPI()/lm.create({}) and
have classify and extract use cachedSession.prompt(...) (destroy only when
module unloads or via an explicit shutdown), or alternatively add a clear
comment explaining why per-call creation is required; references: classify,
extract, getLanguageModelAPI, lm.create, session, and session.destroy.

27-33: No mechanism to reset/destroy cached API instances.

Module-level cached instances (cachedSummarizer, cachedLanguageModel, cachedTranslators, etc.) persist for the lifetime of the page with no public API to release them. This could lead to stale sessions or memory pressure in long-lived SPAs. Consider exposing a destroy() or reset() method on the module.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 27 -
33, Add a public reset/destroy function (e.g., export function
resetCachedInstances() or destroyCachedInstances()) that releases and clears the
module-level cached instances: for cachedSummarizer, cachedLanguageModel,
cachedRewriter, cachedWriter call any available cleanup method on the instance
(close(), dispose(), destroy(), or similar) if present, then set them to null;
for cachedTranslators iterate the Map, call each translator's cleanup method if
present, then clear() the Map and reset cachedSummarizerKey to ''. Export this
function from ExpoOndeviceAiModule.web.ts and ensure callers can invoke it to
free resources in long‑lived SPA contexts.
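A possible shape for that cleanup entry point is sketched here. The cached-instance names mirror those quoted in the review; probing `destroy()`/`close()`/`dispose()` is an assumption, since the Chrome Built-in AI surfaces do not all expose the same cleanup method.

```typescript
// Sketch of the suggested resetCachedInstances(); instance shapes are assumed.
type Destroyable = { destroy?(): void; close?(): void; dispose?(): void };

let cachedSummarizer: Destroyable | null = null;
let cachedLanguageModel: Destroyable | null = null;
let cachedSummarizerKey = '';
const cachedTranslators = new Map<string, Destroyable>();

function release(instance: Destroyable | null | undefined): void {
  if (!instance) return;
  // Call whichever cleanup method the instance provides, if any.
  const cleanup = instance.destroy ?? instance.close ?? instance.dispose;
  cleanup?.call(instance);
}

function resetCachedInstances(): void {
  release(cachedSummarizer);
  release(cachedLanguageModel);
  for (const translator of cachedTranslators.values()) release(translator);
  cachedTranslators.clear();
  cachedSummarizer = null;
  cachedLanguageModel = null;
  cachedSummarizerKey = '';
}
```

In the module itself this function would be exported so SPA callers can invoke it on teardown.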
.github/workflows/ci-expo.yml (1)

60-170: Consider adding needs: lint-and-test and dependency caching to build jobs.

The three new build jobs (build-android, build-ios, build-web) run in parallel with lint-and-test with no dependency. This means expensive builds (especially build-ios on macOS runners) will consume CI minutes even when lint or tests fail. Additionally, none of these jobs cache Bun dependencies, unlike the lint-and-test job.

Suggested improvements

Add needs and caching to each build job. Example for build-android:

  build-android:
    name: Build Android
    runs-on: ubuntu-latest
+   needs: lint-and-test

    steps:
      - uses: actions/checkout@v4

      - name: Setup Java
        uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
        with:
          bun-version: '1.1.38'

+     - name: Cache Bun dependencies
+       uses: actions/cache@v4
+       with:
+         path: ~/.bun/install/cache
+         key: ${{ runner.os }}-bun-${{ hashFiles('libraries/expo-ondevice-ai/bun.lock') }}
+         restore-keys: |
+           ${{ runner.os }}-bun-

Apply the same pattern to build-ios and build-web.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-expo.yml around lines 60 - 170, The build jobs
(build-android, build-ios, build-web) should depend on lint-and-test and reuse
dependency caches: add needs: lint-and-test to each job and insert
actions/cache@v4 steps before Bun install to cache Bun deps for both
libraries/expo-ondevice-ai and libraries/expo-ondevice-ai/example (use keys that
include hashFiles('libraries/expo-ondevice-ai/bun.lockb') and
hashFiles('libraries/expo-ondevice-ai/example/bun.lockb') or similar), restoring
before running bun install; leave the existing CocoaPods cache step in build-ios
as-is and ensure the Bun cache runs before its bun install steps in that job as
well.
packages/web/src/Locanara.ts (1)

892-896: Non-null assertion on canvas.getContext("2d") could fail.

getContext("2d") returns null if another context type (e.g., WebGL) was already obtained on the same canvas. Since this canvas is freshly created, it's effectively impossible here, but a guard would be more defensive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/web/src/Locanara.ts` around lines 892 - 896, The code uses a
non-null assertion for canvas.getContext("2d") which could return null; update
the creation of ctx to defensively handle that case by checking the return value
of canvas.getContext("2d") (the ctx variable) and either throwing a clear error
or returning early if ctx is null, so downstream code using ctx (created from
the canvas variable) never assumes non-null; locate the canvas,
canvas.width/height and ctx assignment in the same block and add the null-check
and proper error handling.
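The defensive guard could look like this. `CanvasLike` is a structural stand-in for `HTMLCanvasElement` so the sketch stays environment-agnostic; the error wording is an assumption.

```typescript
// Guarded variant of the non-null-asserted getContext("2d") pattern.
interface CanvasLike<C> {
  getContext(contextId: '2d'): C | null;
}

function get2dContext<C>(canvas: CanvasLike<C>): C {
  const ctx = canvas.getContext('2d');
  if (ctx === null) {
    // Surface a clear error instead of letting a null ctx reach callers.
    throw new Error('2D canvas context unavailable');
  }
  return ctx;
}
```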
packages/web/example/main.ts (2)

447-457: replace('_', '') only removes the first underscore — fragile for future feature types.

This currently works because only DESCRIBE_IMAGE contains an underscore, but any future enum value with multiple underscores (e.g., DESCRIBE_IMAGE_V2) would break.

Suggested fix
-      const featureId = f.feature.toLowerCase().replace('_', '')
+      const featureId = f.feature.toLowerCase().replaceAll('_', '')
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/web/example/main.ts` around lines 447 - 457, The code that builds
featureId in the loop over capability.availableFeatures uses
feature.toLowerCase().replace('_', '') which only removes the first underscore;
update the featureId generation (in the for loop that defines featureId from
f.feature) to remove all underscores (e.g., use a global replace or replaceAll)
so multi-underscore enum values like DESCRIBE_IMAGE_V2 are normalized correctly
while preserving the existing toLowerCase and subsequent assignments to
featureAvailability, hasAnyAvailable, and hasDownloadable.

120-151: markdownToHtml re-introduces HTML tags after escaping — XSS-safe but fragile.

The escapeHtml → regex pipeline is currently safe since $1 captures are already escaped. However, this hand-rolled markdown renderer is a fragile pattern. For an example app the risk is acceptable, but note that if any regex is later modified to capture unescaped content, it would open an XSS vector. The static analysis warnings for innerHTML at Line 160 stem from this pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/web/example/main.ts` around lines 120 - 151, The markdownToHtml
function currently escapes input via escapeHtml then reinserts HTML via regex
replacements which is fragile and can reintroduce XSS when patterns change; fix
by either (A) replacing this hand-rolled logic with a vetted markdown-to-HTML
library and then sanitize the resulting HTML (e.g., run through DOMPurify)
before assigning to innerHTML, or (B) rewrite markdownToHtml so it parses tokens
first, escapes only plain text nodes with escapeHtml and then constructs HTML
nodes (never performing regex replacements that inject raw capture groups),
referencing the markdownToHtml and escapeHtml functions and the site code path
that sets innerHTML. Ensure the final HTML is sanitized before any innerHTML
assignment.
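One shape option (B) could take is sketched below: split into tokens first and escape only the plain text, so no regex ever re-injects raw capture groups. The single `**bold**` rule is illustrative, not the example app's actual grammar.

```typescript
// Illustrative token-first renderer: plain text is escaped exactly once,
// and the only HTML emitted comes from the renderer itself.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function markdownBoldToHtml(src: string): string {
  const out: string[] = [];
  // Even-indexed parts are plain text; odd-indexed parts sit inside ** pairs.
  // (Unbalanced markers are treated as opening a span; a real renderer
  // would handle that case explicitly.)
  src.split('**').forEach((part, i) => {
    const escaped = escapeHtml(part);
    out.push(i % 2 === 1 ? `<strong>${escaped}</strong>` : escaped);
  });
  return out.join('');
}
```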
packages/web/tests/Locanara.test.ts (1)

37-52: Mock promptStreaming never exercises the streaming path.

The mock at Lines 39-47 returns a ReadableStream-like object, but no test actually calls chatStreaming to verify the streaming chunking logic. Since the PR summary highlights "cumulative/delta auto-detection" as a key behavior, this would benefit from a dedicated streaming test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/web/tests/Locanara.test.ts` around lines 37 - 52, The
mockLanguageModelSession.promptStreaming currently returns a reader but no test
exercises chatStreaming; add a new unit test that uses the existing
mockLanguageModelSession (or adjust it to expose a true ReadableStream behavior)
and calls the chatStreaming function exported/used by the component to consume
the stream, reading chunks from getReader().read() and asserting the component's
cumulative vs delta auto-detection and chunk-assembly logic; specifically
reference mockLanguageModelSession.promptStreaming and the chatStreaming helper
(or method) in your test, drive the mocked reads (chunk1, chunk2, done) and
assert the final assembled message and detection flags to cover the streaming
path.
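A self-contained version of that test could look like the following. `consumeStream` stands in for the SDK's chatStreaming consumer; the cumulative-vs-delta detection it performs is an assumption matching the behavior the PR describes.

```typescript
// Sketch of a streaming test: a mocked reader drives chunk1, chunk2, done,
// and the consumer must assemble the same message for both chunk styles.
interface ReaderLike {
  read(): Promise<{ done: boolean; value?: string }>;
}

function makeReader(chunks: string[]): ReaderLike {
  let i = 0;
  return {
    read: async () =>
      i < chunks.length ? { done: false, value: chunks[i++] } : { done: true },
  };
}

async function consumeStream(reader: ReaderLike): Promise<string> {
  let assembled = '';
  for (;;) {
    const { done, value } = await reader.read();
    if (done || value === undefined) break;
    // Cumulative chunks repeat everything emitted so far; otherwise append.
    assembled = value.startsWith(assembled) ? value : assembled + value;
  }
  return assembled;
}
```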
packages/web/src/Types.ts (1)

33-85: Several "common" enums appear unused in the web SDK.

SummarizeInputType, SummarizeOutputType, RewriteOutputType, ProofreadInputType, CapabilityLevel, FeatureStatus, and ExecutionState are declared as "common — from type.graphql" but don't appear to be referenced by the web SDK's Locanara.ts implementation. They may exist for API parity with the native libraries — if so, consider adding a brief comment to that effect to clarify intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/web/src/Types.ts` around lines 33 - 85, These enums
(SummarizeInputType, SummarizeOutputType, RewriteOutputType, ProofreadInputType,
CapabilityLevel, FeatureStatus, ExecutionState) are unused in the web SDK but
mirror types from common/type.graphql; update packages/web/src/Types.ts to add a
brief top-of-file or per-enum comment indicating they are intentionally retained
for API parity with native libraries (or centralized schema) so future
maintainers know they are kept on purpose rather than dead code, referencing the
enum names above to document intent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/guides/09-expo-ondevice-ai.md:
- Line 7: Update the incorrect feature count text "all 7 AI features" to "all 8
AI features" so it matches the mapping table and PR description; locate the
phrase "all 7 AI features" in the Expo module summary and change it to "all 8 AI
features", and double-check the mapping table rows (summarize, classify,
extract, chat, chatStream, translate, rewrite, proofread) remain unchanged to
ensure consistency.

In @.claude/guides/09-platform-differences.md:
- Around line 260-265: The docs have inconsistent Chrome minimums; standardize
to a single minimum (use Chrome 140 as the canonical minimum) by updating the
"### Web" requirements table entry that currently lists "Chrome 138" to "Chrome
140" and updating any other occurrences that say "Chrome 138+" (e.g., the other
mentions around the Web section) so they match the example app's "Chrome 140+"
string; ensure the table's Minimum column and all example/setup text use the
exact same phrasing "Chrome 140" or "Chrome 140+" consistently.

In `@libraries/expo-ondevice-ai/example/components/shared/AIStatusBanner.tsx`:
- Around line 53-57: The loading title currently uses Platform.OS === 'web' to
choose between web and Apple wording inside the AIStatusBanner component; update
that conditional to distinguish iOS from other native platforms so Android
doesn't show "Checking Apple Intelligence...". Replace the ternary so that
Platform.OS === 'web' keeps the web label, Platform.OS === 'ios' shows "Checking
Apple Intelligence...", and the fallback (else) shows an Android-appropriate
label such as "Checking Android Intelligence..." (locate the Text node rendering
styles.title in AIStatusBanner to change the conditional).

In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 235-254: The cachedLanguageModel is reused across chats so
subsequent calls ignore a new options.systemPrompt; update chat (and chatStream)
to either recreate the model when a different systemPrompt is provided or always
create a fresh session like classify/extract do—specifically, track the
lastSystemPrompt (or compare options.systemPrompt) before reusing
cachedLanguageModel and if it differs (including one is undefined and the other
not) invalidate/replace cachedLanguageModel by calling lm.create({
initialPrompts: ... }) so each call honors the provided systemPrompt; apply the
same logic to the chatStream implementation to prevent silent ignoring of
changed system prompts.
- Around line 60-78: The catch in checkAvailability currently falls back to
hasAPI(api) which treats timeouts/errors as "available"; change the catch to
mark the API unavailable instead (return false) so an availability() rejection
or timeout won't report a feature as supported, and if you need a configurable
lenient behavior add an explicit option/flag to checkAvailability to
conditionally fall back to hasAPI(api) rather than doing so unconditionally.
- Around line 108-122: The returned capability object in
ExpoOndeviceAiModule.web.ts is incorrectly hardcoding platform: 'IOS' — update
the DeviceCapability type (the platform union in the shared ./types
DeviceCapability) to include 'WEB' and change the return to use platform: 'WEB'
in the capability object (the function returning { isSupported, isModelReady,
platform, features }). If you cannot change the shared type, use a clearly named
fallback like 'WEB' via a local cast to satisfy TypeScript (or add 'WEB' to the
DeviceCapability union) so consumers reading capability.platform from
ExpoOndeviceAiModule.web.ts see the correct 'WEB' value instead of 'IOS'.

In `@packages/web/example/index.html`:
- Around line 924-926: Update the Chrome minimum version text in the setup step
to match the other docs: change the content of the setup-step-title div that
currently reads "1. Use Chrome 140+ (Latest Stable recommended)" to use the same
minimum as the other docs (e.g., "Chrome 138+"), and ensure any linked or
adjacent copy (the setup-step-desc or similar nearby text) does not contradict
this; make the change in the element with class "setup-step-title" so it matches
09-platform-differences.md and the Expo guide.
- Around line 980-996: Update the external GitHub anchor link href to the
correct organization: locate the <a> element that wraps the settings-item (the
anchor with target="_blank" rel="noopener noreferrer" and the GitHub icon) and
change its href from "https://github.com/locanara/locanara" to
"https://github.com/hyodotdev/locanara".

In `@packages/web/example/main.ts`:
- Line 959: The catch block currently injects (error as Error).message directly
into innerHTML; change it to escape the message before inserting by calling the
existing escapeHtml function (or use element.textContent) so no unescaped HTML
is written; locate the catch handling where (error as Error).message is used and
replace the direct innerHTML assignment with one that wraps the message with
escapeHtml(error.message) (or assigns to textContent) and keep the surrounding
error formatting/logging unchanged.
- Around line 809-822: The copyToClipboard function calls
navigator.clipboard.writeText without handling rejections, which may cause
unhandled promise rejections; update copyToClipboard so the writeText promise
has a .catch handler (or use async/await with try/catch) that logs the error and
provides user feedback on failure, and ensure the UI state changes (the green
"Copied!" swap performed in copyToClipboard) only occur after a successful
write; optionally implement a fallback (e.g., document.execCommand or selecting
a hidden textarea) inside the catch to attempt copying in insecure contexts and
restore element styles/text on failure.
- Around line 660-674: The example breaks because locanara.chatStreaming yields
cumulative strings from promptStreaming in some Chrome versions; update the SDK
by normalizing inside Locanara.chatStreaming: keep a per-stream previousText
buffer, and for each incoming chunk compare it to previousText—if the chunk
startsWith previousText yield chunk.slice(previousText.length) (the delta) and
set previousText = chunk; if it doesn't (e.g., shorter or unrelated), treat it
as a reset and yield the full chunk and update previousText; ensure this logic
runs before yielding any chunk to callers so consumers (and the example's
response += chunk) receive consistent delta-only chunks.

In `@packages/web/package.json`:
- Around line 18-21: The package.json publishConfig currently sets "registry" to
"https://registry.npmjs.org/" which conflicts with the README .npmrc
instructions that point `@locanara` to GitHub Packages; update
publishConfig.registry to "https://npm.pkg.github.com" to match the README (or
alternatively update the README .npmrc section to document npmjs.org instead)
and ensure the publishConfig block and README .npmrc guidance are consistent;
check the "publishConfig" object in package.json and the README .npmrc section
and align them to the chosen registry.
- Line 2: The package.json "name" field is incorrect ("locanara") and must be
changed to the scoped package name used across the repo; update the "name" value
to "@locanara/web" so imports, Vite alias, and README install instructions
resolve correctly (replace the existing "locanara" string in package.json with
"@locanara/web").

In `@packages/web/README.md`:
- Line 7: Update the Chrome requirement in the README where the line reads
"**Chrome 131+** with Built-in AI enabled" so it matches the PR test plan
(change to "**Chrome 138+** with Built-in AI enabled"); ensure the same version
string is used wherever the README references Chrome minimum version to keep
documentation consistent.
- Line 41: Update the inline comment for the call to
window.LanguageModel?.availability() to reflect the correct expected return
values per current Chrome docs; replace the incorrect "'readily'" with
"'available'" (or mention the valid set: "unavailable", "downloadable",
"downloading", or "available") so the comment accurately describes
LanguageModel.availability().

In `@packages/web/scripts/build.mjs`:
- Around line 28-43: The minify call can return a result with result.code ===
undefined which would write the literal "undefined" to dist/index.js; update the
build step to guard the minify result (the result variable from minify) before
calling fs.writeFileSync: check that result and result.code are defined, and if
not, surface a clear error (throw or processLogger.error + process.exit(1)) or
fall back to the original code string, then only call
fs.writeFileSync("dist/index.js", result.code) when the code is valid; reference
the minify invocation, the result variable, and the fs.writeFileSync call when
locating where to add the guard.

In `@packages/web/src/Errors.ts`:
- Around line 15-24: Add the "use strict"; directive as the very first statement
in the file (above the LocanaraError class) to enable strict mode; update the
top of the file so the string literal "use strict"; appears before the
declaration of class LocanaraError (and before any imports or other code), and
do not add the Object.setPrototypeOf workaround because tsconfig targets ES2022
which handles Error subclassing correctly; keep the existing constructor,
this.name, this.code, and this.details assignments unchanged.

In `@packages/web/src/Locanara.ts`:
- Around line 488-523: The chat session keeps reusing the first-created session
stored in this._languageModel, so subsequent calls to chat() or chatStreaming()
ignore new options (systemPrompt, temperature, topK); update the logic in
chat()/chatStreaming() to detect when the incoming options differ from the
current session (compare systemPrompt, temperature, topK and initialPrompts) and
recreate the session via lmAPI.create(...) (or call resetChat() then recreate)
when they change, ensuring you pass the new initialPrompts and monitor from
createMonitor(); alternatively, explicitly document that options only apply on
first creation and require callers to invoke resetChat() to apply new options.
- Around line 556-572: The chat streaming loop currently yields raw chunks from
this._languageModel.promptStreaming(message) which can be cumulative on some
Chrome versions; modify the reader loop in the chatStreaming method to normalize
to deltas by tracking the last emitted string (e.g., lastEmitted), and for each
chunk: if it's a string and startsWith(lastEmitted) emit only the suffix (update
lastEmitted), if it's a string but not a prefix (or shorter) treat it as a full
new delta and emit + set lastEmitted to it, and if the chunk is non-string
(object/bytes) yield it unchanged; keep reader.releaseLock() in the finally
block and ensure empty or duplicate suffixes are not yielded.

In `@packages/web/vite.config.ts`:
- Around line 14-16: The alias currently builds a path using new
URL('./src/index.ts', import.meta.url).pathname which yields an invalid Windows
path; replace that usage by converting the file URL to a filesystem path via
fileURLToPath(import.meta.url) (i.e., import { fileURLToPath } from 'url') and
use fileURLToPath(new URL('./src/index.ts', import.meta.url)) for the
'@locanara/web' alias so Vite receives a proper platform-native path; update the
import list and the alias expression in vite.config.ts accordingly.
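The delta normalization requested in the chatStreaming comments above can be sketched as a small async generator. The name and the string-only assumption are illustrative, not the SDK's actual implementation.

```typescript
// Illustrative normalizer: wraps a raw async iterable of chunks and always
// yields deltas, whether the source emits cumulative or delta strings.
async function* normalizeToDeltas(
  source: AsyncIterable<string>,
): AsyncGenerator<string> {
  let previousText = '';
  for await (const chunk of source) {
    if (chunk.startsWith(previousText)) {
      const delta = chunk.slice(previousText.length);
      previousText = chunk;
      if (delta.length > 0) yield delta; // skip empty suffixes
    } else {
      // Shorter or unrelated chunk: treat as a reset and emit it whole.
      previousText = chunk;
      yield chunk;
    }
  }
}
```

Running this before yielding to callers means consumers such as the example's `response += chunk` always receive delta-only chunks.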

---

Nitpick comments:
In @.github/workflows/ci-expo.yml:
- Around line 60-170: The build jobs (build-android, build-ios, build-web)
should depend on lint-and-test and reuse dependency caches: add needs:
lint-and-test to each job and insert actions/cache@v4 steps before Bun install
to cache Bun deps for both libraries/expo-ondevice-ai and
libraries/expo-ondevice-ai/example (use keys that include
hashFiles('libraries/expo-ondevice-ai/bun.lockb') and
hashFiles('libraries/expo-ondevice-ai/example/bun.lockb') or similar), restoring
before running bun install; leave the existing CocoaPods cache step in build-ios
as-is and ensure the Bun cache runs before its bun install steps in that job as
well.

In @.github/workflows/ci-react-native.yml:
- Around line 65-147: Add the same safety and caching improvements to the
build-android and build-ios jobs: make both jobs depend on the lint-and-test job
by adding needs: lint-and-test to the build-android and build-ios job
definitions (so they don’t run if tests/lint fail), and add Bun dependency
caching around the Bun install steps (use actions/cache keyed on
libraries/react-native-ondevice-ai and the example’s lockfile or bun.lockb)
similar to the changes suggested for ci-expo.yml; update the Install library
dependencies and Install example dependencies steps in both jobs to restore/save
the Bun cache before/after bun install.

In @.github/workflows/ci-web.yml:
- Around line 26-32: Enable npm caching in the GitHub Actions setup-node step to
speed CI: update the existing actions/setup-node@v4 step (the "Setup Node.js"
job) to enable caching by adding the cache: 'npm' option and set
cache-dependency-path to your lockfile (e.g., package-lock.json or package.json)
so subsequent runs restore node_modules automatically; keep the "Install
dependencies" step (npm install) but rely on the setup-node cache settings to
populate the cache.

In `@libraries/expo-ondevice-ai/example/components/AppState.tsx`:
- Around line 244-255: platformLabel currently falls through to 'Android' for
any unrecognized Platform.OS; update the logic in the component that computes
platformLabel (and related providerLabel) to explicitly handle known values
('ios', 'android', 'web', 'macos', 'windows') instead of using a final else that
assumes Android—e.g., replace the nested ternary with a switch or explicit
if/else branches that return 'iOS', 'Android', 'Web', 'macOS', 'Windows' (or a
clear 'Unknown' fallback) and adjust providerLabel to reference cap.platform
appropriately (checking 'IOS' vs other providers) so macOS/windows aren't
mislabeled as Android.

In
`@libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ChatDemo/index.tsx`:
- Line 311: The inline value-level cast "'none' as any" in the spread expression
for the Platform.OS === 'web' branch should be replaced with an object-level,
narrow cast to avoid leaking any; update the expression "(Platform.OS === 'web'
? {outlineStyle: 'none' as any} : {})" to use a single object cast such as
"(Platform.OS === 'web' ? ({ outlineStyle: 'none' } as Partial<TextStyle>) :
{})" (or "as unknown as TextStyle" if TextStyle isn't imported) so the any is
scoped to the whole object and not the string value.

In
`@libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ExtractDemo.tsx`:
- Line 29: The color map adds an "organization" entry but the extraction
request's entityTypes array (the entityTypes option in the ExtractDemo
component) doesn't include 'organization', so the color is never used; update
the entityTypes option array to include 'organization' (i.e., add 'organization'
to the array referenced by the entityTypes variable/prop in ExtractDemo) or, if
intentional, add a one-line comment next to the entityTypes declaration
explaining this is defensive and that 'organization' is included only for
potential spontaneous model outputs.

In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 162-181: The classify (and similarly extract) method creates a
fresh LanguageModel session via lm.create({}) on every call which is wasteful;
modify ExpoOndeviceAiModule.web to reuse a single plain session: add a
module-scoped cachedSession initialized lazily from
getLanguageModelAPI()/lm.create({}) and have classify and extract use
cachedSession.prompt(...) (destroy only when module unloads or via an explicit
shutdown), or alternatively add a clear comment explaining why per-call creation
is required; references: classify, extract, getLanguageModelAPI, lm.create,
session, and session.destroy.
- Around line 27-33: Add a public reset/destroy function (e.g., export function
resetCachedInstances() or destroyCachedInstances()) that releases and clears the
module-level cached instances: for cachedSummarizer, cachedLanguageModel,
cachedRewriter, cachedWriter call any available cleanup method on the instance
(close(), dispose(), destroy(), or similar) if present, then set them to null;
for cachedTranslators iterate the Map, call each translator's cleanup method if
present, then clear() the Map and reset cachedSummarizerKey to ''. Export this
function from ExpoOndeviceAiModule.web.ts and ensure callers can invoke it to
free resources in long‑lived SPA contexts.

In `@packages/web/biome.json`:
- Line 14: The project-level Biome config currently sets "noNonNullAssertion":
"off" which mutes all non-null assertion warnings; change the
"noNonNullAssertion" setting in the biome.json config from "off" to "warn" so
non-null assertions (`!`) continue to surface as warnings (not errors) across
the codebase, preserving visibility without breaking builds.

In `@packages/web/example/main.ts`:
- Around line 447-457: The code that builds featureId in the loop over
capability.availableFeatures uses feature.toLowerCase().replace('_', '') which
only removes the first underscore; update the featureId generation (in the for
loop that defines featureId from f.feature) to remove all underscores (e.g., use
a global replace or replaceAll) so multi-underscore enum values like
DESCRIBE_IMAGE_V2 are normalized correctly while preserving the existing
toLowerCase and subsequent assignments to featureAvailability, hasAnyAvailable,
and hasDownloadable.
- Around line 120-151: The markdownToHtml function currently escapes input via
escapeHtml then reinserts HTML via regex replacements which is fragile and can
reintroduce XSS when patterns change; fix by either (A) replacing this
hand-rolled logic with a vetted markdown-to-HTML library and then sanitize the
resulting HTML (e.g., run through DOMPurify) before assigning to innerHTML, or
(B) rewrite markdownToHtml so it parses tokens first, escapes only plain text
nodes with escapeHtml and then constructs HTML nodes (never performing regex
replacements that inject raw capture groups), referencing the markdownToHtml and
escapeHtml functions and the site code path that sets innerHTML. Ensure the
final HTML is sanitized before any innerHTML assignment.

In `@packages/web/src/Locanara.ts`:
- Around line 892-896: The code uses a non-null assertion for
canvas.getContext("2d") which could return null; update the creation of ctx to
defensively handle that case by checking the return value of
canvas.getContext("2d") (the ctx variable) and either throwing a clear error or
returning early if ctx is null, so downstream code using ctx (created from the
canvas variable) never assumes non-null; locate the canvas, canvas.width/height
and ctx assignment in the same block and add the null-check and proper error
handling.

In `@packages/web/src/Types.ts`:
- Around line 33-85: These enums (SummarizeInputType, SummarizeOutputType,
RewriteOutputType, ProofreadInputType, CapabilityLevel, FeatureStatus,
ExecutionState) are unused in the web SDK but mirror types from
common/type.graphql; update packages/web/src/Types.ts to add a brief top-of-file
or per-enum comment indicating they are intentionally retained for API parity
with native libraries (or centralized schema) so future maintainers know they
are kept on purpose rather than dead code, referencing the enum names above to
document intent.

In `@packages/web/tests/Locanara.test.ts`:
- Around line 37-52: The mockLanguageModelSession.promptStreaming currently
returns a reader but no test exercises chatStreaming; add a new unit test that
uses the existing mockLanguageModelSession (or adjust it to expose a true
ReadableStream behavior) and calls the chatStreaming function exported/used by
the component to consume the stream, reading chunks from getReader().read() and
asserting the component's cumulative vs delta auto-detection and chunk-assembly
logic; specifically reference mockLanguageModelSession.promptStreaming and the
chatStreaming helper (or method) in your test, drive the mocked reads (chunk1,
chunk2, done) and assert the final assembled message and detection flags to
cover the streaming path.

- Add 'WEB' to Platform type union (both expo and react-native libraries)
- Fix platform: 'IOS' → 'WEB' in web module getDeviceCapability
- Fix systemPrompt being ignored after first chat/chatStream call
- Fix AIStatusBanner loading text for Android ("Gemini Nano" not "Apple Intelligence")
- Fix "7 AI features" → "8 AI features" in Claude guide
- Fix Chrome version inconsistency (140 → 138) in web example

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (4)
packages/web/example/index.html (2)

1075-1078: Redundant gap inline style — already set by .controls.

The .controls rule in the stylesheet already defines gap: 0.5rem; the style="gap: 0.5rem;" on line 1075 is redundant.

🧹 Proposed fix
-        <div class="controls" style="gap: 0.5rem;">
+        <div class="controls">
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/web/example/index.html` around lines 1075 - 1078, Remove the
redundant inline gap from the div element that uses class "controls": edit the
element with class="controls" (the container that currently contains the buttons
with ids "chat-btn" and "chat-reset-btn") and delete the style fragment "gap:
0.5rem;" so the layout relies on the existing .controls CSS rule; leave the
other inline styles (e.g., margin, flex) unchanged.

1001-1003: Duplicate <!-- Summarize --> comment.

Lines 1001 and 1002 both carry the same comment; one should be removed.

🧹 Proposed fix
-    <!-- Summarize -->
     <!-- Summarize -->
     <div class="detail-page" id="page-summarize">
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/web/example/index.html` around lines 1001 - 1003, Remove the
duplicate HTML comment above the detail page: keep a single <!-- Summarize -->
and delete the redundant one so the block containing <div class="detail-page"
id="page-summarize"> only has one comment; update the area around the <div
class="detail-page" id="page-summarize"> to remove the extra comment occurrence.
libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts (2)

240-251: Duplicated systemPrompt cache-invalidation logic in chat and chatStream.

The session creation/invalidation block (lines 240–251) is copy-pasted verbatim into chatStream (lines 274–285). Extracting it into a helper reduces the surface for future divergence.

♻️ Suggested helper extraction
+async function getOrCreateSession(
+  lm: any,
+  options?: ChatOptions,
+): Promise<any> {
+  const newSystemPrompt = options?.systemPrompt;
+  if (!cachedLanguageModel || newSystemPrompt !== cachedSystemPrompt) {
+    cachedLanguageModel?.destroy?.();
+    const initialPrompts: Array<{role: string; content: string}> = [];
+    if (newSystemPrompt) {
+      initialPrompts.push({role: 'system', content: newSystemPrompt});
+    }
+    cachedLanguageModel = await lm.create({
+      initialPrompts: initialPrompts.length > 0 ? initialPrompts : undefined,
+    });
+    cachedSystemPrompt = newSystemPrompt;
+  }
+  return cachedLanguageModel;
+}

Then both methods become:

-  const newSystemPrompt = options?.systemPrompt;
-  if (!cachedLanguageModel || newSystemPrompt !== cachedSystemPrompt) {
-    cachedLanguageModel?.destroy?.();
-    const initialPrompts: Array<{role: string; content: string}> = [];
-    if (newSystemPrompt) {
-      initialPrompts.push({role: 'system', content: newSystemPrompt});
-    }
-    cachedLanguageModel = await lm.create({
-      initialPrompts: initialPrompts.length > 0 ? initialPrompts : undefined,
-    });
-    cachedSystemPrompt = newSystemPrompt;
-  }
+  await getOrCreateSession(lm, options);

Also applies to: 274-285

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 240
- 251, The session creation/invalidation logic for cachedLanguageModel and
cachedSystemPrompt is duplicated in chat and chatStream; extract that block into
a small helper (e.g., ensureCachedLanguageModel or initCachedModel) that accepts
lm and options.systemPrompt (or newSystemPrompt) and returns/updates
cachedLanguageModel and cachedSystemPrompt as needed; replace the duplicated
blocks in chat and chatStream with calls to this helper and ensure it calls
cachedLanguageModel?.destroy?.() and sets cachedSystemPrompt and creates
cachedLanguageModel via lm.create({ initialPrompts }) with the same
initialPrompts logic.

356-368: cachedRewriter is never actually reused — cache key by tone+length like cachedTranslators does.

Lines 357–361 unconditionally destroy and recreate the Rewriter instance on every call, so the cachedRewriter variable only holds the last instance for cleanup, never providing a cache hit. This contrasts with the cachedTranslators map (line 32) which correctly keys instances by sourceLanguage-targetLanguage. Consider the same approach here.

♻️ Proposed refactor
-let cachedRewriter: any = null;
+const cachedRewriters = new Map<string, any>();
   const toneMap: Record<string, string> = { ... };
   const lengthMap: Record<string, string> = { ... };
 
+  const key = `${toneMap[options.outputType] ?? 'as-is'}:${lengthMap[options.outputType] ?? 'as-is'}`;
-  cachedRewriter?.destroy();
-  cachedRewriter = await Rewriter.create({
-    tone: toneMap[options.outputType] ?? 'as-is',
-    length: lengthMap[options.outputType] ?? 'as-is',
-  });
+  if (!cachedRewriters.has(key)) {
+    cachedRewriters.set(key, await Rewriter.create({
+      tone: toneMap[options.outputType] ?? 'as-is',
+      length: lengthMap[options.outputType] ?? 'as-is',
+    }));
+  }
+  const cachedRewriter = cachedRewriters.get(key)!;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 356
- 368, cachedRewriter is being destroyed and recreated on every call so it never
provides cache hits; change this to a keyed cache like cachedTranslators: create
a Map (e.g., cachedRewriters) keyed by `${tone}-${length}` using
toneMap[options.outputType] and lengthMap[options.outputType], look up an
existing Rewriter from the map before calling Rewriter.create, only call
Rewriter.create when there is a cache miss, store the new instance in
cachedRewriters, and update any cleanup logic that currently references
cachedRewriter to iterate/clear the map as needed (use symbols cachedRewriter,
Rewriter.create, toneMap, lengthMap, options.outputType, cachedTranslators to
locate places to change).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 109-122: The features object incorrectly sets proofread to
hasWriter even though proofread() will use LanguageModel first; update the
features mapping so that proofread is true when either hasLanguageModel or
hasWriter is available (i.e., change proofread: hasWriter to proofread:
hasLanguageModel || hasWriter) to match the behavior in the proofread()
implementation.

In `@packages/web/example/index.html`:
- Line 915: The interactive elements (the div with class "setup-guide-header"
that calls toggleSetupGuide() and the code elements that call
copyToClipboard(...)) are not keyboard-accessible; replace the <div
class="setup-guide-header" onclick="toggleSetupGuide()"> with a <button> (or add
tabindex="0", role="button" and an onkeydown handler that triggers
toggleSetupGuide on Enter/Space) and convert each <code
onclick="copyToClipboard(...)"> used as a copy affordance into a semantic
<button> (or similarly add tabindex/role/onkeydown) so keyboard users can focus
and activate them; update the CSS for the .copy-code / setup-guide-header button
styles to reset default button styling so appearance is unchanged while behavior
is accessible.

---

Duplicate comments:
In `@libraries/expo-ondevice-ai/example/components/shared/AIStatusBanner.tsx`:
- Around line 53-59: The Platform.OS three-way conditional in AIStatusBanner's
Text (styles.title) now correctly renders platform-specific loading titles
('Checking Chrome Built-in AI...' for web, 'Checking Apple Intelligence...' for
ios, and 'Checking Gemini Nano...' for android); no further changes
required—keep the conditional in the AIStatusBanner component as implemented and
ensure the Text with styles.title retains those exact strings.

In `@packages/web/example/index.html`:
- Around line 980-996: The GitHub link's anchor element (the <a> with
href="https://github.com/locanara/locanara" that wraps the settings-item whose
settings-text is "GitHub") uses the wrong repo URL; update that anchor's href to
"https://github.com/hyodotdev/locanara" so the GitHub settings-item points to
the correct repository.

---

Nitpick comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 240-251: The session creation/invalidation logic for
cachedLanguageModel and cachedSystemPrompt is duplicated in chat and chatStream;
extract that block into a small helper (e.g., ensureCachedLanguageModel or
initCachedModel) that accepts lm and options.systemPrompt (or newSystemPrompt)
and returns/updates cachedLanguageModel and cachedSystemPrompt as needed;
replace the duplicated blocks in chat and chatStream with calls to this helper
and ensure it calls cachedLanguageModel?.destroy?.() and sets cachedSystemPrompt
and creates cachedLanguageModel via lm.create({ initialPrompts }) with the same
initialPrompts logic.
- Around line 356-368: cachedRewriter is being destroyed and recreated on every
call so it never provides cache hits; change this to a keyed cache like
cachedTranslators: create a Map (e.g., cachedRewriters) keyed by
`${tone}-${length}` using toneMap[options.outputType] and
lengthMap[options.outputType], look up an existing Rewriter from the map before
calling Rewriter.create, only call Rewriter.create when there is a cache miss,
store the new instance in cachedRewriters, and update any cleanup logic that
currently references cachedRewriter to iterate/clear the map as needed (use
symbols cachedRewriter, Rewriter.create, toneMap, lengthMap, options.outputType,
cachedTranslators to locate places to change).

In `@packages/web/example/index.html`:
- Around line 1075-1078: Remove the redundant inline gap from the div element
that uses class "controls": edit the element with class="controls" (the
container that currently contains the buttons with ids "chat-btn" and
"chat-reset-btn") and delete the style fragment "gap: 0.5rem;" so the layout
relies on the existing .controls CSS rule; leave the other inline styles (e.g.,
margin, flex) unchanged.
- Around line 1001-1003: Remove the duplicate HTML comment above the detail
page: keep a single <!-- Summarize --> and delete the redundant one so the block
containing <div class="detail-page" id="page-summarize"> only has one comment;
update the area around the <div class="detail-page" id="page-summarize"> to
remove the extra comment occurrence.

- ci-expo: add expo prebuild step for ios/android (directories are
  gitignored), build library plugin first, remove unrelated
  packages/web trigger path
- ci-expo: disable llama.cpp bridge in CI to avoid complex SPM
  dependency chain (enableLlamaCpp: !process.env.CI)
- ci-react-native: fix CocoaPods infinite symlink loop by replacing
  bun's file: symlink with rsync copy excluding example/
- ci-web: use --legacy-peer-deps for npm install to handle workspace
  peer dependency conflict with react-helmet-async
- expo web: proofread feature check now includes LanguageModel
  (primary) in addition to Writer (fallback)
- packages/web: chat/chatStreaming now invalidate cached session
  when options change (systemPrompt, temperature, topK), matching
  the pattern used by summarize/translate/rewrite
- packages/web: resetChat clears options key

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
.github/workflows/ci-expo.yml (1)

58-99: No dependency on lint-and-test and no Gradle caching.

Two minor observations on the new build jobs:

  1. None of the three new jobs declare needs: lint-and-test, so they run in parallel even when lint/test fails — burning CI minutes on a known-bad commit. Consider adding needs: lint-and-test if you prefer fail-fast behavior.
  2. build-android has no Gradle dependency caching (unlike the CocoaPods cache in build-ios). Adding a cache for ~/.gradle/caches would speed up repeat builds.

Both are optional improvements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-expo.yml around lines 58 - 99, Add a dependency on the
lint-and-test job and enable Gradle caching for the build-android job: update
the build-android job (named "build-android") to include needs: lint-and-test so
it runs only after lint/test completes, and add an actions/cache step that
caches the Gradle directory (e.g., path: ~/.gradle/caches) with a suitable key
and restore-keys to speed repeated builds.
.github/workflows/ci-react-native.yml (3)

65-98: Add a Gradle dependency cache to avoid re-downloading on every run.

The build-android job has no Gradle cache step. Gradle dependency downloads are multi-GB and dominate Android build time. The lint-and-test job already uses actions/cache@v4 for Bun; apply the same pattern here for Gradle.

♻️ Proposed Gradle cache step (insert before "Build Android")
       - name: Install example dependencies
         working-directory: libraries/react-native-ondevice-ai/example
         run: bun install

+      - name: Cache Gradle dependencies
+        uses: actions/cache@v4
+        with:
+          path: |
+            ~/.gradle/caches
+            ~/.gradle/wrapper
+          key: ${{ runner.os }}-gradle-${{ hashFiles('libraries/react-native-ondevice-ai/example/android/**/*.gradle*', 'libraries/react-native-ondevice-ai/example/android/**/gradle-wrapper.properties') }}
+          restore-keys: |
+            ${{ runner.os }}-gradle-

       - name: Build Android
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-react-native.yml around lines 65 - 98, The
build-android job lacks a Gradle dependency cache, causing large re-downloads
each run; add an actions/cache@v4 step (insert before the "Build Android" step
in the build-android job) that caches Gradle artifacts (e.g., paths like
~/.gradle/caches and ~/.gradle/wrapper) and uses a key based on runner OS and
the Gradle wrapper checksum (or gradle version) to restore/update the cache so
subsequent runs reuse downloaded dependencies for the Build Android step.

83-94: New build jobs are missing a Bun install cache.

Both build-android and build-ios call bun install twice (library + example) without any cache, unlike lint-and-test which caches ~/.bun/install/cache. Adding the same cache step prevents redundant network fetches.

♻️ Proposed Bun cache step (insert before "Install library dependencies" in each job)
+      - name: Cache Bun dependencies
+        uses: actions/cache@v4
+        with:
+          path: ~/.bun/install/cache
+          key: ${{ runner.os }}-bun-rn-${{ hashFiles('libraries/react-native-ondevice-ai/bun.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-bun-rn-

       - name: Install library dependencies

Also applies to: 112-123

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-react-native.yml around lines 83 - 94, The CI jobs
build-android and build-ios run two bun install steps ("Install library
dependencies" and "Install example dependencies") without a Bun cache; add a
cache step that saves and restores ~/.bun/install/cache before the first bun
install in each job to avoid redundant network fetches and speed runs. Insert a
step named like "Cache Bun" (restore-key: path: ~/.bun/install/cache and
save-key likewise) immediately before the "Install library dependencies" step in
both jobs so both subsequent bun install runs reuse the cached artifacts.

100-103: Pin macos-latest to a specific macOS version for reproducible iOS builds.

macos-latest is a floating label that GitHub periodically re-points to new OS/Xcode versions. For example, it migrated from macOS 14 to macOS 15 in August–September 2025. A major version bump can break CocoaPods resolution or xcodebuild invocations without any change in this repo. Pinning (e.g. macos-15) keeps the build stable until an intentional upgrade.

-    runs-on: macos-latest
+    runs-on: macos-15
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-react-native.yml around lines 100 - 103, The build job
"build-ios" currently uses the floating runner label `runs-on: macos-latest`;
change it to a pinned macOS image (for example `runs-on: macos-15`) to ensure
reproducible iOS builds—update the `build-ios` job's `runs-on` value from
`macos-latest` to the specific macOS version you want to lock (e.g., `macos-15`)
so the workflow uses a stable Xcode/OS combination until you intentionally
upgrade.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.github/workflows/ci-expo.yml:
- Around line 174-184: The build-web job is missing the library build step; add
a step that runs "bun run build" in the libraries/expo-ondevice-ai
working-directory (same as in build-android and build-ios) before the example
dependencies install step so the library's build output (e.g., build/index.js)
exists for the example; ensure the step uses working-directory
"libraries/expo-ondevice-ai" and is placed immediately before the existing
"Install example dependencies" step.

---

Nitpick comments:
In @.github/workflows/ci-expo.yml:
- Around line 58-99: Add a dependency on the lint-and-test job and enable Gradle
caching for the build-android job: update the build-android job (named
"build-android") to include needs: lint-and-test so it runs only after lint/test
completes, and add an actions/cache step that caches the Gradle directory (e.g.,
path: ~/.gradle/caches) with a suitable key and restore-keys to speed repeated
builds.

In @.github/workflows/ci-react-native.yml:
- Around line 65-98: The build-android job lacks a Gradle dependency cache,
causing large re-downloads each run; add an actions/cache@v4 step (insert before
the "Build Android" step in the build-android job) that caches Gradle artifacts
(e.g., paths like ~/.gradle/caches and ~/.gradle/wrapper) and uses a key based
on runner OS and the Gradle wrapper checksum (or gradle version) to
restore/update the cache so subsequent runs reuse downloaded dependencies for
the Build Android step.
- Around line 83-94: The CI jobs build-android and build-ios run two bun install
steps ("Install library dependencies" and "Install example dependencies")
without a Bun cache; add a cache step that saves and restores
~/.bun/install/cache before the first bun install in each job to avoid redundant
network fetches and speed runs. Insert a step named like "Cache Bun"
(restore-key: path: ~/.bun/install/cache and save-key likewise) immediately
before the "Install library dependencies" step in both jobs so both subsequent
bun install runs reuse the cached artifacts.
- Around line 100-103: The build job "build-ios" currently uses the floating
runner label `runs-on: macos-latest`; change it to a pinned macOS image (for
example `runs-on: macos-15`) to ensure reproducible iOS builds—update the
`build-ios` job's `runs-on` value from `macos-latest` to the specific macOS
version you want to lock (e.g., `macos-15`) so the workflow uses a stable
Xcode/OS combination until you intentionally upgrade.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

🧹 Nitpick comments (1)
libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts (1)

340-368: rewrite destroys and recreates the Rewriter on every call — consider caching by options key.

Unlike the summarize and chat methods in this same file (and unlike Locanara.ts's rewrite), the rewriter instance is unconditionally torn down and rebuilt on each invocation. The Chrome docs note that each session consumes memory and that reuse is preferred.

♻️ Suggested fix using an options key (matching the summarizer pattern above)
+  let cachedRewriterKey: string = '';
   // (move cachedRewriter to have a companion key)

   async rewrite(text: string, options: RewriteOptions): Promise<RewriteResult> {
     const Rewriter = (globalThis as any).Rewriter;
     if (!Rewriter) throw new Error('Rewriter API not available in this browser');
+    const optionsKey = `${options.outputType}`;
-    cachedRewriter?.destroy();
-    cachedRewriter = await Rewriter.create({
+    if (!cachedRewriter || cachedRewriterKey !== optionsKey) {
+      cachedRewriter?.destroy();
+      cachedRewriter = await Rewriter.create({
         tone: toneMap[options.outputType] ?? 'as-is',
         length: lengthMap[options.outputType] ?? 'as-is',
       });
+      cachedRewriterKey = optionsKey;
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 340
- 368, The rewrite method currently destroys and recreates cachedRewriter on
every call; change it to reuse instances keyed by the computed options so we
only create a new Rewriter when the mapping (tone/length) changes. Compute the
key from options.outputType (using the same toneMap/lengthMap logic), check a
map of cached rewriters for that key, and only call Rewriter.create (and destroy
the previous instance for that key when replacing) when the key differs; update
cachedRewriter usage in rewrite to pull from that cache and avoid unconditional
cachedRewriter?.destroy() before creating a new one.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 32-34: Add lifecycle cleanup: implement and export a destroy()
function that clears cachedTranslators (Map.clear()), nulls cachedWriter and
cachedRewriter, and closes/disposes any underlying translator/writer instances
held in the map (iterate cachedTranslators values to call their close/dispose
method if present). Additionally, to avoid unbounded growth, add a simple
LRU-cap or max-size eviction policy for cachedTranslators where you remove the
least-recently-used entry when inserting beyond the limit (update access order
on lookup/usage). Update usages of cachedTranslators, cachedWriter, and
cachedRewriter in ExpoOndeviceAiModule.web.ts to ensure they respect the LRU
policy and that destroy() is callable by consumers.
- Around line 321-322: The cache key creation uses a hard-coded 'en' fallback
for options.sourceLanguage which breaks non-English inputs; change this to
either require callers to pass sourceLanguage or perform language detection
before building the key and before calling Translator.create. Specifically,
remove the `${options.sourceLanguage ?? 'en'}` fallback, call the
LanguageDetector (or equivalent async detect function) on the input text when
options.sourceLanguage is undefined to obtain detectedSource, then use
`${detectedSource}-${options.targetLanguage}` for the cachedTranslators key and
pass detectedSource (or the explicit options.sourceLanguage) into
Translator.create so caching and translation use the actual source language.
- Around line 28-34: Replace the loose any annotations with the proper types
from the `@types/dom-chromium-ai` package: import the appropriate exported
interfaces/types for the Chrome AI objects (the Summarizer, LanguageModel,
Rewriter, Writer and Translator types) and use them for cachedSummarizer,
cachedLanguageModel, cachedRewriter, cachedWriter and the Map value type for
cachedTranslators (keep cachedSystemPrompt as string | undefined). Update the
variable declarations (cachedSummarizer, cachedLanguageModel, cachedRewriter,
cachedWriter, cachedTranslators) to use those imported types so the file is
strict-mode friendly and no longer relies on any.

In `@packages/web/src/Locanara.ts`:
- Around line 1047-1055: The writeStreaming path is creating and caching a
Writer without passing the format, which poisons the cache keyed by optionsKey
that includes format; update the Writer creation in the writeStreaming branch
(where this._writer is set via window.Writer.create) to include the mapped
format (use the same mapper used in write, e.g.
this.mapWriterFormat(options.format) or equivalent) alongside tone and length so
the created instance matches the optionsKey and prevents format-less reuse; keep
the existing this._writerOptionsKey assignment logic unchanged.
- Around line 840-869: proofread currently reuses the shared this._writer
(created by write/writeStreaming) which can carry tone/length/format state;
change proofread to always create a fresh, neutral Writer instance via
window.Writer.create({ monitor: this.createMonitor() }) instead of reusing
this._writer (or create a dedicated this._proofreader and use that), use that
instance to call write(...) and then close/destroy it, and update destroy() to
also cleanup the new proofreader instance (or ensure any temporary writer is
disposed) so state from write/writeStreaming cannot affect proofread's output;
keep existing error handling (LocanaraError.executionFailed) and context
handling the same.
- Around line 250-257: checkTranslatorAvailability currently checks typeof
translator === "object" which misses WebIDL globals exposed as function objects;
update checkTranslatorAvailability to detect function or object (e.g., typeof
translator === "function" || typeof translator === "object" or simply truthiness
with typeof !== "undefined") when inspecting window.Translator so it returns
FeatureAvailability.AVAILABLE when the API exists; adjust the logic in the
checkTranslatorAvailability function (referencing window.Translator and
FeatureAvailability.AVAILABLE) so getDeviceCapability will correctly report
TRANSLATE as available.
- Around line 878-944: In describeImage, the multimodal call uses the wrong
format: update the lmAPI.create(...) call (used to instantiate
ChromeLanguageModelSession) to include expectedInputs: [{ type: "image" }], and
change the session.prompt(...) call to send a single role entry whose content is
an array of typed items — e.g. one object with role "user" and content [{ type:
"image", value: imageBlob }, { type: "text", value: prompt }] — then keep
session.destroy() and error handling as-is.
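The multimodal payload shape described in the describeImage comment above can be sketched as a small pure helper. buildImagePrompt and the type aliases are illustrative, not code from Locanara.ts:

```typescript
// Builds the role/content message array the Chrome Prompt API expects
// for a multimodal prompt: one user message whose content is an array
// of typed items (image first, then the text instruction).
type PromptItem = { type: 'image' | 'text'; value: unknown };
type PromptMessage = { role: 'user'; content: PromptItem[] };

function buildImagePrompt(imageBlob: unknown, text: string): PromptMessage[] {
  return [
    {
      role: 'user',
      content: [
        { type: 'image', value: imageBlob },
        { type: 'text', value: text },
      ],
    },
  ];
}
```

In the browser, the session would be created with `lmAPI.create({ expectedInputs: [{ type: 'image' }] })` and then called as `await session.prompt(buildImagePrompt(blob, prompt))`, per the review comment.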

---

Duplicate comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 61-79: The current behavior in checkAvailability intentionally
returns hasAPI(api) from the catch block to perform a lenient fallback when
obj.availability() times out or fails; leave the implementation of
checkAvailability (and its use of hasAPI(api)) as-is to preserve the designed
behavior that Chrome Built-in AI availability timeouts do not lock out features.

In `@packages/web/src/Locanara.ts`:
- Around line 576-592: The chatStreaming loop currently yields raw chunks from
this._languageModel.promptStreaming (reader.read) which can be cumulative;
change chatStreaming to normalize to deltas by tracking previousText: initialize
prev = "" before the loop, on each result.value coerce to string, if
value.startsWith(prev) treat delta = value.slice(prev.length) else treat delta =
value (fallback for non-cumulative chunks or resets), then set prev = value and
yield only the delta when non-empty; keep reader.releaseLock() in the finally
block and ensure this logic lives around the promptStreaming/read loop
(referencing promptStreaming, reader.read and chatStreaming).
- Around line 549-575: chatStreaming currently builds optionsKey without
options.initialPrompts and never applies options.initialPrompts when creating a
session, causing dropped prompts and unnecessary session recreation; update
chatStreaming to mirror chat by including options.initialPrompts in the
optionsKey (e.g. JSON.stringify(initialPrompts or options.initialPrompts) as
part of optionsKey), construct the initialPrompts array when
options.initialPrompts exists (same shape: { role, content }), and pass that
array into lmAPI.create via the initialPrompts parameter when creating
_languageModel (and then set _languageModelOptionsKey) so session reuse and
prompt handling match chat; refer to the chatStreaming and chat methods, the
optionsKey / _languageModelOptionsKey variables, lmAPI.create and
createMonitor() to locate the changes.
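The cumulative-to-delta normalization described in the chatStreaming comment above can be sketched as a small stateful helper (illustrative names, not code from the repo):

```typescript
// Returns a closure that converts possibly-cumulative stream chunks to
// deltas. If a chunk extends the previous text, only the new tail is
// emitted; otherwise the chunk is passed through unchanged (fallback for
// non-cumulative streams or session resets).
function makeDeltaNormalizer(): (chunk: string) => string {
  let prev = '';
  return (chunk: string): string => {
    const delta = chunk.startsWith(prev) ? chunk.slice(prev.length) : chunk;
    prev = chunk;
    return delta;
  };
}
```

In the streaming loop, each `result.value` would be passed through the normalizer and yielded only when the delta is non-empty, with `reader.releaseLock()` kept in the finally block.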

---

Nitpick comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 340-368: The rewrite method currently destroys and recreates
cachedRewriter on every call; change it to reuse instances keyed by the computed
options so we only create a new Rewriter when the mapping (tone/length) changes.
Compute the key from options.outputType (using the same toneMap/lengthMap
logic), check a map of cached rewriters for that key, and only call
Rewriter.create (and destroy the previous instance for that key when replacing)
when the key differs; update cachedRewriter usage in rewrite to pull from that
cache and avoid unconditional cachedRewriter?.destroy() before creating a new
one.

- Expo iOS: bump deployment target 15.1 → 17.0 for Locanara pod compat
- Expo Android: resilient locanara-versions.json path with fallbacks
- RN iOS: use rm -rf for directory copy (not just symlink) on CI
- Web: add --ignore-scripts to skip root prepare script in CI

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (5)
.github/workflows/ci-react-native.yml (3)

65-67: Consider gating build jobs on lint-and-test.

Both new jobs run in parallel with lint-and-test, meaning a failing lint or test doesn't prevent the (slower, resource-intensive) Android/iOS builds from starting. Adding needs: lint-and-test would avoid burning macOS/ubuntu minutes on a broken branch.

♻️ Proposed change
   build-android:
     name: Build Android
     runs-on: ubuntu-latest
+    needs: lint-and-test

   build-ios:
     name: Build iOS
     runs-on: macos-latest
+    needs: lint-and-test

Also applies to: 100-102

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-react-native.yml around lines 65 - 67, The build jobs
currently run in parallel with lint-and-test; update the workflow so the Android
and iOS jobs (job names build-android and build-ios) declare needs:
lint-and-test to gate them on successful lint-and-test completion; locate the
build-android and build-ios job definitions in the YAML and add the needs
property referencing lint-and-test to prevent costly builds from starting when
lint or tests fail.

83-94: Both new jobs are missing a Bun dependency cache.

The lint-and-test job has an explicit actions/cache step for ~/.bun/install/cache. The build-android and build-ios jobs run bun install without caching, resulting in a full network fetch on every CI run.

⚡ Add Bun cache to each new job (after `Setup Bun`, before `Install library dependencies`)
+      - name: Cache Bun dependencies
+        uses: actions/cache@v4
+        with:
+          path: ~/.bun/install/cache
+          key: ${{ runner.os }}-bun-rn-android-${{ hashFiles('libraries/react-native-ondevice-ai/bun.lock', 'libraries/react-native-ondevice-ai/example/bun.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-bun-rn-android-
+
       - name: Install library dependencies

Use an analogous block with -ios- in the key for build-ios.

Also applies to: 112-123

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-react-native.yml around lines 83 - 94, Add the Bun
dependency cache step to both build-android and build-ios jobs: after the "Setup
Bun" step and before the "Install library dependencies" step, insert an
actions/cache step that caches the Bun install cache directory
(~/.bun/install/cache) and uses a stable key (e.g., runner OS + bun-version +
checksum of lockfile) so bun install is served from cache; for the build-ios job
use a similar cache key name that includes "-ios-" to distinguish it. Ensure the
cache step runs before both "Install library dependencies" and "Install example
dependencies" in each job so bun install uses the cached files.

65-98: Add a Gradle dependency cache to build-android.

Without it, every run re-downloads all Gradle/Maven dependencies, adding several minutes to each build. The lint-and-test job already caches Bun — the same principle applies here.

⚡ Proposed Gradle cache step (insert before "Build Android")
       - name: Install example dependencies
         working-directory: libraries/react-native-ondevice-ai/example
         run: bun install

+      - name: Cache Gradle
+        uses: actions/cache@v4
+        with:
+          path: |
+            ~/.gradle/caches
+            ~/.gradle/wrapper
+          key: ${{ runner.os }}-gradle-rn-${{ hashFiles('libraries/react-native-ondevice-ai/example/android/**/*.gradle*', 'libraries/react-native-ondevice-ai/example/android/**/gradle-wrapper.properties') }}
+          restore-keys: |
+            ${{ runner.os }}-gradle-rn-
+
       - name: Build Android
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-react-native.yml around lines 65 - 98, Add a
Gradle/Maven cache step to the build-android job to avoid re-downloading
dependencies: insert an actions/cache step before the "Build Android" step that
caches Gradle directories (e.g., ~/.gradle/caches and ~/.gradle/wrapper) and the
local Maven repository (e.g., ~/.m2/repository), keyed by OS + Gradle wrapper
checksum or relevant gradle.properties/lockfile; ensure the cache restore/save
wraps the working-directory used by the Android build
(libraries/react-native-ondevice-ai/example/android) so the Gradle
daemon/wrapper and Maven artifacts are persisted across runs and significantly
speed up the ./gradlew assembleDebug --no-daemon step.
.github/workflows/ci-web.yml (2)

26-32: Add npm caching to the setup-node action. npm ci cannot be used without a lockfile.

The caching recommendation is valid: actions/setup-node@v4 supports a cache option that avoids re-downloading packages on every run. However, switching from npm install to npm ci requires committing a package-lock.json to the repository first—currently, packages/web has no npm lockfile.

♻️ Current path forward

To enable caching with the current setup:

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
+         cache: 'npm'
+         cache-dependency-path: packages/web/package.json

To unlock npm ci (for true reproducibility and faster installs), create and commit packages/web/package-lock.json by running npm install --package-lock-only locally, then update the workflow to use npm ci --ignore-scripts.
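If that lockfile route is taken, the two workflow steps could end up looking like this (a sketch; step names and the cache-dependency-path are illustrative):

```yaml
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
          cache: 'npm'
          cache-dependency-path: packages/web/package-lock.json

      - name: Install dependencies
        working-directory: packages/web
        run: npm ci --ignore-scripts
```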

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-web.yml around lines 26 - 32, Update the GitHub Actions
step that uses actions/setup-node@v4 to enable caching by adding the cache
option (e.g., cache: 'npm') alongside node-version: '20', and change the install
step: either keep using npm install with --legacy-peer-deps --ignore-scripts (so
caching still helps) or, if you commit packages/web/package-lock.json (run npm
install --package-lock-only locally and commit it), switch the install command
to npm ci --ignore-scripts to use the lockfile and benefit from faster
reproducible installs; reference the setup-node action and the install step in
the workflow and ensure packages/web/package-lock.json is present before
switching to npm ci.

24-29: Consider SHA-pinning third-party actions for supply-chain hardening.

Using mutable @v4 tags means a tag re-point (accidental or malicious) would silently execute different code. Pinning to a commit SHA removes this risk.

♻️ SHA-pinned equivalents (current as of February 2026)
-     - uses: actions/checkout@v4
+     - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5  # v4.3.1

      - name: Setup Node.js
-       uses: actions/setup-node@v4
+       uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020  # v4.4.0
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci-web.yml around lines 24 - 29, Replace mutable action
tags with commit SHAs to prevent silent repointing: update the usages of
actions/checkout@v4 and actions/setup-node@v4 in the workflow to their
corresponding full commit SHA refs (e.g., actions/checkout@<commit-sha> and
actions/setup-node@<commit-sha>), ensuring you pin both occurrences in the file
and verify the chosen SHAs match the currently intended releases before
committing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@libraries/expo-ondevice-ai/android/build.gradle`:
- Around line 14-20: The loop over candidates currently returns a default as
soon as a candidate file exists even if the regex match fails; change the logic
in the for (candidate in candidates) loop so you read versionJson and compute
matcher but only return matcher[0][1] when matcher matches (i.e., move the
return inside the match guard), otherwise continue to the next candidate; after
the loop completes with no successful match, return the default "1.0.2". Ensure
this uses the existing variables candidate, candidates, versionJson, and
matcher.

In `@libraries/expo-ondevice-ai/example/app.config.ts`:
- Line 55: The example app currently sets deploymentTarget: '17.0', which drops
iOS 15/16 support. Keep this intentional change but document it: add a short
inline comment next to the deploymentTarget setting (or a README note) making it
clear that iOS 17+ is required for this example; update only app.config.ts and
reference the deploymentTarget setting so reviewers can find the change.
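The build.gradle fix from the first inline comment can be sketched as follows (the function name is hypothetical and the JSON version regex is an assumption about the existing script; only the candidate/versionJson/matcher variables come from the prompt):

```groovy
// Sketch: return only when the regex actually matches; fall through otherwise.
def resolveVersion(List<File> candidates) {
  for (candidate in candidates) {
    if (!candidate.exists()) continue
    def versionJson = candidate.text
    def matcher = versionJson =~ /"version"\s*:\s*"([^"]+)"/
    if (matcher) {
      return matcher[0][1]  // return moved inside the match guard
    }
    // no match in this file, keep trying the remaining candidates
  }
  return "1.0.2"  // default only after every candidate fails
}
```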

---

Nitpick comments:
In @.github/workflows/ci-react-native.yml:
- Around line 65-67: The build jobs currently run in parallel with
lint-and-test; update the workflow so the Android and iOS jobs (job names
build-android and build-ios) declare needs: lint-and-test to gate them on
successful lint-and-test completion; locate the build-android and build-ios job
definitions in the YAML and add the needs property referencing lint-and-test to
prevent costly builds from starting when lint or tests fail.
- Around line 83-94: Add the Bun dependency cache step to both build-android and
build-ios jobs: after the "Setup Bun" step and before the "Install library
dependencies" step, insert an actions/cache step that caches the Bun install
cache directory (~/.bun/install/cache) and uses a stable key (e.g., runner OS +
bun-version + checksum of lockfile) so bun install is served from cache; for the
build-ios job use a similar cache key name that includes "-ios-" to distinguish
it. Ensure the cache step runs before both "Install library dependencies" and
"Install example dependencies" in each job so bun install uses the cached files.
- Around line 65-98: Add a Gradle/Maven cache step to the build-android job to
avoid re-downloading dependencies: insert an actions/cache step before the
"Build Android" step that caches Gradle directories (e.g., ~/.gradle/caches and
~/.gradle/wrapper) and the local Maven repository (e.g., ~/.m2/repository),
keyed by OS + Gradle wrapper checksum or relevant gradle.properties/lockfile;
ensure the cache restore/save wraps the working-directory used by the Android
build (libraries/react-native-ondevice-ai/example/android) so the Gradle
daemon/wrapper and Maven artifacts are persisted across runs and significantly
speed up the ./gradlew assembleDebug --no-daemon step.

In @.github/workflows/ci-web.yml:
- Around line 26-32: Update the GitHub Actions step that uses
actions/setup-node@v4 to enable caching by adding the cache option (e.g., cache:
'npm') alongside node-version: '20', and change the install step: either keep
using npm install with --legacy-peer-deps --ignore-scripts (so caching still
helps) or, if you commit packages/web/package-lock.json (run npm install
--package-lock-only locally and commit it), switch the install command to npm ci
--ignore-scripts to use the lockfile and benefit from faster reproducible
installs; reference the setup-node action and the install step in the workflow
and ensure packages/web/package-lock.json is present before switching to npm ci.
- Around line 24-29: Replace mutable action tags with commit SHAs to prevent
silent repointing: update the usages of actions/checkout@v4 and
actions/setup-node@v4 in the workflow to their corresponding full commit SHA
refs (e.g., actions/checkout@<commit-sha> and actions/setup-node@<commit-sha>),
ensuring you pin both occurrences in the file and verify the chosen SHAs match
the currently intended releases before committing.
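The `needs:` gating described in the first ci-react-native.yml nitpick can be sketched as (runner labels illustrative, job bodies elided):

```yaml
jobs:
  lint-and-test:
    runs-on: ubuntu-latest
    # ...lint and test steps...

  build-android:
    needs: lint-and-test
    runs-on: ubuntu-latest
    # ...Android build steps...

  build-ios:
    needs: lint-and-test
    runs-on: macos-latest
    # ...iOS build steps...
```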

- Reformat web module and plugin for consistent code style
- Simplify inputType helper to expression body
- Fix import ordering in index.ts
- Enhance example app framework demos with interactive UI
- Expo iOS: bump ExpoOndeviceAi podspec platform 15.0 → 17.0
- RN iOS: add local Locanara pod path (trunk 1.0.1 lacks newer types)
- RN iOS: bump NitroOndeviceAi podspec platform 15.0 → 17.0
- Web: fix biome formatting (quotes, semicolons)
- Locanara podspec: remove weak_frameworks FoundationModels (not in
  Xcode 16.x SDK; canImport() handles this at compile time)
- RN iOS: update HybridOndeviceAi to match Nitro-generated protocol
  (Variant types for optional params, throws on listener methods)
- Replace all `any` casts with proper Chrome AI type interfaces
  in ExpoOndeviceAiModule.web.ts (Summarizer, Translator, Rewriter,
  Writer, LanguageModel)
- Add translator cache eviction (max 10 entries) and destroy()
  method for resource cleanup
- Fix README package name from @locanara/web to locanara
- Remove outdated GitHub Packages (Pro) installation instructions
- Separate _proofreadWriter from _writer to prevent tone/length
  contamination when write() is called before proofread()
- Add missing format and sharedContext to writeStreaming's Writer.create()
- Fix build.gradle version lookup to try all candidates instead of
  returning default on first file match failure
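The translator cache eviction noted above (max 10 entries, destroy() on cleanup) could be sketched in TypeScript roughly like this. Names such as cacheTranslator and TranslatorLike are hypothetical, not the actual module code; only destroy() mirrors the Chrome Translator API.

```typescript
// Sketch of a bounded translator cache with insertion-order eviction.
interface TranslatorLike {
  destroy(): void; // Chrome's Translator exposes destroy() for resource cleanup
}

const MAX_TRANSLATORS = 10;
const translatorCache = new Map<string, TranslatorLike>();

// Cache a translator keyed by a language pair such as "en-ko". Once the cap
// is reached, the oldest entry (first-inserted Map key) is destroyed and evicted.
function cacheTranslator(key: string, translator: TranslatorLike): void {
  if (translatorCache.size >= MAX_TRANSLATORS && !translatorCache.has(key)) {
    const oldestKey = translatorCache.keys().next().value as string;
    translatorCache.get(oldestKey)?.destroy();
    translatorCache.delete(oldestKey);
  }
  translatorCache.set(key, translator);
}
```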
@hyochan hyochan merged commit cf95853 into main Feb 24, 2026
10 checks passed
@hyochan hyochan deleted the feat/expo-web-support branch February 24, 2026 09:47