feat: add web platform support for expo-ondevice-ai #9
Conversation
Add full web support using Chrome Built-in AI (Gemini Nano) APIs. All 8 AI features work on web: summarize, classify, extract, chat, chatStream, translate, rewrite, and proofread.

- Add ExpoOndeviceAiModule.web.ts with Chrome Built-in AI integration
- Add packages/web standalone SDK
- Update example app with web platform detection (AIStatusBanner, AppState)
- Fix chat streaming (cumulative/delta auto-detection) and input focus outline
- Add entity colors for the organization type in ExtractDemo
- Add ci-web.yml workflow; update ci-expo.yml with a build-web job
- Remove misleading build-web job from ci-react-native.yml
- Update README and Claude guides with web platform documentation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit settings.
📝 Walkthrough

Adds full Web support: a new Locanara Web SDK package, a browser Expo module implementation, TypeScript types, docs, examples, tests, and CI workflows to integrate Chrome Built‑in AI (Gemini Nano) into on‑device AI APIs and platform matrices.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as App UI
    participant SDK as Locanara SDK
    participant Window as Browser (window / window.ai)
    participant Chrome as Chrome Built‑in AI (Gemini)
    UI->>SDK: initialize()
    SDK->>Window: probe APIs (Summarizer/LanguageModel/Translator/...)
    Window-->>SDK: constructors / availability
    SDK-->>UI: InitializeResult (capabilities)
    UI->>SDK: chatStreaming(message, opts)
    SDK->>Chrome: create session / promptStreaming()
    Chrome-->>SDK: ReadableStream chunks
    loop stream
        SDK-->>UI: yield chunk
    end
    UI->>SDK: downloadModel(id)
    SDK->>Chrome: (noop / managed by Chrome)
    SDK-->>UI: progress/completion (cached/no-op)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~55 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Code Review
This pull request introduces web platform support for the expo-ondevice-ai module, leveraging Chrome Built-in AI (Gemini Nano) APIs. Key changes include updating documentation across 09-expo-ondevice-ai.md, 09-platform-differences.md, and libraries/expo-ondevice-ai/README.md to reflect web compatibility, requirements, and feature availability. A new web-specific module, ExpoOndeviceAiModule.web.ts, was added to bridge the Expo API to Chrome's Summarizer, LanguageModel, Translator, Rewriter, and Writer APIs, including handling streaming and availability checks. The example application (libraries/expo-ondevice-ai/example/) was updated to correctly display platform information and AI status for web, and its bun.lock file saw numerous dependency updates. Additionally, a new standalone @locanara/web package was introduced, complete with its own README.md, biome.json configuration, a comprehensive example application (packages/web/example/index.html, packages/web/example/main.ts), build scripts, error handling (Errors.ts), and type definitions (Types.ts).
Review comments highlighted several areas for improvement: the platform property in AppState.tsx and ExpoOndeviceAiModule.web.ts was incorrectly typed or hardcoded to 'IOS' for web, requiring correction to 'WEB'. Multiple instances of direct user input concatenation into LLM prompts in classify, extract, and proofread methods were identified as prompt injection vulnerabilities in both ExpoOndeviceAiModule.web.ts and the standalone web SDK, with suggestions to use delimiters. Hardcoded confidence scores and generic entity types in classify and extract were noted as potential inaccuracies or limitations. Minor formatting issues like unnecessary blank lines in markdown files and a missing language identifier in a code snippet were also pointed out.
Actionable comments posted: 20
🧹 Nitpick comments (14)
libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ChatDemo/index.tsx (1)
311-311: Prefer casting the object rather than the value to avoid `any` in strict mode.

`'none' as any` leaks `any` into the style object. Since `outlineStyle` is a valid React Native Web property that simply isn't in the RN `TextStyle` types, a narrower object-level assertion is cleaner and keeps the `any` footprint minimal:

♻️ Proposed refactor

```diff
- ...(Platform.OS === 'web' ? {outlineStyle: 'none' as any} : {}),
+ ...(Platform.OS === 'web' ? ({outlineStyle: 'none'} as object) : {}),
```

As per coding guidelines, `**/*.ts?(x)` files should use strict TypeScript; value-level `any` casts undermine that.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ChatDemo/index.tsx` at line 311, The inline value-level cast "'none' as any" in the spread expression for the Platform.OS === 'web' branch should be replaced with an object-level, narrow cast to avoid leaking any; update the expression "(Platform.OS === 'web' ? {outlineStyle: 'none' as any} : {})" to use a single object cast such as "(Platform.OS === 'web' ? ({ outlineStyle: 'none' } as Partial<TextStyle>) : {})" (or "as unknown as TextStyle" if TextStyle isn't imported) so the any is scoped to the whole object and not the string value.

packages/web/biome.json (1)
14-14: `noNonNullAssertion: "off"` silences all non-null assertions project-wide.

Disabling this rule globally can mask null-safety issues. Consider keeping it at `"warn"` to maintain visibility of risky `!` assertions without blocking the build.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/biome.json` at line 14, The project-level Biome config currently sets "noNonNullAssertion": "off" which mutes all non-null assertion warnings; change the "noNonNullAssertion" setting in the biome.json config from "off" to "warn" so non-null assertions (`!`) continue to surface as warnings (not errors) across the codebase, preserving visibility without breaking builds.

libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ExtractDemo.tsx (1)
29-29: Consider adding `organization` to the `entityTypes` option array on Line 47.

The new color entry for `organization` is unused unless the model spontaneously emits that entity type, since `entityTypes` on Line 47 doesn't include `'organization'`. Either add it to the extraction options or leave a comment explaining that this is intentional defensive coloring.

💡 Option: include `organization` in the requested entity types

```diff
- const options = {entityTypes: ['person', 'email', 'phone', 'date', 'location'], extractKeyValues: true};
+ const options = {entityTypes: ['person', 'email', 'phone', 'date', 'location', 'organization'], extractKeyValues: true};
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ExtractDemo.tsx` at line 29, The color map adds an "organization" entry but the extraction request's entityTypes array (the entityTypes option in the ExtractDemo component) doesn't include 'organization', so the color is never used; update the entityTypes option array to include 'organization' (i.e., add 'organization' to the array referenced by the entityTypes variable/prop in ExtractDemo) or, if intentional, add a one-line comment next to the entityTypes declaration explaining this is defensive and that 'organization' is included only for potential spontaneous model outputs.

libraries/expo-ondevice-ai/example/components/AppState.tsx (1)
244-255: `platformLabel` falls through to `'Android'` for any unrecognized platform.

`Platform.OS` can also be `'macos'` or `'windows'` in some Expo environments. The current else-branch silently displays `'Android'` in those cases. Low risk for an example app but worth tightening.

💡 Optional: handle all known Platform.OS values

```diff
- const platformLabel =
-   Platform.OS === 'ios'
-     ? 'iOS'
-     : Platform.OS === 'web'
-       ? 'Web'
-       : 'Android';
+ const platformLabel =
+   Platform.OS === 'ios'
+     ? 'iOS'
+     : Platform.OS === 'web'
+       ? 'Web'
+       : Platform.OS === 'macos'
+         ? 'macOS'
+         : 'Android';
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/expo-ondevice-ai/example/components/AppState.tsx` around lines 244 - 255, platformLabel currently falls through to 'Android' for any unrecognized Platform.OS; update the logic in the component that computes platformLabel (and related providerLabel) to explicitly handle known values ('ios', 'android', 'web', 'macos', 'windows') instead of using a final else that assumes Android—e.g., replace the nested ternary with a switch or explicit if/else branches that return 'iOS', 'Android', 'Web', 'macOS', 'Windows' (or a clear 'Unknown' fallback) and adjust providerLabel to reference cap.platform appropriately (checking 'IOS' vs other providers) so macOS/windows aren't mislabeled as Android.

.github/workflows/ci-web.yml (1)
26-32: Consider enabling npm caching via `actions/setup-node`.

The `setup-node` action supports built-in caching which would speed up subsequent runs:

Proposed change

```diff
  - name: Setup Node.js
    uses: actions/setup-node@v4
    with:
      node-version: '20'
+     cache: 'npm'
+     cache-dependency-path: packages/web/package-lock.json
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-web.yml around lines 26 - 32, Enable npm caching in the GitHub Actions setup-node step to speed CI: update the existing actions/setup-node@v4 step (the "Setup Node.js" job) to enable caching by adding the cache: 'npm' option and set cache-dependency-path to your lockfile (e.g., package-lock.json or package.json) so subsequent runs restore node_modules automatically; keep the "Install dependencies" step (npm install) but rely on the setup-node cache settings to populate the cache.

.github/workflows/ci-react-native.yml (1)
65-147: Same `needs` and caching concerns as `ci-expo.yml`.

The `build-android` and `build-ios` jobs here share the same pattern: no `needs: lint-and-test` dependency and no Bun dependency caching. Consider applying the same improvements suggested for `ci-expo.yml`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-react-native.yml around lines 65 - 147, Add the same safety and caching improvements to the build-android and build-ios jobs: make both jobs depend on the lint-and-test job by adding needs: lint-and-test to the build-android and build-ios job definitions (so they don’t run if tests/lint fail), and add Bun dependency caching around the Bun install steps (use actions/cache keyed on libraries/react-native-ondevice-ai and the example’s lockfile or bun.lockb) similar to the changes suggested for ci-expo.yml; update the Install library dependencies and Install example dependencies steps in both jobs to restore/save the Bun cache before/after bun install.

libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts (2)
162-181: `classify` and `extract` create a new LanguageModel session per call without caching.

Unlike `chat`, these methods call `lm.create({})` on every invocation. Session creation can be slow and resource-intensive. Consider caching a plain (no system-prompt) session for reuse across `classify`/`extract` calls, or document why per-call creation is preferred.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 162 - 181, The classify (and similarly extract) method creates a fresh LanguageModel session via lm.create({}) on every call which is wasteful; modify ExpoOndeviceAiModule.web to reuse a single plain session: add a module-scoped cachedSession initialized lazily from getLanguageModelAPI()/lm.create({}) and have classify and extract use cachedSession.prompt(...) (destroy only when module unloads or via an explicit shutdown), or alternatively add a clear comment explaining why per-call creation is required; references: classify, extract, getLanguageModelAPI, lm.create, session, and session.destroy.
27-33: No mechanism to reset/destroy cached API instances.

Module-level cached instances (`cachedSummarizer`, `cachedLanguageModel`, `cachedTranslators`, etc.) persist for the lifetime of the page with no public API to release them. This could lead to stale sessions or memory pressure in long-lived SPAs. Consider exposing a `destroy()` or `reset()` method on the module.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 27 - 33, Add a public reset/destroy function (e.g., export function resetCachedInstances() or destroyCachedInstances()) that releases and clears the module-level cached instances: for cachedSummarizer, cachedLanguageModel, cachedRewriter, cachedWriter call any available cleanup method on the instance (close(), dispose(), destroy(), or similar) if present, then set them to null; for cachedTranslators iterate the Map, call each translator's cleanup method if present, then clear() the Map and reset cachedSummarizerKey to ''. Export this function from ExpoOndeviceAiModule.web.ts and ensure callers can invoke it to free resources in long‑lived SPA contexts.

.github/workflows/ci-expo.yml (1)
60-170: Consider adding `needs: lint-and-test` and dependency caching to build jobs.

The three new build jobs (`build-android`, `build-ios`, `build-web`) run in parallel with `lint-and-test` with no dependency. This means expensive builds (especially `build-ios` on macOS runners) will consume CI minutes even when lint or tests fail. Additionally, none of these jobs cache Bun dependencies, unlike the `lint-and-test` job.

Suggested improvements

Add `needs` and caching to each build job. Example for `build-android`:

```diff
  build-android:
    name: Build Android
    runs-on: ubuntu-latest
+   needs: lint-and-test
    steps:
      - uses: actions/checkout@v4
      - name: Setup Java
        uses: actions/setup-java@v4
        with:
          distribution: 'temurin'
          java-version: '17'
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Setup Bun
        uses: oven-sh/setup-bun@v2
        with:
          bun-version: '1.1.38'
+     - name: Cache Bun dependencies
+       uses: actions/cache@v4
+       with:
+         path: ~/.bun/install/cache
+         key: ${{ runner.os }}-bun-${{ hashFiles('libraries/expo-ondevice-ai/bun.lock') }}
+         restore-keys: |
+           ${{ runner.os }}-bun-
```

Apply the same pattern to `build-ios` and `build-web`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-expo.yml around lines 60 - 170, The build jobs (build-android, build-ios, build-web) should depend on lint-and-test and reuse dependency caches: add needs: lint-and-test to each job and insert actions/cache@v4 steps before Bun install to cache Bun deps for both libraries/expo-ondevice-ai and libraries/expo-ondevice-ai/example (use keys that include hashFiles('libraries/expo-ondevice-ai/bun.lockb') and hashFiles('libraries/expo-ondevice-ai/example/bun.lockb') or similar), restoring before running bun install; leave the existing CocoaPods cache step in build-ios as-is and ensure the Bun cache runs before its bun install steps in that job as well.

packages/web/src/Locanara.ts (1)
892-896: Non-null assertion on `canvas.getContext("2d")` could fail.

`getContext("2d")` returns `null` if another context type (e.g., WebGL) was already obtained on the same canvas. Since this canvas is freshly created, it's effectively impossible here, but a guard would be more defensive.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/Locanara.ts` around lines 892 - 896, The code uses a non-null assertion for canvas.getContext("2d") which could return null; update the creation of ctx to defensively handle that case by checking the return value of canvas.getContext("2d") (the ctx variable) and either throwing a clear error or returning early if ctx is null, so downstream code using ctx (created from the canvas variable) never assumes non-null; locate the canvas, canvas.width/height and ctx assignment in the same block and add the null-check and proper error handling.

packages/web/example/main.ts (2)
447-457: `replace('_', '')` only removes the first underscore — fragile for future feature types.

Currently works since only `DESCRIBE_IMAGE` has an underscore, but any future enum value with multiple underscores (e.g., `DESCRIBE_IMAGE_V2`) would break.

Suggested fix

```diff
- const featureId = f.feature.toLowerCase().replace('_', '')
+ const featureId = f.feature.toLowerCase().replaceAll('_', '')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/example/main.ts` around lines 447 - 457, The code that builds featureId in the loop over capability.availableFeatures uses feature.toLowerCase().replace('_', '') which only removes the first underscore; update the featureId generation (in the for loop that defines featureId from f.feature) to remove all underscores (e.g., use a global replace or replaceAll) so multi-underscore enum values like DESCRIBE_IMAGE_V2 are normalized correctly while preserving the existing toLowerCase and subsequent assignments to featureAvailability, hasAnyAvailable, and hasDownloadable.
120-151: `markdownToHtml` re-introduces HTML tags after escaping — XSS-safe but fragile.

The `escapeHtml` → regex pipeline is currently safe since the `$1` captures are already escaped. However, this hand-rolled markdown renderer is a fragile pattern. For an example app the risk is acceptable, but note that if any regex is later modified to capture unescaped content, it would open an XSS vector. The static analysis warnings for `innerHTML` at Line 160 stem from this pattern.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/example/main.ts` around lines 120 - 151, The markdownToHtml function currently escapes input via escapeHtml then reinserts HTML via regex replacements which is fragile and can reintroduce XSS when patterns change; fix by either (A) replacing this hand-rolled logic with a vetted markdown-to-HTML library and then sanitize the resulting HTML (e.g., run through DOMPurify) before assigning to innerHTML, or (B) rewrite markdownToHtml so it parses tokens first, escapes only plain text nodes with escapeHtml and then constructs HTML nodes (never performing regex replacements that inject raw capture groups), referencing the markdownToHtml and escapeHtml functions and the site code path that sets innerHTML. Ensure the final HTML is sanitized before any innerHTML assignment.

packages/web/tests/Locanara.test.ts (1)
37-52: Mock `promptStreaming` never exercises the streaming path.

The mock at Lines 39-47 returns a `ReadableStream`-like object, but no test actually calls `chatStreaming` to verify the streaming chunking logic. Since the PR summary highlights "cumulative/delta auto-detection" as a key behavior, this would benefit from a dedicated streaming test.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/tests/Locanara.test.ts` around lines 37 - 52, The mockLanguageModelSession.promptStreaming currently returns a reader but no test exercises chatStreaming; add a new unit test that uses the existing mockLanguageModelSession (or adjust it to expose a true ReadableStream behavior) and calls the chatStreaming function exported/used by the component to consume the stream, reading chunks from getReader().read() and asserting the component's cumulative vs delta auto-detection and chunk-assembly logic; specifically reference mockLanguageModelSession.promptStreaming and the chatStreaming helper (or method) in your test, drive the mocked reads (chunk1, chunk2, done) and assert the final assembled message and detection flags to cover the streaming path.

packages/web/src/Types.ts (1)
33-85: Several "common" enums appear unused in the web SDK.

`SummarizeInputType`, `SummarizeOutputType`, `RewriteOutputType`, `ProofreadInputType`, `CapabilityLevel`, `FeatureStatus`, and `ExecutionState` are declared as "common — from type.graphql" but don't appear to be referenced by the web SDK's `Locanara.ts` implementation. They may exist for API parity with the native libraries — if so, consider adding a brief comment to that effect to clarify intent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/src/Types.ts` around lines 33 - 85, These enums (SummarizeInputType, SummarizeOutputType, RewriteOutputType, ProofreadInputType, CapabilityLevel, FeatureStatus, ExecutionState) are unused in the web SDK but mirror types from common/type.graphql; update packages/web/src/Types.ts to add a brief top-of-file or per-enum comment indicating they are intentionally retained for API parity with native libraries (or centralized schema) so future maintainers know they are kept on purpose rather than dead code, referencing the enum names above to document intent.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/guides/09-expo-ondevice-ai.md:
- Line 7: Update the incorrect feature count text "all 7 AI features" to "all 8
AI features" so it matches the mapping table and PR description; locate the
phrase "all 7 AI features" in the Expo module summary and change it to "all 8 AI
features", and double-check the mapping table rows (summarize, classify,
extract, chat, chatStream, translate, rewrite, proofread) remain unchanged to
ensure consistency.
In @.claude/guides/09-platform-differences.md:
- Around line 260-265: The docs have inconsistent Chrome minimums; standardize
to a single minimum (use Chrome 140 as the canonical minimum) by updating the
"### Web" requirements table entry that currently lists "Chrome 138" to "Chrome
140" and updating any other occurrences that say "Chrome 138+" (e.g., the other
mentions around the Web section) so they match the example app's "Chrome 140+"
string; ensure the table's Minimum column and all example/setup text use the
exact same phrasing "Chrome 140" or "Chrome 140+" consistently.
In `@libraries/expo-ondevice-ai/example/components/shared/AIStatusBanner.tsx`:
- Around line 53-57: The loading title currently uses Platform.OS === 'web' to
choose between web and Apple wording inside the AIStatusBanner component; update
that conditional to distinguish iOS from other native platforms so Android
doesn't show "Checking Apple Intelligence...". Replace the ternary so that
Platform.OS === 'web' keeps the web label, Platform.OS === 'ios' shows "Checking
Apple Intelligence...", and the fallback (else) shows an Android-appropriate
label such as "Checking Android Intelligence..." (locate the Text node rendering
styles.title in AIStatusBanner to change the conditional).
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 235-254: The cachedLanguageModel is reused across chats so
subsequent calls ignore a new options.systemPrompt; update chat (and chatStream)
to either recreate the model when a different systemPrompt is provided or always
create a fresh session like classify/extract do—specifically, track the
lastSystemPrompt (or compare options.systemPrompt) before reusing
cachedLanguageModel and if it differs (including one is undefined and the other
not) invalidate/replace cachedLanguageModel by calling lm.create({
initialPrompts: ... }) so each call honors the provided systemPrompt; apply the
same logic to the chatStream implementation to prevent silent ignoring of
changed system prompts.
- Around line 60-78: The catch in checkAvailability currently falls back to
hasAPI(api) which treats timeouts/errors as "available"; change the catch to
mark the API unavailable instead (return false) so an availability() rejection
or timeout won't report a feature as supported, and if you need a configurable
lenient behavior add an explicit option/flag to checkAvailability to
conditionally fall back to hasAPI(api) rather than doing so unconditionally.
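The fail-closed behavior suggested above can be sketched as a small wrapper; `checkAvailabilitySafe`, the probe callback, and the timeout default are all illustrative, not the module's actual API:

```typescript
// Sketch of a fail-closed availability probe. Assumes the Chrome Built-in AI
// convention of availability() resolving to a status string; errors and
// timeouts are reported as "not supported", never "available".
async function checkAvailabilitySafe(
  probe: () => Promise<string>,
  timeoutMs = 3000,
): Promise<boolean> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error('availability timed out')), timeoutMs);
  });
  try {
    const status = await Promise.race([probe(), timeout]);
    // Only report support for states that are usable now or fetchable.
    return status === 'available' || status === 'downloadable';
  } catch {
    return false; // fail closed instead of falling back to hasAPI(api)
  } finally {
    if (timer !== undefined) clearTimeout(timer);
  }
}
```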
- Around line 108-122: The returned capability object in
ExpoOndeviceAiModule.web.ts is incorrectly hardcoding platform: 'IOS' — update
the DeviceCapability type (the platform union in the shared ./types
DeviceCapability) to include 'WEB' and change the return to use platform: 'WEB'
in the capability object (the function returning { isSupported, isModelReady,
platform, features }). If you cannot change the shared type, use a clearly named
fallback like 'WEB' via a local cast to satisfy TypeScript (or add 'WEB' to the
DeviceCapability union) so consumers reading capability.platform from
ExpoOndeviceAiModule.web.ts see the correct 'WEB' value instead of 'IOS'.
In `@packages/web/example/index.html`:
- Around line 924-926: Update the Chrome minimum version text in the setup step
to match the other docs: change the content of the setup-step-title div that
currently reads "1. Use Chrome 140+ (Latest Stable recommended)" to use the same
minimum as the other docs (e.g., "Chrome 138+"), and ensure any linked or
adjacent copy (the setup-step-desc or similar nearby text) does not contradict
this; make the change in the element with class "setup-step-title" so it matches
09-platform-differences.md and the Expo guide.
- Around line 980-996: Update the external GitHub anchor link href to the
correct organization: locate the <a> element that wraps the settings-item (the
anchor with target="_blank" rel="noopener noreferrer" and the GitHub icon) and
change its href from "https://github.com/locanara/locanara" to
"https://github.com/hyodotdev/locanara".
In `@packages/web/example/main.ts`:
- Line 959: The catch block currently injects (error as Error).message directly
into innerHTML; change it to escape the message before inserting by calling the
existing escapeHtml function (or use element.textContent) so no unescaped HTML
is written; locate the catch handling where (error as Error).message is used and
replace the direct innerHTML assignment with one that wraps the message with
escapeHtml(error.message) (or assigns to textContent) and keep the surrounding
error formatting/logging unchanged.
- Around line 809-822: The copyToClipboard function calls
navigator.clipboard.writeText without handling rejections, which may cause
unhandled promise rejections; update copyToClipboard so the writeText promise
has a .catch handler (or use async/await with try/catch) that logs the error and
provides user feedback on failure, and ensure the UI state changes (the green
"Copied!" swap performed in copyToClipboard) only occur after a successful
write; optionally implement a fallback (e.g., document.execCommand or selecting
a hidden textarea) inside the catch to attempt copying in insecure contexts and
restore element styles/text on failure.
- Around line 660-674: The example breaks because locanara.chatStreaming yields
cumulative strings from promptStreaming in some Chrome versions; update the SDK
by normalizing inside Locanara.chatStreaming: keep a per-stream previousText
buffer, and for each incoming chunk compare it to previousText—if the chunk
startsWith previousText yield chunk.slice(previousText.length) (the delta) and
set previousText = chunk; if it doesn't (e.g., shorter or unrelated), treat it
as a reset and yield the full chunk and update previousText; ensure this logic
runs before yielding any chunk to callers so consumers (and the example's
response += chunk) receive consistent delta-only chunks.
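The normalization described above reduces to a prefix check per chunk. A synchronous sketch of the logic (the real implementation would live inside the async reader loop; `toDeltas` is a hypothetical name):

```typescript
// Normalize a stream that may emit either cumulative text or plain deltas.
// If a chunk extends the previously seen text, emit only the new suffix;
// otherwise treat it as a fresh delta/reset and emit it whole.
function* toDeltas(chunks: Iterable<string>): Generator<string> {
  let previousText = '';
  for (const chunk of chunks) {
    if (chunk.startsWith(previousText)) {
      const delta = chunk.slice(previousText.length);
      previousText = chunk;
      if (delta.length > 0) yield delta; // skip empty duplicates
    } else {
      previousText = chunk;
      yield chunk;
    }
  }
}
```

With this in place, consumers doing `response += chunk` see identical output whether Chrome streamed cumulative snapshots or plain deltas.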
In `@packages/web/package.json`:
- Around line 18-21: The package.json publishConfig currently sets "registry" to
"https://registry.npmjs.org/" which conflicts with the README .npmrc
instructions that point `@locanara` to GitHub Packages; update
publishConfig.registry to "https://npm.pkg.github.com" to match the README (or
alternatively update the README .npmrc section to document npmjs.org instead)
and ensure the publishConfig block and README .npmrc guidance are consistent;
check the "publishConfig" object in package.json and the README .npmrc section
and align them to the chosen registry.
- Line 2: The package.json "name" field is incorrect ("locanara") and must be
changed to the scoped package name used across the repo; update the "name" value
to "@locanara/web" so imports, Vite alias, and README install instructions
resolve correctly (replace the existing "locanara" string in package.json with
"@locanara/web").
In `@packages/web/README.md`:
- Line 7: Update the Chrome requirement in the README where the line reads
"**Chrome 131+** with Built-in AI enabled" so it matches the PR test plan
(change to "**Chrome 138+** with Built-in AI enabled"); ensure the same version
string is used wherever the README references Chrome minimum version to keep
documentation consistent.
- Line 41: Update the inline comment for the call to
window.LanguageModel?.availability() to reflect the correct expected return
values per current Chrome docs; replace the incorrect "'readily'" with
"'available'" (or mention the valid set: "unavailable", "downloadable",
"downloading", or "available") so the comment accurately describes
LanguageModel.availability().
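Per the current Chrome Built-in AI docs, `availability()` resolves to one of four states. A small sketch capturing that set (the helper functions are illustrative, not part of the SDK):

```typescript
// Availability states documented for Chrome Built-in AI APIs such as
// LanguageModel.availability(); note that 'readily' is not among them.
type Availability = 'unavailable' | 'downloadable' | 'downloading' | 'available';

// Hypothetical convenience helpers for interpreting the state.
function canPromptNow(status: Availability): boolean {
  return status === 'available';
}

function needsDownload(status: Availability): boolean {
  return status === 'downloadable' || status === 'downloading';
}
```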
In `@packages/web/scripts/build.mjs`:
- Around line 28-43: The minify call can return a result with result.code ===
undefined which would write the literal "undefined" to dist/index.js; update the
build step to guard the minify result (the result variable from minify) before
calling fs.writeFileSync: check that result and result.code are defined, and if
not, surface a clear error (throw or processLogger.error + process.exit(1)) or
fall back to the original code string, then only call
fs.writeFileSync("dist/index.js", result.code) when the code is valid; reference
the minify invocation, the result variable, and the fs.writeFileSync call when
locating where to add the guard.
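The guard can be as small as a helper that validates the minifier result before writing; the name `ensureMinified` and the fall-back-to-source behavior are illustrative (the build script might prefer to fail hard instead):

```typescript
// Guard against writing the literal string "undefined" to dist/index.js when
// the minifier returns a result without code. Falls back to the unminified
// source here; throwing to fail the build fast is an equally valid choice.
function ensureMinified(
  result: { code?: string } | undefined,
  originalCode: string,
): string {
  if (result === undefined || result.code === undefined) {
    console.warn('minify produced no output; writing unminified code instead');
    return originalCode;
  }
  return result.code;
}
```

The build script would then call `fs.writeFileSync("dist/index.js", ensureMinified(result, code))` instead of writing `result.code` directly.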
In `@packages/web/src/Errors.ts`:
- Around line 15-24: Add the "use strict"; directive as the very first statement
in the file (above the LocanaraError class) to enable strict mode; update the
top of the file so the string literal "use strict"; appears before the
declaration of class LocanaraError (and before any imports or other code), and
do not add the Object.setPrototypeOf workaround because tsconfig targets ES2022
which handles Error subclassing correctly; keep the existing constructor,
this.name, this.code, and this.details assignments unchanged.
In `@packages/web/src/Locanara.ts`:
- Around line 488-523: The chat session keeps reusing the first-created session
stored in this._languageModel, so subsequent calls to chat() or chatStreaming()
ignore new options (systemPrompt, temperature, topK); update the logic in
chat()/chatStreaming() to detect when the incoming options differ from the
current session (compare systemPrompt, temperature, topK and initialPrompts) and
recreate the session via lmAPI.create(...) (or call resetChat() then recreate)
when they change, ensuring you pass the new initialPrompts and monitor from
createMonitor(); alternatively, explicitly document that options only apply on
first creation and require callers to invoke resetChat() to apply new options.
- Around line 556-572: The chat streaming loop currently yields raw chunks from
this._languageModel.promptStreaming(message) which can be cumulative on some
Chrome versions; modify the reader loop in the chatStreaming method to normalize
to deltas by tracking the last emitted string (e.g., lastEmitted), and for each
chunk: if it's a string and startsWith(lastEmitted) emit only the suffix (update
lastEmitted), if it's a string but not a prefix (or shorter) treat it as a full
new delta and emit + set lastEmitted to it, and if the chunk is non-string
(object/bytes) yield it unchanged; keep reader.releaseLock() in the finally
block and ensure empty or duplicate suffixes are not yielded.
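The cumulative-to-delta normalization described above can be sketched as a small closure. This is an illustrative sketch, not the PR's code: `makeDeltaNormalizer` and `lastEmitted` are assumed names.

```typescript
// Normalize possibly-cumulative streaming chunks into deltas.
function makeDeltaNormalizer(): (chunk: string) => string {
  let lastEmitted = "";
  return (chunk: string): string => {
    if (chunk.startsWith(lastEmitted)) {
      // Cumulative stream: emit only the newly appended suffix
      // (empty string for exact duplicates; callers should skip empties).
      const delta = chunk.slice(lastEmitted.length);
      lastEmitted = chunk;
      return delta;
    }
    // Not a prefix extension: treat the chunk as a fresh delta.
    lastEmitted = chunk;
    return chunk;
  };
}
```

A cumulative stream `"He"`, `"Hello"`, `"Hello wor"` yields `"He"`, `"llo"`, `" wor"`, while an already-delta stream passes through unchanged.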
In `@packages/web/vite.config.ts`:
- Around line 14-16: The alias currently builds a path using new
URL('./src/index.ts', import.meta.url).pathname which yields an invalid Windows
path; replace that usage by converting the file URL to a filesystem path via
fileURLToPath(import.meta.url) (i.e., import { fileURLToPath } from 'url') and
use fileURLToPath(new URL('./src/index.ts', import.meta.url)) for the
'@locanara/web' alias so Vite receives a proper platform-native path; update the
import list and the alias expression in vite.config.ts accordingly.
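The `fileURLToPath` conversion above can be sketched as follows. In the real `vite.config.ts` the base would be `import.meta.url`; here it is a parameter so the behavior is visible outside the config file.

```typescript
import { fileURLToPath } from "node:url";

// Resolve a source entry relative to a config file URL into a
// platform-native filesystem path (handles Windows drive letters).
function resolveAliasTarget(configFileUrl: string): string {
  return fileURLToPath(new URL("./src/index.ts", configFileUrl));
}
```

Using `new URL(...).pathname` directly would yield `/C:/project/src/index.ts` on Windows; `fileURLToPath` produces a proper native path.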
---
Nitpick comments:
In @.github/workflows/ci-expo.yml:
- Around line 60-170: The build jobs (build-android, build-ios, build-web)
should depend on lint-and-test and reuse dependency caches: add needs:
lint-and-test to each job and insert actions/cache@v4 steps before Bun install
to cache Bun deps for both libraries/expo-ondevice-ai and
libraries/expo-ondevice-ai/example (use keys that include
hashFiles('libraries/expo-ondevice-ai/bun.lockb') and
hashFiles('libraries/expo-ondevice-ai/example/bun.lockb') or similar), restoring
before running bun install; leave the existing CocoaPods cache step in build-ios
as-is and ensure the Bun cache runs before its bun install steps in that job as
well.
In @.github/workflows/ci-react-native.yml:
- Around line 65-147: Add the same safety and caching improvements to the
build-android and build-ios jobs: make both jobs depend on the lint-and-test job
by adding needs: lint-and-test to the build-android and build-ios job
definitions (so they don’t run if tests/lint fail), and add Bun dependency
caching around the Bun install steps (use actions/cache keyed on
libraries/react-native-ondevice-ai and the example’s lockfile or bun.lockb)
similar to the changes suggested for ci-expo.yml; update the Install library
dependencies and Install example dependencies steps in both jobs to restore/save
the Bun cache before/after bun install.
In @.github/workflows/ci-web.yml:
- Around line 26-32: Enable npm caching in the GitHub Actions setup-node step to
speed CI: update the existing actions/setup-node@v4 step (the "Setup Node.js"
job) to enable caching by adding the cache: 'npm' option and set
cache-dependency-path to your lockfile (e.g., package-lock.json or package.json)
so subsequent runs restore node_modules automatically; keep the "Install
dependencies" step (npm install) but rely on the setup-node cache settings to
populate the cache.
In `@libraries/expo-ondevice-ai/example/components/AppState.tsx`:
- Around line 244-255: platformLabel currently falls through to 'Android' for
any unrecognized Platform.OS; update the logic in the component that computes
platformLabel (and related providerLabel) to explicitly handle known values
('ios', 'android', 'web', 'macos', 'windows') instead of using a final else that
assumes Android—e.g., replace the nested ternary with a switch or explicit
if/else branches that return 'iOS', 'Android', 'Web', 'macOS', 'Windows' (or a
clear 'Unknown' fallback) and adjust providerLabel to reference cap.platform
appropriately (checking 'IOS' vs other providers) so macOS/windows aren't
mislabeled as Android.
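The explicit-branching suggestion above can be sketched like this. The real component reads `Platform.OS` from react-native; it is modeled here as a plain string for illustration.

```typescript
// Map known Platform.OS values to display labels; no silent Android fallback.
function platformLabel(os: string): string {
  switch (os) {
    case "ios":
      return "iOS";
    case "android":
      return "Android";
    case "web":
      return "Web";
    case "macos":
      return "macOS";
    case "windows":
      return "Windows";
    default:
      return "Unknown";
  }
}
```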
In
`@libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ChatDemo/index.tsx`:
- Line 311: The inline value-level cast "'none' as any" in the spread expression
for the Platform.OS === 'web' branch should be replaced with an object-level,
narrow cast to avoid leaking any; update the expression "(Platform.OS === 'web'
? {outlineStyle: 'none' as any} : {})" to use a single object cast such as
"(Platform.OS === 'web' ? ({ outlineStyle: 'none' } as Partial<TextStyle>) :
{})" (or "as unknown as TextStyle" if TextStyle isn't imported) so the any is
scoped to the whole object and not the string value.
In
`@libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ExtractDemo.tsx`:
- Line 29: The color map adds an "organization" entry but the extraction
request's entityTypes array (the entityTypes option in the ExtractDemo
component) doesn't include 'organization', so the color is never used; update
the entityTypes option array to include 'organization' (i.e., add 'organization'
to the array referenced by the entityTypes variable/prop in ExtractDemo) or, if
intentional, add a one-line comment next to the entityTypes declaration
explaining this is defensive and that 'organization' is included only for
potential spontaneous model outputs.
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 162-181: The classify (and similarly extract) method creates a
fresh LanguageModel session via lm.create({}) on every call which is wasteful;
modify ExpoOndeviceAiModule.web to reuse a single plain session: add a
module-scoped cachedSession initialized lazily from
getLanguageModelAPI()/lm.create({}) and have classify and extract use
cachedSession.prompt(...) (destroy only when module unloads or via an explicit
shutdown), or alternatively add a clear comment explaining why per-call creation
is required; references: classify, extract, getLanguageModelAPI, lm.create,
session, and session.destroy.
- Around line 27-33: Add a public reset/destroy function (e.g., export function
resetCachedInstances() or destroyCachedInstances()) that releases and clears the
module-level cached instances: for cachedSummarizer, cachedLanguageModel,
cachedRewriter, cachedWriter call any available cleanup method on the instance
(close(), dispose(), destroy(), or similar) if present, then set them to null;
for cachedTranslators iterate the Map, call each translator's cleanup method if
present, then clear() the Map and reset cachedSummarizerKey to ''. Export this
function from ExpoOndeviceAiModule.web.ts and ensure callers can invoke it to
free resources in long‑lived SPA contexts.
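A minimal sketch of the suggested cleanup export follows. The cached-variable names mirror those in the review; the `Disposable` shape and the seeded values are assumptions for illustration only.

```typescript
type Disposable = { destroy?: () => void };

// Module-level caches (illustrative stand-ins for the real instances).
let cachedSummarizer: Disposable | null = { destroy: () => {} };
let cachedLanguageModel: Disposable | null = null;
const cachedTranslators = new Map<string, Disposable>([["en-fr", {}]]);
let cachedSummarizerKey = "short:plain-text";

// Release every cached instance and reset the cache state.
function resetCachedInstances(): void {
  cachedSummarizer?.destroy?.();
  cachedLanguageModel?.destroy?.();
  cachedSummarizer = null;
  cachedLanguageModel = null;
  for (const translator of cachedTranslators.values()) {
    translator.destroy?.();
  }
  cachedTranslators.clear();
  cachedSummarizerKey = "";
}
```

Long-lived SPAs could call this on teardown (for example, in a route unmount or `beforeunload` handler) to free model memory.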
In `@packages/web/biome.json`:
- Line 14: The project-level Biome config currently sets "noNonNullAssertion":
"off" which mutes all non-null assertion warnings; change the
"noNonNullAssertion" setting in the biome.json config from "off" to "warn" so
non-null assertions (`!`) continue to surface as warnings (not errors) across
the codebase, preserving visibility without breaking builds.
In `@packages/web/example/main.ts`:
- Around line 447-457: The code that builds featureId in the loop over
capability.availableFeatures uses feature.toLowerCase().replace('_', '') which
only removes the first underscore; update the featureId generation (in the for
loop that defines featureId from f.feature) to remove all underscores (e.g., use
a global replace or replaceAll) so multi-underscore enum values like
DESCRIBE_IMAGE_V2 are normalized correctly while preserving the existing
toLowerCase and subsequent assignments to featureAvailability, hasAnyAvailable,
and hasDownloadable.
- Around line 120-151: The markdownToHtml function currently escapes input via
escapeHtml then reinserts HTML via regex replacements which is fragile and can
reintroduce XSS when patterns change; fix by either (A) replacing this
hand-rolled logic with a vetted markdown-to-HTML library and then sanitize the
resulting HTML (e.g., run through DOMPurify) before assigning to innerHTML, or
(B) rewrite markdownToHtml so it parses tokens first, escapes only plain text
nodes with escapeHtml and then constructs HTML nodes (never performing regex
replacements that inject raw capture groups), referencing the markdownToHtml and
escapeHtml functions and the site code path that sets innerHTML. Ensure the
final HTML is sanitized before any innerHTML assignment.
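Option (B) above, tokenize first and escape only text nodes, can be sketched for a single construct (inline code spans). This is a minimal illustration, not the full `markdownToHtml` rewrite; every emitted fragment passes through `escapeHtml` before any markup is wrapped around it.

```typescript
// Escape the characters HTML treats specially.
function escapeHtml(s: string): string {
  return s
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Split on backtick-delimited code spans; escape every piece, then wrap.
// Raw input never reaches the output unescaped.
function inlineCodeToHtml(text: string): string {
  return text
    .split(/(`[^`]*`)/)
    .map((tok) =>
      tok.startsWith("`") && tok.endsWith("`") && tok.length >= 2
        ? `<code>${escapeHtml(tok.slice(1, -1))}</code>`
        : escapeHtml(tok),
    )
    .join("");
}
```

Extending this token-then-escape approach to the other markdown constructs avoids regex replacements that reinject raw capture groups.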
In `@packages/web/src/Locanara.ts`:
- Around line 892-896: The code uses a non-null assertion for
canvas.getContext("2d") which could return null; update the creation of ctx to
defensively handle that case by checking the return value of
canvas.getContext("2d") (the ctx variable) and either throwing a clear error or
returning early if ctx is null, so downstream code using ctx (created from the
canvas variable) never assumes non-null; locate the canvas, canvas.width/height
and ctx assignment in the same block and add the null-check and proper error
handling.
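The defensive check above can be sketched structurally, so the null path is testable outside a browser. `get2dContext` is an illustrative helper name, not code from `Locanara.ts`.

```typescript
// Acquire a 2D context or fail loudly; getContext("2d") can return null
// (e.g. headless environments or exhausted context limits).
function get2dContext<T>(canvas: { getContext(id: "2d"): T | null }): T {
  const ctx = canvas.getContext("2d");
  if (ctx === null) {
    throw new Error("2D canvas context unavailable");
  }
  return ctx;
}
```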
In `@packages/web/src/Types.ts`:
- Around line 33-85: These enums (SummarizeInputType, SummarizeOutputType,
RewriteOutputType, ProofreadInputType, CapabilityLevel, FeatureStatus,
ExecutionState) are unused in the web SDK but mirror types from
common/type.graphql; update packages/web/src/Types.ts to add a brief top-of-file
or per-enum comment indicating they are intentionally retained for API parity
with native libraries (or centralized schema) so future maintainers know they
are kept on purpose rather than dead code, referencing the enum names above to
document intent.
In `@packages/web/tests/Locanara.test.ts`:
- Around line 37-52: The mockLanguageModelSession.promptStreaming currently
returns a reader but no test exercises chatStreaming; add a new unit test that
uses the existing mockLanguageModelSession (or adjust it to expose a true
ReadableStream behavior) and calls the chatStreaming function exported/used by
the component to consume the stream, reading chunks from getReader().read() and
asserting the component's cumulative vs delta auto-detection and chunk-assembly
logic; specifically reference mockLanguageModelSession.promptStreaming and the
chatStreaming helper (or method) in your test, drive the mocked reads (chunk1,
chunk2, done) and assert the final assembled message and detection flags to
cover the streaming path.
- Add 'WEB' to Platform type union (both expo and react-native libraries)
- Fix platform: 'IOS' → 'WEB' in web module getDeviceCapability
- Fix systemPrompt being ignored after first chat/chatStream call
- Fix AIStatusBanner loading text for Android ("Gemini Nano" not "Apple Intelligence")
- Fix "7 AI features" → "8 AI features" in Claude guide
- Fix Chrome version inconsistency (140 → 138) in web example
Actionable comments posted: 2
🧹 Nitpick comments (4)
packages/web/example/index.html (2)
1075-1078: Redundant `gap` inline style — already set by `.controls`. The `.controls` rule in the stylesheet already defines `gap: 0.5rem`; the `style="gap: 0.5rem;"` on line 1075 is redundant.

🧹 Proposed fix

```diff
- <div class="controls" style="gap: 0.5rem;">
+ <div class="controls">
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/example/index.html` around lines 1075 - 1078, Remove the redundant inline gap from the div element that uses class "controls": edit the element with class="controls" (the container that currently contains the buttons with ids "chat-btn" and "chat-reset-btn") and delete the style fragment "gap: 0.5rem;" so the layout relies on the existing .controls CSS rule; leave the other inline styles (e.g., margin, flex) unchanged.
1001-1003: Duplicate `<!-- Summarize -->` comment. Lines 1001 and 1002 both carry the same comment; one should be removed.

🧹 Proposed fix

```diff
- <!-- Summarize -->
  <!-- Summarize -->
  <div class="detail-page" id="page-summarize">
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/web/example/index.html` around lines 1001 - 1003, Remove the duplicate HTML comment above the detail page: keep a single <!-- Summarize --> and delete the redundant one so the block containing <div class="detail-page" id="page-summarize"> only has one comment; update the area around the <div class="detail-page" id="page-summarize"> to remove the extra comment occurrence.

libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts (2)
240-251: Duplicated `systemPrompt` cache-invalidation logic in `chat` and `chatStream`. The session creation/invalidation block (lines 240–251) is copy-pasted verbatim into `chatStream` (lines 274–285). Extracting it into a helper reduces the surface for future divergence.

♻️ Suggested helper extraction

```diff
+async function getOrCreateSession(
+  lm: any,
+  options?: ChatOptions,
+): Promise<any> {
+  const newSystemPrompt = options?.systemPrompt;
+  if (!cachedLanguageModel || newSystemPrompt !== cachedSystemPrompt) {
+    cachedLanguageModel?.destroy?.();
+    const initialPrompts: Array<{role: string; content: string}> = [];
+    if (newSystemPrompt) {
+      initialPrompts.push({role: 'system', content: newSystemPrompt});
+    }
+    cachedLanguageModel = await lm.create({
+      initialPrompts: initialPrompts.length > 0 ? initialPrompts : undefined,
+    });
+    cachedSystemPrompt = newSystemPrompt;
+  }
+  return cachedLanguageModel;
+}
```

Then both methods become:

```diff
-  const newSystemPrompt = options?.systemPrompt;
-  if (!cachedLanguageModel || newSystemPrompt !== cachedSystemPrompt) {
-    cachedLanguageModel?.destroy?.();
-    const initialPrompts: Array<{role: string; content: string}> = [];
-    if (newSystemPrompt) {
-      initialPrompts.push({role: 'system', content: newSystemPrompt});
-    }
-    cachedLanguageModel = await lm.create({
-      initialPrompts: initialPrompts.length > 0 ? initialPrompts : undefined,
-    });
-    cachedSystemPrompt = newSystemPrompt;
-  }
+  await getOrCreateSession(lm, options);
```

Also applies to: 274-285
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 240 - 251, The session creation/invalidation logic for cachedLanguageModel and cachedSystemPrompt is duplicated in chat and chatStream; extract that block into a small helper (e.g., ensureCachedLanguageModel or initCachedModel) that accepts lm and options.systemPrompt (or newSystemPrompt) and returns/updates cachedLanguageModel and cachedSystemPrompt as needed; replace the duplicated blocks in chat and chatStream with calls to this helper and ensure it calls cachedLanguageModel?.destroy?.() and sets cachedSystemPrompt and creates cachedLanguageModel via lm.create({ initialPrompts }) with the same initialPrompts logic.
356-368: `cachedRewriter` is never actually reused — cache key by tone+length like `cachedTranslators` does. Lines 357–361 unconditionally destroy and recreate the `Rewriter` instance on every call, so the `cachedRewriter` variable only holds the last instance for cleanup, never providing a cache hit. This contrasts with the `cachedTranslators` map (line 32) which correctly keys instances by `sourceLanguage-targetLanguage`. Consider the same approach here.

♻️ Proposed refactor

```diff
-let cachedRewriter: any = null;
+const cachedRewriters = new Map<string, any>();

   const toneMap: Record<string, string> = { ... };
   const lengthMap: Record<string, string> = { ... };
+  const key = `${toneMap[options.outputType] ?? 'as-is'}:${lengthMap[options.outputType] ?? 'as-is'}`;
-  cachedRewriter?.destroy();
-  cachedRewriter = await Rewriter.create({
-    tone: toneMap[options.outputType] ?? 'as-is',
-    length: lengthMap[options.outputType] ?? 'as-is',
-  });
+  if (!cachedRewriters.has(key)) {
+    cachedRewriters.set(key, await Rewriter.create({
+      tone: toneMap[options.outputType] ?? 'as-is',
+      length: lengthMap[options.outputType] ?? 'as-is',
+    }));
+  }
+  const cachedRewriter = cachedRewriters.get(key)!;
```
Verify each finding against the current code and only fix it if needed. In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 356 - 368, cachedRewriter is being destroyed and recreated on every call so it never provides cache hits; change this to a keyed cache like cachedTranslators: create a Map (e.g., cachedRewriters) keyed by `${tone}-${length}` using toneMap[options.outputType] and lengthMap[options.outputType], look up an existing Rewriter from the map before calling Rewriter.create, only call Rewriter.create when there is a cache miss, store the new instance in cachedRewriters, and update any cleanup logic that currently references cachedRewriter to iterate/clear the map as needed (use symbols cachedRewriter, Rewriter.create, toneMap, lengthMap, options.outputType, cachedTranslators to locate places to change).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 109-122: The features object incorrectly sets proofread to
hasWriter even though proofread() will use LanguageModel first; update the
features mapping so that proofread is true when either hasLanguageModel or
hasWriter is available (i.e., change proofread: hasWriter to proofread:
hasLanguageModel || hasWriter) to match the behavior in the proofread()
implementation.
In `@packages/web/example/index.html`:
- Line 915: The interactive elements (the div with class "setup-guide-header"
that calls toggleSetupGuide() and the code elements that call
copyToClipboard(...)) are not keyboard-accessible; replace the <div
class="setup-guide-header" onclick="toggleSetupGuide()"> with a <button> (or add
tabindex="0", role="button" and an onkeydown handler that triggers
toggleSetupGuide on Enter/Space) and convert each <code
onclick="copyToClipboard(...)"> used as a copy affordance into a semantic
<button> (or similarly add tabindex/role/onkeydown) so keyboard users can focus
and activate them; update the CSS for the .copy-code / setup-guide-header button
styles to reset default button styling so appearance is unchanged while behavior
is accessible.
---
Duplicate comments:
In `@libraries/expo-ondevice-ai/example/components/shared/AIStatusBanner.tsx`:
- Around line 53-59: The Platform.OS three-way conditional in AIStatusBanner's
Text (styles.title) now correctly renders platform-specific loading titles
('Checking Chrome Built-in AI...' for web, 'Checking Apple Intelligence...' for
ios, and 'Checking Gemini Nano...' for android); no further changes
required—keep the conditional in the AIStatusBanner component as implemented and
ensure the Text with styles.title retains those exact strings.
In `@packages/web/example/index.html`:
- Around line 980-996: The GitHub link's anchor element (the <a> with
href="https://github.com/locanara/locanara" that wraps the settings-item whose
settings-text is "GitHub") uses the wrong repo URL; update that anchor's href to
"https://github.com/hyodotdev/locanara" so the GitHub settings-item points to
the correct repository.
---
Nitpick comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 240-251: The session creation/invalidation logic for
cachedLanguageModel and cachedSystemPrompt is duplicated in chat and chatStream;
extract that block into a small helper (e.g., ensureCachedLanguageModel or
initCachedModel) that accepts lm and options.systemPrompt (or newSystemPrompt)
and returns/updates cachedLanguageModel and cachedSystemPrompt as needed;
replace the duplicated blocks in chat and chatStream with calls to this helper
and ensure it calls cachedLanguageModel?.destroy?.() and sets cachedSystemPrompt
and creates cachedLanguageModel via lm.create({ initialPrompts }) with the same
initialPrompts logic.
- Around line 356-368: cachedRewriter is being destroyed and recreated on every
call so it never provides cache hits; change this to a keyed cache like
cachedTranslators: create a Map (e.g., cachedRewriters) keyed by
`${tone}-${length}` using toneMap[options.outputType] and
lengthMap[options.outputType], look up an existing Rewriter from the map before
calling Rewriter.create, only call Rewriter.create when there is a cache miss,
store the new instance in cachedRewriters, and update any cleanup logic that
currently references cachedRewriter to iterate/clear the map as needed (use
symbols cachedRewriter, Rewriter.create, toneMap, lengthMap, options.outputType,
cachedTranslators to locate places to change).
In `@packages/web/example/index.html`:
- Around line 1075-1078: Remove the redundant inline gap from the div element
that uses class "controls": edit the element with class="controls" (the
container that currently contains the buttons with ids "chat-btn" and
"chat-reset-btn") and delete the style fragment "gap: 0.5rem;" so the layout
relies on the existing .controls CSS rule; leave the other inline styles (e.g.,
margin, flex) unchanged.
- Around line 1001-1003: Remove the duplicate HTML comment above the detail
page: keep a single <!-- Summarize --> and delete the redundant one so the block
containing <div class="detail-page" id="page-summarize"> only has one comment;
update the area around the <div class="detail-page" id="page-summarize"> to
remove the extra comment occurrence.
- ci-expo: add expo prebuild step for ios/android (directories are gitignored), build library plugin first, remove unrelated packages/web trigger path
- ci-expo: disable llama.cpp bridge in CI to avoid complex SPM dependency chain (enableLlamaCpp: !process.env.CI)
- ci-react-native: fix CocoaPods infinite symlink loop by replacing bun's file: symlink with rsync copy excluding example/
- ci-web: use --legacy-peer-deps for npm install to handle workspace peer dependency conflict with react-helmet-async
- expo web: proofread feature check now includes LanguageModel (primary) in addition to Writer (fallback)
- packages/web: chat/chatStreaming now invalidate cached session when options change (systemPrompt, temperature, topK), matching the pattern used by summarize/translate/rewrite
- packages/web: resetChat clears options key
Actionable comments posted: 1
🧹 Nitpick comments (4)
.github/workflows/ci-expo.yml (1)
58-99: No dependency on `lint-and-test` and no Gradle caching. Two minor observations on the new build jobs:

- None of the three new jobs declare `needs: lint-and-test`, so they run in parallel even when lint/test fails — burning CI minutes on a known-bad commit. Consider adding `needs: lint-and-test` if you prefer fail-fast behavior.
- `build-android` has no Gradle dependency caching (unlike the CocoaPods cache in `build-ios`). Adding a cache for `~/.gradle/caches` would speed up repeat builds.

Both are optional improvements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-expo.yml around lines 58 - 99, Add a dependency on the lint-and-test job and enable Gradle caching for the build-android job: update the build-android job (named "build-android") to include needs: lint-and-test so it runs only after lint/test completes, and add an actions/cache step that caches the Gradle directory (e.g., path: ~/.gradle/caches) with a suitable key and restore-keys to speed repeated builds..github/workflows/ci-react-native.yml (3)
65-98: Add a Gradle dependency cache to avoid re-downloading on every run. The `build-android` job has no Gradle cache step. Gradle dependency downloads are multi-GB and dominate Android build time. The `lint-and-test` job already uses `actions/cache@v4` for Bun; apply the same pattern here for Gradle.

♻️ Proposed Gradle cache step (insert before "Build Android")

```diff
       - name: Install example dependencies
         working-directory: libraries/react-native-ondevice-ai/example
         run: bun install

+      - name: Cache Gradle dependencies
+        uses: actions/cache@v4
+        with:
+          path: |
+            ~/.gradle/caches
+            ~/.gradle/wrapper
+          key: ${{ runner.os }}-gradle-${{ hashFiles('libraries/react-native-ondevice-ai/example/android/**/*.gradle*', 'libraries/react-native-ondevice-ai/example/android/**/gradle-wrapper.properties') }}
+          restore-keys: |
+            ${{ runner.os }}-gradle-

       - name: Build Android
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-react-native.yml around lines 65 - 98, The build-android job lacks a Gradle dependency cache, causing large re-downloads each run; add an actions/cache@v4 step (insert before the "Build Android" step in the build-android job) that caches Gradle artifacts (e.g., paths like ~/.gradle/caches and ~/.gradle/wrapper) and uses a key based on runner OS and the Gradle wrapper checksum (or gradle version) to restore/update the cache so subsequent runs reuse downloaded dependencies for the Build Android step.
83-94: New build jobs are missing a Bun install cache. Both `build-android` and `build-ios` call `bun install` twice (library + example) without any cache, unlike `lint-and-test` which caches `~/.bun/install/cache`. Adding the same cache step prevents redundant network fetches.

♻️ Proposed Bun cache step (insert before "Install library dependencies" in each job)

```diff
+      - name: Cache Bun dependencies
+        uses: actions/cache@v4
+        with:
+          path: ~/.bun/install/cache
+          key: ${{ runner.os }}-bun-rn-${{ hashFiles('libraries/react-native-ondevice-ai/bun.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-bun-rn-

       - name: Install library dependencies
```

Also applies to: 112-123
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-react-native.yml around lines 83 - 94, The CI jobs build-android and build-ios run two bun install steps ("Install library dependencies" and "Install example dependencies") without a Bun cache; add a cache step that saves and restores ~/.bun/install/cache before the first bun install in each job to avoid redundant network fetches and speed runs. Insert a step named like "Cache Bun" (restore-key: path: ~/.bun/install/cache and save-key likewise) immediately before the "Install library dependencies" step in both jobs so both subsequent bun install runs reuse the cached artifacts.
100-103: Pin `macos-latest` to a specific macOS version for reproducible iOS builds.
`macos-latest` is a floating label that GitHub periodically re-points to new OS/Xcode versions. For example, it migrated from macOS 14 to macOS 15 in August–September 2025. A major version bump can break CocoaPods resolution or xcodebuild invocations without any change in this repo. Pinning (e.g. `macos-15`) keeps the build stable until an intentional upgrade.

```diff
-    runs-on: macos-latest
+    runs-on: macos-15
```
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-react-native.yml around lines 100 - 103, The build job "build-ios" currently uses the floating runner label `runs-on: macos-latest`; change it to a pinned macOS image (for example `runs-on: macos-15`) to ensure reproducible iOS builds—update the `build-ios` job's `runs-on` value from `macos-latest` to the specific macOS version you want to lock (e.g., `macos-15`) so the workflow uses a stable Xcode/OS combination until you intentionally upgrade.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/ci-expo.yml:
- Around line 174-184: The build-web job is missing the library build step; add
a step that runs "bun run build" in the libraries/expo-ondevice-ai
working-directory (same as in build-android and build-ios) before the example
dependencies install step so the library's build output (e.g., build/index.js)
exists for the example; ensure the step uses working-directory
"libraries/expo-ondevice-ai" and is placed immediately before the existing
"Install example dependencies" step.
---
Nitpick comments:
In @.github/workflows/ci-expo.yml:
- Around line 58-99: Add a dependency on the lint-and-test job and enable Gradle
caching for the build-android job: update the build-android job (named
"build-android") to include needs: lint-and-test so it runs only after lint/test
completes, and add an actions/cache step that caches the Gradle directory (e.g.,
path: ~/.gradle/caches) with a suitable key and restore-keys to speed repeated
builds.
In @.github/workflows/ci-react-native.yml:
- Around line 65-98: The build-android job lacks a Gradle dependency cache,
causing large re-downloads each run; add an actions/cache@v4 step (insert before
the "Build Android" step in the build-android job) that caches Gradle artifacts
(e.g., paths like ~/.gradle/caches and ~/.gradle/wrapper) and uses a key based
on runner OS and the Gradle wrapper checksum (or gradle version) to
restore/update the cache so subsequent runs reuse downloaded dependencies for
the Build Android step.
- Around line 83-94: The CI jobs build-android and build-ios run two bun install
steps ("Install library dependencies" and "Install example dependencies")
without a Bun cache; add a cache step that saves and restores
~/.bun/install/cache before the first bun install in each job to avoid redundant
network fetches and speed runs. Insert a step named like "Cache Bun"
(restore-key: path: ~/.bun/install/cache and save-key likewise) immediately
before the "Install library dependencies" step in both jobs so both subsequent
bun install runs reuse the cached artifacts.
- Around line 100-103: The build job "build-ios" currently uses the floating
runner label `runs-on: macos-latest`; change it to a pinned macOS image (for
example `runs-on: macos-15`) to ensure reproducible iOS builds—update the
`build-ios` job's `runs-on` value from `macos-latest` to the specific macOS
version you want to lock (e.g., `macos-15`) so the workflow uses a stable
Xcode/OS combination until you intentionally upgrade.
Actionable comments posted: 8
🧹 Nitpick comments (1)
libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts (1)
340-368: `rewrite` destroys and recreates the `Rewriter` on every call — consider caching by options key. Unlike the `summarize` and chat methods in this same file (and unlike `Locanara.ts`'s rewrite), the rewriter instance is unconditionally torn down and rebuilt on each invocation. The Chrome docs note that each session consumes memory and that reuse is preferred.

♻️ Suggested fix using an options key (matching the summarizer pattern above)

```diff
+let cachedRewriterKey: string = ''; // (move cachedRewriter to have a companion key)

   async rewrite(text: string, options: RewriteOptions): Promise<RewriteResult> {
     const Rewriter = (globalThis as any).Rewriter;
     if (!Rewriter) throw new Error('Rewriter API not available in this browser');
+    const optionsKey = `${options.outputType}`;
-    cachedRewriter?.destroy();
-    cachedRewriter = await Rewriter.create({
+    if (!cachedRewriter || cachedRewriterKey !== optionsKey) {
+      cachedRewriter?.destroy();
+      cachedRewriter = await Rewriter.create({
         tone: toneMap[options.outputType] ?? 'as-is',
         length: lengthMap[options.outputType] ?? 'as-is',
       });
+      cachedRewriterKey = optionsKey;
+    }
```
Verify each finding against the current code and only fix it if needed. In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts` around lines 340 - 368, The rewrite method currently destroys and recreates cachedRewriter on every call; change it to reuse instances keyed by the computed options so we only create a new Rewriter when the mapping (tone/length) changes. Compute the key from options.outputType (using the same toneMap/lengthMap logic), check a map of cached rewriters for that key, and only call Rewriter.create (and destroy the previous instance for that key when replacing) when the key differs; update cachedRewriter usage in rewrite to pull from that cache and avoid unconditional cachedRewriter?.destroy() before creating a new one.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 32-34: Add lifecycle cleanup: implement and export a destroy()
function that clears cachedTranslators (Map.clear()), nulls cachedWriter and
cachedRewriter, and closes/disposes any underlying translator/writer instances
held in the map (iterate cachedTranslators values to call their close/dispose
method if present). Additionally, to avoid unbounded growth, add a simple
LRU-cap or max-size eviction policy for cachedTranslators where you remove the
least-recently-used entry when inserting beyond the limit (update access order
on lookup/usage). Update usages of cachedTranslators, cachedWriter, and
cachedRewriter in ExpoOndeviceAiModule.web.ts to ensure they respect the LRU
policy and that destroy() is callable by consumers.
- Around line 321-322: The cache key creation uses a hard-coded 'en' fallback
for options.sourceLanguage which breaks non-English inputs; change this to
either require callers to pass sourceLanguage or perform language detection
before building the key and before calling Translator.create. Specifically,
remove the `${options.sourceLanguage ?? 'en'}` fallback, call the
LanguageDetector (or equivalent async detect function) on the input text when
options.sourceLanguage is undefined to obtain detectedSource, then use
`${detectedSource}-${options.targetLanguage}` for the cachedTranslators key and
pass detectedSource (or the explicit options.sourceLanguage) into
Translator.create so caching and translation use the actual source language.
- Around line 28-34: Replace the loose any annotations with the proper types
from the `@types/dom-chromium-ai` package: import the appropriate exported
interfaces/types for the Chrome AI objects (the Summarizer, LanguageModel,
Rewriter, Writer and Translator types) and use them for cachedSummarizer,
cachedLanguageModel, cachedRewriter, cachedWriter and the Map value type for
cachedTranslators (keep cachedSystemPrompt as string | undefined). Update the
variable declarations (cachedSummarizer, cachedLanguageModel, cachedRewriter,
cachedWriter, cachedTranslators) to use those imported types so the file is
strict-mode friendly and no longer relies on any.
In `@packages/web/src/Locanara.ts`:
- Around line 1047-1055: The writeStreaming path is creating and caching a
Writer without passing the format, which poisons the cache keyed by optionsKey
that includes format; update the Writer creation in the writeStreaming branch
(where this._writer is set via window.Writer.create) to include the mapped
format (use the same mapper used in write, e.g.
this.mapWriterFormat(options.format) or equivalent) alongside tone and length so
the created instance matches the optionsKey and prevents format-less reuse; keep
the existing this._writerOptionsKey assignment logic unchanged.
- Around line 840-869: proofread currently reuses the shared this._writer
(created by write/writeStreaming) which can carry tone/length/format state;
change proofread to always create a fresh, neutral Writer instance via
window.Writer.create({ monitor: this.createMonitor() }) instead of reusing
this._writer (or create a dedicated this._proofreader and use that), use that
instance to call write(...) and then close/destroy it, and update destroy() to
also cleanup the new proofreader instance (or ensure any temporary writer is
disposed) so state from write/writeStreaming cannot affect proofread's output;
keep existing error handling (LocanaraError.executionFailed) and context
handling the same.
- Around line 250-257: checkTranslatorAvailability currently checks typeof
translator === "object" which misses WebIDL globals exposed as function objects;
update checkTranslatorAvailability to detect function or object (e.g., typeof
translator === "function" || typeof translator === "object" or simply truthiness
with typeof !== "undefined") when inspecting window.Translator so it returns
FeatureAvailability.AVAILABLE when the API exists; adjust the logic in the
checkTranslatorAvailability function (referencing window.Translator and
FeatureAvailability.AVAILABLE) so getDeviceCapability will correctly report
TRANSLATE as available.
- Around line 878-944: In describeImage, the multimodal call uses the wrong
format: update the lmAPI.create(...) call (used to instantiate
ChromeLanguageModelSession) to include expectedInputs: [{ type: "image" }], and
change the session.prompt(...) call to send a single role entry whose content is
an array of typed items — e.g. one object with role "user" and content [{ type:
"image", value: imageBlob }, { type: "text", value: prompt }] — then keep
session.destroy() and error handling as-is.
---
Duplicate comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 61-79: The current behavior in checkAvailability intentionally
returns hasAPI(api) from the catch block to perform a lenient fallback when
obj.availability() times out or fails; leave the implementation of
checkAvailability (and its use of hasAPI(api)) as-is to preserve the designed
behavior that Chrome Built-in AI availability timeouts do not lock out features.
In `@packages/web/src/Locanara.ts`:
- Around line 576-592: The chatStreaming loop currently yields raw chunks from
this._languageModel.promptStreaming (reader.read) which can be cumulative;
change chatStreaming to normalize to deltas by tracking previousText: initialize
prev = "" before the loop, on each result.value coerce to string, if
value.startsWith(prev) treat delta = value.slice(prev.length) else treat delta =
value (fallback for non-cumulative chunks or resets), then set prev = value and
yield only the delta when non-empty; keep reader.releaseLock() in the finally
block and ensure this logic lives around the promptStreaming/read loop
(referencing promptStreaming, reader.read and chatStreaming).
- Around line 549-575: chatStreaming currently builds optionsKey without
options.initialPrompts and never applies options.initialPrompts when creating a
session, causing dropped prompts and unnecessary session recreation; update
chatStreaming to mirror chat by including options.initialPrompts in the
optionsKey (e.g. JSON.stringify(initialPrompts or options.initialPrompts) as
part of optionsKey), construct the initialPrompts array when
options.initialPrompts exists (same shape: { role, content }), and pass that
array into lmAPI.create via the initialPrompts parameter when creating
_languageModel (and then set _languageModelOptionsKey) so session reuse and
prompt handling match chat; refer to the chatStreaming and chat methods, the
optionsKey / _languageModelOptionsKey variables, lmAPI.create and
createMonitor() to locate the changes.
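The cumulative-to-delta normalization described in the chatStreaming comments above can be sketched as a standalone async generator. Some `promptStreaming` implementations emit the full text so far on each chunk while others emit only the new part; tracking the previous value lets the consumer always receive deltas. This is a sketch of the heuristic the review describes, not the SDK's actual code.

```typescript
// Normalize a possibly-cumulative text stream into deltas.
async function* toDeltas(
  chunks: AsyncIterable<string>,
): AsyncGenerator<string> {
  let prev = "";
  for await (const value of chunks) {
    // Cumulative chunk: strip the already-seen prefix. Otherwise treat the
    // chunk as a delta (fallback for non-cumulative streams or resets).
    const delta = value.startsWith(prev) ? value.slice(prev.length) : value;
    prev = value;
    if (delta) yield delta;
  }
}
```

Note the heuristic is ambiguous when a genuinely new delta happens to start with the full previous text; in practice cumulative streams grow monotonically, so the `startsWith` check is a reasonable auto-detection.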
---
Nitpick comments:
In `@libraries/expo-ondevice-ai/src/ExpoOndeviceAiModule.web.ts`:
- Around line 340-368: The rewrite method currently destroys and recreates
cachedRewriter on every call; change it to reuse instances keyed by the computed
options so we only create a new Rewriter when the mapping (tone/length) changes.
Compute the key from options.outputType (using the same toneMap/lengthMap
logic), check a map of cached rewriters for that key, and only call
Rewriter.create (and destroy the previous instance for that key when replacing)
when the key differs; update cachedRewriter usage in rewrite to pull from that
cache and avoid unconditional cachedRewriter?.destroy() before creating a new
one.
- Expo iOS: bump deployment target 15.1 → 17.0 for Locanara pod compat
- Expo Android: resilient locanara-versions.json path with fallbacks
- RN iOS: use rm -rf for directory copy (not just symlink) on CI
- Web: add --ignore-scripts to skip root prepare script in CI
Actionable comments posted: 2
🧹 Nitpick comments (5)
.github/workflows/ci-react-native.yml (3)
65-67: Consider gating build jobs on `lint-and-test`.

Both new jobs run in parallel with `lint-and-test`, meaning a failing lint or test doesn't prevent the (slower, resource-intensive) Android/iOS builds from starting. Adding `needs: lint-and-test` would avoid burning macOS/ubuntu minutes on a broken branch.

♻️ Proposed change

```diff
 build-android:
   name: Build Android
   runs-on: ubuntu-latest
+  needs: lint-and-test

 build-ios:
   name: Build iOS
   runs-on: macos-latest
+  needs: lint-and-test
```

Also applies to: 100-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-react-native.yml around lines 65 - 67, The build jobs currently run in parallel with lint-and-test; update the workflow so the Android and iOS jobs (job names build-android and build-ios) declare needs: lint-and-test to gate them on successful lint-and-test completion; locate the build-android and build-ios job definitions in the YAML and add the needs property referencing lint-and-test to prevent costly builds from starting when lint or tests fail.
83-94: Both new jobs are missing a Bun dependency cache.

The `lint-and-test` job has an explicit `actions/cache` step for `~/.bun/install/cache`. The `build-android` and `build-ios` jobs run `bun install` without caching, resulting in a full network fetch on every CI run.

⚡ Add Bun cache to each new job (after `Setup Bun`, before `Install library dependencies`)

```diff
+      - name: Cache Bun dependencies
+        uses: actions/cache@v4
+        with:
+          path: ~/.bun/install/cache
+          key: ${{ runner.os }}-bun-rn-android-${{ hashFiles('libraries/react-native-ondevice-ai/bun.lock', 'libraries/react-native-ondevice-ai/example/bun.lock') }}
+          restore-keys: |
+            ${{ runner.os }}-bun-rn-android-
       - name: Install library dependencies
```

Use an analogous block with `-ios-` in the key for `build-ios`.

Also applies to: 112-123
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-react-native.yml around lines 83 - 94, Add the Bun dependency cache step to both build-android and build-ios jobs: after the "Setup Bun" step and before the "Install library dependencies" step, insert an actions/cache step that caches the Bun install cache directory (~/.bun/install/cache) and uses a stable key (e.g., runner OS + bun-version + checksum of lockfile) so bun install is served from cache; for the build-ios job use a similar cache key name that includes "-ios-" to distinguish it. Ensure the cache step runs before both "Install library dependencies" and "Install example dependencies" in each job so bun install uses the cached files.
65-98: Add a Gradle dependency cache to `build-android`.

Without it, every run re-downloads all Gradle/Maven dependencies, adding several minutes to each build. The `lint-and-test` job already caches Bun — the same principle applies here.

⚡ Proposed Gradle cache step (insert before "Build Android")

```diff
       - name: Install example dependencies
         working-directory: libraries/react-native-ondevice-ai/example
         run: bun install

+      - name: Cache Gradle
+        uses: actions/cache@v4
+        with:
+          path: |
+            ~/.gradle/caches
+            ~/.gradle/wrapper
+          key: ${{ runner.os }}-gradle-rn-${{ hashFiles('libraries/react-native-ondevice-ai/example/android/**/*.gradle*', 'libraries/react-native-ondevice-ai/example/android/**/gradle-wrapper.properties') }}
+          restore-keys: |
+            ${{ runner.os }}-gradle-rn-
       - name: Build Android
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-react-native.yml around lines 65 - 98, Add a Gradle/Maven cache step to the build-android job to avoid re-downloading dependencies: insert an actions/cache step before the "Build Android" step that caches Gradle directories (e.g., ~/.gradle/caches and ~/.gradle/wrapper) and the local Maven repository (e.g., ~/.m2/repository), keyed by OS + Gradle wrapper checksum or relevant gradle.properties/lockfile; ensure the cache restore/save wraps the working-directory used by the Android build (libraries/react-native-ondevice-ai/example/android) so the Gradle daemon/wrapper and Maven artifacts are persisted across runs and significantly speed up the `./gradlew assembleDebug --no-daemon` step.

.github/workflows/ci-web.yml (2)
26-32: Add npm caching to the setup-node action. `npm ci` cannot be used without a lockfile.

The caching recommendation is valid: `actions/setup-node@v4` supports a `cache` option that avoids re-downloading packages on every run. However, switching from `npm install` to `npm ci` requires committing a `package-lock.json` to the repository first — currently, `packages/web` has no npm lockfile.

♻️ Current path forward

To enable caching with the current setup:

```diff
       - name: Setup Node.js
         uses: actions/setup-node@v4
         with:
           node-version: '20'
+          cache: 'npm'
+          cache-dependency-path: packages/web/package.json
```

To unlock `npm ci` (for true reproducibility and faster installs), create and commit `packages/web/package-lock.json` by running `npm install --package-lock-only` locally, then update the workflow to use `npm ci --ignore-scripts`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-web.yml around lines 26 - 32, Update the GitHub Actions step that uses actions/setup-node@v4 to enable caching by adding the cache option (e.g., cache: 'npm') alongside node-version: '20', and change the install step: either keep using npm install with --legacy-peer-deps --ignore-scripts (so caching still helps) or, if you commit packages/web/package-lock.json (run npm install --package-lock-only locally and commit it), switch the install command to npm ci --ignore-scripts to use the lockfile and benefit from faster reproducible installs; reference the setup-node action and the install step in the workflow and ensure packages/web/package-lock.json is present before switching to npm ci.
24-29: Consider SHA-pinning third-party actions for supply-chain hardening.

Using mutable `@v4` tags means a tag re-point (accidental or malicious) would silently execute different code. Pinning to a commit SHA removes this risk.

♻️ SHA-pinned equivalents (current as of February 2026)

```diff
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@34e114876b0b11c390a56381ad16ebd13914f8d5 # v4.3.1

       - name: Setup Node.js
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@49933ea5288caeca8642d1e84afbd3f7d6820020 # v4.4.0
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/ci-web.yml around lines 24 - 29, Replace mutable action tags with commit SHAs to prevent silent repointing: update the usages of actions/checkout@v4 and actions/setup-node@v4 in the workflow to their corresponding full commit SHA refs (e.g., actions/checkout@<commit-sha> and actions/setup-node@<commit-sha>), ensuring you pin both occurrences in the file and verify the chosen SHAs match the currently intended releases before committing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@libraries/expo-ondevice-ai/android/build.gradle`:
- Around line 14-20: The loop over candidates currently returns a default as
soon as a candidate file exists even if the regex match fails; change the logic
in the for (candidate in candidates) loop so you read versionJson and compute
matcher but only return matcher[0][1] when matcher matches (i.e., move the
return inside the match guard), otherwise continue to the next candidate; after
the loop completes with no successful match, return the default "1.0.2". Ensure
this uses the existing variables candidate, candidates, versionJson, and
matcher.
In `@libraries/expo-ondevice-ai/example/app.config.ts`:
- Line 55: The example app currently sets deploymentTarget: '17.0' which drops
iOS 15/16 support — keep this intentional change but document it and add the
required TypeScript strict-mode directive: add a "use strict" directive at the
very top of app.config.ts and add a short inline comment next to the
deploymentTarget setting (or a README note) making it clear that iOS 17+ is
required for this example; ensure you update only app.config.ts and reference
the deploymentTarget setting so reviewers can find the change.
---
Nitpick comments:
In @.github/workflows/ci-react-native.yml:
- Around line 65-67: The build jobs currently run in parallel with
lint-and-test; update the workflow so the Android and iOS jobs (job names
build-android and build-ios) declare needs: lint-and-test to gate them on
successful lint-and-test completion; locate the build-android and build-ios job
definitions in the YAML and add the needs property referencing lint-and-test to
prevent costly builds from starting when lint or tests fail.
- Around line 83-94: Add the Bun dependency cache step to both build-android and
build-ios jobs: after the "Setup Bun" step and before the "Install library
dependencies" step, insert an actions/cache step that caches the Bun install
cache directory (~/.bun/install/cache) and uses a stable key (e.g., runner OS +
bun-version + checksum of lockfile) so bun install is served from cache; for the
build-ios job use a similar cache key name that includes "-ios-" to distinguish
it. Ensure the cache step runs before both "Install library dependencies" and
"Install example dependencies" in each job so bun install uses the cached files.
- Around line 65-98: Add a Gradle/Maven cache step to the build-android job to
avoid re-downloading dependencies: insert an actions/cache step before the
"Build Android" step that caches Gradle directories (e.g., ~/.gradle/caches and
~/.gradle/wrapper) and the local Maven repository (e.g., ~/.m2/repository),
keyed by OS + Gradle wrapper checksum or relevant gradle.properties/lockfile;
ensure the cache restore/save wraps the working-directory used by the Android
build (libraries/react-native-ondevice-ai/example/android) so the Gradle
daemon/wrapper and Maven artifacts are persisted across runs and significantly
speed up the assembleDebug ./gradlew assembleDebug --no-daemon step.
In @.github/workflows/ci-web.yml:
- Around line 26-32: Update the GitHub Actions step that uses
actions/setup-node@v4 to enable caching by adding the cache option (e.g., cache:
'npm') alongside node-version: '20', and change the install step: either keep
using npm install with --legacy-peer-deps --ignore-scripts (so caching still
helps) or, if you commit packages/web/package-lock.json (run npm install
--package-lock-only locally and commit it), switch the install command to npm ci
--ignore-scripts to use the lockfile and benefit from faster reproducible
installs; reference the setup-node action and the install step in the workflow
and ensure packages/web/package-lock.json is present before switching to npm ci.
- Around line 24-29: Replace mutable action tags with commit SHAs to prevent
silent repointing: update the usages of actions/checkout@v4 and
actions/setup-node@v4 in the workflow to their corresponding full commit SHA
refs (e.g., actions/checkout@<commit-sha> and actions/setup-node@<commit-sha>),
ensuring you pin both occurrences in the file and verify the chosen SHAs match
the currently intended releases before committing.
- Reformat web module and plugin for consistent code style
- Simplify inputType helper to expression body
- Fix import ordering in index.ts
- Enhance example app framework demos with interactive UI
- Expo iOS: bump ExpoOndeviceAi podspec platform 15.0 → 17.0
- RN iOS: add local Locanara pod path (trunk 1.0.1 lacks newer types)
- RN iOS: bump NitroOndeviceAi podspec platform 15.0 → 17.0
- Web: fix biome formatting (quotes, semicolons)
- Locanara podspec: remove weak_frameworks FoundationModels (not in Xcode 16.x SDK; canImport() handles this at compile time)
- RN iOS: update HybridOndeviceAi to match Nitro-generated protocol (Variant types for optional params, throws on listener methods)
…ms and Variant types
- Replace all `any` casts with proper Chrome AI type interfaces in ExpoOndeviceAiModule.web.ts (Summarizer, Translator, Rewriter, Writer, LanguageModel)
- Add translator cache eviction (max 10 entries) and destroy() method for resource cleanup
- Fix README package name from @locanara/web to locanara
- Remove outdated GitHub Packages (Pro) installation instructions
- Separate _proofreadWriter from _writer to prevent tone/length contamination when write() is called before proofread()
- Add missing format and sharedContext to writeStreaming's Writer.create()
- Fix build.gradle version lookup to try all candidates instead of returning default on first file match failure
Summary
- Add full web support to `expo-ondevice-ai` using Chrome Built-in AI (Gemini Nano) APIs
- Add `packages/web` standalone SDK and `ci-web.yml` workflow
- Remove `build-web` job from `ci-react-native.yml` (Nitro doesn't support web)

Changes
Core
- `ExpoOndeviceAiModule.web.ts` — Chrome Built-in AI integration (Summarizer, LanguageModel, Translator, Rewriter, Writer APIs)
- `addListener`/`removeListeners`

Example App
- `AIStatusBanner.tsx` — Web-specific messages ("Chrome Built-in AI")
- `AppState.tsx` — Web platform/provider labels
- `ChatDemo/index.tsx` — Fix blue focus outline on web
- `ExtractDemo.tsx` — Add organization entity color

CI
- `ci-expo.yml` — Already has `build-web` (Expo web export)
- `ci-react-native.yml` — Remove unrelated `packages/web` build job
- `ci-web.yml` — New standalone web SDK CI

Documentation
Test plan
- `bun web` in expo-ondevice-ai example, verify all 8 AI features work on Chrome 138+
- `ci-react-native.yml` no longer triggers on `packages/web/**` changes

Summary by CodeRabbit
New Features
Documentation
UX
Tests & CI