
feat: add flutter_ondevice_ai library#10

Merged
hyochan merged 5 commits into main from feat/flutter-ondevice-ai
Feb 26, 2026

Conversation

@hyochan
Member

@hyochan hyochan commented Feb 26, 2026

Summary

  • Add flutter_ondevice_ai Flutter plugin with full iOS, Android, and Web (Chrome Built-in AI) support
  • MethodChannel + EventChannel architecture for native communication, dart:js_interop for web
  • 39 unit tests (MethodChannel mocks + type serialization round-trip)
  • Example app with all 8 AI features + 7 framework demos (Chain, Pipeline, Memory, Guardrail, Session, Agent, Model)
  • Web support includes Chrome setup guide, availability detection with timeout, real promptStreaming
  • CI workflow (ci-flutter.yml) for analyze + test on PRs
  • Publish workflow (publish-flutter.yml) with pub.dev OIDC authentication on flutter-v* tags
  • Site documentation for flutter_ondevice_ai and expo-ondevice-ai framework section
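The MethodChannel + EventChannel split described above can be sketched in Dart roughly as follows. The channel names, method names, and argument shapes here are illustrative assumptions, not the plugin's actual API:

```dart
import 'dart:async';

import 'package:flutter/services.dart';

/// Hedged sketch of the MethodChannel + EventChannel pattern.
/// Channel and method names are hypothetical.
class OndeviceAiSketch {
  static const MethodChannel _methods =
      MethodChannel('flutter_ondevice_ai/methods');
  static const EventChannel _events =
      EventChannel('flutter_ondevice_ai/events');

  /// One-shot request/response round-trip to the native side.
  Future<String> summarize(String text) async {
    final result =
        await _methods.invokeMethod<String>('summarize', {'text': text});
    return result ?? '';
  }

  /// Streaming: kick off the native call, then read token chunks
  /// as they arrive over the EventChannel.
  Stream<String> chatStream(String message) async* {
    await _methods.invokeMethod<void>('chatStream', {'message': message});
    yield* _events
        .receiveBroadcastStream()
        .map((dynamic chunk) => chunk as String);
  }
}
```

The same two channels carry both one-shot results and streamed tokens, which is why a single EventChannel appears throughout the plugin's streaming flows below.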

What's Included

  • Plugin: Dart API, iOS Swift native, Android Kotlin native, Web dart:js_interop
  • Example App: 5-tab app (Device, Features, Framework, Settings) matching Expo/RN examples
  • Docs: Site library page with installation, APIs, framework section, model management
  • CI/CD: GitHub Actions for Flutter analyze/test + pub.dev publish

Test plan

  • cd libraries/flutter_ondevice_ai && flutter test — all 39 tests pass
  • cd libraries/flutter_ondevice_ai && flutter analyze — no errors
  • Run example on iOS device — Apple Intelligence + llama.cpp engine
  • Run example on Android device — Gemini Nano via ExecuTorch
  • Run example on Chrome — Chrome Built-in AI with setup guide
  • Verify site docs render at /docs/libraries/flutter

Summary by CodeRabbit

  • New Features

    • Flutter on-device AI plugin (iOS/Android/Web): summarize, classify, extract, chat (streaming), translate, rewrite, proofread, model management, plus a full example app and demos.
  • Documentation

    • New developer guides and site docs for Flutter, React Native, and Expo; Flutter README and changelog added.
  • Bug Fixes

    • Improved model availability, lifecycle and cross-platform model handling in examples.
  • Tests

    • Comprehensive Flutter unit tests for types and plugin API.
  • Chores

    • CI/workflows, .gitignore, license, packaging and launch configs added.
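For the unit tests mentioned above, Flutter's standard way to exercise a MethodChannel-based plugin without a device is to mock the binary messenger. A minimal hedged sketch (channel and method names are assumed, not taken from the plugin):

```dart
import 'package:flutter/services.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  TestWidgetsFlutterBinding.ensureInitialized();
  // Hypothetical channel name; the real plugin may register a different one.
  const channel = MethodChannel('flutter_ondevice_ai/methods');

  test('summarize round-trips through the MethodChannel mock', () async {
    TestDefaultBinaryMessengerBinding.instance.defaultBinaryMessenger
        .setMockMethodCallHandler(channel, (MethodCall call) async {
      // Assert on what the Dart side sent, then return a canned result.
      expect(call.method, 'summarize');
      return 'mock summary';
    });

    final result =
        await channel.invokeMethod<String>('summarize', {'text': 'hello'});
    expect(result, 'mock summary');
  });
}
```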

@coderabbitai

coderabbitai bot commented Feb 26, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6e0b7b3 and a875088.

📒 Files selected for processing (1)
  • .github/workflows/publish-flutter.yml
🚧 Files skipped from review as they are similar to previous changes (1)
  • .github/workflows/publish-flutter.yml

📝 Walkthrough


Adds a new Flutter on-device AI plugin (Dart + native iOS/Android/Web), a full example app with model lifecycle UI, a native llama.cpp/SPM bridge with CocoaPods hooks, CI/publish workflows, site/docs pages, and cross-library model-management improvements applied to Expo and React Native.

Changes

  • Flutter plugin core (libraries/flutter_ondevice_ai/lib/*, libraries/flutter_ondevice_ai/pubspec.yaml, libraries/flutter_ondevice_ai/CHANGELOG.md, libraries/flutter_ondevice_ai/LICENSE, libraries/flutter_ondevice_ai/analysis_options.yaml): New public Dart API, comprehensive types, error wrapper, web JS-interop plugin, tests, exports, package metadata and lint rules.
  • Flutter iOS native (libraries/flutter_ondevice_ai/ios/Classes/*, libraries/flutter_ondevice_ai/ios/flutter_ondevice_ai.podspec): New Swift plugin, option parsing helpers, PrefilledMemory adapter, serializers, event/progress handlers, and CocoaPods podspec.
  • Flutter Android native (libraries/flutter_ondevice_ai/android/..., libraries/flutter_ondevice_ai/android/build.gradle, libraries/flutter_ondevice_ai/android/settings.gradle, libraries/flutter_ondevice_ai/android/src/main/...): Kotlin Flutter plugin, ExecuTorch model wrapper, serialization, option decoding, model I/O, dependency version resolution and Android manifest.
  • Flutter example app & assets (libraries/flutter_ondevice_ai/example/**): Complete example app including Dart app state, many demo screens/widgets, iOS/Android/Xcode/Gradle project files, web entry, pubspec and .gitignore.
  • iOS llama.cpp bridge & Podfile hooks (libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/*, libraries/flutter_ondevice_ai/example/ios/Podfile, libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/...): Bridged LlamaCppEngine Swift wrapper, bridge podspec, Podfile post_install to enable C++ interop, embedding of SPM frameworks into the app bundle, and Xcode project integration.
  • CI & publish workflows (.github/workflows/ci-flutter.yml, .github/workflows/publish-flutter.yml): New GitHub Actions to run Flutter analyze/tests on PRs and tag-triggered dart pub publish for the plugin.
  • Site & docs (packages/site/src/pages/docs/libraries/flutter.tsx, packages/site/src/pages/docs/libraries/react-native.tsx, .claude/guides/*, AGENTS.md, packages/site/src/lib/navigation.ts): Flutter and React Native docs pages, cross-library bridge guide, site navigation updates and AGENTS.md edits.
  • Expo & React Native Android changes (libraries/expo-ondevice-ai/android/..., libraries/react-native-ondevice-ai/android/...): Simulated model state and device-memory-aware available models; implemented download/load/delete flows, serialization helpers, and download-progress listener hooks.
  • Example UI updates, cross-platform (libraries/*/example/**/ClassifyDemo.*, ModelSelectionSheet.*, libraries/flutter_ondevice_ai/example/lib/widgets/**): Category chips + custom categories UI; removed iOS-only gating for available models; many new Flutter demo widgets, model selection sheet and shared UI components (debug, banners, run button, etc.).
  • Repo config & IDE (.gitignore, .vscode/launch.json, locanara-versions.json, packages/android/locanara/.../ModelRegistry.kt): Flutter ignore rules, VS Code Flutter run configs, a flutter version entry, and an updated model entry (size/quant/URLs).

Sequence Diagram(s)

mermaid
sequenceDiagram
    participant FlutterUI as "Flutter UI"
    participant Plugin as "Flutter Plugin (MethodChannel/EventChannel)"
    participant Locanara as "Locanara SDK (Inference Router)"
    participant Engine as "Native Engine (llama/ExecuTorch/SPM framework)"
    FlutterUI->>Plugin: invoke downloadModel(modelId)
    Plugin->>Locanara: request model download / file management
    Locanara->>Engine: fetch & register model files
    Engine-->>Locanara: emit progress / complete
    Locanara-->>Plugin: forward progress events / completion
    Plugin-->>FlutterUI: emit progress via EventChannel / resolve
    FlutterUI->>Plugin: invoke chatStream(message)
    Plugin->>Locanara: start streaming inference
    Locanara->>Engine: run streaming generate
    Engine-->>Locanara: token chunks
    Locanara-->>Plugin: forward chunks
    Plugin-->>FlutterUI: stream chunks via EventChannel

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs

"🐰 I nibble bytes beneath the moon,
Bridges built and tokens croon,
From Flutter screen to native den,
Streams and models hop again,
Cheers — a rabbit shipped the tune!"

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 30.67%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title 'feat: add flutter_ondevice_ai library' directly and specifically summarizes the main change: introducing a new Flutter on-device AI library package with complete platform support.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@gemini-code-assist

Summary of Changes

Hello @hyochan, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands the project's cross-platform capabilities by integrating a new Flutter plugin for on-device AI. It provides a unified Dart API for accessing native AI features on iOS, Android, and Web, complete with a robust example application and comprehensive documentation. The changes also include necessary build system configurations and CI/CD pipelines to support the new platform.

Highlights

  • New Flutter Plugin: Introduced flutter_ondevice_ai, a new Flutter plugin providing on-device AI capabilities with full support for iOS (Apple Intelligence), Android (Gemini Nano), and Web (Chrome Built-in AI).
  • Comprehensive Example App: Added a detailed example Flutter application showcasing all 8 AI features (summarize, classify, extract, chat, translate, rewrite, proofread) and 7 framework demos (Chain, Pipeline, Memory, Guardrail, Session, Agent, Model).
  • iOS llama.cpp Bridge Integration: Implemented and documented specific iOS integration patterns for llama.cpp models within Flutter, addressing framework linking differences and ensuring compatibility.
  • Web Support via dart:js_interop: Enabled web support for the Flutter plugin using dart:js_interop to interact with Chrome Built-in AI APIs, including real promptStreaming and a setup guide.
  • CI/CD Workflows: Added dedicated GitHub Actions workflows for Flutter, including ci-flutter.yml for analysis and testing on pull requests, and publish-flutter.yml for publishing to pub.dev with OIDC authentication.
  • Documentation and Project Structure Updates: Updated project documentation (AGENTS.md, .claude/guides) and site navigation to reflect the new Flutter library and its API parity with existing Expo and React Native libraries.
  • Android Model Management Simulation: Enhanced Android modules in expo-ondevice-ai and react-native-ondevice-ai to simulate model management (download, load, delete) for better cross-platform consistency in examples.
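As a rough illustration of the dart:js_interop approach mentioned in the web-support highlight, bindings for Chrome's Prompt API can look like the sketch below. The browser-side names (LanguageModel, availability, create, prompt) follow Chrome's Built-in AI documentation, but the exact member shapes here are assumptions rather than the plugin's real bindings:

```dart
import 'dart:js_interop';

// Hedged dart:js_interop bindings for Chrome's built-in Prompt API.
// Member signatures are illustrative.
@JS('LanguageModel')
extension type LanguageModel._(JSObject _) implements JSObject {
  external static JSPromise<JSString> availability();
  external static JSPromise<LanguageModel> create();
  external JSPromise<JSString> prompt(JSString input);
}

/// Checks availability, creates a session, and runs a single prompt.
Future<String> promptOnce(String input) async {
  final status = (await LanguageModel.availability().toDart).toDart;
  if (status != 'available') {
    throw StateError('Chrome Built-in AI not available: $status');
  }
  final session = await LanguageModel.create().toDart;
  return (await session.prompt(input.toJS).toDart).toDart;
}
```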


Changelog
  • .claude/guides/09-expo-ondevice-ai.md
    • Updated with a cross-library iOS llama.cpp bridge comparison, detailing Flutter-specific requirements.
  • .claude/guides/11-react-native-ondevice-ai.md
    • Added documentation for the react-native-ondevice-ai library, including overview, requirements, build commands, project structure, how it works, Nitro constraints, iOS llama.cpp bridge, CocoaPods config, development workflow, example app, and API parity.
  • .claude/guides/12-flutter-ondevice-ai.md
    • Added comprehensive documentation for the flutter_ondevice_ai library, covering overview, requirements, build commands, project structure, Dart to native chain mapping, MethodChannel/EventChannel usage, Web implementation, critical iOS llama.cpp bridge details, build phase order, common build errors, isolation architecture, example app features, and API parity.
  • .gitignore
    • Added entries for Flutter/Dart build artifacts and lock files.
  • .vscode/launch.json
    • Added new launch configurations for running the site and Flutter examples on iOS, Android, and Web.
  • AGENTS.md
    • Updated the project structure to include flutter_ondevice_ai and added a new section detailing API parity across all libraries.
  • libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiModule.kt
    • Updated to simulate model management (download, load, delete) and expose available models on Android.
  • libraries/expo-ondevice-ai/android/src/main/java/expo/modules/ondeviceai/ExpoOndeviceAiSerialization.kt
    • Added serialization logic for DownloadableModelInfo.
  • libraries/expo-ondevice-ai/example/components/pages/FeatureDetail/ClassifyDemo.tsx
    • Refactored the UI for category selection to use chips and allow custom category input.
  • libraries/expo-ondevice-ai/example/src/components/shared/ModelSelectionSheet.tsx
    • Modified to display downloadable models on Android, removing the iOS-only restriction.
  • libraries/flutter_ondevice_ai/CHANGELOG.md
    • Added initial changelog entry for version 0.1.0.
  • libraries/flutter_ondevice_ai/LICENSE
    • Added MIT license file.
  • libraries/flutter_ondevice_ai/analysis_options.yaml
    • Added Dart analysis options.
  • libraries/flutter_ondevice_ai/android/build.gradle
    • Added Gradle build configuration for the Flutter plugin, including Locanara SDK dependency and Kotlin setup.
  • libraries/flutter_ondevice_ai/android/settings.gradle
    • Added root project name for the Android part of the Flutter plugin.
  • libraries/flutter_ondevice_ai/android/src/main/AndroidManifest.xml
    • Added Android manifest with internet permission.
  • libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/ExecuTorchModelWrapper.kt
    • Added a wrapper for ExecuTorchEngine to conform to LocanaraModel, enabling its use with built-in chains.
  • libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiHelper.kt
    • Added helper functions to decode Flutter options maps into native Kotlin chain parameters.
  • libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt
    • Implemented the main Flutter plugin for Android, handling MethodChannel and EventChannel calls, AI features, and model management.
  • libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiSerialization.kt
    • Added serialization logic to convert native Kotlin SDK results into Flutter-compatible maps.
  • libraries/flutter_ondevice_ai/example/.gitignore
    • Added Git ignore rules for Flutter example project files.
  • libraries/flutter_ondevice_ai/example/.metadata
    • Added Flutter project metadata file.
  • libraries/flutter_ondevice_ai/example/android/.gitignore
    • Added Git ignore rules for Android example project files.
  • libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts
    • Added Kotlin DSL Gradle build file for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/android/app/src/debug/AndroidManifest.xml
    • Added debug Android manifest for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/android/app/src/main/AndroidManifest.xml
    • Added main Android manifest for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/android/app/src/main/kotlin/com/locanara/flutter/example/MainActivity.kt
    • Added main activity for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/android/app/src/main/res/drawable-v21/launch_background.xml
    • Added launch background drawable for Android v21+.
  • libraries/flutter_ondevice_ai/example/android/app/src/main/res/drawable/launch_background.xml
    • Added launch background drawable for Android.
  • libraries/flutter_ondevice_ai/example/android/app/src/main/res/values-night/styles.xml
    • Added night mode styles for the Android example app.
  • libraries/flutter_ondevice_ai/example/android/app/src/main/res/values/styles.xml
    • Added styles for the Android example app.
  • libraries/flutter_ondevice_ai/example/android/app/src/profile/AndroidManifest.xml
    • Added profile Android manifest for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/android/build.gradle.kts
    • Added root Kotlin DSL Gradle build file for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/android/gradle.properties
    • Added Gradle properties for the Android example app.
  • libraries/flutter_ondevice_ai/example/android/gradle/wrapper/gradle-wrapper.properties
    • Added Gradle wrapper properties.
  • libraries/flutter_ondevice_ai/example/android/settings.gradle.kts
    • Added Kotlin DSL settings Gradle file for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/ios/.gitignore
    • Added Git ignore rules for iOS example project files.
  • libraries/flutter_ondevice_ai/example/ios/Flutter/AppFrameworkInfo.plist
    • Added Flutter app framework info plist.
  • libraries/flutter_ondevice_ai/example/ios/Flutter/Debug.xcconfig
    • Added debug Xcode configuration for Flutter.
  • libraries/flutter_ondevice_ai/example/ios/Flutter/Release.xcconfig
    • Added release Xcode configuration for Flutter.
  • libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/LocanaraLlamaBridge.podspec
    • Added CocoaPods podspec for the LocanaraLlamaBridge.
  • libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift
    • Added Swift source for the LlamaCppBridgeEngine.
  • libraries/flutter_ondevice_ai/example/ios/Podfile
    • Added iOS Podfile for the Flutter example app, including LocanaraLlamaBridge and SPM embedding logic.
  • libraries/flutter_ondevice_ai/example/ios/Podfile.lock
    • Added Podfile.lock for the iOS example app.
  • libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/project.pbxproj
    • Added Xcode project file for the iOS example app.
  • libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/xcshareddata/xcschemes/Runner.xcscheme
    • Added Xcode scheme for the iOS example app.
  • libraries/flutter_ondevice_ai/example/ios/Runner/AppDelegate.swift
    • Added Swift AppDelegate for the iOS example app.
  • libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/AppIcon.appiconset/Contents.json
    • Added AppIcon asset catalog contents.
  • libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/Contents.json
    • Added LaunchImage asset catalog contents.
  • libraries/flutter_ondevice_ai/example/ios/Runner/Assets.xcassets/LaunchImage.imageset/README.md
    • Added README for LaunchImage assets.
  • libraries/flutter_ondevice_ai/example/ios/Runner/Base.lproj/LaunchScreen.storyboard
    • Added LaunchScreen storyboard.
  • libraries/flutter_ondevice_ai/example/ios/Runner/Base.lproj/Main.storyboard
    • Added Main storyboard.
  • libraries/flutter_ondevice_ai/example/ios/Runner/Info.plist
    • Added Info.plist for the iOS example app.
  • libraries/flutter_ondevice_ai/example/ios/Runner/Runner-Bridging-Header.h
    • Added bridging header for the iOS example app.
  • libraries/flutter_ondevice_ai/example/ios/RunnerTests/RunnerTests.swift
    • Added example unit tests for the iOS example app.
  • libraries/flutter_ondevice_ai/example/lib/app_state.dart
    • Added app state management for the Flutter example app, including SDK initialization, device capability, and model management.
  • libraries/flutter_ondevice_ai/example/lib/main.dart
    • Added main entry point for the Flutter example app, setting up navigation and theme.
  • libraries/flutter_ondevice_ai/example/lib/screens/device_screen.dart
    • Added device information screen for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/lib/screens/feature_detail_screen.dart
    • Added feature detail screen for the Flutter example app, routing to specific AI feature demos.
  • libraries/flutter_ondevice_ai/example/lib/screens/features_screen.dart
    • Added features list screen for the Flutter example app, displaying available AI features and SDK status.
  • libraries/flutter_ondevice_ai/example/lib/screens/framework_detail_screen.dart
    • Added framework detail screen for the Flutter example app, routing to specific framework demos.
  • libraries/flutter_ondevice_ai/example/lib/screens/framework_screen.dart
    • Added framework list screen for the Flutter example app, displaying framework primitives.
  • libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart
    • Added settings screen for the Flutter example app, including setup guides and links.
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart
    • Added chat demo widget for the Flutter example app, demonstrating streaming and non-streaming chat.
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart
    • Added classify demo widget for the Flutter example app, with category selection and custom input.
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/coming_soon_demo.dart
    • Added coming soon demo widget for features not yet implemented.
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/extract_demo.dart
    • Added extract demo widget for the Flutter example app, showing entity extraction.
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/proofread_demo.dart
    • Added proofread demo widget for the Flutter example app, displaying corrections.
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/rewrite_demo.dart
    • Added rewrite demo widget for the Flutter example app, with style selection.
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart
    • Added summarize demo widget for the Flutter example app, with input/output type selection.
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/translate_demo.dart
    • Added translate demo widget for the Flutter example app, with target language selection.
  • libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/agent_demo.dart
    • Added agent demo widget for the Flutter example app, simulating a ReAct agent.
  • libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/chain_demo.dart
    • Added chain demo widget for the Flutter example app, demonstrating sequential, parallel, and conditional chains.
  • libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/code_pattern_card.dart
    • Added code pattern card widget for displaying native code snippets.
  • libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/guardrail_demo.dart
    • Added guardrail demo widget for the Flutter example app, demonstrating input length and content blocking.
  • libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/memory_demo.dart
    • Added memory demo widget for the Flutter example app, demonstrating chat memory.
  • libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/model_demo.dart
    • Added model demo widget for the Flutter example app, demonstrating direct model usage and streaming.
  • libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/pipeline_demo.dart
    • Added pipeline demo widget for the Flutter example app, demonstrating chained AI operations.
  • libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/session_demo.dart
    • Added session demo widget for the Flutter example app, demonstrating stateful chat sessions.
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_model_required_banner.dart
    • Added AI model required banner widget.
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart
    • Added AI status banner widget, displaying current engine and model status.
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/debug_log_panel.dart
    • Added debug log panel widget for displaying API requests and responses.
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/feature_row.dart
    • Added feature row widget for displaying individual AI features.
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/info_row.dart
    • Added info row widget for displaying key-value pairs.
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart
    • Added model selection sheet widget for managing downloadable models.
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/run_button.dart
    • Added run button widget for triggering AI operations.
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/stat_badge.dart
    • Added stat badge widget for displaying statistics.
  • libraries/flutter_ondevice_ai/example/pubspec.lock
    • Added pubspec.lock for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/pubspec.yaml
    • Added pubspec.yaml for the Flutter example app.
  • libraries/flutter_ondevice_ai/example/web/index.html
    • Added index.html for the Flutter web example.
  • libraries/flutter_ondevice_ai/example/web/manifest.json
    • Added manifest.json for the Flutter web example.
  • libraries/flutter_ondevice_ai/lib/flutter_ondevice_ai.dart
    • Added barrel export file for the Flutter plugin.
  • libraries/flutter_ondevice_ai/lib/src/errors.dart
    • Added custom exception class for the Flutter plugin.
  • libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_plugin.dart
    • Implemented the main Flutter plugin, handling MethodChannel and EventChannel calls, AI features, and model management.
  • libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart
    • Implemented the web-specific Flutter plugin, using dart:js_interop for Chrome Built-in AI APIs.
  • libraries/flutter_ondevice_ai/lib/src/types.dart
    • Added Dart type definitions for the Flutter plugin, mirroring the native SDK types.
  • libraries/flutter_ondevice_ai/pubspec.yaml
    • Added pubspec.yaml for the Flutter plugin.
  • libraries/flutter_ondevice_ai/test/flutter_ondevice_ai_test.dart
    • Added unit tests for the Flutter plugin's MethodChannel interactions.
  • libraries/flutter_ondevice_ai/test/types_test.dart
    • Added unit tests for the Flutter plugin's type serialization and deserialization.
  • libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt
    • Updated to simulate model management (download, load, delete) and expose available models on Android.
  • libraries/react-native-ondevice-ai/example/src/components/pages/FeatureDetail/ClassifyDemo.tsx
    • Refactored the UI for category selection to use chips and allow custom category input.
  • libraries/react-native-ondevice-ai/example/src/components/shared/ModelSelectionSheet.tsx
    • Modified to display downloadable models on Android, removing the iOS-only restriction.
  • locanara-versions.json
    • Updated to include the version for the new Flutter plugin.
  • packages/android/locanara/src/main/kotlin/com/locanara/engine/ModelRegistry.kt
    • Updated Llama model download URL, tokenizer URL, and quantization type.
  • packages/site/src/lib/navigation.ts
    • Updated site navigation to include the new Flutter plugin documentation.
  • packages/site/src/pages/docs/index.tsx
    • Updated site routing to include the new Flutter library documentation.
  • packages/site/src/pages/docs/libraries/expo.tsx
    • Updated site documentation for Expo library, adding a framework section and updating navigation.
  • packages/site/src/pages/docs/libraries/flutter.tsx
    • Added new documentation page for the flutter_ondevice_ai library.
  • packages/site/src/pages/docs/libraries/index.tsx
    • Updated libraries index page to include the new Flutter plugin.
  • packages/site/src/pages/docs/libraries/react-native.tsx
    • Added new documentation page for the react-native-ondevice-ai library.
Ignored Files
  • Ignored by pattern: .github/workflows/** (2)
    • .github/workflows/ci-flutter.yml
    • .github/workflows/publish-flutter.yml

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This is an excellent and comprehensive pull request that adds the flutter_ondevice_ai library. The implementation across iOS, Android, and Web is very well done, and the example app is thorough. The code is clean, well-structured, and follows platform-specific best practices. The documentation updates are also clear and helpful. I have a few minor suggestions for improvement.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Note

Due to the large number of review comments, Critical severity comments were prioritized as inline comments.

🟠 Major comments (23)
libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt-46-48 (1)

46-48: ⚠️ Potential issue | 🟠 Major

Thread safety concern for mutable model state.

downloadedModelIds and loadedModelId are accessed from Promise.async blocks which execute on coroutine dispatchers. Unlike the listener collections that use CopyOnWriteArrayList, these are plain mutable types without synchronization.

Consider using thread-safe alternatives:

🔒 Proposed fix for thread-safe state
-    // Simulated model state (matches native example behavior)
-    private val downloadedModelIds = mutableSetOf<String>()
-    private var loadedModelId: String? = null
+    // Simulated model state (thread-safe)
+    private val downloadedModelIds = java.util.concurrent.ConcurrentHashMap.newKeySet<String>()
+    @Volatile private var loadedModelId: String? = null
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt`
around lines 46 - 48, The mutable fields downloadedModelIds and loadedModelId
are not thread-safe but are accessed from coroutine/Promise.async contexts;
replace downloadedModelIds with a concurrent set (e.g.,
Collections.newSetFromMap(new ConcurrentHashMap<>()) /
ConcurrentHashMap.newKeySet()) and replace loadedModelId with a thread-safe
holder (e.g., AtomicReference<String?>) or guard both with a single private lock
object; update all usages in the Promise.async blocks and any functions that
read/write downloadedModelIds or loadedModelId to use the new thread-safe types
or synchronized block to avoid race conditions (references: downloadedModelIds,
loadedModelId, Promise.async, CopyOnWriteArrayList).
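
For illustration, the suggested pattern can be sketched in plain Java (the Kotlin in HybridOndeviceAi.kt would use `ConcurrentHashMap.newKeySet<String>()` and `@Volatile` or `AtomicReference` equivalently; the class and method names below are hypothetical):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of thread-safe model state, assuming access from multiple
// coroutine-dispatcher threads as described above.
class ModelState {
    // Concurrent set: safe to mutate from any thread without external locking.
    private final Set<String> downloadedModelIds = ConcurrentHashMap.newKeySet();

    // Atomic holder instead of a plain nullable field.
    private final AtomicReference<String> loadedModelId = new AtomicReference<>();

    void markDownloaded(String id) {
        downloadedModelIds.add(id);
    }

    boolean load(String id) {
        if (!downloadedModelIds.contains(id)) return false;
        loadedModelId.set(id);
        return true;
    }

    String currentModel() {
        return loadedModelId.get();
    }
}
```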
packages/site/src/pages/docs/index.tsx-55-56 (1)

55-56: ⚠️ Potential issue | 🟠 Major

react-native library docs page is added in this PR but not routable/discoverable.

packages/site/src/pages/docs/libraries/react-native.tsx has no route or sidebar entry here, so /docs/libraries/react-native resolves to NotFound. Either wire it in, or remove the page file from this PR.

🛠️ Suggested wiring patch
 import LibrariesIndex from "./libraries/index";
 import ExpoLibrary from "./libraries/expo";
+import ReactNativeLibrary from "./libraries/react-native";
 import FlutterLibrary from "./libraries/flutter";
@@
           <ul>
             <li>
               <NavLink
                 to="/docs/libraries/expo"
                 className={({ isActive }) => (isActive ? "active" : "")}
                 onClick={closeSidebar}
               >
                 expo-ondevice-ai
               </NavLink>
             </li>
+            <li>
+              <NavLink
+                to="/docs/libraries/react-native"
+                className={({ isActive }) => (isActive ? "active" : "")}
+                onClick={closeSidebar}
+              >
+                react-native-ondevice-ai
+              </NavLink>
+            </li>
             <li>
               <NavLink
                 to="/docs/libraries/flutter"
                 className={({ isActive }) => (isActive ? "active" : "")}
                 onClick={closeSidebar}
               >
                 flutter_ondevice_ai
               </NavLink>
             </li>
           </ul>
@@
           <Route path="libraries" element={<LibrariesIndex />} />
           <Route path="libraries/expo" element={<ExpoLibrary />} />
+          <Route
+            path="libraries/react-native"
+            element={<ReactNativeLibrary />}
+          />
           <Route path="libraries/flutter" element={<FlutterLibrary />} />

Also applies to: 336-345, 412-415

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `packages/site/src/pages/docs/index.tsx` around lines 55 - 56, The
react-native docs page is present but not wired into the docs index so
/docs/libraries/react-native falls back to NotFound; import the ReactNative
library component (e.g., add an import for ReactNativeLibrary from
"./libraries/react-native") and add it to the same routing/sidebar registration
where FlutterLibrary is used (ensure the route path or sidebar entry for
"react-native" is added alongside the existing entries and that the component is
referenced in the routing list/render logic), removing the unused page file only
if you prefer not to expose the route.
locanara-versions.json-7-8 (1)

7-8: ⚠️ Potential issue | 🟠 Major

Do not update locanara-versions.json version entries without explicit release instruction.

Line 8 introduces a new version number entry ("flutter": "0.1.0"), which violates the repository rule for this file unless explicitly requested by maintainers/release flow.

Suggested revert for this PR (if no explicit release instruction exists)
-  "react-native": "0.1.0",
-  "flutter": "0.1.0"
+  "react-native": "0.1.0"

As per coding guidelines: locanara-versions.json: NEVER modify version numbers in locanara-versions.json unless explicitly instructed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `locanara-versions.json` around lines 7 - 8, Revert the unintended version
bump by removing the newly added JSON entry "flutter": "0.1.0" so
locanara-versions.json only contains the approved version entries (e.g., keep
the existing "react-native": "0.1.0" line as-is); do not modify any version
numbers in locanara-versions.json unless you have explicit release/maintainer
instruction.
libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/agent_demo.dart-114-119 (1)

114-119: ⚠️ Potential issue | 🟠 Major

Color.withValues(alpha:) requires Flutter 3.27.0 but package supports Flutter 3.22.0.

The code uses Color(...).withValues(alpha: 0.3) on lines 114-119, which was introduced in Flutter 3.27.0. However, the package's pubspec.yaml specifies flutter: ">=3.22.0", causing a compile error for users on Flutter 3.22–3.26. Use withOpacity(0.3) instead, which is available in all supported versions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/agent_demo.dart`
around lines 114 - 119, The Border.all color uses Color(...).withValues(alpha:
0.3) which requires Flutter 3.27; replace those calls with
Color(...).withOpacity(0.3) so the border colors work on Flutter >=3.22.
Specifically, update the expression inside the switch on step.type (the three
Color(...) branches) to call .withOpacity(0.3) instead of .withValues(alpha:
0.3).
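
As a sketch, the two call shapes differ only in the accessor (the color value and function names here are illustrative, not taken from agent_demo.dart):

```dart
import 'package:flutter/material.dart';

// Compiles on Flutter 3.22+ (withOpacity is deprecated from 3.27 but still available).
BoxDecoration legacyCompatible() => BoxDecoration(
      border: Border.all(color: const Color(0xFF6366F1).withOpacity(0.3)),
    );

// Compiles only on Flutter 3.27+, where withValues was introduced.
BoxDecoration modernOnly() => BoxDecoration(
      border: Border.all(color: const Color(0xFF6366F1).withValues(alpha: 0.3)),
    );
```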
.claude/guides/11-react-native-ondevice-ai.md-114-114 (1)

114-114: ⚠️ Potential issue | 🟠 Major

Nitro optional-field syntax in guide is inconsistent with repo convention.

Line 114 documents field?: Type | null, but this repo’s Nitro guidance uses the ?(field: Type | null) pattern. Please align this line to prevent spec/codegen mistakes.

Based on learnings: Applies to libraries/react-native-ondevice-ai/src/specs/** : Use optional field pattern '?(field: Type | null)' in Nitro specs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `.claude/guides/11-react-native-ondevice-ai.md` at line 114, Update the
optional-field example in the guide to match the repo’s Nitro convention by
replacing the `field?: Type | null` pattern with the `?(field: Type | null)`
pattern; ensure any examples or prose that mention optional fields (the inline
example on the affected line) use `?(field: Type | null)` so generated Nitro
specs in libraries/react-native-ondevice-ai/src/specs/** follow the repo
convention.
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/proofread_demo.dart-34-44 (1)

34-44: ⚠️ Potential issue | 🟠 Major

Guard async state updates with mounted checks.

After Line 34 await, setState runs unconditionally; if the widget is disposed, this can throw at runtime.

💡 Suggested fix
     try {
       final result = await _ai.proofread(_controller.text);
       sw.stop();
+      if (!mounted) return;
       setState(() {
         _result = result;
         _debugLog = DebugLog(api: 'proofread', request: {'text': _controller.text}, response: {'correctedText': result.correctedText, 'corrections': result.corrections.length}, timing: sw.elapsedMilliseconds);
         _loading = false;
       });
     } catch (e) {
       sw.stop();
+      if (!mounted) return;
       setState(() { _loading = false; });
-      if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
+      ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/proofread_demo.dart`
around lines 34 - 44, The async result handling in proofread_demo.dart calls
setState after awaiting _ai.proofread and inside the catch block without
checking widget lifecycle; wrap the post-await and error-path state updates in a
mounted guard (e.g., if (!mounted) return or if (mounted) { setState(...) }) so
that _result, _debugLog, and _loading are only updated while the widget is
mounted, and ensure showing the SnackBar is also predicated on mounted; apply
this to the setState calls around _ai.proofread and the catch block while
preserving the stopwatch stop calls.
libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/guardrail_demo.dart-45-50 (1)

45-50: ⚠️ Potential issue | 🟠 Major

Add mounted guards around async state updates.

setState is called after Line 45 await without checking widget lifecycle, so this can throw when users navigate away mid-run.

💡 Suggested fix
     try {
       final result = await _ai.chat(text, options: const ChatOptions(systemPrompt: 'You are a helpful assistant.'));
       sw.stop();
+      if (!mounted) return;
       setState(() { _resultText = result.message; _timing = sw.elapsedMilliseconds; _loading = false; });
     } catch (e) {
       sw.stop();
+      if (!mounted) return;
       setState(() { _errorText = 'Error: $e'; _loading = false; });
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/guardrail_demo.dart`
around lines 45 - 50, After the await to _ai.chat(...) and in the catch block,
wrap the setState calls with a mounted check to avoid updating state after the
widget is disposed; specifically, before calling setState to update
_resultText/_timing/_loading in the try block and before setState that updates
_errorText/_loading in the catch, verify mounted (or return early if not
mounted) so setState is only invoked while the widget is still active.
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/translate_demo.dart-38-48 (1)

38-48: ⚠️ Potential issue | 🟠 Major

Prevent setState after dispose in async flow.

Lines 40 and 47 update state after an async call without verifying mounted. This can crash if the user leaves the screen mid-request.

💡 Suggested fix
     try {
       final result = await _ai.translate(_controller.text, options: TranslateOptions(sourceLanguage: 'en', targetLanguage: _targetLang));
       sw.stop();
+      if (!mounted) return;
       setState(() {
         _result = result;
         _debugLog = DebugLog(api: 'translate', request: {'text': _controller.text, 'source': 'en', 'target': _targetLang}, response: {'translatedText': result.translatedText, 'sourceLanguage': result.sourceLanguage, 'targetLanguage': result.targetLanguage}, timing: sw.elapsedMilliseconds);
         _loading = false;
       });
     } catch (e) {
       sw.stop();
+      if (!mounted) return;
       setState(() { _loading = false; });
-      if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
+      ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/translate_demo.dart`
around lines 38 - 48, The async translate flow updates widget state after
awaiting _ai.translate and in the catch block without checking widget lifecycle;
modify the code around the translate call (the block that calls _ai.translate
and the surrounding setState calls) to guard all setState and
ScaffoldMessenger.of(context).showSnackBar(...) invocations with if (!mounted)
return or if (mounted) { setState(...) } so no setState/Scaffold access runs
after dispose; ensure the stopwatch (sw.stop()) still runs and that both the
success branch (where _result, _debugLog, _loading are set) and the error branch
only call setState/showSnackBar when mounted.
libraries/flutter_ondevice_ai/example/lib/widgets/shared/run_button.dart-28-28 (1)

28-28: ⚠️ Potential issue | 🟠 Major

Color.withValues() requires Flutter 3.27.0+, but project minimum is 3.3.0.

Line 28 uses Color.withValues(alpha: 0.6), which was first introduced in Flutter 3.27.0. Your project declares sdk: ">=3.3.0", making this code incompatible with Flutter 3.3.0–3.26.x.

Either update your minimum SDK to >=3.27.0 or replace with Color.withOpacity(0.6) (deprecated but available in 3.3.0+) and plan a migration path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `libraries/flutter_ondevice_ai/example/lib/widgets/shared/run_button.dart` at
line 28, The use of Color.withValues(alpha: 0.6) in the RunButton widget is
incompatible with the project's minimum Flutter SDK (>=3.3.0); replace the call
to Color.withValues(alpha: 0.6) with Color.withOpacity(0.6) (or update the SDK
constraint to >=3.27.0 if you intentionally require the newer API) so
disabledBackgroundColor uses a supported API; update the RunButton widget's
disabledBackgroundColor assignment accordingly (search for
disabledBackgroundColor in run_button.dart to locate the exact spot).
libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart-326-371 (1)

326-371: ⚠️ Potential issue | 🟠 Major

switchToDeviceAI is missing from the web method handler.

The Dart API exposes switchToDeviceAI(), but web falls into default Unimplemented. Add an explicit no-op/ack branch for consistency with other unsupported web stubs.

Suggested fix
       case 'downloadPromptApiModel':
         return false;
+      case 'switchToDeviceAI':
+        return null;
       default:
         throw PlatformException(
           code: 'Unimplemented',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart` around
lines 326 - 371, The web method handler handleMethodCall is missing a case for
the Dart API method name "switchToDeviceAI", causing calls to fall into the
default Unimplemented branch; add a new case 'switchToDeviceAI' in the switch
inside handleMethodCall and return a no-op/ack response (e.g., true or a success
map) consistent with other unsupported web stubs so the web plugin gracefully
acknowledges the call instead of throwing PlatformException.
libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart-389-401 (1)

389-401: ⚠️ Potential issue | 🟠 Major

isSupported can report false while supported web features are true.

isSupported excludes hasRewriter/hasWriter, so browsers with only those APIs can be marked unsupported even when feature flags expose usable functionality.

Suggested fix
-      'isSupported': hasSummarizer || hasLanguageModel || hasTranslator,
+      'isSupported':
+          hasSummarizer ||
+          hasLanguageModel ||
+          hasTranslator ||
+          hasRewriter ||
+          hasWriter,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart` around
lines 389 - 401, The isSupported boolean currently only checks hasSummarizer,
hasLanguageModel, and hasTranslator and thus can be false even when
rewriter/writer features exist; update the return object's 'isSupported'
expression to include hasRewriter and hasWriter (so it returns true if any of
hasSummarizer, hasLanguageModel, hasTranslator, hasRewriter, or hasWriter is
true)—edit the object returned in flutter_ondevice_ai_web.dart (the map literal
containing 'isSupported', 'isModelReady', 'platform', 'features') to include
those symbols in the isSupported check.
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart-46-63 (1)

46-63: ⚠️ Potential issue | 🟠 Major

Add mounted guards around post-await state updates.

The summarize request can finish after this screen is disposed; calling setState in that case throws an error. Both the try block (line 48) and catch block (line 60) lack mounted checks before setState. The redundant mounted guard around ScaffoldMessenger can be removed since the early return will prevent that code from executing.

🔧 Suggested fix
     try {
       final result = await _ai.summarize(_controller.text, options: options);
       sw.stop();
+      if (!mounted) return;
       setState(() {
         _result = result;
         _debugLog = DebugLog(
@@
     } catch (e) {
       sw.stop();
+      if (!mounted) return;
       setState(() { _loading = false; });
-      if (mounted) {
-        ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
-      }
+      ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart`
around lines 46 - 63, The try and catch blocks that call setState after awaiting
_ai.summarize can run after the widget is disposed; add mounted guards before
each setState to return early if !mounted, i.e., check mounted right after
sw.stop() in both the success branch (before updating _result, _debugLog,
_loading) and the catch branch (before setting _loading), and remove the
now-redundant mounted check around ScaffoldMessenger.of(context).showSnackBar so
the SnackBar call only runs when mounted is true.
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart-36-46 (1)

36-46: ⚠️ Potential issue | 🟠 Major

Guard async UI updates with mounted checks.

After the await on line 36 completes, if the widget has been disposed (e.g., user navigated away), calling setState() on lines 38 and 45 will crash with "setState() called after dispose()". This is a real race condition that can occur in normal user flows.

Add if (!mounted) return; immediately after sw.stop() in both the try and catch blocks to prevent setState calls on a disposed widget. The existing if (mounted) check on line 46 should be removed since the mounted guard is now at the block entry.

🔧 Suggested fix
     try {
       final result = await _ai.classify(_textController.text, options: ClassifyOptions(categories: _selectedCategories));
       sw.stop();
+      if (!mounted) return;
       setState(() {
         _result = result;
         _debugLog = DebugLog(api: 'classify', request: {'text': _textController.text, 'categories': _selectedCategories}, response: {'classifications': result.classifications.map((c) => {'label': c.label, 'score': c.score}).toList()}, timing: sw.elapsedMilliseconds);
         _loading = false;
       });
     } catch (e) {
       sw.stop();
+      if (!mounted) return;
       setState(() { _loading = false; });
-      if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
+      ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart`
around lines 36 - 46, After awaiting _ai.classify and stopping the stopwatch
(sw.stop()), guard any subsequent UI updates by adding "if (!mounted) return;"
immediately after sw.stop() in both the try and catch blocks so setState() and
ScaffoldMessenger calls never run on a disposed widget; ensure you still call
setState to update _result, _debugLog, and _loading only when mounted, and
remove the now‑redundant "if (mounted)" check that currently wraps the
ScaffoldMessenger.of(context).showSnackBar call; reference symbols:
_ai.classify, sw.stop(), setState(), _textController, DebugLog, mounted,
ScaffoldMessenger.of(context).showSnackBar.
libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift-200-201 (1)

200-201: ⚠️ Potential issue | 🟠 Major

Avoid logging raw user prompt content.

Prompt text can contain sensitive data; logging it undermines the privacy posture.

Proposed fix
-        NSLog("\(TAG) chat() message: \(message.prefix(100))")
+        NSLog("\(TAG) chat() message length: \(message.count)")
@@
-        NSLog("\(TAG) chatStream() message: \(message.prefix(100))")
+        NSLog("\(TAG) chatStream() message length: \(message.count)")

Also applies to: 218-218

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift`
around lines 200 - 201, The NSLog call in the chat() handler currently logs raw
prompt content via NSLog("\(TAG) chat() message: \(message.prefix(100))");
remove or redact that sensitive data: stop logging the message text and instead
log non-sensitive metadata (e.g., message length, a hash, or a fixed
placeholder). Update the instances in FlutterOndeviceAiPlugin.swift where chat()
logs the message (the NSLog referencing TAG and message) and the similar NSLog
usage around the other occurrence to avoid printing raw user prompts.
libraries/flutter_ondevice_ai/example/lib/app_state.dart-147-147 (1)

147-147: ⚠️ Potential issue | 🟠 Major

Model readiness is over-reported when device is supported but model isn’t ready yet.

Using cap.isModelReady || cap.isSupported can mark web as ready too early and suppress setup guidance.

Proposed fix
-      _isModelReady = cap.isModelReady || cap.isSupported;
+      final modelReady = kIsWeb ? cap.isModelReady : (cap.isModelReady || cap.isSupported);
+      _isModelReady = modelReady;
@@
-      final modelReady = cap.isModelReady || cap.isSupported;
       _availableFeatures = _featureDefinitions.map((def) {

Also applies to: 161-172

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `libraries/flutter_ondevice_ai/example/lib/app_state.dart` at line 147, The
readiness flag _isModelReady is being set using the disjunction cap.isModelReady
|| cap.isSupported which incorrectly marks the model ready when the device only
supports it; change the logic to require cap.isModelReady (and only fallback to
cap.isSupported in a deliberate, separate check if truly intended). Update the
assignment to _isModelReady to rely solely on cap.isModelReady and audit the
similar assignments/blocks referenced around the same area (the other occurrence
in the component/setup logic) to use cap.isModelReady instead of the OR
expression so UI/setup guidance is not suppressed when the device is supported
but the model isn't ready.
libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift-225-250 (1)

225-250: ⚠️ Potential issue | 🟠 Major

Propagate errors to EventChannel stream listeners.

When chain.streamRun() fails, the method call receives an error, but stream listeners receive no terminal signal. Stream listeners need error notification to properly handle failures.

Add error propagation to the event sink before completing the method call:

Proposed fix
         } catch {
+            chatStreamSink?(FlutterError(
+                code: "ERR_CHAT_STREAM",
+                message: error.localizedDescription,
+                details: nil
+            ))
             result(FlutterError(code: "ERR_CHAT_STREAM", message: error.localizedDescription, details: nil))
         }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift`
around lines 225 - 250, When chain.streamRun(...) throws, you must propagate the
failure to the EventChannel by sending an error payload through chatStreamSink
before finishing the method; in the catch block for the async loop (around
chain.streamRun and result), call chatStreamSink with a terminal message that
includes the error (e.g. "error": error.localizedDescription, "isFinal": true,
"delta": "", "accumulated": accumulated, "conversationId": NSNull()) and then
call result(FlutterError(...)) as currently implemented so both stream listeners
and the method caller receive the failure; update the catch handling around
chatStreamSink, chain.streamRun, and result accordingly.
libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart-340-364 (1)

340-364: ⚠️ Potential issue | 🟠 Major

Use semantic button widgets instead of raw GestureDetector for interactive controls.

Bare GestureDetector lacks proper button semantics, focus management, and keyboard activation on web/desktop, which breaks accessibility for these platforms.

The proposed fix using Semantics + Material + InkWell is a good starting point, but for complete keyboard accessibility it should also include FocusableActionDetector with ActivateIntent to enable Enter/Space activation, along with visible focus indicators and cursor hints (e.g., MouseRegion(cursor: SystemMouseCursors.click)). See Flutter's focus system and custom button patterns in the accessibility documentation for the recommended approach.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart`
around lines 340 - 364, Replace the bare GestureDetector wrapper around the copy
control with an accessible interactive widget: wrap the Container in Semantics
(button: true, label: 'Copy flag') and Material + InkWell for ripple and hit
testing, and add FocusableActionDetector wired to ActivateIntent to handle
Enter/Space activation, along with a MouseRegion(cursor:
SystemMouseCursors.click) for pointer hinting; ensure focus visuals are drawn
(e.g., via focused state used to show a visible outline around the Container)
and keep the existing action logic (Clipboard.setData(...) and
ScaffoldMessenger.of(context).showSnackBar(...)) inside the
ActivateIntent/InkWell onTap handlers so keyboard, mouse, and screen-reader
users can activate the control.
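
A minimal sketch of the accessible replacement described above (the widget names are from the Flutter framework; `CopyFlagButton` and `onCopy` are hypothetical, not taken from model_selection_sheet.dart):

```dart
import 'package:flutter/material.dart';

/// Accessible tap target: semantics, ripple, focus, and pointer cursor.
class CopyFlagButton extends StatelessWidget {
  const CopyFlagButton({super.key, required this.onCopy, required this.child});

  final VoidCallback onCopy;
  final Widget child;

  @override
  Widget build(BuildContext context) {
    return Semantics(
      button: true,
      label: 'Copy flag',
      child: MouseRegion(
        cursor: SystemMouseCursors.click,
        child: Material(
          color: Colors.transparent,
          // InkWell provides hit testing, a focus node, and keyboard
          // activation (Enter/Space) out of the box.
          child: InkWell(onTap: onCopy, child: child),
        ),
      ),
    );
  }
}
```

InkWell already wires up focus and keyboard activation internally, so an explicit FocusableActionDetector with ActivateIntent is usually only needed for fully custom gesture surfaces.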
libraries/flutter_ondevice_ai/example/ios/Podfile-66-69 (1)

66-69: ⚠️ Potential issue | 🟠 Major

Fail fast when bridge/SPM setup fails in post_install.

These rescue blocks only log and continue, which can leave the project in a partially configured state that fails later at build/runtime.

🔧 Proposed fix
   rescue => e
     puts "\u26a0\ufe0f [flutter_ondevice_ai] Bridge configuration failed: #{e.message}"
     puts e.backtrace&.first(3)&.join("\n")
+    raise e
   end
 end
@@
   rescue => e
     puts "\u26a0\ufe0f [flutter_ondevice_ai] SPM embedding setup failed: #{e.message}"
     puts e.backtrace&.first(3)&.join("\n")
+    raise e
   end
 end

Also applies to: 120-123

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `libraries/flutter_ondevice_ai/example/ios/Podfile` around lines 66 - 69, The
rescue blocks inside the Podfile's post_install should fail fast instead of only
logging; update the rescue handlers in the post_install block (and the similar
handler later in the file) to re-raise the caught exception (or call abort)
after logging the error and backtrace so the build stops immediately on
bridge/SPM setup failure rather than continuing in a partially configured state.
Ensure you reference the post_install do |installer| rescue blocks and preserve
the existing log messages while re-raising the original exception to surface the
failure to the caller.
libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/project.pbxproj-469-470 (1)

469-470: ⚠️ Potential issue | 🟠 Major

Deployment target drift: project configs still declare iOS 13.0.

These build settings conflict with the iOS 17.0 requirement used elsewhere in this integration. Align the project-level IPHONEOS_DEPLOYMENT_TARGET values to 17.0 for consistency and predictable builds.

🔧 Proposed fix
-				IPHONEOS_DEPLOYMENT_TARGET = 13.0;
+				IPHONEOS_DEPLOYMENT_TARGET = 17.0;

Also applies to: 599-600, 650-651

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/project.pbxproj`
around lines 469 - 470, Project-level build settings still set
IPHONEOS_DEPLOYMENT_TARGET = 13.0 which conflicts with iOS 17.0 used elsewhere;
update every occurrence of the IPHONEOS_DEPLOYMENT_TARGET token in the
project.pbxproj (including the blocks containing MTL_ENABLE_DEBUG_INFO) to 17.0
so project and target settings are consistent, verify all target and project
build configurations now report 17.0, and leave unrelated keys (e.g.,
MTL_ENABLE_DEBUG_INFO) unchanged.
libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift-120-133 (1)

120-133: ⚠️ Potential issue | 🟠 Major

Reset cancellation state before starting a new stream.

After cancel() is called once, isCancelled remains true, so subsequent generateStreaming calls can terminate immediately.

🔧 Proposed fix
     func generateStreaming(prompt: String, config: InferenceConfig) -> AsyncThrowingStream<String, Error> {
         AsyncThrowingStream { continuation in
             Task { [weak self] in
                 guard let self, self.isLoaded, let session = self.llmSession else {
                     continuation.finish(throwing: LocanaraError.custom(.modelNotLoaded, "Model not loaded"))
                     return
                 }
+                self.lock.withLock { self.isCancelled = false }
                 do {
                     for try await text in session.streamResponse(to: prompt) {
                         if self.lock.withLock({ self.isCancelled }) { break }
                         continuation.yield(text)
                     }

Also applies to: 160-164

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`
around lines 120 - 133, Reset the cancellation flag under the same mutex before
starting a new stream so previous cancel() calls don't short-circuit new
sessions: inside generateStreaming (the AsyncThrowingStream initializer / Task
block) before the guard, call lock.withLock { isCancelled = false } to clear
cancellation state, and apply the same change to the other streaming entrypoint
referenced around the 160-164 region; keep using the existing lock and
isCancelled variables to avoid races.
libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/LocanaraLlamaBridge.podspec-12-13 (1)

12-13: ⚠️ Potential issue | 🟠 Major

Align bridge swift_version with C++ interop support and Locanara SDK requirements.

Line 12 declares s.swift_version = "5.0", but the Podfile enables C++ interop with -cxx-interoperability-mode=default and C++20 on this target, which requires Swift 5.9 minimum. Since this bridge depends on Locanara and the SDK requires Swift 6.0+, use "6.0" to match the SDK's strict concurrency requirements and avoid toolchain incompatibility.

🔧 Proposed fix
-  s.swift_version = "5.0"
+  s.swift_version = "6.0"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/LocanaraLlamaBridge.podspec`
around lines 12 - 13, Update the Podspec's Swift version to meet C++ interop and
Locanara SDK requirements by changing the s.swift_version declaration in
LocanaraLlamaBridge.podspec from "5.0" to "6.0"; ensure the s.swift_version
symbol is set to "6.0" so the target matches the -cxx-interoperability-mode and
C++20 toolchain and satisfies the SDK's strict concurrency/Swift 6 requirement.
libraries/flutter_ondevice_ai/example/ios/Podfile-35-53 (1)

35-53: ⚠️ Potential issue | 🟠 Major

Make configure_llama_bridge idempotent for repeated pod install runs.

Lines 35–38 and 48–53 unconditionally append package references and dependencies without checking for existing entries. Since post_install runs on every pod install, repeated invocations create duplicate SPM references and product dependencies, which destabilizes the Xcode project. This pattern is already handled elsewhere in the file (lines 83–86 explicitly remove duplicate build phases before re-adding them).

Additionally, errors in the function are silently swallowed by the rescue block (lines 66–69), masking any build failures caused by corrupted project state.

Apply the idempotency checks before appending: verify that pkg_ref doesn't already exist in package_references, and verify that each product dependency doesn't already exist in bridge_target.package_product_dependencies before creating and appending new instances.
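
The check-before-append pattern can be sketched in plain Ruby. Hashes stand in here for the Xcodeproj `XCRemoteSwiftPackageReference` / `XCSwiftPackageProductDependency` objects; a real fix would match on the same `repositoryURL` and `product_name` attributes inside `configure_llama_bridge`:

```ruby
# Hedged sketch: idempotent find-or-append, modeled on the fix described above.
# Plain hashes stand in for Xcodeproj objects so the pattern itself is clear.

REPO_URL = 'https://github.com/tattn/LocalLLMClient.git'

# Reuse an existing package reference with the same repositoryURL;
# only create and append a new one when none exists.
def find_or_add_package(package_references, url)
  existing = package_references.find { |ref| ref[:repositoryURL] == url }
  return existing if existing
  pkg_ref = { repositoryURL: url, requirement: { 'kind' => 'branch', 'branch' => 'main' } }
  package_references << pkg_ref
  pkg_ref
end

# Same idea for product dependencies, keyed by product_name.
def find_or_add_product(product_dependencies, pkg_ref, name)
  existing = product_dependencies.find { |dep| dep[:product_name] == name }
  return existing if existing
  dep = { product_name: name, package: pkg_ref }
  product_dependencies << dep
  dep
end

refs = []
deps = []
2.times do # simulate two consecutive `pod install` runs
  pkg = find_or_add_package(refs, REPO_URL)
  %w[LocalLLMClient LocalLLMClientLlama].each { |p| find_or_add_product(deps, pkg, p) }
end
# refs.length == 1, deps.length == 2 (no duplicates after the second run)
```

The second simulated run reuses the existing entries instead of appending duplicates, which is exactly the property repeated `pod install` invocations need.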

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/flutter_ondevice_ai/example/ios/Podfile` around lines 35 - 53, The
configure_llama_bridge code unconditionally appends a new
XCRemoteSwiftPackageReference (pkg_ref) and XCSwiftPackageProductDependency
entries to pods_project.root_object.package_references and
bridge_target.package_product_dependencies, causing duplicates on repeated pod
install runs; update configure_llama_bridge to first search
pods_project.root_object.package_references for an existing
XCRemoteSwiftPackageReference with repositoryURL
'https://github.com/tattn/LocalLLMClient.git' and reuse it if present (assign to
pkg_ref only when absent), and for each product_name ('LocalLLMClient',
'LocalLLMClientLlama') check bridge_target.package_product_dependencies for an
existing dependency with matching product_name and package reference before
creating/appending a new XCSwiftPackageProductDependency; also remove or improve
the silent rescue by logging or re-raising the caught exception so build issues
in the function aren't swallowed.
libraries/flutter_ondevice_ai/example/ios/Podfile-37-37 (1)

37-37: ⚠️ Potential issue | 🟠 Major

Pin LocalLLMClient to a stable version instead of tracking main branch.

Line 37 pins to a mutable branch, which makes builds non-reproducible and liable to break unexpectedly. LocalLLMClient publishes stable tagged releases (latest: 0.4.6); use "exactVersion" or "revision" in the requirement hash.

🔧 Proposed fix
-    pkg_ref.requirement = { 'kind' => 'branch', 'branch' => 'main' }
+    pkg_ref.requirement = { 'kind' => 'exactVersion', 'version' => '0.4.6' }

Or pin to a specific commit:

pkg_ref.requirement = { 'kind' => 'revision', 'revision' => '<commit-sha>' }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/flutter_ondevice_ai/example/ios/Podfile` at line 37, The Podfile
currently pins LocalLLMClient by setting pkg_ref.requirement = { 'kind' =>
'branch', 'branch' => 'main' } which tracks a mutable branch; change the
requirement hash in the pkg_ref.requirement assignment to a stable pin using
either 'exactVersion' with the released tag (e.g. "0.4.6") or 'revision' with a
commit SHA to ensure reproducible builds; update the pkg_ref.requirement entry
for LocalLLMClient accordingly (replace 'branch' => 'main' with either
'exactVersion' => '<tag>' or 'revision' => '<commit-sha>').
🟡 Minor comments (14)
packages/site/src/pages/docs/libraries/react-native.tsx-74-128 (1)

74-128: ⚠️ Potential issue | 🟡 Minor

API parity statement and list are inconsistent (describeImage() missing).

This section says the API matches expo-ondevice-ai, but describeImage() is not listed, which can mislead readers.

🛠️ Suggested doc patch
           <li>
             <Link to="/docs/utils/proofread">
               <code>proofread()</code>
             </Link>{" "}
             - Grammar correction
           </li>
+          <li>
+            <Link to="/docs/utils/describe-image">
+              <code>describeImage()</code>
+            </Link>{" "}
+            - Image description
+          </li>
         </ul>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/site/src/pages/docs/libraries/react-native.tsx` around lines 74 -
128, The docs claim API parity with expo-ondevice-ai but the utility list omits
describeImage(); add an entry for describeImage() inside the ul alongside the
other utilities (similar pattern used for summarize(), classify(), etc.),
linking to the describe-image docs page (e.g., <Link
to="/docs/utils/describe-image"> with a <code>describeImage()</code> label) so
the list includes describeImage() alongside getDeviceCapability() and the other
utility functions and keeps the API parity statement accurate.
packages/site/src/pages/docs/libraries/flutter.tsx-64-120 (1)

64-120: ⚠️ Potential issue | 🟡 Minor

Available APIs list is incomplete (describeImage() missing).

Given this page positions the Flutter API as aligned with native SDK utilities, omitting describeImage() creates an avoidable mismatch in docs coverage.

🛠️ Suggested doc patch
           <li>
             <Link to="/docs/utils/proofread">
               <code>proofread()</code>
             </Link>{" "}
             - Grammar correction
           </li>
+          <li>
+            <Link to="/docs/utils/describe-image">
+              <code>describeImage()</code>
+            </Link>{" "}
+            - Image description
+          </li>
         </ul>
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/site/src/pages/docs/libraries/flutter.tsx` around lines 64 - 120,
The "Available APIs" list omits the Flutter image description API; add a new
list item for describeImage() to mirror native SDKs so
FlutterOndeviceAi.instance shows parity — insert a <li> containing a Link to the
describe-image docs (e.g. <Link to="/docs/utils/describe-image">) and a
<code>describeImage()</code> label with a short description like "Image
captioning/description" alongside the other utilities in the same unordered
list.
libraries/flutter_ondevice_ai/example/lib/widgets/shared/stat_badge.dart-22-23 (1)

22-23: ⚠️ Potential issue | 🟡 Minor

Avoid runtime toUpperCase() for i18n-safe label display.

At Line 22, label.toUpperCase() uses Dart's locale-agnostic case mapping, which produces incorrect results for certain locales (e.g., Turkish with dotted/dotless i). While this example app currently has no internationalization setup, pass display-ready labels into the widget instead—this aligns with Dart best practices and avoids issues if localization is added later.

Suggested change
-            label.toUpperCase(),
+            label,
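
For readers unfamiliar with the dotted/dotless-i problem, it can be demonstrated in Ruby, whose `String#upcase` accepts a `:turkic` case-mapping option. Dart's `toUpperCase` exposes only the locale-agnostic form, which is the point of the finding:

```ruby
# Locale-agnostic vs. Turkic-aware case mapping.
puts 'i'.upcase           # => "I"  (locale-agnostic, what Dart's toUpperCase does)
puts 'i'.upcase(:turkic)  # => "İ"  (dotted capital I, U+0130, correct for Turkish)
```

A Turkish reader sees "I" where "İ" is expected whenever uppercasing happens at render time, which is why the comment recommends passing display-ready labels instead.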
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/flutter_ondevice_ai/example/lib/widgets/shared/stat_badge.dart`
around lines 22 - 23, The widget currently calls label.toUpperCase() in the
StatBadge display (remove this call) because locale-agnostic uppercasing can
break i18n; instead accept and render a display-ready label (leave casing
decisions to callers or localization code). Update the StatBadge rendering to
use label as-is (remove any toUpperCase usage) and adjust any places that
construct StatBadge to pass already-formatted/uppercased strings or localized
variants (search for StatBadge and the label parameter to update callers).
libraries/flutter_ondevice_ai/example/lib/widgets/shared/debug_log_panel.dart-103-105 (1)

103-105: ⚠️ Potential issue | 🟡 Minor

Guard JSON serialization to avoid runtime crashes.

Line 104 can throw UnsupportedError for non-encodable dynamic values and crash the widget tree. Please add a safe fallback for debug rendering.

Proposed fix
   Widget _codeBlock(dynamic data) {
-    final text = const JsonEncoder.withIndent('  ').convert(data);
+    final String text;
+    try {
+      text = const JsonEncoder.withIndent('  ').convert(data);
+    } catch (_) {
+      text = data?.toString() ?? 'null';
+    }
     return SizedBox(
       height: 200,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/debug_log_panel.dart`
around lines 103 - 105, The JsonEncoder.withIndent('  ').convert call in
_codeBlock can throw UnsupportedError for non-encodable dynamic values; update
the _codeBlock function to guard serialization by wrapping the convert call in a
try/catch (catch UnsupportedError and generic Exception), and on error (or if
data is null) produce a safe fallback string (e.g., data.toString() or a clear
"<non-serializable>" message) to render in the widget instead of letting the
exception bubble and crash the widget tree; ensure the final returned widget
uses that safe string for display.
libraries/flutter_ondevice_ai/example/lib/widgets/shared/info_row.dart-14-19 (1)

14-19: ⚠️ Potential issue | 🟡 Minor

Prevent text overflow in narrow layouts.

Lines 14–19 can overflow when label or value is long. Make both sides flexible and add overflow handling.

Proposed fix
       child: Row(
-        mainAxisAlignment: MainAxisAlignment.spaceBetween,
         children: [
-          Text(label, style: const TextStyle(fontSize: 17, color: Colors.black)),
-          Text(value, style: TextStyle(fontSize: 17, color: valueColor ?? const Color(0xFF666666))),
+          Expanded(
+            child: Text(
+              label,
+              style: const TextStyle(fontSize: 17, color: Colors.black),
+              maxLines: 1,
+              overflow: TextOverflow.ellipsis,
+            ),
+          ),
+          const SizedBox(width: 12),
+          Flexible(
+            child: Text(
+              value,
+              textAlign: TextAlign.end,
+              style: TextStyle(fontSize: 17, color: valueColor ?? const Color(0xFF666666)),
+              maxLines: 1,
+              overflow: TextOverflow.ellipsis,
+            ),
+          ),
         ],
       ),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/flutter_ondevice_ai/example/lib/widgets/shared/info_row.dart`
around lines 14 - 19, The Row's Text widgets can overflow; update the InfoRow
widget to make both label and value flexible and clamp long text: wrap each Text
in a Flexible/Expanded (use Flexible with appropriate fit for label and value),
set maxLines: 1, overflow: TextOverflow.ellipsis, and for the right-side value
add textAlign: TextAlign.right so long values truncate and don't break the
layout; keep the existing fontSize and color handling (valueColor) intact.
.claude/guides/12-flutter-ondevice-ai.md-226-227 (1)

226-227: ⚠️ Potential issue | 🟡 Minor

Sync app scope counts in the guide.

Lines 226-227 still describe a smaller app scope. Please update tab/feature counts to match the current example app in this PR so the guide doesn’t drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/guides/12-flutter-ondevice-ai.md around lines 226 - 227, Update the
guide text where it currently says "3-tab navigation: Features, Device,
Settings" and "Feature list → demo screens for all 7 AI features + chat" to
reflect the actual app scope in this PR — change the tab count/names and the
feature/demo count to match the example app (update the wording that references
the tab navigation and the feature list/demo screens so they match the current
app implementation). Locate and edit the exact strings "3-tab navigation:
Features, Device, Settings" and "Feature list → demo screens for all 7 AI
features + chat" in .claude/guides/12-flutter-ondevice-ai.md and replace them
with the correct tab names/number and correct feature/demo count used by the PR.
libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiSerialization.kt-137-137 (1)

137-137: ⚠️ Potential issue | 🟡 Minor

Prefer "other" over empty string for unknown correction type.

Line 137 serializes null type as "", which bypasses downstream null-fallback logic and can render a blank label.

💡 Suggested fix
-                    "type" to (c.type ?: ""),
+                    "type" to (c.type ?: "other"),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiSerialization.kt`
at line 137, The serialization currently writes the "type" field as an empty
string when c.type is null; update the serializer in
FlutterOndeviceAiSerialization.kt to use a default of "other" instead of ""
(i.e., when serializing the "type" key for the correction object `c`, replace
the null-coalescing fallback from `""` to `"other"`) so downstream
null/unknown-fallback logic receives a meaningful sentinel value.
libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt-49-51 (1)

49-51: ⚠️ Potential issue | 🟡 Minor

Force-unwrapping context!! may cause NPE if accessed before attachment.

The locanara lazy property uses context!! which will throw NPE if accessed before onAttachedToEngine sets the context. Consider adding a null check or ensuring the lazy property is only accessed after attachment.

🛡️ Suggested defensive approach
     private val locanara: Locanara by lazy {
-        Locanara.getInstance(context!!)
+        Locanara.getInstance(
+            context ?: throw IllegalStateException("Plugin not attached to engine")
+        )
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`
around lines 49 - 51, The lazy property locanara in FlutterOndeviceAiPlugin
currently force-unwraps context (context!!) which can NPE if accessed before
onAttachedToEngine; change initialization to avoid force-unwrapping by either:
(A) make locanara nullable and initialize it inside onAttachedToEngine using
Locanara.getInstance(context) after verifying context is non-null, or (B) keep
locanara non-null but initialize it in onAttachedToEngine (remove the lazy
block) or use a lazy block that calls requireNotNull(context) with a clear error
message so access before attachment fails fast; update any callers to handle the
nullable form if you choose option A.
libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt-670-703 (1)

670-703: ⚠️ Potential issue | 🟡 Minor

Ensure EventSink callbacks are invoked on the main thread.

In handleDownloadPromptApiModel, the downloadProgressSink?.success() calls (lines 679, 689) are made within the suspend function context. Since locanara.downloadPromptApiModel may invoke the progress callback on a background thread, ensure these calls are dispatched to the main thread. The earlier downloadFile method correctly uses scope.launch { ... } for progress updates.

🛡️ Suggested fix to ensure main thread dispatch
             locanara.downloadPromptApiModel { progress ->
                 val pct = if (progress.bytesToDownload > 0) {
                     progress.bytesDownloaded.toDouble() / progress.bytesToDownload.toDouble()
                 } else {
                     0.0
                 }
-                downloadProgressSink?.success(
+                scope.launch {
+                    downloadProgressSink?.success(
                     mapOf(
                         "modelId" to "gemini-nano",
                         "bytesDownloaded" to progress.bytesDownloaded,
                         "totalBytes" to progress.bytesToDownload,
                         "progress" to pct,
                         "state" to "downloading",
                     )
-                )
+                    )
+                }
             }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`
around lines 670 - 703, The progress callbacks in handleDownloadPromptApiModel
may be invoked on a background thread; wrap all calls to
downloadProgressSink?.success (both the per-progress callback passed into
locanara.downloadPromptApiModel and the final completion success call) so they
run on the main/UI thread (e.g., dispatch via the plugin's CoroutineScope with
Dispatchers.Main or use activity.runOnUiThread) to match the approach used in
downloadFile and avoid threading issues with EventSink.
libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/chain_demo.dart-48-50 (1)

48-50: ⚠️ Potential issue | 🟡 Minor

Use topClassification for the conditional decision.

classifications.first is order-dependent; use the explicit top classification field to avoid wrong rewrite style selection.

Suggested fix
-        final topCategory = classify.classifications.isNotEmpty ? classify.classifications.first.label : 'Unknown';
+        final topCategory = classify.topClassification.label;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/chain_demo.dart`
around lines 48 - 50, The current logic uses
classify.classifications.first.label (order-dependent) to set topCategory and
decide rewrite style; change it to use the explicit top classification field
returned by classify (e.g., classify.topClassification?.label ?? 'Unknown') when
computing topCategory, then pass that topCategory into RewriteOptions for
rewrite (RewriteOutputType.professional vs friendly) so the rewrite decision
uses the topClassification rather than classifications.first; keep the same
fallback 'Unknown' behavior.
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart-101-116 (1)

101-116: ⚠️ Potential issue | 🟡 Minor

Add button enablement does not refresh while typing.

The button state depends on _customController.text, but this widget does not rebuild as the text changes.

🔧 Suggested fix
               child: TextField(
                 controller: _customController,
+                onChanged: (_) => setState(() {}),
                 decoration: InputDecoration(
                   hintText: 'Add custom category...',
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart`
around lines 101 - 116, The Add button doesn't rebuild while typing because the
widget never reacts to changes in _customController; add a listener to
_customController in the State's initState that calls setState() when the text
changes (so the TextButton's onPressed check
_customController.text.trim().isEmpty updates), and remove the listener in
dispose; locate _customController, the _addCustomCategory handler and the
TextButton/TextField in classify_demo.dart to implement this change.
libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart-79-85 (1)

79-85: ⚠️ Potential issue | 🟡 Minor

The “Enable” CTA is currently a no-op.

Line 80 uses onTap: () {} so the button appears actionable but does nothing. Please wire a real action (or remove tap affordance until action exists).

🐛 Minimal safe fix (remove misleading tap behavior)
-        trailing: GestureDetector(
-          onTap: () {},
-          child: Container(
+        trailing: Container(
             padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 8),
             decoration: BoxDecoration(color: const Color(0xFFFF9500), borderRadius: BorderRadius.circular(8)),
             child: const Text('Enable', style: TextStyle(color: Colors.white, fontSize: 14, fontWeight: FontWeight.w600)),
-          ),
-        ),
+          ),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart`
around lines 79 - 85, The "Enable" CTA is currently a no-op because the trailing
GestureDetector in ai_status_banner.dart has onTap: () {} — either wire a real
handler or remove the misleading tap affordance; for a minimal safe fix replace
the GestureDetector with a non-interactive wrapper (e.g., use the Container
directly or wrap with Padding) or set onTap to null so the button no longer
appears actionable until you implement an actual enable function (update the
trailing GestureDetector/onTap usage in the AIStatusBanner widget).
libraries/flutter_ondevice_ai/example/lib/main.dart-49-60 (1)

49-60: ⚠️ Potential issue | 🟡 Minor

Guard route argument parsing to prevent runtime crashes.

Lines 51 and 57 perform a hard cast, and lines 53 and 59 force-unwrap values. A malformed or missing route argument will throw at runtime. Please validate args before constructing detail screens.

🐛 Proposed fix
         onGenerateRoute: (settings) {
           if (settings.name == '/feature-detail') {
-            final args = settings.arguments as Map<String, String>;
+            final args = settings.arguments;
+            if (args is! Map) return null;
+            final id = args['id'];
+            final name = args['name'];
+            if (id is! String || name is! String) return null;
             return MaterialPageRoute(
-              builder: (_) => FeatureDetailScreen(id: args['id']!, name: args['name']!),
+              builder: (_) => FeatureDetailScreen(id: id, name: name),
             );
           }
           if (settings.name == '/framework-detail') {
-            final args = settings.arguments as Map<String, String>;
+            final args = settings.arguments;
+            if (args is! Map) return null;
+            final id = args['id'];
+            final name = args['name'];
+            if (id is! String || name is! String) return null;
             return MaterialPageRoute(
-              builder: (_) => FrameworkDetailScreen(id: args['id']!, name: args['name']!),
+              builder: (_) => FrameworkDetailScreen(id: id, name: name),
             );
           }
           return null;
         },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/flutter_ondevice_ai/example/lib/main.dart` around lines 49 - 60,
The route handler in onGenerateRoute currently force-casts settings.arguments
and force-unwraps map entries for FeatureDetailScreen and FrameworkDetailScreen,
which will crash on malformed or missing args; update the code to defensively
check that settings.arguments is a Map<String, String> (or contains the expected
keys), verify that args['id'] and args['name'] are non-null/non-empty before
creating FeatureDetailScreen or FrameworkDetailScreen, and return a safe
fallback Route (e.g., an error screen or Navigator.pop-friendly route) when
validation fails; look for the onGenerateRoute closure and the usage sites of
settings.arguments, FeatureDetailScreen(id: ...), and FrameworkDetailScreen(id:
...) to implement these checks.
libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart-67-74 (1)

67-74: ⚠️ Potential issue | 🟡 Minor

Don’t always show a success message after refresh.

Line 71 currently reports success even if SDK refresh fails. Please branch snackbar messaging by final SDK state.

🐛 Proposed fix
             onTap: () async {
               await state.initializeSDK();
-              if (context.mounted) {
-                ScaffoldMessenger.of(context).showSnackBar(
-                  const SnackBar(content: Text('SDK state has been refreshed.')),
-                );
-              }
+              if (!context.mounted) return;
+              final snackBar = state.sdkState == SDKState.error
+                  ? SnackBar(content: Text(state.errorMessage ?? 'Failed to refresh SDK state.'))
+                  : const SnackBar(content: Text('SDK state has been refreshed.'));
+              ScaffoldMessenger.of(context).showSnackBar(snackBar);
             },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart`
around lines 67 - 74, The current onTap handler always shows a success SnackBar
after calling state.initializeSDK(); change it to inspect the resulting SDK
state (e.g. a boolean like state.isInitialized or a status property
returned/updated by initializeSDK()) and branch the Snackbar message: show the
success message only when the SDK did initialize successfully, otherwise show a
failure/error message; keep the context.mounted guard and use
ScaffoldMessenger.of(context).showSnackBar accordingly (update references around
state.initializeSDK() and the SnackBar call).

@hyochan hyochan added the 🎯 feature New feature label Feb 26, 2026
- Remove dart:io imports from example app (web compatibility)
- Replace Platform.isIOS with defaultTargetPlatform across all files
- Add inference serialization to LlamaCppBridgeEngine (beginInference/endInference)
- Add mounted checks in chat_demo.dart and summarize_demo.dart
- Fix substring RangeError in summarize_demo.dart debug log
- Set readTimeout to 900s for model downloads (was 0/unlimited)
- Bump example minSdk from 31 to 34 (Gemini Nano requirement)
- Clarify guide diagram dependency labels

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

♻️ Duplicate comments (1)
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart (1)

49-50: ⚠️ Potential issue | 🟡 Minor

Missing _messages.isEmpty guard before accessing .last in non-streaming path.

Line 49 checks mounted but not _messages.isEmpty. If the user clears messages (line 136) while awaiting the chat response, _messages.last at line 50 will throw RangeError. The streaming path correctly has both guards at line 38, but this path is missing the second check.

🔧 Proposed fix
-        if (!mounted) return;
+        if (!mounted || _messages.isEmpty) return;
         setState(() { _messages.last = _Message(role: 'assistant', content: result.message); });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart`
around lines 49 - 50, The non-streaming completion branch updates _messages.last
without ensuring the list isn't empty, causing a RangeError if messages were
cleared while awaiting the response; update the guard to check
_messages.isNotEmpty (e.g., ensure !mounted &&/or return early if
_messages.isEmpty) before calling setState that assigns _messages.last =
_Message(role: 'assistant', content: result.message) so it matches the streaming
path's protections.
🧹 Nitpick comments (3)
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart (1)

135-136: Consider disabling clear button while loading.

The clear button remains enabled during active requests. While the guards at line 38 prevent crashes, allowing message clearing mid-request creates an inconsistent UX where the response appears in an empty chat.

♻️ Optional improvement
             if (_messages.isNotEmpty)
-              IconButton(icon: const Icon(Icons.delete, color: Color(0xFFFF3B30)), onPressed: () => setState(() => _messages.clear())),
+              IconButton(
+                icon: Icon(Icons.delete, color: _loading ? const Color(0xFFFF3B30).withOpacity(0.4) : const Color(0xFFFF3B30)),
+                onPressed: _loading ? null : () => setState(() => _messages.clear()),
+              ),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart`
around lines 135 - 136, The clear IconButton currently calls setState(() =>
_messages.clear()) regardless of in-flight requests; update its onPressed to be
null (disabled) while the loading flag (e.g., _isLoading) is true so users
cannot clear mid-request, and keep the existing check for _messages.isNotEmpty;
locate the IconButton with symbol names IconButton, onPressed, setState and the
_messages.clear() call and conditionally disable it by gating on !_isLoading (or
existing loading variable) so the button is inactive during active requests.
libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt (2)

536-546: Minor: Consider using withContext(Dispatchers.Main) instead of nested scope.launch.

The scope.launch {} creates a fire-and-forget coroutine for progress updates. Since you're already in a suspend function, using withContext(Dispatchers.Main) would ensure the event is sent before continuing, avoiding potential ordering issues on rapid completions.

♻️ Optional: Use withContext for ordered execution
-                            scope.launch {
-                                downloadProgressSink?.success(
-                                    mapOf(
-                                        "modelId" to modelId,
-                                        "bytesDownloaded" to bytesRead,
-                                        "totalBytes" to contentLength,
-                                        "progress" to progress,
-                                        "state" to "downloading",
-                                    )
-                                )
-                            }
+                            withContext(Dispatchers.Main) {
+                                downloadProgressSink?.success(
+                                    mapOf(
+                                        "modelId" to modelId,
+                                        "bytesDownloaded" to bytesRead,
+                                        "totalBytes" to contentLength,
+                                        "progress" to progress,
+                                        "state" to "downloading",
+                                    )
+                                )
+                            }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`
around lines 536 - 546, Replace the fire-and-forget coroutine used to emit
download progress from scope.launch to an explicit main-context suspension so
progress events are sent in order: inside the suspend function where
downloadProgressSink is used (currently wrapped with scope.launch), swap that
call to use withContext(Dispatchers.Main) to invoke
downloadProgressSink?.success(...) synchronously on the main dispatcher; keep
the same payload keys
("modelId","bytesDownloaded","totalBytes","progress","state") and remove the
surrounding scope.launch usage to avoid ordering/race issues.

49-51: Consider defensive null handling for context in lazy initialization.

The context!! forced unwrap could throw NullPointerException if locanara is accessed before onAttachedToEngine sets the context. While unlikely in normal plugin lifecycle, defensive patterns are safer.

♻️ Optional: Add null check with meaningful error
     private val locanara: Locanara by lazy {
-        Locanara.getInstance(context!!)
+        val ctx = context ?: throw IllegalStateException("Plugin not attached to engine")
+        Locanara.getInstance(ctx)
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`
around lines 49 - 51, The lazy init for locanara uses context!! which can NPE if
accessed before onAttachedToEngine; change the initializer to defensively handle
null by checking context and failing fast with a clear error (e.g., throw
IllegalStateException with a message) or defer initialization until
onAttachedToEngine sets the context (move Locanara.getInstance(context) call
into onAttachedToEngine or use context ?: throw IllegalStateException("Plugin
not attached")). Update the reference to Locanara.getInstance and the locanara
property initializer accordingly so locanara is only created with a non-null
context.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/guides/12-flutter-ondevice-ai.md:
- Around line 226-227: Update the mismatched docs text: replace the literal
"3-tab navigation" and the listed tabs "Features, Device, Settings" with the
correct tab count and names that include the Framework tab (e.g., "4-tab
navigation: Features, Framework, Device, Settings"), and adjust the adjacent
line "Feature list → demo screens for all 7 AI features + chat" to reflect the
larger demo surface (mentioning Framework demos where applicable); search for
the exact phrases "3-tab navigation" and "Feature list → demo screens for all 7
AI features + chat" in this doc and edit them to match the PR’s objective.
- Around line 11-16: Update the prerequisites list in
.claude/guides/12-flutter-ondevice-ai.md to distinguish the library vs example
app Android SDK requirements: keep "Android API 26+" as the minimum for the core
library, and add an explicit line stating the example app requires minSdk 34
(Gemini Nano); ensure the two bullets are clearly labeled (e.g., "Library:" and
"Example app:") so users know why local runs can fail if they use the example
without raising minSdk.

In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`:
- Around line 673-688: The download progress EventSink is being called from a
background thread inside the locanara.downloadPromptApiModel callback, which
violates Flutter's requirement that EventSink methods run on the main thread;
update the callback in FlutterOndeviceAiPlugin.kt (the lambda passed to
locanara.downloadPromptApiModel) to post the EventSink.success(...) call to the
Android main thread (for example by using a Handler(Looper.getMainLooper()).post
{ ... } or Dispatchers.Main) so that downloadProgressSink?.success(...) is
invoked on the UI thread while preserving the same payload
("modelId","bytesDownloaded","totalBytes","progress","state").

In
`@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`:
- Around line 124-126: The current check inside LlamaCppBridgeEngine that
inspects error.localizedDescription for substrings "nil" or "fatal" is fragile;
instead, update the error handling in the same block (the code that calls
lock.withLock { isLoaded = false; llmSession = nil }) to detect unrecoverable
errors by inspecting concrete error types or error codes (e.g., cast to NSError
and check domain/code or match specific error enums from the underlying library)
and only clear isLoaded and nil out llmSession when those conditions are met;
keep the lock.withLock usage and state updates but replace the substring logic
with explicit type/code-based checks and a safe fallback that logs the error
without resetting state.
- Around line 216-242: Concurrent calls to loadAndRegisterModel can orphan a
previous BridgedLlamaCppEngine (and its LLMSession); serialize load/unload and
ensure any in-progress load is cancelled or the previous engine is explicitly
unloaded before replacing it. Add a single-task guard (e.g., a serial actor or a
private serial DispatchQueue + a loadingTask property) that runs
loadAndRegisterModel and unloadModel operations serially; before assigning
self.engine inside loadAndRegisterModel, capture and call unload() on any
existing engine (or cancel the running loadingTask and then unload) and ensure
unloadModel cancels/awaits the loadingTask then calls engine?.unload() and
unregisters the engine from InferenceRouter; apply these changes around the
loadAndRegisterModel, unloadModel, engine, BridgedLlamaCppEngine and
InferenceRouter usage.

In `@libraries/flutter_ondevice_ai/example/lib/app_state.dart`:
- Around line 146-147: The code sets _isModelReady using "cap.isModelReady ||
cap.isSupported", which incorrectly treats mere support as readiness; change
assignments to only use "cap.isModelReady" for _isModelReady and keep
"cap.isSupported" as a separate check (e.g., _isSupported or using
cap.isSupported inline) to drive capability-available UI paths without enabling
ready-only features; update the other occurrences referenced (the block around
the 160-171 logic) to follow the same separation so readiness and support are
handled independently.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart`:
- Around line 60-66: The catch block calls setState(() { _loading = false; })
without verifying mounted, which can cause setState() after dispose; update the
catch handler in summarize_demo.dart to check mounted before calling setState
(i.e., move or wrap the setState call with if (mounted) { setState(...) }),
ensure sw.stop() still runs, and keep the existing mounted-wrapped
ScaffoldMessenger.of(context).showSnackBar(...) as-is.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart`:
- Around line 77-83: The trailing GestureDetector in ai_status_banner.dart
currently has an empty onTap; either wire it to a real action or remove the
button: add a VoidCallback property (e.g., onEnablePressed) to the
AiStatusBanner widget and call that callback from the trailing GestureDetector's
onTap so callers can open iOS Settings or request permissions, or if you don't
need external handling simply call a concrete action (e.g.,
openAppSettings/openAppSettings from permission_handler) inside onTap; update
the AiStatusBanner constructor and usages to provide the new callback or delete
the trailing enable widget altogether.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart`:
- Around line 34-38: Replace user-facing SnackBar messages that currently
interpolate raw exception details (e) with a generic, non-sensitive message
while keeping the detailed error in the debug/log output; specifically, in the
catch blocks related to model download, load, delete and switch operations (the
catch handling around debugPrint('[ModelSheet] ... FAILED: $e\n$st') and
subsequent ScaffoldMessenger.of(context).showSnackBar calls) change the SnackBar
content to something like 'Download failed' / 'Load failed' / 'Delete failed' /
'Switch failed' (no $e or stack trace), preserve the existing debugPrint (or
logger) that includes $e and $st, and keep the context.mounted guard when
showing the SnackBar so only the UI message is sanitized while full error
details remain in logs.
- Around line 239-254: The UI currently only marks the row whose id equals
_actionLoading as busy, allowing other rows to start concurrent operations;
change the coordination so operations are globally exclusive by replacing the
single _actionLoading string with a shared state (either a boolean like
_isActionInProgress or a Set<String> _actionLoadingIds) and update all callers
(_handleDownload, _handleLoad, _handleDelete) to check/set this shared state
atomically before starting async work and clear it in finally; update the
_ModelRow "loading" prop to reflect the new shared state (e.g., loading =
_isActionInProgress || _actionLoadingIds.contains(model.modelId)) so other rows
disable their buttons while any native operation is running (apply same change
to the other mappings at the noted ranges).

---

Duplicate comments:
In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart`:
- Around line 49-50: The non-streaming completion branch updates _messages.last
without ensuring the list isn't empty, causing a RangeError if messages were
cleared while awaiting the response; update the guard to check
_messages.isNotEmpty (e.g., return early if !mounted or
_messages.isEmpty) before calling setState that assigns _messages.last =
_Message(role: 'assistant', content: result.message) so it matches the streaming
path's protections.

---

Nitpick comments:
In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`:
- Around line 536-546: Replace the fire-and-forget coroutine used to emit
download progress from scope.launch to an explicit main-context suspension so
progress events are sent in order: inside the suspend function where
downloadProgressSink is used (currently wrapped with scope.launch), swap that
call to use withContext(Dispatchers.Main) to invoke
downloadProgressSink?.success(...) synchronously on the main dispatcher; keep
the same payload keys
("modelId","bytesDownloaded","totalBytes","progress","state") and remove the
surrounding scope.launch usage to avoid ordering/race issues.
- Around line 49-51: The lazy init for locanara uses context!! which can NPE if
accessed before onAttachedToEngine; change the initializer to defensively handle
null by checking context and failing fast with a clear error (e.g., throw
IllegalStateException with a message) or defer initialization until
onAttachedToEngine sets the context (move Locanara.getInstance(context) call
into onAttachedToEngine or use context ?: throw IllegalStateException("Plugin
not attached")). Update the reference to Locanara.getInstance and the locanara
property initializer accordingly so locanara is only created with a non-null
context.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart`:
- Around line 135-136: The clear IconButton currently calls setState(() =>
_messages.clear()) regardless of in-flight requests; update its onPressed to be
null (disabled) while the loading flag (e.g., _isLoading) is true so users
cannot clear mid-request, and keep the existing check for _messages.isNotEmpty;
locate the IconButton with symbol names IconButton, onPressed, setState and the
_messages.clear() call and conditionally disable it by gating on !_isLoading (or
existing loading variable) so the button is inactive during active requests.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0f1088f and 2332522.

📒 Files selected for processing (10)
  • .claude/guides/12-flutter-ondevice-ai.md
  • libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt
  • libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts
  • libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift
  • libraries/flutter_ondevice_ai/example/lib/app_state.dart
  • libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart
🚧 Files skipped from review as they are similar to previous changes (2)
  • libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart
  • libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts

- Guide: clarify Android SDK requirement (lib API 24+ vs example API 34+)
- Guide: fix tab-count mismatch (3-tab → multi-tab navigation)
- Android: dispatch EventSink callbacks to main thread via scope.launch
- Swift: replace fragile string-based error detection with NSError domain check
- Swift: add isLoading guard to prevent concurrent loadAndRegisterModel calls
- Dart: fix isModelReady to use cap.isModelReady only (not cap.isSupported)
- Dart: add mounted check before setState in summarize_demo catch block
- Dart: wire Enable button to open system settings in ai_status_banner
- Dart: use generic error messages in model_selection_sheet SnackBars
- Dart: add global isBusy lock to prevent overlapping model operations

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (2)
.claude/guides/12-flutter-ondevice-ai.md (1)

226-227: ⚠️ Potential issue | 🟡 Minor

Clarify feature coverage count to match PR scope.

The wording still reads as "7 AI features + chat", while PR scope says 8 AI features plus framework demos. Please make the count explicit and mention framework demos in this bullet to avoid reader confusion.

Suggested doc tweak
-- Feature list → demo screens for all 7 AI features + chat
+- Feature list → demo screens for all 8 AI features (including chat/streaming) + 7 framework demos
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.claude/guides/12-flutter-ondevice-ai.md around lines 226 - 227, Update the
feature-count bullet to match the PR scope: replace the line containing "Feature
list → demo screens for all 7 AI features + chat" with a line that explicitly
states "Feature list → demo screens for all 8 AI features (including chat) +
framework demos" so readers see the correct count and that framework demos are
included; target the exact bullet text in the file (the "Feature list → demo
screens..." line) for this change.
libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift (1)

220-255: ⚠️ Potential issue | 🟠 Major

Serialize loadAndRegisterModel and unloadModel as one operation stream.

unloadModel() can run while the Task started at Line 230 is still loading; then load completion can still set self.engine and re-register after unload was requested.

🔧 Suggested fix direction
 public class LlamaCppBridgeEngine: NSObject, LlamaCppBridgeProvider {
     private var engine: BridgedLlamaCppEngine?
     private var isLoading = false
     private let loadLock = NSLock()
+    private var loadingTask: Task<Void, Never>?

     public func loadAndRegisterModel(_ modelPath: String, mmprojPath: String?, completion: @escaping (NSError?) -> Void) {
         loadLock.lock()
-        guard !isLoading else {
+        guard !isLoading, loadingTask == nil else {
             loadLock.unlock()
             completion(NSError(domain: "LlamaCppBridge", code: -1, userInfo: [NSLocalizedDescriptionKey: "Model load already in progress"]))
             return
         }
         isLoading = true
-        loadLock.unlock()
-
-        Task {
+        let task = Task {
             do {
                 // existing load flow...
                 try await newEngine.loadModel()
+                try Task.checkCancellation()
                 self.engine = newEngine
                 InferenceRouter.shared.registerEngine(newEngine as any InferenceEngine)
                 self.loadLock.withLock { self.isLoading = false; self.loadingTask = nil }
                 completion(nil)
             } catch {
                 self.loadLock.withLock { self.isLoading = false; self.loadingTask = nil }
                 completion(error as NSError)
             }
         }
+        loadingTask = task
+        loadLock.unlock()
     }

     public func unloadModel() {
+        let taskToCancel = loadLock.withLock { () -> Task<Void, Never>? in
+            let t = loadingTask
+            loadingTask = nil
+            isLoading = false
+            return t
+        }
+        taskToCancel?.cancel()
         engine?.unload()
         InferenceRouter.shared.unregisterEngine()
         engine = nil
     }
 }

Also applies to: 258-262

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`
around lines 220 - 255, Serialize load/unload by making them mutually exclusive:
use a single synchronization primitive (e.g., the existing loadLock or a
dedicated serial DispatchQueue) and a pendingUnload flag so unloadModel and
loadAndRegisterModel cannot race; modify unloadModel to set pendingUnload = true
under the same lock/queue, perform unregister + unload on that queue, and clear
pendingUnload; in loadAndRegisterModel, after creating and loading newEngine but
before assigning self.engine or calling InferenceRouter.registerEngine, check
pendingUnload (under the same lock/queue) — if pendingUnload is set, immediately
unload newEngine and complete with an error/early return; otherwise atomically
set self.engine, call InferenceRouter.registerEngine, and clear isLoading;
ensure all accesses to isLoading, pendingUnload, engine, and register/unregister
happen under the same lock/queue so load and unload are strictly serialized.
🧹 Nitpick comments (2)
libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart (1)

111-118: Consider adding user feedback when settings cannot be opened.

The _openSettings method silently fails if canLaunchUrl returns false. While this is unlikely on iOS, providing feedback would improve UX.

💡 Optional improvement
 Future<void> _openSettings() async {
   if (!kIsWeb && defaultTargetPlatform == TargetPlatform.iOS) {
     final uri = Uri.parse('app-settings:');
     if (await canLaunchUrl(uri)) {
       await launchUrl(uri);
+    } else {
+      debugPrint('[AIStatusBanner] Unable to open settings');
     }
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart`
around lines 111 - 118, The _openSettings method currently silently does nothing
when canLaunchUrl(uri) is false; update _openSettings to handle the failure case
by providing user feedback (e.g., show a SnackBar or AlertDialog) when launch is
not possible, while keeping the existing iOS-only guard and using the same Uri
and launchUrl call for success; reference the _openSettings function and the
canLaunchUrl/launchUrl calls so you add a UI feedback path that runs when
canLaunchUrl returns false or when launchUrl throws.
libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt (1)

523-548: Consider adding cancellation checks for long-running downloads.

For large model downloads (2.5GB+), if the plugin is detached or the download is cancelled, the operation will continue until the stream is exhausted. Adding periodic isActive checks would allow the download to respond to cancellation more promptly.

♻️ Optional improvement for cancellation awareness
                     while (true) {
                         val count = input.read(buffer)
                         if (count == -1) break
+                        if (!isActive) {
+                            android.util.Log.d(TAG, "downloadFile: cancelled")
+                            throw kotlinx.coroutines.CancellationException("Download cancelled")
+                        }
                         output.write(buffer, 0, count)
                         bytesRead += count
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`
around lines 523 - 548, The download loop in FlutterOndeviceAiPlugin.kt (the
while(true) reading from input and writing to output using variables like input,
output, bytesRead, lastProgressReport, contentLength, silent, modelId and
reporting via scope and downloadProgressSink) lacks cancellation checks and can
continue after the coroutine/scope is cancelled; modify the loop to periodically
check coroutine cancellation (e.g., call coroutineContext.ensureActive() or
check scope.isActive) inside the loop and break/cleanup promptly when cancelled
so the stream is closed and no further progress events are sent, making sure any
necessary cleanup (closing input/output, sending a final cancelled state to
downloadProgressSink if appropriate) occurs before exit.
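The cancellation-aware loop that prompt describes could look roughly like this. It is a hedged sketch under the assumption of a plain `InputStream`/`OutputStream` copy; `copyWithCancellation` is an illustrative name, not the plugin's function.

```kotlin
import kotlinx.coroutines.ensureActive
import java.io.InputStream
import java.io.OutputStream
import kotlin.coroutines.coroutineContext

// Illustrative sketch: a copy loop that checks for coroutine cancellation on
// every iteration, so a cancelled scope stops a multi-GB download promptly
// instead of draining the stream to completion.
suspend fun copyWithCancellation(input: InputStream, output: OutputStream): Long {
    val buffer = ByteArray(DEFAULT_BUFFER_SIZE)
    var bytesRead = 0L
    while (true) {
        // Throws CancellationException as soon as the surrounding job is cancelled.
        coroutineContext.ensureActive()
        val count = input.read(buffer)
        if (count == -1) break
        output.write(buffer, 0, count)
        bytesRead += count
    }
    return bytesRead
}
```

Callers would still close `input`/`output` (e.g. via `use { }`) and, if desired, emit a final "cancelled" state to the progress sink from a `catch`/`finally` block.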
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In @.claude/guides/12-flutter-ondevice-ai.md:
- Around line 11-12: The docs state "Flutter SDK 3.3.0+" but pubspec.yaml
requires Flutter >= 3.22.0, so update the setup requirements in this guide by
replacing the "Flutter SDK 3.3.0+" entry with "Flutter SDK 3.22.0+" (leave the
"Dart SDK 3.3.0+" line unchanged) so the documented minimum matches the
library's pubspec.yaml.

In
`@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`:
- Around line 37-44: beginInference() currently checks isInferencing under lock
and then later sets isInferencing/isCancelled in a separate lock, allowing a
race where two callers both see false and both start inference; change the logic
so the check and the set are atomic inside a single critical section: loop
attempting to acquire the gate should call lock.withLock to check isInferencing
and, if false, set isInferencing = true and isCancelled = false and then break;
only if the gate is already taken should the task sleep and retry—update the
beginInference() implementation to perform the check-and-set inside one
lock.withLock block to prevent concurrent starts.
- Around line 133-163: The spawned Task inside generateStreaming is not
cancelled when the AsyncThrowingStream is terminated; capture the Task returned
by Task { ... } in a local variable and wire up continuation.onTermination (or
the onTermination handler of AsyncThrowingStream) to cancel that Task and call
endInference() as needed. Specifically, in generateStreaming, assign the Task to
a variable (e.g., let fetchTask = Task { ... }), set continuation.onTermination
= { @Sendable _ in fetchTask.cancel(); self?.endInference() }, and keep the
existing in-loop cancellation check (lock.withLock { self.isCancelled }) and
error handling for beginInference()/endInference(); this ensures the internal
inference Task stops when the consumer cancels the stream.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart`:
- Line 64: Replace the SnackBar that displays the raw exception text (the call
to ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error:
$e')))) with a generic user-facing message (e.g., "An error occurred, please try
again") and move the detailed exception info to debug-only logging (use
debugPrint or kDebugMode from flutter/foundation to print/log the full exception
and stacktrace). Ensure you update the code where
ScaffoldMessenger.of(context).showSnackBar is invoked and stop interpolating the
exception variable e into UI strings; only log e in debug or to a non-UI logger.
- Around line 120-138: The custom _segmented widget (function
_segmented(List<String> values, List<String> labels, String selected,
ValueChanged<String> onChanged)) should be replaced with Flutter's
SegmentedButton<String>: remove the GestureDetector+Container implementation and
build a SegmentedButton where the segments are generated from values/labels, the
selected state maps to the SegmentedButton's selected set, and onChanged invokes
the provided onChanged callback; ensure you configure style (background, border,
text style) to match the current visuals and enable multiSelection: false
(single selection) and provide appropriate semantics so keyboard/arrow
navigation and screen-reader announcements work.

---

Duplicate comments:
In @.claude/guides/12-flutter-ondevice-ai.md:
- Around line 226-227: Update the feature-count bullet to match the PR scope:
replace the line containing "Feature list → demo screens for all 7 AI features +
chat" with a line that explicitly states "Feature list → demo screens for all 8
AI features (including chat) + framework demos" so readers see the correct count
and that framework demos are included; target the exact bullet text in the file
(the "Feature list → demo screens..." line) for this change.

In
`@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`:
- Around line 220-255: Serialize load/unload by making them mutually exclusive:
use a single synchronization primitive (e.g., the existing loadLock or a
dedicated serial DispatchQueue) and a pendingUnload flag so unloadModel and
loadAndRegisterModel cannot race; modify unloadModel to set pendingUnload = true
under the same lock/queue, perform unregister + unload on that queue, and clear
pendingUnload; in loadAndRegisterModel, after creating and loading newEngine but
before assigning self.engine or calling InferenceRouter.registerEngine, check
pendingUnload (under the same lock/queue) — if pendingUnload is set, immediately
unload newEngine and complete with an error/early return; otherwise atomically
set self.engine, call InferenceRouter.registerEngine, and clear isLoading;
ensure all accesses to isLoading, pendingUnload, engine, and register/unregister
happen under the same lock/queue so load and unload are strictly serialized.

---

Nitpick comments:
In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`:
- Around line 523-548: The download loop in FlutterOndeviceAiPlugin.kt (the
while(true) reading from input and writing to output using variables like input,
output, bytesRead, lastProgressReport, contentLength, silent, modelId and
reporting via scope and downloadProgressSink) lacks cancellation checks and can
continue after the coroutine/scope is cancelled; modify the loop to periodically
check coroutine cancellation (e.g., call coroutineContext.ensureActive() or
check scope.isActive) inside the loop and break/cleanup promptly when cancelled
so the stream is closed and no further progress events are sent, making sure any
necessary cleanup (closing input/output, sending a final cancelled state to
downloadProgressSink if appropriate) occurs before exit.

In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart`:
- Around line 111-118: The _openSettings method currently silently does nothing
when canLaunchUrl(uri) is false; update _openSettings to handle the failure case
by providing user feedback (e.g., show a SnackBar or AlertDialog) when launch is
not possible, while keeping the existing iOS-only guard and using the same Uri
and launchUrl call for success; reference the _openSettings function and the
canLaunchUrl/launchUrl calls so you add a UI feedback path that runs when
canLaunchUrl returns false or when launchUrl throws.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2332522 and 6e0b7b3.

📒 Files selected for processing (7)
  • .claude/guides/12-flutter-ondevice-ai.md
  • libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt
  • libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift
  • libraries/flutter_ondevice_ai/example/lib/app_state.dart
  • libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart
  • libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart

@hyochan hyochan merged commit b1d8061 into main Feb 26, 2026
11 checks passed
@hyochan hyochan deleted the feat/flutter-ondevice-ai branch February 26, 2026 20:58

Labels

🎯 feature New feature
