Conversation
> **Note — Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
- Configuration used: defaults
- Review profile: CHILL
- Plan: Pro
- 📒 Files selected for processing: 1
- 🚧 Files skipped from review (similar to previous changes): 1
📝 Walkthrough

Adds a new Flutter on-device AI plugin (Dart plus native iOS/Android/Web), a full example app with a model-lifecycle UI, a native llama.cpp/SPM bridge with CocoaPods hooks, CI/publish workflows, site/docs pages, and cross-library model-management improvements applied to Expo and React Native.
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
**Summary of Changes**

Hello @hyochan, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly expands the project's cross-platform capabilities by integrating a new Flutter plugin for on-device AI. It provides a unified Dart API for accessing native AI features on iOS, Android, and Web, complete with a robust example application and comprehensive documentation. The changes also include necessary build system configurations and CI/CD pipelines to support the new platform.
**Code Review**

This is an excellent and comprehensive pull request that adds the `flutter_ondevice_ai` library. The implementation across iOS, Android, and Web is very well done, and the example app is thorough. The code is clean, well structured, and follows platform-specific best practices. The documentation updates are also clear and helpful. I have a few minor suggestions for improvement.
Resolved review threads (outdated):
- ...ndevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt
- libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts
Actionable comments posted: 5
> Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (23)
**libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt** (1)

Lines 46-48: ⚠️ Potential issue | 🟠 Major — **Thread safety concern for mutable model state.**

`downloadedModelIds` and `loadedModelId` are accessed from `Promise.async` blocks, which execute on coroutine dispatchers. Unlike the listener collections that use `CopyOnWriteArrayList`, these are plain mutable types without synchronization. Consider using thread-safe alternatives:

🔒 Proposed fix for thread-safe state

```diff
- // Simulated model state (matches native example behavior)
- private val downloadedModelIds = mutableSetOf<String>()
- private var loadedModelId: String? = null
+ // Simulated model state (thread-safe)
+ private val downloadedModelIds = java.util.concurrent.ConcurrentHashMap.newKeySet<String>()
+ @Volatile private var loadedModelId: String? = null
```
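The `java.util.concurrent` primitives this fix relies on behave the same when exercised from plain Java; a minimal sketch (class and model-id names here are illustrative, not from the PR) shows that concurrent writers cannot corrupt the set:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;
import java.util.stream.IntStream;

public class ThreadSafeModelState {
    // Thread-safe analogues of the two Kotlin fields in the finding above
    static final Set<String> downloadedModelIds = ConcurrentHashMap.newKeySet();
    static final AtomicReference<String> loadedModelId = new AtomicReference<>(null);

    public static void main(String[] args) {
        // Hammer the set from many threads, as Promise.async coroutines would
        IntStream.range(0, 1000).parallel()
                .forEach(i -> downloadedModelIds.add("model-" + (i % 10)));
        loadedModelId.set("model-3");

        System.out.println(downloadedModelIds.size()); // 10 distinct ids survive
        System.out.println(loadedModelId.get());
    }
}
```

A plain `mutableSetOf` under the same parallel load could lose updates or throw; the concurrent set makes each `add` atomic, and `AtomicReference` (mirroring `@Volatile`) guarantees visibility of the loaded-model id across threads.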
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `libraries/react-native-ondevice-ai/android/src/main/java/com/margelo/nitro/ondeviceai/HybridOndeviceAi.kt` around lines 46-48: the mutable fields `downloadedModelIds` and `loadedModelId` are not thread-safe but are accessed from coroutine/`Promise.async` contexts. Replace `downloadedModelIds` with a concurrent set (e.g., `ConcurrentHashMap.newKeySet()` or `Collections.newSetFromMap(new ConcurrentHashMap<>())`) and `loadedModelId` with a thread-safe holder (e.g., `AtomicReference<String?>`), or guard both with a single private lock object; update all usages in the `Promise.async` blocks to use the new thread-safe types or a `synchronized` block to avoid race conditions.

**packages/site/src/pages/docs/index.tsx** (1)
Lines 55-56: ⚠️ Potential issue | 🟠 Major — **`react-native` library docs page is added in this PR but not routable/discoverable.**

`packages/site/src/pages/docs/libraries/react-native.tsx` has no route or sidebar entry here, so `/docs/libraries/react-native` resolves to `NotFound`. Either wire it in, or remove the page file from this PR.

🛠️ Suggested wiring patch

```diff
 import LibrariesIndex from "./libraries/index";
 import ExpoLibrary from "./libraries/expo";
+import ReactNativeLibrary from "./libraries/react-native";
 import FlutterLibrary from "./libraries/flutter";
@@
         <ul>
           <li>
             <NavLink
               to="/docs/libraries/expo"
               className={({ isActive }) => (isActive ? "active" : "")}
               onClick={closeSidebar}
             >
               expo-ondevice-ai
             </NavLink>
           </li>
+          <li>
+            <NavLink
+              to="/docs/libraries/react-native"
+              className={({ isActive }) => (isActive ? "active" : "")}
+              onClick={closeSidebar}
+            >
+              react-native-ondevice-ai
+            </NavLink>
+          </li>
           <li>
             <NavLink
               to="/docs/libraries/flutter"
               className={({ isActive }) => (isActive ? "active" : "")}
               onClick={closeSidebar}
             >
               flutter_ondevice_ai
             </NavLink>
           </li>
         </ul>
@@
           <Route path="libraries" element={<LibrariesIndex />} />
           <Route path="libraries/expo" element={<ExpoLibrary />} />
+          <Route
+            path="libraries/react-native"
+            element={<ReactNativeLibrary />}
+          />
           <Route path="libraries/flutter" element={<FlutterLibrary />} />
```

Also applies to: 336-345, 412-415
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `packages/site/src/pages/docs/index.tsx` around lines 55-56: the react-native docs page exists but is not wired into the docs index, so `/docs/libraries/react-native` falls back to `NotFound`. Import the component (e.g., `import ReactNativeLibrary from "./libraries/react-native"`) and register it in the same routing/sidebar locations where `FlutterLibrary` is used, or remove the unused page file if you prefer not to expose the route.

**locanara-versions.json** (1)
Lines 7-8: ⚠️ Potential issue | 🟠 Major — **Do not update `locanara-versions.json` version entries without explicit release instruction.**

Line 8 introduces a new version entry (`"flutter": "0.1.0"`), which violates the repository rule for this file unless explicitly requested by maintainers or the release flow.

Suggested revert for this PR (if no explicit release instruction exists)

```diff
-  "react-native": "0.1.0",
-  "flutter": "0.1.0"
+  "react-native": "0.1.0"
```

As per coding guidelines: `locanara-versions.json`: NEVER modify version numbers in `locanara-versions.json` unless explicitly instructed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `locanara-versions.json` around lines 7-8: revert the unintended version bump by removing the newly added `"flutter": "0.1.0"` entry so the file only contains the approved version entries (keep the existing `"react-native": "0.1.0"` line as-is); do not modify any version numbers in this file without explicit release/maintainer instruction.

**libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/agent_demo.dart** (1)
Lines 114-119: ⚠️ Potential issue | 🟠 Major — **`Color.withValues(alpha:)` requires Flutter 3.27.0, but the package supports Flutter 3.22.0.**

The code uses `Color(...).withValues(alpha: 0.3)` on lines 114-119, which was introduced in Flutter 3.27.0. However, the package's pubspec.yaml specifies `flutter: ">=3.22.0"`, causing a compile error for users on Flutter 3.22–3.26. Use `withOpacity(0.3)` instead, which is available in all supported versions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/agent_demo.dart` around lines 114-119: the `Border.all` color uses `Color(...).withValues(alpha: 0.3)`, which requires Flutter 3.27; replace those calls with `Color(...).withOpacity(0.3)` so the border colors work on Flutter >=3.22. Specifically, update the expression inside the switch on `step.type` (the three `Color(...)` branches).

**.claude/guides/11-react-native-ondevice-ai.md** (1)
Line 114: ⚠️ Potential issue | 🟠 Major — **Nitro optional-field syntax in guide is inconsistent with repo convention.**

Line 114 documents `field?: Type | null`, but this repo's Nitro guidance uses the `?(field: Type | null)` pattern. Please align this line to prevent spec/codegen mistakes.

Based on learnings: Applies to libraries/react-native-ondevice-ai/src/specs/**: Use optional field pattern `?(field: Type | null)` in Nitro specs.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `.claude/guides/11-react-native-ondevice-ai.md` at line 114: update the optional-field example to match the repo's Nitro convention by replacing the `field?: Type | null` pattern with `?(field: Type | null)`, so that generated Nitro specs in libraries/react-native-ondevice-ai/src/specs/** follow the repo convention.

**libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/proofread_demo.dart** (1)
Lines 34-44: ⚠️ Potential issue | 🟠 Major — **Guard async state updates with `mounted` checks.**

After the line 34 `await`, `setState` runs unconditionally; if the widget is disposed, this can throw at runtime.

💡 Suggested fix

```diff
 try {
   final result = await _ai.proofread(_controller.text);
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _result = result;
     _debugLog = DebugLog(api: 'proofread', request: {'text': _controller.text}, response: {'correctedText': result.correctedText, 'corrections': result.corrections.length}, timing: sw.elapsedMilliseconds);
     _loading = false;
   });
 } catch (e) {
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _loading = false;
   });
-  if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
+  ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
 }
```
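The lifecycle race behind this (and the other `mounted` findings below) is language-agnostic: an async result can land after its owner is torn down, and the guard must run after the result arrives. A minimal Java sketch (all names illustrative, not from the PR) drops the stale update exactly as the Dart guard does:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class MountedGuard {
    static final AtomicBoolean mounted = new AtomicBoolean(true);
    static String result = "none";

    static void applyResult(String value) {
        // Analogue of `if (!mounted) return;` before setState
        if (!mounted.get()) return;
        result = value;
    }

    public static void main(String[] args) {
        CompletableFuture<String> slowCall = new CompletableFuture<>();
        mounted.set(false);           // "widget disposed" while the request is in flight
        slowCall.complete("summary"); // the async work finishes afterwards
        slowCall.thenAccept(MountedGuard::applyResult).join();
        System.out.println(result);   // still "none": the stale update was dropped
    }
}
```

Checking the flag before starting the call would not help; only a check after completion, immediately before touching state, closes the race.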
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/proofread_demo.dart` around lines 34-44: the async result handling calls `setState` after awaiting `_ai.proofread` and inside the catch block without checking widget lifecycle; wrap the post-await and error-path state updates in a `mounted` guard (e.g., `if (!mounted) return;`) so `_result`, `_debugLog`, and `_loading` are only updated while the widget is mounted, make showing the SnackBar conditional on `mounted` as well, and preserve the stopwatch `stop` calls.

**libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/guardrail_demo.dart** (1)
Lines 45-50: ⚠️ Potential issue | 🟠 Major — **Add `mounted` guards around async state updates.**

`setState` is called after the line 45 `await` without checking widget lifecycle, so this can throw when users navigate away mid-run.

💡 Suggested fix

```diff
 try {
   final result = await _ai.chat(text, options: const ChatOptions(systemPrompt: 'You are a helpful assistant.'));
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _resultText = result.message;
     _timing = sw.elapsedMilliseconds;
     _loading = false;
   });
 } catch (e) {
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _errorText = 'Error: $e';
     _loading = false;
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/guardrail_demo.dart` around lines 45-50: after the `await` on `_ai.chat(...)` and in the catch block, wrap the `setState` calls with a `mounted` check (or return early if not mounted) so state is only updated while the widget is still active.

**libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/translate_demo.dart** (1)
Lines 38-48: ⚠️ Potential issue | 🟠 Major — **Prevent `setState` after dispose in async flow.**

Lines 40 and 47 update state after an async call without verifying `mounted`. This can crash if the user leaves the screen mid-request.

💡 Suggested fix

```diff
 try {
   final result = await _ai.translate(_controller.text, options: TranslateOptions(sourceLanguage: 'en', targetLanguage: _targetLang));
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _result = result;
     _debugLog = DebugLog(api: 'translate', request: {'text': _controller.text, 'source': 'en', 'target': _targetLang}, response: {'translatedText': result.translatedText, 'sourceLanguage': result.sourceLanguage, 'targetLanguage': result.targetLanguage}, timing: sw.elapsedMilliseconds);
     _loading = false;
   });
 } catch (e) {
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _loading = false;
   });
-  if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
+  ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/translate_demo.dart` around lines 38-48: the async translate flow updates widget state after awaiting `_ai.translate` and in the catch block without checking widget lifecycle; guard all `setState` and `ScaffoldMessenger.of(context).showSnackBar(...)` calls with `if (!mounted) return;` so nothing runs after dispose, while still stopping the stopwatch (`sw.stop()`).

**libraries/flutter_ondevice_ai/example/lib/widgets/shared/run_button.dart** (1)
Line 28: ⚠️ Potential issue | 🟠 Major — **`Color.withValues()` requires Flutter 3.27.0+, but the project minimum is 3.3.0.**

Line 28 uses `Color.withValues(alpha: 0.6)`, which was first introduced in Flutter 3.27.0. Your project declares `sdk: ">=3.3.0"`, making this code incompatible with Flutter 3.3.0–3.26.x. Either update your minimum SDK to `>=3.27.0`, or replace it with `Color.withOpacity(0.6)` (deprecated but available in 3.3.0+) and plan a migration path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/widgets/shared/run_button.dart` at line 28: the use of `Color.withValues(alpha: 0.6)` in the `RunButton` widget is incompatible with the project's minimum Flutter SDK (>=3.3.0); replace it with `Color.withOpacity(0.6)` (or raise the SDK constraint to >=3.27.0 if you intentionally require the newer API) so `disabledBackgroundColor` uses a supported API.

**libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart** (1)
Lines 326-371: ⚠️ Potential issue | 🟠 Major — **`switchToDeviceAI` is missing from the web method handler.**

The Dart API exposes `switchToDeviceAI()`, but on web it falls into the default `Unimplemented` branch. Add an explicit no-op/ack branch for consistency with the other unsupported web stubs.

Suggested fix

```diff
       case 'downloadPromptApiModel':
         return false;
+      case 'switchToDeviceAI':
+        return null;
       default:
         throw PlatformException(
           code: 'Unimplemented',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart` around lines 326-371: the web method handler is missing a case for `switchToDeviceAI`, so calls fall into the default `Unimplemented` branch; add a `case 'switchToDeviceAI'` to the switch and return a no-op/ack response consistent with the other unsupported web stubs instead of throwing `PlatformException`.

**libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart** (1)
Lines 389-401: ⚠️ Potential issue | 🟠 Major — **`isSupported` can report false while supported web features are true.**

`isSupported` excludes `hasRewriter`/`hasWriter`, so browsers with only those APIs can be marked unsupported even when feature flags expose usable functionality.

Suggested fix

```diff
-      'isSupported': hasSummarizer || hasLanguageModel || hasTranslator,
+      'isSupported':
+          hasSummarizer ||
+          hasLanguageModel ||
+          hasTranslator ||
+          hasRewriter ||
+          hasWriter,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/lib/src/flutter_ondevice_ai_web.dart` around lines 389-401: the `isSupported` boolean only checks `hasSummarizer`, `hasLanguageModel`, and `hasTranslator`, so it can be false even when rewriter/writer features exist; extend the `'isSupported'` expression in the returned map (the one also containing `'isModelReady'`, `'platform'`, `'features'`) to include `hasRewriter` and `hasWriter`.

**libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart** (1)
Lines 46-63: ⚠️ Potential issue | 🟠 Major — **Add `mounted` guards around post-await state updates.**

The summarize request can finish after this screen is disposed; calling `setState` in that case throws an error. Both the try block (line 48) and catch block (line 60) lack `mounted` checks before `setState`. The redundant `mounted` guard around `ScaffoldMessenger` can be removed since the early return will prevent that code from executing.

🔧 Suggested fix

```diff
 try {
   final result = await _ai.summarize(_controller.text, options: options);
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _result = result;
     _debugLog = DebugLog(
@@
 } catch (e) {
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _loading = false;
   });
-  if (mounted) {
-    ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
-  }
+  ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart` around lines 46-63: the try and catch blocks call `setState` after awaiting `_ai.summarize` and can run after the widget is disposed; add `if (!mounted) return;` right after `sw.stop()` in both branches, and remove the now-redundant `mounted` check around `ScaffoldMessenger.of(context).showSnackBar` so the SnackBar only shows when mounted.

**libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart** (1)
Lines 36-46: ⚠️ Potential issue | 🟠 Major — **Guard async UI updates with `mounted` checks.**

After the `await` on line 36 completes, if the widget has been disposed (e.g., the user navigated away), calling `setState()` on lines 38 and 45 will crash with "setState() called after dispose()". This is a real race condition that can occur in normal user flows. Add `if (!mounted) return;` immediately after `sw.stop()` in both the try and catch blocks to prevent `setState` calls on a disposed widget. The existing `if (mounted)` check on line 46 should be removed since the mounted guard is now at the block entry.

🔧 Suggested fix

```diff
 try {
   final result = await _ai.classify(_textController.text, options: ClassifyOptions(categories: _selectedCategories));
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _result = result;
     _debugLog = DebugLog(api: 'classify', request: {'text': _textController.text, 'categories': _selectedCategories}, response: {'classifications': result.classifications.map((c) => {'label': c.label, 'score': c.score}).toList()}, timing: sw.elapsedMilliseconds);
     _loading = false;
   });
 } catch (e) {
   sw.stop();
+  if (!mounted) return;
   setState(() {
     _loading = false;
   });
-  if (mounted) ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
+  ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error: $e')));
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart` around lines 36-46: after awaiting `_ai.classify` and stopping the stopwatch (`sw.stop()`), add `if (!mounted) return;` in both the try and catch blocks so `setState()` and `ScaffoldMessenger` calls never run on a disposed widget, and remove the now-redundant `if (mounted)` wrapper around the `showSnackBar` call.

**libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift** (1)
Lines 200-201: ⚠️ Potential issue | 🟠 Major — **Avoid logging raw user prompt content.**

Prompt text can contain sensitive data; logging it undermines the privacy posture.

Proposed fix

```diff
-        NSLog("\(TAG) chat() message: \(message.prefix(100))")
+        NSLog("\(TAG) chat() message length: \(message.count)")
@@
-        NSLog("\(TAG) chatStream() message: \(message.prefix(100))")
+        NSLog("\(TAG) chatStream() message length: \(message.count)")
```

Also applies to: 218
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift` around lines 200-201: the `NSLog` call in the `chat()` handler logs raw prompt content; stop logging the message text and log non-sensitive metadata instead (e.g., message length, a hash, or a fixed placeholder), and apply the same change to the similar `NSLog` in `chatStream()`.

**libraries/flutter_ondevice_ai/example/lib/app_state.dart** (1)
Line 147: ⚠️ Potential issue | 🟠 Major — **Model readiness is over-reported when the device is supported but the model isn't ready yet.**

Using `cap.isModelReady || cap.isSupported` can mark web as ready too early and suppress setup guidance.

Proposed fix

```diff
-    _isModelReady = cap.isModelReady || cap.isSupported;
+    final modelReady = kIsWeb ? cap.isModelReady : (cap.isModelReady || cap.isSupported);
+    _isModelReady = modelReady;
@@
-    final modelReady = cap.isModelReady || cap.isSupported;
     _availableFeatures = _featureDefinitions.map((def) {
```

Also applies to: 161-172
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/app_state.dart` at line 147: `_isModelReady` is set from `cap.isModelReady || cap.isSupported`, which incorrectly marks the model ready when the device is merely supported; rely on `cap.isModelReady` (falling back to `cap.isSupported` only in a deliberate, separate check if truly intended), and audit the similar assignment nearby so UI/setup guidance is not suppressed when the model isn't ready.

**libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift** (1)
Lines 225-250: ⚠️ Potential issue | 🟠 Major — **Propagate errors to `EventChannel` stream listeners.**

When `chain.streamRun()` fails, the method call receives an error, but stream listeners receive no terminal signal. Stream listeners need error notification to properly handle failures. Add error propagation to the event sink before completing the method call:

Proposed fix

```diff
       } catch {
+        chatStreamSink?(FlutterError(
+          code: "ERR_CHAT_STREAM",
+          message: error.localizedDescription,
+          details: nil
+        ))
         result(FlutterError(code: "ERR_CHAT_STREAM", message: error.localizedDescription, details: nil))
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/ios/Classes/FlutterOndeviceAiPlugin.swift` around lines 225-250: when `chain.streamRun(...)` throws, propagate the failure to the `EventChannel` by sending an error payload through `chatStreamSink` before calling `result(FlutterError(...))`, so both stream listeners and the method caller receive the failure.

**libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart** (1)
Lines 340-364: ⚠️ Potential issue | 🟠 Major — **Use semantic button widgets instead of a raw `GestureDetector` for interactive controls.**

A bare `GestureDetector` lacks proper button semantics, focus management, and keyboard activation on web/desktop, which breaks accessibility on those platforms. The proposed fix using `Semantics` + `Material` + `InkWell` is a good starting point, but for complete keyboard accessibility it should also include `FocusableActionDetector` with `ActivateIntent` to enable Enter/Space activation, along with visible focus indicators and cursor hints (e.g., `MouseRegion(cursor: SystemMouseCursors.click)`). See Flutter's focus system and custom button patterns in the accessibility documentation for the recommended approach.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart` around lines 340-364: replace the bare `GestureDetector` around the copy control with an accessible interactive widget: wrap the container in `Semantics` (button: true) plus `Material` + `InkWell` for ripple and hit testing, add a `FocusableActionDetector` wired to `ActivateIntent` for Enter/Space activation and a `MouseRegion(cursor: SystemMouseCursors.click)` pointer hint, draw a visible focus state, and keep the existing `Clipboard.setData(...)` / SnackBar logic in the activation handlers so keyboard, mouse, and screen-reader users can all activate the control.

**libraries/flutter_ondevice_ai/example/ios/Podfile** (1)
Lines 66-69: ⚠️ Potential issue | 🟠 Major — **Fail fast when bridge/SPM setup fails in `post_install`.**

These rescue blocks only log and continue, which can leave the project in a partially configured state that fails later at build/runtime.

🔧 Proposed fix

```diff
   rescue => e
     puts "⚠️ [flutter_ondevice_ai] Bridge configuration failed: #{e.message}"
     puts e.backtrace&.first(3)&.join("\n")
+    raise e
   end
 end
@@
   rescue => e
     puts "⚠️ [flutter_ondevice_ai] SPM embedding setup failed: #{e.message}"
     puts e.backtrace&.first(3)&.join("\n")
+    raise e
   end
 end
```

Also applies to: 120-123
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/ios/Podfile` around lines 66-69: the rescue handlers in the `post_install` block (and the similar handler later in the file) should fail fast instead of only logging; re-raise the caught exception (or call `abort`) after logging the error and backtrace so the build stops immediately on bridge/SPM setup failure rather than continuing in a partially configured state.

**libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/project.pbxproj** (1)
Lines 469-470: ⚠️ Potential issue | 🟠 Major — **Deployment target drift: project configs still declare iOS 13.0.**

These build settings conflict with the iOS 17.0 requirement used elsewhere in this integration. Align the project-level `IPHONEOS_DEPLOYMENT_TARGET` values to `17.0` for consistency and predictable builds.

🔧 Proposed fix

```diff
-        IPHONEOS_DEPLOYMENT_TARGET = 13.0;
+        IPHONEOS_DEPLOYMENT_TARGET = 17.0;
```

Also applies to: 599-600, 650-651
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `libraries/flutter_ondevice_ai/example/ios/Runner.xcodeproj/project.pbxproj` around lines 469-470: project-level build settings still set `IPHONEOS_DEPLOYMENT_TARGET = 13.0`, which conflicts with the iOS 17.0 target used elsewhere; update every occurrence of `IPHONEOS_DEPLOYMENT_TARGET` in the project file to `17.0` so project and target configurations are consistent, leaving unrelated keys unchanged.

**libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift** (1)
120-133: ⚠️ Potential issue | 🟠 Major — Reset cancellation state before starting a new stream.

After `cancel()` is called once, `isCancelled` remains `true`, so subsequent `generateStreaming` calls can terminate immediately.

🔧 Proposed fix

```diff
 func generateStreaming(prompt: String, config: InferenceConfig) -> AsyncThrowingStream<String, Error> {
   AsyncThrowingStream { continuation in
     Task { [weak self] in
       guard let self, self.isLoaded, let session = self.llmSession else {
         continuation.finish(throwing: LocanaraError.custom(.modelNotLoaded, "Model not loaded"))
         return
       }
+      self.lock.withLock { self.isCancelled = false }
       do {
         for try await text in session.streamResponse(to: prompt) {
           if self.lock.withLock({ self.isCancelled }) { break }
           continuation.yield(text)
         }
```

Also applies to: 160-164
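The pattern above—clear the cancellation flag under the same lock at the start of every new stream—is language-independent. A minimal Ruby sketch of the idea; `StreamSession` is hypothetical and only stands in for the Swift engine's `lock`/`isCancelled` pair, not any real API:

```ruby
class StreamSession
  def initialize
    @lock = Mutex.new
    @cancelled = false
  end

  def cancel
    @lock.synchronize { @cancelled = true }
  end

  def each_token(tokens)
    # Clear any stale cancellation from a previous run before streaming,
    # so an earlier cancel cannot short-circuit this new generation.
    @lock.synchronize { @cancelled = false }
    tokens.each do |t|
      break if @lock.synchronize { @cancelled }
      yield t
    end
  end
end
```

Without the reset line, a session that was cancelled once would emit nothing forever after—exactly the bug the review describes.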
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift` around lines 120 - 133, Reset the cancellation flag under the same mutex before starting a new stream so previous cancel() calls don't short-circuit new sessions: inside generateStreaming (the AsyncThrowingStream initializer / Task block) before the guard, call lock.withLock { isCancelled = false } to clear cancellation state, and apply the same change to the other streaming entrypoint referenced around the 160-164 region; keep using the existing lock and isCancelled variables to avoid races.

libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/LocanaraLlamaBridge.podspec-12-13 (1)
12-13: ⚠️ Potential issue | 🟠 Major — Align bridge `swift_version` with C++ interop support and Locanara SDK requirements.

Line 12 declares `s.swift_version = "5.0"`, but the Podfile enables C++ interop with `-cxx-interoperability-mode=default` and C++20 on this target, which requires Swift 5.9 minimum. Since this bridge depends on Locanara and the SDK requires Swift 6.0+, use `"6.0"` to match the SDK's strict concurrency requirements and avoid toolchain incompatibility.

🔧 Proposed fix

```diff
- s.swift_version = "5.0"
+ s.swift_version = "6.0"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/LocanaraLlamaBridge.podspec` around lines 12 - 13, Update the Podspec's Swift version to meet C++ interop and Locanara SDK requirements by changing the s.swift_version declaration in LocanaraLlamaBridge.podspec from "5.0" to "6.0"; ensure the s.swift_version symbol is set to "6.0" so the target matches the -cxx-interoperability-mode and C++20 toolchain and satisfies the SDK's strict concurrency/Swift 6 requirement.

libraries/flutter_ondevice_ai/example/ios/Podfile-35-53 (1)
35-53: ⚠️ Potential issue | 🟠 Major — Make `configure_llama_bridge` idempotent for repeated `pod install` runs.

Lines 35–38 and 48–53 unconditionally append package references and dependencies without checking for existing entries. Since `post_install` runs on every `pod install`, repeated invocations create duplicate SPM references and product dependencies, which destabilizes the Xcode project. This pattern is already handled elsewhere in the file (lines 83–86 explicitly remove duplicate build phases before re-adding them).

Additionally, errors in the function are silently swallowed by the rescue block (lines 66–69), masking any build failures caused by corrupted project state.

Apply the idempotency checks before appending: verify that `pkg_ref` doesn't already exist in `package_references`, and verify that each product dependency doesn't already exist in `bridge_target.package_product_dependencies` before creating and appending new instances.
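The find-or-create check described above can be sketched generically. In this hedged sketch, `PackageRef` is a hypothetical stand-in for Xcodeproj's `XCRemoteSwiftPackageReference`; a real `post_install` hook would search `pods_project.root_object.package_references` instead of a plain array:

```ruby
# Hypothetical stand-in for Xcodeproj's XCRemoteSwiftPackageReference.
PackageRef = Struct.new(:repository_url)

# Append a reference only if one with the same URL is not already present,
# so re-running `pod install` does not accumulate duplicates.
def find_or_add_package_ref(package_references, url)
  existing = package_references.find { |ref| ref.repository_url == url }
  return existing if existing

  ref = PackageRef.new(url)
  package_references << ref
  ref
end
```

The same find-before-append shape applies to the product dependencies: look up an entry with a matching `product_name` in `bridge_target.package_product_dependencies` and only create a new one when the lookup comes back empty.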
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/ios/Podfile` around lines 35 - 53, The configure_llama_bridge code unconditionally appends a new XCRemoteSwiftPackageReference (pkg_ref) and XCSwiftPackageProductDependency entries to pods_project.root_object.package_references and bridge_target.package_product_dependencies, causing duplicates on repeated pod install runs; update configure_llama_bridge to first search pods_project.root_object.package_references for an existing XCRemoteSwiftPackageReference with repositoryURL 'https://github.com/tattn/LocalLLMClient.git' and reuse it if present (assign to pkg_ref only when absent), and for each product_name ('LocalLLMClient', 'LocalLLMClientLlama') check bridge_target.package_product_dependencies for an existing dependency with matching product_name and package reference before creating/appending a new XCSwiftPackageProductDependency; also remove or improve the silent rescue by logging or re-raising the caught exception so build issues in the function aren't swallowed.

libraries/flutter_ondevice_ai/example/ios/Podfile-37-37 (1)
37-37: ⚠️ Potential issue | 🟠 Major — Pin LocalLLMClient to a stable version instead of tracking the `main` branch.

Line 37 pins to a mutable branch, which makes builds non-reproducible and breaks unexpectedly. LocalLLMClient publishes stable tagged releases (latest: `0.4.6`); use `"exactVersion"` or `"revision"` in the requirement hash.

🔧 Proposed fix

```diff
- pkg_ref.requirement = { 'kind' => 'branch', 'branch' => 'main' }
+ pkg_ref.requirement = { 'kind' => 'exactVersion', 'version' => '0.4.6' }
```

Or pin to a specific commit:

```ruby
pkg_ref.requirement = { 'kind' => 'revision', 'revision' => '<commit-sha>' }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/ios/Podfile` at line 37, The Podfile currently pins LocalLLMClient by setting pkg_ref.requirement = { 'kind' => 'branch', 'branch' => 'main' } which tracks a mutable branch; change the requirement hash in the pkg_ref.requirement assignment to a stable pin using either 'exactVersion' with the released tag (e.g. "0.4.6") or 'revision' with a commit SHA to ensure reproducible builds; update the pkg_ref.requirement entry for LocalLLMClient accordingly (replace 'branch' => 'main' with either 'exactVersion' => '<tag>' or 'revision' => '<commit-sha>').
🟡 Minor comments (14)
packages/site/src/pages/docs/libraries/react-native.tsx-74-128 (1)
74-128: ⚠️ Potential issue | 🟡 Minor — API parity statement and list are inconsistent (`describeImage()` missing).

This section says the API matches `expo-ondevice-ai`, but `describeImage()` is not listed, which can mislead readers.

🛠️ Suggested doc patch

```diff
       <li>
         <Link to="/docs/utils/proofread">
           <code>proofread()</code>
         </Link>{" "}
         - Grammar correction
       </li>
+      <li>
+        <Link to="/docs/utils/describe-image">
+          <code>describeImage()</code>
+        </Link>{" "}
+        - Image description
+      </li>
     </ul>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/site/src/pages/docs/libraries/react-native.tsx` around lines 74 - 128, The docs claim API parity with expo-ondevice-ai but the utility list omits describeImage(); add an entry for describeImage() inside the ul alongside the other utilities (similar pattern used for summarize(), classify(), etc.), linking to the describe-image docs page (e.g., <Link to="/docs/utils/describe-image"> with a <code>describeImage()</code> label) so the list includes describeImage() alongside getDeviceCapability() and the other utility functions and keeps the API parity statement accurate.

packages/site/src/pages/docs/libraries/flutter.tsx-64-120 (1)
64-120: ⚠️ Potential issue | 🟡 Minor — `Available APIs` list is incomplete (`describeImage()` missing).

Given this page positions the Flutter API as aligned with native SDK utilities, omitting `describeImage()` creates an avoidable mismatch in docs coverage.

🛠️ Suggested doc patch

```diff
       <li>
         <Link to="/docs/utils/proofread">
           <code>proofread()</code>
         </Link>{" "}
         - Grammar correction
       </li>
+      <li>
+        <Link to="/docs/utils/describe-image">
+          <code>describeImage()</code>
+        </Link>{" "}
+        - Image description
+      </li>
     </ul>
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/site/src/pages/docs/libraries/flutter.tsx` around lines 64 - 120, The "Available APIs" list omits the Flutter image description API; add a new list item for describeImage() to mirror native SDKs so FlutterOndeviceAi.instance shows parity — insert a <li> containing a Link to the describe-image docs (e.g. <Link to="/docs/utils/describe-image">) and a <code>describeImage()</code> label with a short description like "Image captioning/description" alongside the other utilities in the same unordered list.

libraries/flutter_ondevice_ai/example/lib/widgets/shared/stat_badge.dart-22-23 (1)
22-23: ⚠️ Potential issue | 🟡 Minor — Avoid runtime `toUpperCase()` for i18n-safe label display.

At line 22, `label.toUpperCase()` uses Dart's locale-agnostic case mapping, which produces incorrect results for certain locales (e.g., Turkish with dotted/dotless i). While this example app currently has no internationalization setup, pass display-ready labels into the widget instead—this aligns with Dart best practices and avoids issues if localization is added later.

Suggested change

```diff
-          label.toUpperCase(),
+          label,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/shared/stat_badge.dart` around lines 22 - 23, The widget currently calls label.toUpperCase() in the StatBadge display (remove this call) because locale-agnostic uppercasing can break i18n; instead accept and render a display-ready label (leave casing decisions to callers or localization code). Update the StatBadge rendering to use label as-is (remove any toUpperCase usage) and adjust any places that construct StatBadge to pass already-formatted/uppercased strings or localized variants (search for StatBadge and the label parameter to update callers).

libraries/flutter_ondevice_ai/example/lib/widgets/shared/debug_log_panel.dart-103-105 (1)
103-105: ⚠️ Potential issue | 🟡 Minor — Guard JSON serialization to avoid runtime crashes.

Line 104 can throw `UnsupportedError` for non-encodable `dynamic` values and crash the widget tree. Please add a safe fallback for debug rendering.

Proposed fix

```diff
 Widget _codeBlock(dynamic data) {
-  final text = const JsonEncoder.withIndent(' ').convert(data);
+  final String text;
+  try {
+    text = const JsonEncoder.withIndent(' ').convert(data);
+  } catch (_) {
+    text = data?.toString() ?? 'null';
+  }
   return SizedBox(
     height: 200,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/shared/debug_log_panel.dart` around lines 103 - 105, The JsonEncoder.withIndent(' ').convert call in _codeBlock can throw UnsupportedError for non-encodable dynamic values; update the _codeBlock function to guard serialization by wrapping the convert call in a try/catch (catch UnsupportedError and generic Exception), and on error (or if data is null) produce a safe fallback string (e.g., data.toString() or a clear "<non-serializable>" message) to render in the widget instead of letting the exception bubble and crash the widget tree; ensure the final returned widget uses that safe string for display.libraries/flutter_ondevice_ai/example/lib/widgets/shared/info_row.dart-14-19 (1)
14-19: ⚠️ Potential issue | 🟡 Minor — Prevent text overflow in narrow layouts.

Lines 14-19 can overflow when `label` or `value` is long. Make both sides flexible and add overflow handling.

Proposed fix

```diff
 child: Row(
-  mainAxisAlignment: MainAxisAlignment.spaceBetween,
   children: [
-    Text(label, style: const TextStyle(fontSize: 17, color: Colors.black)),
-    Text(value, style: TextStyle(fontSize: 17, color: valueColor ?? const Color(0xFF666666))),
+    Expanded(
+      child: Text(
+        label,
+        style: const TextStyle(fontSize: 17, color: Colors.black),
+        maxLines: 1,
+        overflow: TextOverflow.ellipsis,
+      ),
+    ),
+    const SizedBox(width: 12),
+    Flexible(
+      child: Text(
+        value,
+        textAlign: TextAlign.end,
+        style: TextStyle(fontSize: 17, color: valueColor ?? const Color(0xFF666666)),
+        maxLines: 1,
+        overflow: TextOverflow.ellipsis,
+      ),
+    ),
   ],
 ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/shared/info_row.dart` around lines 14 - 19, The Row's Text widgets can overflow; update the InfoRow widget to make both label and value flexible and clamp long text: wrap each Text in a Flexible/Expanded (use Flexible with appropriate fit for label and value), set maxLines: 1, overflow: TextOverflow.ellipsis, and for the right-side value add textAlign: TextAlign.right so long values truncate and don't break the layout; keep the existing fontSize and color handling (valueColor) intact.

.claude/guides/12-flutter-ondevice-ai.md-226-227 (1)
226-227: ⚠️ Potential issue | 🟡 Minor — Sync app scope counts in the guide.
Lines 226-227 still describe a smaller app scope. Please update tab/feature counts to match the current example app in this PR so the guide doesn’t drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/guides/12-flutter-ondevice-ai.md around lines 226 - 227, Update the guide text where it currently says "3-tab navigation: Features, Device, Settings" and "Feature list → demo screens for all 7 AI features + chat" to reflect the actual app scope in this PR — change the tab count/names and the feature/demo count to match the example app (update the wording that references the tab navigation and the feature list/demo screens so they match the current app implementation). Locate and edit the exact strings "3-tab navigation: Features, Device, Settings" and "Feature list → demo screens for all 7 AI features + chat" in .claude/guides/12-flutter-ondevice-ai.md and replace them with the correct tab names/number and correct feature/demo count used by the PR.

libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiSerialization.kt-137-137 (1)
137-137: ⚠️ Potential issue | 🟡 Minor — Prefer `"other"` over empty string for unknown correction type.

Line 137 serializes null `type` as `""`, which bypasses downstream null-fallback logic and can render a blank label.

💡 Suggested fix

```diff
-  "type" to (c.type ?: ""),
+  "type" to (c.type ?: "other"),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiSerialization.kt` at line 137, The serialization currently writes the "type" field as an empty string when c.type is null; update the serializer in FlutterOndeviceAiSerialization.kt to use a default of "other" instead of "" (i.e., when serializing the "type" key for the correction object `c`, replace the null-coalescing fallback from `""` to `"other"`) so downstream null/unknown-fallback logic receives a meaningful sentinel value.

libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt-49-51 (1)
49-51: ⚠️ Potential issue | 🟡 Minor — Force-unwrapping `context!!` may cause NPE if accessed before attachment.

The `locanara` lazy property uses `context!!`, which will throw NPE if accessed before `onAttachedToEngine` sets the context. Consider adding a null check or ensuring the lazy property is only accessed after attachment.

🛡️ Suggested defensive approach

```diff
 private val locanara: Locanara by lazy {
-  Locanara.getInstance(context!!)
+  Locanara.getInstance(
+    context ?: throw IllegalStateException("Plugin not attached to engine")
+  )
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt` around lines 49 - 51, The lazy property locanara in FlutterOndeviceAiPlugin currently force-unwraps context (context!!) which can NPE if accessed before onAttachedToEngine; change initialization to avoid force-unwrapping by either: (A) make locanara nullable and initialize it inside onAttachedToEngine using Locanara.getInstance(context) after verifying context is non-null, or (B) keep locanara non-null but initialize it in onAttachedToEngine (remove the lazy block) or use a lazy block that calls requireNotNull(context) with a clear error message so access before attachment fails fast; update any callers to handle the nullable form if you choose option A.

libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt-670-703 (1)
670-703: ⚠️ Potential issue | 🟡 Minor — Ensure EventSink callbacks are invoked on the main thread.

In `handleDownloadPromptApiModel`, the `downloadProgressSink?.success()` calls (lines 679, 689) are made within the suspend function context. Since `locanara.downloadPromptApiModel` may invoke the progress callback on a background thread, ensure these UI thread calls are properly dispatched. The earlier `downloadFile` method correctly uses `scope.launch { ... }` for progress updates.

🛡️ Suggested fix to ensure main thread dispatch

```diff
 locanara.downloadPromptApiModel { progress ->
   val pct = if (progress.bytesToDownload > 0) {
     progress.bytesDownloaded.toDouble() / progress.bytesToDownload.toDouble()
   } else {
     0.0
   }
-  downloadProgressSink?.success(
+  scope.launch {
+    downloadProgressSink?.success(
       mapOf(
         "modelId" to "gemini-nano",
         "bytesDownloaded" to progress.bytesDownloaded,
         "totalBytes" to progress.bytesToDownload,
         "progress" to pct,
         "state" to "downloading",
       )
-  )
+    )
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt` around lines 670 - 703, The progress callbacks in handleDownloadPromptApiModel may be invoked on a background thread; wrap all calls to downloadProgressSink?.success (both the per-progress callback passed into locanara.downloadPromptApiModel and the final completion success call) so they run on the main/UI thread (e.g., dispatch via the plugin's CoroutineScope with Dispatchers.Main or use activity.runOnUiThread) to match the approach used in downloadFile and avoid threading issues with EventSink.

libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/chain_demo.dart-48-50 (1)
48-50: ⚠️ Potential issue | 🟡 Minor — Use `topClassification` for the conditional decision.

`classifications.first` is order-dependent; use the explicit top classification field to avoid wrong rewrite style selection.

Suggested fix

```diff
-  final topCategory = classify.classifications.isNotEmpty ? classify.classifications.first.label : 'Unknown';
+  final topCategory = classify.topClassification.label;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/framework_detail/chain_demo.dart` around lines 48 - 50, The current logic uses classify.classifications.first.label (order-dependent) to set topCategory and decide rewrite style; change it to use the explicit top classification field returned by classify (e.g., classify.topClassification?.label ?? 'Unknown') when computing topCategory, then pass that topCategory into RewriteOptions for rewrite (RewriteOutputType.professional vs friendly) so the rewrite decision uses the topClassification rather than classifications.first; keep the same fallback 'Unknown' behavior.

libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart-101-116 (1)
101-116: ⚠️ Potential issue | 🟡 Minor — `Add` button enablement does not refresh while typing.

The button state depends on `_customController.text`, but this widget does not rebuild as the text changes.

🔧 Suggested fix

```diff
 child: TextField(
   controller: _customController,
+  onChanged: (_) => setState(() {}),
   decoration: InputDecoration(
     hintText: 'Add custom category...',
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/classify_demo.dart` around lines 101 - 116, The Add button doesn't rebuild while typing because the widget never reacts to changes in _customController; add a listener to _customController in the State's initState that calls setState() when the text changes (so the TextButton's onPressed check _customController.text.trim().isEmpty updates), and remove the listener in dispose; locate _customController, the _addCustomCategory handler and the TextButton/TextField in classify_demo.dart to implement this change.

libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart-79-85 (1)
79-85: ⚠️ Potential issue | 🟡 Minor — The “Enable” CTA is currently a no-op.

Line 80 uses `onTap: () {}`, so the button appears actionable but does nothing. Please wire a real action (or remove the tap affordance until an action exists).

🐛 Minimal safe fix (remove misleading tap behavior)

```diff
-      trailing: GestureDetector(
-        onTap: () {},
-        child: Container(
+      trailing: Container(
         padding: const EdgeInsets.symmetric(horizontal: 16, vertical: 8),
         decoration: BoxDecoration(color: const Color(0xFFFF9500), borderRadius: BorderRadius.circular(8)),
         child: const Text('Enable', style: TextStyle(color: Colors.white, fontSize: 14, fontWeight: FontWeight.w600)),
-        ),
-      ),
+      ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart` around lines 79 - 85, The "Enable" CTA is currently a no-op because the trailing GestureDetector in ai_status_banner.dart has onTap: () {} — either wire a real handler or remove the misleading tap affordance; for a minimal safe fix replace the GestureDetector with a non-interactive wrapper (e.g., use the Container directly or wrap with Padding) or set onTap to null so the button no longer appears actionable until you implement an actual enable function (update the trailing GestureDetector/onTap usage in the AIStatusBanner widget).

libraries/flutter_ondevice_ai/example/lib/main.dart-49-60 (1)
49-60: ⚠️ Potential issue | 🟡 Minor — Guard route argument parsing to prevent runtime crashes.

Lines 51 and 57 perform a hard cast, and lines 53 and 59 force-unwrap values. A malformed or missing route argument will throw at runtime. Please validate args before constructing detail screens.

🐛 Proposed fix

```diff
 onGenerateRoute: (settings) {
   if (settings.name == '/feature-detail') {
-    final args = settings.arguments as Map<String, String>;
+    final args = settings.arguments;
+    if (args is! Map) return null;
+    final id = args['id'];
+    final name = args['name'];
+    if (id is! String || name is! String) return null;
     return MaterialPageRoute(
-      builder: (_) => FeatureDetailScreen(id: args['id']!, name: args['name']!),
+      builder: (_) => FeatureDetailScreen(id: id, name: name),
     );
   }
   if (settings.name == '/framework-detail') {
-    final args = settings.arguments as Map<String, String>;
+    final args = settings.arguments;
+    if (args is! Map) return null;
+    final id = args['id'];
+    final name = args['name'];
+    if (id is! String || name is! String) return null;
     return MaterialPageRoute(
-      builder: (_) => FrameworkDetailScreen(id: args['id']!, name: args['name']!),
+      builder: (_) => FrameworkDetailScreen(id: id, name: name),
     );
   }
   return null;
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/main.dart` around lines 49 - 60, The route handler in onGenerateRoute currently force-casts settings.arguments and force-unwraps map entries for FeatureDetailScreen and FrameworkDetailScreen, which will crash on malformed or missing args; update the code to defensively check that settings.arguments is a Map<String, String> (or contains the expected keys), verify that args['id'] and args['name'] are non-null/non-empty before creating FeatureDetailScreen or FrameworkDetailScreen, and return a safe fallback Route (e.g., an error screen or Navigator.pop-friendly route) when validation fails; look for the onGenerateRoute closure and the usage sites of settings.arguments, FeatureDetailScreen(id: ...), and FrameworkDetailScreen(id: ...) to implement these checks.

libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart-67-74 (1)
67-74: ⚠️ Potential issue | 🟡 Minor — Don’t always show a success message after refresh.
Line 71 currently reports success even if SDK refresh fails. Please branch snackbar messaging by final SDK state.
🐛 Proposed fix
```diff
 onTap: () async {
   await state.initializeSDK();
-  if (context.mounted) {
-    ScaffoldMessenger.of(context).showSnackBar(
-      const SnackBar(content: Text('SDK state has been refreshed.')),
-    );
-  }
+  if (!context.mounted) return;
+  final snackBar = state.sdkState == SDKState.error
+      ? SnackBar(content: Text(state.errorMessage ?? 'Failed to refresh SDK state.'))
+      : const SnackBar(content: Text('SDK state has been refreshed.'));
+  ScaffoldMessenger.of(context).showSnackBar(snackBar);
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart` around lines 67 - 74, The current onTap handler always shows a success SnackBar after calling state.initializeSDK(); change it to inspect the resulting SDK state (e.g. a boolean like state.isInitialized or a status property returned/updated by initializeSDK()) and branch the Snackbar message: show the success message only when the SDK did initialize successfully, otherwise show a failure/error message; keep the context.mounted guard and use ScaffoldMessenger.of(context).showSnackBar accordingly (update references around state.initializeSDK() and the SnackBar call).
Resolved review threads:
- ...aries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift
- libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart
- libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart (outdated)
- libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart (outdated)
- Remove dart:io imports from example app (web compatibility)
- Replace Platform.isIOS with defaultTargetPlatform across all files
- Add inference serialization to LlamaCppBridgeEngine (beginInference/endInference)
- Add mounted checks in chat_demo.dart and summarize_demo.dart
- Fix substring RangeError in summarize_demo.dart debug log
- Set readTimeout to 900s for model downloads (was 0/unlimited)
- Bump example minSdk from 31 to 34 (Gemini Nano requirement)
- Clarify guide diagram dependency labels
Actionable comments posted: 10
♻️ Duplicate comments (1)
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart (1)
49-50: ⚠️ Potential issue | 🟡 Minor — Missing `_messages.isEmpty` guard before accessing `.last` in non-streaming path.

Line 49 checks `mounted` but not `_messages.isEmpty`. If the user clears messages (line 136) while awaiting the chat response, `_messages.last` at line 50 will throw `RangeError`. The streaming path correctly has both guards at line 38, but this path is missing the second check.

🔧 Proposed fix

```diff
-      if (!mounted) return;
+      if (!mounted || _messages.isEmpty) return;
       setState(() {
         _messages.last = _Message(role: 'assistant', content: result.message);
       });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart` around lines 49 - 50, The non-streaming completion branch updates _messages.last without ensuring the list isn't empty, causing a RangeError if messages were cleared while awaiting the response; update the guard to check _messages.isNotEmpty (e.g., ensure !mounted &&/or return early if _messages.isEmpty) before calling setState that assigns _messages.last = _Message(role: 'assistant', content: result.message) so it matches the streaming path's protections.
🧹 Nitpick comments (3)
libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart (1)
135-136: Consider disabling clear button while loading.

The clear button remains enabled during active requests. While the guards at line 38 prevent crashes, allowing message clearing mid-request creates an inconsistent UX where the response appears in an empty chat.
♻️ Optional improvement
```diff
 if (_messages.isNotEmpty)
-  IconButton(icon: const Icon(Icons.delete, color: Color(0xFFFF3B30)), onPressed: () => setState(() => _messages.clear())),
+  IconButton(
+    icon: Icon(Icons.delete, color: _loading ? const Color(0xFFFF3B30).withOpacity(0.4) : const Color(0xFFFF3B30)),
+    onPressed: _loading ? null : () => setState(() => _messages.clear()),
+  ),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart` around lines 135 - 136, The clear IconButton currently calls setState(() => _messages.clear()) regardless of in-flight requests; update its onPressed to be null (disabled) while the loading flag (e.g., _isLoading) is true so users cannot clear mid-request, and keep the existing check for _messages.isNotEmpty; locate the IconButton with symbol names IconButton, onPressed, setState and the _messages.clear() call and conditionally disable it by gating on !_isLoading (or existing loading variable) so the button is inactive during active requests.

libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt (2)
536-546: Minor: Consider using `withContext(Dispatchers.Main)` instead of nested `scope.launch`.

The `scope.launch {}` creates a fire-and-forget coroutine for progress updates. Since you're already in a suspend function, using `withContext(Dispatchers.Main)` would ensure the event is sent before continuing, avoiding potential ordering issues on rapid completions.

♻️ Optional: Use withContext for ordered execution

```diff
-  scope.launch {
-    downloadProgressSink?.success(
-      mapOf(
-        "modelId" to modelId,
-        "bytesDownloaded" to bytesRead,
-        "totalBytes" to contentLength,
-        "progress" to progress,
-        "state" to "downloading",
-      )
-    )
-  }
+  withContext(Dispatchers.Main) {
+    downloadProgressSink?.success(
+      mapOf(
+        "modelId" to modelId,
+        "bytesDownloaded" to bytesRead,
+        "totalBytes" to contentLength,
+        "progress" to progress,
+        "state" to "downloading",
+      )
+    )
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt` around lines 536 - 546, Replace the fire-and-forget coroutine used to emit download progress from scope.launch to an explicit main-context suspension so progress events are sent in order: inside the suspend function where downloadProgressSink is used (currently wrapped with scope.launch), swap that call to use withContext(Dispatchers.Main) to invoke downloadProgressSink?.success(...) synchronously on the main dispatcher; keep the same payload keys ("modelId","bytesDownloaded","totalBytes","progress","state") and remove the surrounding scope.launch usage to avoid ordering/race issues.
49-51: Consider defensive null handling for `context` in lazy initialization.

The `context!!` forced unwrap could throw `NullPointerException` if `locanara` is accessed before `onAttachedToEngine` sets the context. While unlikely in normal plugin lifecycle, defensive patterns are safer.

♻️ Optional: Add null check with meaningful error

```diff
 private val locanara: Locanara by lazy {
-  Locanara.getInstance(context!!)
+  val ctx = context ?: throw IllegalStateException("Plugin not attached to engine")
+  Locanara.getInstance(ctx)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt` around lines 49 - 51, The lazy init for locanara uses context!! which can NPE if accessed before onAttachedToEngine; change the initializer to defensively handle null by checking context and failing fast with a clear error (e.g., throw IllegalStateException with a message) or defer initialization until onAttachedToEngine sets the context (move Locanara.getInstance(context) call into onAttachedToEngine or use context ?: throw IllegalStateException("Plugin not attached")). Update the reference to Locanara.getInstance and the locanara property initializer accordingly so locanara is only created with a non-null context.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/guides/12-flutter-ondevice-ai.md:
- Around line 226-227: Update the mismatched docs text: replace the literal
"3-tab navigation" and the listed tabs "Features, Device, Settings" with the
correct tab count and names that include the Framework tab (e.g., "4-tab
navigation: Features, Framework, Device, Settings"), and adjust the adjacent
line "Feature list → demo screens for all 7 AI features + chat" to reflect the
larger demo surface (mentioning Framework demos where applicable); search for
the exact phrases "3-tab navigation" and "Feature list → demo screens for all 7
AI features + chat" in this doc and edit them to match the PR’s objective.
- Around line 11-16: Update the prerequisites list in
.claude/guides/12-flutter-ondevice-ai.md to distinguish the library vs example
app Android SDK requirements: keep "Android API 26+" as the minimum for the core
library, and add an explicit line stating the example app requires minSdk 34
(Gemini Nano); ensure the two bullets are clearly labeled (e.g., "Library:" and
"Example app:") so users know why local runs can fail if they use the example
without raising minSdk.
In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`:
- Around line 673-688: The download progress EventSink is being called from a
background thread inside the locanara.downloadPromptApiModel callback, which
violates Flutter's requirement that EventSink methods run on the main thread;
update the callback in FlutterOndeviceAiPlugin.kt (the lambda passed to
locanara.downloadPromptApiModel) to post the EventSink.success(...) call to the
Android main thread (for example by using a Handler(Looper.getMainLooper()).post
{ ... } or Dispatchers.Main) so that downloadProgressSink?.success(...) is
invoked on the UI thread while preserving the same payload
("modelId","bytesDownloaded","totalBytes","progress","state").
In
`@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`:
- Around line 124-126: The current check inside LlamaCppBridgeEngine that
inspects error.localizedDescription for substrings "nil" or "fatal" is fragile;
instead, update the error handling in the same block (the code that calls
lock.withLock { isLoaded = false; llmSession = nil }) to detect unrecoverable
errors by inspecting concrete error types or error codes (e.g., cast to NSError
and check domain/code or match specific error enums from the underlying library)
and only clear isLoaded and nil out llmSession when those conditions are met;
keep the lock.withLock usage and state updates but replace the substring logic
with explicit type/code-based checks and a safe fallback that logs the error
without resetting state.
- Around line 216-242: Concurrent calls to loadAndRegisterModel can orphan a
previous BridgedLlamaCppEngine (and its LLMSession); serialize load/unload and
ensure any in-progress load is cancelled or the previous engine is explicitly
unloaded before replacing it. Add a single-task guard (e.g., a serial actor or a
private serial DispatchQueue + a loadingTask property) that runs
loadAndRegisterModel and unloadModel operations serially; before assigning
self.engine inside loadAndRegisterModel, capture and call unload() on any
existing engine (or cancel the running loadingTask and then unload) and ensure
unloadModel cancels/awaits the loadingTask then calls engine?.unload() and
unregisters the engine from InferenceRouter; apply these changes around the
loadAndRegisterModel, unloadModel, engine, BridgedLlamaCppEngine and
InferenceRouter usage.
In `@libraries/flutter_ondevice_ai/example/lib/app_state.dart`:
- Around line 146-147: The code sets _isModelReady using "cap.isModelReady ||
cap.isSupported", which incorrectly treats mere support as readiness; change
assignments to only use "cap.isModelReady" for _isModelReady and keep
"cap.isSupported" as a separate check (e.g., _isSupported or using
cap.isSupported inline) to drive capability-available UI paths without enabling
ready-only features; update the other occurrences referenced (the block around
the 160-171 logic) to follow the same separation so readiness and support are
handled independently.
In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart`:
- Around line 60-66: The catch block calls setState(() { _loading = false; })
without verifying mounted, which can cause setState() after dispose; update the
catch handler in summarize_demo.dart to check mounted before calling setState
(i.e., move or wrap the setState call with if (mounted) { setState(...) }),
ensure sw.stop() still runs, and keep the existing mounted-wrapped
ScaffoldMessenger.of(context).showSnackBar(...) as-is.
In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart`:
- Around line 77-83: The trailing GestureDetector in ai_status_banner.dart
currently has an empty onTap; either wire it to a real action or remove the
button: add a VoidCallback property (e.g., onEnablePressed) to the
AiStatusBanner widget and call that callback from the trailing GestureDetector's
onTap so callers can open iOS Settings or request permissions, or if you don't
need external handling simply call a concrete action (e.g.,
openAppSettings/openAppSettings from permission_handler) inside onTap; update
the AiStatusBanner constructor and usages to provide the new callback or delete
the trailing enable widget altogether.
In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart`:
- Around line 34-38: Replace user-facing SnackBar messages that currently
interpolate raw exception details (e) with a generic, non-sensitive message
while keeping the detailed error in the debug/log output; specifically, in the
catch blocks related to model download, load, delete and switch operations (the
catch handling around debugPrint('[ModelSheet] ... FAILED: $e\n$st') and
subsequent ScaffoldMessenger.of(context).showSnackBar calls) change the SnackBar
content to something like 'Download failed' / 'Load failed' / 'Delete failed' /
'Switch failed' (no $e or stack trace), preserve the existing debugPrint (or
logger) that includes $e and $st, and keep the context.mounted guard when
showing the SnackBar so only the UI message is sanitized while full error
details remain in logs.
- Around line 239-254: The UI currently only marks the row whose id equals
_actionLoading as busy, allowing other rows to start concurrent operations;
change the coordination so operations are globally exclusive by replacing the
single _actionLoading string with a shared state (either a boolean like
_isActionInProgress or a Set<String> _actionLoadingIds) and update all callers
(_handleDownload, _handleLoad, _handleDelete) to check/set this shared state
atomically before starting async work and clear it in finally; update the
_ModelRow "loading" prop to reflect the new shared state (e.g., loading =
_isActionInProgress || _actionLoadingIds.contains(model.modelId)) so other rows
disable their buttons while any native operation is running (apply same change
to the other mappings at the noted ranges).
---
Duplicate comments:
In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart`:
- Around line 49-50: The non-streaming completion branch updates _messages.last
without ensuring the list isn't empty, causing a RangeError if messages were
cleared while awaiting the response; update the guard to check
_messages.isNotEmpty (e.g., ensure !mounted &&/or return early if
_messages.isEmpty) before calling setState that assigns _messages.last =
_Message(role: 'assistant', content: result.message) so it matches the streaming
path's protections.
---
Nitpick comments:
In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`:
- Around line 536-546: Replace the fire-and-forget coroutine used to emit
download progress from scope.launch to an explicit main-context suspension so
progress events are sent in order: inside the suspend function where
downloadProgressSink is used (currently wrapped with scope.launch), swap that
call to use withContext(Dispatchers.Main) to invoke
downloadProgressSink?.success(...) synchronously on the main dispatcher; keep
the same payload keys
("modelId","bytesDownloaded","totalBytes","progress","state") and remove the
surrounding scope.launch usage to avoid ordering/race issues.
- Around line 49-51: The lazy init for locanara uses context!! which can NPE if
accessed before onAttachedToEngine; change the initializer to defensively handle
null by checking context and failing fast with a clear error (e.g., throw
IllegalStateException with a message) or defer initialization until
onAttachedToEngine sets the context (move Locanara.getInstance(context) call
into onAttachedToEngine or use context ?: throw IllegalStateException("Plugin
not attached")). Update the reference to Locanara.getInstance and the locanara
property initializer accordingly so locanara is only created with a non-null
context.
In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart`:
- Around line 135-136: The clear IconButton currently calls setState(() =>
_messages.clear()) regardless of in-flight requests; update its onPressed to be
null (disabled) while the loading flag (e.g., _isLoading) is true so users
cannot clear mid-request, and keep the existing check for _messages.isNotEmpty;
locate the IconButton with symbol names IconButton, onPressed, setState and the
_messages.clear() call and conditionally disable it by gating on !_isLoading (or
existing loading variable) so the button is inactive during active requests.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- .claude/guides/12-flutter-ondevice-ai.md
- libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt
- libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts
- libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift
- libraries/flutter_ondevice_ai/example/lib/app_state.dart
- libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart
- libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/chat_demo.dart
- libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart
- libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart
- libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart
🚧 Files skipped from review as they are similar to previous changes (2)
- libraries/flutter_ondevice_ai/example/lib/screens/settings_screen.dart
- libraries/flutter_ondevice_ai/example/android/app/build.gradle.kts
- Guide: clarify Android SDK requirement (lib API 24+ vs example API 34+)
- Guide: fix tab-count mismatch (3-tab → multi-tab navigation)
- Android: dispatch EventSink callbacks to main thread via scope.launch
- Swift: replace fragile string-based error detection with NSError domain check
- Swift: add isLoading guard to prevent concurrent loadAndRegisterModel calls
- Dart: fix isModelReady to use cap.isModelReady only (not cap.isSupported)
- Dart: add mounted check before setState in summarize_demo catch block
- Dart: wire Enable button to open system settings in ai_status_banner
- Dart: use generic error messages in model_selection_sheet SnackBars
- Dart: add global isBusy lock to prevent overlapping model operations
Actionable comments posted: 5
♻️ Duplicate comments (2)
.claude/guides/12-flutter-ondevice-ai.md (1)
226-227: ⚠️ Potential issue | 🟡 Minor

Clarify feature coverage count to match PR scope.

The wording still reads as "7 AI features + chat", while PR scope says 8 AI features plus framework demos. Please make the count explicit and mention framework demos in this bullet to avoid reader confusion.

Suggested doc tweak

```diff
-- Feature list → demo screens for all 7 AI features + chat
+- Feature list → demo screens for all 8 AI features (including chat/streaming) + 7 framework demos
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/guides/12-flutter-ondevice-ai.md around lines 226 - 227, Update the feature-count bullet to match the PR scope: replace the line containing "Feature list → demo screens for all 7 AI features + chat" with a line that explicitly states "Feature list → demo screens for all 8 AI features (including chat) + framework demos" so readers see the correct count and that framework demos are included; target the exact bullet text in the file (the "Feature list → demo screens..." line) for this change.

libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift (1)
220-255: ⚠️ Potential issue | 🟠 Major

Serialize `loadAndRegisterModel` and `unloadModel` as one operation stream.

`unloadModel()` can run while the Task started at Line 230 is still loading; then load completion can still set `self.engine` and re-register after unload was requested.

🔧 Suggested fix direction

```diff
 public class LlamaCppBridgeEngine: NSObject, LlamaCppBridgeProvider {
     private var engine: BridgedLlamaCppEngine?
     private var isLoading = false
     private let loadLock = NSLock()
+    private var loadingTask: Task<Void, Never>?

     public func loadAndRegisterModel(_ modelPath: String, mmprojPath: String?, completion: @escaping (NSError?) -> Void) {
         loadLock.lock()
-        guard !isLoading else {
+        guard !isLoading, loadingTask == nil else {
             loadLock.unlock()
             completion(NSError(domain: "LlamaCppBridge", code: -1,
                 userInfo: [NSLocalizedDescriptionKey: "Model load already in progress"]))
             return
         }
         isLoading = true
-        loadLock.unlock()
-
-        Task {
+        let task = Task {
             do {
                 // existing load flow...
                 try await newEngine.loadModel()
+                try Task.checkCancellation()
                 self.engine = newEngine
                 InferenceRouter.shared.registerEngine(newEngine as any InferenceEngine)
                 self.loadLock.withLock { self.isLoading = false; self.loadingTask = nil }
                 completion(nil)
             } catch {
                 self.loadLock.withLock { self.isLoading = false; self.loadingTask = nil }
                 completion(error as NSError)
             }
         }
+        loadingTask = task
+        loadLock.unlock()
     }

     public func unloadModel() {
+        let taskToCancel = loadLock.withLock { () -> Task<Void, Never>? in
+            let t = loadingTask
+            loadingTask = nil
+            isLoading = false
+            return t
+        }
+        taskToCancel?.cancel()
         engine?.unload()
         InferenceRouter.shared.unregisterEngine()
         engine = nil
     }
 }
```

Also applies to: 258-262
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift` around lines 220 - 255, Serialize load/unload by making them mutually exclusive: use a single synchronization primitive (e.g., the existing loadLock or a dedicated serial DispatchQueue) and a pendingUnload flag so unloadModel and loadAndRegisterModel cannot race; modify unloadModel to set pendingUnload = true under the same lock/queue, perform unregister + unload on that queue, and clear pendingUnload; in loadAndRegisterModel, after creating and loading newEngine but before assigning self.engine or calling InferenceRouter.registerEngine, check pendingUnload (under the same lock/queue) — if pendingUnload is set, immediately unload newEngine and complete with an error/early return; otherwise atomically set self.engine, call InferenceRouter.registerEngine, and clear isLoading; ensure all accesses to isLoading, pendingUnload, engine, and register/unregister happen under the same lock/queue so load and unload are strictly serialized.
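The serialization scheme in this comment reduces to one lock guarding three pieces of state (`isLoading`, `pendingUnload`, the engine reference), with a load that finishes after an unload was requested discarding its result instead of re-registering. A compact Java sketch of that state machine, with a plain `Object` standing in for the engine type (all names are hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

// One lock guards loading / pendingUnload / engine; a load completing
// after unload was requested drops its engine instead of registering it.
final class ModelGate {
    private final ReentrantLock lock = new ReentrantLock();
    private boolean loading;
    private boolean pendingUnload;
    private Object engine;                 // stands in for the loaded engine

    /** Returns false if another load is already in flight. */
    boolean beginLoad() {
        lock.lock();
        try {
            if (loading) return false;
            loading = true;
            pendingUnload = false;
            return true;
        } finally {
            lock.unlock();
        }
    }

    /** Called when the async load finishes; returns false if the result was discarded. */
    boolean finishLoad(Object newEngine) {
        lock.lock();
        try {
            loading = false;
            if (pendingUnload) {           // unload raced in: drop the new engine
                pendingUnload = false;
                return false;
            }
            engine = newEngine;            // "register" only on the happy path
            return true;
        } finally {
            lock.unlock();
        }
    }

    void unload() {
        lock.lock();
        try {
            if (loading) {
                pendingUnload = true;      // defer: finishLoad will discard
            } else {
                engine = null;
            }
        } finally {
            lock.unlock();
        }
    }

    Object current() {
        lock.lock();
        try {
            return engine;
        } finally {
            lock.unlock();
        }
    }
}
```

The key invariant is that every read or write of the three fields happens under the same lock, so "load completed" and "unload requested" can never both win.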
🧹 Nitpick comments (2)
libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart (1)
111-118: Consider adding user feedback when settings cannot be opened.

The `_openSettings` method silently fails if `canLaunchUrl` returns false. While this is unlikely on iOS, providing feedback would improve UX.

💡 Optional improvement

```diff
 Future<void> _openSettings() async {
   if (!kIsWeb && defaultTargetPlatform == TargetPlatform.iOS) {
     final uri = Uri.parse('app-settings:');
     if (await canLaunchUrl(uri)) {
       await launchUrl(uri);
+    } else {
+      debugPrint('[AIStatusBanner] Unable to open settings');
     }
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart` around lines 111 - 118, The _openSettings method currently silently does nothing when canLaunchUrl(uri) is false; update _openSettings to handle the failure case by providing user feedback (e.g., show a SnackBar or AlertDialog) when launch is not possible, while keeping the existing iOS-only guard and using the same Uri and launchUrl call for success; reference the _openSettings function and the canLaunchUrl/launchUrl calls so you add a UI feedback path that runs when canLaunchUrl returns false or when launchUrl throws.

libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt (1)
523-548: Consider adding cancellation checks for long-running downloads.

For large model downloads (2.5GB+), if the plugin is detached or the download is cancelled, the operation will continue until the stream is exhausted. Adding periodic `isActive` checks would allow the download to respond to cancellation more promptly.

♻️ Optional improvement for cancellation awareness

```diff
 while (true) {
     val count = input.read(buffer)
     if (count == -1) break
+    if (!isActive) {
+        android.util.Log.d(TAG, "downloadFile: cancelled")
+        throw kotlinx.coroutines.CancellationException("Download cancelled")
+    }
     output.write(buffer, 0, count)
     bytesRead += count
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt` around lines 523 - 548, The download loop in FlutterOndeviceAiPlugin.kt (the while(true) reading from input and writing to output using variables like input, output, bytesRead, lastProgressReport, contentLength, silent, modelId and reporting via scope and downloadProgressSink) lacks cancellation checks and can continue after the coroutine/scope is cancelled; modify the loop to periodically check coroutine cancellation (e.g., call coroutineContext.ensureActive() or check scope.isActive) inside the loop and break/cleanup promptly when cancelled so the stream is closed and no further progress events are sent, making sure any necessary cleanup (closing input/output, sending a final cancelled state to downloadProgressSink if appropriate) occurs before exit.
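The cancellation-aware loop this nitpick describes can be shown outside Kotlin as well: check a cancellation signal between chunks so a multi-gigabyte transfer stops promptly instead of draining the whole stream. A Java sketch, with thread interruption standing in for the coroutine's `isActive`/`ensureActive()` check (a sketch, not the plugin's code):

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Copies a stream chunk by chunk, checking for cancellation between
// chunks (thread interruption here plays the role of ensureActive()).
final class CancellableCopy {
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[8192];
        long total = 0;
        int count;
        while ((count = in.read(buffer)) != -1) {
            if (Thread.currentThread().isInterrupted()) {
                throw new IOException("Download cancelled");
            }
            out.write(buffer, 0, count);
            total += count;
        }
        return total;
    }
}
```

Per-chunk checks cost almost nothing relative to I/O, which is why they are the conventional place to make long transfers responsive to cancellation.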
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.claude/guides/12-flutter-ondevice-ai.md:
- Around line 11-12: The docs state "Flutter SDK 3.3.0+" but pubspec.yaml
requires Flutter >= 3.22.0, so update the setup requirements in this guide by
replacing the "Flutter SDK 3.3.0+" entry with "Flutter SDK 3.22.0+" (leave the
"Dart SDK 3.3.0+" line unchanged) so the documented minimum matches the
library's pubspec.yaml.
In
`@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`:
- Around line 37-44: beginInference() currently checks isInferencing under lock
and then later sets isInferencing/isCancelled in a separate lock, allowing a
race where two callers both see false and both start inference; change the logic
so the check and the set are atomic inside a single critical section: loop
attempting to acquire the gate should call lock.withLock to check isInferencing
and, if false, set isInferencing = true and isCancelled = false and then break;
only if the gate is already taken should the task sleep and retry—update the
beginInference() implementation to perform the check-and-set inside one
lock.withLock block to prevent concurrent starts.
- Around line 133-163: The spawned Task inside generateStreaming is not
cancelled when the AsyncThrowingStream is terminated; capture the Task returned
by Task { ... } in a local variable and wire up continuation.onTermination (or
the onTermination handler of AsyncThrowingStream) to cancel that Task and call
endInference() as needed. Specifically, in generateStreaming, assign the Task to
a variable (e.g., let fetchTask = Task { ... }), set continuation.onTermination
= { `@Sendable` _ in fetchTask.cancel(); self?.endInference() }, and keep the
existing in-loop cancellation check (lock.withLock { self.isCancelled }) and
error handling for beginInference()/endInference(); this ensures the internal
inference Task stops when the consumer cancels the stream.
In
`@libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart`:
- Line 64: Replace the SnackBar that displays the raw exception text (the call
to ScaffoldMessenger.of(context).showSnackBar(SnackBar(content: Text('Error:
$e')))) with a generic user-facing message (e.g., "An error occurred, please try
again") and move the detailed exception info to debug-only logging (use
debugPrint or kDebugMode from flutter/foundation to print/log the full exception
and stacktrace). Ensure you update the code where
ScaffoldMessenger.of(context).showSnackBar is invoked and stop interpolating the
exception variable e into UI strings; only log e in debug or to a non-UI logger.
- Around line 120-138: The custom _segmented widget (function
_segmented(List<String> values, List<String> labels, String selected,
ValueChanged<String> onChanged)) should be replaced with Flutter's
SegmentedButton<String>: remove the GestureDetector+Container implementation and
build a SegmentedButton where the segments are generated from values/labels, the
selected state maps to the SegmentedButton's selected set, and onChanged invokes
the provided onChanged callback; ensure you configure style (background, border,
text style) to match the current visuals and enable multiSelection: false
(single selection) and provide appropriate semantics so keyboard/arrow
navigation and screen-reader announcements work.
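One of the comments above (on `beginInference()` in LlamaCppBridgeEngine.swift) asks for an atomic check-and-set: the "is inference running?" test and the claim must happen as a single step so two callers can never both observe the gate as free. The same idea in a minimal Java sketch (names are illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A single-flight gate: the test and the claim are one atomic step.
final class InferenceGate {
    private final AtomicBoolean inferencing = new AtomicBoolean(false);

    /** Returns true only for the caller that actually acquired the gate. */
    boolean begin() {
        // compareAndSet plays the role of lock.withLock { check + set }
        return inferencing.compareAndSet(false, true);
    }

    void end() {
        inferencing.set(false);
    }
}
```

Checking under one lock and setting under another (the bug described in the review) is exactly what `compareAndSet`, or a single `withLock` block, rules out.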
---
Duplicate comments:
In @.claude/guides/12-flutter-ondevice-ai.md:
- Around line 226-227: Update the feature-count bullet to match the PR scope:
replace the line containing "Feature list → demo screens for all 7 AI features +
chat" with a line that explicitly states "Feature list → demo screens for all 8
AI features (including chat) + framework demos" so readers see the correct count
and that framework demos are included; target the exact bullet text in the file
(the "Feature list → demo screens..." line) for this change.
In
`@libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift`:
- Around line 220-255: Serialize load/unload by making them mutually exclusive:
use a single synchronization primitive (e.g., the existing loadLock or a
dedicated serial DispatchQueue) and a pendingUnload flag so unloadModel and
loadAndRegisterModel cannot race; modify unloadModel to set pendingUnload = true
under the same lock/queue, perform unregister + unload on that queue, and clear
pendingUnload; in loadAndRegisterModel, after creating and loading newEngine but
before assigning self.engine or calling InferenceRouter.registerEngine, check
pendingUnload (under the same lock/queue) — if pendingUnload is set, immediately
unload newEngine and complete with an error/early return; otherwise atomically
set self.engine, call InferenceRouter.registerEngine, and clear isLoading;
ensure all accesses to isLoading, pendingUnload, engine, and register/unregister
happen under the same lock/queue so load and unload are strictly serialized.
---
Nitpick comments:
In
`@libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt`:
- Around line 523-548: The download loop in FlutterOndeviceAiPlugin.kt (the
while(true) reading from input and writing to output using variables like input,
output, bytesRead, lastProgressReport, contentLength, silent, modelId and
reporting via scope and downloadProgressSink) lacks cancellation checks and can
continue after the coroutine/scope is cancelled; modify the loop to periodically
check coroutine cancellation (e.g., call coroutineContext.ensureActive() or
check scope.isActive) inside the loop and break/cleanup promptly when cancelled
so the stream is closed and no further progress events are sent, making sure any
necessary cleanup (closing input/output, sending a final cancelled state to
downloadProgressSink if appropriate) occurs before exit.
In
`@libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart`:
- Around line 111-118: The _openSettings method currently silently does nothing
when canLaunchUrl(uri) is false; update _openSettings to handle the failure case
by providing user feedback (e.g., show a SnackBar or AlertDialog) when launch is
not possible, while keeping the existing iOS-only guard and using the same Uri
and launchUrl call for success; reference the _openSettings function and the
canLaunchUrl/launchUrl calls so you add a UI feedback path that runs when
canLaunchUrl returns false or when launchUrl throws.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (7)
- .claude/guides/12-flutter-ondevice-ai.md
- libraries/flutter_ondevice_ai/android/src/main/kotlin/dev/hyodot/flutter_ondevice_ai/FlutterOndeviceAiPlugin.kt
- libraries/flutter_ondevice_ai/example/ios/LocanaraLlamaBridge/Sources/LlamaCppBridgeEngine.swift
- libraries/flutter_ondevice_ai/example/lib/app_state.dart
- libraries/flutter_ondevice_ai/example/lib/widgets/feature_detail/summarize_demo.dart
- libraries/flutter_ondevice_ai/example/lib/widgets/shared/ai_status_banner.dart
- libraries/flutter_ondevice_ai/example/lib/widgets/shared/model_selection_sheet.dart
Summary
- `flutter_ondevice_ai` Flutter plugin with full iOS, Android, and Web (Chrome Built-in AI) support
- `dart:js_interop` for web, including `promptStreaming`
- CI workflow (`ci-flutter.yml`) for analyze + test on PRs
- Publish workflow (`publish-flutter.yml`) with pub.dev OIDC authentication on `flutter-v*` tags
- Docs for `flutter_ondevice_ai` and `expo-ondevice-ai` framework section

What's Included

- Web implementation via `dart:js_interop`

Test plan

- `cd libraries/flutter_ondevice_ai && flutter test` — all 39 tests pass
- `cd libraries/flutter_ondevice_ai && flutter analyze` — no errors
- `/docs/libraries/flutter`

Summary by CodeRabbit
New Features
Documentation
Bug Fixes
Tests
Chores