A collection of on-device AI primitives for React Native with first-class Vercel AI SDK support. Run AI models directly on users' devices for privacy-preserving, low-latency inference without server costs.
- Instant AI - Use built-in system models immediately without downloads
- Privacy-first - All processing happens on-device; data stays local
- Vercel AI SDK compatible - Drop-in replacement with familiar APIs
- Complete toolkit - Text generation, embeddings, transcription, speech synthesis
Native integration with Apple's on-device AI capabilities:
- Text Generation - Apple Foundation Models for chat and completion
- Embeddings - NLContextualEmbedding for 512-dimensional semantic vectors
- Transcription - SpeechAnalyzer for fast, accurate speech-to-text
- Speech Synthesis - AVSpeechSynthesizer for natural text-to-speech with system voices
```sh
npm install @react-native-ai/apple
```

No additional linking is needed; the package is autolinked and works immediately on iOS devices.
```typescript
import { apple } from '@react-native-ai/apple'
import {
  generateText,
  embed,
  experimental_transcribe as transcribe,
  experimental_generateSpeech as speech
} from 'ai'

// Text generation with Apple Intelligence
const { text } = await generateText({
  model: apple(),
  prompt: 'Explain quantum computing'
})

// Generate embeddings
const { embedding } = await embed({
  model: apple.textEmbeddingModel(),
  value: 'Hello world'
})

// Transcribe audio
const { text: transcript } = await transcribe({
  model: apple.transcriptionModel(),
  audio: audioBuffer
})

// Text-to-speech
const { audio } = await speech({
  model: apple.speechModel(),
  text: 'Hello from Apple!'
})
```
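Once you have embeddings, a common next step is comparing them. Below is a minimal cosine similarity helper (an illustrative sketch, not part of the package) that works with the 512-dimensional vectors returned by `apple.textEmbeddingModel()`:

```typescript
// Cosine similarity between two embedding vectors: 1 means identical
// direction, 0 means orthogonal (unrelated), -1 means opposite.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

console.log(cosineSimilarity([1, 0], [1, 0])) // 1
console.log(cosineSimilarity([1, 0], [0, 1])) // 0
```

This lets you rank documents by semantic similarity to a query embedding entirely on-device.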
| Feature | iOS Version | Additional Requirements |
|---|---|---|
| Text Generation | iOS 26+ | Apple Intelligence device |
| Embeddings | iOS 17+ | - |
| Transcription | iOS 26+ | - |
| Speech Synthesis | iOS 13+ | iOS 17+ for Personal Voice |
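Because each feature has a different iOS floor, you may want to gate UI on the running OS version. A minimal sketch based on the table above (the `MIN_IOS` map and `isFeatureSupported` helper are hypothetical; in an app you would pass in `Platform.Version` from `react-native`):

```typescript
// Minimum iOS major versions per feature, per the compatibility table.
const MIN_IOS = {
  textGeneration: 26,
  embeddings: 17,
  transcription: 26,
  speechSynthesis: 13,
} as const

function isFeatureSupported(
  feature: keyof typeof MIN_IOS,
  iosMajorVersion: number
): boolean {
  return iosMajorVersion >= MIN_IOS[feature]
}

console.log(isFeatureSupported('embeddings', 18)) // true
console.log(isFeatureSupported('textGeneration', 18)) // false
```

Note that text generation additionally requires an Apple Intelligence-capable device, which a version check alone cannot detect.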
See the Apple documentation for detailed setup and usage guides.
Run popular open-source LLMs directly on-device using MLC's optimized runtime.
```sh
npm install @react-native-ai/mlc
```

Requires the "Increased Memory Limit" capability in Xcode. See the getting started guide for setup instructions.
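Enabling that capability in Xcode adds an entitlement to your app's `.entitlements` file along these lines (shown for orientation only; follow the getting started guide for the authoritative steps):

```xml
<key>com.apple.developer.kernel.increased-memory-limit</key>
<true/>
```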
```typescript
import { mlc } from '@react-native-ai/mlc'
import { generateText } from 'ai'

// Create a model instance
const model = mlc.languageModel('Llama-3.2-3B-Instruct')

// Download and prepare the model (one-time setup)
await model.download()
await model.prepare()

// Generate a response with Llama via the MLC engine
const { text } = await generateText({
  model,
  prompt: 'Explain quantum computing'
})
```
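Model downloads are multi-gigabyte and can fail on flaky networks, so it is worth wrapping the one-time setup in a retry. A generic sketch (`withRetry` is a hypothetical helper, not part of `@react-native-ai/mlc`):

```typescript
// Retry an async operation a few times before giving up
// (hypothetical helper; not part of the package).
async function withRetry<T>(
  fn: () => Promise<T>,
  attempts = 3
): Promise<T> {
  let lastError: unknown
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn()
    } catch (e) {
      lastError = e
    }
  }
  throw lastError
}

// Usage with the MLC example above:
// await withRetry(() => model.download())
// await model.prepare()
```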
| Model ID | Size |
|---|---|
| Llama-3.2-3B-Instruct | ~2GB |
| Phi-3-mini-4k-instruct | ~2.5GB |
| Mistral-7B-Instruct | ~4.5GB |
| Qwen2.5-1.5B-Instruct | ~1GB |
> **Note**
> MLC requires iOS devices with sufficient memory (1-8GB depending on the model). The prebuilt runtime supports the models listed above; for other models or custom configurations, you'll need to recompile the MLC runtime from source.
Support for Google's on-device models is planned for future releases.
Comprehensive guides and API references are available at react-native-ai.dev.
Read the contribution guidelines before contributing.
react-native-ai is an open source project and will always remain free to use. If you think it's cool, please give it a star.
Callstack is a group of React and React Native geeks. Contact us at [email protected] if you need any help with these projects or just want to say hi!
Made with create-react-native-library