High-performance TensorFlow Lite library for React Native, powered by Nitro Modules.
Note: This is an unofficial Nitro Module migration of react-native-fast-tflite, fully optimized for the New Architecture, Bridgeless mode, and Vision Camera frame processors.
- ✅ HybridObject-based — No `install()` or global JSI functions
- ✅ Works from any frame processor / worklet — Callable from any thread or runtime (see the sketch after this list)
- ✅ Bridge / Bridgeless agnostic — Works on both architectures automatically
- ✅ Same API — Drop-in replacement for `react-native-fast-tflite`
- ✅ Model caching — Identical calls return the same model with zero re-allocation
- ✅ Progress callbacks — Track large model downloads in real time
- ✅ Dynamic input shapes — Resize input tensors at runtime
- ✅ Metal GPU delegate (iOS) — Hardware-accelerated inference via Apple Metal
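Because each model is a Nitro HybridObject, it can be called directly inside a VisionCamera frame processor. A minimal sketch, assuming react-native-vision-camera plus your own frame-to-tensor step (the `resizeToInput` helper here is hypothetical; a plugin such as vision-camera-resize-plugin can fill that role):

```tsx
import { useFrameProcessor, type Frame } from 'react-native-vision-camera'
import { useTensorflowModel } from 'react-native-nitro-tflite'

// Hypothetical helper: convert a camera frame into a Float32Array
// matching the model's input shape (e.g. via vision-camera-resize-plugin)
declare function resizeToInput(frame: Frame): Float32Array

export function useObjectDetection() {
  const plugin = useTensorflowModel(require('./model.tflite'))
  const model = plugin.state === 'loaded' ? plugin.model : undefined

  return useFrameProcessor(
    (frame) => {
      'worklet'
      if (model == null) return
      // runSync is callable from the worklet runtime (no bridge hop)
      const outputs = model.runSync([resizeToInput(frame)])
      console.log('first output tensor:', outputs[0])
    },
    [model]
  )
}
```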
```sh
npm install react-native-nitro-tflite react-native-nitro-modules
```

iOS:

```sh
cd ios && pod install
```

Android: No additional steps needed — TFLite libraries are downloaded automatically via Gradle.
```tsx
import { View } from 'react-native'
import { useTensorflowModel } from 'react-native-nitro-tflite'

function App() {
  const model = useTensorflowModel(require('./model.tflite'))

  if (model.state === 'loaded') {
    // Example input: the size must match your model's input tensor
    const inputFloat32Array = new Float32Array(1 * 224 * 224 * 3)
    const output = model.model.runSync([inputFloat32Array])
    console.log('Output:', output)
  }
  if (model.state === 'error') {
    console.error('Failed to load:', model.error)
  }
  return <View />
}
```

Or load a model imperatively:

```ts
import { loadTensorflowModel } from 'react-native-nitro-tflite'

const model = await loadTensorflowModel(require('./model.tflite'))
const output = model.runSync([inputData])
```

Models can also be loaded from a remote URL:

```ts
const model = await loadTensorflowModel({ url: 'https://example.com/model.tflite' })
```

Track download progress for large models (useful for remote URLs):
```ts
import { loadTensorflowModel } from 'react-native-nitro-tflite'

const model = await loadTensorflowModel(
  { url: 'https://example.com/large_model.tflite' },
  'default',
  (progress) => {
    if (progress === -1) {
      console.log('Downloading… (size unknown)')
    } else {
      console.log(`Downloading: ${Math.round(progress * 100)}%`)
    }
  }
)
```
`progress` is in `[0, 1]`. The callback receives `-1` when the server does not send a `Content-Length` header.
Models are automatically cached in native memory, keyed by URL + delegate. Calling `loadTensorflowModel` twice with the same arguments returns the same model instantly:
```ts
const m1 = await loadTensorflowModel(require('./model.tflite'))
const m2 = await loadTensorflowModel(require('./model.tflite')) // cache hit — instant
// m1 and m2 share the same underlying TfLiteInterpreter
```
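Since the cache key includes the delegate, loading the same file with a different delegate should create a separate entry rather than a cache hit. A small sketch under that assumption:

```ts
const cpu = await loadTensorflowModel(require('./model.tflite'), 'default')
const gpu = await loadTensorflowModel(require('./model.tflite'), 'core-ml')
// Different delegate → different cache key → distinct interpreters
```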
To eagerly free all cached models (e.g. on screen unmount):

```ts
import { clearTensorflowModelCache } from 'react-native-nitro-tflite'

clearTensorflowModelCache()
```
For models that support dynamic tensor shapes, call `reshapeInput` before inference:

```ts
const model = await loadTensorflowModel(require('./dynamic_model.tflite'))

// Switch to 640×640 input at runtime
model.reshapeInput(0, [1, 640, 640, 3])
const output = model.runSync([new Float32Array(1 * 640 * 640 * 3)])
```

After `reshapeInput`, both input and output tensors are re-allocated automatically.
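To confirm the new geometry, you can inspect the tensor metadata afterwards. A sketch, assuming the `inputs` / `outputs` descriptors expose a `shape` field as in react-native-fast-tflite:

```ts
model.reshapeInput(0, [1, 320, 320, 3])

// Assumed: tensor descriptors carry a `shape` field (as in
// react-native-fast-tflite); output dims are re-derived by TFLite
console.log(model.inputs[0].shape)  // [1, 320, 320, 3]
console.log(model.outputs[0].shape)
```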
| Delegate | Platform | Description | Setup |
|---|---|---|---|
| `default` | Both | CPU inference (always available) | — |
| `core-ml` | iOS | Apple CoreML acceleration | `$EnableCoreMLDelegate = true` in Podfile |
| `metal` | iOS | Metal GPU acceleration | `$EnableMetalDelegate = true` in Podfile |
| `nnapi` | Android | Android Neural Networks API | — |
| `android-gpu` | Android | Android GPU delegate | — |
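To select a delegate, pass its name as the second argument, matching the `loadTensorflowModel(source, delegate, onProgress?)` signature shown in the flow diagram below:

```ts
import { loadTensorflowModel } from 'react-native-nitro-tflite'

// CoreML-accelerated inference on iOS (requires the Podfile flag below)
const model = await loadTensorflowModel(require('./model.tflite'), 'core-ml')
```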
CoreML:

```ruby
# ios/Podfile
$EnableCoreMLDelegate = true
use_frameworks! ...
```

Metal:

```ruby
# ios/Podfile
$EnableMetalDelegate = true
use_frameworks! ...
```

Then re-run `pod install`.
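To check which delegate a model actually ended up with, you can read its `delegate` property (exposed by `HybridTfliteModel`, per the architecture section below; semantics assumed to mirror react-native-fast-tflite):

```ts
const model = await loadTensorflowModel(require('./model.tflite'), 'metal')

// Assumed: `delegate` reports the delegate the interpreter was created with
console.log(model.delegate) // e.g. 'metal'
```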
Add `tflite` as an asset extension in `metro.config.js`:

```js
const { getDefaultConfig } = require('@react-native/metro-config')

const config = getDefaultConfig(__dirname)
config.resolver.assetExts.push('tflite')

module.exports = config
```

This library uses Nitro Modules with a manual C++ HybridObject implementation:
- `HybridTfliteModelFactory` — Singleton factory; loads models on a background thread, manages the cache, and handles delegate selection
- `HybridTfliteModel` — Wraps `TfLiteInterpreter`; exposes `run` / `runSync` / `reshapeInput` / `inputs` / `outputs` / `delegate` (see the sketch below)
- `TfliteModelHostObject` — A `jsi::HostObject` wrapper that makes the model usable across worklet runtimes (VisionCamera frame processors)
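For example, outside a worklet you can prefer the async `run` listed above over `runSync` to keep the JS thread free. A sketch, assuming `run` mirrors `runSync` but resolves a Promise, as in react-native-fast-tflite:

```ts
const model = await loadTensorflowModel(require('./model.tflite'))

// Async variant: inference runs off the JS thread and resolves with
// the same output arrays runSync would return (assumed semantics)
const outputs = await model.run([new Float32Array(1 * 224 * 224 * 3)])
```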
Model load flow:

```
JS:  loadTensorflowModel(source, delegate, onProgress?)
       ↓
     resolveSourceUri() → URI string
       ↓
     factory.loadModel(uri, delegate, onProgress)
       ↓ [cache hit → resolve immediately]
       ↓ [cache miss → background thread]
C++: HybridTfliteModelFactory::loadModelRaw()
       ↓
     fetchURL(uri, onProgress) ← platform-specific (iOS: NSURLSession, Android: OkHttp)
       ↓
     TfLiteModelCreate() + TfLiteInterpreterCreate()
       ↓
     HybridTfliteModel → TfliteModelHostObject → JS Promise resolved
       ↓
     _modelCache[key] = weak_ptr<HybridTfliteModel>
```
MIT