▸Install the React Native SDK
npm install @respectlytics/react-native
# or
yarn add @respectlytics/react-native
JavaScript-only — no native modules, no auto-linking, no New Architecture migration concerns. Bundle size: ~14KB minified+gzipped. Works in any Expo project (managed or bare) without expo prebuild.
▸Initialize Respectlytics in React Native
// App.tsx (or App.js)
import { useEffect } from 'react';
import Respectlytics from '@respectlytics/react-native';

export default function App() {
  useEffect(() => {
    Respectlytics.configure({ appKey: '<YOUR_APP_KEY>' });
  }, []);
  return <YourApp />;
}
Initialize once in your top-level component. No native config; no Info.plist or AndroidManifest changes. The SDK is Hermes- and JSC-compatible.
✦Privacy & implementation notes
The deliberate event-loss-on-force-quit posture is the most common point of friction in adoption: most product / data teams default to "events must be 100% delivered" without questioning the assumption. The follow-up question — "what decision changes if we lose 1%?" — usually produces no real answer for product analytics. Revenue and accounting data, where the answer matters, lives in your billing system, not your product analytics.
On Android, force-quitting via the recent-tasks switcher is handled more aggressively than on iOS — in many such cases the OS never delivers an onPause callback, so no final flush can run. Practical mitigation: the SDK flushes more aggressively when the app next comes to the foreground, catching the previous session's tail events. The 1–2% loss figure is measured after that mitigation.
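The foreground-flush mitigation can be sketched as follows. This is a self-contained TypeScript model, not the actual SDK internals: the EventQueue class and the AppStateStatus type are stand-ins, and in a real app the state transition would arrive via React Native's AppState listener rather than a direct function call.

```typescript
type AppStateStatus = 'active' | 'background' | 'inactive';

// Minimal stand-in for the SDK's in-memory event queue.
class EventQueue {
  private events: string[] = [];

  record(name: string) {
    this.events.push(name);
  }

  // Drain the queue and return the batch; in the real SDK the
  // batch would be handed to the network layer at this point.
  flush(): string[] {
    const batch = this.events;
    this.events = [];
    return batch;
  }
}

// Flush whenever the app transitions back to the foreground, so the
// previous session's tail events are picked up as early as possible.
function onAppStateChange(queue: EventQueue, next: AppStateStatus): string[] {
  if (next === 'active') {
    return queue.flush();
  }
  return [];
}
```

The design choice here is that the flush is triggered on re-entry to the foreground rather than on termination, because termination callbacks are exactly what a force-quit denies you.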
The React Native SDK is JavaScript-only — no Objective-C/Swift bridging on iOS, no Java/Kotlin bridging on Android. Side effects: no react-native link, no auto-linking, no New Architecture migration concerns, no platform-channel exception surfaces. Trade-off: no access to platform-only metadata (which we don't want to collect anyway).
Works in Expo managed workflow without expo prebuild. No config plugin is required. EAS Build users: nothing to configure. This is the smoothest integration path on RN — most analytics SDKs require ejecting from managed.
⇋How this compares to other analytics SDKs
| Event queue | Firebase Analytics | Mixpanel | Amplitude | Respectlytics |
|---|---|---|---|---|
| Backed by | SQLite on disk | SQLite on disk | SQLite on disk | In-memory ring buffer |
| Default flush cadence | ~1 hour | Configurable (30s default) | 30s | 30s |
| Flushes on backgrounding | Yes | Yes | Yes | Yes |
| Flushes on terminate | Yes (best-effort) | Yes (best-effort) | Yes | No (RAM only) |
| Queue survives crash | Yes | Yes | Yes | No |
| Maximum queued event count | 100k+ | 100k+ | Unbounded | ~5k (ring eviction) |
❓Frequently asked questions
What's the typical event-loss rate?
In our internal benchmarks against fixture apps, force-quit between event submission and the next flush loses approximately 0.5–2% of events — the lower end on iOS where backgrounding is more predictable, the higher end on Android where users force-quit more aggressively. For aggregate metrics this is invisible; for per-event reconciliation it would be a problem, but per-event reconciliation isn't a use case Respectlytics supports.
Can the queue grow without bound?
No — the in-memory buffer is bounded (default ~5,000 events). When the buffer fills (which we've never observed in production: it would require a sustained outage of our network endpoint plus very high in-app event volume), the oldest event is evicted. The bound is intentional: an unbounded queue would invite memory pressure on low-RAM Android devices.
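The eviction behaviour described above is that of a bounded ring: when full, the oldest event makes way for the newest. A minimal TypeScript sketch under that assumption (the class name and the array-backed implementation are illustrative, not the SDK's internals; the ~5,000 default comes from the text):

```typescript
class BoundedEventBuffer<T> {
  private items: T[] = [];

  constructor(private readonly capacity: number = 5000) {}

  push(item: T) {
    if (this.items.length === this.capacity) {
      this.items.shift(); // evict the oldest event; newest data wins
    }
    this.items.push(item);
  }

  size(): number {
    return this.items.length;
  }

  toArray(): T[] {
    return [...this.items];
  }
}
```

Keeping the bound fixed trades a sliver of data completeness for a hard ceiling on memory use, which matters most on low-RAM Android devices.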
Does the flush cadence affect billing / quota counts?
No. Quota counts the number of events ingested at our API, not the number of flushes. A single flush carries a batch of events; the batch size is a performance optimisation, not a billable unit.
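To make the billing model concrete, here is a toy calculation (the event counts are illustrative, not real quota figures): the billed quantity is the sum of events across batches, independent of how many flushes carried them.

```typescript
// Each flush sends one batch; quota counts events, not batches.
const batches: number[] = [120, 87, 203]; // events carried per flush
const flushes = batches.length;           // 3 network calls

// Billed units = total events ingested, regardless of flush count.
const billedEvents = batches.reduce((sum, n) => sum + n, 0);
// 410 events billed whether they arrived in 3 flushes or 1.
```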
How does this affect testing analytics in CI / debug?
Identically to production. The 30s cadence applies in debug too, but the SDK exposes a flush() API for tests to force-flush at known points. Use this in integration tests against a staging app key.
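The force-flush pattern for tests can be sketched like this. To keep the example runnable it uses a deterministic fake in place of the real SDK; the track()/flush() interface is an assumption about the SDK's surface (the text only confirms flush() exists), and in a real integration test you would use Respectlytics with a staging app key instead of the fake.

```typescript
// The client surface this sketch assumes: record events, then
// force a flush that resolves with the number of events sent.
interface AnalyticsClient {
  track(event: string): void;
  flush(): Promise<number>;
}

// Deterministic stand-in for the SDK, so the test pattern runs
// without network access or a 30s timer.
function makeFakeClient(): AnalyticsClient {
  let pending: string[] = [];
  return {
    track: (event) => {
      pending.push(event);
    },
    flush: async () => {
      const sent = pending.length;
      pending = [];
      return sent;
    },
  };
}

// The test pattern: emit events, then force-flush at a known point
// instead of waiting on the flush cadence.
async function runExampleTest(client: AnalyticsClient): Promise<number> {
  client.track('signup_started');
  client.track('signup_completed');
  return client.flush();
}
```

The point of flush() in tests is determinism: assertions run against a known delivery point rather than racing the 30s timer.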