Respectlytics
React Native RAM-only event queue

How React Native analytics works with a RAM-only event queue

Respectlytics's React Native SDK holds unsent events in a small in-memory buffer that flushes on a 30-second timer and on app backgrounding. There is no disk-backed queue, no SQLite database, no file in the sandbox. The trade-off is intentional: aggregate analytics fidelity in exchange for zero forensic surface. Below: the queue mechanics, the operational implications, and the FAQ.
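The mechanics described above can be sketched in a few lines. This is an illustrative model, not the SDK's actual source: the names `EventQueue`, `enqueue`, and the `send` callback are assumptions made for the example.

```typescript
// Minimal sketch of a RAM-only event queue with a periodic flush timer.
// Illustrative only — not the SDK's real internals.
type AnalyticsEvent = { name: string; ts: number };

class EventQueue {
  private buffer: AnalyticsEvent[] = [];

  constructor(
    private send: (batch: AnalyticsEvent[]) => void,
    private flushIntervalMs = 30_000, // 30-second cadence, per the docs
  ) {}

  enqueue(event: AnalyticsEvent): void {
    this.buffer.push(event); // RAM only: nothing is ever written to disk
  }

  // Called by the 30s timer and on app backgrounding.
  flush(): void {
    if (this.buffer.length === 0) return;
    const batch = this.buffer;
    this.buffer = []; // drain first; a force-quit here loses only this batch
    this.send(batch);
  }

  start(): ReturnType<typeof setInterval> {
    return setInterval(() => this.flush(), this.flushIntervalMs);
  }
}
```

A force-quit between `enqueue` and the next `flush` loses whatever is in `buffer` — which is exactly the bounded-loss trade-off the rest of this page discusses.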

Install the React Native SDK

```bash
npm install @respectlytics/react-native
# or
yarn add @respectlytics/react-native
```

JavaScript-only — no native modules, no auto-linking, no New Architecture migration concerns. Bundle size: ~14KB minified+gzipped. Works in any Expo project (managed or bare) without expo prebuild.

Initialize Respectlytics in React Native

```tsx
// App.tsx (or App.js)
import { useEffect } from 'react';
import Respectlytics from '@respectlytics/react-native';

export default function App() {
  useEffect(() => {
    Respectlytics.configure({ appKey: '<YOUR_APP_KEY>' });
  }, []);
  return <YourApp />;
}
```

Initialize once in your top-level component. No native config; no Info.plist or AndroidManifest changes. The SDK is Hermes- and JSC-compatible.

Privacy & implementation notes

The deliberate event-loss-on-force-quit posture is the most common point of friction in adoption: most product and data teams default to "events must be 100% delivered" without questioning the assumption. The follow-up question — "what decision changes if we lose 1%?" — usually has no real answer for product analytics. Revenue and accounting data, where the answer does matter, lives in your billing system, not your product analytics.

On Android, app force-quit via the recent-tasks switcher is more aggressive than on iOS — in many such cases the OS never delivers an onPause callback. Practical mitigation: the SDK flushes more aggressively when the app returns to the foreground, catching the previous session's tail events. The 1–2% loss figure is after that mitigation.
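The foreground-flush mitigation can be sketched as follows. In the real SDK this logic would hang off React Native's `AppState` listener; here the lifecycle is modeled as plain function calls so the example is self-contained, and `ForegroundFlusher` is a hypothetical name.

```typescript
// Sketch of the foreground-flush mitigation (illustrative, not SDK source).
type AppLifecycleState = 'active' | 'background';

class ForegroundFlusher {
  private state: AppLifecycleState = 'active';

  constructor(private flush: () => void) {}

  onAppStateChange(next: AppLifecycleState): void {
    if (next === 'background') {
      // Best effort: Android may kill the process before this ever fires.
      this.flush();
    } else if (next === 'active' && this.state === 'background') {
      // Back in the foreground: flush immediately to catch tail events
      // that a force-quit would otherwise have stranded in RAM.
      this.flush();
    }
    this.state = next;
  }
}
```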

The React Native SDK is JavaScript-only — no Objective-C/Swift bridging on iOS, no Java/Kotlin bridging on Android. Side effects: no react-native link, no auto-linking, no New Architecture migration concerns, no platform-channel exception surfaces. Trade-off: no access to platform-only metadata (which we don't want to collect anyway).

Works in Expo managed workflow without expo prebuild. No config plugin is required. EAS Build users: nothing to configure. This is the smoothest integration path on RN — most analytics SDKs require ejecting from managed.

How this compares to other analytics SDKs

| Event queue | Firebase Analytics | Mixpanel | Amplitude | Respectlytics |
| --- | --- | --- | --- | --- |
| Backed by | SQLite on disk | SQLite on disk | SQLite on disk | In-memory ring buffer |
| Default flush cadence | ~1 hour | Configurable (30s default) | 30s | 30s |
| Flushes on backgrounding | Yes | Yes | Yes | Yes |
| Flushes on terminate | Yes (best-effort) | Yes (best-effort) | Yes | No (RAM only) |
| Queue survives crash | Yes | Yes | Yes | No |
| Maximum queued event count | 100k+ | 100k+ | Unbounded | ~5k (ring eviction) |

Frequently asked questions

What's the typical event-loss rate?

In our internal benchmarks against fixture apps, force-quit between event submission and the next flush loses approximately 0.5–2% of events — the lower end on iOS where backgrounding is more predictable, the higher end on Android where users force-quit more aggressively. For aggregate metrics this is invisible; for per-event reconciliation it would be a problem, but per-event reconciliation isn't a use case Respectlytics supports.

Can the queue grow without bound?

No — the in-memory buffer is bounded (default ~5,000 events). When the buffer fills (which we've never observed in production: it would require a sustained outage of our network endpoint plus very high in-app event volume), the oldest event is evicted. The bound is intentional: an unbounded queue would invite memory pressure on low-RAM Android devices.
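The oldest-first eviction behavior can be demonstrated with a small stand-in. `BoundedBuffer` is an illustrative name, and the demo uses a tiny capacity; the SDK's actual default is ~5,000 events.

```typescript
// Bounded buffer with oldest-first eviction, as described above.
// Illustrative only — not the SDK's real implementation.
class BoundedBuffer<T> {
  private items: T[] = [];

  constructor(private capacity: number) {}

  // Returns the evicted item, if the buffer was already full.
  push(item: T): T | undefined {
    let evicted: T | undefined;
    if (this.items.length >= this.capacity) {
      evicted = this.items.shift(); // drop the oldest event
    }
    this.items.push(item);
    return evicted;
  }

  size(): number {
    return this.items.length;
  }
}
```

With capacity 3, pushing four events evicts the first: newest data is always preferred over oldest, and memory stays bounded on low-RAM devices.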

Does the flush cadence affect billing / quota counts?

No. Quota counts the number of events ingested at our API, not the number of flushes. A single flush carries a batch of events; the batch size is a performance optimisation, not a billable unit.

How does this affect testing analytics in CI / debug?

Identically to production. The 30s cadence applies in debug too, but the SDK exposes a flush() API for tests to force-flush at known points. Use this in integration tests against a staging app key.
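A deterministic test pattern built on the documented flush() API might look like the following. The `track` method name and the `FakeAnalytics` stand-in are assumptions made so the example is runnable without the real SDK or a network.

```typescript
// Pattern: force-flush at a known point instead of waiting out the 30s timer.
// FakeAnalytics is a stand-in for the SDK; `track` is an assumed method name.
class FakeAnalytics {
  private queue: string[] = [];
  public sentBatches: string[][] = [];

  track(event: string): void {
    this.queue.push(event);
  }

  // Mirrors the documented flush() API: drains the queue at a known point.
  flush(): void {
    if (this.queue.length === 0) return;
    this.sentBatches.push(this.queue);
    this.queue = [];
  }
}

// Shape of an integration test: exercise the app, then flush deterministically
// before asserting. Against the real SDK you would use a staging app key and
// assert on the staging backend instead of a local array.
const analytics = new FakeAnalytics();
analytics.track('checkout_started');
analytics.track('checkout_completed');
analytics.flush(); // no dependence on the 30s timer
```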

Related guides

Track what matters. Collect nothing you don't.

Five-field event schema, RAM-only event queue, no IDFA, no AAID, no persistent user IDs. Helps developers avoid collecting personal data in the first place.