Respectlytics
Swift (iOS) RAM-only event queue

How Swift (iOS) analytics works with a RAM-only event queue

Respectlytics's Swift (iOS) SDK holds unsent events in a small in-memory buffer that flushes on a 30-second timer and on app backgrounding. There is no disk-backed queue, no SQLite database, no file in the sandbox. The trade-off is intentional: a small loss of event-delivery fidelity in exchange for zero forensic surface. Below: the queue mechanics, the operational implications, and the FAQ.

Install the Swift (iOS) SDK

```swift
// Package.swift
dependencies: [
    .package(url: "https://github.com/respectlytics/respectlytics-swift.git", from: "3.0.0")
]
// Or via Xcode → File → Add Packages → paste the URL above.
```

The SDK ships only via Swift Package Manager. CocoaPods and Carthage builds are not published — fewer integration paths mean fewer surfaces to keep audited.

Initialize Respectlytics in Swift (iOS)

```swift
import SwiftUI
import Respectlytics

@main
struct MyApp: App {
    init() {
        Respectlytics.configure(appKey: "<YOUR_APP_KEY>")
    }
    var body: some Scene { WindowGroup { ContentView() } }
}
```

Call configure once at app launch — typically in your App struct's init. No Info.plist keys are required: the SDK does not call ATTrackingManager and does not request the IDFA, so NSUserTrackingUsageDescription should NOT be added.

Privacy & implementation notes

The deliberate event-loss-on-force-quit posture is the most common point of friction in adoption: most product / data teams default to "events must be 100% delivered" without questioning the assumption. The follow-up question — "what decision changes if we lose 1%?" — usually produces no real answer for product analytics. Revenue and accounting data, where the answer matters, lives in your billing system, not your product analytics.

On Android, app force-quit via the recent-tasks switcher is more aggressive than on iOS — in many such cases the OS does not deliver an onPause callback. Practical mitigation: the SDK flushes more aggressively when the app returns to the foreground, catching the previous session's tail events. The 1–2% loss figure is after that mitigation.
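On iOS, the foreground re-flush can be pictured as a notification observer that triggers a flush when the app becomes active again. This is an illustration of the internal wiring, not the SDK's actual code; the `ForegroundFlusher` name is made up for this sketch.

```swift
import UIKit

// Illustrative only: roughly how a foreground re-flush hook is wired.
// The SDK does this internally; an integrating app does not need this code.
final class ForegroundFlusher {
    private var token: NSObjectProtocol?

    init(flush: @escaping () -> Void) {
        token = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil,
            queue: .main
        ) { _ in
            flush()   // pick up the previous session's tail events
        }
    }

    deinit {
        if let token { NotificationCenter.default.removeObserver(token) }
    }
}
```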

Apple rejected approximately 3% of apps in 2024 for incorrectly omitting NSUserTrackingUsageDescription when ATT was required by the SDKs they shipped. Respectlytics doesn't trigger ATT. The corollary also holds: do not add the key on Respectlytics's behalf — its presence implies you track across apps, even if your code never calls requestTrackingAuthorization.

Internally the Swift SDK uses Swift Concurrency: events are queued in an actor-isolated buffer (RAM-only), flushed on a 30-second timer and on UIApplication.willResignActiveNotification. Force-quit before flush drops queued events — by design. There is no UserDefaults or file backing.
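The mechanics above can be sketched as an actor-isolated, bounded buffer with ring eviction. This is a sketch under assumptions — the type names, the `Event` shape, and the capacity default are illustrative, not the SDK's actual internals.

```swift
import Foundation

// Illustrative event shape; the real SDK's payload differs.
struct Event {
    let name: String
    let timestamp: Date
}

// RAM-only, actor-isolated buffer: no UserDefaults, no file backing.
actor EventBuffer {
    private var events: [Event] = []
    private let capacity: Int

    init(capacity: Int = 5_000) {
        self.capacity = capacity
    }

    func enqueue(_ event: Event) {
        if events.count == capacity {
            events.removeFirst()   // ring eviction: drop the oldest event
        }
        events.append(event)
    }

    // Drain the buffer for a flush; the caller sends the batch over the network.
    // If the process is killed before drain() is called, queued events are gone.
    func drain() -> [Event] {
        defer { events.removeAll() }
        return events
    }
}
```

In the real SDK, a 30-second timer and a `UIApplication.willResignActiveNotification` observer are what call the drain-and-send path.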

How this compares to other analytics SDKs

| Event queue | Firebase Analytics | Mixpanel | Amplitude | Respectlytics |
| --- | --- | --- | --- | --- |
| Backed by | SQLite on disk | SQLite on disk | SQLite on disk | In-memory ring buffer |
| Default flush cadence | ~1 hour | Configurable (30s default) | 30s | 30s |
| Flushes on backgrounding | Yes | Yes | Yes | Yes |
| Flushes on terminate | Yes (best-effort) | Yes (best-effort) | Yes | No (RAM only) |
| Queue survives crash | Yes | Yes | Yes | No |
| Maximum queued event count | 100k+ | 100k+ | Unbounded | ~5k (ring eviction) |

Frequently asked questions

What's the typical event-loss rate?

In our internal benchmarks against fixture apps, force-quit between event submission and the next flush loses approximately 0.5–2% of events — the lower end on iOS where backgrounding is more predictable, the higher end on Android where users force-quit more aggressively. For aggregate metrics this is invisible; for per-event reconciliation it would be a problem, but per-event reconciliation isn't a use case Respectlytics supports.

Can the queue size grow unboundedly?

No — the in-memory buffer is bounded (default ~5,000 events). When the buffer fills (which we've never observed in production: it would require a sustained outage of our network endpoint plus very high in-app event volume), the oldest event is evicted. The bound is intentional: an unbounded queue would invite memory pressure on low-RAM Android devices.

Does the flush cadence affect billing / quota counts?

No. Quota counts the number of events ingested at our API, not the number of flushes. A single flush carries a batch of events; the batch size is a performance optimisation, not a billable unit.

How does this affect testing analytics in CI / debug?

Identically to production. The 30s cadence applies in debug too, but the SDK exposes a flush() API for tests to force-flush at known points. Use this in integration tests against a staging app key.
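A minimal XCTest sketch of that pattern. The source confirms a flush() API exists, but its exact signature is assumed here (a static method, matching configure); the staging app key is a placeholder.

```swift
import XCTest
import Respectlytics

// Sketch of an integration test against a staging app key.
// flush()'s exact shape is an assumption; substitute your app's
// real event-emitting code where indicated.
final class AnalyticsFlushTests: XCTestCase {
    override func setUp() {
        super.setUp()
        Respectlytics.configure(appKey: "<STAGING_APP_KEY>")
    }

    func testFlushAtKnownPoint() {
        // ... exercise app code that queues events here ...
        Respectlytics.flush()   // force-flush instead of waiting for the 30s timer
        // Assert against your staging backend (or a stubbed URLProtocol) here.
    }
}
```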


Track what matters. Collect nothing you don't.

Five-field event schema, RAM-only event queue, no IDFA, no AAID, no persistent user IDs. Helps developers avoid collecting personal data in the first place.