Testing your analytics implementation before release prevents silent data loss that can go unnoticed for weeks. The SDK only has 3 methods—configure(), track(), and flush()—but where and how you call them matters. This 7-step checklist covers what to verify before every release.
🎯 Why Analytics Testing Is Non-Negotiable
You wouldn't ship a feature without testing it. But analytics code? Most teams add tracking calls, glance at a dashboard, and move on. The result: silent data loss that goes unnoticed for weeks.
Unlike a broken button or a crashed screen, broken analytics don't produce visible errors. An event that never fires, a misspelled event name, or a tracking call on the wrong screen—none of these trigger crashes. You only find out when someone asks "why did conversions drop to zero?" and the answer is "they didn't—we stopped measuring them."
The Cost of Broken Analytics
- Lost data is unrecoverable — You can't retroactively capture events that never fired
- Wrong data is worse than no data — Decisions based on incorrect metrics lead you in the wrong direction
- Detection is slow — Broken analytics typically go unnoticed for 2-4 weeks
- Fixes require another release — Unlike server-side bugs, mobile analytics fixes need an app store update
The good news: with a minimal SDK—just 3 methods and no custom properties—there's less surface area for things to go wrong. But the things that can go wrong are exactly the things you need to check.
⚙️ What the SDK Handles Automatically (Don't Test This)
Before diving into the checklist, it helps to understand what you don't need to worry about. The SDK handles these internally—you can't configure them and you don't need to verify them:
- ✓ Session IDs — Generated in RAM on each app launch, rotated every 2 hours, never persisted to disk. You don't see them, and you don't need to.
- ✓ Platform detection — The SDK automatically sets "iOS", "Android", etc. based on the device.
- ✓ Timestamps — ISO 8601 format, generated automatically with each event.
- ✓ Event batching — Events queue and send automatically every 30 seconds or when 10 events accumulate.
- ✓ Offline queuing — Events are persisted on every track() call and sent when connectivity returns.
- ✓ Retry with backoff — Failed requests retry 3 times with exponential backoff (2s, 4s, 8s). Permanent failures (400, 401) are not retried.
- ✓ Background flushing — Events flush automatically when your app enters the background.
Your job is simpler than you think: call configure() once, call track() with the right event name at the right time, and occasionally flush() during testing. Everything else is handled for you. Here's how to get those three calls right.
1️⃣ Configure the SDK in the Right Place
configure() must be called once, early in your app's lifecycle—before any track() call. If you configure too late (or in the wrong place), early events silently drop.
Where to Call configure()
Swift (SwiftUI)
```swift
// ✅ In the @main App struct init()
@main
struct MyApp: App {
    init() {
        Respectlytics.configure(apiKey: "your-api-key")
    }

    var body: some Scene {
        WindowGroup { ContentView() }
    }
}
```
Kotlin (Android)
```kotlin
// ✅ In your Application class onCreate()
class MyApp : Application() {
    override fun onCreate() {
        super.onCreate()
        Respectlytics.configure(this, "your-api-key")
    }
}
```
Flutter
```dart
// ✅ In main() before runApp()
void main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Respectlytics.configure(apiKey: 'your-api-key');
  runApp(const MyApp());
}
```
React Native
```tsx
// ✅ In App.tsx useEffect
useEffect(() => {
  Respectlytics.configure('your-api-key');
}, []);
```
What to Check
- ☐ configure() is called once at app launch, not inside a view or screen
- ☐ You're using the correct API key (development key for testing, production key for release builds)
- ☐ Console shows [Respectlytics] ✓ SDK configured
- ☐ Console shows [Respectlytics] ✓ Session started
Common Configuration Mistakes
- ✗ Calling configure() inside a view controller or screen component — it gets called multiple times or too late
- ✗ Using the production API key in development (pollutes your real analytics data)
- ✗ Wrapping configure() in a try/catch that swallows errors silently
- ✗ Calling track() before configure() — the SDK logs [Respectlytics] ⚠️ SDK not configured and the event is dropped
2️⃣ Place track() Calls in Action Handlers
track() takes a single argument—the event name. No properties, no metadata, no screen parameter. That's it. The hard part isn't the API call—it's putting it in the right place in your code.
Put track() in action handlers
Button tap callbacks, form submission handlers, navigation events, API success/failure callbacks
Don't put track() in view rendering
SwiftUI body property, React Native render(), Flutter build()—these re-execute on every re-render, causing duplicate events
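The re-render trap is easier to see in a runnable sketch. Here a plain function stands in for a component body, the SDK is stubbed so the example is self-contained, and buy_tapped is a hypothetical event name:

```typescript
// Stub for the SDK so the sketch is self-contained.
const fired: string[] = [];
const Respectlytics = { track: (name: string) => { fired.push(name); } };

// ❌ track() in a render/build function: re-executes on every re-render.
function renderProductScreen(): string {
  Respectlytics.track("product_viewed"); // fires again on each re-render
  return "ProductScreen";
}

// ✅ track() in an action handler: fires once per user action.
function onBuyTapped(): void {
  Respectlytics.track("buy_tapped");
}

renderProductScreen();
renderProductScreen(); // a state change re-renders the screen...
onBuyTapped();         // ...but the handler runs only when tapped

console.log(fired); // ["product_viewed", "product_viewed", "buy_tapped"]
```

One visit to the product screen produced two product_viewed events, which is exactly the duplicate pattern to look for in your dashboard.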
The Walkthrough Method
Open your app and physically walk through every action that should fire an event. No shortcuts. No "I'm sure it works." Tap the button, complete the form, navigate the screen.
1. Open your event dictionary (tracking plan) side-by-side with your app
2. For each documented event, perform the exact user action that should trigger it
3. Watch the console for [Respectlytics] log messages confirming each event
4. Verify the event name in the log matches your dictionary exactly
5. Mark each event as ✓ Pass or ✗ Fail
This takes 15-20 minutes for a typical app with 20-30 events. It catches the most common problems: events that never fire, events on the wrong screen, and events that fire twice.
⚠️ Don't skip error paths. If you track checkout_completed, also test what happens when checkout fails. Payment declined, network timeout, validation error—does checkout_failed fire correctly?
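The walkthrough can also be backed by an automated check. A hedged sketch: hide the SDK behind a tiny interface so a test can inject a spy, then assert that both the success and the failure events fire (completeCheckout and its charge callback are hypothetical, not part of the SDK):

```typescript
// Minimal tracker interface so tests can substitute a spy for the SDK.
interface Tracker {
  track(name: string): void;
}

// Hypothetical checkout handler that tracks both outcomes.
function completeCheckout(analytics: Tracker, charge: () => void): void {
  try {
    charge();
    analytics.track("checkout_completed");
  } catch {
    analytics.track("checkout_failed");
  }
}

// Spy that records every event name it receives.
const recorded: string[] = [];
const spy: Tracker = { track: (name) => { recorded.push(name); } };

completeCheckout(spy, () => {});                               // success path
completeCheckout(spy, () => { throw new Error("declined"); }); // failure path
console.log(recorded); // ["checkout_completed", "checkout_failed"]
```

A test like this catches the missing-error-path bug at build time instead of weeks later in a dashboard.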
3️⃣ Validate Event Names Against Your Taxonomy
Since track() only takes an event name—no properties, no metadata—that name carries all the meaning. A single typo creates a completely separate event in your dashboard. purchase_completed and purchase_completd will never be grouped together.
Name Validation Checklist
- ☐ All event names use snake_case (no camelCase, no spaces, no hyphens)
- ☐ Namespace prefixes are consistent (all onboarding events start with onboarding_)
- ☐ Verb tense is consistent across all events (all past tense: _completed, _viewed, _tapped)
- ☐ No dynamic values in event names (no item IDs, prices, or usernames)
- ☐ Event names are under 100 characters (the SDK rejects longer names with a console warning)
- ☐ Event names are not empty (the SDK rejects empty strings with a console warning)
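These rules are mechanical enough to enforce in a test or CI step. A sketch of a pre-flight validator: the non-empty and 100-character limits are SDK behavior as described in this guide, while the snake_case pattern is a team convention, not an SDK rule:

```typescript
// Returns a list of problems with an event name; an empty array means it passes.
function validateEventName(name: string): string[] {
  const problems: string[] = [];
  if (name.length === 0) problems.push("empty name");
  if (name.length > 100) problems.push("over 100 characters");
  if (!/^[a-z][a-z0-9]*(_[a-z0-9]+)*$/.test(name)) problems.push("not snake_case");
  return problems;
}

console.log(validateEventName("purchase_completed")); // []
console.log(validateEventName("purchaseCompleted"));  // ["not snake_case"]
```

Run it over every name in your event dictionary so violations fail the build before they ever reach the dashboard.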
💡 Tip: Define event names as constants in your codebase—an enum in Swift/Kotlin, a constants file in Dart/TypeScript. The compiler catches typos in constant references but not in string literals. This is the single most effective way to prevent naming bugs.
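In TypeScript, that tip might look like the following sketch; the trackEvent wrapper and the stubbed SDK object are illustrative glue, and the event names are examples from this article:

```typescript
// Single source of truth for event names; `as const` keeps the literal types.
const Events = {
  signupStarted: "signup_started",
  signupCompleted: "signup_completed",
  purchaseCompleted: "purchase_completed",
} as const;

type EventName = (typeof Events)[keyof typeof Events];

// Stub for the SDK so the sketch runs alone.
const sent: string[] = [];
const Respectlytics = { track: (name: string) => { sent.push(name); } };

// Wrapper that accepts only known names: a typo at a call site becomes a
// compile error instead of a silently split metric.
function trackEvent(name: EventName): void {
  Respectlytics.track(name);
}

trackEvent(Events.purchaseCompleted);
// trackEvent("purchase_completd"); // ❌ does not compile
console.log(sent); // ["purchase_completed"]
```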
4️⃣ Call flush() and Verify in Your Dashboard
Events batch automatically (every 30 seconds or when 10 events queue up). During testing, you don't want to wait. Call flush() to force-send everything immediately, then check your dashboard.
Testing workflow
```swift
// 1. Trigger some events
Respectlytics.track("signup_started")
Respectlytics.track("signup_completed")

// 2. Force-send immediately
Respectlytics.flush()

// 3. Check your dashboard in ~30 seconds
```
What to Verify in the Dashboard
- ☐ Each event name appears in the event types list
- ☐ Event counts match the number of times you triggered each event (no duplicates, no missing events)
- ☐ The platform field correctly shows iOS, Android, or your test platform
- ☐ The country field is reasonable for your location
5️⃣ Test Offline and Background Scenarios
The SDK queues events automatically when offline and sends them when connectivity returns. You should verify this works for your specific app flow.
- ☐ Airplane mode test — Enable airplane mode, trigger 5-10 events, disable airplane mode. After a short delay, check your dashboard—all events should appear.
- ☐ No duplicate events — After reconnecting, verify events appear exactly once. The SDK's retry logic uses exponential backoff to prevent duplicates.
- ☐ Background to foreground — Background the app, wait a minute, then bring it back. Trigger an event—it should work normally. The SDK flushes queued events when backgrounding.
💡 Good to know: The SDK persists queued events on every track() call. This means events survive app termination—if a user force-kills the app while offline, those events will be sent on the next launch.
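As a mental model only (a toy, not the SDK's actual implementation), the offline behavior can be sketched as a queue that drains only while online:

```typescript
// Toy model: events accumulate while offline and go out in one batch on reconnect.
class ToyEventQueue {
  private queue: string[] = [];
  private batches: string[][] = [];

  constructor(private online: boolean) {}

  track(name: string): void {
    this.queue.push(name); // "persisted" on every track() call
    if (this.online) this.flush();
  }

  setOnline(online: boolean): void {
    this.online = online;
    if (online) this.flush(); // connectivity returns, send what queued up
  }

  flush(): void {
    if (!this.online || this.queue.length === 0) return;
    this.batches.push(this.queue.splice(0)); // each flush sends one batch
  }

  get sentBatches(): string[][] {
    return this.batches;
  }
}

const q = new ToyEventQueue(false); // airplane mode on
q.track("screen_viewed");
q.track("button_tapped");
console.log(q.sentBatches.length);  // 0, nothing sent while offline
q.setOnline(true);                  // airplane mode off
console.log(q.sentBatches);         // [["screen_viewed", "button_tapped"]]
```

This is the behavior your airplane-mode test should observe from the outside: zero events during the outage, then all of them at once, exactly once.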
6️⃣ Run a Full User Journey End-to-End
Individual events might pass, but do they tell a coherent story? Walk through a complete user flow and verify the sequence appears correctly as a funnel in your dashboard.
Example Journey: E-Commerce App
1. app_launched
2. home_screen_viewed
3. search_performed
4. product_viewed
5. cart_item_added
6. checkout_started
7. checkout_payment_selected
8. purchase_completed
- ☐ All events in the funnel fire in the correct order
- ☐ No gaps in the sequence — If step 5 fires but step 4 doesn't, your funnel analysis will be wrong
- ☐ After calling flush(), the entire sequence appears in the dashboard
💡 Tip: Also run the abandonment path. Start a checkout flow but don't complete it. Does the drop-off point appear correctly in your analytics? This validates your drop-off detection works for non-converting sessions.
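If you can export or log the recorded event names, funnel order can be checked mechanically too. A sketch of a subsequence test (the funnelInOrder helper is hypothetical):

```typescript
// True if the funnel steps appear in `recorded` in order; unrelated events
// may be interleaved between steps.
function funnelInOrder(recorded: string[], funnel: string[]): boolean {
  let next = 0;
  for (const name of recorded) {
    if (name === funnel[next]) next += 1;
  }
  return next === funnel.length;
}

const funnel = ["checkout_started", "checkout_payment_selected", "purchase_completed"];

console.log(funnelInOrder(
  ["app_launched", "checkout_started", "checkout_payment_selected", "purchase_completed"],
  funnel,
)); // true

console.log(funnelInOrder(
  ["checkout_started", "purchase_completed"], // gap: the payment step never fired
  funnel,
)); // false
```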
7️⃣ Verify Cross-Platform Consistency
If you ship on multiple platforms, the event names must be identical across all of them. The SDKs for Swift, Kotlin, Flutter, and React Native all use the same track("event_name") API—but developers on different platforms can easily introduce naming drift.
- ☐ Same event names on every platform (no screen_viewed on iOS and screenViewed on Android)
- ☐ Same event coverage — Every event tracked on iOS is also tracked on Android (and vice versa)
- ☐ Platform field is correct in the dashboard — Events from iOS show "iOS", events from Android show "Android"
- ☐ Platform-specific events are intentional and documented — If an event only exists on one platform, it should be in the event dictionary as such
⚠️ Cross-platform naming drift is one of the most common analytics bugs. It happens gradually—one developer uses screen_viewed on iOS while another uses page_viewed on Android. Maintain a shared event dictionary that both platform teams reference. Better yet, use a shared constants file if your architecture allows it (Flutter and React Native get this for free).
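If each platform team exports its event-name list, drift becomes a mechanical diff. A hypothetical check:

```typescript
// Returns event names that exist on only one of the two platforms.
function namingDrift(iosEvents: string[], androidEvents: string[]): string[] {
  const ios = new Set(iosEvents);
  const android = new Set(androidEvents);
  const all = new Set([...iosEvents, ...androidEvents]);
  return Array.from(all).filter((name) => !ios.has(name) || !android.has(name));
}

console.log(namingDrift(
  ["screen_viewed", "purchase_completed"],
  ["page_viewed", "purchase_completed"],
)); // ["screen_viewed", "page_viewed"]
```

A non-empty result means either intentional platform-specific events (which should be in the event dictionary) or drift that needs fixing.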
🐛 Common Analytics Bugs and How to Catch Them
These are the bugs that come up most often. All of them are preventable with the checklist above.
1. The Silent Typo
purchase_completd instead of purchase_completed
Fix: Define event names as constants or an enum. The compiler catches typos in constant references but not in string literals.
2. The Double Fire
Events fire twice on the same user action—common in React Native (re-renders) and SwiftUI (view body re-evaluation).
Fix: Place track() in action handlers (button tap callbacks), never in view rendering methods.
3. The Missing Error Path
checkout_completed is tracked, but checkout_failed is not. Your conversion rate looks artificially high.
Fix: For every success event, ask: "What's the failure equivalent?" Track both.
4. The Wrong Screen
A settings_screen_viewed event fires when the user is actually on the profile screen, usually the result of a copy-pasted tracking call.
Fix: During manual walkthrough, verify each event fires on the correct screen—not just that it fires.
5. The track()-Before-configure()
Early events silently drop because track() was called before configure() finished.
Fix: Check your console for [Respectlytics] ⚠️ SDK not configured. Call configure(apiKey:) first. This warning means events are being dropped.
🔧 Reading Console Logs: Your Debugging Tool
The SDK logs messages to the console with a [Respectlytics] prefix. These messages are your primary debugging tool—they tell you what the SDK is doing without needing any external tools.
Console Messages Reference
✅ Success Messages (Everything is working)
[Respectlytics] ✓ SDK configured
[Respectlytics] ✓ Session started (rotates every 2 hours)
⚠️ Warning Messages (Something is wrong)
[Respectlytics] ⚠️ SDK not configured. Call configure(apiKey:) first.
[Respectlytics] ⚠️ Event name cannot be empty
[Respectlytics] ⚠️ Event name too long (max 100 characters)
❌ Error Messages (Network issues)
[Respectlytics] Failed to send events, will retry later
If you see warning messages, fix the root cause. If you see the "failed to send" error, the SDK will retry automatically with exponential backoff—but check that your API key is correct and your device has internet connectivity.
✅ The Complete Pre-Release Checklist
Here's the full checklist in one place. Run through this before every release that includes analytics changes:
Pre-Release Analytics Checklist
SDK Configuration
- ☐ configure() called at app launch (not in a view)
- ☐ Correct API key (dev key for testing, prod key for release)
- ☐ Console shows ✓ SDK configured and ✓ Session started
- ☐ No ⚠️ SDK not configured warnings anywhere
Event Coverage
- ☐ Every documented event fires when triggered manually
- ☐ Events are in action handlers, not view rendering methods
- ☐ Error/failure paths are tracked (not just success paths)
- ☐ No events fire twice on a single action (check for re-render issues)
Event Names
- ☐ All names match event dictionary exactly
- ☐ Consistent snake_case formatting
- ☐ No dynamic values in names
- ☐ Names defined as constants (not inline strings)
Dashboard Verification
- ☐ Called flush() and events appear in dashboard
- ☐ Event counts match expected numbers
- ☐ Platform and country fields are correct
Offline & Edge Cases
- ☐ Events queued offline appear after reconnecting
- ☐ No duplicates after reconnect
User Journey & Cross-Platform
- ☐ Full funnel journey has no gaps
- ☐ Same event names on all platforms
- ☐ Same event coverage on all platforms
💡 Why This Checklist Is Short
With most analytics SDKs, your testing surface is huge: custom properties, user attributes, screen tracking, session configuration, consent management, and more. Each setting is another thing that can go wrong.
With a minimal SDK—just configure(), track(), and flush()—the surface area shrinks dramatically. No custom properties means no property validation. No user profiles means no identity management. No consent flags means no consent logic to test.
This is the Return of Avoidance (ROA) approach in practice: by avoiding unnecessary data collection, you don't just protect privacy—you reduce the number of things that can break.
❓ Frequently Asked Questions
How do I test analytics events before releasing my app?
Use a pre-release checklist: verify configure() runs at app launch (check for the ✓ SDK configured console message), trigger every event manually while watching the console, validate event names match your dictionary, call flush() and check your dashboard, and run full user journeys on each platform.
What are common analytics implementation bugs?
The most common bugs are: misspelled event names (purchase_completd vs purchase_completed), events firing on the wrong screen, duplicate events from view re-renders in SwiftUI or React Native, missing events in error paths, and calling track() before configure().
How do I debug analytics events on mobile?
Check your console for [Respectlytics] log messages—the SDK logs warnings for issues like empty event names, names over 100 characters, and calling track() before configure(). Call flush() to force-send events, then check your dashboard. For deeper debugging, use Charles Proxy or mitmproxy to inspect HTTP requests.
Should analytics testing be automated or manual?
Both. Manual testing catches issues like events firing on the wrong screen. Automated unit tests (mocking the SDK and asserting track() was called with the right name) catch regressions. A pre-release manual walkthrough is always recommended.
What does the SDK handle automatically?
Session IDs (generated in RAM, rotated every 2 hours, reset on app restart), platform detection, timestamps, event batching (every 30 seconds or 10 events), offline queuing with persistent storage, retry with exponential backoff, and background flushing. You only need to call configure() once and track() for each event.
Disclaimer:
This checklist provides general best practices for testing analytics implementations. The specific steps may vary depending on your app architecture and testing infrastructure. Adapt the checklist to fit your team's workflow and release process.
Related Resources
- Event Naming Best Practices for Mobile Analytics — Design a consistent event taxonomy before you test it
- The Analytics Events Every Mobile App Should Track — Know which events to implement before validating them
- Why We Killed Custom Event Properties — Why constraints make your analytics safer and testing simpler
- Session IDs Are Not User IDs — Understand how session-based analytics works
- SDK Documentation — Integration guides for Swift, Kotlin, Flutter, React Native