TL;DR. Analytics is a contract between product, data, and engineering. Treat it like one: keep an event schema in the repo, validate every commit against it, snapshot the events emitted by golden user journeys, and gate the merge on a staging smoke test. Your dashboards stop silently breaking.
Why Analytics Belongs in CI/CD
Analytics breakage has a special property that makes it the worst kind of bug: it is silent. The build is green, the app launches, the screen renders, but the funnel chart on the marketing dashboard is flat for the last three days. Nobody notices until someone asks a question that depends on the data, usually after a release where the question matters most.
Production monitoring catches a fraction of this. The rest of it slips through because there is no test to fail. The fix is the same as for any other contract: encode it, version it, and check it at every stage of the pipeline.
Most of this post applies to any analytics SDK. The last section covers what changes when your SDK has a strict server-side schema like Respectlytics: it gets considerably easier.
Step 1: Write the Event Contract
The first artifact you need is a single source of truth for what events exist. Most teams put it in a JSON or YAML file checked in next to the code. It does not have to be fancy; it has to be canonical.
events.yaml example
```yaml
events:
  - name: app_opened
    description: First event of every session.
    required_for: [funnel.activation]
  - name: signup_started
    description: User taps the signup button.
    required_for: [funnel.activation]
  - name: signup_completed
    description: Server confirms account creation.
    required_for: [funnel.activation, kpi.activation_rate]
  - name: checkout_completed
    description: Payment succeeds.
    required_for: [funnel.purchase, kpi.conversion]
```
Pair the schema with a tiny linter that reads the contract and grep-checks every `analytics.track(...)` call site against it. If a developer adds a call with an unknown event name, the lint step fails with a clear message and a link to the contract.
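Here is a minimal sketch of such a linter as a plain Swift script, so it can live in the same repo and run in the lint job. It assumes the events.yaml layout above, string-literal event names at the call site, and a `Sources/` directory; the paths and regexes are placeholders to adapt.

```swift
import Foundation

// Contract linter: pulls the event-name allowlist out of events.yaml with a
// regex, then scans Swift sources for string-literal track(...) calls whose
// names are not in the contract. try! is fine here: any failure should fail CI.

let contract = try! String(contentsOfFile: "events.yaml", encoding: .utf8)
let namePattern = try! NSRegularExpression(pattern: #"-\s*name:\s*(\w+)"#)
let allowed = Set(
    namePattern.matches(in: contract, range: NSRange(contract.startIndex..., in: contract))
        .compactMap { Range($0.range(at: 1), in: contract) }
        .map { String(contract[$0]) }
)

let callPattern = try! NSRegularExpression(pattern: #"analytics\.track\("([^"]+)"\)"#)
var failed = false

let enumerator = FileManager.default.enumerator(atPath: "Sources")
while let path = enumerator?.nextObject() as? String {
    guard path.hasSuffix(".swift") else { continue }
    let source = try! String(contentsOfFile: "Sources/\(path)", encoding: .utf8)
    for match in callPattern.matches(in: source, range: NSRange(source.startIndex..., in: source)) {
        guard let range = Range(match.range(at: 1), in: source) else { continue }
        let event = String(source[range])
        if !allowed.contains(event) {
            print("\(path): unknown event '\(event)' -- not in events.yaml")
            failed = true
        }
    }
}
exit(failed ? 1 : 0)
```

Run it as `swift Scripts/lint_events.swift` in CI and fail the job on a nonzero exit. Note the limitation: it only catches literal names, so flag dynamically built event names in code review or forbid them by convention.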
Keep the contract human-readable
A YAML file with a description column doubles as the event dictionary referenced in Event Naming Best Practices. One artifact, two readers.
Step 2: Stub the SDK Transport
Tests should never hit a real analytics endpoint. Wrap the SDK behind an interface and inject a recording stub in tests:
Swift example
```swift
import XCTest

// Production code depends on this interface, never on the SDK directly.
protocol AnalyticsClient {
    func track(_ event: String)
}

// Test double: records every event name instead of sending it anywhere.
final class RecordingAnalytics: AnalyticsClient {
    private(set) var events: [String] = []
    func track(_ event: String) { events.append(event) }
}

// In your test:
let analytics = RecordingAnalytics()
let viewModel = SignupViewModel(analytics: analytics)
viewModel.tapSignupButton()
XCTAssertEqual(analytics.events, ["signup_started"])
```
The same pattern works for Kotlin (interface + fake), Flutter (abstract class), and React Native (interface + jest mock). Keep the interface small: `track(eventName)` is usually all you need.
Step 3: Snapshot the Golden Journeys
For each critical funnel (signup, checkout, primary feature use), write a single test that simulates the full happy path and asserts on the entire event sequence. When someone reorders or removes an event, the snapshot diff is right there in the PR.
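A sketch of what such a test can look like, building on the RecordingAnalytics stub from Step 2. SignupFlow is a hypothetical driver for the happy path, and the snapshot helper is a deliberately tiny stand-in for a snapshot-testing library.

```swift
import XCTest

final class SignupJourneySnapshotTests: XCTestCase {
    func testSignupHappyPathEmitsGoldenSequence() throws {
        let analytics = RecordingAnalytics()
        let flow = SignupFlow(analytics: analytics) // hypothetical journey driver
        flow.open()
        flow.tapSignupButton()
        flow.submitValidForm()
        try assertEventSnapshot(analytics.events, named: "signup_happy_path")
    }

    /// Compares the sequence to a committed fixture; run with
    /// UPDATE_SNAPSHOTS=1 to rewrite the fixture when the change is intentional.
    private func assertEventSnapshot(_ events: [String], named name: String,
                                     file: StaticString = #filePath, line: UInt = #line) throws {
        let dir = URL(fileURLWithPath: #filePath)
            .deletingLastPathComponent()
            .appendingPathComponent("__snapshots__")
        let fixture = dir.appendingPathComponent("\(name).txt")
        let actual = events.joined(separator: "\n")
        if ProcessInfo.processInfo.environment["UPDATE_SNAPSHOTS"] == "1" {
            try FileManager.default.createDirectory(at: dir, withIntermediateDirectories: true)
            try actual.write(to: fixture, atomically: true, encoding: .utf8)
            return
        }
        let expected = try String(contentsOf: fixture, encoding: .utf8)
        XCTAssertEqual(actual, expected, "Event sequence drifted from snapshot", file: file, line: line)
    }
}
```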
Why snapshots beat individual asserts
- One failing test points at the whole journey, not a single tracker call.
- The reviewer sees an explicit diff in the PR, e.g. `- checkout_started` for a removed event.
- Intentional changes are easy: rerun tests with a flag to update the snapshot.
Step 4: Smoke-Test Against Staging
Unit tests check that your code calls the SDK correctly. They do not check that the SDK actually delivers events to your server. Add a tiny end-to-end test that fires three or four canonical events at a staging API key and asserts the server returns 2xx.
Pseudo-flow
- CI job spins up the app (UI test or headless harness).
- App is configured with the `CI_STAGING` API key.
- Test driver triggers `app_opened` and one funnel-critical event.
- Test asserts the SDK reports a successful flush (no retries, no 4xx).
You do not need to query the dashboard. Asserting that the SDK got a 2xx is enough: if the schema is wrong, a strict server like Respectlytics will reject the event with a 400 and your test fails immediately.
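A sketch of that smoke test as a plain URLSession call. The endpoint URL, auth header, and payload shape below are illustrative assumptions rather than any vendor's documented API; swap in your SDK's staging configuration.

```swift
import XCTest

// Posts one canonical event to a staging ingestion endpoint and asserts a 2xx.
final class StagingSmokeTests: XCTestCase {
    func testStagingAcceptsCanonicalEvent() async throws {
        let apiKey = ProcessInfo.processInfo.environment["CI_STAGING"] ?? ""
        var request = URLRequest(url: URL(string: "https://staging.example.com/v1/events")!) // placeholder URL
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
        request.httpBody = try JSONSerialization.data(withJSONObject: [
            "event_name": "app_opened",
            "session_id": UUID().uuidString,
            "timestamp": ISO8601DateFormatter().string(from: Date()),
            "platform": "ios",
            "country": "US"
        ])
        let (_, response) = try await URLSession.shared.data(for: request)
        let status = (response as? HTTPURLResponse)?.statusCode ?? -1
        XCTAssertTrue((200..<300).contains(status),
                      "Staging rejected canonical event with HTTP \(status)")
    }
}
```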
Step 5: Gate the Merge
Configure your CI provider (GitHub Actions, GitLab CI, Bitrise, etc.) to require all three checks to pass before a PR can merge:
| Check | When | Cost |
|---|---|---|
| Contract lint | Every push | ~5s |
| Snapshot unit tests | Every push | ~30s |
| Staging smoke test | PR open + merge to main | 2-5 min |
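As a concrete anchor, here is what that gating can look like as a GitHub Actions workflow. The job names, schemes, and script paths are placeholders; the one real requirement is marking all three jobs as required status checks in your branch protection rules.

```yaml
# Hypothetical workflow sketch; adapt names and commands to your project.
name: analytics-checks
on:
  push:
  pull_request:

jobs:
  contract-lint:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - run: swift Scripts/lint_events.swift   # linter from Step 1

  snapshot-tests:
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - run: xcodebuild test -scheme App -destination 'platform=iOS Simulator,name=iPhone 15'

  staging-smoke:
    # Matches the table: PR open + merge to main, not every push.
    if: github.event_name == 'pull_request' || github.ref == 'refs/heads/main'
    runs-on: macos-latest
    steps:
      - uses: actions/checkout@v4
      - run: xcodebuild test -scheme AppSmokeTests -destination 'platform=iOS Simulator,name=iPhone 15'
        env:
          CI_STAGING: ${{ secrets.CI_STAGING }}
```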
Pitfalls to Avoid
Mocking the assertion away
If your stub returns success even when the contract is violated, you have a useless test. The recorder should record exactly what was passed; the assertion is on the recorded list.
Letting tests fire to production
Use a dedicated CI app key and a staging environment. A test that pollutes production data is a test that gets disabled within a week.
Asserting on dynamic values
Snapshots should compare event names and order, not timestamps or session IDs. Strip those before comparing or the test is permanently flaky.
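If your recorder captures full payloads rather than bare names, normalize before comparing. A sketch, assuming a hypothetical RecordedEvent type (the Step 2 recorder sidesteps this by recording names only):

```swift
import Foundation

// Hypothetical richer recorder payload.
struct RecordedEvent {
    let name: String
    let timestamp: Date
    let properties: [String: String]
}

// Reduce to stable fields before snapshotting: keep names and order, drop
// timestamps, session IDs, and anything else that changes per run.
func snapshotLines(_ events: [RecordedEvent]) -> [String] {
    events.map(\.name)
}
```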
Skipping the smoke test "because it's slow"
Run it on PR open, not on every push. The five-minute investment per PR pays for itself the first time it catches a bad release.
Why this is easier with Respectlytics
Most analytics platforms accept arbitrary custom properties, which means your contract has to enumerate every property of every event. Drift is constant. With Respectlytics, the API stores five fields total: `event_name`, `session_id`, `timestamp`, `platform`, `country`.
The contract collapses to an event-name allowlist, and the server rejects anything else with a 400. Your CI smoke test fails the same minute someone introduces an unknown event.
Frequently Asked Questions
How do you test analytics in CI/CD?
Treat analytics as a contract: maintain a schema of expected events, replace the SDK transport with an in-memory recorder in unit tests, and assert that your code emits the right events in the right order. For end-to-end coverage, fire canonical events at a staging API on every PR.
What is event contract testing?
Event contract testing checks that the events your app fires match a shared schema: names, required fields, allowed values. It catches drift between product, data, and engineering before it reaches production. The schema is versioned alongside the code.
Should analytics tests run on every commit?
Cheap checks (contract validation, snapshot tests) should run on every commit. Expensive checks (staging smoke tests) can run on PR open and on merge to main.
How do you mock an analytics SDK in tests?
Wrap the SDK behind a small interface in your code. In tests, swap the production implementation for a recorder that appends every event to a list, then assert against that list.
Why does Respectlytics make CI testing easier?
The strict five-field schema and server-side allowlist mean the contract reduces to a list of event names. The API rejects unknown fields and unknown events with a 400, so your staging smoke test fails immediately on drift.
Related Reading
- Testing Your Analytics Implementation: Pre-release checklist for manual verification
- Event Naming Best Practices: Naming conventions that make contracts simpler
- Analytics Events Every App Should Track: Starter event lists by app type
- Track Onboarding Flow Completion Rates: A complete funnel that benefits from snapshot testing