A/B Test Results

Go to A/B Tests and click on an active or completed test. The detail page shows per-variant metrics.

  • Clicks: Total link clicks attributed to this variant
  • Installs: App installs attributed to this variant
  • Store redirects: Users sent to the App Store or Play Store
  • Custom events: Count of the goal event (if using a custom event goal)

Each metric is shown alongside the conversion rate (e.g. installs / clicks) so you can compare variants directly.
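The comparison can be reproduced by hand. Here is a minimal sketch in Python, using made-up variant numbers (the dictionary field names are illustrative, not the product's API):

```python
# Hypothetical per-variant data; replace with the numbers from your test detail page.
variants = {
    "A": {"clicks": 1200, "installs": 84},
    "B": {"clicks": 1180, "installs": 103},
}

for name, v in variants.items():
    # Conversion rate = goal events / clicks; guard against a zero denominator.
    rate = v["installs"] / v["clicks"] if v["clicks"] else 0.0
    print(f"Variant {name}: {rate:.2%} ({v['installs']}/{v['clicks']})")
```

With these example numbers, variant B converts at about 8.7% versus 7.0% for A, which is the kind of side-by-side view the detail page gives you.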

There is no automatic winner declaration. You decide when you have enough data. As a general guideline:

  • Minimum sample size: Each variant should have at least a few hundred events (for the goal metric) before drawing conclusions. Small samples lead to unreliable results.
  • Run duration: Run the test for at least 7 days to account for day-of-week variation in traffic.
  • Clear difference: If one variant has a noticeably higher conversion rate and the difference has held consistently over several days, it is likely a real improvement.
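If you want a more formal check than eyeballing the rates, a standard two-proportion z-test can tell you how likely the observed difference is under pure chance. This is not a product feature, just a conventional sketch using only the Python standard library, with the same made-up numbers as above:

```python
from math import sqrt, erf

def z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z score, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both variants convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = z_test(84, 1200, 103, 1180)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below 0.05 is the usual bar for calling the difference real. Note that with the sample numbers above the p-value is around 0.12, i.e. a 1.7-point gap on ~1,200 clicks per variant is still within the range of chance, which is exactly why the minimum-sample guideline matters.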

When you are confident in the results:

  1. Click Declare Winner on the test detail page.
  2. Select the winning variant.
  3. Confirm.

This action:

  • Marks the test as completed.
  • Stops traffic splitting. All visitors now see the winning variant.
  • For landing page tests: the winning landing page is automatically assigned to the route.
  • For banner, message, and bio page tests: you may want to deactivate the losing variants manually.

The test remains in your history for reference. You can view the final metrics at any time.

If you want to run a follow-up test (e.g. testing the winner against a new challenger), create a new test. Previous tests do not interfere with new ones.

Best Practices

  • Test one thing at a time: Change a single element between variants (headline, image, CTA text, layout). If you change multiple things, you will not know which one caused the difference.
  • Do not peek and stop: Checking results daily is fine, but do not end the test the moment one variant looks better. Wait for a sufficient sample.
  • Document your tests: Keep notes on what you changed and why. Over time, you will build a library of what works for your audience.
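To put a number on "sufficient sample" before you start, the standard power-analysis rule of thumb estimates how many clicks each variant needs to detect a given lift. The sketch below assumes ~95% confidence and ~80% power (the usual defaults); it is an approximation, not a product feature:

```python
import math

def sample_size(base_rate, relative_lift, z_alpha=1.96, z_beta=0.84):
    """Approximate clicks needed per variant to detect a relative lift,
    at ~95% confidence and ~80% power (standard rule of thumb)."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    # Sum of the two binomial variances.
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Detecting a 20% relative lift on a 5% baseline conversion rate:
print(sample_size(0.05, 0.20))
```

At a 5% baseline this works out to roughly 8,100 clicks per variant, i.e. around 400 goal events, in line with the few-hundred-events guideline above. Smaller lifts or lower baselines need substantially more traffic.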