Use Cases · 5 min read

A/B Testing Onboarding Flows

By Tolinku Staff

Onboarding intuition is unreliable. The flow that "feels right" to your team often isn't the one that performs best with real users. A/B testing replaces guesswork with data: show different users different onboarding experiences, measure the results, and keep the winner.

For onboarding completion strategies, see Improving Onboarding Completion Rates. For A/B testing deep link destinations, see A/B Testing Deep Link Destinations.

What to Test

High-Impact Variables

Variable | Variants to Test | Primary Metric
Number of steps | 3 vs. 5 vs. 7 steps | Completion rate
Signup method | Email-first vs. social-first | Signup conversion
Permissions timing | Upfront vs. contextual | Permission grant rate
Feature tour | With tour vs. without tour | Day 7 retention
Welcome screen copy | Benefit-focused vs. feature-focused | Tap-through rate
Progress indicator | Progress bar vs. step dots vs. none | Completion rate
First action prompt | Guided vs. open-ended | Activation rate
Personalization | Generic vs. source-based | Completion + retention

Low-Impact Variables (Skip These)

  • Button color on the welcome screen
  • Exact font size of onboarding text
  • Animation style between steps
  • Icon design for feature tour

Focus on structural changes (step count, order, content) rather than cosmetic ones.

Experiment Architecture

Assignment

Assign users to variants on first launch and persist the assignment:

function assignOnboardingVariant(userId, experimentId) {
  // Check for existing assignment
  const existing = getExperimentAssignment(userId, experimentId);
  if (existing) return existing;

  // Assign based on user ID hash for consistency
  const hash = hashString(`${userId}-${experimentId}`);
  const variantIndex = hash % getVariantCount(experimentId);
  const variant = getVariantByIndex(experimentId, variantIndex);

  saveAssignment(userId, experimentId, variant);

  analytics.track('experiment_assigned', {
    experimentId,
    variant: variant.id,
    userId,
  });

  return variant;
}

Important: use a deterministic hash, not Math.random(). The same user should always see the same variant, even across sessions and reinstalls.
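
One possible hashString is a 32-bit FNV-1a hash. This is a minimal sketch, not the only option; any stable string-to-integer hash works as long as it returns a non-negative number for the modulo above:

function hashString(input) {
  // FNV-1a: deterministic and stable across sessions, devices, and reinstalls
  let hash = 2166136261;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 16777619);
  }
  // Coerce to an unsigned 32-bit integer so the modulo never sees a negative value
  return hash >>> 0;
}

// hashString('user-123-onboarding_steps_v3') always returns the same number,
// so the same user always lands in the same variant.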

Variant Configuration

const experiments = {
  onboarding_steps_v3: {
    id: 'onboarding_steps_v3',
    variants: [
      {
        id: 'control',
        weight: 50, // 50% of users
        config: { steps: ['welcome', 'signup', 'profile', 'permissions', 'tour', 'first_action'] },
      },
      {
        id: 'short',
        weight: 50,
        config: { steps: ['welcome', 'signup', 'first_action'] },
      },
    ],
    primaryMetric: 'onboarding_completed',
    secondaryMetrics: ['day_7_retention', 'first_purchase'],
    minSampleSize: 1000, // Per variant
    startDate: '2026-05-04',
  },
};
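
Note that the assignment function earlier buckets users with a plain modulo, which only matches this config when every variant has equal weight. If you want uneven splits (say 90/10 for a risky change), the lookup needs to respect the weights. A rough sketch, assuming the experiments object above is in scope and the hash comes from the deterministic hash described earlier:

function getWeightedVariant(experimentId, hash) {
  const { variants } = experiments[experimentId];
  const totalWeight = variants.reduce((sum, v) => sum + v.weight, 0);

  // Map the hash onto the total weight, then walk the cumulative ranges
  let bucket = hash % totalWeight;
  for (const variant of variants) {
    if (bucket < variant.weight) return variant;
    bucket -= variant.weight;
  }
  return variants[variants.length - 1]; // Unreachable when all weights are positive
}

// getWeightedVariant('onboarding_steps_v3', hashString(`${userId}-onboarding_steps_v3`))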

Rendering the Flow

import { useState } from 'react';
import { View } from 'react-native';
// WelcomeScreen, SignupScreen, ProgressBar, NextButton, etc. are your own app components

function OnboardingFlow({ userId }) {
  const variant = assignOnboardingVariant(userId, 'onboarding_steps_v3');
  const steps = variant.config.steps;

  const [currentStep, setCurrentStep] = useState(0);

  const stepComponents = {
    welcome: <WelcomeScreen />,
    signup: <SignupScreen />,
    profile: <ProfileScreen />,
    permissions: <PermissionsScreen />,
    tour: <FeatureTour />,
    first_action: <FirstActionPrompt />,
  };

  return (
    <View>
      <ProgressBar current={currentStep} total={steps.length} />
      {stepComponents[steps[currentStep]]}
      <NextButton
        onPress={() => {
          trackStep(steps[currentStep], variant.id);
          if (currentStep < steps.length - 1) {
            setCurrentStep(currentStep + 1);
          } else {
            completeOnboarding(variant.id);
          }
        }}
      />
    </View>
  );
}

Event Tracking

Required Events

Every experiment needs consistent event tracking across all variants:

function trackStep(step, variantId) {
  analytics.track('onboarding_step_completed', {
    step,
    variantId,
    experimentId: 'onboarding_steps_v3',
    timestamp: Date.now(),
    sessionDuration: getSessionDuration(),
  });
}

function completeOnboarding(variantId) {
  analytics.track('onboarding_completed', {
    variantId,
    experimentId: 'onboarding_steps_v3',
    totalTime: getOnboardingDuration(),
    stepsCompleted: getCurrentStepIndex() + 1,
  });
}

Downstream Metrics

Track what happens after onboarding to measure long-term impact. For a comprehensive analytics setup, see Onboarding Analytics: Measuring Activation Success.

// Track these for each variant over 7, 14, 30 days
const downstreamMetrics = [
  'day_1_retention',
  'day_7_retention',
  'day_30_retention',
  'first_purchase_within_7_days',
  'feature_adoption_count',
  'support_ticket_created',
  'app_uninstall_within_7_days',
];

A shorter onboarding might have a higher completion rate but lower retention if users miss important context. Always measure downstream.
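
As a rough illustration, the comparison joins assignment events to downstream events per variant. getEventsForExperiment below is a hypothetical query helper standing in for whatever your analytics backend exposes:

async function compareDay7Retention(experimentId) {
  // Hypothetical helpers: swap in your analytics backend's query API
  const assignments = await getEventsForExperiment(experimentId, 'experiment_assigned');
  const retained = await getEventsForExperiment(experimentId, 'day_7_retention');
  const retainedUsers = new Set(retained.map(e => e.userId));

  const byVariant = {};
  for (const a of assignments) {
    const stats = byVariant[a.variant] || { assigned: 0, retained: 0 };
    stats.assigned += 1;
    if (retainedUsers.has(a.userId)) stats.retained += 1;
    byVariant[a.variant] = stats;
  }

  for (const [variant, stats] of Object.entries(byVariant)) {
    console.log(`${variant}: ${(stats.retained / stats.assigned * 100).toFixed(1)}% retained at day 7`);
  }
  return byVariant;
}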

Statistical Rigor

Sample Size

Calculate the required sample size before starting the experiment:

function calculateSampleSize(baselineRate, minimumDetectableEffect, power, significance) {
  // baselineRate: current completion rate (e.g., 0.45)
  // minimumDetectableEffect: smallest change you care about (e.g., 0.05 for 5%)
  // power: 0.80 (standard)
  // significance: 0.05 (standard)

  const p1 = baselineRate;
  const p2 = baselineRate + minimumDetectableEffect;
  const zAlpha = 1.96; // For significance = 0.05
  const zBeta = 0.84; // For power = 0.80

  const n = Math.ceil(
    Math.pow(zAlpha * Math.sqrt(2 * p1 * (1 - p1)) + zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2)), 2)
    / Math.pow(p2 - p1, 2)
  );

  return n; // Per variant
}

// Example: 45% baseline, detect 5% lift, 80% power, 95% significance
// calculateSampleSize(0.45, 0.05, 0.80, 0.05) ≈ 1,555 per variant

When to Stop

Don't peek at results daily and declare a winner as soon as one variant looks better. Premature stopping inflates false positive rates.

Rules for stopping:

  1. Run until each variant reaches the pre-calculated sample size
  2. Or run for a minimum of 2 full weeks (to account for day-of-week effects)
  3. Only then evaluate statistical significance (a significance-test sketch follows the code below)

function canEvaluateExperiment(experiment) {
  const variants = getVariantResults(experiment.id);
  const minSize = experiment.minSampleSize;
  const minDays = 14;

  const daysSinceStart = daysBetween(experiment.startDate, new Date());
  const allVariantsReachedSize = variants.every(v => v.sampleSize >= minSize);

  return daysSinceStart >= minDays && allVariantsReachedSize;
}
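
Once canEvaluateExperiment returns true, the completion-rate comparison itself can be a standard two-proportion z-test. A minimal sketch, assuming each variant result carries conversions and sampleSize counts:

function twoProportionZTest(control, treatment) {
  // control / treatment: { conversions, sampleSize }
  const p1 = control.conversions / control.sampleSize;
  const p2 = treatment.conversions / treatment.sampleSize;

  // Pooled proportion under the null hypothesis of no difference
  const pooled = (control.conversions + treatment.conversions) /
                 (control.sampleSize + treatment.sampleSize);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / control.sampleSize + 1 / treatment.sampleSize)
  );

  const z = (p2 - p1) / standardError;
  // |z| > 1.96 corresponds to p < 0.05 for a two-sided test
  return { z, significant: Math.abs(z) > 1.96 };
}

// twoProportionZTest({ conversions: 420, sampleSize: 1000 }, { conversions: 610, sampleSize: 1000 })
// → z ≈ 8.5, significant: true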

Test Different Flows by Source

Users from different acquisition sources may respond differently to onboarding variants:

function assignVariantBySource(userId, source) {
  // Run separate experiments per source
  const experimentId = `onboarding_${source}_v1`;

  return assignOnboardingVariant(userId, experimentId);
}

// Referral users might perform better with abbreviated onboarding
// Ad users might need more education
async function setupExperiment(userId) {
  const deferred = await Tolinku.checkDeferredLink();

  const segments = {
    hasReferral: deferred && deferred.params.ref ? true : false,
    hasCampaign: deferred && deferred.params.utm_campaign ? true : false,
    source: deferred ? deferred.params.utm_source || 'deep_link' : 'organic',
  };

  // Different experiments for different segments
  if (segments.hasReferral) {
    return assignOnboardingVariant(userId, 'referral_onboarding_v2');
  }

  if (segments.hasCampaign) {
    return assignOnboardingVariant(userId, 'campaign_onboarding_v1');
  }

  return assignOnboardingVariant(userId, 'organic_onboarding_v3');
}

Common Experiments and Results

Experiment 1: Step Count

Variant | Steps | Completion Rate | Day 7 Retention
Control (6 steps) | 6 | 42% | 28%
Short (3 steps) | 3 | 61% | 31%
Minimal (2 steps) | 2 | 68% | 24%

Result: 3 steps wins. The 2-step variant has the highest completion rate but lower retention because users miss essential context.

Experiment 2: Signup Method Priority

Variant | Primary Method | Signup Rate | Time to Sign Up
Email first | Email/password form | 48% | 52s
Social first | Google + Apple buttons | 63% | 12s
Choice | All options equally | 55% | 28s

Result: Social login first wins for both conversion and speed.

Experiment 3: Permission Timing

Variant | When Permissions Asked | Grant Rate | Completion Rate
Upfront | Step 2 of onboarding | 35% | 44%
Contextual | When feature is used | 62% | 58%
Pre-prompt | Explain, then ask | 51% | 52%

Result: Contextual wins on both grant rate and completion rate.

Avoiding Common Mistakes

1. Testing Too Many Things at Once

Change one variable per experiment. If you change the step count AND the copy AND the layout, you won't know which change caused the result.

2. Not Segmenting Results

Overall results can hide important segment differences. A variant that works well for organic users might hurt referral users. Always break down results by acquisition source, platform, and geography.
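
A short sketch of that breakdown, assuming each result row already carries the user's variant, a conversion flag, and segment attributes:

function breakDownBySegment(results, segmentKey) {
  // results: [{ variant, converted, source, platform, country }, ...]
  const segments = {};
  for (const row of results) {
    const segment = row[segmentKey] || 'unknown';
    segments[segment] = segments[segment] || {};
    const stats = segments[segment][row.variant] || { users: 0, conversions: 0 };
    stats.users += 1;
    if (row.converted) stats.conversions += 1;
    segments[segment][row.variant] = stats;
  }
  return segments;
}

// breakDownBySegment(results, 'source') → per-source, per-variant conversion counts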

3. Optimizing for the Wrong Metric

Completion rate is easy to measure but isn't always the right primary metric. A flow that maximizes completion but produces users who churn after 3 days is worse than one with lower completion but higher retention.

Choose your primary metric based on business value:

  • Early-stage app: Activation (user performs core action)
  • Growth-stage app: Day 7 retention
  • Monetized app: Revenue per user (Day 30)

For Tolinku's A/B testing features and setup, see the A/B testing docs. For onboarding use cases, see the onboarding documentation.
