
How to Track A/B Test Results Directly from Your Database


Priya Sharma · Product Lead · April 11, 2026 · 7 min read

Most A/B testing tools are black boxes. You pour traffic through them, and they hand you a winner. But when someone asks "why did variant B win?" or "what about revenue per user, not just clicks?", the tool gives you nothing. The underlying event data that would answer those questions lives in your database, not in Optimizely or LaunchDarkly.

This guide shows how to pull, analyse, and monitor A/B test results directly from your database, whether you're using a product analytics schema, a custom experiment table, or raw event logs.

-

Why A/B Test Data Lives in Your Database

Every meaningful A/B test touches your backend at some point. Users get assigned to variants. They take actions: sign up, upgrade, churn, purchase. Those actions write rows to your database.

The gap between "which variant did the user see?" and "what did they do next?" is almost always in your production database, not in a third-party testing tool.

Common patterns:

  • Experiment assignment table: a row per user per experiment, with variant and timestamp
  • Event log: a stream of user actions, tagged with experiment context
  • Feature flag column: a variant column baked into user records

    Your testing tool might capture click-through rates. Your database captures the rest: trial-to-paid conversion, revenue, usage depth, churn timing.

    If you want to know whether variant B actually made you money, not just got more clicks, you need to join experiment assignment data against your transaction and usage tables. That means SQL.

    -

    The SQL Behind A/B Test Analysis

    Here's a realistic example. Suppose you have:

  • experiment_assignments: user_id, experiment_id, variant, assigned_at
  • subscriptions: user_id, plan, started_at, cancelled_at, mrr

    Conversion rate by variant:

    SELECT
      ea.variant,
      COUNT(DISTINCT ea.user_id) AS assigned_users,
      COUNT(DISTINCT s.user_id) AS converted_users,
      ROUND(
        COUNT(DISTINCT s.user_id) * 100.0 / COUNT(DISTINCT ea.user_id),
        2
      ) AS conversion_rate_pct
    FROM experiment_assignments ea
    LEFT JOIN subscriptions s
      ON ea.user_id = s.user_id
      AND s.started_at >= ea.assigned_at
    WHERE ea.experiment_id = 'onboarding-v3'
    GROUP BY ea.variant
    ORDER BY conversion_rate_pct DESC;

    Revenue per user by variant:

    SELECT
      ea.variant,
      COUNT(DISTINCT ea.user_id) AS users,
      COALESCE(SUM(s.mrr), 0) AS total_mrr,
      ROUND(
        COALESCE(SUM(s.mrr), 0) / COUNT(DISTINCT ea.user_id),
        2
      ) AS mrr_per_user
    FROM experiment_assignments ea
    LEFT JOIN subscriptions s
      ON ea.user_id = s.user_id
      AND s.started_at >= ea.assigned_at
    WHERE ea.experiment_id = 'onboarding-v3'
    GROUP BY ea.variant;

    Churn rate by variant (30-day window):

    SELECT
      ea.variant,
      COUNT(DISTINCT ea.user_id) AS assigned_users,
      COUNT(DISTINCT CASE
        WHEN s.cancelled_at IS NOT NULL
        AND s.cancelled_at <= ea.assigned_at + INTERVAL '30 days'
        THEN s.user_id
      END) AS churned_30d,
      ROUND(
        COUNT(DISTINCT CASE
          WHEN s.cancelled_at <= ea.assigned_at + INTERVAL '30 days'
          THEN s.user_id
        END) * 100.0 / NULLIF(COUNT(DISTINCT ea.user_id), 0),
        2
      ) AS churn_rate_30d
    FROM experiment_assignments ea
    LEFT JOIN subscriptions s ON ea.user_id = s.user_id
    WHERE ea.experiment_id = 'onboarding-v3'
    GROUP BY ea.variant;

    These queries answer the questions your A/B testing tool can't, because they join experiment data with the outcomes that actually matter to your business.
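Counts and rates alone don't tell you whether a gap between variants is real. As a minimal sketch (not part of any tool mentioned in this article), a two-proportion z-test over the assigned and converted counts returned by the conversion-rate query might look like this; the sample numbers are invented for illustration:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on conversion counts per variant.

    conv_* = converted_users, n_* = assigned_users from the
    conversion-rate-by-variant query. Returns (z, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, erf-based so no SciPy is needed.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: variant A converted 120/1000, variant B 155/1000.
z, p = two_proportion_z_test(conv_a=120, n_a=1000, conv_b=155, n_b=1000)
```

A p-value below your chosen threshold (commonly 0.05) suggests the difference is unlikely to be noise, though only if you respect the caveats about peeking and sample size covered later in this article.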

    -

    Building a Live A/B Test Dashboard

    Running these queries ad hoc in a SQL client is fine once. Running them repeatedly throughout a test, and sharing results with your team, is painful.

    With AI for Database, you can turn any of these queries into a live dashboard panel by describing what you want in plain English:

  • "Show me conversion rate by variant for experiment onboarding-v3"
  • "Give me revenue per user split by A/B variant for the pricing test"
  • "How many users are in each arm of experiment checkout-flow-2?"

    The tool translates the question to SQL, runs it against your database, and returns a chart or table. You can pin that panel to a dashboard that refreshes on a schedule, so you and your product team can check results each morning without touching SQL.

    The dashboard updates automatically as new assignment and conversion data arrives. No CSV exports, no re-running queries, no "can you pull the latest numbers?" messages to your data team.

    -

    Setting Up Alerts for Test Completion

    The moment a test reaches statistical significance, or hits a minimum detectable sample size, is easy to miss if you're checking manually.

    Common signals worth monitoring from the database:

  • COUNT(DISTINCT user_id) per variant crosses your target sample size
  • Conversion rate gap between variants exceeds a threshold
  • A variant shows anomalous churn within the first week

    With action workflows in AI for Database, you define the condition in plain English and point it at Slack, email, or a webhook:

  • "When experiment_id = 'pricing-test-q2' has more than 2000 users per variant, send a Slack message to #product"
  • "If churn rate in any variant exceeds 15% within 14 days of assignment, alert via email"
    Instead of someone remembering to check, the system watches your database and fires when conditions are met. Your team gets notified. You review the data. You ship the winner.
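Under the hood, a sample-size alert like the first rule reduces to a grouped count and a threshold check. A rough sketch, using an in-memory SQLite copy of the experiment_assignments table from earlier (the data and the 2000-per-variant target are taken from the example rule, scaled down here for brevity):

```python
import sqlite3

def variants_ready(conn, experiment_id, target):
    """True once every variant in the experiment has reached the target size."""
    rows = conn.execute(
        """
        SELECT variant, COUNT(DISTINCT user_id)
        FROM experiment_assignments
        WHERE experiment_id = ?
        GROUP BY variant
        """,
        (experiment_id,),
    ).fetchall()
    # No rows means no assignments yet, which is never "ready".
    return bool(rows) and all(n >= target for _, n in rows)

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE experiment_assignments "
    "(user_id INTEGER, experiment_id TEXT, variant TEXT, assigned_at TEXT)"
)
conn.executemany(
    "INSERT INTO experiment_assignments VALUES (?, 'pricing-test-q2', ?, '2026-04-01')",
    [(i, "A" if i % 2 else "B") for i in range(10)],  # 5 users per variant
)

if variants_ready(conn, "pricing-test-q2", target=5):
    pass  # in a real workflow: post to Slack, send email, or hit a webhook
```

The point of a managed workflow is that you describe the condition and skip writing and scheduling this check yourself.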

    -

    Common Mistakes When Reading A/B Test Data from Databases

    1. Not filtering by assignment date

    If you join experiment assignments to subscriptions without constraining by assigned_at, you'll pick up conversions from before the experiment started. A user who was already paying before they were assigned to variant B counts as a conversion, incorrectly.

    Always use s.started_at >= ea.assigned_at (or equivalent) to scope conversions to post-assignment activity.
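To see the difference the predicate makes, here is a tiny self-contained SQLite sketch with invented data: two users in variant B, one of whom subscribed before being assigned.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE experiment_assignments (user_id INT, experiment_id TEXT, variant TEXT, assigned_at TEXT);
CREATE TABLE subscriptions (user_id INT, started_at TEXT);
-- Both users are in variant B; user 1 subscribed BEFORE assignment.
INSERT INTO experiment_assignments VALUES (1, 'onboarding-v3', 'B', '2026-03-01'),
                                          (2, 'onboarding-v3', 'B', '2026-03-01');
INSERT INTO subscriptions VALUES (1, '2026-02-15'), (2, '2026-03-10');
""")

# Unscoped join: counts user 1's pre-experiment subscription as a conversion.
unscoped = conn.execute("""
    SELECT COUNT(DISTINCT s.user_id)
    FROM experiment_assignments ea
    LEFT JOIN subscriptions s ON ea.user_id = s.user_id
""").fetchone()[0]

# Scoped join: only subscriptions started after assignment count.
scoped = conn.execute("""
    SELECT COUNT(DISTINCT s.user_id)
    FROM experiment_assignments ea
    LEFT JOIN subscriptions s ON ea.user_id = s.user_id
        AND s.started_at >= ea.assigned_at
""").fetchone()[0]
```

The unscoped query reports two conversions, the scoped one reports one. On a real experiment, that inflation can easily flip which variant looks like the winner.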

    2. Mixing assignment and exposure

    Being assigned to a variant and exposed to it are different. If your experiment only activates for users who reach a certain screen, assignment-level analysis dilutes the effect. Log a separate experiment_exposures event when the variant is actually shown, and analyse from there.

    3. Ignoring novelty effect

    Early data in an A/B test is noisy, and users often engage with anything new simply because it's new. Conversions in the first 24 hours after assignment can skew variant metrics. If your test window ends up short by accident (a deployment bug, a traffic spike), your database will show a winner that isn't real. Always check the assigned_at distribution before calling results.

    4. Declaring a winner too early

    This is peeking: checking results repeatedly and stopping the test when the numbers look good. Your database makes it very easy to check results live, which is useful, but stopping a test before your target sample size is a common source of false positives.

    5. Forgetting to account for multiple experiments running simultaneously

    If users are in two experiments at the same time, you need to stratify or filter. A straightforward query that ignores concurrent experiments may mix effects.
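Finding the overlap is a single grouped query over the assignment table. A sketch against the same hypothetical experiment_assignments schema, with invented data where one user sits in two experiments at once:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE experiment_assignments "
    "(user_id INTEGER, experiment_id TEXT, variant TEXT, assigned_at TEXT)"
)
conn.executemany(
    "INSERT INTO experiment_assignments VALUES (?, ?, 'A', '2026-04-01')",
    [
        (1, "onboarding-v3"),
        (1, "pricing-test-q2"),  # user 1 is in two experiments at once
        (2, "onboarding-v3"),
        (3, "pricing-test-q2"),
    ],
)

# Users assigned to more than one experiment; stratify by the other
# experiment's variant, or exclude them, before comparing arms.
overlapping = [row[0] for row in conn.execute("""
    SELECT user_id
    FROM experiment_assignments
    GROUP BY user_id
    HAVING COUNT(DISTINCT experiment_id) > 1
""")]
```

If the overlap is small you can usually exclude those users; if it's large, the experiments were not independent and you need to analyse them jointly.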

    -

    Segmenting Test Results by User Attributes

    Raw aggregate results hide a lot. The real insight is often in segments.

    Conversion rate by variant and plan type:

    SELECT
      ea.variant,
      u.plan_type,
      COUNT(DISTINCT ea.user_id) AS assigned,
      COUNT(DISTINCT s.user_id) AS converted,
      ROUND(
        COUNT(DISTINCT s.user_id) * 100.0 / NULLIF(COUNT(DISTINCT ea.user_id), 0),
        2
      ) AS cvr
    FROM experiment_assignments ea
    JOIN users u ON ea.user_id = u.id
    LEFT JOIN subscriptions s
      ON ea.user_id = s.user_id
      AND s.started_at >= ea.assigned_at
    WHERE ea.experiment_id = 'onboarding-v3'
    GROUP BY ea.variant, u.plan_type
    ORDER BY ea.variant, cvr DESC;

    You might find variant B wins overall but variant A wins for users on the Pro plan, a result that changes your shipping decision entirely. This kind of segmented analysis is trivial once the data is in your database and you can query it directly.

    -

    Wrapping Up

    A/B testing tools tell you which button colour got more clicks. Your database tells you whether any of it made a difference to revenue, retention, or product usage.

    The queries in this guide are starting points. Adapt the table names and columns to your schema. Once you have them working, turn the most useful ones into a self-refreshing dashboard so you and your team can monitor test progress without anyone running SQL manually.

    If writing these queries from scratch feels daunting, try AI for Database: describe the analysis you need, and it handles the SQL for you. Try it free at aifordatabase.com.

    Ready to try AI for Database?

    Query your database in plain English. No SQL required. Start free today.