How to A/B Test Landing Pages Without Wasting Traffic
Most A/B tests fail because the person running them does not have enough traffic to get a real answer. They run a test for two weeks, see Variant B is up 8%, ship it, and move on. Three months later, their conversion rate is right back where it started.
This guide shows you how to run A/B tests on landing pages when you have limited traffic, which is most of us. No statistics PhD required. Just a process that stops you from making decisions based on noise.
Why most A/B tests waste traffic
Here is the brutal math. To detect a 10% lift on a page that converts at 3%, you need roughly 23,000 visitors per variant. That is over 46,000 visitors total. If your landing page gets 500 visitors a week, that test takes almost two years.
Most founders do not know this. They run a test for 200 visitors, see a "winner," and declare victory. The result is random noise. They are not optimizing. They are gambling.
Three things waste traffic in A/B testing:
- Testing changes too small to detect (button colors, single words)
- Stopping tests as soon as a variant looks like it is winning
- Running multiple tests at once on the same page
Avoid these three and you are already ahead of 80% of teams running tests.
Step 1: Calculate if you can even run the test
Before you build a single variant, run the numbers. Use a sample size calculator like Evan Miller's or one built into your testing tool.
You need three inputs:
- Baseline conversion rate: what your page converts at right now
- Minimum detectable effect (MDE): the smallest lift worth caring about
- Statistical significance: stick with 95%
Here is a quick reference for a 3% baseline conversion rate at 95% significance (exact figures vary between calculators depending on the statistical power they assume):
- Detect a 50% lift: 1,030 visitors per variant
- Detect a 25% lift: 3,830 visitors per variant
- Detect a 10% lift: 23,100 visitors per variant
- Detect a 5% lift: 91,500 visitors per variant
If you get 1,000 visitors a week, you can realistically test changes you expect will move conversions by 25% or more. Anything subtler is a waste.
This is the single most important step. If the math says you need six months to get an answer, do not run the test. Pick a bigger change.
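If you would rather script this than use a web calculator, here is a minimal Python sketch of the standard two-proportion sample size formula. The function name and defaults are mine: two-sided 95% significance and 80% power, which is stricter than some calculators assume, so expect its output to land above the quick-reference numbers.

```python
from math import ceil, sqrt

def sample_size_per_variant(baseline, relative_mde, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variant to detect a relative lift.

    baseline     -- current conversion rate, e.g. 0.03 for 3%
    relative_mde -- minimum detectable effect, e.g. 0.25 for a 25% lift
    z_alpha      -- 1.96 for 95% two-sided significance
    z_beta       -- 0.84 for 80% power
    """
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# 3% baseline, hoping for at least a 25% lift
print(sample_size_per_variant(0.03, 0.25))
```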
Step 2: Test one big thing, not five small things
When traffic is scarce, you have to test changes large enough to actually shift behavior. Forget button colors. Forget headline word swaps. Test entire concepts.
Examples of changes worth testing:
- A completely different headline angle (problem-focused vs outcome-focused)
- A new hero layout (text + image vs full-width video)
- A different offer (free trial vs free demo vs free tool)
- Long-form vs short-form page
- A different primary CTA (sign up vs see pricing)
Examples not worth testing on low traffic:
- Button color changes
- Single word swaps
- Adding or removing one testimonial
- Font changes
The bigger the lift a change can plausibly produce, the smaller the sample you need to detect it. This is counterintuitive but true: required sample size scales with roughly the inverse square of the effect size, so big swings detect faster.
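You can sanity-check that scaling against the Step 1 reference table:

```python
# Halving the detectable lift costs roughly four times the traffic
# (numbers from the Step 1 reference table: 3% baseline, 95% significance).
print(3_830 / 1_030)    # 25% lift vs 50% lift: ~3.7x more visitors
print(91_500 / 23_100)  # 5% lift vs 10% lift:  ~4.0x more visitors
```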
Step 3: Form a hypothesis, not a guess
A real hypothesis has three parts:
- What you observed
- What you think is causing it
- What outcome you expect from the change
Bad: "Let's try a video on the hero."
Good: "Heatmap data shows 70% of visitors do not scroll past the hero. The current hero copy is abstract. If we replace the static image with a 30-second product demo, we expect signup rate to increase by at least 20% because visitors will understand what the product does without needing to scroll."
The second version forces you to think. It also tells you what to do next regardless of the result. If the test wins, you know why. If it loses, you can rule out that hypothesis and try a different angle.
Step 4: Set your sample size and stopping rule before you start
Write down two numbers before launching:
- Total visitors you will let the test run for
- Minimum days you will wait
The second one matters more than people think. Even if you hit your sample size in three days, run for at least one full week, ideally two. Tuesday traffic behaves differently from Saturday traffic. Paid traffic on launch day behaves differently from organic traffic two weeks in.
Once you set these numbers, do not look at the dashboard daily. Looking constantly leads to peeking, and peeking leads to stopping early. Set a calendar reminder for the end date. Check then.
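A small sketch to turn those two numbers into a calendar date before you launch (the figures here are placeholders; plug in your own sample size and traffic):

```python
from datetime import date, timedelta
from math import ceil

visitors_per_variant = 3_830  # from your sample size calculation
weekly_traffic = 1_000        # total visitors the page gets per week
min_days = 14                 # never stop before two full weeks

# With a 50/50 split, each variant receives half the weekly traffic
days_for_sample = ceil(visitors_per_variant / (weekly_traffic / 2) * 7)
end_date = date.today() + timedelta(days=max(days_for_sample, min_days))
print(f"Do not check results before {end_date}")
```

Put that date in your calendar and close the dashboard.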
Step 5: Pick the right tool for your traffic level
The tool matters less than the process, but some tools handle low traffic better than others.
For low traffic (under 5,000 visitors/month):
- Google Optimize replacements like GrowthBook or PostHog
- Built-in testing in your landing page builder
For mid traffic (5,000 to 50,000):
- VWO, Convert, or AB Tasty
- PostHog (still works great)
For higher traffic:
- Optimizely, Statsig, Eppo
Whatever you pick, make sure it does proper sequential testing or shows confidence intervals, not just a "winner" badge after 50 visitors.
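If your tool only shows a badge, you can always compute a confidence interval yourself from the raw counts. A minimal sketch of a two-proportion 95% interval using the normal approximation (the function name and example counts are mine):

```python
from math import sqrt

def lift_confidence_interval(conv_a, n_a, conv_b, n_b, z=1.96):
    """95% CI for the absolute difference in conversion rate (B minus A)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    diff = p_b - p_a
    return diff - z * se, diff + z * se

low, high = lift_confidence_interval(conv_a=90, n_a=3000, conv_b=120, n_b=3000)
print(f"Absolute lift: {low:+.2%} to {high:+.2%}")
```

If the interval spans zero, you do not have a winner, no matter what the badge says.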
Step 6: Run the test cleanly
A few rules to keep your test valid:
Do not change anything else on the page during the test. No copy edits. No new traffic sources. No pricing changes. The only difference between A and B should be the variable you are testing.
Split traffic 50/50. Uneven splits make analysis harder and rarely speed things up.
Exclude internal traffic. Your team clicking around will skew small samples fast. Most tools let you exclude by IP or cookie.
Track the right metric. Track the conversion that actually matters: signup, purchase, demo booked. Not clicks on the CTA. A click is not a conversion.
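One more practical note: if you are wiring the split yourself instead of relying on a tool, assign variants with a deterministic hash of a stable visitor ID so returning visitors always see the same page. A minimal sketch (not any particular tool's API; how you persist the visitor ID is up to you):

```python
import hashlib

def assign_variant(visitor_id: str, test_name: str = "hero-test") -> str:
    """Deterministic 50/50 split: the same visitor always gets the same variant."""
    # Hashing the test name too keeps assignments independent across tests
    digest = hashlib.sha256(f"{test_name}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

print(assign_variant("visitor-123"))  # stable across sessions and page loads
```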
Step 7: Analyze results without fooling yourself
When the test ends, three outcomes are possible:
- Clear winner with significance. Ship it. Document what you learned.
- Clear loser. Roll back. Note the hypothesis that failed so you do not repeat it.
- Inconclusive. This is the most common result. Do not ship the variant. An inconclusive test does not prove the variants are equal; it means any difference is smaller than your sample could detect.
Inconclusive is not a failure. It tells you the change did not move the needle enough to matter at your traffic level, so you can stop iterating in that direction and try a different angle.
If you keep getting inconclusive results, the problem is usually upstream. Your traffic source is wrong, your offer is wrong, or your page has a deeper UX issue.
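If you want to double-check significance from raw counts rather than trust the dashboard, a pooled two-proportion z-test is enough. A minimal sketch using only the standard library (the function name and example counts are mine):

```python
from math import erf, sqrt

def two_proportion_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

p = two_proportion_pvalue(conv_a=90, n_a=3000, conv_b=120, n_b=3000)
print(f"p = {p:.3f}")
```

A p-value below 0.05 only means something if you hit the sample size you committed to in Step 4; peeking early invalidates it.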
Common mistakes that ruin tests
A short list of things I see weekly:
- Running tests on pages with under 100 conversions per month. You will never reach significance. Fix the page directly using best practices instead of testing.
- Testing during a launch or campaign. Spike traffic skews everything. Wait until traffic stabilizes.
- Stopping early because a variant "looks like" it is winning. Conversion rates swing wildly with small samples. A 30% "lift" at 80 visitors per variant is meaningless.
- Testing on mobile and desktop combined. Mobile and desktop convert differently. Segment your results.
- Forgetting about external factors. Holidays, news cycles, and ad changes affect conversion rates. Note them.
A simple framework for picking what to test next
When you finish one test, the next one should follow logically. Use this priority order:
- Headline and value proposition (highest impact on most pages)
- Hero section structure (image, video, layout)
- Offer and CTA (what you are asking for)
- Social proof placement and type
- Page length and order of sections
- Form fields and friction
Work top to bottom. Do not skip ahead because something below looks easier to test.
Stop testing when you cannot afford to test
Here is the unpopular truth: most early-stage SaaS pages should not be A/B testing at all. They should be making large, opinionated changes based on user research, then measuring the before/after. Once you cross 5,000 to 10,000 visitors a month, structured A/B testing starts to pay off.
If you are below that threshold, focus on qualitative feedback. Talk to users. Watch session recordings. Run five-second tests. These give you signal at any traffic level.
If you want a single dashboard that tracks your landing page performance, surfaces UX issues, and tells you when you have enough traffic to test, PagePulse was built for exactly this. Run a free page audit and find out which problems to fix before you start splitting traffic.