How to A/B Test a Landing Page (Step-by-Step for Beginners)
A/B testing is the most reliable way to improve a landing page. Not opinions, not best practices, not what a competitor is doing. You take your current page, change one element, split your traffic between the two versions, and let the data tell you which one converts better.
The concept is simple. The execution is where most beginners go wrong. They test the wrong things, end tests too early, draw conclusions from insufficient data, or make changes that are too small to measure. This guide walks you through how to A/B test a landing page correctly, from forming your first hypothesis to interpreting results you can actually trust.
What Is an A/B Test (and What It Is Not)
An A/B test, also called a split test, is a controlled experiment where you show two versions of a page to different segments of your traffic at the same time. Version A is your current page (the control). Version B is a variation with one specific change (the challenger).
After enough people have seen both versions, you compare conversion rates. If Version B converts significantly better than Version A, you implement the change permanently.
What an A/B test is not:
- It is not a redesign test. Changing your entire page and comparing it to the old version does not tell you what caused the improvement or decline. You cannot learn from it
- It is not a before/after comparison. Launching a new page and comparing this month's data to last month's is not a valid test. Seasonal trends, traffic changes, and external factors make before/after comparisons unreliable
- It is not a gut check. "Version B looks better" is not a test result. Statistical significance is
Step 1: Identify What Is Underperforming
Before you can test, you need to know what to test. This starts with understanding where your landing page is losing people.
Data sources to review:
- Google Analytics (or your analytics tool). Look at bounce rate, time on page, and scroll depth. A high bounce rate suggests a headline or message match problem. Low scroll depth suggests the above-the-fold content is not compelling enough (a lightweight way to measure scroll depth yourself is sketched after this list)
- Heatmaps. Tools like Hotjar or Microsoft Clarity show where visitors click, how far they scroll, and what they ignore. If nobody scrolls past section three, everything below it is invisible
- Form analytics. If your landing page includes a form, check which fields cause drop-off. Many form analytics tools show exactly where people abandon
- Session recordings. Watch 10-20 recordings of visitors who did not convert. Look for patterns: do they hesitate at the same point? Do they scroll back up? Do they leave from the same section?
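If your analytics setup does not report scroll depth out of the box, here is a minimal sketch of tracking it yourself in the browser. The "/collect" endpoint is a placeholder; point the beacon at whatever your own analytics or logging setup accepts.

```ts
// Minimal scroll-depth tracker: records the deepest point a visitor reaches
// and reports it once when they leave the page.
let maxDepthPercent = 0;

window.addEventListener(
  "scroll",
  () => {
    const scrolled = window.scrollY + window.innerHeight;
    const total = document.documentElement.scrollHeight;
    const percent = Math.min(100, Math.round((scrolled / total) * 100));
    if (percent > maxDepthPercent) maxDepthPercent = percent;
  },
  { passive: true }
);

// pagehide fires when the visitor navigates away, including on mobile
window.addEventListener("pagehide", () => {
  // "/collect" is a placeholder endpoint for your own analytics
  navigator.sendBeacon(
    "/collect",
    JSON.stringify({ event: "scroll_depth", depth: maxDepthPercent })
  );
});
```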
If your landing page is not converting at all, start by fixing obvious issues before running tests. A/B testing works best when your page is functional but underperforming, not when it is fundamentally broken.
Step 2: Form a Hypothesis
A hypothesis is not "let's try a different headline." A hypothesis is a specific, testable prediction that connects a change to an expected outcome.
The hypothesis formula:
"If I [change this specific element], then [this specific metric] will improve because [this reason based on data or evidence]."
Examples of good hypotheses:
- "If I change the headline from 'Our Platform' to 'Reduce Employee Onboarding Time by 50%,' then click-through rate on the CTA will increase because the current headline does not communicate a specific benefit"
- "If I reduce the form from 6 fields to 3, then form completion rate will increase because heatmap data shows visitors abandoning at field 4"
- "If I add customer testimonials above the CTA, then conversion rate will increase because exit surveys show trust is the primary objection"
Examples of weak hypotheses:
- "Let's try a green button instead of blue" (no reason connected to data)
- "The page needs to look more modern" (not testable or measurable)
- "Our competitor has a video, so we should add one" (no evidence this will impact your specific audience)
A strong hypothesis ensures you learn something regardless of whether the test wins or loses. If the test fails, you know the element you changed was not the problem, and you can move on to the next hypothesis.
Step 3: Choose the Right Testing Tool
You need software to split traffic between your control and variation and track conversions. Here are your options at different price points.
Free or low-cost tools:
- Google Optimize successor tools. Since Google Optimize was sunset, several free alternatives have emerged. Tools like VWO's free plan and Optimizely's starter tier offer basic A/B testing
- Cloudflare Workers. If you are technical, you can build simple split tests using edge workers (a minimal sketch follows this list). Free for low-traffic sites
- WordPress plugins. Nelio A/B Testing and similar plugins offer basic testing for WordPress sites
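For the edge-worker route, here is a minimal sketch of a Cloudflare Worker split test. It assumes the control lives at / and the variation at /variant-b (adjust the paths to your site), and it uses a cookie so each visitor stays in the same bucket on repeat visits.

```ts
// Minimal Cloudflare Worker split test (module syntax).
// Assumes the control is served at "/" and the variation at "/variant-b".
const COOKIE = "ab_bucket";

export default {
  async fetch(request: Request): Promise<Response> {
    const url = new URL(request.url);

    // Reuse an existing bucket assignment, or flip a coin for new visitors
    const cookies = request.headers.get("Cookie") ?? "";
    const match = cookies.match(/ab_bucket=(control|variant)/);
    const bucket = match ? match[1] : Math.random() < 0.5 ? "control" : "variant";

    // Route the variant bucket to the alternate page
    if (bucket === "variant" && url.pathname === "/") {
      url.pathname = "/variant-b";
    }

    const response = await fetch(new Request(url.toString(), request));

    // Persist the assignment so the visitor never sees both versions
    const withCookie = new Response(response.body, response);
    if (!match) {
      withCookie.headers.append(
        "Set-Cookie",
        `${COOKIE}=${bucket}; Path=/; Max-Age=2592000`
      );
    }
    return withCookie;
  },
};
```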
Mid-range tools (for growing teams):
- VWO (Visual Website Optimizer). Offers a visual editor, heatmaps, and testing in one platform. Starts around $99/month
- Convert. Privacy-focused testing tool with a strong visual editor. Well-suited for teams that need GDPR compliance
- AB Tasty. Good for teams that want personalization alongside A/B testing
Enterprise tools:
- Optimizely. Full-featured experimentation platform. Excellent for large-scale, multi-page testing programs
- Adobe Target. Part of the Adobe Experience Cloud. Best for teams already using the Adobe stack
What to look for in any tool:
- Easy integration with your landing page (JavaScript snippet or CMS plugin)
- Support for tracking custom conversion goals (not just pageviews)
- Statistical significance calculator built in
- Ability to segment results by device, traffic source, or audience
Step 4: Determine Your Sample Size
This is where most beginners make their biggest mistake. They launch a test, see one version leading after 50 visitors, and declare a winner. That is not a test. That is a coin flip.
Why sample size matters:
Statistical significance tells you whether the difference between your two versions is real or just random noise. To reach statistical significance, you need enough conversions (not just visitors) in each variation.
How to calculate the sample size you need:
Use a free sample size calculator (Evan Miller's is the most popular). You will need three inputs:
- Your current conversion rate. If your page converts at 3%, enter 3%
- The minimum detectable effect. This is the smallest improvement you want to be able to detect. For most landing page tests, set this to 10-20% relative improvement (so if your conversion rate is 3%, a 20% improvement would mean detecting a lift to 3.6%)
- Statistical significance level. Use 95% (the industry standard)
Typical sample sizes for landing page tests:
- A page converting at 5% with a 20% minimum detectable effect needs approximately 3,600 visitors per variation (7,200 total)
- A page converting at 2% with a 20% minimum detectable effect needs approximately 9,500 visitors per variation (19,000 total)
- A page converting at 10% with a 20% minimum detectable effect needs approximately 1,600 visitors per variation (3,200 total)
What this means practically: If your landing page gets 500 visitors per week, a test might need to run for 6-14 weeks to reach valid results. This is normal. Running the test for 3 days because you are impatient will give you meaningless data.
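If you would rather see the math than trust a black box, this is a minimal sketch of the two-proportion formula most sample size calculators are built on. It assumes a two-sided test at 95% significance and 80% statistical power; calculators that use different power defaults (or skip the power term entirely) produce smaller numbers, which is why published rules of thumb vary.

```ts
// Approximate visitors needed per variation for a two-proportion test.
// baselineRate: current conversion rate (0.03 means 3%)
// relativeLift: minimum detectable effect as a relative change (0.2 means 20%)
function sampleSizePerVariation(baselineRate: number, relativeLift: number): number {
  const zAlpha = 1.96; // 95% significance, two-sided
  const zBeta = 0.84;  // 80% statistical power
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const variance = p1 * (1 - p1) + p2 * (1 - p2);
  const delta = p2 - p1;
  return Math.ceil(((zAlpha + zBeta) ** 2 * variance) / (delta * delta));
}

// A 3% page that wants to detect a 20% relative lift (3% -> 3.6%)
console.log(sampleSizePerVariation(0.03, 0.2)); // ~13,900 per variation at these assumptions
```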
Step 5: Build Your Variation
Now create the page variation based on your hypothesis. The golden rule: change one element at a time.
High-impact elements to test (in order of typical impact):
- Headline. The single highest-impact element on most landing pages. Test specific vs. vague, benefit-focused vs. feature-focused, or short vs. long
- CTA copy and design. Test "Start My Free Trial" vs. "Get Started Free." Test button color contrast. Test adding a sub-line below the button ("No credit card required")
- Form length. Test removing fields. Test splitting a long form into steps. Test removing the phone number field specifically
- Social proof. Test adding customer testimonials above the fold. Test numerical proof ("11,000+ users") vs. individual testimonials. Test video testimonials vs. text
- Page length. Test a short, focused page against a long-form page with more detail. The winner varies by offer complexity and audience awareness
- Hero image or video. Test a product screenshot vs. a lifestyle photo. Test adding a demo video vs. a static image
- Pricing presentation. Test showing the price vs. hiding it. Test annual vs. monthly pricing as default. Test anchoring with a higher-priced plan
What not to test:
- Button color alone (the impact is almost always too small to measure unless the current button has zero contrast)
- Font changes (affects readability but rarely moves conversion rates in measurable ways)
- Multiple changes at once (you will not know which change caused the result)
Step 6: Launch and Monitor Your Test
Before launching, double-check these items:
Pre-launch checklist:
- Both variations load correctly on desktop and mobile
- Conversion tracking fires correctly on both versions (test this yourself)
- Traffic is being split evenly (your tool should show a near 50/50 split; a quick way to sanity-check a custom splitter is sketched after this checklist)
- The test is not interfering with other tracking (check your analytics to verify)
- You have defined what "winning" means before the test starts (conversion rate improvement, not clicks or time on page, unless that is your specific goal)
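If you built your own splitter (like the Worker sketch earlier), a quick way to check the even-split item before launch is to replay the assignment logic a few thousand times. A minimal sketch, assuming the assignment is a simple coin flip:

```ts
// Replay the bucket assignment and confirm the split lands close to 50/50.
function assignBucket(): "control" | "variant" {
  return Math.random() < 0.5 ? "control" : "variant";
}

const trials = 10_000;
let variantCount = 0;
for (let i = 0; i < trials; i++) {
  if (assignBucket() === "variant") variantCount++;
}

console.log(`Variant share: ${((variantCount / trials) * 100).toFixed(1)}%`);
// Over 10,000 trials, anything far outside roughly 49-51% suggests a biased splitter.
```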
During the test:
- Do not peek at results daily and make decisions based on early data. Set a calendar reminder for when you expect to reach your sample size, and check then
- Monitor for technical issues only. If one variation is broken (500 errors, tracking failures), pause the test and fix it
- Do not change anything on either version while the test is running. Even a small copy edit invalidates the results
Step 7: Analyze Your Results
When your test reaches the required sample size, it is time to analyze.
How to determine a winner:
Your testing tool will show you the conversion rate for each variation and a statistical significance percentage. You want 95% confidence or higher; the calculation behind that number is sketched after the list below.
- 95% or higher confidence that B beats A: Implement the change permanently
- 95% or higher confidence that A beats B: Your hypothesis was wrong. That is valuable learning. Move to the next test
- No significant difference: The element you changed does not meaningfully impact conversions. This is also valuable. Move on
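Your tool reports the confidence figure for you, but if you want to sanity-check it, this is a minimal sketch of the two-proportion z-test that sits behind it. An absolute z-score above roughly 1.96 corresponds to 95% confidence on a two-sided test.

```ts
// Two-proportion z-test: is the difference between A and B bigger than noise?
// visitorsA/B: visitors who saw each version; conversionsA/B: conversions for each.
function zScore(
  visitorsA: number, conversionsA: number,
  visitorsB: number, conversionsB: number
): number {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pB - pA) / standardError;
}

// Example: 5,000 visitors per variation, 150 vs 195 conversions (3.0% vs 3.9%)
const z = zScore(5000, 150, 5000, 195);
console.log(z.toFixed(2), Math.abs(z) > 1.96 ? "significant at 95%" : "not significant");
```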
Segment your results:
The overall result might hide important differences. Check performance by:
- Device type: Version B might win on desktop but lose on mobile
- Traffic source: Paid traffic and organic traffic often respond differently to the same changes
- New vs. returning visitors: Returning visitors already have context and may respond differently to messaging changes
Calculate the business impact:
Do not just look at percentage improvements. Translate the result into revenue or leads.
Example: If Version B increased conversion rate from 3% to 3.6%, and your landing page gets 10,000 visitors per month with an average order value of $50, that is:
- Version A: 300 conversions x $50 = $15,000/month
- Version B: 360 conversions x $50 = $18,000/month
- Impact: $3,000/month additional revenue, or $36,000/year
That context makes it easier to prioritize future tests and justify spending time on optimization.
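As one illustration, the same calculation as a small helper you can reuse when prioritizing tests (the inputs mirror the example above):

```ts
// Translate a conversion-rate lift into monthly and annual revenue impact.
function revenueImpact(
  monthlyVisitors: number,
  baselineRate: number,
  newRate: number,
  averageOrderValue: number
): { monthly: number; annual: number } {
  const extraConversions = monthlyVisitors * (newRate - baselineRate);
  const monthly = Math.round(extraConversions * averageOrderValue);
  return { monthly, annual: monthly * 12 };
}

// 10,000 visitors/month, 3% -> 3.6%, $50 average order value
console.log(revenueImpact(10_000, 0.03, 0.036, 50)); // { monthly: 3000, annual: 36000 }
```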
Step 8: Document and Iterate
Every test, whether it wins or loses, should be documented. Over time, this creates a knowledge base that makes each subsequent test more likely to succeed.
What to record for each test (a simple record structure is sketched after this list):
- The hypothesis
- What was changed (include screenshots)
- Sample size and test duration
- Results (conversion rates, confidence level, segment breakdowns)
- What you learned
- What to test next based on the results
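There is no required format; a spreadsheet works fine. As one illustration, a simple record structure covering the fields above might look like this:

```ts
// One way to structure a test-log entry so every experiment is captured consistently.
interface TestRecord {
  name: string;
  hypothesis: string;
  change: string;               // what was changed, with links to screenshots
  startDate: string;
  endDate: string;
  sampleSizePerVariation: number;
  results: {
    controlConversionRate: number;
    variantConversionRate: number;
    confidence: number;          // e.g. 0.97 for 97%
    segments?: Record<string, string>; // notable device/source differences
  };
  learnings: string;
  nextTest: string;
}
```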
How to build a testing roadmap:
After your first test, use the results to inform your next hypothesis. If the headline test won, the next logical test might be to optimize the subheadline. If the form reduction test won, test removing one more field or adding a progress indicator.
A structured testing program should aim for one to two tests per month on each high-traffic landing page. Over 12 months, that is 12-24 data-informed improvements, which compound into significant conversion gains.
Common A/B Testing Mistakes to Avoid
Ending tests too early. This is mistake number one. A test that has not reached statistical significance has not told you anything. If you implement a "winner" after 200 visitors, you are making decisions based on noise
Testing too many things at once. If you change the headline, CTA, form, and hero image simultaneously, you have no idea which change drove the result. Test one element at a time
Ignoring mobile. If 60% of your traffic is mobile and your variation only looks good on desktop, your test results will be misleading. Always check variations on both devices
Not tracking the right metric. If your landing page goal is lead generation, track form submissions, not button clicks. Button clicks measure interest. Form submissions measure conversion. See our beginner's guide to CRO for more on choosing the right metrics
Running tests without enough traffic. If your landing page gets 200 visitors per month, A/B testing is not the right approach yet. Focus on qualitative research (user testing, surveys, expert reviews) until you have enough traffic for valid tests
Testing trivial changes. If you are debating whether the button should be forest green or emerald green, you are wasting testing capacity. Focus on changes large enough to produce measurable differences: headline rewrites, CTA overhauls, form redesigns, layout restructuring
Setting and forgetting. Some teams launch a test and never check the results. Set a specific date to review, and commit to acting on the findings
How to A/B Test When You Have Low Traffic
Not every landing page gets thousands of visitors per week. If you are working with lower traffic, here is how to still make data-informed improvements:
- Test bigger changes. Small tweaks require enormous sample sizes to detect. Large changes (completely different headlines, different page structures, different offers) produce larger effects that are detectable with smaller samples
- Focus on macro conversions. Track the final conversion (purchase, signup, lead submission), not micro conversions (scroll depth, button hovers). Macro conversions give you the most actionable signal
- Use qualitative methods alongside testing. Run five user tests where you watch real people interact with your page. The patterns you observe can inform high-impact changes that you then validate with an A/B test
- Combine pages. If you have five landing pages that each get 200 visitors per month, consider whether you can run the same test across all five (if the change applies to all of them)
Start Your First Test Today
Here is the fastest path from reading this guide to running your first A/B test:
- Open your analytics and identify your highest-traffic landing page that is underperforming
- Look at the bounce rate and scroll depth data. Identify where visitors are dropping off
- Form a hypothesis about why they are dropping off
- Sign up for a free testing tool
- Create one variation that addresses your hypothesis
- Calculate your required sample size and set a calendar reminder to review results
- Launch the test and resist the urge to peek
A/B testing is not complicated. It is disciplined. The teams that see the biggest gains are not the ones with the fanciest tools. They are the ones that test consistently, learn from every result, and never stop iterating.
If you want to identify what to test first, CROgrader analyzes your landing page against 50+ conversion factors and highlights the elements with the most room for improvement. It is a fast way to generate your first round of test hypotheses.
Get your free landing page analysis with CROgrader