A/B Testing
Split traffic across different form variants to test and optimize performance
What is A/B Testing?
A/B testing (also known as split testing) allows you to create multiple versions of your form and automatically distribute visitors across these variants. This helps you identify which version performs better in terms of completion rates, conversions, or any other metric you're tracking.
Each user will experience only one variant throughout their entire journey - even if they visit multiple times - ensuring consistent testing and accurate results.
Key Features
- Smart traffic distribution: Automatically distributes users across variants based on your specified percentages. Users are consistently assigned to the same variant across sessions.
- Multiple variants: Test 2-5 variations simultaneously to find the optimal form design, flow, or messaging that resonates with your audience.
- Traffic control: Precisely control what percentage of traffic each variant receives. Perfect for gradual rollouts or testing risky changes with limited exposure.
- Built-in validation: Clear visual indicators show unconnected paths and percentage errors, helping you catch configuration mistakes before publishing.
How to Set Up A/B Testing
1. Add A/B Testing Node
In your workflow form builder, click the "+" button on any connection point and select "A/B Testing" from the node menu.
2. Configure Variants
By default, you'll have two variants (A and B) with a 50/50 traffic split. You can:
- Rename each variant to something descriptive (e.g., "Control", "New Design")
- Adjust traffic percentages for each variant
- Add up to 3 more variants (5 total maximum)
- Reorder variants using the up/down arrows
- Remove variants (minimum 2 required)
3. Set Traffic Percentages
Enter the percentage of traffic each variant should receive. The total must equal 100%.
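As a sketch of the rule this step enforces, here is one way a "must total 100%" check and an auto-balance correction could work. The `Variant` shape and the `validateSplit` and `autoBalance` names are illustrative, not Beeform's actual API:

```typescript
interface Variant {
  name: string;
  percent: number;
}

// A variant split is publishable only when the weights sum to exactly 100.
function validateSplit(variants: Variant[]): boolean {
  const total = variants.reduce((sum, v) => sum + v.percent, 0);
  return total === 100;
}

// One possible auto-balance: scale each weight proportionally (assumes a
// nonzero total), then give any rounding remainder to the last variant so
// the result sums to exactly 100.
function autoBalance(variants: Variant[]): Variant[] {
  const total = variants.reduce((sum, v) => sum + v.percent, 0);
  const scaled = variants.map((v) => ({
    ...v,
    percent: Math.floor((v.percent / total) * 100),
  }));
  const remainder = 100 - scaled.reduce((sum, v) => sum + v.percent, 0);
  scaled[scaled.length - 1].percent += remainder;
  return scaled;
}
```

For example, a 60/50 split (total 110) would be rebalanced to roughly 54/46.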
4. Connect Flow Paths
Each variant needs a connected flow path. Click the "+" button on the right side of each variant to add the next step in that variant's journey. You can create completely different flows for each variant.
5. Enable Smart Routing
Smart Routing is enabled by default and ensures users are randomly assigned to variants based on your specified percentages. Users will always see the same variant on return visits.
Traffic Distribution Examples
Here are some common traffic distribution strategies:
- 50/50: an even two-way split, and the fastest route to statistical significance.
- 33/33/34: a three-way split for comparing multiple designs at once.
- 80/20: a gradual rollout that limits exposure to a riskier new variant.
Common Use Cases
Compare a short form (fewer questions) against a detailed form (more comprehensive). Track which approach yields better completion rates.
Example: Short form (5 questions) vs. Detailed form (12 questions)
Test different sequences of questions. Should you ask for contact info first or last? Does starting with engaging questions improve completion?
Example: Email first vs. Email last
Compare different themes, layouts, or visual elements. Test button colors, progress indicators, or page transitions.
Example: Minimalist design vs. Rich visual design
Test different headlines, descriptions, or call-to-action text. Small wording changes can have a significant impact on conversions.
Example: "Sign Up" vs. "Get Started Free"
Before rolling out major changes, test them with a portion of your audience. Use an 80/20 split to minimize risk while gathering real-world data.
Example: Current version (80%) vs. Version with new AI suggestions (20%)
Best Practices
Test One Variable at a Time
Change only one element between variants to clearly understand what drives performance differences. If you change multiple things, you won't know which change made the impact.
Ensure Sufficient Traffic
A/B tests need adequate sample sizes to be statistically significant. As a rule of thumb, aim for at least 100-200 completions per variant before drawing conclusions.
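The 100-200 rule of thumb can be sanity-checked with a standard power-analysis approximation: n ≈ 16 · p(1 − p) / δ² completions per variant, for roughly 5% significance and 80% power. This is general statistics, not a Beeform feature, and the function name is illustrative:

```typescript
// Rough per-variant sample size needed to detect an absolute lift of `delta`
// over a baseline completion rate `p` (e.g. p = 0.4 means 40% complete).
// Uses the common shorthand n ≈ 16 * p * (1 - p) / delta^2, which assumes
// ~5% significance and ~80% power.
function sampleSizePerVariant(p: number, delta: number): number {
  return Math.round((16 * p * (1 - p)) / (delta * delta));
}
```

With a 40% baseline completion rate, detecting a 10-point lift needs about 384 completions per variant, while a 15-point lift needs about 171, which is why 100-200 is only enough for fairly large effects.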
Run Tests Long Enough
Don't stop tests too early. Run tests for at least 1-2 weeks to account for daily/weekly variations in traffic patterns and user behavior.
Use Descriptive Variant Names
Name your variants clearly (e.g., "Short Form", "Long Form") instead of generic names like "Variant A" and "Variant B". This makes analytics much easier to understand.
Monitor All Metrics
Don't just focus on completion rate. Also track time to complete, drop-off points, and quality of submissions to get the full picture.
Start with 50/50 Splits
Unless you have a specific reason for uneven distribution, start with equal traffic splits. This gives you the fastest path to statistical significance.
Document Your Hypothesis
Before launching a test, write down what you expect to happen and why. This helps you learn from both winning and losing variants.
Smart Routing Explained
Smart Routing is Beeform's intelligent traffic distribution system. When enabled, it ensures:
- Consistent User Experience: Each user is assigned to one variant and will always see that same variant, even on return visits.
- Accurate Distribution: Traffic is distributed according to your specified percentages across all users.
- Random Assignment: Users are randomly assigned to variants to eliminate selection bias and ensure fair testing.
- Session Persistence: The variant assignment is tracked across sessions, so returning users continue their journey without disruption.
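Beeform's internals aren't documented here, but the behavior described above can be sketched with a standard technique: hash a stable user ID into a bucket from 0-99, then walk the cumulative variant percentages. Because the same ID always maps to the same bucket, assignments stay sticky across sessions. All names below are hypothetical:

```typescript
interface Variant {
  name: string;
  percent: number;
}

// Hash a stable user ID into [0, 100) using an FNV-1a style hash.
// Deterministic: the same ID always yields the same bucket.
function hashToBucket(userId: string): number {
  let h = 2166136261;
  for (let i = 0; i < userId.length; i++) {
    h ^= userId.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return (h >>> 0) % 100;
}

// Walk the cumulative percentages: a 50/30/20 split assigns buckets
// 0-49 to the first variant, 50-79 to the second, 80-99 to the third.
function assignVariant(userId: string, variants: Variant[]): string {
  const bucket = hashToBucket(userId);
  let cumulative = 0;
  for (const v of variants) {
    cumulative += v.percent;
    if (bucket < cumulative) return v.name;
  }
  return variants[variants.length - 1].name; // guard against rounding gaps
}
```

Nothing needs to be stored per user: re-deriving the bucket from the ID reproduces the same assignment on every visit.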
Analyzing Test Results
After your A/B test has run for sufficient time and gathered enough data:
1. Check Your Analytics Dashboard
View completion rates, average time to complete, and drop-off rates for each variant. Look for differences that are both statistically significant and practically meaningful; a relative improvement of at least 10-20% is a common practical threshold.
2. Compare Key Metrics
Focus on metrics that align with your goals:
- Completion rate (% who finish the form)
- Time to complete (faster isn't always better)
- Drop-off points (where users abandon)
- Submission quality (are responses complete and useful?)
- Conversion rate (if tracking specific outcomes)
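If you want to check significance yourself from raw counts, a two-proportion z-test is one standard option. This is general statistics rather than a Beeform feature, and the names are illustrative:

```typescript
// Two-proportion z-test on completion counts: returns the z statistic for
// the difference between variant A's and variant B's completion rates.
function twoProportionZ(
  completionsA: number, visitorsA: number,
  completionsB: number, visitorsB: number,
): number {
  const pA = completionsA / visitorsA;
  const pB = completionsB / visitorsB;
  // Pooled rate under the null hypothesis that both variants convert equally.
  const pooled = (completionsA + completionsB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  return (pA - pB) / se;
}
```

A |z| greater than 1.96 corresponds roughly to p < 0.05 (two-sided). For example, 120/400 vs. 160/400 completions is significant at that level, while 100/400 vs. 105/400 is not.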
3. Declare a Winner
Once you have clear results, implement the winning variant for 100% of your traffic. If results are inconclusive, consider running the test longer or redesigning your variants.
4. Iterate and Improve
Use learnings from this test to inform your next optimization. Continuous testing and improvement are the key to maximizing form performance over time.
Frequently Asked Questions
Can I change traffic percentages after a test has started?
Yes, but it's not recommended. Changing percentages mid-test can skew your results and make it harder to draw accurate conclusions. If you must adjust, consider starting a fresh test.
What happens if my percentages don't add up to 100%?
You'll see an error message and won't be able to publish until you fix it. Use the "Auto-balance" button for a quick correction, or adjust the percentages manually.
How long should I run a test?
A minimum of 1-2 weeks to account for weekly patterns, and until you have at least 100-200 completions per variant. High-traffic forms may reach significance faster, while low-traffic forms may need several weeks.
Can I test more than two variants?
Yes, you can test up to 5 variants simultaneously. However, more variants mean you need more traffic to reach statistical significance. For most tests, 2-3 variants is optimal.
Will users see different variants on different devices or sessions?
No. Smart Routing ensures users see the same variant regardless of device, browser, or session. The assignment is tied to the user's identity for consistency.
Can I remove an A/B test after publishing?
Yes, you can remove the A/B testing node and replace it with a single path. However, this ends the test, and you'll need to implement one of the variants for all users.