A/B Testing – Best Practices

Work in progress!
Please check back later for the comprehensive guide.


Always preview your test

A broken test completely invalidates its results, wasting both time and money.
Many of the issues outlined below can be mitigated by previewing your test before it goes live.

 

Avoid ambiguous or dynamic elements, and pages you plan to modify

Elements that move, update dynamically, or change structure carry a higher risk of breaking tests (for example: dropdowns, carousels, lazy-loaded content, or dynamic lists of recent or top products). Elements that are difficult to identify uniquely, such as a button repeated several times on a page, also carry increased risk. If you choose to test these elements, use preview mode and validate the experiment thoroughly across screen sizes. Even with validation, these tests may stop working if the page structure changes.

Any manual change to a page or its layout can invalidate active tests. After making changes, always verify that the experiment is still functioning as intended. If targeting becomes unreliable, end the test.
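
As a rough illustration, targeting an element by its position in the layout is fragile, while a dedicated, stable attribute keeps pointing at the same element as the page evolves. The sketch below shows the difference; the page structure, selectors, and attribute names are hypothetical and not specific to any particular tool:

    // Illustration only: the page structure and selector names are hypothetical.

    // Fragile: a position-based selector breaks when item order or layout changes,
    // and may silently start matching a different element on other screen sizes.
    const fragileTarget = document.querySelector(
      "div.product-list > div:nth-child(3) > button"
    );

    // More robust: a unique, stable attribute identifies the same element even when
    // the surrounding layout or the number of similar buttons changes.
    const stableTarget = document.querySelector('[data-test-id="add-to-cart-hero"]');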

 

Avoid changing elements that affect more than UI/UX

Elements that convey pricing, legal disclosures, or user consent should not be included in experiments.
Editing form controls (for example, text inputs) is also discouraged due to their functional and behavioral implications.
The system applies best-effort safeguards to restrict experiments on these classes of elements, but ultimate responsibility remains with the experiment owner.

 

Take note of confidence and uplift measurements

Real improvements require substantial traffic and conversions to prove. If you apply a winning variant from a test that ended with low confidence, we recommend continuing to monitor conversion rates afterwards.
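
The sketch below illustrates why sample size matters: it estimates confidence from raw visitor and conversion counts using a standard two-proportion z-test. The statistics your platform reports may be computed differently, and all names and numbers here are hypothetical:

    // Illustration only: a standard two-proportion z-test; the counts below are hypothetical.

    interface VariantStats {
      visitors: number;     // visitors exposed to the variant
      conversions: number;  // visitors who converted
    }

    // Abramowitz-Stegun approximation of the error function (accurate to ~1e-7).
    function erf(x: number): number {
      const sign = x < 0 ? -1 : 1;
      const a = Math.abs(x);
      const t = 1 / (1 + 0.3275911 * a);
      const poly =
        ((((1.061405429 * t - 1.453152027) * t + 1.421413741) * t - 0.284496736) * t +
          0.254829592) * t;
      return sign * (1 - poly * Math.exp(-a * a));
    }

    // Standard normal cumulative distribution function.
    function normalCdf(z: number): number {
      return 0.5 * (1 + erf(z / Math.SQRT2));
    }

    // Relative uplift of B over A and the one-sided confidence that B beats A.
    function compare(a: VariantStats, b: VariantStats) {
      const pA = a.conversions / a.visitors;
      const pB = b.conversions / b.visitors;
      const uplift = (pB - pA) / pA;

      // Pooled standard error for the difference of two proportions.
      const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
      const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
      const confidence = normalCdf((pB - pA) / se);

      return { uplift, confidence };
    }

    // A visible 20% relative uplift measured on small samples:
    console.log(compare({ visitors: 500, conversions: 25 }, { visitors: 500, conversions: 30 }));

With only 25 and 30 conversions, this run shows a 20% relative uplift but only about 76% confidence; the same uplift would need to hold over far more traffic before it could be trusted.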

 

Reach out to support if you have any questions, concerns, or suggestions

A/B testing is a complex process. We’d be more than happy to take feedback or clarify things.
