Whether you’re new to Userpilot or an experienced user, you most likely want to assess goal conversions or measure the impact of changes to your Userpilot flows. Userpilot makes this possible with several convenient methods for testing flows against control groups (no flow) and against other flows. These experimentation tools empower your team to make well-informed choices and assess more precisely how flows influence your growth objectives.
Userpilot provides three experiment types:

- Controlled A/B Test, also known as a null-hypothesis test (Userpilot flow vs. nothing)
- Head-to-Head A/B Test (flow vs. another flow)
- Controlled Multivariate Test, which tests two flows across three groups (null, Group A, Group B)
Flows chosen for any experiment type must have their frequency set to Only Once and cannot be set to trigger Only Manually.
Controlled A/B Test

In a Controlled A/B Test, you compare the performance of a single flow (Flow A) against a control group (no flow), aiming to determine whether Flow A is effective in achieving its goal.

Head-to-Head A/B Test

In a Head-to-Head A/B Test, you directly compare two different flows (Flow A and Flow B) without a control group, to determine which of the two yields better results. To ensure a fair comparison, both flows must share the same trigger settings and target the same user segment; you won’t be able to select Flow B if it doesn’t share the same settings as Flow A.

Controlled Multivariate Test

In a Controlled Multivariate Test, you compare the performance of multiple flows (Flow A, Flow B) against a control group (no flow). The key difference here is that these flows can have different trigger settings and target different user segments, allowing you to test more than one variable simultaneously.
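Userpilot is a no-code tool, so all of this is configured in the UI, but the eligibility rules above can be summarized in a short sketch. The TypeScript below is purely illustrative: the Flow and Experiment types, field names, and functions are hypothetical stand-ins, not part of any Userpilot API.

```typescript
// Illustrative model of the three experiment types; all names here are
// hypothetical and not part of any Userpilot API.
interface Flow {
  frequency: "only-once" | "recurring";
  triggersOnlyManually: boolean;
  triggerSettings: string; // simplified stand-in for the real trigger config
  segmentId: string;       // simplified stand-in for the target segment
}

type Experiment =
  | { kind: "controlled-ab"; flowA: Flow }                         // Flow A vs. control (no flow)
  | { kind: "head-to-head"; flowA: Flow; flowB: Flow }             // Flow A vs. Flow B, no control
  | { kind: "controlled-multivariate"; flowA: Flow; flowB: Flow }; // Flow A and Flow B vs. control

// Any flow in an experiment must show Only Once and must not trigger Only Manually.
function isEligible(flow: Flow): boolean {
  return flow.frequency === "only-once" && !flow.triggersOnlyManually;
}

// Head-to-Head additionally requires matching trigger settings and segments;
// Controlled Multivariate deliberately does not.
function isValid(exp: Experiment): boolean {
  switch (exp.kind) {
    case "controlled-ab":
      return isEligible(exp.flowA);
    case "head-to-head":
      return (
        isEligible(exp.flowA) &&
        isEligible(exp.flowB) &&
        exp.flowA.triggerSettings === exp.flowB.triggerSettings &&
        exp.flowA.segmentId === exp.flowB.segmentId
      );
    case "controlled-multivariate":
      return isEligible(exp.flowA) && isEligible(exp.flowB);
  }
}
```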
If you set the experiment to end “Once a result has been determined,” it stops once it has run for two weeks and has at least 200 participants. If fewer than 200 participants have joined after the first two weeks, the experiment continues for up to another two weeks or until the total number of participants reaches 200, whichever comes first.
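To make the stopping rule concrete, here is a minimal sketch of that logic as described above. The names are hypothetical and do not come from Userpilot; this illustrates the rule rather than its actual implementation.

```typescript
// Illustrative sketch of the "Once a result has been determined" end
// condition; hypothetical names, not Userpilot code.
interface ExperimentState {
  startedAt: Date;
  participants: number;
}

const TWO_WEEKS_MS = 14 * 24 * 60 * 60 * 1000;
const MIN_PARTICIPANTS = 200;

function shouldStop(state: ExperimentState, now: Date): boolean {
  const elapsed = now.getTime() - state.startedAt.getTime();

  // Stop after two weeks once at least 200 participants have been collected.
  if (elapsed >= TWO_WEEKS_MS && state.participants >= MIN_PARTICIPANTS) {
    return true;
  }
  // Otherwise extend for up to another two weeks; reaching 200 participants
  // during the extension is caught by the check above on a later evaluation.
  return elapsed >= 2 * TWO_WEEKS_MS;
}
```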
In essence, these A/B testing methods help you fine-tune your user engagement strategies by comparing different flows or variations to see which one performs best in achieving your specific goals. Each method provides a different level of control and insight into user behavior, allowing you to make data-driven decisions to improve user experiences.
You can’t change the settings of a flow that has a running experiment.
Every experiment you create appears in the Experiments tab, where you can click it to see further details. Once an experiment has completed, a summary based on the collected data is shown at the top.