Type 2 Error
Type 2 errors, also called false negatives, occur when a test fails to detect a real difference between the two possibilities you are considering. They are often caused by an underpowered test, such as when an experiment's sample size is too small to reliably determine whether a theory is true or false.
Type 1 vs. Type 2 Errors
Type I errors occur when you falsely conclude that a hypothesis is true when in fact it is not. When this happens, you are incorrectly concluding that a relationship exists when it does not. The probability of making this kind of false positive is called the significance level, or alpha. It is conventionally set at 0.05 (5%), and a result is declared significant when its p-value, the figure reported in the literature, falls below that threshold.
Type 2 errors are missed detections: the researcher fails to reject a null hypothesis that is actually false. Statistically speaking, this means you're incorrectly concluding that a relationship doesn't exist when in fact it does. You commit a type 2 error when you don't believe something that is in fact true.
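The two error types can be summarized as a small decision table. A minimal sketch (the function name and labels below are illustrative, not a standard API):

```python
def error_type(null_is_true, rejected_null):
    """Classify the outcome of a hypothesis test.

    Type I  = false positive: rejecting a null hypothesis that is true.
    Type II = false negative: failing to reject a null that is false.
    """
    if null_is_true and rejected_null:
        return "Type I error (false positive)"
    if not null_is_true and not rejected_null:
        return "Type II error (false negative)"
    return "correct decision"

# Example: the variant really does convert better (null is false),
# but the test failed to reject the null -> Type II error.
print(error_type(null_is_true=False, rejected_null=False))
```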
What Causes Type 2 Errors?
Statistical power is the probability of a test detecting a difference in conversion rate between two or more variations. It depends on the sample size used to conduct the test, as well as the magnitude of the difference you are looking to detect.
The smaller the difference you want to detect, the larger your sample size needs to be. Marketers should always use ample sample sizes when testing for small differences in conversion rates. This way they have a better chance of detecting true positives even when the gap in conversion rates between their test group and control group is modest.
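This relationship can be sketched with the textbook sample-size approximation for comparing two proportions (assuming a two-sided test at alpha = 0.05 with 80% power; the z-values 1.96 and 0.84 are the standard normal quantiles for those conventional settings, and the specific conversion rates below are made-up examples):

```python
import math

def sample_size_per_variant(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate users needed per variant to detect a change in
    conversion rate from p1 to p2 (two-sided alpha=0.05, power=0.80)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a small lift (5% -> 6%) demands far more traffic
# than detecting a large one (5% -> 10%):
print(sample_size_per_variant(0.05, 0.06))
print(sample_size_per_variant(0.05, 0.10))
```

In this sketch the required sample size grows with the inverse square of the difference, which is why halving the effect you want to detect roughly quadruples the traffic you need.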
In A/B testing, there is a trade-off between speed and accuracy. Running a test for a longer period of time increases its sample size and reduces the probability of making a type 2 error.
The Importance Of Looking Out For Type 2 Errors
One reason to guard against type 2 errors is to make sure that you are not missing opportunities to improve your conversion rate. Without a sufficiently large sample size, your experiment may take a long time to detect real effects of your alternative hypothesis, or miss them entirely.
Avoiding Type 2 Errors
While they are impossible to avoid completely, type 2 errors can be reduced by increasing your sample size. By collecting more data, your experiment will produce fewer false negatives. This will help you avoid concluding that a change has no effect when it actually does.
Another way to prevent type 2 errors is to make large, bold changes to your web pages and apps during experiments. The larger the effect of a change, the smaller the sample size needed and the easier it will be to detect. A 25% increase in conversion rate is much easier to notice than a 0.001% increase.
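A quick Monte Carlo simulation illustrates the point (the 2%, 2.5%, and 4% conversion rates, the 500-user sample, and the simple pooled z-test below are illustrative assumptions, not figures from any real experiment):

```python
import random

def detect_rate(p_control, p_variant, n=500, trials=2000, seed=42):
    """Fraction of simulated A/B tests that detect the lift using a
    simple two-proportion z-test (reject when z > 1.96).
    1 - detect_rate is the type 2 error rate at this sample size."""
    rng = random.Random(seed)
    detected = 0
    for _ in range(trials):
        a = sum(rng.random() < p_control for _ in range(n))  # control conversions
        b = sum(rng.random() < p_variant for _ in range(n))  # variant conversions
        p_pool = (a + b) / (2 * n)
        se = (2 * p_pool * (1 - p_pool) / n) ** 0.5
        if se > 0 and (b - a) / n / se > 1.96:
            detected += 1
    return detected / trials

# With only 500 users per variant, a bold 2% -> 4% lift is detected
# far more often than a timid 2% -> 2.5% lift:
print(detect_rate(0.02, 0.04))
print(detect_rate(0.02, 0.025))
```

The bigger effect is caught in a large share of the simulated tests, while the small effect is missed most of the time: the same traffic, very different type 2 error rates.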
Looking To Upgrade Your Current Stack?
The #1 platform for delivering high-quality software releases, instantly.
All-In-One Product Growth
• Visual, Code-free A/B testing on web and mobile
• Both Client Side and Server Side Options
• Flexible API and SDK-free deployments
• Connected messaging features
Fastest & Most Reliable Feature Management System
• Edge deployment for sub-50ms response times
• Enterprise-grade performance SLA
• 99.9% uptime guarantee
Personalization Across All Your Users
• Personalize every experiment and experience
• No audience reach limits
• No domain or sub-domain limits
• No user seat limits
Real-Time Slack Support
• Best-in-class service
• Responsive support and customer success team
• Training and onboarding
• Taplytics Growth Framework assessment
Full Suite of Seamless Integrations
• mParticle
• Segment
• Mixpanel
• Amplitude
• Google Analytics
• Adobe Analytics and more
Protect Customer Privacy
• Balance personalization & experimentation with customer data privacy
• GDPR
• EU Privacy Shield
• HIPAA compliant