Congratulations! You’ve now learned how to use simulation to investigate the trade-offs for an A/B test sample-size calculation. As a recap, this lesson covered the following:

- The significance threshold for a test is equal to the false positive rate
- The power of a test is the probability of correctly detecting a significant result
- Increasing sample size increases the power of a test
- Increasing the significance threshold increases power, but also increases the false positive rate
- Larger sample sizes are needed to detect smaller effect sizes
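The recap above can be sketched in code. This is a minimal illustration of the kind of power simulation the lesson describes, not the lesson's exact code: the names `control_rate`, `lift`, and `sample_size` mirror the lesson, but the choice of a pooled two-proportion z-test and the relative-lift convention are assumptions.

```python
import math
import numpy as np

def simulate_power(control_rate, lift, sample_size, alpha=0.05,
                   n_sims=2000, seed=42):
    """Estimate power by simulating many A/B tests and counting
    how often the result is significant at the given threshold."""
    rng = np.random.default_rng(seed)
    test_rate = control_rate * (1 + lift)  # lift treated as relative
    n_significant = 0
    for _ in range(n_sims):
        # Simulate conversion counts for each group
        control_conv = rng.binomial(sample_size, control_rate)
        test_conv = rng.binomial(sample_size, test_rate)
        # Pooled two-proportion z-test
        p_pool = (control_conv + test_conv) / (2 * sample_size)
        se = math.sqrt(p_pool * (1 - p_pool) * 2 / sample_size)
        z = (test_conv - control_conv) / sample_size / se
        p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
        n_significant += p_value < alpha
    return n_significant / n_sims

# Larger samples detect the same true lift more often:
low = simulate_power(0.5, 0.3, sample_size=50)
high = simulate_power(0.5, 0.3, sample_size=170)
print(low, high)
```

Running this shows the third bullet directly: holding the baseline rate, lift, and significance threshold fixed, the proportion of significant results rises as `sample_size` grows.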

Two notes about the terminology in the sample size calculator:

- **Baseline conversion rate** is equivalent to our `control_rate` in the code.
- **Minimum detectable effect (MDE)** is the smallest effect size (or `lift`) that we want our test to be able to detect. If the MDE is *larger* than our true `lift`, power will decrease because our sample size might not be large enough to detect the difference between the two groups.
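The calculator is treated as a black box in this lesson, but a common closed-form approximation for the two-proportion case looks like the following. This is a sketch of how such calculators typically work, assuming the MDE is relative to the baseline and a two-sided test; the lesson's calculator may differ slightly in method or rounding.

```python
import math
from statistics import NormalDist

def required_sample_size(baseline, mde, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-proportion
    test (an assumption about the calculator's method)."""
    p1 = baseline
    p2 = baseline * (1 + mde)  # MDE treated as relative lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided threshold
    z_power = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_power) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

print(required_sample_size(0.5, 0.3))  # baseline 50%, MDE 30%
print(required_sample_size(0.5, 0.4))  # a larger MDE needs fewer samples
```

Note how the MDE enters the denominator through `(p2 - p1) ** 2`: halving the detectable effect roughly quadruples the required sample size, which is the last bullet of the recap in formula form.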

### Instructions

**1.**

As a final exercise, we’ve provided a sample size calculator for an A/B test, along with the simulation code from the previous exercises. The calculator estimates the sample size needed to achieve 80% power. Plug in the following values to the sample size calculator:

- Baseline rate: 50%
- Minimum detectable effect: 30%
- Significance threshold: 5%

Then, set the sample size for the simulation code equal to the sample size indicated by the calculator. Press “Run” and inspect the proportion of tests that were significant. The proportion should be close to 0.80!

**2.**

Let’s now examine how MDE impacts the power of our test. Change the MDE in the calculator to 40% so that you have:

- Baseline rate: 50%
- Minimum detectable effect: 40%
- Significance threshold: 5%

Update the `sample_size` in our simulator to match the new sample size given by the calculator. Press “Run” and inspect the proportion of tests that were significant. Now that our MDE is *larger* than our actual effect, what happens to our power?
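You can reproduce the effect behind this exercise by combining the two pieces yourself: size the test for a 40% MDE, then simulate data whose true lift is only 30%. The sketch below makes the same assumptions as before (normal-approximation sizing, a pooled two-proportion z-test, and relative lifts), so the exact numbers may differ from the lesson's calculator, but the direction of the result will not.

```python
import math
from statistics import NormalDist
import numpy as np

def required_sample_size(baseline, mde, alpha=0.05, power=0.80):
    # Normal-approximation formula (an assumption about the calculator)
    p1, p2 = baseline, baseline * (1 + mde)
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return math.ceil(z ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
                     / (p2 - p1) ** 2)

def simulate_power(control_rate, lift, sample_size, alpha=0.05,
                   n_sims=2000, seed=42):
    rng = np.random.default_rng(seed)
    test_rate = control_rate * (1 + lift)
    hits = 0
    for _ in range(n_sims):
        c = rng.binomial(sample_size, control_rate)
        t = rng.binomial(sample_size, test_rate)
        p_pool = (c + t) / (2 * sample_size)
        se = math.sqrt(p_pool * (1 - p_pool) * 2 / sample_size)
        z = (t - c) / sample_size / se
        hits += math.erfc(abs(z) / math.sqrt(2)) < alpha  # two-sided test
    return hits / n_sims

# Size the test for a 40% MDE, but simulate a true lift of only 30%:
n = required_sample_size(0.5, 0.4)
power = simulate_power(0.5, 0.3, n)
print(n, power)  # power lands well below the 0.80 target
```

Because the sample size was chosen for a larger effect than the one that actually exists, the test is underpowered: the observed proportion of significant results falls well short of 0.80.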