Find the minimum sample size for any survey or research study. Enter your confidence level (90%, 95%, 99%), margin of error, and population size.
Sample size is the number of observations included in a study or survey. A larger sample gives more accurate results but costs more. The minimum required size depends on your desired confidence level, margin of error, and population variability.
The margin of error is how much your survey result may differ from the true population value. A ±5% margin of error at 95% confidence means you can be 95% sure the true answer is within 5 percentage points of your result.
If you repeated your survey 100 times, about 95 of those surveys would produce a result within the margin of error of the true population value. It does not mean there is a 95% chance your specific result is correct.
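This repeated-sampling interpretation is easy to check by simulation. The sketch below (the true proportion of 0.40 and sample size of 400 are illustrative, not values from the text) draws many surveys from a known population and counts how often the standard ±z·√(p̂(1−p̂)/n) interval covers the truth:

```python
import random

# Illustrative values: true proportion 40%, sample size 400, 95% confidence.
random.seed(1)
TRUE_P, N, Z = 0.40, 400, 1.96

trials = 1000
covered = 0
for _ in range(trials):
    # Simulate one survey of N yes/no responses.
    hits = sum(random.random() < TRUE_P for _ in range(N))
    p_hat = hits / N
    # Margin of error for this survey's result.
    moe = Z * (p_hat * (1 - p_hat) / N) ** 0.5
    if abs(p_hat - TRUE_P) <= moe:
        covered += 1

print(f"coverage: {covered / trials:.1%}")  # close to, but not exactly, 95%
```

Run it and the coverage hovers near 95%, but any single survey either captures the true value or it does not.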
When your sample is a significant fraction of the total population, you can use the finite population correction factor: n_adjusted = n₀ / (1 + (n₀-1)/N). A survey of 400 from a population of 500 is very different from 400 out of a million.
If you don't know the expected proportion in advance, use p = 50% (0.5). This gives the most conservative (largest) sample size, because the variance term p(1 − p) is maximized at p = 0.5. If you have prior data suggesting a different proportion, use that value to get a smaller required sample.
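You can see the conservatism directly by sweeping p through the standard formula (95% confidence, ±5% margin; the function name is illustrative):

```python
import math

def required_n(p: float, z: float = 1.96, e: float = 0.05) -> int:
    # Standard formula: n = z^2 * p * (1 - p) / e^2
    return math.ceil(z * z * p * (1 - p) / (e * e))

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"p = {p:.1f} -> n = {required_n(p)}")
```

The requirement peaks at p = 0.5 (385 here) and falls off symmetrically on either side, so p = 0.5 is always a safe default.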
Sample size affects statistical power — the ability to detect a real effect. Larger samples make smaller effects detectable. Statistical significance (p-value) depends on both effect size and sample size. A large sample can make a trivial difference statistically significant.
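A rough way to see this relationship is a normal-approximation power calculation for comparing two proportions. This is a textbook approximation, not this calculator's formula, and the example rates (10% vs. 12%) are illustrative:

```python
import math

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power_two_proportions(p1: float, p2: float, n: float) -> float:
    """Approximate power of a two-sided z-test comparing two proportions,
    with n observations per group (alpha = 0.05, normal approximation)."""
    z_alpha = 1.96
    se = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z = abs(p1 - p2) / se
    return norm_cdf(z - z_alpha)

# Same 2-point effect (10% vs 12%), increasing sample sizes:
for n in (500, 2000, 8000):
    print(f"n per group = {n}: power = {power_two_proportions(0.10, 0.12, n):.2f}")
```

The same effect goes from nearly undetectable to almost certain detection as n grows, which is also why very large samples flag even trivial differences as significant.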
For A/B tests, you need to specify baseline conversion rate, minimum detectable effect (lift), significance level, and statistical power (usually 80%). Sample size calculators for A/B testing use slightly different formulas than this survey calculator.
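As one sketch of what those A/B formulas look like, here is the standard two-proportion sample-size approximation for a two-sided test at alpha = 0.05 with 80% power (z-values 1.96 and 0.84). The function name and the example of a 10% baseline with a 2-point absolute lift are illustrative:

```python
import math

def ab_sample_size(base: float, lift: float,
                   z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Approximate per-group sample size for an A/B test.
    `lift` is the minimum detectable effect in absolute terms
    (e.g. 0.02 = 2 percentage points)."""
    p1, p2 = base, base + lift
    p_bar = (p1 + p2) / 2  # pooled proportion under the null
    num = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / lift ** 2)

# Detecting a 10% -> 12% conversion lift needs a few thousand users per group:
print(ab_sample_size(0.10, 0.02))
```

Note how this takes a baseline rate and a minimum detectable effect as inputs, which the survey formula above does not, so the two calculators are not interchangeable.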