P-value calculator

Did you know?

A p-value of 0.05 doesn't mean there's a 5% chance your hypothesis is wrong. It means if the null hypothesis were true, there's a 5% chance of seeing data this extreme. The distinction matters: p-values don't tell you the probability that your theory is correct. They tell you how surprising your data is under the assumption that nothing interesting is happening.
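That definition can be computed directly. The sketch below (a minimal example, using only the standard library) finds the two-sided p-value for an observed z-score, i.e. the probability of data at least this extreme under a standard normal null; a z of 1.96 reproduces the p ≈ 0.05 shown by the calculator.

```python
import math

def two_sided_p(z: float) -> float:
    """P(|Z| >= |z|) when Z ~ N(0, 1), via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

# An observed z-score of 1.96 gives p = 0.049996, just under 0.05.
print(round(two_sided_p(1.96), 6))
```

Note what this number is conditioned on: it is a statement about the data given the null hypothesis, not about the hypothesis given the data.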

p = 0.049996
Significant at α = 0.05 (reject null hypothesis). Data this extreme would be unlikely if the null hypothesis were true.

Good to know

P < 0.05 is a convention, not a law. Ronald Fisher suggested 0.05 as a convenient threshold in 1925, and it stuck. But there's nothing magical about 5%. Some fields use 0.01 or 0.001 for stricter standards. P-values of 0.049 and 0.051 are practically identical, yet one is "significant" and the other isn't. The threshold is arbitrary.

Statistical significance ≠ practical importance. With large enough sample sizes, tiny effects become "statistically significant." A drug that lowers blood pressure by 0.1 mmHg might achieve p < 0.001 with 100,000 participants, but that effect is clinically meaningless. Always ask: significant, but by how much?
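The sample-size effect is easy to see with a one-sample z-test. The numbers below are hypothetical (the 0.1 mmHg effect and an assumed standard deviation of 8 mmHg are illustrative, not from any study): the same tiny effect is wildly significant at n = 100,000 and nowhere near significance at n = 100.

```python
import math

def z_test_p(effect: float, sd: float, n: int) -> float:
    """Two-sided p-value for a one-sample z-test of mean = 0."""
    z = effect / (sd / math.sqrt(n))          # standard error shrinks as n grows
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical: a 0.1 mmHg drop with an assumed sd of 8 mmHg.
print(z_test_p(0.1, 8, 100_000))  # far below 0.001
print(z_test_p(0.1, 8, 100))      # far above 0.05
```

The effect size never changed; only n did. The p-value measures detectability, not importance.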

P-hacking is a real problem. Researchers who test many hypotheses and only report the significant ones inflate false positive rates. If you run 20 tests at α = 0.05, you expect one "significant" result by chance. Pre-registration and transparency help combat this.
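The "one significant result by chance" claim follows from simple arithmetic, sketched here for 20 independent tests of true null hypotheses: the expected number of false positives is m × α, and the chance of at least one is 1 − (1 − α)^m, which is far higher than 5%.

```python
alpha, m = 0.05, 20

expected_false_positives = m * alpha          # 1.0 on average
prob_at_least_one = 1 - (1 - alpha) ** m      # ≈ 0.64

print(expected_false_positives)
print(round(prob_at_least_one, 2))
```

So even with no real effects anywhere, a researcher running 20 tests has roughly a 64% chance of finding something to report, which is why selective reporting inflates the literature's false positive rate.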

