A p‑value can guide a decision or mislead it. Learn to read the number after the decimal point the way seasoned analysts do.
A p‑value measures the probability of observing data at least as extreme as yours, assuming ______.
effect size equals MDE
the null hypothesis is true
samples are biased
the alternative is true
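A quick illustration of the answer above, as a minimal Python sketch (NumPy/SciPy, made-up simulated data): the p-value is computed as if the null hypothesis of no difference were true, so it tells you how surprising your test statistic would be in that world.

```python
import numpy as np
from scipy import stats

# Simulated example where the null really is true: both groups share the same mean.
rng = np.random.default_rng(42)
control = rng.normal(loc=10.0, scale=2.0, size=500)
variant = rng.normal(loc=10.0, scale=2.0, size=500)

# The p-value answers: "If there were truly no difference, how often would a test
# statistic at least this extreme show up by chance alone?"
t_stat, p_value = stats.ttest_ind(variant, control, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")  # typically a large p, since the null holds here
```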
Interpreting a p‑value of 0.8 as “an 80% chance that the null is true” is an example of ______.
Simpson’s paradox
publication bias
the inverse probability fallacy
p‑hacking
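To see why a p‑value of 0.8 is not "an 80% chance the null is true," here is a minimal simulation sketch (Python, with an assumed 50/50 mix of true nulls and real effects): the p‑value conditions on the null, not the other way around.

```python
import numpy as np
from scipy import stats

# Assumed setup: half of all experiments test a true null, half a real effect of 0.5 SD.
rng = np.random.default_rng(0)
n_exp, n_obs, effect = 20_000, 100, 0.5
null_true = rng.random(n_exp) < 0.5
true_means = np.where(null_true, 0.0, effect)

# One-sample t-test of each experiment's data against a mean of 0.
data = rng.normal(loc=true_means[:, None], scale=1.0, size=(n_exp, n_obs))
p_vals = stats.ttest_1samp(data, 0.0, axis=1).pvalue

# Among experiments that landed near p = 0.8, how often is the null actually true?
band = (p_vals > 0.75) & (p_vals < 0.85)
print(f"P(null true | p near 0.8) = {null_true[band].mean():.2f}")  # close to 1.0 here, not 0.80
```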
Lower p‑values can result from large samples even with tiny effects because ______ shrinks.
statistical power
baseline rate
standard error
alpha
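A short sketch of that mechanism (Python, with a made-up effect of 0.02 and a standard deviation of 1): the standard error falls as 1/sqrt(n), so the same tiny effect produces an ever-larger test statistic and an ever-smaller p‑value.

```python
import numpy as np
from scipy import stats

effect, sd = 0.02, 1.0              # a fixed, tiny true effect (illustrative numbers)
for n in (1_000, 100_000, 10_000_000):
    se = sd / np.sqrt(n)            # standard error shrinks as the sample grows
    z = effect / se                 # so the same effect yields a larger z statistic
    p = 2 * stats.norm.sf(abs(z))   # two-sided p-value
    print(f"n={n:>10,}  SE={se:.5f}  p={p:.4f}")
```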
A 95% confidence interval that excludes zero will always correspond to p < ______.
0.50
0.10
0.05
0.95
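The duality behind that question can be checked directly. A minimal sketch (Python, simulated data, one-sample two-sided t-test): building the 95% interval and the p‑value from the same standard error and t distribution makes "CI excludes zero" and "p < 0.05" agree by construction.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=0.3, scale=1.0, size=50)    # made-up sample; the tested null is mean = 0

n = len(x)
mean, se = x.mean(), x.std(ddof=1) / np.sqrt(n)
t_crit = stats.t.ppf(0.975, df=n - 1)
ci_low, ci_high = mean - t_crit * se, mean + t_crit * se   # 95% CI for the mean

t_stat = mean / se
p = 2 * stats.t.sf(abs(t_stat), df=n - 1)                  # matched two-sided p-value

print(f"95% CI = ({ci_low:.3f}, {ci_high:.3f}), p = {p:.4f}")
print(f"CI excludes 0: {ci_low > 0 or ci_high < 0}, p < 0.05: {p < 0.05}")  # always agree
```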
The p‑value alone cannot tell you ______.
null model assumptions
significance under alpha
whether the test was two‑tailed
effect size magnitude
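One way to see that a p‑value carries no effect-size information, as a small sketch (Python, sd of 1 assumed): hold the p‑value fixed at 0.05 and the implied effect can be large or vanishingly small depending on the sample size.

```python
import numpy as np
from scipy import stats

z_crit = stats.norm.ppf(0.975)   # the |z| that gives a two-sided p of exactly 0.05
for n in (25, 2_500, 250_000):
    effect_at_p05 = z_crit * 1.0 / np.sqrt(n)   # effect = z * SE, with sd = 1
    print(f"n={n:>7,}: p = 0.05 corresponds to an effect of {effect_at_p05:.4f}")
```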
P‑hacking usually biases published p‑values toward ______.
larger sample sizes
higher power
Bayesian priors
false significance
Using p‑values without correcting for multiple looks mainly inflates ______.
beta
family‑wise error rate
statistical power
effect size
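A minimal simulation sketch of that inflation (Python, made-up A/A-style data with the null true, ten uncorrected interim looks per experiment): the chance of at least one false "significant" result climbs well above the nominal 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_exp, n_per_look, n_looks, alpha = 2_000, 100, 10, 0.05

false_positives = 0
for _ in range(n_exp):
    data = rng.normal(0.0, 1.0, size=n_per_look * n_looks)   # true mean is 0 (null holds)
    for look in range(1, n_looks + 1):
        p = stats.ttest_1samp(data[: look * n_per_look], 0.0).pvalue
        if p < alpha:            # declare significance at any interim look, no correction
            false_positives += 1
            break

print(f"Nominal alpha per look: {alpha:.2f}")
print(f"Family-wise error rate with {n_looks} uncorrected looks: {false_positives / n_exp:.2f}")
```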
Bayesian posterior probabilities differ from p‑values because they incorporate ______.
traffic splits
CUPED covariates
prior beliefs
alpha adjustments
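A minimal Beta-Binomial sketch of the contrast (Python with SciPy 1.7+ for binomtest; the counts and the Beta(30, 270) prior are assumptions for illustration): the p‑value conditions on a fixed baseline, while the posterior combines the data with a prior belief about the rate.

```python
from scipy import stats

# Hypothetical conversion test: 120 conversions out of 1,000 visitors,
# judged against a believed baseline rate of 10%.
conversions, visitors, baseline = 120, 1_000, 0.10

# Frequentist p-value: how extreme the data are IF the baseline rate is the truth.
p_value = stats.binomtest(conversions, visitors, p=baseline).pvalue

# Bayesian posterior: an assumed Beta(30, 270) prior (centered near 10%) updated by the data.
prior_a, prior_b = 30, 270
posterior = stats.beta(prior_a + conversions, prior_b + visitors - conversions)
prob_above_baseline = posterior.sf(baseline)   # P(true rate > 10% | data, prior)

print(f"p-value: {p_value:.4f}")
print(f"Posterior P(rate > {baseline:.0%}): {prob_above_baseline:.3f}")
```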
The term ‘p‑value hacking’ is most closely related to ______.
stratified sampling
power analysis
sequential correction
optional stopping and selective measures
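The "selective measures" half of that answer, as a small simulation sketch (Python, twelve independent made-up metrics with no real effect anywhere): reporting only the best-looking metric inflates the chance of a spurious win far beyond 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_exp, n_metrics, n_obs = 2_000, 12, 200

wins = 0
for _ in range(n_exp):
    a = rng.normal(0.0, 1.0, size=(n_metrics, n_obs))   # group A, null true for every metric
    b = rng.normal(0.0, 1.0, size=(n_metrics, n_obs))   # group B, identical distribution
    p_vals = stats.ttest_ind(a, b, axis=1).pvalue       # one test per metric
    if p_vals.min() < 0.05:                             # cherry-pick the 'winning' metric
        wins += 1

print(f"Chance of at least one p < 0.05 across {n_metrics} null metrics: {wins / n_exp:.2f}")
# Roughly 1 - 0.95**12, i.e. about 0.46 rather than 0.05.
```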
Reporting both the p‑value and the effect size addresses the critique of ______.
heteroscedasticity
model multicollinearity
practical insignificance
overdispersion
Starter
Dive deeper into the fundamentals to boost your confidence.
Solid
You’ve got the core ideas—polish a few nuances for mastery.
Expert!
Outstanding—your stats savvy rivals that of a data scientist.