Build models that are accurate and fair across groups. Compare common bias metrics and practical mitigation techniques for real deployments.
Demographic parity requires ______ across groups.
equal false‑negative rates only
equal (or nearly equal) positive prediction rates
identical ROC curves
the same feature distributions
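For reference, demographic parity compares how often each group receives a positive prediction. A minimal NumPy sketch with made-up y_pred and group arrays shows one way to compute the per-group rates and the gap between them:

import numpy as np

# Made-up predictions and group labels, purely for illustration.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Positive prediction rate per group: P(y_hat = 1 | group = g).
rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
print(rates)                                              # per-group positive rates
print("demographic parity gap:", max(rates.values()) - min(rates.values()))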
Equalized odds focuses on matching ______ between groups.
feature importances
prevalence of the label
AUC scores
true‑positive and false‑positive rates
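Equalized odds instead conditions on the true label: true-positive and false-positive rates should match across groups. A small sketch with illustrative arrays:

import numpy as np

def tpr_fpr(y_true, y_pred):
    # True-positive and false-positive rates for one group.
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / (tp + fn), fp / (fp + tn)

# Made-up labels, predictions, and group membership.
y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    tpr, fpr = tpr_fpr(y_true[mask], y_pred[mask])
    print(f"group {g}: TPR={tpr:.2f}, FPR={fpr:.2f}")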
A simple pre‑processing mitigation for biased or imbalanced training data is ______.
reweighing or stratified sampling to rebalance groups
duplicating majority examples
dropping the label column
encrypting all features
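One common reweighing scheme (in the spirit of Kamiran and Calders) gives each (group, label) cell the weight P(group)·P(label) / P(group, label), so under-represented combinations count more during training. A sketch with made-up arrays; the resulting weights can be passed as sample_weight to most scikit-learn estimators:

import numpy as np

# Made-up group membership and labels, purely for illustration.
group = np.array(["A", "A", "A", "B", "B", "B", "B", "B"])
y     = np.array([1, 0, 0, 1, 1, 1, 0, 1])

weights = np.empty(len(y))
for g in np.unique(group):
    for label in np.unique(y):
        cell = (group == g) & (y == label)
        if cell.any():
            # Expected vs. observed joint frequency: P(g) * P(y) / P(g, y).
            expected = (group == g).mean() * (y == label).mean()
            weights[cell] = expected / cell.mean()

print(weights)  # pass as sample_weight when fitting the downstream model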
Post‑processing methods like threshold optimization work by ______.
removing protected attributes from logs
setting group‑specific cutoffs to satisfy a chosen fairness metric
retraining the feature extractor only
randomizing labels during training
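Threshold optimization leaves the trained scorer alone and only changes the decision rule. The sketch below applies hypothetical per-group cutoffs; in practice the cutoffs come from a search on held-out data against the chosen fairness metric (libraries such as Fairlearn ship implementations of this idea):

import numpy as np

# Made-up scores, group labels, and hypothetical per-group cutoffs.
scores = np.array([0.9, 0.4, 0.7, 0.2, 0.6, 0.3, 0.8, 0.5])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
thresholds = {"A": 0.7, "B": 0.5}   # placeholder values, not tuned cutoffs

cutoffs = np.array([thresholds[g] for g in group])
y_pred = (scores >= cutoffs).astype(int)
print(y_pred)   # group-specific thresholds applied element-wise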
Counterfactual fairness asks whether a prediction would stay the same if ______.
the sample size doubled
a protected attribute changed while everything else was held constant
features were sorted alphabetically
the model weights were randomized
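A crude approximation of this check is to flip the protected attribute column and see how many predictions change. This is only a sketch: true counterfactual fairness reasons over a causal model, not a naive column flip. All data below are synthetic:

import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic data with the protected attribute as the first column.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
X[:, 0] = (X[:, 0] > 0).astype(float)          # binary protected attribute
y = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)  # label leaks the attribute

model = LogisticRegression().fit(X, y)

X_flipped = X.copy()
X_flipped[:, 0] = 1 - X_flipped[:, 0]          # counterfactual attribute value

changed = (model.predict(X) != model.predict(X_flipped)).mean()
print(f"predictions that change when the attribute flips: {changed:.1%}")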
Simply dropping a protected attribute can fail because ______.
labels become undefined
privacy laws always require the attribute
models cannot train without it
other features may act as proxies that encode the same information
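A quick screen for proxies is to ask how well each remaining feature predicts the protected attribute, for example via correlation (training a model to predict the attribute from the remaining features is a stronger test). Synthetic data for illustration:

import numpy as np

# Synthetic example: one feature is a proxy that tracks the protected
# attribute even after the attribute itself is dropped.
rng = np.random.default_rng(1)
protected = rng.integers(0, 2, size=500)
features = {
    "proxy_feature": protected + rng.normal(scale=0.3, size=500),  # e.g. zip-code-like signal
    "unrelated_feature": rng.normal(size=500),
}

for name, col in features.items():
    corr = np.corrcoef(col, protected)[0, 1]
    print(f"{name}: correlation with protected attribute = {corr:.2f}")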
To evaluate fairness reliably, teams should first ______.
use a single random split
define relevant groups and slice metrics by those segments
optimize only overall accuracy
remove all categorical features
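Slicing simply means reporting the same metrics per group rather than one global number. A minimal sketch with made-up labels, predictions, and a group column:

import numpy as np
from sklearn.metrics import accuracy_score, recall_score

# Made-up labels, predictions, and group membership.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

# Report the same metrics per slice instead of a single global number.
for g in np.unique(group):
    mask = group == g
    acc = accuracy_score(y_true[mask], y_pred[mask])
    rec = recall_score(y_true[mask], y_pred[mask], zero_division=0)
    print(f"group {g}: accuracy={acc:.2f}, recall={rec:.2f}")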
Calibration can matter for fairness because poorly calibrated models ______.
reduce training speed only
guarantee demographic parity
produce scores that are not comparable across groups
always increase AUC
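A per-group calibration check compares predicted scores with observed outcome rates, for instance with a simple expected-calibration-error estimate as sketched below (the binning scheme and data are illustrative):

import numpy as np

def expected_calibration_error(scores, labels, n_bins=5):
    # Weighted average of |mean score - observed positive rate| per score bin.
    bin_ids = np.minimum((scores * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bin_ids == b
        if mask.any():
            ece += mask.mean() * abs(scores[mask].mean() - labels[mask].mean())
    return ece

# Synthetic scores and outcomes; group "B" is deliberately miscalibrated.
rng = np.random.default_rng(2)
scores = rng.uniform(size=400)
group = np.repeat(["A", "B"], 200)
labels = np.where(group == "A",
                  rng.uniform(size=400) < scores,        # roughly calibrated
                  rng.uniform(size=400) < scores ** 2)   # scores overestimate
labels = labels.astype(int)

for g in np.unique(group):
    mask = group == g
    print(f"group {g}: ECE = {expected_calibration_error(scores[mask], labels[mask]):.3f}")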
Adversarial debiasing trains a model to predict the label while ______.
randomizing labels during validation
minimizing the adversary's ability to recover the protected attribute from its learned representation
maximizing reconstruction loss
freezing all feature layers
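The loop below is a compressed PyTorch-style sketch of the general idea: an adversary learns to recover the protected attribute from the encoder's representation, while the encoder and classifier are trained to fit the label and make that recovery harder. The data, architecture, lam, and step counts are placeholders, not a reference implementation (a gradient-reversal layer is a common alternative to these alternating updates):

import torch
import torch.nn as nn

# Synthetic data: features X, binary label y, binary protected attribute a.
torch.manual_seed(0)
X = torch.randn(256, 4)
a = (X[:, 0] > 0).float()                     # attribute leaks through X
y = ((X[:, 1] + 0.5 * a) > 0).float()

encoder  = nn.Sequential(nn.Linear(4, 8), nn.ReLU())
clf_head = nn.Linear(8, 1)                    # predicts the label
adv_head = nn.Linear(8, 1)                    # tries to recover the attribute
bce = nn.BCEWithLogitsLoss()

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(clf_head.parameters()), lr=1e-2)
opt_adv = torch.optim.Adam(adv_head.parameters(), lr=1e-2)
lam = 1.0                                     # strength of the debiasing term

for step in range(200):
    # 1) Adversary step: learn to predict the attribute from the representation.
    z = encoder(X).detach()
    adv_loss = bce(adv_head(z).squeeze(1), a)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Main step: fit the label while making the adversary's job harder.
    z = encoder(X)
    main_loss = bce(clf_head(z).squeeze(1), y) - lam * bce(adv_head(z).squeeze(1), a)
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()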
A practical production step for responsible AI is ______.
hiding failure cases in internal wikis
publishing model cards that document known limitations and monitoring plans
replacing fairness metrics with vanity KPIs
disabling logging to avoid audits
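There is no single required schema, but even a small machine-readable record that travels with the model makes audits easier. The fields below are hypothetical placeholders, not a standard:

import json

# A hypothetical, minimal model-card record; field names and values are
# illustrative placeholders, not a standard schema.
model_card = {
    "model": "credit_risk_v3",
    "intended_use": "pre-screening, always followed by human review",
    "evaluation_slices": ["age_band", "region", "gender"],
    "known_limitations": [
        "under-performs on applicants with thin credit files",
        "not validated outside the launch market",
    ],
    "fairness_metrics": {"demographic_parity_gap": None, "tpr_gap": None},  # fill from the evaluation report
    "monitoring_plan": "weekly sliced metrics, alert when any gap exceeds the agreed limit",
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)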
Starter
Review fairness metrics and how to slice data by protected attributes.
Solid
You can spot bias, compare trade‑offs, and apply practical mitigations.
Expert!
You balance accuracy and equity with transparent, auditable models.