AI GLOSSARY

Bias Mitigation

Safety, Alignment & Ethics

The set of technical and organizational strategies used to identify and reduce unfair bias in AI systems, addressing disparities in performance, outcomes, or treatment across demographic groups. Bias mitigation can be applied at multiple stages (data collection, model training, post-processing of outputs, and ongoing monitoring) and requires a careful definition of what fairness means in the specific application context.
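One common pre-processing approach is reweighing, in the style of Kamiran & Calders (2012): training samples are weighted so that the protected attribute and the label appear statistically independent before a model is fit. The sketch below uses synthetic, illustrative data and assumes a NumPy-based workflow; it is one possible mitigation step, not a complete fairness solution.

```python
# Minimal sketch of a pre-processing bias mitigation step: reweighing.
# Weights are chosen so the joint distribution of (group, label) matches
# what it would be if the two were independent, counteracting imbalance
# before model training. Data here is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                         # protected attribute (0/1)
label = (rng.random(1000) < 0.3 + 0.3 * group).astype(int)    # outcomes skewed by group

weights = np.empty(len(label), dtype=float)
for g in (0, 1):
    for y in (0, 1):
        mask = (group == g) & (label == y)
        expected = (group == g).mean() * (label == y).mean()  # frequency if independent
        observed = mask.mean()                                 # actual joint frequency
        weights[mask] = expected / observed if observed > 0 else 0.0

# `weights` can then be passed as sample weights to a downstream classifier,
# e.g. sklearn's LogisticRegression().fit(X, label, sample_weight=weights).
```

Post-processing techniques (such as adjusting decision thresholds per group) and in-training constraints are alternatives when the training data or pipeline cannot be modified.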
See also: algorithmic bias, algorithmic accountability, AI auditing.