AI GLOSSARY

Data Poisoning

Security & Adversarial AI

A training-time attack in which an adversary injects malicious, corrupted, or misleading data into a model's training dataset, causing the model to learn incorrect patterns, develop biased behaviors, or acquire hidden backdoors. Data poisoning is particularly concerning for models trained on data scraped from the internet or contributed by untrusted sources, where controlling data quality is difficult.
See also: backdoor attack, adversarial attack, data labeling / annotation.
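The mechanism can be illustrated with a minimal sketch (not from the source; all names and the toy dataset are illustrative). An attacker who can add mislabeled points to a training set shifts what the model learns; here a tiny nearest-centroid classifier is trained on clean data, then on data where injected points deep inside class 1's region are deliberately labeled class 0:

```python
import random

random.seed(0)

def make_data(n):
    # Two 1-D classes: class 0 clusters near x=0, class 1 near x=5.
    return [(random.gauss(5.0 * y, 1.0), y)
            for y in (random.randint(0, 1) for _ in range(n))]

def train_centroids(data):
    # Tiny "model": the mean feature value of each class.
    return {c: sum(x for x, y in data if y == c) /
               sum(1 for _, y in data if y == c)
            for c in (0, 1)}

def accuracy(cents, data):
    # Predict the class whose centroid is nearest to x.
    return sum(min(cents, key=lambda c: abs(x - cents[c])) == y
               for x, y in data) / len(data)

train, test = make_data(400), make_data(200)
clean_acc = accuracy(train_centroids(train), test)

# Poisoning: inject points that lie deep inside class 1's region but are
# mislabeled as class 0, dragging class 0's centroid upward and shifting
# the learned decision boundary into class 1's territory.
poisoned = train + [(random.gauss(5.0, 1.0), 0) for _ in range(200)]
poisoned_acc = accuracy(train_centroids(poisoned), test)

print(f"clean accuracy: {clean_acc:.2f}")
print(f"poisoned accuracy: {poisoned_acc:.2f}")
```

The test data is untouched; only the training set is corrupted, yet accuracy on clean inputs degrades, which is what makes the attack hard to detect after deployment.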