{"version":"1.0","type":"rich","provider_name":"gaks.ai AI Glossary","provider_url":"https://gaks.ai/glossary","title":"Data Poisoning — AI Glossary","author_name":"Glenn Katrud Solheim","author_url":"https://gaks.ai","width":600,"height":200,"html":"<div style=\"font-family:sans-serif;border:1px solid #e0e0e0;border-radius:8px;padding:16px;max-width:600px;background:#ffffff;color:#111111;\"><p style=\"margin:0 0 4px;font-size:11px;color:#666;\">AI Glossary — gaks.ai</p><h3 style=\"margin:0 0 8px;font-size:16px;\">Data Poisoning</h3><p style=\"margin:0 0 12px;font-size:14px;line-height:1.6;\">A training-time attack where an adversary injects malicious, corrupted, or misleading data into a model's training dataset, causing the model to learn incorrect patterns, develop biased behaviors, or contain hidden backdoors. Data poisoning is particularly concerning for models trained on data scraped from the internet or contributed by untrusted sources, where controlling data quality is difficult. See also: backdoor attack, adversarial attack, data labeling / annotation.</p><a href=\"https://gaks.ai/glossary/data-poisoning\" style=\"font-size:12px;color:#0077aa;\">Source: gaks.ai/glossary/data-poisoning →</a></div>"}