{"version":"1.0","type":"rich","provider_name":"gaks.ai AI Glossary","provider_url":"https://gaks.ai/glossary","title":"Backdoor Attack — AI Glossary","author_name":"Glenn Katrud Solheim","author_url":"https://gaks.ai","width":600,"height":200,"html":"<div style=\"font-family:sans-serif;border:1px solid #e0e0e0;border-radius:8px;padding:16px;max-width:600px;background:#ffffff;color:#111111;\"><p style=\"margin:0 0 4px;font-size:11px;color:#666;\">AI Glossary — gaks.ai</p><h3 style=\"margin:0 0 8px;font-size:16px;\">Backdoor Attack</h3><p style=\"margin:0 0 12px;font-size:14px;line-height:1.6;\">A training-time attack in which an adversary embeds a hidden trigger in a model during training, causing it to behave normally on most inputs but produce specific, attacker-controlled outputs whenever the trigger pattern is present. Backdoor attacks are particularly dangerous because the compromised model can pass standard evaluations, with the malicious behavior activating only under specific conditions. See also: adversarial attack, data poisoning, AI safety.</p><a href=\"https://gaks.ai/glossary/backdoor-attack\" style=\"font-size:12px;color:#0077aa;\">Source: gaks.ai/glossary/backdoor-attack →</a></div>"}