{"version":"1.0","type":"rich","provider_name":"gaks.ai AI Glossary","provider_url":"https://gaks.ai/glossary","title":"Data Exfiltration — AI Glossary","author_name":"Glenn Katrud Solheim","author_url":"https://gaks.ai","width":600,"height":200,"html":"<div style=\"font-family:sans-serif;border:1px solid #e0e0e0;border-radius:8px;padding:16px;max-width:600px;background:#ffffff;color:#111111;\"><p style=\"margin:0 0 4px;font-size:11px;color:#666;\">AI Glossary — gaks.ai</p><h3 style=\"margin:0 0 8px;font-size:16px;\">Data Exfiltration</h3><p style=\"margin:0 0 12px;font-size:14px;line-height:1.6;\">The unauthorized extraction of sensitive data from an AI system or its associated infrastructure, including training data, model weights, user inputs, or outputs. Data exfiltration can occur through direct system compromise, through model inversion attacks, or by exploiting the model itself to leak information it should not reveal, such as personal data from training sets or confidential system prompts. See also: data poisoning, adversarial attack, privacy.</p><a href=\"https://gaks.ai/glossary/data-exfiltration\" style=\"font-size:12px;color:#0077aa;\">Source: gaks.ai/glossary/data-exfiltration →</a></div>"}