{"version":"1.0","type":"rich","provider_name":"gaks.ai AI Glossary","provider_url":"https://gaks.ai/glossary","title":"Differential Privacy — AI Glossary","author_name":"Glenn Katrud Solheim","author_url":"https://gaks.ai","width":600,"height":200,"html":"<div style=\"font-family:sans-serif;border:1px solid #e0e0e0;border-radius:8px;padding:16px;max-width:600px;background:#ffffff;color:#111111;\"><p style=\"margin:0 0 4px;font-size:11px;color:#666;\">AI Glossary — gaks.ai</p><h3 style=\"margin:0 0 8px;font-size:16px;\">Differential Privacy</h3><p style=\"margin:0 0 12px;font-size:14px;line-height:1.6;\">A mathematical framework, formalized by Cynthia Dwork and colleagues in 2006, for adding carefully calibrated random noise to data or model outputs in a way that protects individual privacy while preserving the statistical usefulness of the overall dataset. Differential privacy provides a formal, quantifiable privacy guarantee: it can be proven that the presence or absence of any single individual in the dataset cannot be meaningfully inferred from the output.  See also: de-identification, data minimization, Privacy.</p><a href=\"https://gaks.ai/glossary/differential-privacy\" style=\"font-size:12px;color:#0077aa;\">Source: gaks.ai/glossary/differential-privacy →</a></div>"}