
Gradient Leakage

Security & Adversarial AI

A privacy attack in distributed or federated learning in which an adversary reconstructs sensitive training data by analyzing the gradients participants share during training. Because gradients encode information about the data used to compute them, reconstruction attacks can recover surprisingly detailed inputs, including images, text, and personal data, from gradient updates alone.
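A minimal toy sketch (not from the source) shows why gradients leak data: for a single linear neuron with a bias term and squared-error loss, the shared gradient reveals the private input exactly, since the weight gradient is the input scaled by the bias gradient. All names here are illustrative.

```python
import numpy as np

# Toy gradient-leakage demo for one linear neuron with bias:
#   loss L = (w.x + b - y)^2
#   dL/dw = 2*(w.x + b - y)*x   and   dL/db = 2*(w.x + b - y)
# so an attacker who observes the gradients recovers x = (dL/dw) / (dL/db).

rng = np.random.default_rng(0)
x_private = rng.normal(size=4)      # the sensitive training example
y = 1.0                             # its label
w = rng.normal(size=4)              # model parameters known to all parties
b = 0.5

residual = w @ x_private + b - y
grad_w = 2 * residual * x_private   # gradient update the client would share
grad_b = 2 * residual

# Attacker sees only (grad_w, grad_b), never x_private:
x_reconstructed = grad_w / grad_b
print(np.allclose(x_reconstructed, x_private))  # True
```

Real attacks on deep networks generalize this idea by optimizing a dummy input so that its gradients match the observed update, rather than solving in closed form.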
See also: federated learning, differential privacy, data exfiltration.