AI GLOSSARY

Evasion Attack

Security & Adversarial AI

An attack in which an adversary crafts inputs at inference time, typically by applying small, often imperceptible perturbations, that cause an AI model to misclassify or produce incorrect outputs without modifying the model itself. Unlike data poisoning, which targets the training pipeline, evasion attacks target deployment. A classic example is a stop sign with carefully placed stickers that cause an autonomous vehicle's perception system to misidentify it.
See also: adversarial attack, adversarial example, data poisoning.
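A minimal sketch of the idea using the fast gradient sign method (FGSM) on a toy linear classifier. All weights and inputs here are illustrative assumptions, not taken from the glossary; the point is only that a small, bounded perturbation at inference time can flip a model's prediction while the model itself is untouched.

```python
import numpy as np

# Toy linear classifier (illustrative weights): predicts class 1 when w.x + b > 0.
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

# FGSM-style evasion: step each feature by epsilon against the gradient
# of the class-1 score. For a linear model that gradient is simply w,
# so the perturbation is -epsilon * sign(w).
def evade(x, epsilon=0.6):
    return x - epsilon * np.sign(w)

x_clean = np.array([0.5, 0.2, 0.3])
x_adv = evade(x_clean)

print(predict(x_clean))  # clean input is classified as class 1
print(predict(x_adv))    # perturbed input is misclassified as class 0
```

Real evasion attacks work the same way against deep networks, except the gradient is obtained by backpropagation through the deployed model (or estimated from queries when only black-box access is available).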