AI GLOSSARY

Robustness

Evaluation & Performance

A model's ability to maintain reliable performance when inputs are noisy, corrupted, or drawn from distributions that differ from its training data. A robust model doesn't quietly fail when things look slightly different from what it was trained on — a property that matters considerably in real-world deployment, where inputs are rarely as clean as curated training examples.
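One way to make this concrete is to measure the gap between a model's accuracy on clean inputs and its accuracy on perturbed inputs. The sketch below is a minimal, hypothetical illustration (not from any particular library): a trivial threshold classifier on synthetic 1-D data, evaluated with and without added Gaussian noise. The data, classifier, and noise level are all assumptions chosen for the example.

```python
import random

random.seed(0)

# Hypothetical 1-D classifier: predicts class 1 when the input exceeds 0.5.
def predict(x):
    return 1 if x > 0.5 else 0

# Synthetic labelled data: class 0 clusters near 0.2, class 1 near 0.8.
data = [(random.gauss(0.2, 0.05), 0) for _ in range(500)] + \
       [(random.gauss(0.8, 0.05), 1) for _ in range(500)]

def accuracy(samples, noise_std=0.0):
    """Accuracy after perturbing each input with Gaussian noise of the
    given standard deviation (0.0 means clean, unperturbed inputs)."""
    correct = sum(predict(x + random.gauss(0, noise_std)) == y
                  for x, y in samples)
    return correct / len(samples)

clean_acc = accuracy(data)                 # performance on clean inputs
noisy_acc = accuracy(data, noise_std=0.3)  # performance under corruption

# A large gap between the two numbers indicates a brittle model; a robust
# model keeps the gap small as the noise level grows.
print(f"clean: {clean_acc:.2f}  noisy: {noisy_acc:.2f}")
print(f"robustness gap: {clean_acc - noisy_acc:.2f}")
```

Real robustness benchmarks apply the same idea at scale, sweeping many corruption types and severities rather than a single noise level.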
See also: adversarial example, out-of-distribution detection.