
AI GLOSSARY

Model Security

Security & Adversarial AI

The set of practices, controls, and techniques that protect AI models from unauthorized access, theft, manipulation, and exploitation throughout their lifecycle, from training through deployment. Model security encompasses protecting model weights, securing inference infrastructure, defending against adversarial attacks, and ensuring that models behave safely even under hostile conditions.
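One concrete instance of protecting model weights is integrity verification: pinning a cryptographic digest of the released weight file and refusing to load any file that does not match it. The sketch below illustrates this with Python's standard library; the function names (`sha256_digest`, `verify_weights`) and the stand-in weight file are illustrative assumptions, not part of any particular framework.

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_weights(path: Path, expected_digest: str) -> None:
    """Raise if the weight file's digest does not match the pinned value."""
    actual = sha256_digest(path)
    if actual != expected_digest:
        raise RuntimeError(f"weight integrity check failed: {actual}")

# Demo with a stand-in "weights" file (hypothetical, for illustration only).
with tempfile.TemporaryDirectory() as d:
    weights = Path(d) / "model.bin"
    weights.write_bytes(b"\x00" * 1024)   # placeholder weight bytes
    pinned = sha256_digest(weights)       # digest recorded at release time
    verify_weights(weights, pinned)       # matches: loading may proceed
    weights.write_bytes(b"\x01" * 1024)   # simulated tampering
    try:
        verify_weights(weights, pinned)
    except RuntimeError:
        print("tampering detected")
```

In practice the pinned digest would be distributed out of band (for example, signed release metadata) rather than computed from the same file being verified.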