Back to glossary
AI GLOSSARY
Red Team
Risk & Assurance
A group tasked with actively trying to find failures, vulnerabilities, or harmful behaviors in an AI system, taking an adversarial perspective to stress-test it before deployment. Red teaming in AI goes beyond traditional cybersecurity to include probing for unsafe outputs, bias, manipulation vulnerabilities, and misuse potential.