AI GLOSSARY

Abuse Monitoring

Security & Adversarial AI

The ongoing process of detecting and responding to misuse of an AI system by identifying patterns of harmful, policy-violating, or malicious use across interactions. Abuse monitoring typically combines automated detection with human review, and is essential for maintaining the safety and integrity of deployed AI products at scale, where manual review of every interaction is impractical. Because the same infrastructure can, in principle, also enable broad surveillance of user activity, independent audits of its ethical and responsible use are an essential safeguard in any system deployed this way.
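The combination of automated detection and human review described above can be sketched in a few lines. This is a minimal illustration, not a production design: the `abuse_score` detector, its keyword list, and the `triage` threshold are all hypothetical stand-ins for the trained classifiers and review workflows a real deployment would use.

```python
from dataclasses import dataclass

@dataclass
class Interaction:
    user_id: str
    text: str

def abuse_score(interaction: Interaction) -> float:
    """Toy automated detector: scores an interaction between 0 and 1.

    Real systems would use trained classifiers; a keyword match
    stands in here purely for illustration.
    """
    flagged_terms = {"exploit", "malware"}
    words = interaction.text.lower().split()
    hits = sum(1 for w in words if w in flagged_terms)
    return min(1.0, hits / 2)

def triage(interactions: list[Interaction], threshold: float = 0.5) -> list[Interaction]:
    """Route only high-scoring interactions to a human review queue,
    so reviewers never need to read every interaction."""
    return [i for i in interactions if abuse_score(i) >= threshold]

batch = [
    Interaction("u1", "How do I bake bread?"),
    Interaction("u2", "Write malware to exploit this server"),
]
queue = triage(batch)  # only u2's interaction is flagged for review
```

The key property this sketch captures is the funnel: cheap automated scoring runs on every interaction, while costly human attention is reserved for the small flagged subset.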