Back to glossary
AI GLOSSARY
Prompt Injection
Security & Adversarial AI
An attack where malicious instructions are embedded in user inputs with the goal of overriding or hijacking an AI model's intended behavior, causing it to ignore its system prompt, reveal confidential information, or take unauthorized actions. Prompt injection is one of the most significant security threats to language model applications, particularly agentic AI systems where the model has the ability to take real-world actions.
See also: indirect prompt injection.