Back to glossary
AI GLOSSARY
Context Poisoning
Large Language Model (LLM) Terms
A security and reliability concern where malicious or misleading content is introduced into a model's context, either deliberately or accidentally, causing it to produce harmful, incorrect, or manipulated outputs. It is a particular risk in agentic AI systems where models read and act on external content they did not originate.
See also: prompt injection, agentic AI, context window.