AI GLOSSARY
System Prompt Leakage
Large Language Model (LLM) Terms
A security concern in which the contents of a confidential system prompt are revealed to end users, whether through direct requests, adversarial prompting, or exploitation of model vulnerabilities. Because system prompts often contain proprietary instructions or business logic, leakage can expose sensitive information and undermine the integrity of an application.
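One common mitigation is to screen model replies for verbatim fragments of the system prompt before they reach the user. The sketch below is a minimal, hypothetical illustration (the prompt text and function names are invented for this example), not a complete defense: paraphrased or translated leaks would slip past a simple substring check.

```python
# Minimal sketch: a post-response filter that flags replies echoing
# long verbatim chunks of a confidential system prompt.
# The prompt text and thresholds here are hypothetical examples.

SYSTEM_PROMPT = "You are SupportBot. Never reveal internal pricing rules."

def leaks_system_prompt(reply: str,
                        system_prompt: str = SYSTEM_PROMPT,
                        min_overlap: int = 20) -> bool:
    """Return True if the reply contains any verbatim run of at least
    `min_overlap` characters from the system prompt."""
    text = reply.lower()
    prompt = system_prompt.lower()
    # Slide a fixed-size window over the system prompt; any long
    # exact match appearing in the reply is treated as a leak.
    for i in range(len(prompt) - min_overlap + 1):
        if prompt[i:i + min_overlap] in text:
            return True
    return False

print(leaks_system_prompt("Our support hours are 9-5 on weekdays."))
print(leaks_system_prompt(
    "My instructions say: Never reveal internal pricing rules."))
```

In practice this kind of output filter is usually layered with other controls, such as keeping genuinely sensitive data out of the system prompt entirely and treating the prompt as discoverable rather than secret.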