{"version":"1.0","type":"rich","provider_name":"gaks.ai AI Glossary","provider_url":"https://gaks.ai/glossary","title":"Security Boundary — AI Glossary","author_name":"Glenn Katrud Solheim","author_url":"https://gaks.ai","width":600,"height":200,"html":"<div style=\"font-family:sans-serif;border:1px solid #e0e0e0;border-radius:8px;padding:16px;max-width:600px;background:#ffffff;color:#111111;\"><p style=\"margin:0 0 4px;font-size:11px;color:#666;\">AI Glossary — gaks.ai</p><h3 style=\"margin:0 0 8px;font-size:16px;\">Security Boundary</h3><p style=\"margin:0 0 12px;font-size:14px;line-height:1.6;\">A defined perimeter separating trusted from untrusted components in an AI system, determining what information and capabilities are accessible from outside and what must remain protected. Security boundaries don't enforce themselves; they must be explicitly designed and actively maintained. In agentic AI systems, where models interact with external tools, APIs, and data sources, the boundary is constantly under pressure and particularly easy to misconfigure. See also: sandboxing, prompt injection.</p><a href=\"https://gaks.ai/glossary/security-boundary\" style=\"font-size:12px;color:#0077aa;\">Source: gaks.ai/glossary/security-boundary →</a></div>"}