1. From a security architecture perspective, why do "Agentic AI" systems fundamentally have a larger attack surface than standalone Large Language Models (LLMs)?
2. What is the fundamental structural vulnerability in current LLM architectures that makes "Prompt Injection" attacks possible?
3. In the context of AI Agent security, which of the following scenarios best illustrates an "Indirect Prompt Injection" attack?
4. How does a "Data Poisoning" or "Backdoor" attack specifically target Retrieval-Augmented Generation (RAG) systems?
5. Applying the security principle of "Least Privilege" to AI Agents (often called "Contextual Security") implies which of the following defense strategies?
Does this form look suspicious? Report