Quiz 12 - AI Safety [12/8]
1. From a security architecture perspective, why do "Agentic AI" systems fundamentally have a larger attack surface than standalone Large Language Models (LLMs)? (1 point)

2. What is the fundamental structural vulnerability in current LLM architectures that makes "Prompt Injection" attacks possible? (1 point)

3. In the context of AI Agent security, which of the following scenarios best illustrates an "Indirect Prompt Injection" attack? (1 point)

4. How does a "Data Poisoning" or "Backdoor" attack specifically target Retrieval-Augmented Generation (RAG) systems? (1 point)

5. When the security principle of "Least Privilege" is applied to AI Agents (an approach often called "Contextual Security"), which of the following defense strategies does it imply? (1 point)
This form was created inside of UC Berkeley.