Prompt injection risk for AI

Risks associated with input
Inference
Robustness
New to generative AI

Description

A prompt injection attack forces a model to produce unexpected output due to the structure or information contained in prompts.

Why is prompt injection a concern for foundation models?

Injection attacks can be used to alter model behavior and benefit the attacker. If not properly controlled, business entities could face fines, reputational harm, and other legal consequences.

Parent topic: AI risk atlas