Prompt leaking risk for AI
Description
A prompt leak attack attempts to extract a model's system prompt (also known as the system message).
Why is prompt leaking a concern for foundation models?
A successful attack copies the system prompt used in the model. Depending on the content of that prompt, the attacker might gain access to valuable information, such as sensitive personal information or intellectual property, and might be able to replicate some of the functionality of the model.
Parent topic: AI risk atlas