Untraceable attribution risk for AI

Risks associated with output
Explainability
Amplified by generative AI

Description

The original entity from which the training data comes might not be known, limiting the utility and success of source attribution techniques.
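
As a rough illustration, the sketch below shows why missing provenance defeats attribution: even a simple nearest-neighbor attribution pass can locate the closest training record, but it dead-ends if the original entity behind that record was never captured. All data, field names, and helper functions here are hypothetical, not part of any real attribution library.

```python
# Minimal sketch of a nearest-neighbor source-attribution pass over a
# toy training corpus. Hypothetical data and helpers for illustration only.
import math
from collections import Counter


def bag_of_words(text: str) -> Counter:
    """Tokenize on whitespace and count terms (a stand-in for an embedding)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(
        sum(v * v for v in b.values())
    )
    return dot / norm if norm else 0.0


# Hypothetical training records. Note that one lacks source metadata:
# the original entity it came from was never recorded.
corpus = [
    {"text": "the eiffel tower is in paris", "source": "encyclopedia-dump-2021"},
    {"text": "paris is the capital of france", "source": None},
]


def attribute(output: str):
    """Return the closest training record and its (possibly missing) source."""
    query = bag_of_words(output)
    best = max(corpus, key=lambda r: cosine(query, bag_of_words(r["text"])))
    return best["text"], best["source"]


text, source = attribute("what is the capital of france? paris")
# The technique finds the nearest training example, but attribution still
# fails when provenance metadata is absent from the record itself.
print(f"matched: {text!r} -> source: {source or 'UNTRACEABLE'}")
```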

Why is untraceable attribution a concern for foundation models?

The inability to trace an explanation back to its original sources makes it difficult for users, model validators, and auditors to understand and trust the model.

Parent topic: AI risk atlas