Inaccessible training data risk for AI

Risks associated with output
Explainability
Amplified by generative AI

Description

Without access to the training data, the range of explanations a model can provide is limited, and those explanations are more likely to be incorrect.

Why is inaccessible training data a concern for foundation models?

Low-quality explanations that lack supporting source data make it difficult for users, model validators, and auditors to understand and trust the model.

Parent topic: AI risk atlas