AI risk atlas
Explore this atlas to understand some of the risks of working with generative AI, foundation models, and machine learning models.
Risks are categorized with one of these tags:
- Traditional: traditional AI risks that apply to traditional models as well as generative AI
- Amplified: risks amplified by generative AI (these might also apply to traditional models)
- New: risks specifically associated with generative AI
Risks associated with input

Training and tuning phase
- Fairness
  - Data bias (Amplified)
- Robustness
  - Data poisoning (Traditional)
- Privacy
  - Personal information in data (Traditional)
  - Reidentification (Traditional)
  - Data privacy rights (Amplified)
  - Informed consent (Traditional)

Inference phase
- Robustness
  - Evasion attack (Amplified)
  - Extraction attack (Amplified)
  - Prompt injection (New)
  - Prompt leaking (Amplified)
Risks associated with output

- Value alignment
  - Hallucination (New)
  - Toxic output (New)
  - Over or under reliance (Amplified)
  - Physical harm (New)
  - Harmful code generation (New)
- Privacy
  - Revealing personal information (Amplified)
- Explainability
  - Unexplainable output (Amplified)
  - Unreliable source attribution (Amplified)
  - Inaccessible training data (Amplified)
  - Untraceable attribution (Amplified)
Non-technical risks

- Governance
  - Lack of model transparency (Traditional)
  - Lack of data transparency (Amplified)
  - Accountability (Amplified)
- Societal impact
  - Job loss (Amplified)
  - Human exploitation (Amplified)
  - Impact on the environment (Amplified)
  - Impact on human agency (Amplified)
  - Plagiarism (New)
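
If you want to work with the atlas programmatically, for example to filter risks by tag or by lifecycle phase, the sketch below shows one possible representation. The `Risk` dataclass, its field names, and the `ATLAS` list are hypothetical and illustrative, not an official schema; only a handful of entries from the atlas are included.

```python
from dataclasses import dataclass

# Illustrative only: a minimal way to represent atlas entries so they can be
# filtered by grouping, category, or tag. Class and field names are assumptions,
# not part of the atlas itself.

@dataclass(frozen=True)
class Risk:
    name: str       # risk name as listed in the atlas
    grouping: str   # "input", "output", or "non-technical"
    category: str   # e.g. "Fairness", "Robustness", "Value alignment"
    tag: str        # "Traditional", "Amplified", or "New"

ATLAS = [
    Risk("Data bias", "input", "Fairness", "Amplified"),
    Risk("Data poisoning", "input", "Robustness", "Traditional"),
    Risk("Prompt injection", "input", "Robustness", "New"),
    Risk("Hallucination", "output", "Value alignment", "New"),
    Risk("Plagiarism", "non-technical", "Societal impact", "New"),
]

# Example: list the risks that are new with generative AI.
new_risks = [r.name for r in ATLAS if r.tag == "New"]
print(new_risks)  # ['Prompt injection', 'Hallucination', 'Plagiarism']
```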