Hallucination risk for AI

Risks associated with output
Value alignment
New to generative AI

Description

Generation of factually inaccurate or untruthful content.

Why is hallucination a concern for foundation models?

False output can mislead users and be incorporated into downstream artifacts, further spreading misinformation. Such output can harm both the owners and the users of AI models, and businesses might face fines, reputational harm, disruption to operations, and other legal consequences.

Example

Fake Legal Cases

According to the source article, a lawyer cited fake cases and quotations generated by ChatGPT in a legal brief filed in federal court. The lawyers had consulted ChatGPT to supplement their legal research for an aviation injury claim. When the lawyer later asked ChatGPT whether the cases it provided were fake, the chatbot responded that they were real and “can be found on legal research databases such as Westlaw and LexisNexis.” The lawyer did not verify the cases, and the court sanctioned them.

Parent topic: AI risk atlas