Accountability risk for AI

Non-technical risks
Governance
Amplified by generative AI

Description

Foundation model development is a complex process that involves many data sources, processes, and roles. When model output does not work as expected, it can be difficult to determine the root cause and assign responsibility.

Why is accountability a concern for foundation models?

If decisions are not properly documented and responsibility is not clearly assigned, it might not be possible to determine liability for unexpected behavior or misuse.

Example

Determining responsibility for generated output

Major journals such as Science and Nature banned ChatGPT from being listed as an author, because responsible authorship requires accountability and AI tools cannot take such responsibility.

Parent topic: AI risk atlas