Legal accountability risk for AI

Legal compliance
Non-technical risks
Amplified by generative AI
Amplified by synthetic data

Description

Determining who is responsible for an AI model is challenging without good documentation and governance processes. The use of synthetic data in model development adds further complexity, since the lack of standardized frameworks for recording synthetic data design choices and verification steps makes accountability harder to establish.

Why is legal accountability a concern for foundation models?

If ownership of a model's development is uncertain, it may not be clear who is liable and responsible for problems with the model or who can answer questions about it. Users of models without clear ownership might also face challenges in complying with regulations.

Example

Determining responsibility for generated output

Major journals such as Science and Nature banned ChatGPT from being listed as an author, because responsible authorship requires accountability, which AI tools cannot take.

Sources:

The Guardian, January 2023

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation model risks. Many of these events are either still evolving or have been resolved, and referencing them can help readers understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.