Legal accountability risk for AI

Legal compliance
Non-technical risks
Amplified by generative AI

Description

Determining who is responsible for an AI model is challenging without good documentation and governance processes.

Why is legal accountability a concern for foundation models?

If ownership of a model's development is uncertain, regulators and others might have concerns about the model. It would be unclear who is liable and responsible for problems with the model or who can answer questions about it. Users of models without clear ownership might also face challenges in complying with future AI regulation.

Example

Determining responsibility for generated output

Major journals such as Science and Nature banned ChatGPT from being listed as an author, because responsible authorship requires accountability and AI tools cannot take such responsibility.

Sources:

The Guardian, January 2023

Parent topic: AI risk atlas

We provide examples covered by the press to help explain many of the foundation models' risks. Many of these events are either still evolving or have been resolved, and referencing them can help the reader understand the potential risks and work toward mitigations. These examples are highlighted for illustrative purposes only.