Accountability risk for AI
Description
The foundation model development process is complex, involving many data sources, processing steps, and roles. When a model's output does not behave as expected, it can be difficult to trace the root cause and assign responsibility.
Why is accountability a concern for foundation models?
Without properly documented decisions and clearly assigned responsibilities, it might not be possible to determine liability for unexpected behavior or misuse.
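As a minimal illustration of what documenting decisions can mean in practice, the sketch below keeps a structured audit log that ties each lifecycle decision to an accountable owner. The `DecisionRecord` class and its field names are hypothetical, not part of any particular governance framework.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """Hypothetical audit entry linking a lifecycle decision to a responsible owner."""
    stage: str          # e.g., "data collection", "fine-tuning", "deployment"
    decision: str       # what was decided
    rationale: str      # why it was decided
    owner: str          # person or team accountable for this decision
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# Example: record a training-data decision so responsibility is traceable later.
audit_log = [
    DecisionRecord(
        stage="data collection",
        decision="Excluded web-scraped forum data",
        rationale="Unverifiable licensing and high toxicity rate",
        owner="data-governance-team",
    )
]
print(json.dumps([asdict(record) for record in audit_log], indent=2))
```

A record like this does not resolve liability by itself, but it makes it possible to answer who decided what, when, and why if a model later misbehaves.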
Example
Determining responsibility for generated output
Major journals such as Science and Nature banned listing ChatGPT as an author, because responsible authorship requires accountability and AI tools cannot take such responsibility.