AI risk list
A Harvard Law School Forum on Corporate Governance post gives us a handy rundown of AI risks. Let’s take a peek:
- Unwanted bias: automated systems that rely on biased data or design produce discriminatory outcomes, perpetuating inequalities in decision-making. Some companies have already faced legal action over AI systems that allegedly reinforced discriminatory outcomes.[16]
- “Hallucinations”, in which AI generates false information.[17]
- AI systems trained on data that is inaccurate, outdated, or otherwise unfit for purpose.[18]
- Spread of misinformation, disinformation, or harmful material through AI-generated content.
- Failure to evaluate the risks of third-party AI. Research suggests that more than half of all AI failures come from third-party tools, which most companies rely on.[19]
- Intellectual property (IP) infringement.[20]
- Data security breaches, including hacking or privacy violations.
- Technical malfunctions, which could, for instance, cause autonomously operated machines to endanger human life.
Source: Harvard Law School Forum on Corporate Governance | Artificial Intelligence: An engagement guide