OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths
Maxwell Zeff / Wired: OpenAI backs an Illinois bill shielding AI labs from liability even for "critical harms," like 100+ deaths or $1B+ damage, if safety reports were published — The ChatGPT-maker testified in favor of an Illinois bill that would limit when AI labs can be held liable, even in cases where their products cause "critical harm."
OpenAI has backed Illinois bill SB 3444, which would shield frontier AI developers from liability for "critical harms" — including mass death, serious injury to 100+ people, or $1 billion+ in damages — so long as the harms were not intentional or reckless and the firm has published safety, security, and transparency reports. The bill defines "frontier models" as systems trained with over $100 million in compute, capturing major labs like OpenAI, Google, Anthropic, xAI, and Meta. OpenAI testified in favor, framing the measure as a way to avoid a patchwork of state rules and urging a harmonized federal framework that protects innovation while managing risk. Critics say the bill is more extreme than prior proposals, faces political resistance in Illinois, and could limit accountability for severe AI-enabled harms.