
California’s AI Bill Veto Sparks Debate: CMU Experts Weigh In

Media Inquiries
Cassia Crogan, University Communications & Marketing

When California Gov. Gavin Newsom vetoed the state's proposed artificial intelligence (AI) bill, SB 1047, on Sept. 29, it sparked a heated debate about how to regulate AI effectively.

This bill aimed to introduce some of the toughest regulations yet for large-scale AI models (those costing more than $100 million to train, such as OpenAI’s GPT-4). It proposed safety measures such as a "human button" to shut down models if they posed a risk, regular third-party audits and compliance reporting to the attorney general. Violations would lead to fines of up to $10 million, and the bill also sought to protect whistleblowers while establishing a board to oversee AI governance.

After passing the California Senate on Aug. 29, the bill had been seen by many as a bold step toward ensuring safe and transparent AI development. However, the governor's veto has brought renewed attention to the challenges of balancing AI innovation with public safety.

Experts from Carnegie Mellon University's Tepper School of Business shared their thoughts on the bill: Some see it as important for protecting people from AI's risks, while others think it is unclear and might hurt innovation, especially for startups.


Kannan Srinivasan, H.J. Heinz II Professor of Management, Marketing and Business Technologies

While the creation of a black hole at the Large Hadron Collider was unlikely, theories suggested the possibility of micro black holes that would disintegrate quickly. Alarmists (including scientists) were concerned that such a black hole might end up swallowing the Earth.

AI going rogue is a risk that must be monitored carefully. Self-regulated mechanisms to evaluate and assess such risk, with transparent reporting, might be helpful. Given how hard it is to find ways to mitigate hallucinations quickly, it is unclear what exactly the specific regulations would accomplish. Overzealous rules may achieve nothing more than introducing compliance layers that are likely costly and may not reduce the risk. Such laws may favor cash-rich big tech while seriously handicapping innovation by new entrants who can ill afford compliance costs. As we gain greater visibility into the mechanisms by which AI could potentially take over, if ever, effective regulations can be designed.


Alan Scheller-Wolf, Richard M. Cyert Professor of Operations Management

The government has a responsibility to enact industrial safeguards to protect the populace. This is true for AI just as it is for chemical plants. The government also has a responsibility to enact the least intrusive safeguards that are practical. This legislation seems to fail the second test. The most effective safeguard would be simply to put the chemical plant or AI model out of business, but this is not the role of the government.

Some things in this model seem sensible, for example prohibiting nonconsensual "deepfakes," although there is a subtle line around parody that the Supreme Court would likely have to try to discern.

Other things, like requiring cluster managers to police their users, seem unwieldy and intrusive. The final sentence, which seems almost deliberately difficult to parse, reads to me as though the hearings of all of these bodies are to be closed to the public. I find this to be very bad governance in almost all cases.

Do we need legislation? Certainly. This seems like a start.


Param Vir Singh, Carnegie Bosch Professor of Business Technologies and Marketing; Associate Dean for Research

AI is unlike anything we’ve regulated before. In the past, whether it was the financial crisis, environmental issues, or nuclear safety, we had some understanding of the potential harms, making it easier to craft targeted regulations. With AI, we’re navigating unknown territory. The complexity and unpredictability of AI systems, especially large-scale ones, mean we’re not yet fully aware of the risks they may pose. This makes the challenge especially daunting, which is why the language of the California AI Bill remains unclear in parts. The bill puts significant responsibility on AI experts to ensure that their systems are safe and secure, preventing potential harms that we can’t yet fully foresee.

Despite the uncertainties, I support this bill because it takes a proactive approach to AI’s development, encouraging experts to think critically about the risks inherent in the technology. In the race to lead the market, companies often neglect safety and security in favor of developing the next wave of innovation. The California AI Bill changes that dynamic, compelling AI developers to think critically about potential risks while encouraging investors to prioritize secure, responsible AI technologies.

The bill isn’t perfect, but it’s a crucial first step toward ensuring that AI evolves in a way that benefits society without introducing unchecked dangers.


Sridhar Tayur, Ford Distinguished Professor of Operations Management, and founder of SmartOps Corporation and OrganJet Corporation

I do not support the bill as it is currently written, but I believe there should be some regulation.

It appears that there is openness to modification, which is what I would like to see happen, with the issues revisited in a revised version.

In large part, I echo the sentiments of Zoe Lofgren and Andrew Ng. To be clear, I firmly support AI governance to guard against demonstrable risks to public safety; unfortunately, this bill would fall short of these goals — creating unnecessary risks for both the public and California’s economy.

SB 1047 seems heavily skewed toward addressing hypothetical existential risks while largely ignoring demonstrable AI risks like misinformation, discrimination, nonconsensual deepfakes, environmental impacts and workforce displacement.

There are clauses in there that, honestly, as an AI developer, I have no idea what to do with.


Zoey Jiang, Assistant Professor of Business Technologies and the BP Junior Faculty Chair

While the AI safety bill is well-intentioned, I am concerned about its potential implications for competition within the AI industry.

Large tech companies already possess extensive data, existing models and the necessary infrastructure, giving them a significant advantage in ensuring their AI systems comply with the proposed safety standards. They also have the legal resources to navigate the regulatory landscape effectively.

In contrast, smaller tech startups may struggle to meet the same safety thresholds, lacking both the data and computational power necessary to refine their models to the required standards. Without the ability to launch and gather more data to improve their systems, these startups risk being left behind, unable to compete.

Over time, this could widen the gap between established players and new entrants. If left unchecked, it may eventually reach a point where smaller companies, and the public sector, have little opportunity to accumulate adequate resources, such as access to large datasets and sufficient computational power, to assess the safety of larger companies' models.

This would make it difficult to detect potential corner cases or hold big tech accountable, increasing the risk of even greater systemic safety issues.


Yan Huang, Associate Professor of Business Technologies and the Pounds Fellow

Algorithmic bias and discrimination have become increasingly significant as AI models are deployed on a larger scale.

Currently, no binding regulations ensure algorithmic fairness, leaving the issue to the tech industry’s voluntary efforts. However, self-regulation has proven insufficient in curbing bias in AI systems. Existing laws, such as the California AI Transparency Act (2024) and New York City AI Bias Law (2023), emphasize transparency, while the Algorithmic Accountability Act (proposed federally) calls for impact assessments. These regulations primarily focus on detecting bias after deployment, but they fall short of preventing bias or discrimination before it occurs. Given the widespread use of AI systems, significant societal harm can arise before issues are detected.

SB 1047 aimed to address this gap by requiring companies to disclose information on the safety and fairness of AI models before deployment. Mandatory pre-deployment audits would help prevent harm before it happens. Additionally, the bill proposed holding companies legally accountable for discriminatory outcomes caused by their AI models, which would enhance fairness enforcement. This approach would incentivize developers to prioritize fairness from the outset. While the bill shifts much of the responsibility to developers — who are better positioned to address AI design and testing — it is important to acknowledge that new problems may arise when a well-tested AI model is applied in specific contexts. A balanced approach to liability between developers and users could encourage collaboration between them to prevent negative outcomes.

The primary concern is that these stringent regulations could burden AI companies, particularly smaller firms, potentially stifling innovation. While this is a valid concern and the implementation details may need refinement, SB 1047 has the potential to address critical gaps in the legal framework by establishing robust regulatory measures for ensuring fairness and ethics in AI systems, especially in areas with significant societal impact.  
