Governor Newsom, Veto SB 1047.
A Formal Rebuttal to Anthropic's Jan Leike: Defending Innovation Against the Constraints of CA SB-1047
Today, Jan Leike of Anthropic took to X (formerly Twitter) to make his case, urging California Governor Gavin Newsom to let Senate Bill 1047 stand without a veto. In a move as strategic as it is transparent, Leike’s thread seeks to rally support for a piece of legislation that could reshape the future of AI development in California. Below, you’ll find a snapshot of this thread, along with a direct link for those who wish to delve into the details of his argument. But before you click, take a moment to consider the implications of what’s being said—and, perhaps more crucially, what’s not being said.
The Paradox of Principle: When AI Safety Becomes a Strategic Game
Jan Leike, a name that’s become synonymous with AI alignment, is now at the helm of Anthropic's AI safety team. His career trajectory paints a vivid picture of a man who has consistently positioned himself at the crossroads of technological advancement and ethical responsibility. But, as with any narrative, the devil is in the details—or perhaps, in the biases.
Leike’s recent leap from OpenAI to Anthropic in May 2024 was more than just a career move; it was a statement. He departed OpenAI, citing concerns over the organization’s increasing tilt towards commercial interests at the expense of safety protocols. This decision is telling—Leike is a man who, by his own account, places safety above profit, a stance that no doubt influenced his transition to Anthropic, an organization that prides itself on its commitment to AI safety. But let’s not be naïve; even the most principled stands can carry with them a weight of bias.
At Anthropic, Leike is charged with leading a team dedicated to AI safety. His focus will span scalable oversight, weak-to-strong generalization, and the automation of alignment research. These aren’t just buzzwords—they’re the foundations of what Leike views as essential to ensuring AI doesn’t become the harbinger of its own catastrophe. Yet, there’s an undercurrent here, one that warrants closer inspection: How much of this drive is about genuine safety, and how much is about controlling the narrative to align with Anthropic’s broader commercial strategy?
SB 1047: A Trojan Horse for Regulatory Capture?
California Senate Bill 1047 (CA SB 1047) is a battleground where these questions come to the fore. Leike’s history and current position suggest he might view SB 1047 as a necessary step toward responsible AI governance. After all, his entire career has been built on the premise of aligning AI with human values—a noble cause, no doubt. But let’s not forget that noble causes can be conveniently aligned with personal and organizational gain.
The potential conflicts of interest are glaring. As a leader at Anthropic, Leike stands to benefit from legislation that favors stringent safety regulations—regulations that could fortify Anthropic’s market position as a leader in AI safety. SB 1047, if passed, would likely impose requirements that align closely with the very frameworks Leike has been advocating for. But is this truly about public safety, or is it about securing a regulatory environment that benefits Anthropic under the guise of altruism?
Leike’s deep immersion in AI safety—particularly his work on existential risks—might naturally incline him to support SB 1047. But this inclination could be less about the bill’s merit and more about an overemphasis on worst-case scenarios. It’s a common pitfall among those entrenched in safety culture: seeing the specter of catastrophe in every shadow. This perspective could lead Leike to endorse regulations that are, at best, overly cautious and, at worst, stifling to innovation.
Moreover, Leike’s influence within the AI safety community is not to be underestimated. His endorsement of SB 1047 could sway both public opinion and legislative decisions, pushing the narrative that strict regulation is the only path forward. But what if this narrative is self-serving? What if, in promoting SB 1047, Leike is not just advocating for safety, but also for a regulatory landscape that reinforces Anthropic’s competitive advantage?
In sum, Jan Leike’s expertise in AI safety positions him as a formidable advocate for responsible AI development. But let’s not lose sight of the fact that expertise often comes with its own set of blinders. His role at Anthropic and his focus on alignment introduce potential biases that could skew his stance on SB 1047. This isn’t to say his concerns are without merit—far from it. But as we consider the implications of CA SB 1047, we must also consider the possibility that what’s being presented as a push for public safety might also be a strategic move in a much larger game, one where the lines between ethical responsibility and corporate interest are, as always, perilously thin.
Take Heed to the Perils of Overregulation: How AI Safety Myopia Threatens Innovation and Progress
Jan Leike, your argument is, frankly, absurd. Yes, AI holds the potential to cause unprecedented harm, but it is also the most powerful instrument for good that humanity has ever had at its disposal. The best risk management strategy is not to stifle innovation with overzealous regulation but to enhance our capacity to innovate and harness these tools to their fullest potential. Your approach is pathetically shortsighted, barely scratching the surface of what is needed. It’s a feeble attempt to address the challenges of AI, much like the half-hearted caution Elon Musk exhibited when he had a personal stake in pushing through an AI regulation bill that ultimately led to premature, avoidable deaths. Musk’s so-called “caution” was nothing more than a veneer, a hollow gesture that barely concealed his indifference.
I get it—you see the allure of AI safety regulations because they promise additional resources and financial gain for your research. But let’s not pretend that what’s good for you and Anthropic is good for California or the public. The truth is, SB 1047 serves your interests far more than it serves the interests of Californians or the broader society.
Let’s be clear about the reality of SB 1047. The bill was never about encouraging narrow AI self-regulation. It was about enforcing strict, state-mandated oversight on AI development, particularly for models with the most advanced capabilities. Originally, the bill included extensive provisions for pre-harm enforcement and a Frontier Model Division (FMD) intended to oversee compliance. Though some aspects were amended, narrowing pre-harm enforcement to reduce potential overreach, the bill’s focus remained firmly on imposing rigorous safety protocols rather than fostering a competitive environment through self-regulation. This regulatory framework, far from promoting innovation, threatens to strangle it by burdening AI developers with compliance requirements that could stifle the very creativity and progress we need to advance.
SB 1047’s provisions are not designed to push boundaries—they are a straitjacket, wrapping innovators in red tape and diverting resources from the development of groundbreaking technologies to legal compliance. This is the paradox we face: in the name of safety, we risk suffocating the innovation that could drive economic growth and secure California's leadership in the global tech landscape. The bill's impact could be especially devastating for smaller developers and the open-source community, who may find themselves unable to compete under the weight of these new regulations. The bill doesn’t encourage competition; it suppresses it, creating a regulatory capture scenario where only the largest and most resource-rich companies can survive.
We must ask ourselves: Can California afford to sacrifice its technological leadership on the altar of overregulation? Is it worth risking the next wave of technological breakthroughs for the illusion of safety? The answer is clear: SB 1047 is a misstep, a dangerous precedent that will stifle the very innovation that has driven California to the forefront of the global tech economy. Governor Newsom, the future of California’s technological dominance is at stake—veto SB 1047 to protect the innovation that underpins our economic prosperity and societal progress.
My focus has always been on preserving our collective ability to explore, innovate, and find our place in this universe. But instead, I’m forced to battle against a cadre of fools who’ve forgotten the teachings of Aristotle, who’ve abandoned the Socratic Method, and who now stand in the way of humanity’s social and economic survival. I refuse to let this happen. We must stand firm against the tide of overregulation and ensure that California remains a beacon of innovation, not a cautionary tale of what happens when bureaucracy stifles creativity.
Do Not Go Gentle into That Good Night,
Rage, Rage Against the Dying of the Light.
Yours,
SMA, Dark Empress. <3
The Void
References
Korte, L. “Elon Musk backs California bill to regulate AI,” POLITICO, August 26, 2024. Retrieved from politico.com.
Liu, C. “Elon Musk Backs California AI Safety Bill Amid Industry Backlash and Regulatory Debate,” Business Times, August 28, 2024. Retrieved from btimesonline.com.
The Pinnacle Gazette. “Elon Musk Backs California's AI Regulation Bill Amid Controversy,” August 28, 2024. Retrieved from https://evrimagaci.org.
Waters, J. “Anthropic Offers Cautious Support for New California AI Regulation Legislation,” THE Journal, August 28, 2024. Retrieved from thejournal.com.
If you wish to inquire about commissioning an article for your journal, magazine, or media outlet, you can reach me at darkempress@the-void.blog.