A Story Bigger Than Iran
The AI war at the Pentagon.
A note from Garry Kasparov: Join me this Wednesday (March 4) at 5pm ET/2pm PT: I’ll be hosting a timely Zoom conversation with Ed Luce, US editor for the Financial Times and author of Zbig: The Life and Times of Zbigniew Brzezinski, America’s Great Power Prophet. It’ll be an important discussion on a half-century of lessons for US foreign policy amid a cascading crisis in Iran. This call is exclusive to premium subscribers of The Next Move. To make joining easier, we’re offering 30% off subscriptions through Wednesday. Premium subscribers get invitations to these exclusive conversations, early access to podcasts—and you might even win an autographed chess set! You can find registration information at the bottom of this post.
The unfolding chaos in Iran pushed the most important story in American democracy to page two. I’m referring to the drama between the DoD and artificial intelligence giants Anthropic and OpenAI. It revealed the bravery of one man—Anthropic’s Dario Amodei—and the cravenness of another—Sam Altman of OpenAI.
Anthropic lost its $200 million contract with the Pentagon because its CEO, Dario Amodei, insisted upon responsible limits on military and intelligence applications of AI. More importantly, Anthropic became the first American firm to be labeled a “supply chain risk,” meaning no company doing business with the US military can do business with Anthropic. Just as quickly as the government blacklisted Anthropic, OpenAI’s Sam Altman turned over his technology without real guardrails.
This AI saga is more significant than the situation in the Middle East because it impacts not only the conflict in Iran, but every future action taken by this administration and its successors. It’s almost impossible to overstate the impact that AI will have in the years to come.
Here’s what happened:
In negotiations with the Department of Defense, Anthropic CEO Dario Amodei laid out two conditions for using his firm’s tech: no mass surveillance of Americans; no fully autonomous weapons.
I have been an AI enthusiast for decades. Thanks to Deep Blue, I became the first white-collar professional to have his job threatened by AI! It didn’t take me long to realize that the future is in cooperation between man and machine, leveraging humanity’s creativity and ingenuity alongside technology’s brute-force capabilities and pattern recognition. AI will accelerate innovation and is essential to America’s national defense.
Of course, artificial intelligence carries risks, and Amodei’s proposed limits make sense. Defense Secretary Pete Hegseth wanted no meaningful restrictions on military and intelligence uses of AI—but no organ of government, especially the armed forces, should operate without rules. Yet the administration was willing to absorb the massive switching costs of dropping Anthropic—its existing contractor—in order to… what? Leave the door open to AI-powered mass domestic spying and autonomous killing machines?
Ethical considerations haven’t historically moved this administration, so the Defense Department’s refusal to engage with Amodei’s principles is unsurprising. Perhaps more shocking was the abrupt surrender of Amodei’s rival and former colleague, Sam Altman, who was waiting in the wings to snatch up the Pentagon’s business. Altman’s OpenAI inked an agreement with the government almost as soon as the Anthropic contract fell apart.
Just days earlier, as Anthropic sparred with the DoD, Altman appeared to stand with Amodei, stating that “we have long believed that AI should not be used for mass surveillance or autonomous lethal weapons.”
I guess the word “should” was doing a lot of work there—whatever should happen, the government will have access to the world’s most powerful technology sans serious guardrails. The military has a green light from OpenAI to use its tech to spy on Americans and develop killing machines devoid of any human accountability. OpenAI claims that its contract with the Pentagon offers a backstop against illegal and unsavory uses of artificial intelligence. However, those assurances rest upon the status quo and don’t bar the government from changing its policies down the line in order to misuse AI. That makes Sam Altman not only a chicken, but a liar too. I suppose humans retain a monopoly on misbehavior.
Indeed, aspiring authoritarians bring out two different archetypes among business elites.
There are those who trip over themselves to comply in advance. They are moved by fear of political retaliation and the vapid desire to add a couple of extra zeroes to their ledgers. They destroy institutional safeguards by discarding them without having to be asked. Sam Altman belongs to this class of cowards.
Then there are those industry leaders who stand by their values. No matter their ambitions, they refuse to put a price on principle. Losing a far-reaching contract with the US military will undoubtedly hurt Anthropic’s business, and landing on a government blacklist could be even more damaging. Still, no amount of money can buy back your dignity. Dario Amodei lost his tender with the Pentagon, but the Anthropic CEO held onto his beliefs and cemented his reputation as a man of courage. The “supply chain risk” smear is a badge of honor.
I must admit that I sympathized with Sam Altman when OpenAI’s board attempted a coup two years back. I owe the board an apology: they recognized something rotten about Altman that I did not. Over the years, Altman has had nice things to say about the risks associated with AI. Yet when push came to shove, his words meant zilch. By contrast, Dario Amodei acted on his stated values.
Americans should not raise the white flag on AI simply because one greedy businessman has. The Department of Defense may insist upon an expansive mandate to use and abuse artificial intelligence, yet America has laws, courts, and Congress—these institutions are under unprecedented strain, but they still exist. As with Trump’s politicized attacks on law firms, the “supply chain risk” claim will likely not survive legal scrutiny. Meanwhile, Americans have shown that they care about their privacy. The public can demonstrate that there’s a cost to failing to protect your customers, and they can do so by taking their business from OpenAI to Anthropic.
The administration will threaten and bully those who try to check it. Complying in advance only makes you look weak and merely delays the inevitable. If you bend the knee, your payment will eventually come due anyway. But if you fight a demagogue, you can win.