ChatGPT Uninstalls Surge 295% After Pentagon Deal: What Really Happened?
ChatGPT uninstalls surged 295% after OpenAI signed a controversial Pentagon deal involving autonomous weapons and mass surveillance — here's what really happened and why millions deleted the app.
It took one weekend. One contract. One decision.
And just like that, millions of everyday ChatGPT users — people who’d been using OpenAI’s chatbot to plan dinners, write emails, and help their kids with homework — started asking themselves a question they’d never had to ask before:
“Do I trust this thing anymore?”
For a staggering number of them, the answer was no.
The Deal That Broke the Internet
On February 28, 2026, news broke that OpenAI had signed a contract with the U.S. Department of Defense (DoD) — a deal that would give the Pentagon access to AI technology for classified military systems. The deal included terms that Anthropic, OpenAI’s closest safety-focused rival, had explicitly refused weeks earlier.
What did Anthropic refuse? The terms allegedly included using AI for mass surveillance of American citizens and powering fully autonomous weapons systems — things Anthropic CEO Dario Amodei said the company could not agree to in good conscience.
OpenAI agreed to those terms.
And the internet? The internet did not let it slide.
The Numbers Don’t Lie: A 295% Uninstall Surge
Within hours of the news spreading, app analytics firm Sensor Tower started recording something extraordinary:
- ChatGPT uninstalls jumped 295% day-over-day on February 28, roughly four times the app’s typical daily uninstall rate of 9% measured over the prior 30 days (see the quick math sketch after this list)
- U.S. downloads dropped 13% day-over-day on Saturday, then fell another 5% on Sunday
- 1-star reviews on the App Store surged 775% on Saturday, then doubled again on Sunday
- 5-star reviews dropped by over 50% in the same two-day window
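A quick way to sanity-check those figures: percentage changes compound as multipliers. The sketch below is illustrative Python only (the `apply_change` helper is ours, not Sensor Tower’s methodology), and its only inputs are the percentages quoted above. It shows that a +295% jump works out to roughly four times the baseline, and that the two download declines compound to about a 17% drop from Friday’s level.

```python
def apply_change(value, pct_change):
    """Apply a percentage change (e.g. +295 or -13) to a baseline value."""
    return value * (1 + pct_change / 100)

# A +295% jump means uninstalls ran at roughly 3.95x the baseline level.
uninstall_multiplier = apply_change(1.0, 295)  # -> 3.95

# Downloads fell 13% on Saturday and another 5% on Sunday; compounded,
# Sunday sits at roughly 83% of Friday's level, a ~17% total drop.
sunday_downloads = apply_change(apply_change(1.0, -13), -5)  # -> ~0.83

print(f"Uninstalls: {uninstall_multiplier:.2f}x baseline")
print(f"Downloads: {(1 - sunday_downloads) * 100:.0f}% below Friday's level")
```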
To put it plainly: this wasn’t a slow bleed. It was a dam breaking.
Hashtags like #CancelChatGPT and #QuitGPT went viral within hours of the announcement. Reddit threads flooded with users urging each other to “Cancel and Delete.” The backlash wasn’t just noise — it was measurable, rapid, and financially significant.
The Claude Effect: A Rival’s Best Weekend Ever
While OpenAI was scrambling for damage control, Anthropic’s Claude had the best weekend in the company’s history.
- Claude’s U.S. downloads rose 37% on Friday (February 27) and 51% on Saturday (February 28)
- App analytics firm Appfigures noted that Claude’s total daily U.S. downloads on Saturday surpassed ChatGPT’s for the first time ever
- Claude climbed to the #1 spot on the Apple App Store’s Top Free Apps leaderboard in the U.S.
- Claude also hit #1 in six other countries: Canada, Germany, Switzerland, Belgium, Norway, and Luxembourg
- Similarweb reported Claude’s U.S. weekly downloads were roughly 20 times what they had been in January
Why? Because Anthropic had said no to the deal, publicly and on principle. Consumers noticed and voted with their phones.
What Was in the Pentagon Deal — And Why Did It Matter?
To understand the backlash, you need to understand what was at stake.
The U.S. Department of Defense had approached both Anthropic and OpenAI about AI partnerships. According to Anthropic’s public explanation, the DoD demanded two specific capabilities that crossed ethical red lines:
- Use of AI through a custom instance (GenAI.mil) for autonomous weapons — meaning the AI could be used in decision-making about lethal force without guaranteed human oversight
- Mass surveillance of American citizens — using AI to monitor U.S. residents at scale
Anthropic refused both demands. The Trump administration responded by banning all federal agencies from using Anthropic products, designating the company a “supply chain risk.”
OpenAI stepped in — and accepted the terms Anthropic had rejected.
The timing made the optics even worse: the U.S. launched strikes against Iran almost immediately after the deal was signed, leading critics to directly connect the new military AI contract to real-world weapons deployment.
Sam Altman’s Apology Tour
OpenAI’s CEO Sam Altman quickly realized the rollout had been a disaster.
In a post on X, Altman admitted the deal had been “rushed” and announced that OpenAI was working with the Department of Defense to amend the agreement. He outlined two specific additions to the contract:
- An explicit clause stating the AI shall not be used for domestic surveillance of U.S. citizens
- Language reaffirming OpenAI’s commitment to its core safety principles
Whether those amendments will rebuild user trust remains an open question. Altman also held an AMA (Ask Me Anything) session on X to address user concerns — though many critics noted that contract amendments feel more like PR patch-ups than genuine course corrections.
Why This Matters Beyond the App Store
Yes, 295% is a dramatic number. But what this story is really about is something deeper: the moment consumers realized they have power in the AI era.
For years, people have debated the ethics of artificial intelligence in abstract terms: bias in algorithms, deepfakes, job displacement. The Pentagon deal was different. It was concrete. It was immediate. And it had a face: Sam Altman, the CEO behind a product people used every single day.
The switching costs in AI are near zero. Unlike leaving a social media platform — where your photos, friends, and history are locked in — switching from ChatGPT to Claude or Gemini takes about 30 seconds. The AI assistant market has no moat built on data lock-in. That makes consumer trust not just a nice-to-have, but a strategic asset that can evaporate overnight.
OpenAI has been aggressively pivoting toward enterprise and government contracts — a financially logical strategy where DoD budgets dwarf consumer subscription fees. But the Pentagon deal exposed a tension that can’t be easily resolved: the users who made ChatGPT a cultural phenomenon are not the same users who are driving OpenAI’s future revenue strategy.
The Bigger Picture: AI Ethics as a Market Force
For anyone who thought AI ethics were just academic hand-wringing, this weekend offers a case study worth remembering.
A company publicly committed to safety principles, Anthropic, gained market share by refusing a government deal and standing by its values. Another company, OpenAI, lost market share by accepting that same deal.
That’s new. That’s significant. And for AI companies across the board, it sends a signal that cannot be ignored:
Users are watching. And they will leave.
Whether this represents a lasting shift or a temporary spike in uninstalls remains to be seen. OpenAI retains massive advantages in brand recognition, developer adoption, and enterprise relationships. But the weekend of February 28, 2026, will be remembered as the first major consumer revolt in AI history, and as a reminder that trust, once broken, is extraordinarily hard to rebuild.
What Should You Do?
If you’re an AI user trying to make sense of all this, here’s what matters:
- Understand what you’re agreeing to. The companies building these tools are making decisions about how they’re deployed. Those decisions affect you.
- Your choice matters. App store rankings and uninstall data are among the few direct feedback mechanisms consumers have with tech giants.
- Stay informed. The AI landscape is moving fast. The values and decisions of these companies will shape everything from your inbox to international security.
This week, millions of people exercised that choice. Whether you agree with them or not, the message was unmistakable.


