Google Researcher "Incredibly Ashamed" After Company Signs Secret Pentagon Deal
More than 600 Google employees sent a letter to CEO Sundar Pichai urging him not to sign a classified AI deal with the Pentagon. He signed it anyway. Hours later, one of the company's own senior researchers went public with how he felt about it.
"Incredibly ashamed," said Andreas Kirsch, a senior research scientist at Google DeepMind. He told Business Insider he had hoped the employee letter would have an effect — and instead woke up to "a worst-case version" of the contract. He called the move "shameful" and said it violates Google's foundational "don't be evil" principles. He also questioned openly how he could continue his work under these conditions.
That's a significant thing for a sitting researcher at one of the world's most powerful AI labs to say out loud with his name attached.
What the Deal Actually Says
The contract, first reported by The Information, allows the Pentagon to deploy Google's Gemini AI models on classified military networks for "any lawful government purpose." It amends an existing unclassified contract signed late last year, and it gives the Department of Defense access to Google's AI in spaces that neither the public nor most of the company's own employees will ever be able to see or scrutinize.
The language does include some stated limits. The contract bars using the AI for domestic mass surveillance and prohibits autonomous weapons without human oversight. But it also states explicitly that Google cannot veto "lawful government operational decision-making."
That last clause is the one critics have seized on. If Google has no veto over how the government uses the technology once it's deployed on classified systems, then the safety provisions, however well intentioned, are essentially unenforceable. The company can write rules all day; it can't audit compliance on systems it has no access to.
What Employees Were Worried About
The letter, signed by more than 600 employees, including senior figures at DeepMind, warned that classified military AI work could see the technology used in "inhumane or extremely harmful ways" and could do irreparable damage to Google's reputation. It reached Pichai before the deal was confirmed. It didn't change the outcome.
This isn't Google's first internal battle over military contracts. In 2018, the company faced intense employee pressure over Project Maven, a Pentagon program that used AI to analyze drone footage, and ultimately chose not to renew that contract. The current deal is a marked departure from that earlier stance.
The Bigger Picture
Google isn't alone in this space. The Pentagon has signed similar deals with OpenAI, Anthropic, and Elon Musk's xAI, and companies like Palantir and Anduril have built their entire business models around defense work. The department, which recently renamed itself the Department of War, has been actively pushing for fewer restrictions on the AI tools it uses and for faster deployment of them.
The AI industry is at an inflection point where the technology is capable enough to be genuinely useful in military applications, and the money flowing from defense contracts is large enough to be hard to turn down. The internal tension at Google reflects a broader unresolved question across the sector: at what point does building powerful AI and handing it to governments for classified use become something researchers can no longer square with their own ethics?
Kirsch said the quiet part out loud. Most of his colleagues who feel similarly probably won't.
Google maintains it supports national security work while adhering to responsible AI principles. What those principles mean in practice on classified networks where Google has no oversight authority is a question the company hasn't fully answered.