
Google is negotiating with the Pentagon to bring Gemini into classified environments: this is how artificial intelligence enters the decision-making chain of war.
Google is negotiating with the U.S. Department of Defense over an agreement that would allow the Pentagon to bring Gemini models into classified environments. On the table is a point that matters more than the headline: use would be permitted for “all lawful uses,” but Google is reportedly trying to insert a clause into the contract to prevent applications tied to domestic mass surveillance and to autonomous weapons without appropriate human control.
Translated into plain terms: Silicon Valley is no longer just supplying office tools to governments. It is negotiating the conditions under which its models can enter the most sensitive zones of the military machine. And when a model enters a classified network, the ground shifts. We are no longer talking about chatbots used to summarize emails or draft reports. We are talking about software that can assist analysis, synthesize intelligence, speed up procedures, and end up close to operational decisions.
The leap did not begin today. Google had already gotten its foot in the door. On December 9, 2025, Google Cloud announced that the Chief Digital and Artificial Intelligence Office had selected Gemini for Government as the first enterprise AI on GenAI.mil, the platform intended for roughly 3 million civilian and military employees of the Department. In that case, the declared scope was limited to unclassified processes: onboarding, compliance, contracts, document management, internal productivity. Already massive stuff, but still sold as intelligent administration.
Now the issue is different: bringing Gemini into classified networks. And that is where the glossy “productivity” presentations end. That is where the chapter begins in which the model works alongside the state’s most delicate flows. It is the step that suddenly makes very concrete what until now looked like a piece of government marketing. Anyone following the Big Tech race for artificial intelligence already knows that the real business is not the assistant that helps you write better. It is becoming infrastructure.
The Pentagon had been pushing the leading AI companies to make their tools available on classified networks with fewer restrictions than those normally applied to commercial users. The same reporting explained that these systems can be used in sensitive activities such as mission planning or targeting support, and that the central issue is always the same: the models are fast, useful, and impressive, but they also make mistakes and invent things, and in some contexts an error cannot be fixed with a “regenerate” button.
Here the problem is not technical in the narrow sense of the term. It is political and contractual. Who decides the limits? The company that builds the model? The government that buys it? The command structure that uses it? Google, according to the reporting, is trying to make at least two red lines explicit. But that very precaution tells the story: if it has to be written into the contract, extended military use is no longer a paranoid scenario confined to academic debate. It is an ongoing negotiation.
Every time a Big Tech company enters these environments, it sells three things at once: model, cloud, and operational dependency. The value is not only in Gemini itself. It lies in the fact that the model arrives with the entire package: security, compliance, infrastructure, data management, interfaces, integration. It is the same logic we have already seen when talking about power, technology, and control: whoever owns the deepest operational layer stops being a simple supplier and becomes part of the mechanism.
That is why the Google-Pentagon case matters more than the usual government announcement. It marks another step in the transformation of Big Tech into structural contractors of public power. And in the meantime it pushes the boundary of automated warfare even further: not necessarily a war fought by robots firing on their own, but a war prepared, filtered, accelerated, classified, and managed by statistical models trained elsewhere. If you want to understand where this trajectory can lead, it is worth revisiting the relationship between war games, artificial intelligence, and power.
When a Big Tech company negotiates to bring its models into the Pentagon’s classified networks, it is not selling a feature. It is negotiating its place inside the decision-making chain of force.