
AI enters critical security: cyber alarms, EU rules, increasingly powerful models, and the battle for chips and infrastructure.
We are no longer in the territory of impressive demos or time-saving tools. We are at the point where models are beginning to touch banks, core software, infrastructure, compliance, regulation, and industrial power.
Over the past few hours, a series of signals has lined up: taken one by one, they read like separate stories. In reality, they tell a single story of transformation. On one side, Anthropic, with a model powerful enough to push authorities and major financial institutions to question cyber risk. On the other, the European Union weighing whether ChatGPT Search should fall under a tougher regulatory framework. In between, Meta renewing its offensive with Muse Spark, and OpenAI making one point increasingly explicit: the real AI market now is not the viral toy but enterprise infrastructure.
The most significant case is Claude Mythos Preview. Anthropic placed it inside Project Glasswing, an initiative built with major technology and security players to find and fix vulnerabilities in the most critical software systems. The framing alone carries weight: the model is presented not as a generic assistant, but as a tool for contexts where error, abuse, or asymmetric access can have systemic consequences.
When regulators, central banks, ministries, and major financial institutions begin asking whether a model like this could alter the system’s risk profile, the AI sector stops being just a matter of innovation. It becomes a matter of resilience. The point is not only that a model can help defenders find flaws faster. The point is that the same capability, in the wrong hands or distributed without constraints, can further compress the time between the discovery of a vulnerability and its exploitation.
This is the real leap. Cybersecurity, until yesterday, could still be described as a field in which automation helped analysts and technical teams. Today, we are facing models that can read, understand, test, and modify code at a scale that is pushing governments and companies to rethink their security posture. It is the same transition we have already seen elsewhere: first the model is described as useful, then as inevitable, then as too powerful to be left to the market without a new level of control.
Anyone who follows AI closely knows that this issue is directly tied to the rise of AI agents and their ability to act on real systems. The better a model becomes at navigating complex environments, the less it remains just a language machine and the more it becomes an operational lever. And that is where the uncomfortable part of the story begins.
The second signal is political, and perhaps for that reason even more important. The European Commission is evaluating whether OpenAI should be subject to stricter obligations under the Digital Services Act after the company reported 120.4 million average monthly active recipients in the European Union for ChatGPT Search in the half-year ending in September 2025. The DSA's 45 million threshold is not a technical detail: it is the line above which a service is designated a very large online platform or search engine, the point at which its impact is treated as systemic.
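A back-of-the-envelope check makes the scale of the overshoot concrete. The sketch below uses only the two figures cited above; the variable names are illustrative, and the real designation process is a formal Commission decision, not a simple comparison:

```python
# Illustrative comparison of the reported usage figure against the DSA
# designation threshold. Figures are those cited in the text; actual
# designation is a formal decision by the European Commission.
DSA_THRESHOLD = 45_000_000          # average monthly active recipients in the EU
REPORTED_RECIPIENTS = 120_400_000   # ChatGPT Search, half-year ending September 2025

above_threshold = REPORTED_RECIPIENTS >= DSA_THRESHOLD
ratio = REPORTED_RECIPIENTS / DSA_THRESHOLD

print(above_threshold)   # True
print(f"{ratio:.1f}x")   # ~2.7x the threshold
```

The reported figure is not near the line; it is roughly 2.7 times over it, which is why the open question is when stricter obligations apply, not whether the scale is there.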
This matters because it changes the grammar with which we talk about AI. If Europe begins treating tools of this kind not only as technology products but as informational infrastructures to be monitored, then artificial intelligence enters a more mature and harsher phase. It is no longer enough to say that the model is useful, popular, or innovative. We have to discuss who is accountable for the risks, which transparency obligations are triggered, and how content, ranking, visibility, and public impact are managed.
And it is a step consistent with what we already know: when a technology approaches the threshold of mass mediation, it stops being neutral. The question is no longer only how well it works, but how it shapes attention, trust, and access to information. We have already seen this with social platforms; now we are seeing it with conversational AI and its evolution into an answer engine, a search tool, and a universal interface.
Meanwhile, the industrial side of the race keeps pushing forward. Meta has launched Muse Spark, the first visible model from its “superintelligence” team, at a moment when the group needs to prove it has not fallen behind after the lukewarm reception of Llama 4. This is not just a product story. It is a sign that competition among the major players has become too strategic to allow a prolonged run of missteps.
Meta wants to get back to the center of the table, not because the public needs another model, but because losing ground at this stage means losing bargaining power over the future of interfaces, services, cognitive labor, and data. In other words: the fight is not only over benchmarks, but over the right to sit inside the daily infrastructure of billions of people.
OpenAI, for its part, has chosen an even clearer line. In its enterprise update, it stated that more than 40% of its revenue now comes from that segment, that Codex has reached 3 million weekly active users, and that GPT-5.4 is driving agentic workflows. Translated: the business is shifting from consumer fascination to operational penetration inside companies.
This is where many people are still watching the wrong part of the movie. AI is not winning because it can produce increasingly fluid images or text. It is winning because it is being embedded into processes, teams, development systems, decision cycles, and work interfaces. Whoever controls these junctures does not just control a piece of software: they control the way work is delegated, accelerated, and made measurable.
To understand why this trajectory was almost inevitable, it is enough to return to a basic question: how are AI models trained, and above all, how are they then turned into services, APIs, coding tools, and automation systems? The core of power lies not only in the model, but in the supply chain that makes it continuously available, integrable, and reliable for the market.
Broadcom and Google have entered into a long-term agreement to develop custom AI chips through 2031. Anthropic announced an expansion of its partnership with Google and Broadcom to secure more TPU capacity starting in 2027. This is not a technical footnote for insiders. It is the material proof that the war over models rests on a war over foundations.
Every time we see a new model, a new feature, or a new adoption record, we should ask where the physical back room of that promise is located. The answer runs through data centers, energy, supply chains, semiconductors, cloud infrastructure, and computing capacity booked years in advance. It also runs through a resource that has now become openly geopolitical: computing power as a strategic asset in its own right.
If AI is entering critical security, if regulators are watching it as a systemic platform, if major groups keep escalating to dominate the interface of work, and if all of this requires chips, gigawatts, and dedicated infrastructure, then we are no longer looking at a tech trend. We are watching the construction of a new layer of power.