It’s a wild time. Then again, American companies have been talking big for decades.
The whole concept of entering the market cheaply or for free, getting everyone on board, and then letting prices skyrocket has been the norm for American venture capital for years.
Anthropic’s claim that they have trained an AI model so extensively that they don’t dare release it freely isn’t particularly interesting by itself. In fact, it fits a familiar pattern of “this is too dangerous to just release” that we often see in security and AI.
It’s worth noting that Anthropic is an AI company positioning itself as an alternative within the LLM world: not so much a direct GPT clone as a player with a different focus and philosophy.
It’s also worth noting that Anthropic is involved in government contracts in the U.S., and that the relationship between big tech and the government is rarely neutral.
The claim that Anthropic has a model in “Claude Mythos” that is “too powerful to share with the world” leaves an uneasy aftertaste. Not because it is necessarily untrue, but because it fits perfectly into a world where AI capabilities increasingly sit at the intersection of defensive and offensive use.
The real issue isn’t Anthropic itself. The issue is that every serious government is working on AI and cyber capabilities. And yes, for decades, governments have been interested in breaking or circumventing encryption and finding vulnerabilities in systems. So the question isn’t whether this kind of technology exists, but who has access to it and under what conditions.
If an AI is capable of identifying or even exploiting structural weaknesses in software, it becomes a dual-use system in its purest form. Then you’re not just engaged in “security research,” but also in something that can be used directly for offensive purposes. And that’s exactly where the tension arises: do you share it widely, or do you limit it to a select group of partners?
In an ideal world, Anthropic would do two things. First: clarify what is actually fact and what is marketing or interpretation. Second: if the claim does hold up, enable the broader security community to defend itself against what such a model can do.
As a commercial company, they are free to choose to share these types of models only with a small group of major tech companies. But this clashes with the traditional ethos of infosec, where the prevailing view is that knowledge ultimately strengthens defenses.
After more than twenty years in information security, I can say it isn’t controversial that AI can accelerate the discovery of vulnerabilities. That’s not a new idea. In fact, infosec was one of the first fields where machine learning and automation were seriously applied, long before LLMs grabbed the spotlight. Those, however, were targeted, task-specific systems, not general language models with broad reasoning capabilities.
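As a minimal sketch of what “task-specific” looked like in practice, here is an unsupervised anomaly detector over synthetic login events. Everything in it is invented for illustration (the features, the data, the thresholds), and it assumes numpy and scikit-learn are available:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" logins: [hour of day, bytes transferred, failed attempts].
normal = np.column_stack([
    rng.normal(10, 2, 1000),     # activity clustered around office hours
    rng.normal(500, 100, 1000),  # modest transfer volumes
    rng.poisson(0.2, 1000),      # the occasional failed attempt
])

# Two hand-crafted suspicious events: a 3 a.m. bulk transfer, a brute force.
suspicious = np.array([
    [3.0, 50_000.0, 0.0],
    [4.0, 300.0, 25.0],
])

# Fit on normal traffic only; the model learns what "usual" looks like.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspicious))  # -1 flags an outlier, 1 means "looks normal"
```

Narrow, measurable, and useless outside its one job: the opposite of a general-purpose language model.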
The hype surrounding LLMs has somewhat overshadowed that aspect of AI, even though that is often where the practical impact lies: systems that independently recognize patterns, generate variations, and iteratively improve in security contexts.
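That loop of recognizing patterns, generating variations, and iteratively improving is essentially what a coverage-guided fuzzer does. A minimal sketch follows, with a toy target function standing in for real software under test; the function, the “magic bytes,” and the loop parameters are all invented for illustration:

```python
import random

def target(data: bytes) -> set[str]:
    """Toy parser standing in for real software under test.
    Returns the set of branches the input reached."""
    branches = set()
    if data[:1] == b"F":
        branches.add("magic_F")
        if data[1:2] == b"U":
            branches.add("magic_FU")
            if data[2:3] == b"Z":
                branches.add("magic_FUZ")  # a "deep" state random scanning rarely hits
    return branches

def mutate(data: bytes) -> bytes:
    """Generate a variation: flip a bit, insert a byte, or delete a byte."""
    buf = bytearray(data) or bytearray(b"\x00")
    op = random.choice(("flip", "insert", "delete"))
    pos = random.randrange(len(buf))
    if op == "flip":
        buf[pos] ^= 1 << random.randrange(8)
    elif op == "insert":
        buf.insert(pos, random.randrange(256))
    elif len(buf) > 1:
        del buf[pos]
    return bytes(buf)

# The feedback loop: keep any variant that triggers behavior we haven't seen.
corpus = [b"AAAA"]
seen: set[str] = set()
for _ in range(200_000):
    candidate = mutate(random.choice(corpus))
    new = target(candidate) - seen
    if new:  # iterative improvement: new coverage means the input earns a spot
        seen |= new
        corpus.append(candidate)
        print(f"kept {candidate!r}, coverage now {sorted(seen)}")
```

No language model in sight, yet the system independently digs deeper into the target with every kept input. That is the kind of practical impact the LLM hype tends to drown out.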
My point, therefore, remains quite simple. It’s not that Anthropic is necessarily wrong, but that claims like these are too easily pulled into a geopolitical or marketing context. And amid all that noise, the core issue gets lost: we are building systems that have the potential to accelerate both defense and offense, without a clear playing field or oversight.
And that might just be the most uncomfortable part. It’s not that a single company is “all talk,” but that a company that actually delivers is growing faster than the power structures meant to keep it in check.