On the cyber frontier, the fastest gun has become the fastest model. That thought has stuck with me since Anthropic announced Project Glasswing. I just watched Once Upon a Time in the West (great movie), but I'll skip the analogies. I'm no Leone when it comes to dramatic showdowns.
Back to shop talk. Glasswing isn’t a one-off. It’s the latest, and from what we hear, the greatest, entry in what is now, very clearly, a category. Frontier capability is being aimed straight at the heart of cybersecurity, and what that means for the rest of us trying to stay a half-step ahead of the next breach is being worked out in production, not in the lab.
This Is Now a Category, Not an Announcement
If Glasswing felt like a singular moment, it isn’t. Singular moments in our profession are getting very rare. If something hits you on the head, get your hard hat, because something else is coming.
Think back to when OpenAI launched Aardvark, their agentic security researcher, in 2025. It became Codex Security in 2026 when it was released for enterprise use. Theori followed right after with its own AI discovery platform, Xint. More are on the way; just watch for the signs. What matters most isn't the tools themselves, but how quickly we moved from noticing something new to actually finding new CVEs in real code.
In its first 30 days of beta, OpenAI reported that Codex Security identified 792 critical findings, including 14 CVEs across key projects such as PHP, libssh, and Chromium (OpenAI, March 2026). Neither we nor the attackers could achieve that kind of scanning a year ago.
The Economics Already Shifted
I'm less interested in these tools as cool technical milestones, though they are, and I wish I could be more fascinated by that… I really do. I'm trying to do the math. The economics of vulnerability discovery have changed. Past tense: not "will change", but "have changed".
We’ve benefited from an imbalance all our careers. Sure, a zero-day can be catastrophic, but finding weaknesses was slow, expensive, and heavily dependent on human expertise. Of course, exploiting them usually took far less effort. Frontier AI is narrowing that gap the moment it lands in the wrong hands. Systems that can search code, reason over complex environments, and identify exploitable conditions scale offensive capability in ways most security teams aren’t built to handle.
Sure, AI helps us defenders, too. The honest question is whether defensive capability can outpace the governance, controls, and operational discipline needed to keep it pointed in the right direction. If the model can help your team find flaws faster, it can do the same for the other side, and attackers won’t be slowed down by your change advisory board, procurement cycle, or quarterly risk review.
Credit Where It’s Due
Credit goes to the labs. Anthropic is presenting Glasswing as a real defensive tool, and OpenAI is doing the same with Codex Security and their Trusted Access for Cyber program, which is an identity and trust framework (OpenAI, February 2026). Whatever you think of these companies, they’re not being naive. Both are taking the dual-use issue seriously.
We can't afford to be naive about it either. Once a capability like this exists, and now there are several, the race is about adoption and execution. We get to decide whether these tools become a force multiplier for our defenses or an accelerant for the people trying to kick the doors in. Well, actually, we just get to decide whether we ignore the sound of boots banging on the outside of the door or do something about it.
The Implications Are Immediate
- Vulnerability management has to become faster and more risk-driven. The monthly patch cycle is already on borrowed time, and the cadence is getting worse, not better.
- Detection engineering needs to account for automated reconnaissance and much faster exploit iteration than anything in your current threat model.
- Incident response needs to assume attackers will test more paths, more quickly, and with less human friction than your playbook assumes.
- “Patch later” is becoming a more dangerous habit by the month. When public agents are surfacing CVEs at the rates we’re now seeing, the gap between disclosure and weaponization shrinks for everybody.
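To make the detection-engineering point concrete, here is a minimal sketch of a rate-based detector for automated reconnaissance. Everything in it, the record format, the threshold, the function name, is my own invention for illustration, not any vendor's product: the idea is simply that an agent iterating at machine speed touches far more distinct paths per minute than a human ever will.

```python
from collections import defaultdict

# Assumed threshold: humans rarely request 50+ distinct paths in 60 seconds.
WINDOW_SECONDS = 60
DISTINCT_PATH_THRESHOLD = 50

def flag_automated_recon(events, window=WINDOW_SECONDS,
                         threshold=DISTINCT_PATH_THRESHOLD):
    """Flag source IPs requesting an implausible number of distinct paths
    within one time window. `events` is an iterable of
    (timestamp_seconds, source_ip, path) tuples."""
    buckets = defaultdict(set)  # (source_ip, window_index) -> set of paths
    for ts, src, path in events:
        buckets[(src, int(ts // window))].add(path)
    return sorted({src for (src, _), paths in buckets.items()
                   if len(paths) >= threshold})

# Synthetic example: one scanner sweeping 60 endpoints, one normal user.
events = [(i, "203.0.113.9", f"/endpoint/{i}") for i in range(60)]
events += [(5, "198.51.100.7", "/login"), (20, "198.51.100.7", "/home")]
print(flag_automated_recon(events))  # -> ['203.0.113.9']
```

A fixed threshold like this is only a starting point; the real work is tuning the window and baseline per environment. The point is that if your detections still assume human-paced probing, an agent will sail under rules tuned for last year's tempo.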
Security teams are very good at playing the game against our usual foes. We're less comfortable, and slower to admit it, when the rules of the game change. This year is one of those moments. New rule book, folks.
Two Bad Instincts
Right now, we need to avoid two common mistakes: getting too excited and getting too skeptical. Overconfidence and complacency are both easy traps. Vendors will promise miracle solutions, and your board will want to know why you haven't bought one, but buying one isn't a strategy.
What You Can Do
Focus on the basics and get ready. It's not glamorous work, but the threat landscape is speeding up.
- Inventory your assets. You probably can't do it fully, and that's a problem. Your attack surface has to be managed, and you can't shrink what you don't inventory.
- Tighten identity controls. Phishing-resistant MFA on your privileged accounts, now, seriously, now. While you’re at it, zero-trust and access reviews. You know what? Review your entire IAM program, and read my new paper on it.
- Improve logging and monitoring with identity context. Don’t just add more telemetry.
- Implement or test segmentation so one compromise doesn’t cascade across the enterprise.
- Minimize time to patch. Assume the window is shorter than it used to be.
- Tabletop your response plan, and stress-test it against a scenario where the attacker iterates at machine speed.
- Use AI where it genuinely strengthens your defense. Evaluate tools now on the market with clear eyes. Keep humans in the loop for production decisions. Keep their hands on the triggers.
- Make sure your business continuity plans are tight, and your risk picture is honest.
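On minimizing time to patch: you can't manage the window if you aren't measuring it. As a hedged illustration (the record format and sample dates are invented for the example), here is one way to compute a time-to-patch distribution from disclosure and remediation dates:

```python
from datetime import date
from statistics import median

def patch_lag_days(records):
    """Given (cve_id, disclosed, patched) records, return per-CVE lag in
    days plus the median, the number most worth trending over time."""
    lags = {cve: (patched - disclosed).days
            for cve, disclosed, patched in records}
    return lags, median(lags.values())

# Invented sample data for illustration only.
records = [
    ("CVE-2026-0001", date(2026, 1, 5), date(2026, 1, 9)),
    ("CVE-2026-0002", date(2026, 1, 12), date(2026, 2, 2)),
    ("CVE-2026-0003", date(2026, 2, 1), date(2026, 2, 4)),
]
lags, med = patch_lag_days(records)
print(lags)  # per-CVE lag in days
print(med)   # median lag in days
```

If that median is drifting up while public agents keep shrinking the disclosure-to-weaponization window, that single number is the one to put in front of the board.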
In other words, make sure your own house is in order before a fight breaks out.
One More Thought
Here's what I keep thinking about: Anthropic, OpenAI, and at least one other company outside the big labs have all released similar capabilities at roughly the same time. This is what responsible disclosure looks like in this field. That's the part we can see.
This makes the silence from the other categories of actors a lot more conspicuous. State-backed programs. Well-funded criminal operations. The quiet end of the private offensive-security market. If three or four public labs converged on this capability within a year, the assumption that nobody quieter got there first, or got further, no longer looks conservative. Part of me fears that Anthropic and OpenAI were simply the first to be ethical, or commercial, enough to disclose. Quiet capability beats public capability every time in this line of work, and the quiet kind doesn't come with a press release. Maybe I'm paranoid, but maybe I'm not. Time will tell.
Closing
Speed has always mattered in the cyber world, but never this much. That’s the main takeaway. Glasswing, Codex Security, and whatever comes next aren’t just more AI news, they’re signs that the landscape has already changed.
For CISOs, the job now is to recognize these changes, respond calmly, and make sure the best tools are working for your team, not for attackers.
Focus less on flashy new tools and more on what attackers actually use to break in and stay in. That’s what matters. Good luck, we have a lot to do and not much time.

