Anthropic Won't Let You Run Mythos. But Claude Is Already in Your Salesforce.

Tal Shapira
Updated
April 11, 2026
4 min read

When the New York Times' Kevin Roose described Project Glasswing as a frontier AI model “so powerful that Anthropic is not releasing it to the public,” he wasn’t being sensational. That’s the accurate read. Anthropic built something capable enough that they decided the responsible move was to gate it behind a coalition of 50 organizations and $100 million in controlled access credits before anyone else could touch it.

The results justify the caution. Claude Mythos located a 27-year-old vulnerability in OpenBSD and a 16-year-old flaw in FFmpeg that automated testing had missed across five million runs. The window between discovery and exploitation has collapsed. What once took months now happens in minutes.

Alex Albert, Anthropic’s Head of Developer Relations, called it “possibly the most consequential event in the AI industry I’ve seen up close since joining Anthropic almost 3 years ago.” That conviction is warranted.

It is also pointing at half the problem.

The AI your employees are actually using

Glasswing is built around a specific threat: attackers using AI to find and exploit vulnerabilities in software infrastructure before defenders can patch them. That is a real and serious problem worth the investment.

It is not, however, where most enterprise security teams are encountering AI risk day to day.

The AI most employees interact with isn’t a foundation model their company deployed or controls. It’s a feature inside a SaaS subscription. Copilot inside Microsoft 365. Einstein inside Salesforce. Gemini inside Google Workspace. These didn’t arrive through a separate procurement process or a security review. They came embedded in tools employees already used, with permissions already granted, at the pace of a software update.

That’s AI delivered as a layer on top of SaaS — and it represents the majority of enterprise AI activity. Cyera’s team described the visibility problem well: AI visibility without identity context is just a list. Knowing an AI agent exists tells you almost nothing. Knowing what it can access, what it’s doing, and whether that behavior makes sense given who authorized it — that’s the actual question security teams need to answer.

Most can’t.

The threat that doesn’t need to find a bug

Glasswing targets the attack path that requires an adversary to identify a vulnerability and exploit it from outside the system. There’s a gap to cross. Time, skill, and opportunity all constrain how quickly that can happen.

An AI agent operating inside your SaaS environment with a valid OAuth token doesn’t have that gap. It’s already in. It was provisioned, connected, and started operating. In many organizations, that happened without a formal security review, without a defined scope of access, and without any monitoring on what it does after the fact.

One security team recently discovered 150 distinct Copilot agents running in their environment. All deployed in a single week. None reviewed by security.

An attacker who compromises one of those agents — through prompt injection, a supply chain attack on the underlying model, or a misconfigured permission scope — doesn’t need to find a decade-old vulnerability. They inherit whatever the agent was authorized to do: read access to sensitive files, write access to shared drives, the ability to query CRM records or trigger downstream automations.
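The "inherited permissions" problem above is easy to make concrete. A minimal sketch, assuming a hypothetical inventory of agent OAuth grants (the scope names and agent IDs are illustrative, not any vendor's real API), is simply to diff each grant against an approved allowlist:

```python
# Hypothetical sketch: flag agent OAuth grants whose scopes exceed an
# approved allowlist. Scope names and agent IDs are illustrative only.

APPROVED_SCOPES = {"files.read", "calendar.read"}

def overbroad_scopes(grants):
    """Return {agent_id: scopes beyond the allowlist} for risky grants."""
    findings = {}
    for agent_id, scopes in grants.items():
        extra = set(scopes) - APPROVED_SCOPES
        if extra:
            findings[agent_id] = sorted(extra)
    return findings

grants = {
    "copilot-hr-bot": ["files.read", "files.write", "mail.send"],
    "crm-summarizer": ["files.read"],
}
print(overbroad_scopes(grants))
# {'copilot-hr-bot': ['files.write', 'mail.send']}
```

Anything the check surfaces is exactly the blast radius an attacker inherits on compromise: the agent's write and send scopes become theirs, no vulnerability required.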

The model most security tools are missing

Most security tools were built to watch humans. They track logins, file access, configuration changes — all tied to human accounts. When an AI agent accesses 400 files in 15 minutes, those tools either attribute the action to the person who authorized it, or miss it entirely.

That’s the wrong model. An AI agent acting on behalf of a user is not the same as the user acting. The behavioral baseline is different. The risk profile is different. The question you actually need to answer is whether this agent’s behavior makes sense given what it’s authorized to do, and given what the authorizing human normally does. Answering that requires holding identity, behavior, and SaaS context together in the same view.
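The "400 files in 15 minutes" example lends itself to a per-identity baseline check. A minimal sketch, assuming you already collect file-access counts per agent per time window (all numbers here are illustrative), flags activity that sits far above the agent's own history:

```python
# Hypothetical sketch: is this agent's file-access rate anomalous relative
# to its own baseline? All numbers are illustrative.

from statistics import mean, stdev

def is_anomalous(history, current, z_threshold=3.0):
    """Flag when current activity sits > z_threshold std devs above baseline."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > z_threshold

# Files accessed per 15-minute window, last 12 windows
agent_baseline = [3, 5, 2, 4, 6, 3, 4, 5, 2, 3, 4, 5]
print(is_anomalous(agent_baseline, 400))  # True: 400 files in 15 minutes
print(is_anomalous(agent_baseline, 5))    # False: within normal range
```

The key design point is that the baseline belongs to the agent identity, not the human who authorized it; a real system would hold a second baseline for the authorizing user and compare against both.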

Most organizations don’t have that. Most tools weren’t built to provide it. The security community is already asking what Glasswing doesn’t cover. They’re right to ask.

A note to Anthropic

Project Glasswing is a genuine contribution. Using frontier AI to find vulnerabilities before attackers do is exactly the kind of asymmetric defense the industry needs, and the commitment from the launch partners reflects real organizational will.

But here’s what’s worth sitting with: Claude is one of the AI agents already operating inside enterprise SaaS environments today. So is GPT. So is Gemini. The same class of models being pointed at software infrastructure to find vulnerabilities is also the class of agents that enterprise security teams need governance over — their access, their behavior, their blast radius if something goes wrong.

Mythos is too powerful to release to the public. That’s a responsible call. The Claude versions already running inside enterprise SaaS are another matter entirely. They’re there. They have access. And in most organizations, nobody is watching them.

Glasswing secures the infrastructure those models run on. That’s necessary. The other half — governing the agents already operating inside the application layer — is just as urgent. And it’s mostly still undone.


Dr. Tal Shapira

ABOUT THE AUTHOR

Tal is the Cofounder & CTO of Reco. Tal has a Ph.D. from the school of Electrical Engineering at Tel Aviv University, where his research focused on deep learning, computer networks, and cybersecurity. Tal is a graduate of the Talpiot Excellence Program, and a former head of a cybersecurity R&D group within the Israeli Prime Minister's Office. In addition to serving as the CTO, Tal is a member of the AI Controls Security Working Group with the Cloud Security Alliance.

Technical Review by:
Gal Nakash


