The Vercel and Context AI breach: an AI supply chain attack, step by step

Alon Klayman
Updated
April 21, 2026
5 min read

An infostealer, OAuth consent, AI SaaS apps, Chrome extensions - all links in a single supply chain attack. The thing is, this isn't the first time we've seen this combination, and it's unlikely to be the last. Let's unpack it.

On April 19, 2026, Vercel disclosed a security incident that, based on the company's own public statement, appears to have originated somewhere most security teams aren't watching: a third-party AI tool used by a Vercel employee. According to Vercel's bulletin, the attacker did not hit Vercel's perimeter directly — instead, the intrusion traces back to a compromise of Context AI, a third-party AI application the employee had authorized against their Vercel Google Workspace account. Reco can help you identify whether Context AI, or apps like it, are connected to your environment right now.

What happened: Context AI and Vercel

Context AI is a startup whose now-deprecated consumer product, the "AI Office Suite," let users connect AI agents to their Google Workspace to work with documents, slides, and spreadsheets. According to Context AI's published incident statement, at least one Vercel employee had signed up for the AI Office Suite using their Vercel Google Workspace account, establishing an OAuth trust relationship between that employee's account and the Context AI application.

That pre-existing OAuth trust appears to be the thread the entire incident hangs from.

Per Hudson Rock's public analysis, a Context AI employee device was infected with Lumma infostealer in February 2026. The malware is reported to have exfiltrated credentials, session data, and tokens from the machine. Hudson Rock has stated this was the only recorded infostealer infection tied to Context AI in their dataset, leading them to assess it as the likely initial access point — though the chain from that infection to the subsequent breach has not been publicly confirmed by Context AI.

According to Context AI, the company independently identified and stopped unauthorized access to its AWS environment in March 2026. Context AI conducted a forensic investigation and shut down the AI Office Suite along with its associated OAuth application.

Context AI's updated statement indicates that, based on subsequent information provided by Vercel and further internal investigation, OAuth tokens belonging to some AI Office Suite users were compromised during the incident. Context AI states that one of those tokens was used by the attacker to access Vercel's Google Workspace.

Vercel's bulletin describes what happened next: the attacker used that access to take over the employee's Vercel Google Workspace account, which in turn (directly or indirectly) enabled access to some Vercel environments and environment variables that were not marked as "sensitive."


Vercel has stated that environment variables explicitly flagged "sensitive" — which are stored in a form that prevents them from being read — were not accessed. A limited subset of customers whose non-sensitive variables were exposed have, per Vercel, been contacted directly and advised to rotate credentials.

While Vercel has yet to share details about which of its systems were broken into, how many customers were affected, and who may be behind it, public reporting indicates that a threat actor operating under the ShinyHunters persona has claimed responsibility for the attack and is advertising the stolen data for sale at an asking price of $2 million. This attribution has not been confirmed by Vercel or Context AI.

From Lumma Infostealer to Compromised Vendor(s), Stage by Stage


The reconstruction below is drawn from Vercel's and Context AI's official incident statements, with additional context from Hudson Rock's public analysis. It reflects what each party has disclosed — some elements, particularly the link between the February infostealer infection and the March AWS intrusion, are inferred rather than formally confirmed.

Stage 1 — Initial compromise (Context AI side)

Per Hudson Rock's reporting, a Context AI employee device was infected with Lumma infostealer. The malware is reported to have harvested Google Workspace session data and tokens, OAuth tokens, and internal service credentials from the machine. Context AI has not publicly confirmed this as the initial access vector.


Stage 2 — Context AI environment compromise

According to Context AI, the attacker gained unauthorized access to Context AI's AWS environment. Based on Context AI's updated statement, this access appears to have extended to authentication and integration components — including OAuth tokens issued to users of the Context AI application.


Stage 3 — The OAuth trust

Context AI's statement indicates that a Vercel employee had authorized the Context AI application using their Vercel Google Workspace account. The token issued to Context AI appears to have reflected the scope of access the employee's Vercel account held. 

Stage 4 — OAuth tokens compromised

According to Context AI, OAuth tokens belonging to users who had previously authorized the Context AI application were exposed during the breach of Context AI's environment, and appear to have included the token tied to the Vercel employee's account.


Stage 5 — Impersonation via OAuth

Context AI states that one of the compromised OAuth tokens was used by the attacker to access Vercel's Google Workspace. Vercel's bulletin describes the same event from its perspective, stating that the attacker used this access to take over the employee's Vercel Google Workspace account.


Stage 6 — Access to Vercel internal systems

Per Vercel's bulletin, the compromised Google Workspace access enabled the attacker to reach some Vercel environments and environment-related resources, either directly through permissions already in scope or indirectly through lateral movement. Vercel has not publicly detailed every internal system touched, though the company notes the attacker demonstrated what they assess as a sophisticated understanding of Vercel's systems.

Stage 7 — Data access and exposure

According to Vercel, the attacker accessed environment variables that were not marked as "sensitive" — meaning they were stored in a form that can be decrypted and read as plaintext. Vercel states that variables explicitly flagged "sensitive" were not accessed, and that there is currently no evidence those values were read. Investigation into other data categories is, per Vercel, ongoing.

Stage 8 — Impact

Vercel has stated that a limited subset of customers had non-sensitive environment variables exposed, and that those customers have been contacted directly with guidance to rotate credentials. Vercel reports engaging Mandiant and other cybersecurity firms, and notifying law enforcement. The full scope of exfiltrated data remains, per Vercel's own statement, under investigation.

Separately, public reporting indicates that a threat actor operating under the ShinyHunters persona has claimed responsibility for the attack and is advertising the stolen data for sale at an asking price of $2 million. Neither Vercel nor Context AI has publicly confirmed this attribution.

Why This Matters for Your Organization

The Vercel breach is not a Vercel story. Based on what the involved parties have disclosed, it's a SaaS-to-SaaS story — and it's a pattern every enterprise security team should be studying right now.

Three things make this incident worth learning from:

Based on public information, the pivot appears to have been an OAuth token, not a phished credential. The attacker did not need to bypass MFA or trick the employee into doing anything during the attack window. The OAuth trust relationship had reportedly been granted earlier, and was compromised at Context AI's side. Every third-party app your employees have authorized is a potential version of this same story.

Context AI's statement notes that the token issued to the Vercel employee reflected the scope of access that account held. If that description is accurate, it means the eventual impact of a future breach is shaped by decisions users make months earlier when they click "Allow." Over-permissioned OAuth grants age into liabilities.

The compromised app fits the classic "shadow AI" pattern. Based on Context AI's own statement, Vercel was not a Context AI customer — an individual Vercel employee signed up for the consumer-targeted AI Office Suite on their own. No procurement review, no security assessment, no contract — just a corporate Google account connected to a third-party AI tool that was later breached. This pattern is common across organizations where employees can self-serve AI sign-ups.

If your employees have connected third-party AI apps to corporate Google Workspace, Microsoft 365, or Slack accounts, you effectively inherit the security posture of every one of those apps — whether you knew you were doing so or not.

Identifying Context AI Apps and Vercel Usage in Your Environment

Traditional security tools have a blind spot here. Endpoint tools see browser activity but don't understand OAuth consent flows. Identity providers see tokens being issued but don't correlate them to downstream app risk. CASB tools see sanctioned apps but miss the long tail of self-service AI sign-ups. And incident indicators like the Context AI OAuth Client ID are easy to miss without dedicated visibility into SaaS-to-SaaS connections.

Using Reco, security teams can:

  • Identify Context AI connections directly. Reco surfaces every third-party OAuth grant against Google Workspace and other IdPs, so you can see exactly which users authorized Context AI, when, and with what scopes.
  • Search for the known IOC. Reco lets you pivot on the specific Context AI OAuth Client ID published by Vercel (110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com) and confirm whether it has ever been granted in your tenant.
  • Audit related risk surfaces. Reco flags over-permissioned OAuth grants, broad Drive and Gmail scopes, and plugin-level connections that could carry the same class of risk as Context AI even if the app is different.
  • See the SaaS-to-SaaS graph. Reco's graph-based visualization shows where AI apps intersect with business-critical SaaS like Google Workspace, Slack, and GitHub — so you can assess blast radius the way an attacker would.
  • Catch the next Context AI before it becomes an incident. Reco marks emerging-risk applications based on real-world incident patterns, giving security teams a head start on apps that warrant review.

Recommended actions

Whether or not your organization uses Vercel, the actions below apply broadly to any team whose employees can authorize third-party apps against corporate identity providers.

If you use Vercel:

  • Review and rotate any environment variables not marked as "sensitive" — treat them as potentially exposed. Use Vercel's sensitive environment variables feature going forward.
  • Review your Vercel activity log and recent deployments for unexpected activity.
  • Ensure Deployment Protection is set to at least Standard, and rotate Deployment Protection tokens if set.
  • Enable MFA on your Vercel account if you haven't already.
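To triage at scale, the environment-variable review can be scripted against Vercel's REST API. The sketch below is illustrative, not authoritative: the `/v9/projects/{id}/env` path and the `"sensitive"` type value are assumptions that should be verified against Vercel's current API documentation before use.

```python
import json
import urllib.request

VERCEL_API = "https://api.vercel.com"  # verify path/version against Vercel's API docs

def fetch_env_vars(project_id: str, token: str) -> list[dict]:
    """Fetch a project's environment variables (assumed v9 endpoint)."""
    req = urllib.request.Request(
        f"{VERCEL_API}/v9/projects/{project_id}/env",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp).get("envs", [])

def needs_rotation(env_vars: list[dict]) -> list[str]:
    """Keys of variables NOT stored as 'sensitive': per Vercel's guidance,
    treat these as potentially exposed and rotate them."""
    return [v["key"] for v in env_vars if v.get("type") != "sensitive"]
```

Running `needs_rotation(fetch_env_vars(...))` per project yields a rotation worklist you can hand to owning teams.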

If you use Google Workspace (whether or not you’re aware of Context AI or Vercel usage):

  • In the Google Admin Console, navigate to Security → Access and Data Control → API Controls → Manage Third-Party App Access.
  • Search for the Context AI OAuth Client ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com.
  • If the app appears, revoke its access and initiate incident response to check for unauthorized data access.
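The same check can be run programmatically across users with the Admin SDK Directory API's `tokens.list` method. A minimal sketch, assuming an authorized Directory service built elsewhere (service-account setup and domain-wide delegation are not shown):

```python
# The Context AI OAuth Client ID published by Vercel.
CONTEXT_AI_CLIENT_ID = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def matching_grants(tokens: list[dict], client_id: str) -> list[dict]:
    """Filter a user's OAuth grant list down to a single client ID."""
    return [t for t in tokens if t.get("clientId") == client_id]

def scan_user(directory, user_email: str) -> list[dict]:
    """Check one user's third-party grants for the Context AI client ID.

    `directory` is an authorized Admin SDK Directory service, e.g.
    build("admin", "directory_v1", credentials=creds) from
    google-api-python-client, with the admin.directory.user.security scope.
    """
    resp = directory.tokens().list(userKey=user_email).execute()
    return matching_grants(resp.get("items", []), CONTEXT_AI_CLIENT_ID)
```

Iterating `scan_user` over your user list gives a definitive yes/no on whether the IOC has ever been granted in your tenant.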

Check and remove the related Context AI Chrome extension:

There are online indications, including a publication from OX Security, that a Chrome extension associated with Context AI (extension ID omddlmnhcofjbnbflmjginpjjblphbgk) is tied to the Context AI IOC published by Vercel. OX Security reports that the extension was removed from the Chrome Web Store in March 2026, but if users installed it before removal, it may still be present locally.
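One way to check endpoints is to look for the extension ID on disk, since installed Chrome extensions live under each profile's `Extensions` directory. A minimal sketch; the data-directory paths below are common defaults and may differ for managed or non-standard installs:

```python
import sys
from pathlib import Path

EXTENSION_ID = "omddlmnhcofjbnbflmjginpjjblphbgk"

# Default Chrome user-data locations per platform (Chromium-based browsers
# such as Brave or Edge use similar layouts under their own directories).
CHROME_DATA_DIRS = {
    "linux": "~/.config/google-chrome",
    "darwin": "~/Library/Application Support/Google/Chrome",
    "win32": "~/AppData/Local/Google/Chrome/User Data",
}

def find_extension(data_dir: Path, ext_id: str = EXTENSION_ID) -> list[Path]:
    """Return profile subdirectories that still contain the extension on disk.

    Installed extensions live at <profile>/Extensions/<extension id>/<version>/.
    """
    return sorted(data_dir.glob(f"*/Extensions/{ext_id}"))

if __name__ == "__main__":
    default = CHROME_DATA_DIRS.get(sys.platform, CHROME_DATA_DIRS["linux"])
    hits = find_extension(Path(default).expanduser())
    print(f"{len(hits)} profile(s) contain {EXTENSION_ID}", *hits, sep="\n")
```

For fleet-wide sweeps, the same check is usually easier to ship through your EDR or MDM tooling than as a standalone script.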

Broader hygiene — applies to every organization:

  • Inventory every third-party app that holds OAuth grants against your IdPs. You almost certainly have more than you think.
  • Review OAuth scopes for over-permissioning — especially "Allow All" grants against Google Drive, Gmail, or equivalent Microsoft 365 scopes.
  • Establish a process for employees to surface AI tools they want to use, so shadow AI sign-ups don't quietly accumulate trust relationships over time.
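The scope review can be partially automated. The sketch below flags grants carrying broad Gmail or Drive scopes, using the same item shape the Admin SDK `tokens.list` call returns; the scope set is an illustrative starting point, not a complete policy:

```python
# Scopes that grant wide mailbox or file access; extend to match your policy.
BROAD_SCOPES = {
    "https://mail.google.com/",                     # full Gmail read/write
    "https://www.googleapis.com/auth/gmail.modify", # read/modify all mail
    "https://www.googleapis.com/auth/drive",        # full Drive access
}

def over_permissioned(grants: list[dict]) -> list[dict]:
    """Return grants whose scope list intersects the broad-scope set.

    Each grant mirrors an Admin SDK `tokens.list` item:
    {"clientId": ..., "displayText": ..., "scopes": [...]}.
    """
    return [g for g in grants if BROAD_SCOPES & set(g.get("scopes", []))]
```

Reviewing the flagged grants first concentrates effort on the apps whose compromise would look most like this incident.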

Want to easily evaluate the security posture of your SaaS applications, discover existing apps (shadow IT), browser extensions, and analyze associated activities, all from a centralized location? Try Reco today.

The Bigger Picture

Based on the public statements from Vercel, Context AI, and vendors like Hudson Rock, this incident appears to fit a pattern we've seen play out repeatedly over the past year: AI apps being shipped, adopted, and integrated faster than they can be secured or reviewed. Employees sign up for useful tools using corporate accounts. Those tools become part of the enterprise attack surface whether the security team knows about them or not.

The takeaway is not that employees should stop using AI tools. That horse has left the barn. The takeaway is that security programs need real-time visibility into which third-party apps hold live OAuth grants against corporate identity providers, what scopes those grants carry, and which of those apps represent concentrated risk.

For security teams, the priority has to be visibility. You cannot secure what you cannot see — and in a SaaS-to-SaaS world, what you cannot see includes the third-party AI apps that your employees trusted six months ago and forgot about.

Contact us today to see which AI apps are already connected to your environment, and whether Context AI is among them: https://www.reco.ai/demo-request.


Disclaimer: This analysis reflects what is publicly known about the Vercel and Context AI supply chain compromise at the time of publication, including reasonable inferences where public details are incomplete. The incident remains under active investigation by the affected parties, and key details — including the full scope of the incident, the precise initial access vector, and attribution — may evolve as additional information becomes available.


Alon Klayman

ABOUT THE AUTHOR

Alon Klayman is a seasoned Security Researcher with a decade of experience in cybersecurity and IT. He specializes in cloud and SaaS security, threat research, incident response, and threat hunting, with a strong focus on Azure and Microsoft 365 security threats and attack techniques. He currently serves as a Senior Security Researcher at Reco. Throughout his career, Alon has held key roles including DFIR Team Leader, Security Research Tech Lead, penetration tester, and cybersecurity consultant. He is also a DEF CON speaker and holds several advanced certifications, including GCFA, GNFA, CARTP, CESP, and CRTP.

Technical Review by:
Gal Nakash
