OAuth Is the New Perimeter: Anatomy of the Vercel-Context.ai Breach

by r00t · Reading time: 8 minutes
For most of the last decade, defenders have been told that "identity is the new perimeter." On 19 April 2026, Vercel disclosed a breach that turned that slogan into something measurably more uncomfortable: the perimeter isn't identity itself, but the OAuth graph built on top of it — the sprawling, mostly uninventoried web of third-party applications that employees have authorised to read mail, calendars, drives, and admin consoles. And that graph has become the easiest way into your environment.

This is the AckerWorx April deep-dive on what happened, how it worked, and what to do about it before the same chain runs through your tenant.

The Headline Facts

  • Affected platform: Vercel — frontend and serverless cloud deployment platform used by hundreds of thousands of organisations.

  • Disclosure date: 19 April 2026 (initial), 20 April 2026 (further guidance).

  • Initial victim: Context.ai, a small AI productivity tool used by at least one Vercel employee.

  • Initial access vector: Lumma Stealer infostealer malware on a Context.ai employee endpoint, originating from an unsanctioned download (publicly reported as a Roblox cheat).

  • Pivot mechanism: Compromised OAuth tokens from Context.ai's Google Workspace application.

  • Threat actor: ShinyHunters, with an extortion demand reported at roughly USD 2 million.

  • Dwell time: Approximately two months, from first compromise in February 2026 to disclosure in April.

  • Data exposed: Employee records, access keys, API keys, GitHub and npm tokens, and environment variables (the latter classified by Vercel as non-sensitive, but in practice highly useful to an attacker).

What makes this incident worth a full article rather than a paragraph in the trending recap is the absence of anything novel. There was no zero-day. No bypass of MFA. No clever cryptographic trickery. Every step in the chain was a documented, well-understood OAuth abuse pattern, executed competently against a trust relationship that nobody was actively monitoring.

The Attack Chain, Step by Step

Step 1 — Patient zero: an infostealer on a personal device

In February 2026, a Context.ai employee was infected with Lumma Stealer, a commodity infostealer that has been the backbone of credential-theft operations for years. The reported initial action was an off-the-clock download — publicly characterised as a Roblox cheat — that pulled in a malicious payload alongside it.

The lesson here is older than infosec itself: BYOD and mixed personal/corporate device use are still where chains begin. Lumma's job is straightforward — sweep browsers, password managers, session cookies, and authentication artefacts, exfiltrate to a C2, and let the buyer pick what's worth using.

Step 2 — From stolen creds to OAuth tokens

The stolen artefacts included credentials and session material tied to Context.ai's Google Workspace environment. Crucially, this gave the attacker access to the Google Workspace OAuth application registered by Context.ai (OAuth App ID 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbv, per Vercel's disclosure timeline).

Once an attacker controls an OAuth application's credentials and refresh tokens, MFA becomes irrelevant. OAuth tokens, once issued, are bearer tokens — possession is authority. They don't prompt for re-authentication. They don't trip step-up challenges. They just work, until they're explicitly revoked.
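The mechanics are worth seeing concretely. The sketch below builds the refresh-token grant an attacker replays once they hold an app's client secret and a victim's refresh token; the endpoint and parameter names follow Google's documented OAuth 2.0 token flow, and every value is a placeholder, not material from the incident.

```python
# Sketch of the OAuth 2.0 refresh-token grant. Endpoint and parameter names
# follow Google's documented token flow; all values are placeholders.
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://oauth2.googleapis.com/token"

def build_refresh_request(client_id, client_secret, refresh_token):
    # Note what is absent: no username, no password, no MFA challenge.
    # Possession of these three strings is the entire authentication.
    return {
        "grant_type": "refresh_token",
        "client_id": client_id,
        "client_secret": client_secret,
        "refresh_token": refresh_token,
    }

body = build_refresh_request(
    "placeholder-id.apps.googleusercontent.com",
    "placeholder-secret",
    "placeholder-refresh-token",
)
# An attacker POSTs this body to TOKEN_ENDPOINT and receives a fresh,
# short-lived access token in the JSON response -- no user ever sees a prompt.
print(urlencode(body))
```

What the payload does *not* contain is the whole story: revocation, not re-authentication, is the only control that touches it.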

Step 3 — Inheriting Vercel's trust

A Vercel employee had previously authorised Context.ai's app against their corporate Google Workspace account, granting it broad scopes for read access. From the attacker's perspective, this was the entire prize: a pre-authorised, MFA-bypassed path into Vercel's Google Workspace tenant via Google single sign-on.

From there the attacker enumerated internal systems — issue trackers, admin tools, internal environments — and began pulling environment variables and secrets from a subset of customer projects.

Step 4 — Discovery via extortion

The most uncomfortable detail in the disclosure: Vercel did not detect the breach internally. It was discovered when ShinyHunters chose to monetise it publicly with an extortion demand. The two-month dwell time is what you would expect when nobody is alerting on anomalous OAuth application behaviour, because almost nobody is.

Mapping to MITRE ATT&CK

Trend Micro and others have mapped the chain cleanly:

  • T1555 — Credentials from Password Stores (Lumma Stealer harvest)

  • T1539 — Steal Web Session Cookie

  • T1199 — Trusted Relationship (Context.ai → Vercel via OAuth)

  • T1078 / T1078.004 — Valid Accounts (Cloud)

  • T1087 / T1530 — Account Discovery / Data from Cloud Storage (env-var enumeration)

  • T1567 — Exfiltration Over Web Service

  • T1550 — Use of Alternate Authentication Material (the OAuth tokens themselves)

The single highest-value detection point is the pivot from T1199 → T1078: an OAuth application accessing resources outside its expected scope, or from unexpected IP ranges, or at unexpected times. If your detection stack does not have a rule for that today, it is the first thing to build this week.
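That rule is simpler to build than it sounds. The sketch below learns each OAuth app's historical scopes and source networks, then flags any event outside that baseline; the event schema is an assumption — adapt the field names to whatever your IdP's audit log emits (Google Workspace token events, Entra ID sign-in logs, and so on).

```python
# Detection sketch for the T1199 -> T1078 pivot: flag OAuth token events where
# a known app uses a scope or source network it has never used before.
# The event schema is an assumption -- adapt to your IdP's audit log.
from collections import defaultdict

def build_baseline(events):
    """Learn each app's historical scopes and source networks."""
    baseline = defaultdict(lambda: {"scopes": set(), "networks": set()})
    for e in events:
        baseline[e["app_id"]]["scopes"].add(e["scope"])
        baseline[e["app_id"]]["networks"].add(e["src_ip"].rsplit(".", 1)[0])  # crude /24
    return baseline

def flag_anomalies(baseline, events):
    findings = []
    for e in events:
        known = baseline.get(e["app_id"])
        if known is None:
            findings.append(("new-app", e))
        elif e["scope"] not in known["scopes"]:
            findings.append(("new-scope", e))
        elif e["src_ip"].rsplit(".", 1)[0] not in known["networks"]:
            findings.append(("new-network", e))
    return findings

# Illustrative data: a read-only integration suddenly requesting admin scope.
history = [
    {"app_id": "ctx-app", "scope": "drive.readonly", "src_ip": "203.0.113.10"},
    {"app_id": "ctx-app", "scope": "drive.readonly", "src_ip": "203.0.113.11"},
]
today = [
    {"app_id": "ctx-app", "scope": "admin.directory.user", "src_ip": "198.51.100.7"},
]
print(flag_anomalies(build_baseline(history), today))
```

A scope the app never requested before, from a network it never used, is exactly the signature this breach would have produced.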

Why OAuth Tokens Are the Attacker's Preferred Currency

Three properties make OAuth tokens uniquely attractive:

  1. They survive password resets. Rotating the user's password does not revoke issued OAuth tokens. You have to revoke them explicitly, in the identity provider's app-management surface — a place most organisations rarely look.

  2. They survive MFA. The token represents a completed authentication, including any MFA challenge. Possessing it skips that ceremony entirely.

  3. They are long-lived and refreshable. Refresh tokens regularly outlive the user's tenure, the project that prompted the integration, and frequently the vendor itself.

Combine these properties with a typical enterprise's complete absence of an OAuth application inventory, and you get the perfect persistence mechanism. Vercel was not unlucky; it was statistically representative.

The Hidden Risk in "Non-Sensitive" Environment Variables

Vercel's disclosure was careful to note that the exposed environment variables were not flagged as sensitive. This is technically true and operationally meaningless. Environment variables in modern deployment pipelines routinely contain:

  • Third-party API keys (analytics, payment processors, email providers)

  • Webhook signing secrets

  • Internal service URLs that map a target's architecture

  • Feature flag service tokens

  • Telemetry endpoints

None of these are credentials in the strict sense. All of them are useful to an attacker. The "sensitive vs non-sensitive" classification is a developer-experience convenience, not a security control. Anything reaching production should be treated as a credential, regardless of how a UI checkbox classifies it.
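Triage of an env-var dump can be largely mechanical. The sketch below scans variable values for secret-shaped strings using publicly documented token prefixes (AWS access keys, GitHub and npm tokens, Stripe live keys); the variable names in the sample are invented.

```python
# Triage sketch: scan a dump of environment variables for values that are
# secrets in all but name. The token prefixes are publicly documented formats
# (AWS, GitHub, npm, Stripe); the sample variables are invented.
import re

SECRET_PATTERNS = {
    "aws-access-key":  re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github-token":    re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "npm-token":       re.compile(r"\bnpm_[A-Za-z0-9]{36}\b"),
    "stripe-live-key": re.compile(r"\bsk_live_[A-Za-z0-9]{10,}\b"),
}

def classify_env(env):
    hits = []
    for name, value in env.items():
        for label, pattern in SECRET_PATTERNS.items():
            if pattern.search(value):
                hits.append((name, label))
    return hits

sample = {
    "ANALYTICS_KEY": "AKIA" + "A" * 16,  # a "non-sensitive" analytics var...
    "DEPLOY_HOOK":   "https://internal.example/hook",
}
print(classify_env(sample))
```

Anything this flags in a variable labelled "non-sensitive" is evidence that the UI checkbox and reality have diverged.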

Detection: Where to Hunt Now

If you want to know whether something like this is already running in your environment, these are the queries worth your weekend:

  • OAuth applications with admin or broad-read scopes (e.g. Microsoft Graph Mail.Read or Files.Read.All, Google https://www.googleapis.com/auth/drive, or equivalents) authorised in the last 24 months.

  • OAuth apps issued before a vendor's known compromise date — particularly anything authorised against AI productivity tools, browser extensions, or "free for personal use" SaaS.

  • Geographic and ASN anomalies on OAuth token usage. If a token historically used from your office IPs starts appearing from residential VPN exit nodes or a hosting provider, that is a finding.

  • Token activity outside business hours, especially for tokens tied to consumer-grade applications.

  • First-time API calls for a long-lived OAuth app — an integration that has used three Graph endpoints for two years and suddenly enumerates /users is announcing itself.
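The last hunt in that list is the easiest to automate. A minimal sketch, under an assumed `(app, endpoint, timestamp)` log schema — feed it whatever your IdP or API gateway actually records:

```python
# Hunting sketch: flag a long-lived integration's first-ever call to an
# endpoint. Log schema (app, endpoint, timestamp) is an assumption.
from collections import defaultdict

def first_time_calls(log, learning=3):
    """Return (app, endpoint, ts) the first time each pair appears,
    skipping each app's first `learning` events (its baseline window)."""
    seen = defaultdict(set)
    count = defaultdict(int)
    findings = []
    for app, endpoint, ts in log:
        count[app] += 1
        if endpoint not in seen[app] and count[app] > learning:
            findings.append((app, endpoint, ts))
        seen[app].add(endpoint)
    return findings

log = [
    ("ctx-app", "/drive/files",     "2026-01-03"),
    ("ctx-app", "/drive/files",     "2026-01-10"),
    ("ctx-app", "/calendar/events", "2026-01-15"),
    ("ctx-app", "/users",           "2026-02-20"),  # sudden enumeration
]
print(first_time_calls(log))
```

An integration that read files and calendars for months and then enumerates users is announcing itself, and this is all the code it takes to hear it.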

Mitigation: What to Actually Do

In rough order of effort versus return:

  1. Inventory your OAuth graph. You cannot defend what you have not enumerated. Microsoft 365's Enterprise Applications, Google Workspace's Third-Party App Access, and the equivalent surfaces in Okta, Entra ID, and others are where you start.

  2. Block user consent for high-risk scopes. Move to admin-consent workflows for anything beyond basic profile reads. Yes, this generates tickets. That is the point.

  3. Set OAuth app expiry policies. Tokens should not be permanent. Most providers now support time-bound grants and periodic re-consent.

  4. Move production secrets out of environment variables. Use a proper secrets manager (HashiCorp Vault, AWS Secrets Manager, GCP Secret Manager, Azure Key Vault) with short-lived, automatically rotated credentials. Treat env vars as a developer convenience, not a vault.

  5. Build ITDR detections specifically for OAuth abuse. Identity Threat Detection and Response is the category name; OAuth telemetry is the substance.

  6. Educate the workforce on consent phishing. "Sign in with Google" prompts are now the highest-leverage attack on knowledge workers. Pairing technical controls with awareness training is what closes the loop.
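Mitigation 4 deserves a concrete shape. The pattern — credentials as short-lived leases refreshed from a secrets manager rather than static env vars — can be sketched as below; `fetch` stands in for a real client call (e.g. AWS Secrets Manager's GetSecretValue), and the whole thing is illustrative rather than any specific vendor SDK.

```python
# Pattern sketch for mitigation 4: treat a credential as a short-lived lease
# re-fetched from a secrets manager on expiry. `fetch` stands in for a real
# secrets-manager client call; everything here is illustrative.
import time

class LeasedSecret:
    def __init__(self, fetch, ttl_seconds=300):
        self._fetch = fetch          # callable returning the current secret
        self._ttl = ttl_seconds
        self._value = None
        self._expires_at = 0.0

    def get(self):
        # Re-fetch when the lease expires, so a rotated secret propagates
        # automatically and a stolen copy goes stale within one TTL.
        now = time.monotonic()
        if self._value is None or now >= self._expires_at:
            self._value = self._fetch()
            self._expires_at = now + self._ttl
        return self._value

# Usage with a fake fetcher standing in for the secrets-manager call.
# ttl_seconds=0 forces a refetch every call, to show rotation propagating:
versions = iter(["db-pass-v1", "db-pass-v2"])
secret = LeasedSecret(lambda: next(versions), ttl_seconds=0)
print(secret.get(), secret.get())
```

The design point is the TTL: a stolen credential under this pattern has a shelf life measured in minutes, not — as with the tokens in this breach — months.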

The Bigger Picture: The OAuth Graph Is Your Real Surface Area

The Vercel-Context.ai chain is going to be repeated. The conditions that made it possible — broad employee discretion to authorise SaaS integrations, vendor reliance on OAuth tokens with weak operational hygiene, and a near-total absence of OAuth-specific detection — exist in essentially every modern enterprise. The question is not whether your organisation has an exploitable OAuth path. The question is whether you have inventoried it before someone else does.

April 2026 will be remembered as the month this attack pattern stopped being a theoretical concern in security conferences and became a quarterly board-level briefing topic. If your organisation does not yet have an answer to "show me every OAuth app authorised against any corporate identity, sorted by scope risk and last-used date," this is the slide deck that is about to get requested.
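That board question is mechanically answerable once the inventory exists. A sketch of the ranking — the scope weights and app records are illustrative, and the weights should be tuned to your environment:

```python
# Sketch of the inventory step: rank authorised OAuth apps by scope risk and
# staleness. Scope weights and app records are illustrative assumptions.
from datetime import date

SCOPE_RISK = {
    "admin": 10, "mail.read": 7, "drive": 7, "drive.readonly": 5,
    "calendar.readonly": 3, "profile": 1,
}

def risk_score(app, today):
    # Unknown scopes default to mid risk; staleness adds up to +5.
    scope_risk = max(SCOPE_RISK.get(s, 4) for s in app["scopes"])
    days_stale = (today - app["last_used"]).days
    return scope_risk + min(days_stale // 90, 5)

apps = [
    {"name": "ctx-ai",  "scopes": ["drive", "mail.read"], "last_used": date(2026, 2, 1)},
    {"name": "standup", "scopes": ["profile"],            "last_used": date(2026, 4, 10)},
]
today = date(2026, 4, 19)
for app in sorted(apps, key=lambda a: risk_score(a, today), reverse=True):
    print(app["name"], risk_score(app, today))
```

The output is the slide: every authorised app, worst first. The hard part was never the sorting; it was collecting the inventory at all.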

The white-hat work here is straightforward and unglamorous: enumeration, classification, monitoring, and the slow institutional discipline of treating trust relationships as the asset they are. None of it is novel. All of it would have stopped this attack.


This article is part of AckerWorx's monthly threat-analysis series. AckerWorx supports the white-hat community with research, tooling, and analysis aimed at making things better, not worse. Got a corroborating IOC, a counter-analysis, or a war story?