Vercel's April 2026 Breach Was an OAuth Supply-Chain Attack
Context: On April 19, 2026, Vercel disclosed a security incident tied to Context.ai, a third-party AI SaaS that a Vercel employee had connected to their corporate Google Workspace with "Allow All" permissions. The attacker used the OAuth token to pivot into Vercel's internal systems and read customer environment variables that weren't flagged as "sensitive."
Most post-mortems this week will frame the Vercel breach as a PaaS story. I don't think that's the right frame. Next.js wasn't compromised. Turbopack wasn't compromised. Vercel's build pipeline and edge did their jobs. What got compromised was a single OAuth grant in a corporate Google Workspace, and the blast radius reached customers because "encrypted at rest" doesn't help when the attacker is holding a live admin session.
Vercel's April 2026 breach sat undetected for roughly 22 months because an OAuth supply-chain attack through an AI SaaS routed around Vercel's hosting defenses entirely.
What actually happened, in order
The chain, stitched together from Vercel's bulletin, CEO Guillermo Rauch's disclosure thread, Context.ai's own advisory, and early independent write-ups:
- A Context.ai employee was infected with Lumma Stealer in early 2026. The infostealer harvested Google Workspace credentials plus keys for Supabase, Datadog, and AuthKit. Nothing in that initial pile was Vercel-specific. The attacker pivoted from there into Context.ai's consumer product, the "AI Office Suite," and got their hands on OAuth tokens that consumer users had granted to the suite.
- One of those consumer users turned out to be a Vercel employee who had signed up with their corporate Google Workspace identity and clicked "Allow All" on the OAuth consent screen. The attacker used that token to move into Vercel's Workspace, took over the employee's account, and from there reached Vercel's internal environments. There they read customer environment variables that weren't flagged "sensitive."
- Public timelines put the initial OAuth compromise at roughly mid-2024, which makes Vercel's detection-to-disclosure gap approximately 22 months. Stolen data was listed on BreachForums for $2M by an account claiming the ShinyHunters name; the real ShinyHunters have denied involvement. Attribution is still shaking out.
Why "not marked sensitive" is doing so much work
Vercel's environment-variable model has two tiers. Variables flagged "sensitive" are encrypted at rest and unreadable even from internal admin sessions. Everything else is readable by anyone with sufficient control-plane access, and that's the bucket most teams leave secrets in because nobody reads the onboarding docs that closely.
Database URLs. Stripe secret keys. OpenAI and Anthropic tokens. Webhook signing secrets. S3 access keys. On the seed-stage SaaS teams I've looked at, the split skews heavily toward the non-sensitive tier, and most founders I've asked didn't know the flag existed until this week.
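If you want a fast triage of your own project, a name-pattern filter catches most of these. The sample names below are invented; in practice, pipe the first column of `vercel env ls` (a real CLI command, though its column layout may vary by version) through the same grep.

```shell
# Hypothetical sample of env var names; in practice use something like:
#   vercel env ls | awk 'NR>1 {print $1}' > /tmp/env_names.txt
cat > /tmp/env_names.txt <<'EOF'
DATABASE_URL
STRIPE_SECRET_KEY
NEXT_PUBLIC_APP_NAME
OPENAI_API_KEY
S3_ACCESS_KEY_ID
LOG_LEVEL
EOF

# Names matching common credential patterns -- these are the ones to rotate
# and re-add with the "sensitive" flag.
grep -E 'SECRET|TOKEN|KEY|PASSWORD|DATABASE_URL|DSN' /tmp/env_names.txt > /tmp/likely_secrets.txt
cat /tmp/likely_secrets.txt
```

The pattern list errs toward false positives on purpose; a variable named like a secret but holding nothing sensitive costs you ten seconds, the reverse costs you an incident.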
I'll say the thing other write-ups this week won't: Vercel's non-sensitive tier is a UX footgun. The default should be encrypted-at-rest, opt-out to read. Anything else assumes every operator understands the threat model before they paste a key into a form field, and most don't.
That tier split is the exposure window. If a credential lived in a non-sensitive Vercel env var any time before April 19, 2026, it was readable from within a compromised admin session. Whether yours specifically was read is what Vercel is still investigating. Whether it could have been is already settled.
How do I check if I was exposed?
Vercel has been contacting affected customers directly. If you haven't heard from them yet, you're probably in the unaffected majority. "Probably" is not "confirmed" on a breach with a 22-month exposure window. A few things worth running tonight.
Pull the Vercel audit log. Team → Settings → Audit Log. Filter for env.read, env.list, and env.getSensitive events from IPs or user agents you don't recognize, especially across March and April 2026.
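If you can get that audit log out as JSON, a jq filter does the triage faster than scrolling the UI. The event shape below is an assumption for illustration only (an `events` array with `event`, `ip`, and `timestamp` fields); adjust the field names to whatever your export actually contains.

```shell
# Hypothetical export shape -- Vercel's real audit-log export may differ.
cat > /tmp/audit.json <<'EOF'
{"events":[
  {"event":"env.read","ip":"203.0.113.7","timestamp":"2026-03-14T02:11:09Z"},
  {"event":"deployment.create","ip":"198.51.100.2","timestamp":"2026-03-15T10:00:00Z"},
  {"event":"env.getSensitive","ip":"203.0.113.7","timestamp":"2026-04-01T03:22:41Z"}
]}
EOF

# Keep only env-read events, then drop traffic from known office/VPN ranges
# (198.51.100.* here as a placeholder). Whatever is left deserves a look.
jq -r '.events[]
       | select(.event | test("^env\\.(read|list|getSensitive)$"))
       | select(.ip | startswith("198.51.100.") | not)
       | "\(.timestamp) \(.event) \(.ip)"' /tmp/audit.json > /tmp/suspect_events.txt
cat /tmp/suspect_events.txt
```

Two suspect lines out of three events in the sample; on a real export, anything surviving the allowlist filter during March and April 2026 is worth correlating with employee travel and on-call schedules before you dismiss it.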
Check provider-side logs for any credential that ever lived in a Vercel env var. For AWS:
aws cloudtrail lookup-events \
--lookup-attributes AttributeKey=AccessKeyId,AttributeValue=AKIA... \
--start-time 2026-02-01 --end-time 2026-04-20 \
--query 'Events[].{Time:EventTime,Source:CloudTrailEvent}' --output table
Grep your repo history for any .vercel/ artifact that slipped past .gitignore:
git log --all --full-history --oneline -- ".vercel/*"
git log -p --all -S "VERCEL_TOKEN" -- .
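To widen that search beyond VERCEL_TOKEN, the same `-S` trick can loop over well-known credential prefixes (AKIA for AWS access keys, sk_live_ for Stripe, ghp_ for GitHub, xoxb- for Slack bot tokens, AIza for Google API keys). The throwaway repo and planted fake key below exist only to show what a hit looks like; run the loop from the root of your real checkout.

```shell
# Build a throwaway repo with a planted fake key, purely to demonstrate output.
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.email=a@b -c user.name=demo commit -q --allow-empty -m "init"
echo 'STRIPE_KEY=sk_live_FAKE1234567890' > config.env
git add config.env
git -c user.email=a@b -c user.name=demo commit -q -m "add config"

# -S finds every commit that added or removed the string, across all refs.
for pat in AKIA sk_live_ ghp_ xoxb- AIza; do
  hits=$(git log --all --oneline -S "$pat" -- . | wc -l)
  [ "$hits" -gt 0 ] && echo "HIT: $pat in $hits commit(s)"
done > /tmp/history_hits.txt
cat /tmp/history_hits.txt
```

A hit means the credential is in history even if it's gone from HEAD, so rotation, not deletion, is the fix; rewriting history only matters if the repo ever goes public.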
According to early independent analyses, some Vercel customers received upstream leaked-credential notifications from secret-scanning services days before the official disclosure. GitHub secret scanning, Stripe, AWS, and most serious scanners see exposed credentials before the hosting platform does. If an automated nudge landed in your inbox earlier in April and you swatted it away as a false positive, go back and look.
Where this maps in ArkenSec's methodology
ArkenSec's Phase 11 covers cloud, infrastructure, and supply-chain exposure on the customer's own surface. The phase enumerates exposed storage buckets, SBOM dependencies via Trivy, subdomain-takeover candidates, and client-reachable credentials. That last one is where the Vercel-adjacent findings cluster: NEXT_PUBLIC_* env vars that shipped to the browser, deploy-config artifacts served at predictable paths, and webhook endpoints with no signature validation. Phase 11 routinely finds secrets that were never supposed to be public, placed in the public prefix during a late-night deploy and forgotten.
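A minimal sketch of that client-reachable check for Next.js apps: anything matching NEXT_PUBLIC_* in your served JavaScript was shipped to every visitor by design. The bundle below is a synthetic stand-in; in practice, point curl at your real deployment's pages and chunk files instead.

```shell
# Synthetic stand-in for a downloaded bundle; in practice:
#   curl -s https://your-app.example.com/_next/static/chunks/main.js > /tmp/bundle.js
cat > /tmp/bundle.js <<'EOF'
var cfg={api:"NEXT_PUBLIC_API_URL",key:"NEXT_PUBLIC_ANALYTICS_KEY"};
fetch(u,{headers:{authorization:"Bearer "+t}});
EOF

# Every NEXT_PUBLIC_* name present in the shipped code, deduplicated.
grep -ohE 'NEXT_PUBLIC_[A-Z0-9_]+' /tmp/bundle.js | sort -u > /tmp/public_vars.txt
cat /tmp/public_vars.txt
```

Each surviving name gets one question: would it matter if an attacker read the value? An analytics ID usually doesn't; a write-capable API key behind a NEXT_PUBLIC_ prefix is exactly the late-night-deploy mistake this phase exists to catch.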
Phase 5 (auth and session security) covers the customer's outbound OAuth surface: scopes your app requests on sign-in, whether you enforce PKCE, how session tokens are scoped, rotated, and revoked. What Phase 5 does not touch is your internal Google Workspace posture, because that's a governance problem, not a pentest problem. The check that would have caught the Vercel breach upstream is "every OAuth app installed across the Workspace gets an admin review with a written justification, quarterly, forever." I haven't met a seed-stage SaaS that runs that review, and I've asked a lot of them.
What to change this week
Four things, in order, whether or not Vercel contacted you.
- Rotate every deploy token, API key, and database credential that ever lived as a non-sensitive Vercel env var. When the replacements go back in, mark them sensitive. The flag exists for a reason. Use it.
- Open your Google Workspace admin console: Security → API Controls → App access control. Revoke every OAuth app you don't have a written justification for, and set new OAuth grants to require admin approval going forward. That single config change closes the vector that caught Vercel.
- Turn on deploy notifications in Vercel. Slack or email, whichever you actually read. If an attacker pushes to production in your name, you want to know within minutes, not whenever an upstream provider flags a leaked key a week later.
- Run a free external scan against your production domain: 17 checks, about two minutes, no signup and no email. The scan won't tell you whether Context.ai's attacker touched your specific keys; it will tell you what else is reachable from your perimeter right now: leaked public env vars in your JavaScript bundle, unauthenticated webhook endpoints, forgotten staging subdomains, TLS misconfigurations, and exposed deploy config — arkensec.com/scan.
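Step one of the list above can be scripted as a dry run before you touch anything. The variable names below are examples, and `--sensitive` is the flag the Vercel CLI uses for encrypted-at-rest env vars; verify both against your project and CLI version before removing the echo.

```shell
# Dry-run generator: prints the vercel CLI commands to rotate each variable
# and re-add it with the sensitive flag. Review the plan, then strip the
# echoes to execute. Variable names are placeholders for your own.
VARS="DATABASE_URL STRIPE_SECRET_KEY OPENAI_API_KEY"
for v in $VARS; do
  for env in production preview development; do
    echo "vercel env rm $v $env --yes"
    echo "vercel env add $v $env --sensitive"
  done
done > /tmp/rotation_plan.txt
cat /tmp/rotation_plan.txt
```

Generating the plan first forces the inventory question — which variables, in which environments — before any credential is revoked, which is the step most rushed rotations skip.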
The shape of the next one
The through-line is worth restating: this breach sat undetected for roughly 22 months because the attack routed around Vercel's hosting defenses entirely. The next breach in this shape won't be Vercel; it'll be whatever agentic AI tool half your engineering team connected to Workspace last quarter. The same class of vector — OAuth grant to a third-party SaaS with over-broad scopes, infostealer hits one of their employees, attacker pivots through the token, customer blast radius widens across the supply chain — will keep landing until platform defaults stop treating "Allow All" as a reasonable user choice.
Run the free external scan on your domain tonight at arkensec.com/scan. Two minutes is cheaper than finding out about your perimeter through someone else's incident report. If the scan flags anything serious, the waitlist for ArkenSec Pro is open — quarterly continuous scans with the full 18-phase methodology, for teams that don't want to re-read this post after the next breach.