SOC 2 Type II readiness is an evidence-velocity problem
My last SOC 2 Type II kickoff call lasted 82 minutes. The auditor asked for seven specific artifacts in the first ten, and I had four of them. The other three (evidence of vulnerability scan cadence on a defined schedule, documented remediation SLAs with timestamps, and a current third-party penetration test report) would end up costing three weeks and $14,000 to produce mid-engagement. I've now sat in twelve of these kickoffs across both sides of the table, and the same thing breaks every time.
Evidence velocity, not documentation, is what blocks Series A SaaS from SOC 2 Type II; closing the velocity gap saves six months and $20K–$50K.
What is SOC 2 Type II readiness?
SOC 2 Type II readiness means your control environment can produce timestamped, auditor-legible evidence of every in-scope control operating consistently across a multi-month observation period, typically six months on a first audit and twelve on subsequent ones. Readiness is not whether your policies exist. It's whether the artifacts those policies promise can be sampled at any random week and handed over in under five minutes.
That definition matters because almost every finding in a first readiness assessment sits in the space between "the control runs" and "the control produces evidence each time it runs." Auditors don't care about the first half. They sample the second.
Type I versus Type II: what changes
Type I is a snapshot. Type II is a recording. The buyer who's blocking your enterprise deal is asking for the recording.
| Dimension | Type I | Type II |
|---|---|---|
| Question answered | Are controls designed correctly today? | Did controls operate effectively over the observation window? |
| Observation period | Point-in-time | 3–12 months (6 is typical first audit) |
| Evidence required | Policies, configs, sample artifacts | Continuous artifacts across the full window |
| Auditor sampling | Once | Random weeks across the period |
| Buyer credibility | Limited (early-stage and partner deals) | Required for most enterprise procurement |
| Typical cost | $10K–$20K | $25K–$60K (audit fee + readiness + tooling) |
| Failure mode | Missing policy | Missing artifact for sampled week |
The ratio of Type I to Type II reports issued has been shifting hard toward Type II since 2022. Enterprise procurement teams stopped accepting Type I as a substitute around the same time SaaS spend started getting scrutinized by CFOs. If you're going through the work, plan for Type II from day one.
What are the Trust Services Criteria actually testing?
The five Trust Services Criteria are published by the AICPA: Security (mandatory), Availability, Processing Integrity, Confidentiality, and Privacy. Almost every Series A SaaS scopes the first audit to Security only, which is the right call. Inside Security, the auditor walks through roughly sixty points of focus across nine Common Criteria.
Three of those Common Criteria produce most of the pain for Series A teams:
- CC6.1 governs logical and physical access controls.
- CC7.1 covers detection of security events.
- CC7.2 demands system monitoring for vulnerabilities and malicious code.
Every other criterion has a policy answer. These three demand continuous evidence, and they're where readiness dies.
Why most Series A companies fail their first readiness assessment
Readiness is the pre-audit gap analysis, usually run by the same firm that will perform the attestation. I've seen readiness reports come back with 40 to 80 gaps on a first pass for a 25-person SaaS, and roughly three-quarters of those gaps fall into the same pattern: the control is written into the policy, it's occasionally enforced in practice, and nobody has collected evidence of continuous operation across the observation window.
A few examples that show up in nearly every readiness report I've reviewed:
- A policy states vulnerability scans run weekly. Actual scans ran four times across nine months. Nobody can produce reports for the other thirty-five weeks.
- A policy commits to remediating critical findings within 30 days. Three criticals sat open for 90 days because the engineer who owned them left and the Jira ticket closed as stale.
- A policy states production access is reviewed quarterly. The reviews happened, but there's no signed-off artifact, just a Slack thread that rotated out of retention before the audit window closed.
None of these are documentation failures. The documents exist. They're evidence-velocity failures: the control fires, but it doesn't produce a consistent, timestamped, auditor-legible artifact each time. When the auditor samples the observation period and asks for weeks 12, 23, and 31, the team produces zero of three. That moment is where most first audits get extended, re-scoped, or quietly downgraded to Type I.
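You can run that sampling check on yourself before the auditor does. A minimal sketch, pure stdlib, that lists which ISO weeks of an observation window have no retained scan report (the window and report dates here are hypothetical):

```python
from datetime import date, timedelta

def missing_weeks(window_start: date, window_end: date,
                  report_dates: list[date]) -> list[tuple[int, int]]:
    """Return (iso_year, iso_week) pairs inside the window with no retained report."""
    covered = {d.isocalendar()[:2] for d in report_dates}
    gaps, seen = [], set()
    day = window_start
    while day <= window_end:
        wk = day.isocalendar()[:2]
        if wk not in seen:
            seen.add(wk)
            if wk not in covered:
                gaps.append(wk)
        day += timedelta(days=1)
    return gaps

# Three-week window, reports retained for weeks 1 and 3 only:
gaps = missing_weeks(date(2024, 1, 1), date(2024, 1, 21),
                     [date(2024, 1, 3), date(2024, 1, 17)])
# gaps == [(2024, 2)] — week 2 has no artifact, and that's the week that gets sampled
```

Fifteen lines against your evidence bucket's object listing tells you today what the auditor will tell you in month six.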
What does auditor-acceptable evidence look like?
A useful rule of thumb: if you can't produce the evidence in under five minutes from a cold start, it doesn't exist for audit purposes. Auditors don't care whether the control fired. They care whether you can demonstrate it fired on the specific day they sampled.
| Format that works | Format that fails |
|---|---|
| Immutable logs with timestamps (CloudTrail, GitHub audit log, Okta system log to a dated bucket) | "We have logging on" |
| Tool-generated reports with a scan-date header, retained for the full observation window | Screenshots saved to someone's laptop |
| Ticket artifacts with state transitions (Jira/Linear: opened → assigned → remediated → closed, each timestamped) | "We can regenerate the report" |
| Reviewer-signed artifacts (named reviewer clicks an approval button on a defined schedule) | A Slack thumbs-up emoji |
| Cron-driven scan output landing in a write-once retention bucket on a fixed cadence | "We scan when we deploy" |
If your current evidence falls in the right column, you're not ready for Type II. You're ready to write the policy that says you will be.
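The left column mostly reduces to two habits: artifacts land on a deterministic, dated path, and each one carries a digest so tampering is detectable. A minimal sketch of both, with a hypothetical key layout (the `control/year/week/filename` convention is an assumption, not an auditor requirement):

```python
import hashlib
from datetime import date

def evidence_key(control: str, run_date: date, filename: str) -> str:
    """Deterministic, dated retention path, e.g. 'cc7.2/2024/W23/scan-report.json'."""
    year, week, _ = run_date.isocalendar()
    return f"{control.lower()}/{year}/W{week:02d}/{filename}"

def sealed_record(control: str, run_date: date, filename: str, payload: bytes) -> dict:
    """Pair the artifact with its SHA-256 so later edits are detectable."""
    return {
        "key": evidence_key(control, run_date, filename),
        "sha256": hashlib.sha256(payload).hexdigest(),
        "run_date": run_date.isoformat(),
    }
```

Point the resulting keys at a bucket with object lock or write-once retention enabled and the "can you produce week 23" question becomes a lookup, not an archaeology project.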
The three controls that eat a Series A team
CC6.1: access control and review
CC6.1 is where most teams are closest to ready and still fail on the artifact. The controls exist (SSO via Okta or Google Workspace, role-based access, least privilege on cloud IAM), but the quarterly access review is the artifact auditors want and the artifact that never gets produced. The reviewer needs a signed approval, the timestamp, and a list of accounts evaluated.
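Whether a platform generates it or a script does, the review artifact needs the same three elements: a named reviewer, a timestamp, and a per-account decision. A minimal sketch, with illustrative field names (this is not a standard schema):

```python
import hashlib
import json
from datetime import datetime, timezone

def access_review_artifact(reviewer: str, accounts: dict[str, str]) -> str:
    """Render a quarterly access review as a self-contained JSON artifact.

    `accounts` maps account name -> decision ('retain' or 'revoke').
    A SHA-256 of the review body is appended so later edits are detectable.
    """
    body = {
        "control": "CC6.1",
        "reviewer": reviewer,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "accounts": accounts,
    }
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return json.dumps({"review": body, "sha256": digest}, sort_keys=True)
```

Drop the output into the same dated retention path as every other artifact and the quarterly review stops depending on anyone's memory.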
Compliance automation platforms (Vanta, Drata, Secureframe, Thoropass) have automated this control to the point where I'd recommend any Series A team just pay the $5K–$15K/year and move on. CC6.1 is not the control to build in-house.
CC7.1: security event detection
CC7.1 requires monitoring for anomalous events with a documented response process. Most Series A teams do this by accident: CloudWatch alarms, Sentry on the application side, a dedicated Slack channel for alerts, maybe a PagerDuty rotation. The gap is usually the absence of a defined severity taxonomy and any artifact showing triage actually happened.
What works: a lightweight log aggregator (Panther, Wazuh, or a structured SNS → Slack pipeline with retention enforced), a one-page incident response policy that names a severity scale, and a shared postmortem template stored in a dated bucket. Auditors don't expect you to have a SOC. They expect a process that fires, records itself, and closes. Vercel's April disclosure is the worst-case version of CC7.1 failing silently: roughly 22 months between OAuth compromise and detection. I wrote that one up separately. Same evidence-velocity lesson, different control.
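The severity taxonomy and the triage artifact can be a few dozen lines. A sketch, where the four levels and their response targets are illustrative defaults, not AICPA-defined values:

```python
from datetime import datetime, timezone

# Illustrative four-level taxonomy; tune the labels and hours to your policy.
SEVERITY_RESPONSE_HOURS = {"sev1": 1, "sev2": 4, "sev3": 24, "sev4": 72}

def triage_record(alert_id: str, severity: str, owner: str) -> dict:
    """Emit the artifact CC7.1 sampling needs: what fired, how it was
    ranked, who picked it up, and the response window the policy promises."""
    if severity not in SEVERITY_RESPONSE_HOURS:
        raise ValueError(f"unknown severity: {severity}")
    return {
        "alert_id": alert_id,
        "severity": severity,
        "owner": owner,
        "triaged_at": datetime.now(timezone.utc).isoformat(),
        "respond_within_hours": SEVERITY_RESPONSE_HOURS[severity],
    }
```

Wire this into whatever receives the alert (the Slack pipeline, the PagerDuty webhook) and ship each record to the retention bucket; the "did triage actually happen" question now has a timestamped answer.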
CC7.2: continuous vulnerability management
CC7.2 is where I see the highest evidence-velocity gap and the largest dollar impact. The criterion requires you to monitor systems for vulnerabilities and respond on a defined timeline. What auditors want as evidence:
- A scanning cadence documented in policy (weekly external, monthly internal is the most defensible default). The CISA Known Exploited Vulnerabilities catalog is the cleanest external benchmark for which findings warrant the tight end of the SLA.
- A documented remediation SLA, with thresholds aligned to NIST SP 800-40r4: typically 30 days for critical, 60 for high, 90 for medium. Tighter is better.
- Evidence that findings are triaged against the SLA and closed or risk-accepted in writing, for every scan across the entire window.
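The SLA check itself is trivially scriptable, which is exactly why "three criticals sat open for 90 days" should never surprise anyone. A minimal sketch using the 30/60/90 thresholds above; the finding shape is illustrative:

```python
from datetime import date

# Thresholds matching the NIST SP 800-40r4-aligned defaults in the policy.
SLA_DAYS = {"critical": 30, "high": 60, "medium": 90}

def sla_breaches(findings: list[dict], today: date) -> list[dict]:
    """Return open findings whose age exceeds the policy SLA.

    Each finding is a dict like:
      {'id': 'F-1', 'severity': 'critical', 'opened': date, 'closed': date | None}
    (field names are illustrative, not a scanner's real export format).
    """
    breaches = []
    for f in findings:
        if f.get("closed") is not None:
            continue  # remediated; only open findings can breach
        limit = SLA_DAYS.get(f["severity"])
        if limit is None:
            continue  # severity outside the SLA (e.g. informational)
        age = (today - f["opened"]).days
        if age > limit:
            breaches.append({**f, "days_open": age, "days_over": age - limit})
    return breaches
```

Run it weekly against the scanner's export, write the output to the evidence bucket, and the breach report exists whether or not anyone remembered to look.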
The failure mode is not having the scanner. It's running the scanner manually, inconsistently, and never retaining the reports against a sampling schedule. The fix is not a better process document; it's automation that produces the report whether or not anyone remembers to run it. This is why I built ArkenSec. The 18-phase platform runs against your perimeter on whatever cadence the policy defines, maps each finding to SOC 2 CC7.2 (and ISO 27001 A.12.6.1, PCI DSS 11.4.4) by default, and produces an evidence pack the auditor can sample directly. The point isn't the scanner. The point is the report lands on schedule, every time, against a fixed retention path.
Penetration testing: what the auditor is really asking for
The official SOC 2 guidance doesn't mandate an annual penetration test. It mandates "testing of controls" under CC4.1. In practice, every auditor I've worked with treats a current third-party pentest as the cleanest form of that evidence, and the implicit norm is annual.
Series A teams get squeezed here because the incumbent market (Bishop Fox, Cobalt, Synack) runs $10,000 to $20,000 per engagement, delivered as a static PDF, scoped to a single point in time. A year later, the PDF is stale, the application has shipped 200 commits worth of new attack surface, and the auditor wants another one. The Verizon DBIR 2024 tracks median time-to-compromise and time-to-detect on opposite sides of the same widening gap. I don't think annual pentests make sense for any SaaS company under Series B; the application moves faster than the report does.
The shift you want is from point-in-time to continuous. Either you pay for the annual report and accept it'll be stale by the next audit cycle, or you wire up continuous testing that produces fresh evidence on every scan. ArkenSec's full audit is $2,000, with optional quarterly re-runs at $499. The dollar figure isn't what matters here. What matters is the evidence pack arriving in the auditor's hands the day they sample, not three weeks later with a consultant invoice attached. If you want to see what your external perimeter looks like before committing, the free scan takes about two minutes and runs 17 checks against your domain. It's a useful calibration baseline regardless of which vendor you eventually pick.
For the control-by-control breakdown of which TSC criteria a pentest actually satisfies versus what continuous scanning covers, the earlier SOC 2 penetration testing requirements post walks through CC6.1, CC6.7, CC7.1, and CC7.2 in detail.
A 90-day sequence that actually works
If you're a Series A SaaS at 10–50 people and your largest deal is blocked on SOC 2, here's the sequence I'd run:
- Week 1. Pick a compliance automation platform. Vanta, Drata, Secureframe, Thoropass; they all do the same thing well enough. Don't optimize this past a single day of evaluation.
- Weeks 2–4. Wire up the evidence collectors: AWS, GitHub, the IdP, the HR system. Close CC6.1 first. It's the most easily automated and the least judgment-heavy.
- Weeks 4–6. Stand up continuous external scanning on a defined cadence. This is the CC7.2 foundation. The cadence has to match what your policy says it does.
- Weeks 6–8. Run your first penetration test. Scope it tightly: external attack surface plus authenticated application. Confirm the report format your auditor will accept before you buy.
- Weeks 8–12. Readiness assessment with your chosen auditor. Fix the gaps. Begin the observation period.
- Months 4–10. Generate, retain, and spot-check evidence weekly. This is the unglamorous part nobody tells you about, and it's where most first audits actually live or die.
If you're earlier and still deciding between frameworks, ISO 27001 is the alternative most SaaS teams compare against; it's broader in scope and has more surface area to evidence, and the same evidence-velocity discipline applies.
One thing worth fighting for internally
Most founders I talk to frame SOC 2 to the board as a tax, something to suffer through to unlock revenue. That's the right frame for the board and the wrong frame internally. Done well, the controls the auditor demands are the same controls that stop a real breach. CC6.1 is what prevents a departed engineer from still having prod access in month three. CC7.2 is what catches the dependency you didn't know was vulnerable. The paperwork is annoying. The behaviors underneath are what a competent security program looks like anyway.
Evidence velocity, not documentation, is what blocks Series A SaaS from SOC 2 Type II; closing the velocity gap saves six months and $20K–$50K. The teams that only figure this out at month four of the observation window pay the difference in cash and consultant time.
Join the ArkenSec waitlist for continuous, auditor-mapped pentest evidence delivered on a schedule your policy can defend. The founders rate ($99/month locked for life) is open to the first 50 users.
See Your Security Posture
Run a free external security scan to see where your application stands: TLS, headers, DNS, and ports, in about two minutes.