The CMMC Evidence Problem: Why Deployed Controls Fail Assessment

You can pass a vulnerability scan and still fail a CMMC assessment.

You can deploy endpoint protection, enable MFA, and configure logging correctly—and still receive a finding.

Because CMMC Level 2 certifies what you can prove, not what you intended.

This is Part III in our series on CMMC compliance for defense contractors, subcontractors, and the Managed Service Providers who support them. Part I introduced the Three Pillars framework. Part II examined the grey space between vendor capabilities and organizational implementation. This article focuses on a different failure mode – one that occurs even when controls are fully deployed: the inability to demonstrate defensible evidence during assessment.

Under the Department of Defense’s Cybersecurity Maturity Model Certification (CMMC) program, eligibility for certain defense contracts depends on verified implementation of cybersecurity requirements. At Level 2, organizations handling Controlled Unclassified Information must demonstrate implementation of the 110 security requirements defined in NIST Special Publication 800-171.

Certification is not based on self-attestation, intent, or tool ownership. Assessments are conducted by Certified Third-Party Assessment Organizations (C3PAOs) using the evaluation methods defined in NIST SP 800-171A—examine, interview, and test—to verify implementation within the organization’s defined system boundary.

The difference between having controls and proving controls is where most organizations fail. The struggle is rarely with deploying controls; it is with proving those controls exist in a way an assessor can verify.

The Evidence Problem

C3PAO assessors do not evaluate what an organization intended to implement. They evaluate what the organization can demonstrate through objective evidence.

Consider a defense contractor that deploys endpoint detection and response across all systems processing Controlled Unclassified Information. The platform monitors continuously, blocks malicious code, and generates alerts. The control SI.L2-3.14.2, “Provide protection from malicious code,” appears satisfied.

During assessment, the assessor requests malware detection reports from the past 90 days, signature update logs, and records documenting responses to detected threats.

The organization produces automated weekly emails sent to an unmonitored mailbox. No detection events are documented. No investigation records exist.

The control is marked “Not Met,” not because protection was absent, but because evidence of protection was never generated.

This is the evidence gap.
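What closing this gap might look like in practice is sketched below. It assumes the EDR platform can export detections as JSON files into a known directory; the paths and field names are hypothetical placeholders, not a specific product's format. The point is the habit, not the tooling: every detection produces a dated, retained record that includes a human disposition.

```python
"""Sketch: turn raw EDR detection exports into retained evidence records.

Assumes the EDR platform drops detection exports as JSON files into
EXPORT_DIR; all paths and field names below are hypothetical.
"""
import json
from datetime import datetime, timezone
from pathlib import Path

EXPORT_DIR = Path("edr_exports")               # hypothetical EDR export location
EVIDENCE_DIR = Path("evidence/SI.L2-3.14.2")   # evidence filed per control ID

def archive_detection(export_file: Path, disposition: str, analyst: str) -> Path:
    """Wrap a raw detection export in an evidence record with a triage note."""
    detection = json.loads(export_file.read_text())
    record = {
        "control": "SI.L2-3.14.2",
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "source_export": export_file.name,
        "detection": detection,
        # The human disposition is what turns an alert into evidence.
        "disposition": disposition,
        "reviewed_by": analyst,
    }
    EVIDENCE_DIR.mkdir(parents=True, exist_ok=True)
    out = EVIDENCE_DIR / f"{export_file.stem}-evidence.json"
    out.write_text(json.dumps(record, indent=2))
    return out

if __name__ == "__main__":
    for export in sorted(EXPORT_DIR.glob("*.json")):
        path = archive_detection(
            export, "Quarantined; no lateral movement observed", "analyst@example.com"
        )
        print(f"Filed {path}")
```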

How Assessors Evaluate Evidence

CMMC Level 2 assessments follow NIST SP 800-171A. Each control includes defined assessment objectives evaluated through examine, interview, and test methods. Evidence sufficiency is evaluated against those objectives, not against tool checklists or vendor claims.

For example, for AU.L2-3.3.1, “Create and retain audit logs,” assessors typically require multiple forms of corroborating evidence:

  • Documentary evidence, such as audit logging policies defining retention periods and log types.
  • Configuration evidence, such as log management system settings showing retention configured.
  • Operational evidence, such as reports or validation records demonstrating logs are generated, retained, and available in accordance with documented policy.

Configuration screenshots alone are rarely sufficient for controls that require demonstration of ongoing process execution. Assessors look for evidence that controls operate over time — not merely that they were enabled at some point.
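One way to produce that operational piece is a periodic self-check whose output is itself a dated artifact. The sketch below assumes archived audit logs carry dates in their filenames and that the documented policy requires 90 days of retention; both the layout and the figure are illustrative assumptions, not a specific product's behavior.

```python
"""Sketch: periodic validation that audit logs meet a documented retention policy.

Assumes archived logs are named like 'audit-YYYY-MM-DD.log' under LOG_DIR;
the layout and the 90-day figure are illustrative assumptions.
"""
from datetime import date, timedelta
from pathlib import Path

LOG_DIR = Path("/var/log/archive")   # hypothetical archive location
RETENTION_DAYS = 90                  # per the documented audit logging policy

def validate_retention(today: date | None = None) -> dict:
    today = today or date.today()
    window_start = today - timedelta(days=RETENTION_DAYS)
    # Collect the dates for which an archived audit log actually exists.
    have = set()
    for f in LOG_DIR.glob("audit-*.log"):
        try:
            have.add(date.fromisoformat(f.stem.removeprefix("audit-")))
        except ValueError:
            continue  # skip files that don't match the naming convention
    expected = {window_start + timedelta(days=i) for i in range(RETENTION_DAYS)}
    missing = sorted(expected - have)
    return {
        "checked_on": today.isoformat(),
        "retention_days": RETENTION_DAYS,
        "days_missing": [d.isoformat() for d in missing],
        "result": "PASS" if not missing else "FAIL",
    }

if __name__ == "__main__":
    import json
    # The printed report, dated and retained, is itself operational evidence.
    print(json.dumps(validate_retention(), indent=2))
```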

Scope and Sampling Matter

Assessment evidence is evaluated within the organization’s defined system boundary.

Assessors do not review every artifact for every system. Instead, C3PAOs select representative samples to determine whether controls are implemented consistently across in-scope assets.

If implementation is inconsistent across in-scope systems, sampling will surface those gaps, even if artifacts exist elsewhere.

Evidence must align with the documented system boundary, asset inventory, and defined processes. Evidentiary sufficiency is contextual, not theoretical.
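Organizations can mirror this approach before the assessor arrives: draw a random sample from the documented asset inventory and confirm each sampled system has the expected artifacts. A minimal sketch follows; the inventory format and artifact names are hypothetical stand-ins for the organization's real asset list and evidence locations.

```python
"""Sketch: pre-assessment spot check mirroring C3PAO representative sampling.

The inventory format and required artifacts are hypothetical; substitute
the organization's real asset inventory and evidence locations.
"""
import random
from pathlib import Path

# Hypothetical in-scope asset inventory (hostname -> evidence folder).
INVENTORY = {
    "ws-001": Path("evidence/ws-001"),
    "ws-002": Path("evidence/ws-002"),
    "srv-01": Path("evidence/srv-01"),
    "srv-02": Path("evidence/srv-02"),
}
REQUIRED_ARTIFACTS = ["patch-report.json", "edr-status.json"]

def spot_check(sample_size: int = 2, seed: int | None = None) -> list[str]:
    """Sample in-scope hosts and report any missing evidence artifacts."""
    rng = random.Random(seed)
    findings = []
    for host in rng.sample(sorted(INVENTORY), k=min(sample_size, len(INVENTORY))):
        for artifact in REQUIRED_ARTIFACTS:
            if not (INVENTORY[host] / artifact).exists():
                findings.append(f"{host}: missing {artifact}")
    return findings

if __name__ == "__main__":
    gaps = spot_check(sample_size=3, seed=42)
    print("\n".join(gaps) if gaps else "Sampled systems consistent.")
```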

Three Categories of Evidence

While NIST SP 800-171A defines assessment methods as examine, interview, and test, organizations benefit from thinking about evidence in three practical categories.

  • Documentary evidence answers the question: “What is your process?”
    Examples include baseline configuration standards, patch management policies, and incident response plans.
  • Configuration evidence answers the question: “How is the technology configured?”
    Examples include firewall rulesets, MFA enforcement settings, and encryption configurations.
  • Operational evidence answers the question: “Can you prove this actually happened over time?”
    Examples include tickets, meeting minutes, compliance reports, and investigation notes.

Organizations routinely produce documentary and configuration evidence. They frequently fail to generate operational evidence. This failure is one of the most common causes of assessment findings.
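A simple way to operationalize the three categories is an evidence register that maps each control to all three and flags any control whose operational column is empty. The sketch below uses two hypothetical entries; in practice the register would cover every in-scope NIST SP 800-171 requirement.

```python
"""Sketch: an evidence register that flags controls lacking operational evidence.

The control entries below are hypothetical examples; a real register would
cover all in-scope NIST SP 800-171 requirements.
"""
from dataclasses import dataclass, field

CATEGORIES = ("documentary", "configuration", "operational")

@dataclass
class ControlEvidence:
    control_id: str
    evidence: dict[str, list[str]] = field(
        default_factory=lambda: {c: [] for c in CATEGORIES}
    )

    def missing(self) -> list[str]:
        """Return the evidence categories with no artifacts recorded."""
        return [c for c in CATEGORIES if not self.evidence[c]]

register = [
    ControlEvidence("SI.L2-3.14.2", {
        "documentary": ["malware-protection-policy.pdf"],
        "configuration": ["edr-policy-screenshot.png"],
        "operational": [],  # no investigation records yet: the common gap
    }),
    ControlEvidence("AU.L2-3.3.1", {
        "documentary": ["audit-logging-policy.pdf"],
        "configuration": ["siem-retention-settings.json"],
        "operational": ["retention-validation-2024-06.json"],
    }),
]

for entry in register:
    gaps = entry.missing()
    if gaps:
        print(f"{entry.control_id}: missing {', '.join(gaps)} evidence")
```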

Understanding how assessors evaluate evidence is the first step. The harder challenge is operationalizing evidence generation in a way that can withstand assessment scrutiny.

How N‑able Supports Evidence Generation

N‑able platforms generate compliance artifacts when configured correctly. N‑central produces configuration, patch, access, and monitoring reports aligned to common CMMC control objectives. Cove Data Protection produces backup and recovery validation records.

Organizations remain responsible for reviewing, retaining, and documenting these artifacts within compliance workflows—platforms assist with evidence generation but do not replace operational discipline.

Next Steps

Download N‑able’s Shared Responsibility Matrix to understand which controls require customer evidence generation and operational ownership.

This article examined how assessors evaluate evidence. Part IV will examine the harder challenge: designing operations that generate defensible evidence before assessment begins.

© N‑able Solutions ULC and N‑able Technologies Ltd. All rights reserved.

This document is provided for informational purposes only, and its contents should not be construed as legal advice. N‑able makes no warranty, express or implied, and assumes no legal liability for the accuracy, completeness, or usefulness of the information contained herein.

N-ABLE, N-CENTRAL, and the other N‑able trademarks and logos are the exclusive property of N‑able Solutions ULC and N‑able Technologies Ltd. and may be common law marks, registered trademarks, or trademarks pending registration with the United States Patent and Trademark Office and in other countries. All other trademarks mentioned herein are used for identification purposes only and are trademarks (or may be registered trademarks) of their respective companies.