Shadow AI Risks

The rapid proliferation of artificial intelligence tools has introduced a dangerous new category of cybersecurity risk for Department of War contractors: Shadow AI. Shadow AI is the unauthorized use of AI-powered tools and platforms by employees without IT oversight or formal governance, and it poses unprecedented challenges for organizations seeking Cybersecurity Maturity Model Certification (CMMC) compliance.

While Shadow IT focused primarily on unsanctioned software applications and cloud services, Shadow AI amplifies these risks exponentially. Recent reports reveal the scope of this emerging threat: 68% of employees now use free-tier AI tools such as ChatGPT through personal accounts, and 57% input sensitive data into these platforms. Even more concerning, 93% of employees admit to sharing confidential company data with unauthorized AI tools. For contractors handling Controlled Unclassified Information (CUI), these statistics represent a compliance catastrophe waiting to happen.

Shadow AI vs Shadow IT Risks

Shadow AI introduces unique vulnerabilities that traditional Shadow IT never posed. Unlike conventional unauthorized applications that primarily store or transmit data, AI tools actively process, learn from, and potentially retain information fed into them. When an employee pastes source code, customer information, or proprietary designs into an AI tool or platform, that data may be stored indefinitely, used to train future models, or retained in conversation logs accessible to the AI provider’s employees.

Data exposure through AI tools is also often permanent and irreversible. Once information enters an AI model’s training dataset, it cannot be reliably deleted or recovered. This creates a fundamentally different risk profile than Shadow IT scenarios, where data could theoretically be retrieved or access revoked. Compounding the problem, the number of AI applications employees can access has exploded.

Shadow AI: A Compliance Nightmare for CMMC

For defense contractors, Shadow AI creates a perfect storm of compliance violations across multiple CMMC domains. The framework’s 110 Level 2 controls were designed to protect CUI through comprehensive security measures, but these controls assume visibility and governance over all systems processing sensitive data. Shadow AI operates entirely outside this protective perimeter, creating invisible gaps in an organization’s security posture that assessors will inevitably discover.

The compliance implications extend beyond CMMC. When employees use unauthorized AI tools to process CUI, contractors may simultaneously violate other regulatory frameworks such as HIPAA and GDPR. Regulated industries face particularly severe consequences. For example, healthcare organizations using Shadow AI without proper safeguards risk HIPAA violations when Protected Health Information (PHI) enters unvetted AI systems without required Business Associate Agreements. These violations can trigger enforcement actions, substantial fines, contract terminations, and loss of certification. It’s no different for defense contractors.

Top Shadow AI Risks for CMMC

Risk 1: Uncontrolled CUI Data Leakage and Exposure

The Threat: Shadow AI creates direct pathways for CUI to exit controlled environments and enter third-party AI platforms where it may be permanently exposed, stored on foreign servers, used for model training, or accessed by unauthorized parties.

Research demonstrates this isn’t just theoretical. Organizations experienced data leakage involving AI tools in 13% of breaches, with these incidents costing an average of $670,000 more than standard breaches. Samsung provides a cautionary tale: in early 2023, the company lifted its ban on generative AI tools, and engineers, now free to use AI to enhance productivity, inadvertently leaked proprietary chip design code by pasting it into ChatGPT while debugging issues, potentially placing trade secrets into the public domain.

For defense contractors, the stakes are even higher. CUI encompasses technical data, manufacturing processes, logistics information, and other sensitive government information. If this data enters consumer-grade AI platforms, contractors lose control over how it’s stored, who can access it, and whether it complies with government security requirements. Many AI tools operate on cloud infrastructure in jurisdictions outside U.S. control, potentially violating data residency requirements.
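
To make the flow-control concern concrete, the sketch below shows one way a contractor might screen text for CUI banner markings before it is allowed to leave a managed environment (for example, in a paste-interception hook or DLP gateway). This is a minimal illustration under stated assumptions: the patterns, function names, and enforcement point are hypothetical, and marking-based checks only catch CUI that is actually marked.

```python
import re

# Hypothetical patterns: CUI banner markings and designation phrases that
# commonly appear in properly marked documents (e.g., "CUI//SP-CTI").
CUI_PATTERNS = [
    re.compile(r"\bCUI(//[A-Z0-9-]+)*\b"),
    re.compile(r"\bCONTROLLED UNCLASSIFIED INFORMATION\b", re.IGNORECASE),
]

def looks_like_cui(text: str) -> bool:
    """Return True if the text carries CUI-style markings and should be blocked
    from leaving the managed environment (e.g., via paste into an AI tool)."""
    return any(pattern.search(text) for pattern in CUI_PATTERNS)

if __name__ == "__main__":
    sample = "CUI//SP-CTI\nTurbine blade tolerances for delivery order ..."
    if looks_like_cui(sample):
        print("Blocked: content appears to carry CUI markings.")
    else:
        print("Allowed.")
```

A check like this supports, but does not by itself satisfy, the flow-control requirement described next; it simply illustrates where an authorized enforcement point would sit.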

CMMC Controls Affected:

  •   AC.L2-3.1.3 – Control CUI Flow: This requirement mandates organizations control information flows between systems and information domains in accordance with approved authorizations. Shadow AI completely bypasses these controls by creating unauthorized data flows to external AI platforms. When employees input CUI into unsanctioned tools, they violate the fundamental principle of controlling where CUI flows and how it’s processed.
  •   AC.L2-3.1.20 – External Connections: Organizations must authorize external system connections where CUI is transmitted or processed. Shadow AI tools represent unauthorized external connections that have never been vetted, approved, or subjected to security reviews required for systems handling CUI.
  •   MP.L2-3.8.1 – Media Protection: This control requires organizations to protect system media containing CUI, both paper and digital. When CUI is uploaded to AI platforms, it exists on external media (cloud servers) entirely outside the organization’s protection mechanisms.
  •   MP.L2-3.8.9 – Protect Backups: AI platforms may retain conversation histories, prompt logs, and uploaded files indefinitely as part of their backup and training processes. These “backups” of CUI may exist without the encryption, access controls, or security measures CMMC requires.

Risk 2: Complete Absence of Audit Trails and Accountability

The Threat: Shadow AI operates invisibly, creating a complete blind spot in audit logging, monitoring, and accountability mechanisms. Traditional security tools may not be able to detect or log interactions with browser-based AI services using encrypted HTTPS connections.

Organizations lose visibility into critical security questions: Which employees are using AI tools? What data are they sharing? How frequently? What responses are they receiving and incorporating into work products? This lack of auditability makes it impossible to conduct forensic investigations, respond to incidents, or demonstrate compliance during CMMC assessments.

The monitoring gap is particularly problematic because AI usage doesn’t follow traditional application patterns. An employee might access ChatGPT through a web browser, use Microsoft Copilot embedded in Office applications, or leverage AI capabilities hidden within approved SaaS platforms, all while leaving minimal traces in conventional security logs.
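
As a rough illustration of how teams might begin to close this visibility gap, the sketch below scans an exported web-proxy log for requests to known generative AI domains. The CSV schema, column names, and domain list are assumptions; a real deployment would rely on the organization’s proxy, DNS filtering, or SIEM tooling rather than an ad hoc script.

```python
import csv
from collections import Counter

# Hypothetical denylist of generative AI endpoints; in practice this list
# lives in the organization's web proxy or DNS filtering platform.
AI_DOMAINS = {"chat.openai.com", "chatgpt.com", "gemini.google.com",
              "claude.ai", "copilot.microsoft.com"}

def flag_ai_traffic(proxy_log_csv: str) -> Counter:
    """Count requests per (user, domain) for destinations on the AI denylist.

    Assumes a CSV proxy export with 'user' and 'host' columns; adapt the
    field names to whatever your proxy or SIEM actually produces.
    """
    hits = Counter()
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row.get("user", "unknown"), host)] += 1
    return hits

if __name__ == "__main__":
    for (user, host), count in flag_ai_traffic("proxy_export.csv").most_common():
        print(f"{user} -> {host}: {count} requests")
```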

CMMC Controls Affected:

  •   AU.L2-3.3.1 – System Auditing: Organizations must create and retain system audit logs sufficient to enable monitoring, analysis, investigation, and reporting of unlawful or unauthorized system activity. Shadow AI usage generates no audit logs in organizational systems, creating gaps in the audit trail whenever employees interact with these tools.
  •   AU.L2-3.3.2 – User Accountability: This control requires ensuring actions of individual users can be uniquely traced to those users so they can be held accountable. When employees use personal AI accounts or browser-based tools, their actions become completely untraceable within the organization’s accountability framework.
  •   AU.L2-3.3.3 – Event Review: Organizations must review and analyze system audit records periodically for indications of inappropriate or unusual activity. Without logs of Shadow AI interactions, security teams cannot review these activities for anomalies, policy violations, or potential security incidents.
  •   AU.L2-3.3.8 – Audit Protection: Audit logs must be protected from unauthorized access, modification, and deletion. Shadow AI activities that occur entirely outside organizational systems cannot have their “logs” (conversation histories on third-party platforms) protected according to CMMC requirements.

Risk 3: Unauthorized Software and Unmanaged System Components

The Threat: Shadow AI introduces unvetted software and external services into the technology ecosystem without security reviews, vulnerability assessments, or configuration management oversight.

Each AI tool represents a new attack surface. Browser extensions for AI assistants, desktop applications, API integrations, and embedded AI features in approved software all create potential entry points for attackers. These tools may have unpatched vulnerabilities, weak authentication mechanisms, or malicious code injection risks that IT security teams have never assessed.

The problem compounds when employees use AI coding assistants that generate code suggestions. If that AI-generated code contains vulnerabilities or backdoors, it may be integrated directly into applications processing CUI, introducing security flaws that originated entirely outside the organization’s secure development lifecycle.

CMMC Controls Affected:

  •   CM.L2-3.4.1 – System Baselining: Organizations must establish and document baseline configurations for systems. Shadow AI tools exist entirely outside baseline configurations, and they are not evaluated, approved, or included in system documentation.
  •   CM.L2-3.4.2 – Security Configuration Enforcement: This control mandates implementation of the most restrictive configurations consistent with operational requirements. Shadow AI tools typically operate in silos with whatever default configurations the vendor provides, which may be entirely inadequate for protecting CUI.
  •   CM.L2-3.4.6 – Least Functionality: Organizations must configure systems to provide only essential capabilities. Shadow AI tools often have extensive, unrestricted functionality that violates least functionality principles, including features that upload files, share data externally, and integrate with other services.
  •   CM.L2-3.4.8 – Application Execution Policy: Organizations must control and monitor user-installed software. This includes implementing application allowlisting and restricting unauthorized software execution. Shadow AI completely bypasses these controls when accessed through web browsers or unauthorized installations.
  •   CM.L2-3.4.9 – User-Installed Software: This requirement specifically addresses restrictions on user software installation. Employees who independently download AI tools, install browser extensions, or access web-based AI services violate this control by introducing unapproved software without IT review.

Risk 4: Inadequate Incident Detection and Response Capabilities

The Threat: Organizations cannot detect, report, or respond to security incidents involving Shadow AI because these tools operate outside monitored environments. By the time data exposure is discovered, if it ever is, the damage has already occurred.

Shadow AI incidents lack the typical indicators that trigger security alerts. There are no failed login attempts to monitor, no unusual network traffic patterns to flag, and no system logs to analyze. When an employee shares CUI with a browser-based AI tool, it appears as normal web browsing activity.

The delayed detection means organizations may violate reporting requirements. DFARS 252.204-7012 mandates that contractors rapidly report cyber incidents affecting CUI within 72 hours of discovery. However, if contractors don’t know Shadow AI exposure occurred, they cannot possibly report it within the required timeframe.

CMMC Controls Affected:

  •   IR.L2-3.6.1 – Incident Handling: Organizations must establish an operational incident handling capability including preparation, detection, analysis, containment, recovery, and user response activities. Shadow AI incidents cannot be handled because they cannot be detected through normal security monitoring.
  •   IR.L2-3.6.2 – Incident Reporting: This control requires tracking, documenting, and reporting incidents to designated officials and authorities. Shadow AI data leakage may constitute a reportable cyber incident under DFARS, but organizations cannot report what they cannot detect.
  •   IR.L2-3.6.3 – Incident Response Testing: Organizations must test incident response capabilities periodically. However, if incident response plans don’t account for Shadow AI scenarios, testing won’t identify these critical gaps in detection and response capabilities.
  •   SI.L2-3.14.7 – Identify Unauthorized Use: This control mandates identifying unauthorized use of organizational systems. Shadow AI represents unauthorized use of external AI systems to process organizational data, a scenario that’s difficult to detect with traditional monitoring focused on internal systems.

Risk 5: System and Information Integrity Violations

The Threat: AI systems can generate hallucinated information, biased outputs, and factually incorrect content that employees may incorporate into work products, decisions, or communications without verification.

Model hallucinations present a particularly insidious risk. When AI confidently presents false information as fact, employees may rely on it for technical decisions, customer communications, or even contract deliverables. In one notable case, New York lawyers submitted court filings containing fictitious case citations generated by ChatGPT, resulting in sanctions and professional embarrassment.

Beyond hallucinations, Shadow AI tools may introduce malicious code, outdated security practices, or vulnerable patterns into organizational systems. AI coding assistants trained on public repositories may suggest code containing known vulnerabilities or deprecated security functions.

CMMC Controls Affected:

  •   SI.L2-3.14.1 – Flaw Remediation: Organizations must identify, report, and correct system flaws in a timely manner. When AI tools generate flawed code or recommendations that get incorporated into systems, these flaws may never be identified through normal vulnerability scanning.
  •   SI.L2-3.14.2 – Malicious Code Protection: This control requires protection against malicious code at designated system locations. Shadow AI tools accessed via web browsers or unauthorized applications bypass antivirus and anti-malware protections designed to scan incoming code.
  •   SI.L2-3.14.3 – Security Alerts & Advisories: Organizations must receive security alerts and advisories about system threats and vulnerabilities. Shadow AI tools are not included in vulnerability monitoring or security alert feeds, leaving organizations unaware of newly discovered risks.
  •   SI.L2-3.14.5 – System & File Scanning: Organizations must scan systems for malicious code and unauthorized software, including real-time scans of files from external sources. Shadow AI tools and their outputs often evade these scanning mechanisms, particularly when accessed through encrypted web sessions.

Risk 6: Compromised Access Control and Authentication

The Threat: Employees may access Shadow AI tools using personal accounts with weak authentication, no multi-factor authentication requirements, and credentials that are not managed by enterprise identity systems.

These personal accounts create numerous vulnerabilities. Employees may reuse passwords across multiple services, fail to enable available security features, or share account access with others. When credentials are compromised, through phishing, credential stuffing, or data breaches, attackers gain access to conversation histories containing CUI.

The authentication gap becomes particularly problematic when AI tools offer “remember me” features or long-lived sessions. An employee might remain logged into ChatGPT for weeks, during which time an attacker with physical access to their device could access all previous AI interactions.

CMMC Controls Affected:

  •   AC.L2-3.1.1 – Authorized Access Control: Organizations must limit system access to authorized users, processes acting on behalf of users, and authorized devices. Shadow AI tools are unauthorized external systems that have never been vetted or approved for CUI processing, yet employees route organizational data through them.
  •   AC.L2-3.1.5 – Least Privilege: Users should only have access necessary to perform their assigned tasks. Shadow AI tools often provide unrestricted capabilities far exceeding what users need, violating least privilege principles.
  •   AC.L2-3.1.12 – Control Remote Access: Organizations must authorize and monitor remote access sessions. When employees access cloud-based AI services, they’re establishing remote connections to external systems without authorization or monitoring.
  •   IA.L2-3.5.3 – Multifactor Authentication: MFA must be used for local and network access to privileged accounts and for network access to non-privileged accounts. Personal AI tool accounts rarely implement MFA, and even when available, employees may not enable it.

Risk 7: Unencrypted and Unsecured Data in Transit and at Rest

The Threat: While connections to AI platforms may use TLS encryption in transit, the data itself may then be stored, processed, and potentially transmitted to other systems in ways that don’t meet CMMC encryption requirements.

Consumer AI platforms are not designed with CMMC compliance in mind. They may store data on servers in multiple countries, lack FIPS-validated cryptographic modules, and have no contractual obligations to protect CUI according to government standards. The encryption they do provide, if any, may not meet federal requirements for protecting CUI.

Furthermore, data entered into AI tools may be transmitted to multiple backend services for processing, stored in various caches and databases, and potentially shared with third-party services for analytics or model improvement, all without the oversight or encryption standards CMMC mandates.

CMMC Controls Affected:

  •   SC.L2-3.13.8 – Data in Transit: Organizations must implement cryptographic mechanisms to prevent unauthorized disclosure of CUI during transmission. When CUI travels to AI platforms, even if the HTTPS connection is encrypted, the subsequent internal transmission and processing by the AI provider may not maintain required protections.
  •   SC.L2-3.13.11 – CUI Encryption: This control specifies that cryptographic mechanisms used to protect CUI must employ FIPS-validated cryptography. Consumer AI platforms rarely use FIPS-validated modules, meaning their encryption doesn’t meet CMMC requirements.
  •   SC.L2-3.13.16 – Data at Rest: Organizations must protect the confidentiality of CUI at rest. Once uploaded to AI platforms, CUI exists “at rest” on external servers where the organization has no control over encryption, access controls, or security measures.
  •   SC.L2-3.13.10 – Key Management: Organizations must establish and manage cryptographic keys for the cryptography employed in their systems. Organizations have zero visibility into or control over how AI platforms manage encryption keys for stored data.

Risk 8: Personnel Security and Training Gaps

The Threat: Employees using Shadow AI often don’t understand the security implications of their actions, haven’t received training on acceptable AI use, and may be unaware they’re violating security policies or compliance requirements.

The training gap is bidirectional. Security teams don’t know how to detect, monitor, or mitigate Shadow AI risks, while employees don’t understand why pasting information into an AI tool differs from using approved collaboration tools. This knowledge deficit allows dangerous practices to persist undetected.

Moreover, insider threat indicators associated with Shadow AI usage may go unrecognized. An employee exfiltrating large amounts of CUI through AI tool uploads exhibits behavior identical to malicious insider activity, but without proper training, both the employee and cybersecurity personnel may fail to identify the threat.
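
As an illustration of turning that behavior into a detectable indicator, the sketch below aggregates outbound bytes per user to AI destinations from a hypothetical proxy export and flags anyone above a threshold. The column names, domain list, and threshold are all assumptions to be tuned against an organization’s own baseline.

```python
import csv
from collections import defaultdict

# Hypothetical threshold and log schema; tune to your environment's baselines.
UPLOAD_THRESHOLD_BYTES = 5 * 1024 * 1024  # flag users sending > 5 MB per export
AI_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com"}

def flag_heavy_uploaders(proxy_log_csv: str) -> dict[str, int]:
    """Sum outbound bytes per user to AI destinations from a CSV proxy export
    with 'user', 'host', and 'bytes_out' columns; return users over threshold."""
    totals: dict[str, int] = defaultdict(int)
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            host = row.get("host", "").lower()
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                totals[row.get("user", "unknown")] += int(row.get("bytes_out", 0) or 0)
    return {user: total for user, total in totals.items() if total > UPLOAD_THRESHOLD_BYTES}

if __name__ == "__main__":
    for user, total in flag_heavy_uploaders("proxy_export.csv").items():
        print(f"Possible insider-threat indicator: {user} sent {total} bytes to AI services")
```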

CMMC Controls Affected:

  •   AT.L2-3.2.1 – Role-Based Risk Awareness: Organizations must ensure that managers, systems administrators, and users are made aware of the security risks associated with their activities and of the applicable policies and procedures. This includes awareness of AI-related risks specific to their roles.
  •   AT.L2-3.2.2 – Role-Based Training: Organizations must ensure personnel are trained to carry out their assigned information security-related duties and responsibilities. Cybersecurity teams need specialized training to address Shadow AI detection, monitoring, and risk mitigation.
  •   AT.L2-3.2.3 – Insider Threat Awareness: Organizations must provide security awareness training on recognizing and reporting potential insider threat indicators. Shadow AI usage patterns, especially frequent uploads of sensitive data, may indicate either negligent or malicious insider activity.

Risk 9: Physical Protection and Media Sanitization Failures

The Threat: When CUI exists on AI platform servers, organizations lose the ability to implement required physical protection controls or properly sanitize media before disposal or reuse.

Physical security becomes impossible to verify or enforce. Organizations cannot escort visitors through AI provider data centers, cannot monitor physical access to servers containing their CUI, and cannot control physical protection mechanisms. The data exists in an environment where none of the CMMC physical protection requirements can be satisfied.

Media sanitization presents an even more intractable problem. When AI platforms process CUI, that information may persist in training datasets, cached responses, and backup systems indefinitely. There is no way to cryptographically erase, clear, or destroy this media to prevent data recovery. The data is permanently beyond the organization’s control.

CMMC Controls Affected:

  •   PE.L2-3.10.1 – Limit Physical Access: Organizations must limit physical access to systems, equipment, and operating environments to authorized individuals. This control becomes meaningless when CUI resides on AI platform infrastructure where the organization has no physical access control whatsoever.
  •   PE.L2-3.10.4 – Physical Access Logs: Organizations must maintain audit logs of physical access to areas where systems are located. AI platforms may maintain their own access logs, but contractors cannot review, verify, or audit these logs.
  •   MP.L2-3.8.3 – Media Disposal: Before disposal or release for reuse, organizations must sanitize or destroy media containing CUI. There is no mechanism to sanitize AI platform servers, databases, or model training datasets that have ingested CUI.

Risk 10: Risk Assessment and Security Assessment Blind Spots

The Threat: Shadow AI creates vulnerabilities that don’t appear in risk assessments, vulnerability scans, or security control assessments because these tools and their associated risks are completely unknown to security teams.

Organizations preparing for CMMC assessments will inventory their systems, identify vulnerabilities, and assess control effectiveness. However, these activities assume visibility into all systems processing CUI. Shadow AI exists entirely outside this assessment scope, meaning risk assessments are fundamentally incomplete.

The assessment gap extends to third-party risk management. Many AI tools connect to additional services, use third-party models, or share data with partners for improvement or analytics. These supply chain risks never undergo vendor security reviews or risk assessments.

CMMC Controls Affected:

  •   RA.L2-3.11.1 – Risk Assessments: Organizations must conduct periodic assessments of risk to organizational operations, assets, individuals, and other organizations. Shadow AI represents a significant unassessed risk that’s absent from formal risk assessment processes.
  •   RA.L2-3.11.2 – Vulnerability Scan: Organizations must scan for vulnerabilities periodically and when new vulnerabilities affecting their systems are identified and reported. Shadow AI tools are never included in vulnerability scanning scope, leaving their security flaws undetected.
  •   RA.L2-3.11.3 – Vulnerability Remediation: Organizations must remediate vulnerabilities in accordance with risk assessments. Vulnerabilities in Shadow AI tools cannot be remediated because they’re never discovered through vulnerability management processes.
  •   CA.L2-3.12.1 – Security Control Assessment: Organizations must periodically assess security controls to determine effectiveness. Shadow AI bypasses assessed controls entirely, making those assessments incomplete and potentially misleading.
  •   CA.L2-3.12.4 – System Security Plan: The System Security Plan (SSP) must describe system boundaries, environments, how security requirements are implemented, and connections to other systems. Shadow AI tools are never documented in the SSP, making these plans fundamentally inaccurate.

Conclusion

Shadow AI represents one of the most significant compliance challenges facing organizations today. It combines the worst aspects of Shadow IT with unique risks inherent to artificial intelligence systems: permanent data retention, opaque processing, unverifiable integrity, and complete invisibility to traditional security controls. For defense contractors handling CUI, every unauthorized AI interaction creates potential violations across multiple CMMC domains simultaneously.

The risks enumerated above affect at minimum 40 specific CMMC Level 2 controls spanning Access Control, Audit and Accountability, Awareness and Training, Configuration Management, Identification and Authentication, Incident Response, Media Protection, Personnel Security, Physical Protection, Risk Assessment, Security Assessment, System and Communications Protection, and System and Information Integrity domains. This comprehensive impact makes Shadow AI not just a security concern, but an existential threat to CMMC certification and continued eligibility for defense contracts.

Organizations must act decisively to gain visibility into AI usage, implement governance frameworks, educate personnel, and deploy technical controls that bring Shadow AI into managed, compliant environments. The alternative, continued invisible AI adoption, makes CMMC compliance impossible to achieve or maintain, regardless of how robust other security measures may be.

Published On: 16 Oct, 2025
