I've seen too many organizations build what they call a compliance program that's really just a folder of policies nobody reads and a spreadsheet tracking who clicked through training. When the auditor shows up or an incident happens, the gaps become obvious fast. A real compliance program isn't documentation theater. It's an integrated system of governance, controls, evidence, and response that operates whether or not anyone's watching.
The difference matters because compliance failures don't stay abstract. I've watched companies lose contracts worth millions because they couldn't demonstrate control effectiveness during due diligence. I've seen breach notification requirements trigger because nobody knew the incident response plan was three years out of date and referenced people who no longer worked there. The organizations that survive audits and incidents aren't the ones with the prettiest policies—they're the ones who built programs that actually function.
This article walks through what a functioning compliance program actually looks like: the components, how they connect, and what good execution means in practice.
Governance: Who Owns This and Why That Matters
Every compliance program needs clear ownership, and that ownership needs to sit high enough in the organization to make things happen. I don't mean symbolic sponsorship where an executive's name appears on a charter they've never read. I mean actual accountability: someone who answers when the program fails, who controls budget, and who can override business units when compliance and revenue come into conflict.
The pattern I see in organizations that struggle is governance by committee without decision authority. You get a compliance steering group with representatives from IT, legal, HR, operations, and finance, all of whom have opinions but none of whom can commit resources or make binding decisions. Meetings produce action items that sit in someone's backlog behind product releases and cost reduction initiatives. The program exists on paper but has no momentum.
Effective governance means a named executive owner—often the CISO, General Counsel, or Chief Compliance Officer depending on the regulatory framework—with a direct reporting relationship to the CEO or board. That person chairs a governance committee that doesn't just review metrics but makes actual decisions: which controls get funded, which risks get accepted, which third parties get cut off, which projects get delayed until security reviews complete.
The Governance Committee Structure That Works
The governance committee should meet regularly (monthly in most regulated environments, quarterly if you're in a stable state with low regulatory change). Meeting agendas should cover:
- Control performance metrics and gaps
- Regulatory changes requiring program updates
- Risk assessment findings and treatment decisions
- Audit findings and corrective action status
- Incident trends and root cause patterns
- Budget allocation for remediation and improvement
The committee needs authority to allocate funding, approve policy exceptions, and escalate to the board when risk exceeds tolerance. Without that authority, you're running a discussion forum, not governance.
Risk Assessment: The Foundation Nobody Wants to Do Right
Risk assessment is where most compliance programs get lazy. They inherit a template from a consultant, fill in some qualitative scores, and call it done. Then they wonder why the controls they've implemented don't address the actual threats they face.
A functional risk assessment starts with asset inventory—not just servers and applications, but data flows, third-party integrations, and regulated information repositories. You can't assess risk to assets you don't know you have. I've walked into healthcare organizations that couldn't tell me where all their copies of protected health information lived. You can't comply with HIPAA breach notification if you don't know what systems to check when an incident occurs.
The assessment should identify:
- Regulatory requirements applicable to each asset class
- Threat scenarios specific to your environment (not generic ransomware hand-waving, but concrete attack paths based on your architecture)
- Existing control effectiveness gaps
- Residual risk after current controls
- Treatment decisions: accept, mitigate, transfer, or avoid
Risk scoring needs to reflect actual business impact. A breach of customer payment data has different consequences than exposure of internal meeting notes, and your assessment should distinguish between them. I use likelihood and impact matrices that tie to quantifiable outcomes: regulatory fines, contract penalties, notification costs, revenue loss from customer churn.
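That scoring approach can be sketched in a few lines. The thresholds, likelihood values, and dollar figures below are hypothetical illustrations, not benchmarks:

```python
# Minimal risk-scoring sketch: likelihood x impact, where impact derives from
# quantifiable dollar outcomes rather than gut feel. All figures here are
# illustrative, not calibration data.

def impact_score(fines: float, contract_penalties: float,
                 notification_costs: float, churn_loss: float) -> int:
    """Map estimated total dollar impact onto a 1-5 scale (thresholds are illustrative)."""
    total = fines + contract_penalties + notification_costs + churn_loss
    thresholds = [10_000, 100_000, 1_000_000, 10_000_000]
    return 1 + sum(total >= t for t in thresholds)

def risk_score(likelihood: int, impact: int) -> int:
    """Classic matrix score: likelihood (1-5) times impact (1-5)."""
    return likelihood * impact

# A payment-data breach and exposed meeting notes score very differently once
# impact is anchored to fines, penalties, notification costs, and churn.
payment_breach = risk_score(likelihood=3,
                            impact=impact_score(500_000, 250_000, 400_000, 2_000_000))
meeting_notes = risk_score(likelihood=3,
                           impact=impact_score(0, 0, 0, 5_000))
```

Anchoring the impact side to dollar estimates is what lets the same likelihood produce very different scores for the two scenarios.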
Making Risk Assessment Continuous
An even bigger mistake is treating risk assessment as an annual event. Your environment changes constantly: new applications, new vendors, new regulatory requirements, new threat intelligence. A risk assessment that's twelve months old is historical fiction.
You don't need to reassess everything monthly, but you need triggers that force reassessment when conditions change. New third-party integrations should trigger vendor risk assessment. New regulatory requirements should trigger gap analysis. Incidents should trigger root cause analysis that updates your threat scenarios. If you're working with regulatory compliance obligations across multiple frameworks, you need a mechanism to track how changes in one area cascade to others.
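As a sketch, the trigger logic is just a mapping from change events to the reviews they force; event and review names here are illustrative:

```python
# Event-driven reassessment triggers: each change type maps to the review it
# should force. Names are illustrative placeholders, not a standard taxonomy.

REASSESSMENT_TRIGGERS = {
    "new_third_party_integration": "vendor_risk_assessment",
    "new_regulatory_requirement": "gap_analysis",
    "security_incident": "root_cause_threat_model_update",
    "new_application_deployment": "asset_inventory_and_risk_update",
}

def reviews_required(events: list) -> list:
    """Return the distinct reviews forced by a batch of change events, in order."""
    seen = []
    for event in events:
        action = REASSESSMENT_TRIGGERS.get(event)
        if action and action not in seen:
            seen.append(action)
    return seen
```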
Policies and Standards: Documentation That People Actually Use
Policy development is where compliance programs produce the most waste. Organizations write fifty-page acceptable use policies that nobody reads, data classification standards with nine sensitivity levels that nobody applies consistently, and incident response procedures so complex that people bypass them when actual incidents occur.
Good policy writing is harder than it looks. You need to be specific enough to drive consistent behavior but flexible enough to accommodate different contexts. You need to be comprehensive enough to satisfy auditors but concise enough that people will actually read and follow what you wrote.
I structure policy frameworks in three layers:
- Policies: High-level governance documents that state what the organization will do and why. These are board-approved, rarely change, and apply across the organization.
- Standards: Technical and procedural requirements that implement policies. These specify acceptable encryption algorithms, password complexity rules, access review frequencies, and vendor assessment criteria.
- Procedures: Step-by-step instructions for specific tasks. How to provision user accounts, how to classify data, how to report incidents, how to conduct access reviews.
This structure lets you update technical standards without requiring board approval every time NIST revises guidance. It lets you adapt procedures to different business units while maintaining consistent policy requirements.
The Role Policy Exceptions Play
Every compliance program needs an exception process because no policy survives contact with business reality unchanged. The question is whether exceptions happen transparently with risk acknowledgment or whether they happen in shadow IT because the policy is unworkable.
A functioning exception process requires:
- Business justification for why the standard can't be met
- Compensating controls that address the gap
- Executive approval from someone with risk authority
- Expiration dates that force periodic revalidation
- Documentation that survives audits
Track exception trends. If you're granting the same exception repeatedly, your standard is wrong and needs revision. If exceptions cluster in one business unit, you have either a control problem or a culture problem that needs attention.
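A minimal exception register that enforces expiration and surfaces repeat grants might look like the following; field names are illustrative, not tied to any particular GRC tool:

```python
# Sketch of an exception register with forced expiration and trend tracking.
# Field names and the repeat threshold are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from collections import Counter

@dataclass
class PolicyException:
    standard: str                 # which standard can't be met
    business_unit: str
    justification: str            # business reason the standard can't be met
    compensating_controls: list   # what addresses the gap in the meantime
    approved_by: str              # someone with risk authority
    granted: date
    expires: date                 # forces periodic revalidation

    def is_expired(self, today: date) -> bool:
        return today >= self.expires

def repeat_exceptions(register: list, threshold: int = 3) -> list:
    """Standards granted the same exception repeatedly -- candidates for revision."""
    counts = Counter(e.standard for e in register)
    return [std for std, n in counts.items() if n >= threshold]
```

Running `repeat_exceptions` over the register is the cheap version of the trend analysis above: a standard that keeps needing exceptions is a standard that needs rewriting.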
Controls: Implementation and Evidence That Hold Up
Controls are where compliance programs either work or fail. You can have perfect policies and governance, but if controls aren't implemented consistently and can't be evidenced, you don't have a compliance program—you have compliance aspiration.
The gap I see most often is between control design and control operation. Organizations document how access reviews should work, but when you examine the evidence, reviews are months late, completed by people who don't understand what they're reviewing, and routinely approve access that violates segregation of duties. The control exists on paper but doesn't operate effectively.
Effective control implementation requires:
- Clear ownership: Someone specific is responsible for executing the control, not "the IT team" or "business unit managers."
- Documented procedures: The person executing the control knows exactly what steps to perform and what constitutes completion.
- Built-in evidence: The control execution process produces artifacts that demonstrate it happened—logs, approvals, review records, scan results.
- Validation mechanisms: Someone other than the control owner periodically checks that the control is operating as designed.
I push organizations toward automated controls wherever possible because humans are inconsistent. Configuration management that automatically enforces encryption settings is more reliable than a procedure that tells administrators to enable encryption and hopes they remember. Automated access provisioning that enforces segregation of duties rules is more reliable than manual review of role assignments. The fewer controls that depend on someone remembering to do something, the better.
The Evidence Problem
Evidence is what separates organizations that pass audits from organizations that scramble before every assessment trying to reconstruct proof of what they think happened. If your approach to audit readiness involves weeks of evidence collection panic, your controls aren't producing evidence as a natural byproduct of operation.
Good evidence is contemporaneous, complete, and specific. A spreadsheet dated the day before the audit that claims you performed quarterly access reviews isn't evidence—it's fiction. Timestamped logs showing access review workflows, approval decisions, and remediation actions are evidence. Automated scan results with dates and findings are evidence. Screenshots someone created during evidence collection are not.
Build evidence collection into the control execution process. If quarterly vulnerability scanning is a control, the scan results should automatically archive with timestamps in a location the control owner can't modify retroactively. If access certification is a control, the review tool should maintain an audit trail of who reviewed what, what they approved or revoked, and when remediation completed.
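One way to sketch evidence as a byproduct of control execution: seal each artifact with a timestamp and a content hash at capture time, so retroactive modification is detectable. Record shapes and identifiers here are assumptions:

```python
# Sketch of evidence produced as a byproduct of control execution: each
# artifact is archived with a capture timestamp and a SHA-256 content hash.
# The in-memory list stands in for write-once storage the owner can't modify.
import hashlib
import json
from datetime import datetime, timezone

def archive_evidence(control_id: str, artifact: dict, store: list) -> dict:
    """Append a timestamped, hash-sealed evidence record to an append-only store."""
    body = json.dumps(artifact, sort_keys=True)
    record = {
        "control_id": control_id,
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "artifact": artifact,
        "sha256": hashlib.sha256(body.encode()).hexdigest(),
    }
    store.append(record)  # in practice: WORM storage or an external audit log
    return record

def verify_evidence(record: dict) -> bool:
    """Recompute the hash; a mismatch means the artifact changed after capture."""
    body = json.dumps(record["artifact"], sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest() == record["sha256"]
```

The hash doesn't make the evidence trustworthy by itself; it makes tampering detectable, which is what distinguishes contemporaneous records from day-before-the-audit reconstructions.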
Monitoring and Metrics: Knowing When Controls Fail
A compliance program that only learns about control failures during annual audits isn't a program—it's periodic discovery of accumulated problems. You need continuous monitoring that surfaces issues while you can still fix them before they become findings or incidents.
Monitoring operates at multiple levels. Technical monitoring tracks control performance: Are backups completing? Are encryption standards being enforced? Are access attempts from unauthorized locations being blocked? Are patches being deployed within required timeframes? This layer catches operational control failures.
Compliance monitoring tracks program health: Are policies up to date? Are risk assessments on schedule? Are corrective actions from the last audit closing on time? Are incident response plan tests happening? This layer catches process breakdowns.
The metrics you track should answer specific questions that governance needs to make decisions. I'm skeptical of compliance dashboards that show a hundred green indicators but can't tell you whether your program would survive an audit or respond effectively to an incident. The metrics that matter are the ones that predict failure or indicate gaps:
- Percentage of systems with overdue patches categorized by severity
- Age of open audit findings and corrective action plans
- Third-party assessments overdue or with unresolved high-risk findings
- Policy review cycles missed
- Percentage of staff overdue on required training
- Incident response time for detection, containment, and notification against regulatory deadlines
These metrics tell you where the program is breaking. A dashboard that's all green either means you're doing great or you're not measuring the right things. I usually assume the latter until proven otherwise.
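As an illustration, the first metric above reduces to a few lines; the data shape is an assumption:

```python
# Sketch of a failure-predicting metric: percentage of systems with overdue
# patches, broken out by severity. The input shape is an illustrative assumption.
from collections import defaultdict

def overdue_patch_rates(systems: list) -> dict:
    """Each system reports its overdue patch severities; return the percentage
    of systems with at least one overdue patch at each severity."""
    total = len(systems)
    overdue = defaultdict(int)
    for s in systems:
        for sev in set(s.get("overdue_severities", [])):
            overdue[sev] += 1
    return {sev: round(100 * n / total, 1) for sev, n in overdue.items()}
```

A number like "25% of systems have an overdue critical patch" drives a governance decision in a way a wall of green indicators never will.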
Third-Party Risk Management: Your Program Extends Beyond Your Network
Your compliance obligations don't stop at your network perimeter. Every vendor that handles regulated data or has access to your systems becomes part of your compliance scope. I've seen organizations with excellent internal controls fail audits because a third-party vendor couldn't produce evidence of their own compliance, or worse, suffered a breach that exposed customer data the organization didn't even know the vendor had.
Third-party risk management needs to be part of the compliance program from the beginning, not something you bolt on when a contract reviewer notices a missing BAA or a due diligence questionnaire arrives from a customer.
The program should include:
- Inventory: You need to know which vendors have access to what data and systems. I've worked with organizations that discovered critical vendors during incident response, which is exactly the wrong time for that discovery.
- Risk-based assessment: Not every vendor needs the same scrutiny. A SaaS provider hosting your customer relationship management system needs a security assessment. Your office supply vendor probably doesn't. Risk-tier your vendors based on data access and criticality.
- Contractual requirements: Your contracts need to flow down your compliance obligations. If you're subject to HIPAA, your BAAs need to be in place and enforceable. If you're subject to CMMC or ITAR, your contracts need security requirements and audit rights.
- Ongoing monitoring: Initial vendor assessments tell you what their posture was at a point in time. Continuous monitoring—whether through automated security ratings, periodic reassessments, or breach notification tracking—tells you when things change.
- Incident response coordination: Your incident response plan needs to address vendor-originated incidents. When a vendor suffers a breach affecting your data, you need to know within hours, not weeks, and you need a process for determining notification obligations.
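The risk-tiering step above can be sketched as a simple rule; the tier labels and cutoffs are illustrative, not a formal methodology:

```python
# Sketch of risk-based vendor tiering on data access and criticality.
# Tier names and the rules themselves are illustrative assumptions.

def vendor_tier(handles_regulated_data: bool, has_system_access: bool,
                business_critical: bool) -> str:
    """Higher tiers get deeper assessment and more frequent reassessment."""
    if handles_regulated_data or (has_system_access and business_critical):
        return "tier-1"   # full security assessment, annual reassessment, audit rights
    if has_system_access or business_critical:
        return "tier-2"   # questionnaire plus evidence sampling
    return "tier-3"       # contractual terms only (e.g., the office supply vendor)
```

Under rules like these, the CRM SaaS provider lands in tier-1 and the office supply vendor in tier-3, which is the point: assessment effort follows exposure, not vendor count.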
The biggest mistake I see is treating vendor assessments as procurement checkboxes. Someone in purchasing sends a questionnaire, the vendor fills it out with optimistic answers, nobody validates anything, and the contract gets signed. Then during your audit, you discover the vendor can't actually produce evidence supporting their questionnaire responses, and now you have a compliance gap you can't close without replacing the vendor.
Incident Response and Breach Management: When the Program Gets Tested
Incident response is where compliance programs face their real test. You can have beautiful policies and excellent controls, but if your organization can't detect, contain, investigate, and report incidents within regulatory timeframes, you're going to fail when it matters most.
A compliance-aware incident response program needs several components that standard security incident response sometimes skips:
Regulatory notification requirements mapped to incident types: Different regulations have different notification triggers and timelines. HIPAA requires individual notices within 60 days of breach discovery and adds media notification for breaches affecting 500 or more individuals. State breach notification laws vary by state and may have triggers different from HIPAA. Your incident response plan needs decision trees that map incident characteristics to notification obligations so you're not researching requirements during active incidents.
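A minimal sketch of such a decision tree, encoding only the HIPAA rules stated above (every other regulation would be additional branches):

```python
# Sketch of a notification decision tree: map incident characteristics to the
# notices required and their regulatory deadlines, so nobody is researching
# requirements mid-incident. Only the HIPAA rules described above are encoded.
from datetime import date, timedelta

def hipaa_notifications(is_breach_of_phi: bool, affected: int,
                        discovered: date) -> list:
    """Return required notices with their deadlines for a HIPAA breach."""
    if not is_breach_of_phi:
        return []
    notices = [{"notice": "individual",
                "deadline": discovered + timedelta(days=60)}]
    if affected >= 500:
        notices.append({"notice": "media",
                        "deadline": discovered + timedelta(days=60)})
    return notices
```

The value is that the deadline math and the 500-individual threshold are decided and reviewed in advance, not looked up while an incident is running.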
Evidence preservation: Security teams often prioritize containment and restoration, sometimes at the expense of evidence that compliance and legal teams need. You need procedures that preserve logs, capture forensic images, and maintain chain of custody while containing the incident. I've seen organizations clean up compromised systems so thoroughly they couldn't later determine what data was accessed, making breach notification determinations nearly impossible.
Communication protocols: Who talks to regulators? Who talks to customers? Who talks to the media? These decisions shouldn't get made during an incident. Your plan needs pre-approved communication templates and clear authority for who can approve them.
Testing that includes compliance scenarios: Most tabletop exercises focus on technical response: containing ransomware, isolating compromised systems, restoring from backups. Compliance-focused exercises should include: determining whether notification is required, calculating notification deadlines, identifying affected individuals, drafting notification content, and coordinating with legal and regulatory contacts. The first time your team works through breach notification procedures shouldn't be during an actual breach.
Root Cause Analysis Feeding Back to Controls
The compliance value of incident response isn't just handling the incident—it's learning what failed and fixing it. Every incident represents a control breakdown: a gap in detection, a failure of prevention, a weakness in access controls, or a policy that didn't address the scenario that just occurred.
Root cause analysis should feed directly back into your compliance program. Did the incident happen because a control wasn't designed? Add the control. Did it happen because a control wasn't operating? Fix the implementation. Did it happen because the risk assessment missed a threat scenario? Update the assessment. The cycle from incident to corrective action to control improvement is what makes a compliance program mature.
Internal Audit and Continuous Improvement
External audits tell you whether you pass or fail. Internal audit tells you where you're going to fail before the external auditor finds it. Organizations that wait for external audits to identify gaps spend their compliance budget on remediation and emergency fixes. Organizations with effective internal audit programs spend their budget on continuous improvement and get cleaner external audit results.
Internal audit should operate on a risk-based cycle. High-risk controls get audited frequently—quarterly or semi-annually. Lower-risk controls can stretch to annual or biennial review. New controls get audited shortly after implementation to catch design flaws before they become operational failures.
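That cadence can be sketched as a simple scheduler; the tier names, interval choices, and 30-day post-implementation window are assumptions:

```python
# Sketch of a risk-based internal audit schedule. Intervals mirror the cadence
# described above; tier names and the 30-day new-control window are illustrative.
from datetime import date, timedelta

AUDIT_INTERVAL_DAYS = {
    "high": 90,     # quarterly (or 182 for semi-annual)
    "medium": 365,  # annual
    "low": 730,     # biennial
}

def next_audit(risk_tier: str, last_audited: date,
               newly_implemented: bool) -> date:
    """New controls get audited shortly after go-live to catch design flaws early."""
    if newly_implemented:
        return last_audited + timedelta(days=30)
    return last_audited + timedelta(days=AUDIT_INTERVAL_DAYS[risk_tier])
```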
The auditors need to be independent from the control owners. The person executing access reviews shouldn't audit their own access review process. This doesn't necessarily mean you need a separate internal audit department, but it does mean you need organizational separation between execution and validation.
Internal audit findings should feed the same corrective action process as external audit findings: documented remediation plans, assigned ownership, target completion dates, and tracking to closure. The difference is internal findings give you time to fix things before they become external findings, which gives you leverage during external audits when you can demonstrate you already identified the gap and remediation is in progress.
What Maturity Actually Looks Like
Compliance program maturity isn't about the sophistication of your GRC platform or how many policies you've published. It's about how the program functions under stress. A mature compliance program operates predictably when people are on vacation, when business units push back on controls, when budgets get cut, and when incidents occur.
Here's what I look for when assessing program maturity:
Controls operate without heroics: If your program depends on specific individuals working overtime to keep controls running, you don't have a sustainable program. Mature programs have documented procedures, automation where possible, and redundancy so that normal turnover doesn't create control gaps.
Evidence exists without scrambling: When someone asks for evidence of control operation, it's available immediately because the control process produces it automatically. You're not reconstructing what happened—you're pulling existing records.
Changes flow through impact assessment: New systems, new vendors, new regulatory requirements, and organizational changes trigger compliance reviews before implementation, not after. The program is integrated into change management, not bypassed by it.
Metrics predict problems: Your monitoring catches control failures and program gaps before they become audit findings or incidents. You're fixing things proactively based on leading indicators, not reactively based on failures.
Governance makes hard decisions: When compliance conflicts with business objectives, there's a defined process for escalation and decision that results in documented risk acceptance or business changes. Compliance doesn't get routinely overridden without accountability.
Organizations at this maturity level still have audit findings and still have incidents, but they handle both more effectively because the program infrastructure is solid. They spend less time in crisis mode and more time on improvement.
The Executive Perspective: Why This Structure Matters
From a leadership standpoint, a well-structured compliance program changes the conversation from whether you'll pass the next audit to how you're managing regulatory risk strategically. Executives who understand what a real compliance program looks like can ask better questions: Are we tracking metrics that predict failure? Do our controls have evidence built in? Can we respond to incidents within regulatory timeframes? Are we learning from near-misses?
The organizations I've seen struggle with compliance treat it as a separate function that exists alongside the business. The organizations that succeed integrate compliance into business operations. Security reviews are part of vendor selection. Privacy impact assessments are part of product development. Control evidence is part of system administration. Risk acceptance is part of strategic planning.
Building this integration requires executive sponsorship that goes beyond approving budgets. It requires leaders who understand that compliance isn't overhead—it's the operating requirements for doing business in regulated industries. It requires treating compliance program gaps with the same urgency as revenue shortfalls or product delays, because the consequences of failure are comparable: lost contracts, regulatory penalties, reputational damage, and in severe cases, criminal liability.
The compliance program architecture I've described here isn't theoretical. It's what works in healthcare organizations subject to HIPAA, defense contractors subject to CMMC and ITAR, financial services firms, and other regulated environments where failure has consequences. The specific controls vary by framework, but the program structure—governance, risk assessment, policies, controls, evidence, monitoring, incident response, and continuous improvement—remains consistent.
If your compliance program doesn't look like this, you're either carrying risk you haven't acknowledged or you're headed for findings you haven't anticipated. Either way, it's worth the time to build the structure right before the next audit or the next incident forces the issue.