Swiss Cheese Model
The Swiss Cheese Model is a safety accident causation theory developed by James Reason, a British psychologist, in the 1990s. The model provides a framework for understanding how accidents occur in complex systems by visualising safety as multiple defensive layers, each with latent vulnerabilities or "holes."
How the Swiss Cheese Model works in practice
A practical sequence teams can use to standardize adoption and reduce risk.
1. Describe the Incident Sequence: Establish a timeline. What happened first, second, third? What was the immediate cause?
2. Identify Active Failures (Human Errors): What immediate actions or omissions by workers/supervisors contributed? Examples: wrong valve opened, communication not relayed, fatigue-induced error, procedural violation.
3. Identify Latent Failures (System Weaknesses): For each active failure, ask "Why did that error occur?" and trace to system factors:
- Was the procedure unclear or missing?
- Was the worker adequately trained?
- Was the equipment adequate?
Where Swiss Cheese Model has the most impact
These are the areas where mature teams typically see measurable gains.
For HSSE Teams
The Swiss Cheese Model fundamentally changes how safety is managed. Rather than hunting for "the cause" (which usually becomes "human error"), HSSE teams design multi-layered defensive systems. Each significant hazard requires independent controls: engineering, procedures, training, supervision, and rescue capability. When incidents occur, investigations are thorough and multi-factorial, identifying latent failures (system weaknesses) not just active failures (worker error). This approach is more effective than blame-focused investigations.
For IT & CIOs
The Swiss Cheese Model informs how safety data is captured and analysed. Systems must track multiple failure types: procedure gaps, training status, equipment maintenance, supervision compliance, communication logs, and environmental factors. Incident investigations must be structured to identify latent failures alongside active failures. Root cause analysis workflows guide investigators through systematic hole-mapping. Predictive analytics look for warning signs (accumulating latent failures in a particular area) that might predict an incident if an active failure occurs.
Deep Dive
Swiss Cheese Model explained for operations, HSSE, and leadership teams
A concise reference focused on implementation, governance, and day-to-day execution.
What Is the Swiss Cheese Model?
The Swiss Cheese Model is a safety accident causation theory developed by James Reason, a British psychologist, in the 1990s. The model provides a framework for understanding how accidents occur in complex systems by visualising safety as multiple defensive layers, each with latent vulnerabilities or "holes."
The Core Concept
Imagine a series of slices of Swiss cheese stacked on top of each other. Each slice represents a defensive layer in a safety system:
Slice 1: Organisational Management [O O O]
Slice 2: Supervision & Communication [ O O ]
Slice 3: Training & Competence [O O ]
Slice 4: Procedures & Permits [ O ]
Slice 5: Equipment & Engineering [O O ]
Slice 6: Environmental Factors [ O ]
When the holes in all six slices align, an accident occurs.
Each slice has holes representing potential failures: inadequate management oversight, unclear procedures, equipment defects, fatigue, environmental hazards, or human error. Individually, each hole poses no threat because adjacent slices remain intact. However, when holes in multiple slices align by chance, a pathway for an accident opens. The accident is not the result of a single failure but of the simultaneous alignment of multiple, independent vulnerabilities.
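The compounding effect of independent layers can be illustrated with a short calculation. This is a simplified sketch, not part of Reason's original formulation: it assumes each layer fails independently with a known probability, which real systems only approximate.

```python
from math import prod

def alignment_probability(layer_failure_probs):
    """Probability that a hazard passes through every defensive layer,
    assuming each layer fails independently."""
    return prod(layer_failure_probs)

# Hypothetical numbers: a single layer that fails 10% of the time stops
# only 90% of exposures, but six such independent layers together let a
# hazard through roughly once in a million exposures.
single = alignment_probability([0.1])          # 0.1
six_layers = alignment_probability([0.1] * 6)  # ~1e-06
print(single, six_layers)
```

The same arithmetic shows why dependence between layers is dangerous: if one cost-cutting decision weakens training, supervision, and maintenance at once, the layers no longer fail independently and the product above understates the real risk.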
Key Implications
- Multiple Failures Are Necessary: An accident rarely results from a single cause. Reason's research in aviation, healthcare, and nuclear power showed that major incidents typically involved 4-6 independent failures aligning. Blaming the accident on "human error" alone is incorrect and superficial.
- Latent vs. Active Failures: The model distinguishes between:
- Latent failures: Organisational or systemic weaknesses that exist but lie dormant (inadequate training curriculum, missing procedure, design flaw, maintenance schedule gap). Latent failures often precede incidents by weeks or months.
- Active failures: Immediate human actions or omissions (wrong button pressed, communication breakdown, fatigue-induced error). Active failures are the proximate cause but usually result from latent failures.
- System Resilience Requires Redundancy: If you have only one defensive layer against a hazard, an accident is inevitable when that layer fails. Resilience requires independent, redundant layers. For example, fall protection requires: (1) engineering (guardrails), (2) training (how to use harness), (3) supervision (constant monitoring), (4) maintenance (harness inspection), (5) procedures (Permit-to-Work), (6) rescue capability (if primary controls fail).
Worked Example: Confined Space Incident
A worker is asphyxiated in a confined space:
Holes that aligned:
- Organisational hole: The company had no formal Confined Space Entry procedure (latent failure: existed for months).
- Training hole: The entry supervisor had not completed confined space training (latent failure: hired 3 months ago without induction).
- Atmospheric testing hole: The company owned no atmospheric testing equipment; the supervisor "visually inspected" the space instead (latent failure: cost-cutting decision).
- Communication hole: The supervisor did not brief the team on hazards; only one worker knew they were entering (latent failure: rushed schedule).
- Rescue hole: No rescue equipment was on-site; the nearest rescue-trained team was 30 minutes away (latent failure: not planned).
- Active failure (human error): The worker, fatigued and eager to complete the task, entered without waiting for atmospheric testing results (happened in seconds).
Any one of these holes alone might not have caused an incident:
- If a procedure had existed, the accident might not have occurred (even with an untrained supervisor).
- If the supervisor had been trained, they would have demanded atmospheric testing and rescue capability.
- If rescue equipment had been on-site, the worker could have been saved.
But when all six holes aligned, the incident was nearly inevitable. The accident was not "worker error" but system failure.
Also Known As: Accident Causation Model, Resilience Engineering Framework, Latent Failure Theory, Reason's Model
Regulatory Standard / Framework:
- ISO 45001:2018 - Occupational Health and Safety Management requires systems thinking in risk assessment and incident investigation
- UK HSE Incident Investigation guidance - recommends Swiss Cheese thinking for root cause analysis
- EU Framework Directive 89/391/EEC - Article 5 requires identification of hazards and system controls (implies multiple defensive layers)
- OSHA Process Safety Management (PSM) - 1910.119 requires multiple safeguards for high-hazard operations
How the Swiss Cheese Model Applies to Root Cause Analysis
Five-Step Root Cause Investigation Using Swiss Cheese
- Describe the Incident Sequence: Establish a timeline: What happened first, second, third? What was the immediate cause (the proximate hole)?
- Identify Active Failures (Human Errors): What immediate actions or omissions by workers/supervisors contributed? Examples: wrong valve opened, communication not relayed, fatigue-induced error, procedural violation.
- Identify Latent Failures (System Weaknesses): For each active failure, ask "Why did that error occur?" and trace to system factors:
- Was the procedure unclear or missing?
- Was the worker adequately trained?
- Was the equipment adequate?
- Was there communication breakdown?
- Was there management pressure (schedule, cost) overriding safety?
- Was supervision/monitoring absent?
- Map Defensive Layers & Holes: Visualise which defensive layers existed and which holes (vulnerabilities) existed in each:
- Management oversight: present/absent, effective/ineffective
- Procedure & planning: clear/vague, enforced/ignored
- Training & competence: thorough/superficial, current/outdated
- Supervision & communication: active/passive, clear/ambiguous
- Equipment & engineering: maintained/neglected, suitable/marginal
- Environmental factors: controlled/chaotic
- Identify Corrective Actions: Decide which holes to plug (improve procedure, training, maintenance, supervision, equipment). Prioritise actions that introduce independent defensive layers (not just strengthening an existing layer).
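The five steps above can be captured as a structured record. This is an illustrative sketch, not a standard schema; the class, field, and layer names are assumptions chosen to mirror the steps.

```python
from dataclasses import dataclass, field

# Defensive layers from the hole-mapping step (step 4).
LAYERS = (
    "Management oversight",
    "Procedure & planning",
    "Training & competence",
    "Supervision & communication",
    "Equipment & engineering",
    "Environmental factors",
)

@dataclass
class Investigation:
    timeline: list                 # step 1: incident sequence
    active_failures: list          # step 2: immediate actions/omissions
    latent_failures: dict          # step 3: system weaknesses, keyed by layer
    corrective_actions: list = field(default_factory=list)  # step 5

    def hole_map(self):
        """Step 4: every defensive layer mapped to its identified holes,
        including layers where no hole was found."""
        return {layer: self.latent_failures.get(layer, []) for layer in LAYERS}

# Hypothetical record mirroring the confined space example above:
inv = Investigation(
    timeline=["Entry authorised", "Worker entered", "Worker collapsed"],
    active_failures=["Entered without waiting for atmospheric test results"],
    latent_failures={
        "Procedure & planning": ["No formal Confined Space Entry procedure"],
        "Training & competence": ["Supervisor lacked confined space training"],
    },
)
print(len(inv.hole_map()))  # 6: every layer appears, holed or not
```

Forcing every layer into the hole map keeps investigators from stopping at the first failure they find: a layer with an empty list has been examined and cleared, not skipped.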
Corrective Action Prioritisation Using Swiss Cheese
PRIORITY 1 (Most Effective): Create a NEW defensive layer (e.g., if incident involved no Permit-to-Work, create a Permit-to-Work procedure for this task).
PRIORITY 2: Strengthen an independent existing layer (e.g., add atmospheric testing equipment if procedure exists but testing is not done).
PRIORITY 3: Strengthen the same layer (e.g., improve clarity of existing procedure; add more detail to training curriculum). Single-layer improvements offer less resilience than new layers.
PRIORITY 4 (Least Effective): Blame workers/supervisors; discipline them. This addresses an active failure but not the latent failures that enabled it.
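The prioritisation scheme above can be expressed as a small sorting helper. A minimal sketch; the action-type labels are illustrative, not a formal taxonomy.

```python
# Lower number = more effective, per the priority scheme above.
PRIORITY = {
    "new_layer": 1,                     # create a new independent defensive layer
    "strengthen_independent_layer": 2,  # reinforce a different existing layer
    "strengthen_same_layer": 3,         # improve the layer that failed
    "discipline_worker": 4,             # addresses the active failure only
}

def prioritise(actions):
    """Sort proposed corrective actions, most effective first."""
    return sorted(actions, key=lambda a: PRIORITY[a["type"]])

proposals = [
    {"type": "strengthen_same_layer", "desc": "Add detail to existing procedure"},
    {"type": "discipline_worker", "desc": "Retrain and warn the operator"},
    {"type": "new_layer", "desc": "Introduce Permit-to-Work for this task"},
]
print(prioritise(proposals)[0]["desc"])  # "Introduce Permit-to-Work for this task"
```

Encoding the ranking this way makes the bias explicit: a proposal that only disciplines a worker always sorts last, prompting the team to look for a system-level action first.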
Why the Swiss Cheese Model Matters
Industry context
According to James Reason's original research (cited in the Journal of Organisational Behaviour Management, 2000) and subsequent replications in aviation (Eurocontrol), healthcare (Institute of Medicine), and construction (HSE studies), major incidents in complex systems typically involve 4-6 independent contributing factors. Investigations focusing only on "human error" or a single cause miss 70-80% of correctable system failures and provide little benefit for future prevention. Comprehensive Swiss Cheese-based investigations (identifying all 4-6 factors) reduce repeat incidents by 85%+.
Implementing & Monitoring Swiss Cheese Thinking: From Manual to Digital
Manual approach: When an incident occurs, a supervisor conducts a brief investigation: "Worker didn't follow procedure." Report filed; worker disciplined; no system changes. The company repeats the same incident 6 months later with a different worker or on a different site because the latent failures were never addressed.
Digital approach: Incident investigation is structured and comprehensive. When an incident is reported, the system prompts investigators to identify: (1) immediate cause, (2) active failures (worker actions), (3) latent failures (system weaknesses) in each defensive layer. Corrective actions are categorised by priority (new defensive layer, strengthen independent layer, etc.). The system tracks all corrective actions to completion and monitors whether similar incidents recur (if they do, it indicates the root cause was not properly addressed).
Dockt's platform structures incident investigations using Swiss Cheese principles. When a SIF occurs, the system auto-links the incident to relevant credential gaps (training, competence, certification status) that may constitute latent failures. For example: if a confined space incident occurs and the supervisor lacked Confined Space Entry training, Dockt flags this as a contributory latent failure and proposes corrective training. This ensures investigations identify both human factors and credentialing factors systematically.
Best Practices for Swiss Cheese Thinking
- Design Redundancy & Independence: For high-consequence hazards (work at height, confined space, machinery, diving), ensure at least 3 independent defensive layers. Example for work at height: (1) guardrails (engineering), (2) harness + lanyard (PPE + training), (3) rescue plan + rescue team (rescue capability), (4) Competent Person supervision. If one layer fails, others remain.
- Investigate Latent Failures, Not Just Active Failures: When conducting root cause analysis, go beyond "worker error." Ask: Why did that error occur? What system failure enabled it? Was the procedure clear? Was the worker trained? Was equipment adequate? Identify all 4-6 contributing factors.
- Distinguish Between Latent & Active Failures in Root Cause Reports: Document latent failures separately (procedure gap, training need, maintenance overdue, communication breakdown). Latent failures are actionable and often recur across sites; they explain why this incident occurred and can be corrected to prevent similar incidents.
- Focus Corrective Actions on Creating New Defensive Layers: When correcting incidents, prioritise creating new independent layers over strengthening existing ones. Example: if a machinery incident occurred and procedure exists but workers violated it, do not just add procedure detail (same layer). Instead, add an independent layer: install automated machine shutdown, or add Permit-to-Work process, or increase supervision frequency.
Frequently asked questions
Should we add as many defensive layers as possible?
No. Diminishing returns apply. For most hazards, 3-4 well-designed independent layers provide adequate resilience. Adding layers beyond that point increases complexity and cost with minimal additional safety benefit. Use risk assessment to determine the appropriate number of layers for each hazard.
Continue your glossary path
Explore connected concepts often applied alongside this term.
- Toolbox Talk
- RAMS (Risk Assessment and Method Statement)
- CDM Regulations (Construction Design and Management Regulations 2015)
Operationalize Swiss Cheese Model at workforce scale
Dockt helps teams move from manual credential tracking to proactive, audit-ready competence management.