Most cybersecurity work is reactive. Something breaks, you fix it. Threat modelling is the opposite: you sit down before anything breaks and ask what could go wrong? Then you build defences before the attacker shows up.
It sounds obvious. Most teams still skip it.
What Threat Modelling Is
Threat modelling is a structured process for identifying what you’re protecting, who might attack it, how they’d do it, and what you’re going to do about it. OWASP distils it into four questions:
- What are we working on? — Map the system: components, data flows, trust boundaries.
- What can go wrong? — Identify threats against what you mapped.
- What are we going to do about it? — Decide: mitigate, accept, eliminate, or transfer each threat.
- Did we do a good job? — Review, and repeat when the system changes: update the threat model document and, if needed, perform a new risk assessment.
The output isn’t a checklist. It’s a living document — a threat model — that gives everyone on the team a shared understanding of where the risks are and why specific controls exist. This is often used as input when workshopping risk exposure later in the project.
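One way to keep that living document structured is a record per threat that captures the answers to the four questions. The fields and names below are an illustrative assumption, not a standard schema:

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    """The four answers to 'what are we going to do about it?'"""
    MITIGATE = "mitigate"
    ACCEPT = "accept"
    ELIMINATE = "eliminate"
    TRANSFER = "transfer"

@dataclass
class Threat:
    component: str        # which DFD element the threat applies to
    description: str      # what can go wrong
    category: str         # e.g. a STRIDE category
    treatment: Treatment  # the chosen response
    rationale: str = ""   # why this treatment, for the shared record

# One hypothetical entry in the living document:
t = Threat(
    component="Auth Service",
    description="Anonymous user reaches the login endpoint with a forged token",
    category="Spoofing",
    treatment=Treatment.MITIGATE,
    rationale="Validate token signatures server-side; require MFA",
)
```

Keeping entries structured like this makes it easy to feed the model into later risk workshops, since each threat already carries its component, category, and decision.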
Why Bother?
Security reviews at the end of a project find problems that are expensive to fix. Threat modelling finds them at the design stage, when changing an architectural decision is still cheap.
It also creates a clear rationale for security investment. Instead of “we need to add MFA/Passkeys because compliance,” you have “we need MFA because anonymous users can reach this authentication endpoint, which is an elevation of privilege risk under STRIDE, and the impact of a breach here is full account takeover.”
That argument lands differently in a planning meeting.
The Process
A typical threat modelling session starts with a Data Flow Diagram (DFD). You map every component, every data store, every data flow, and — critically — every trust boundary. A trust boundary is any point where data crosses between different privilege levels: internet to DMZ, user to backend, browser to API.
```mermaid
flowchart LR
    subgraph Internet["🌐 Internet (Untrusted)"]
        User([User])
    end
    subgraph DMZ["DMZ"]
        LB[Load Balancer]
        WAF[WAF]
    end
    subgraph Internal["Internal Network (Trusted)"]
        API[API Server]
        DB[(Database)]
        Auth[Auth Service]
    end
    User -->|HTTPS| WAF
    WAF --> LB
    LB --> API
    API --> DB
    API --> Auth
    style Internet fill:#ffeaea,stroke:#cc0000
    style DMZ fill:#fff8e1,stroke:#f9a825
    style Internal fill:#e8f5e9,stroke:#388e3c
```
Once the diagram exists, you walk through each component and data flow asking: what could an attacker do here?
Methodologies
STRIDE
STRIDE is the most widely used threat categorisation framework. Developed by Microsoft, it gives you six threat types to check against every element of your DFD:
| Threat | What it means | Example |
|---|---|---|
| Spoofing | Pretending to be someone else | Forged JWT, ARP spoofing |
| Tampering | Modifying data in transit or at rest | SQL injection, MITM |
| Repudiation | Denying an action occurred | Deleted logs, no audit trail |
| Information Disclosure | Exposing data to unauthorised parties | Verbose error messages, misconfigured S3 |
| Denial of Service | Making a system unavailable | Flood attacks, resource exhaustion |
| Elevation of Privilege | Gaining access beyond what’s allowed | Broken access control, SSRF to metadata API |
Walk every component against all six. The ones that apply become your threat list.
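That walk is mechanical enough to sketch in a few lines: cross every DFD element with the six categories, and record the pairs the room answers "yes" to. Component names here are hypothetical, taken from the DFD above:

```python
STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information Disclosure", "Denial of Service", "Elevation of Privilege",
]

components = ["WAF", "Load Balancer", "API Server", "Database", "Auth Service"]

# Every (component, category) pair is one question to ask in the session.
checklist = [(c, s) for c in components for s in STRIDE]

# The "yes" answers become the threat list, e.g.:
threats = [
    ("Database", "Tampering", "SQL injection via API query parameters"),
    ("Auth Service", "Spoofing", "Forged JWT accepted as valid"),
]
```

Even as a spreadsheet rather than code, the same cross-product keeps the session systematic: no component gets skipped, and every category gets asked.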
DREAD
DREAD is a scoring model that puts numbers on the threats STRIDE finds. Each threat gets rated 1–10 across five dimensions:
- Damage potential
- Reproducibility
- Exploitability
- Affected users
- Discoverability
The average score determines priority. DREAD is useful for ranking a long threat list when you can’t fix everything at once. It’s also subjective, so scoring should be done by more than one person.
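The arithmetic is just a mean of the five 1–10 ratings. The ratings below are illustrative, not from a real assessment:

```python
def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Average the five 1-10 DREAD ratings into a priority score."""
    return (damage + reproducibility + exploitability
            + affected_users + discoverability) / 5

# Hypothetical ratings for two threats found during the STRIDE pass:
sqli = dread_score(damage=9, reproducibility=8, exploitability=7,
                   affected_users=9, discoverability=6)       # 7.8
verbose_errors = dread_score(damage=4, reproducibility=9,
                             exploitability=8, affected_users=3,
                             discoverability=7)               # 6.2

# Higher score = fix first.
ranked = sorted([("SQLi", sqli), ("Verbose errors", verbose_errors)],
                key=lambda t: t[1], reverse=True)
```

Because the inputs are judgment calls, averaging scores from several raters (or discussing outliers until they converge) gives a more defensible ranking than any single person's numbers.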
PASTA
PASTA (Process for Attack Simulation and Threat Analysis) takes a business-risk-first approach rather than a pure technical one. Seven stages, from defining business objectives to simulating attacks to analysing residual risk. More heavyweight than STRIDE, better suited to large organisations or complex regulated systems where you need to tie security findings directly to business impact.
Attack Trees
Attack trees decompose a goal — “attacker gains admin access” — into all the ways it could be achieved, branching down into sub-goals. Good for analysing a single high-value target in depth. Less efficient for covering a whole system quickly.
```mermaid
graph TD
    A["🎯 Gain Admin Access"]
    A --> B[Steal credentials]
    A --> C[Exploit vulnerability]
    A --> D[Insider threat]
    B --> E[Phishing]
    B --> F[Credential stuffing]
    B --> G[Session hijacking]
    C --> H[SQLi to auth bypass]
    C --> I[SSRF to metadata]
    D --> J[Malicious employee]
    D --> K[Compromised contractor]
```
The Threat Modelling Process End to End
```mermaid
flowchart TD
    A[Define scope] --> B[Build DFD]
    B --> C[Identify trust boundaries]
    C --> D[Apply STRIDE per component]
    D --> E[Score threats with DREAD]
    E --> F{For each threat}
    F --> G[Mitigate]
    F --> H[Accept + document]
    F --> I[Eliminate component]
    F --> J[Transfer risk]
    G --> K[Validate controls work]
    H --> K
    I --> K
    J --> K
    K --> L[Document & review]
    L --> M{System changed?}
    M -->|Yes| B
    M -->|No| N[Done — until next change]
```
When to Do It
Threat modelling is most valuable at design time — before code is written. But it’s still worth doing on existing systems, especially before a major feature or architecture change.
Triggers that should prompt a threat model review:
- New authentication or authorisation mechanism
- New external integrations or third-party dependencies
- Moving from on-prem to cloud, or between cloud providers
- After a security incident
- New data types, especially PII or payment data
Asked how often to threat model, “continuously” is the right answer, which in practice means “whenever something significant changes.”
Who Should Be in the Room
Not just the security team. The people who understand the system are the ones who can identify what’s actually reachable, what the real trust boundaries are, and what assumptions the design makes. That means developers, architects, and ideally a product person who can make risk acceptance decisions.
The security engineer facilitates and knows the frameworks. The team knows the system. You need both.
Tools
| Tool | Type | Best for |
|---|---|---|
| Microsoft Threat Modeling Tool | Free, desktop | STRIDE-based, good for Azure workloads |
| OWASP Threat Dragon | Open source, web/desktop | Teams wanting an open-source DFD tool |
| IriusRisk | Commercial | Large teams, automated threat libraries |
| ThreatModeler | Commercial | Enterprise, integrates with SDLC toolchains |
| draw.io | Free | Manual DFDs, no threat library, but flexible |
For most small teams starting out, Threat Dragon or even draw.io with a STRIDE checklist in a spreadsheet is enough. The tool matters less than the conversation it facilitates.
The Honest Take
Threat modelling feels like overhead until it isn’t. The first time you catch an authentication bypass at design review instead of in a pentest report three weeks before go-live, the time cost stops being a question.
The four questions work. Apply them consistently, document the answers, and revisit when the system changes. Everything else — the methodology, the tool, the scoring model — is in service of that loop.