AI-Assisted Threat Modelling: Where It Helps, Where It Lies
You can paste a system description into an LLM and get back a STRIDE analysis in 30 seconds. A full threat list, categorised by type, with suggested mitigations. It looks thorough. It might even be thorough. That’s the problem.

What LLMs Are Actually Good At

Start with the honest case for using AI in threat modelling, because it’s real.

Breadth coverage. A well-trained LLM has processed thousands of architecture descriptions, CVEs, and security design documents. It won’t forget to check for SSRF. It won’t skip repudiation because the session ran long. It has no blind spots born from familiarity with the system. For the common, well-documented threat categories, it’s genuinely reliable. ...
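To make the "paste a system description, get a STRIDE analysis" workflow concrete, here is a minimal sketch of assembling such a prompt. The helper function, its name, and the example system description are all hypothetical illustrations, not any particular tool's API; only the six STRIDE categories themselves are standard.

```python
# Hypothetical sketch: building a STRIDE-analysis prompt for an LLM.
# The six categories and their one-line definitions are the standard
# STRIDE taxonomy; everything else here is illustrative.

STRIDE = {
    "Spoofing": "an attacker impersonates a user or component",
    "Tampering": "unauthorised modification of data or code",
    "Repudiation": "actions cannot be attributed or audited",
    "Information disclosure": "data exposed to unauthorised parties",
    "Denial of service": "availability degraded or blocked",
    "Elevation of privilege": "capabilities gained beyond what was granted",
}

def build_stride_prompt(system_description: str) -> str:
    """Assemble a prompt asking for a per-category STRIDE analysis."""
    category_lines = "\n".join(
        f"- {name}: {definition}" for name, definition in STRIDE.items()
    )
    return (
        "Perform a STRIDE threat analysis of the system below.\n"
        "For each category, list concrete threats and suggested mitigations.\n\n"
        f"Categories:\n{category_lines}\n\n"
        f"System description:\n{system_description}\n"
    )

# Example (hypothetical system): the resulting prompt would be sent to
# whatever LLM client you use.
prompt = build_stride_prompt(
    "A public REST API fronting a Postgres database, with JWT auth "
    "and a background worker that fetches user-supplied URLs."
)
print(prompt)
```

The point of structuring the prompt per category, rather than asking an open-ended "what are the threats?", is exactly the breadth property described above: the model is forced to visit every category, including the ones a human session tends to skip.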