Should We Publish Threat Models?
A Critical Reflection
Disclaimer: The views expressed here are solely my own and do not reflect the views of my employer.
In their recent article "Publish Your Threat Models!", Loren Kohnfelder and Adam Shostack propose a bold and commendable idea: software vendors should publish their threat models to improve security transparency and foster industry trust. The goal is clear and timely: by revealing how systems are designed to withstand threats, vendors create accountability, enable informed consumption, and help build a shared security culture.
After a thoughtful exchange with Loren, I believe this is an important conversation to have. However, I argue that we must first clarify what a threat model is, and whether the type of model they describe is meaningful enough — and safe enough — to publish.
This article explores why transparency without epistemic clarity is liable to be either misleading or strategically dangerous. The core dilemma is this: a threat model detailed enough to be worth publishing is usually too sensitive to share, while one safe enough to share is rarely useful.
1. What is a Threat Model?
Kohnfelder and Shostack begin with a definition grounded in current industry practice:
"A threat model is the identification of possible threats to a system or component, and how those threats are (or are not) mitigated by design."
This formulation aligns closely with conventional STRIDE-based modeling: often diagram-driven, identifying actors, assets, threats, and mitigations. It is, fundamentally, a prescriptive model of design intent — not a descriptive artefact reflecting the actual implemented system.
That distinction is critical.
A prescriptive threat model is inherently normative: it states what the designers intend to protect and how they believe it should be done. However, systems evolve. Implementations deviate. Threats emerge from context. The prescriptive model, in isolation, may not reflect current risk.
This makes publication fraught with ambiguity: how much of the model reflects intent, how much reflects implementation, and how current is it?
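To ground the prescriptive/descriptive distinction, here is a minimal, hypothetical sketch of what a conventional STRIDE-style entry records (the field names and example values are my own illustration, not from either article). Note that every field expresses design intent; nothing in the record is evidence about the implemented system:

```python
from dataclasses import dataclass

@dataclass
class ThreatEntry:
    component: str        # element of the system under analysis
    stride_category: str  # Spoofing, Tampering, Repudiation, Information
                          # disclosure, Denial of service, Elevation of privilege
    threat: str           # what could go wrong
    mitigated: bool       # a claim of design intent, not verified behaviour
    mitigation: str       # how the designers intend to address the threat

entry = ThreatEntry(
    component="login endpoint",
    stride_category="Spoofing",
    threat="Attacker replays a captured session token",
    mitigated=True,
    mitigation="Short-lived tokens bound to the client TLS session",
)

# The record states what *should* hold; nothing here inspects the code.
print(f"{entry.component}: {entry.stride_category} -> "
      f"{'mitigated' if entry.mitigated else 'open'}")
```

The gap the article describes lives precisely in that `mitigated` flag: it asserts intent at modelling time, and nothing keeps it synchronised with the evolving implementation.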
2. The Transparency Dilemma
Threat models operate on a spectrum:
If they’re too generic, they risk becoming marketing material — safe, but meaningless.
If they’re contextually specific, they encode real system semantics — and that’s where the risk lies.
To be useful, a threat model must show more than surface intent. It must encode the logic of trust boundaries, internal assumptions, failure modes, exception flows, data integrity guarantees, prioritisation strategies — in short, meaning.
And meaning leaks strategy.
Kohnfelder argues that attackers don’t benefit significantly from high-level threat models. I agree — at the highest level. But systems don’t operate at the highest level. They operate in nuanced contexts, composed of implicit constraints and strategic logic.
A model that reveals mitigation priorities, exception handling, or fallback behaviour can hint at real-world trade-offs. That’s not "security through obscurity" — that’s operational inference risk. And it’s hard to redact without neutering the model’s utility.
3. Security Is Not Transitive
The article argues that published threat models (PTMs) help customers evaluate the components they rely on. In principle, this is true. In practice, it's problematic.
Security is not transitive. A secure component embedded in an insecure context does not make the larger system secure. Nor does a component’s declared intention to mitigate a threat guarantee its ability to do so when integrated elsewhere.
Customers can only meaningfully benefit from published threat models if:
The model is written from their perspective
The assumptions are clearly aligned with their context
The integration-level interactions are understood
That is rarely the case. Publishing a threat model for a reusable component is not the same as providing a holistic assurance argument for the composed system. Worse, customers may overinterpret intentions as guarantees, creating a false sense of security.
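The non-transitivity point can be made concrete with a small, hypothetical sketch (the component, threat wording, and shell usage are my own illustration, not from the article): a library that truthfully claims to mitigate injection in its intended HTML context gives no such guarantee when an integrator reuses its output in a shell context.

```python
import html
import shlex

# A component's threat model may truthfully say "injection: mitigated",
# meaning HTML injection in its intended web-rendering context.
def render_username(raw: str) -> str:
    return html.escape(raw)  # neutralises markup in HTML output

safe_for_html = render_username('"; rm -rf /tmp/x #')

# An integrator who reads "injection: mitigated" and interpolates the
# value into a shell command inherits none of that guarantee: the
# semicolon survives HTML escaping untouched.
unsafe_command = f"logger -t app {safe_for_html}"
safe_command = f"logger -t app {shlex.quote(safe_for_html)}"  # integrator's job
```

The component's claim was correct in its own context; the composed system is still vulnerable, because the assumption ("output is rendered as HTML") did not travel with the claim.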
4. Risk and Prioritisation
One of the strongest assumptions in the article is that the value of publishing outweighs the downside. But threat modelling is deeply bound to risk posture, which is:
Subjective
Temporal
Often undocumented
Threats are not mitigated equally. They are prioritised based on business drivers, resource constraints, and acceptable loss models. Publishing a threat model may expose the organisation’s prioritisation logic, or worse, incentivise superficial completeness to appeal to external readers.
Obscurity is not security — but partial transparency can distort the signal. And in a world where security postures differ, an unqualified claim of "mitigated" is not universally helpful.
5. The Value Is in the Process
This may be the most important point:
Threat models, as artefacts, are often incomplete. But the process of creating them builds deep, internal system understanding.
Design discussions, mental model alignment, threat enumeration workshops — these are the epistemic moments where security understanding emerges. What results is often a fragmented set of models held in the minds of the people involved, not a polished, publishable diagram.
Publishing artefacts may be helpful, but we mustn’t confuse the map for the territory.
❓ Frequently Asked Questions (Based on Discussion with Loren Kohnfelder)
Q1: Are you suggesting we shouldn’t publish threat models at all?
No — I’m suggesting we must first define what kind of threat model we’re publishing, who it’s for, what assumptions it encodes, and how it’s maintained.
Q2: Can we redact the sensitive parts and publish the rest?
Possibly — but redaction often strips out the semantic layers that make the model useful. A risk model without context becomes a hollow checklist.
Q3: What exactly leaks when we publish a threat model?
Beyond specific attack paths, we leak strategic logic: mitigation priorities, architectural decisions, internal process constraints, fallback paths, and even business assumptions.
Q4: Isn’t it better to publish something than nothing?
Not always. A misleading model can be worse than none, especially if external readers infer assurance where none exists. If we publish, we must publish responsibly.
Q5: Can threat models meaningfully support integrators and customers?
Only if they are modelled from those perspectives. Without alignment on assumptions and context, the model is not portable.
Q6: Why not separate the question of transparency from how threat modelling should be done?
Because transparency is only meaningful when it reveals something real. If our threat modelling practices are flawed, incomplete, or outdated, publishing becomes performative — not constructive.
Q7: What’s the alternative?
Modelling frameworks that explicitly encode perspective, context, state transitions, and emergent behaviours — like SoP (System of Perspectives) — can support both rigour and transparency. But until such models are widely adopted, we must be cautious about mistaking publication for assurance.
Final Thoughts
Loren and Adam have done the community a great service by prompting this conversation. We agree on the aspiration. But I believe the path to meaningful transparency lies first in epistemic clarity, modelling fidelity, and context-aware framing.
Publish, yes — but publish what matters, and publish responsibly.

