From the Winter 2020 Issue

Threat Modeling: Methodologies, Myths, and Missing Perspectives

Hilary MacMillan
EVP for Engineering | CyLogic

On April 10, 2014, citizens of Ghaziabad, a city near Delhi, India, cast their ballots for parliamentary elections using electronic voting machines. The machines – and the votes they held – had to be stored in a secure location for a month, until vote counting was set to begin. When planning, election officials accounted for myriad threats to the machines and implemented security protocols to counter them. Unfortunately, they missed one important threat: rats.

Seriously. As in rodents. The storage location was near a wholesale meat market and the possibility of the creatures feasting on wires and electronic circuits presented a real risk to the election’s integrity.1  While officials used threat modeling techniques to successfully counter a variety of adversaries, the most pressing threat came from one they didn’t anticipate.

Threat modeling is a powerful tool for building secure-by-design systems. It’s useful to anyone involved in identifying, defining, selecting, engineering, designing, implementing, operating, and/or supporting system solutions. Threat modeling is powerful because it forces one to take an atypical perspective and ask: What does the system need to do and be when bad things are happening?  If there’s one thing to be counted on, it’s that bad things will happen, either due to malicious activity or plain bad luck.

Despite the value of threat modeling, the practice hasn’t risen to the level of ‘required and expected’ in many solution engineering and implementation efforts. Two reasons for this, among many, are a lack of awareness and intimidation.

  • The Systems Engineering (SE) discipline, when done in an orthodox manner, should include threat modeling in its regular scope. However, the International Council on Systems Engineering’s (INCOSE) Systems Engineering Handbook2 and the Systems Engineering Body of Knowledge (SEBOK)3 discuss threat modeling activities under various “specialty engineering” topics (e.g., Security Engineering or Resilience Engineering). Because of this, SEs are often unaware of this methodology and its value.
  • Threat modeling has a strong association with cybersecurity and secure software development. These disciplines provide necessary and valuable contributions to threat modeling but often don’t bring the broad user or business focused perspective that’s needed. This association leads to a notion that threat modeling can only be done by those with deep technical or (cyber)security expertise and can prevent others from getting involved.

With these challenges in mind, there are a few threat modeling myths to dispel:

Myth #1:  Threat modeling is too expensive and/or too time-consuming and/or takes special resources that we don’t have.

Engineering and implementation tend to focus on desired and expected behaviors under optimistic conditions, when the world around the system – including other systems, organizational entities, and people, both intended and unintended – is just as we define it to be.  Unfortunately, the likelihood of this reality is low to nonexistent.

This optimistic viewpoint results in an incomplete set of requirements, otherwise known as a requirement set with defects. As Dr. Barry Boehm demonstrated, the cost to correct defects introduced in the requirements phase increases exponentially over time until the defects are discovered and corrected.4 There is a cost associated with threat modeling but it’s less than the potential cost of not threat modeling.
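As a toy illustration of that escalation, a defect-cost model might look like the sketch below. The phase list and the growth factor are invented round numbers chosen purely for illustration; they are not Boehm's actual figures.

```python
# Illustrative model of defect-cost escalation: the cost to fix a defect
# multiplies by a fixed factor for each lifecycle phase that passes before
# it is discovered. The factor of 5 is a hypothetical placeholder.
PHASES = ["requirements", "design", "implementation", "test", "operation"]

def relative_fix_cost(introduced: str, discovered: str, growth: float = 5.0) -> float:
    """Cost to fix a defect, relative to fixing it in the phase where it
    was introduced, assuming cost multiplies by `growth` each phase."""
    gap = PHASES.index(discovered) - PHASES.index(introduced)
    if gap < 0:
        raise ValueError("a defect cannot be discovered before it is introduced")
    return growth ** gap

# Under this model, a requirements defect found in operation costs
# 5**4 = 625 times as much as one caught during requirements review.
print(relative_fix_cost("requirements", "operation"))
```

The exact multiplier matters less than the shape of the curve: requirements defects caught late dominate the cost of fixing them early, which is the economic case for threat modeling up front.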

Myth #2:  Once we identify all the threats to our system and figure out how to defeat them, we’re done.

It’s unrealistic to assume that all potential threats (human, environmental, etc.) can be identified and defeated during system definition and design. The threat landscape continuously evolves. Just as the German army defeated the Maginot Line in 1940 by using mobile forces to bypass its defenses, firewalls, vulnerability scanners, antivirus programs, and passwords are continually defeated by changing tactics and circumstances.

A crucial part of any threat modeling activity, and one that is rarely included or emphasized, is defining what system characteristics and behaviors must be true and performed despite the bad actions and bad luck that may impact a system.

Myth #3:  Threat modeling is a cybersecurity and/or software developer thing – it should be in their job jar. Threat modeling requires deep (cyber)security and/or software engineering expertise.

Brainstorming ways in which bad actions or bad luck can impact a system is something anyone can do. Arguably, those who will ultimately use the system, operate it, support it, or depend on it are better positioned than those who engineer and implement it to dream up such scenarios. This same operationally focused group is also often best placed to articulate what the system must still be able to do, even when those bad things happen. This isn’t to say that (cyber)security and software engineers, architects, and developers aren’t critical to the threat modeling process. They are often best positioned to determine the likelihood of an imagined bad action impacting the system, assess what the impact to the system would be, and evaluate the feasibility of resilience options.


With those myths dispelled, there are factors that do challenge the effective implementation of threat modeling, which can be summarized as “I don’t know how to threat model, or when to do it…”

In the last few decades, several threat modeling methodologies5 have emerged from both commercial enterprise and public-sector organizations. Not all are true methodologies (i.e., not all describe a systematic way to “do threat modeling”), and their outcomes are not consistent with one another. As such, the threat modeling activity can often be improved by combining several of these established methodologies.

That said, there’s little explicit guidance on selecting the optimal methodology or set of methodologies, how to combine them, when – in an engineering or implementation process – threat modeling should be done for greatest effect, or what to do with the outcomes of threat modeling to actually realize secure systems.

While the scope and length of this article does not accommodate a detailed exploration of these methodologies and their ideal applications, the main points to internalize are:

  1. Threat modeling should be performed on every implementation effort (whether building new systems or installing and integrating already-built solutions), every time.

  2. The ultimate goal driving the need to threat model is to realize resilient systems: systems that behave in desired ways despite bad actions or bad luck. Achieving this goal takes requirements that are then reflected in design and implementation.

  3. It’s impossible to identify every possible bad action or unfortunate incident that a system may encounter in its lifetime. That said, brainstorming ways in which bad actors and bad luck could impact your system, and then identifying what characteristics of your system must still be true despite those bad events, is an effective way of deriving system resiliency requirements.
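The brainstorm-then-invert exercise in point 3 could be sketched in code. Everything below – the class names, the example events, the invariants – is hypothetical and exists only to show the shape of the activity: collect bad events from any contributor, then restate each as a property the system must preserve.

```python
# Hypothetical sketch of the two-step exercise: (1) brainstorm bad events,
# malicious or accidental; (2) invert them into statements about what must
# remain true of the system regardless. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class ThreatScenario:
    event: str    # the bad action or bad luck
    source: str   # "adversary", "user error", "environment", ...

@dataclass
class ResiliencyRequirement:
    invariant: str                  # what must still be true of the system
    motivating_events: list[str] = field(default_factory=list)

# Step 1: brainstorm bad events -- anyone can contribute these.
scenarios = [
    ThreatScenario("credentials guessed by brute force", "adversary"),
    ThreatScenario("password written under keyboard is found", "user error"),
    ThreatScenario("rodents chew through storage-room wiring", "environment"),
]

# Step 2: invert each event into a requirement on the system itself.
requirements = [
    ResiliencyRequirement(
        "only correct, legitimate users are ever granted access",
        [s.event for s in scenarios
         if "credential" in s.event or "password" in s.event],
    ),
    ResiliencyRequirement(
        "stored votes remain intact and countable after physical damage",
        ["rodents chew through storage-room wiring"],
    ),
]

for r in requirements:
    print(f"{r.invariant}  <- motivated by {len(r.motivating_events)} event(s)")
```

The point of the inversion is that the requirement outlives the brainstorm: new attack vectors can be added to `motivating_events` later without changing what the system must guarantee.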

This last point is crucial. Often, threat modeling methodologies focus on defining the adversary: their motivations, their means, and their opportunities to exploit vulnerabilities. While this is one useful input to the threat modeling process – when brainstorming ways in which bad luck could happen – it’s not the be-all, end-all. The critical outcome of any threat modeling effort is an understanding of the real requirements for the system’s ability to continue behaving in desired ways despite the innumerable bad things that could happen to it. The potential attacks are the starting point; they’re not the end.

Put another way, there are always more problems to chase. However, if we define what must be true for our system, in what it is and how it behaves, we can make conscious and deliberate design and implementation decisions based on trading off between meeting the requirement absolutely and other factors we must account for (e.g., cost and schedule to implement, supportability, ease of use, performance impact, interoperability, and availability). It also gives the team an opportunity to consider realization options based on the system as a whole (which includes its purpose, behavior, composition, and interactions) versus an individual part of it.

The following two scenarios illustrate this:

  • Scenario 1: A threat modeling activity identifies an attack where an adversary determines legitimate users’ authentication credentials through a brute force attack. A threat modeling approach that identifies ways to address this attack vector might recommend countermeasures like requiring longer passwords or requiring plaintext passwords to be salted and hashed through multiple iterations. These measures are implemented, but when the system is operational, an adversary gains access to the system by using social engineering techniques to log in using legitimate credentials.
  • Scenario 2: This same activity is performed; however, instead of identifying ways to address the specific attack vector identified, the team identified that, even in the presence of an adversary or despite a careless user leaving their passwords written on paper under their keyboard, the system must only permit access to correct and legitimate users. This sets the team up to consider a broad range of implementation options that might include additional factors of authentication like biometrics or possession factors, or the implementation of a “two-man rule,” where two legitimate users authenticate each other.
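As a concrete sketch of the Scenario 1 countermeasure – salted, iterated password hashing – here is what such a scheme might look like using Python’s standard-library PBKDF2. The iteration count and salt size are illustrative assumptions, not recommendations. Note the article’s caveat: this hardens the brute-force vector but does nothing against a stolen legitimate credential, which is exactly why Scenario 2’s requirement-centered framing is broader.

```python
# Sketch of salted, iterated password hashing via PBKDF2-HMAC-SHA256.
# Parameters (iteration count, salt length) are illustrative only.
import hashlib
import hmac
import os

ITERATIONS = 600_000  # illustrative work factor against brute force

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return a (salt, digest) pair; a fresh random salt defeats
    precomputed rainbow tables, iterations slow down guessing."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("guessed password", salt, digest))              # False
```

Even done perfectly, this mechanism only satisfies one motivating event; the Scenario 2 invariant (“only correct and legitimate users gain access”) still demands additional measures such as a second authentication factor.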

Threat modeling should be part of any and every effort to engineer or implement new systems, whether they are technology-based, physical, process-based, or organization/human-centric. Much of the current literature that describes threat modeling is aimed toward software engineers, architects, and developers (and more specifically, software security engineers or cybersecurity engineers), and as such, is often missed or unknown to those outside software or cybersecurity-focused disciplines. Furthermore, much of the threat modeling methodology instruction uses terminology that the software security community is comfortable with, but can be either intimidating or unintelligible to others. These realities shouldn’t and don’t absolve executives, business analysts, systems engineers, system administrators, and others who influence and inform system definition, design, implementation, operations, and support from the need to continuously and proactively threat model.

But identifying and countering specific threats, often defined as the identification and mitigation of exploitable vulnerabilities, is insufficient. Trying to identify every possible exploit – every “bad thing” that could ever possibly go wrong – is Sisyphean. There will always be new threats (human or otherwise) and always new ways to exploit vulnerabilities. What must necessarily follow the identification of exploitable vulnerabilities is recognizing and documenting what system characteristics must be true and what behaviors must be allowed (or prevented) when a vulnerability is exploited, regardless of “who” did the exploiting. Defining these aspects as requirements and reflecting them in design, implementation, and operation enables “secure by design” systems.



2 International Council on Systems Engineering (INCOSE). INCOSE Systems Engineering Handbook. John Wiley & Sons, Inc., 2015.



