
From Microsoft's Crisis to Medical Device Revolution: The Evolution of Threat Modeling

  • Writer: Shannon Lantzy
  • 22 hours ago
  • 7 min read


The year 2002 marked a turning point in cybersecurity history, though few realized it at the time. Microsoft was hemorrhaging customers due to security vulnerabilities, worms were spreading unchecked, and patching costs were mounting. In the midst of this crisis, Bill Gates penned his famous memo on "trustworthy computing," setting the stage for a fundamental transformation in how we approach software security.

At the center of this transformation was Adam Shostack, a critic-turned-insider who would help revolutionize cybersecurity practices not just at Microsoft, but across industries—eventually reaching all the way to FDA medical device regulation.

The Auto-Run Revelation: When Data Overcomes Organizational Inertia

[07:25] The story begins with a seemingly simple Windows feature called auto-run, introduced in Windows 95. When you inserted media into your computer, it would automatically execute predefined actions—eliminating the need for users to manually type installation commands. The feature solved a real usability problem and reduced support costs for software vendors.

But malware developers discovered auto-run too. They began exploiting it to spread infections, creating a massive security problem that persisted for years. The reason it took so long to fix wasn't technical—it was organizational.

"Microsoft didn't like changing features outside of major releases because it drove training costs," Shostack explains. [08:27] Multiple internal teams had fought to change auto-run and failed. The feature remained unchanged not because the fix was difficult, but because the organizational cost of change seemed to outweigh the security benefits.

Everything changed when Shostack quantified the problem in terms Microsoft's leadership could understand: infection cleanup data. His analysis revealed that auto-run malware was being removed from millions of computers every month. The data was overwhelming, and it finally provided the evidence needed to overcome organizational resistance.

The result? A million fewer infections per month after the fix was implemented. The customer impact that leadership had worried about proved negligible—barely visible as rounding errors in the metrics they were tracking.

This experience illustrates a crucial lesson about organizational change: sometimes the biggest barrier to better security isn't technical complexity, but the difficulty of making decisions under uncertainty. Leaders often ask for "more data" not because additional information will change their decision, but because requesting more information seems reasonable when facing unknowns.

From Whiteboard Wizardry to Systematic Science

[14:10] The auto-run story represents Microsoft's broader transformation from reactive security to proactive threat modeling. But threat modeling itself needed to evolve before it could scale across a company as large and diverse as Microsoft.

Traditional threat modeling was expert-driven and informal. Security specialists would gather around whiteboards, sketch out systems, and brainstorm about what could go wrong. The process relied heavily on individual expertise and intuition—making it difficult to scale or verify.

[15:43] The breakthrough came in 1999 when two Microsoft researchers, Loren Kohnfelder and Praerit Garg, created STRIDE—a mnemonic that stands for Spoofing, Tampering, Repudiation, Information Disclosure, Denial of Service, and Elevation of Privilege. This systematic framework transformed threat modeling from art to engineering.

"That invention took us from people in a room looking at a whiteboard and saying, 'I wonder what can go wrong with this?' to saying, 'I can now step through this diagram in a structured way and think about what can go wrong where I can actually have somebody else assess the work in a way that's not opinion driven,'" Shostack notes.

STRIDE provided the systematic approach needed to scale threat modeling beyond individual experts. Instead of relying on brainstorming sessions that varied based on who was in the room, teams could now work through a structured checklist that covered the universe of potential security failures.
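The "step through the diagram in a structured way" idea can be made concrete with a minimal sketch. The STRIDE categories below are from the framework itself; the one-line prompts and the component names are illustrative assumptions, not an official checklist:

```python
# STRIDE-per-element sketch: walk every diagram element through all six
# categories so nothing depends on who happens to be in the room.
# The prompt wording and element names are hypothetical examples.

STRIDE = {
    "Spoofing": "Can an attacker pretend to be this element?",
    "Tampering": "Can data or code in this element be modified?",
    "Repudiation": "Can actions here be denied without evidence?",
    "Information Disclosure": "Can this element leak data it holds?",
    "Denial of Service": "Can this element be made unavailable?",
    "Elevation of Privilege": "Can this element grant rights it shouldn't?",
}

def enumerate_threats(elements):
    """Pair each diagram element with every STRIDE category and prompt."""
    return [
        (element, category, prompt)
        for element in elements
        for category, prompt in STRIDE.items()
    ]

# Three elements x six categories = eighteen structured questions to review.
threats = enumerate_threats(["mobile app", "device firmware", "BLE link"])
for element, category, prompt in threats:
    print(f"{element} / {category}: {prompt}")
```

The point is not the code but the shape of the process: the output is an enumerable work list that a second reviewer can audit, rather than an opinion-driven brainstorm.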

The FDA Connection: From Software to Life-Critical Systems

[26:00] Fast-forward to 2015-2016, when FDA faced its own challenge: how to regulate cybersecurity in medical devices. The agency recognized that traditional approaches to medical device safety needed to evolve for the connected, software-driven devices entering the market.

Through connections at MITRE, a federally funded research center that had worked with Shostack on vulnerability databases, FDA reached out to explore how threat modeling might apply to medical device security.

The timing was crucial. Unlike most other industries, medical devices require pre-market approval before sale. This regulatory structure gave FDA an opportunity to build security requirements into the development process rather than trying to retrofit them afterward.

Dr. Suzanne Schwartz, who led FDA's cybersecurity efforts, approached the challenge systematically. Rather than simply mandating threat modeling, FDA invested in education and community building. They conducted bootcamps with the Medical Device Innovation Consortium (MDIC) to train both industry and government personnel. They developed playbooks with concrete examples and tools.

[28:44] "FDA really spent a lot of time and energy thinking about 'can we do this?' And we did these boot camps with MDIC to train people at MITRE and from industry in threat modeling so we could create a playbook," Shostack recalls.

The result was guidance that made threat modeling artifacts effectively mandatory for medical device submissions. But rather than being perceived as an additional burden, the systematic approach gave companies a clear framework for addressing cybersecurity requirements.

Threat Modeling vs. Risk Management: A Crucial Distinction

One of the most important insights from Shostack's work is the distinction between threat modeling and risk management—concepts that are often conflated but serve different purposes in product development.

Risk management deals with known systems and attempts to quantify the likelihood and impact of potential problems. It's about managing uncertainty in fixed environments. Threat modeling, by contrast, is a design activity that explores potential problems before systems are built.

Consider the hypothetical medical device Shostack often uses in presentations: [00:33] an implantable, reversible male birth control device with remote control capabilities. From a risk management perspective, you might try to quantify the probability of unauthorized access and weigh it against the benefits of remote control functionality.

But from a threat modeling perspective, the question isn't about quantifying risk—it's about systematic exploration of failure modes during design. What happens if someone spoofs the device's identity? How might an attacker tamper with communications? Could privileges be elevated to gain unauthorized control?

This design-time analysis can lead to architectural changes that eliminate entire categories of vulnerabilities rather than simply managing their risk. It's the difference between building a bridge to withstand expected loads versus managing the risk of an inadequately designed bridge collapsing.

The Software Understanding Problem

[42:49] As our conversation turns to current challenges in cybersecurity, Shostack identifies software understanding as one of the fundamental hard problems facing the industry. We're building increasingly complex software systems that we don't fully understand and can't adequately communicate about.

"Organizations are building software that they do not understand and cannot communicate their understanding to the people who are using it," he explains, referencing recent Defense Department publications on the topic.

The problem stems from the affordances built into our computing platforms. Traditional systems like Windows and Unix give programs enormous freedom: they can write files throughout the system, modify the registry, and create side effects that persist long after the program terminates. Understanding the interactions between components in such systems becomes exponentially harder as complexity grows.

Cloud architectures offer some hope by imposing constraints on what different components can do. Amazon S3, for example, only stores files—it doesn't execute code or modify system configurations. This constraint makes it easier to reason about the properties and behaviors of cloud-based systems.

But we're still in the early stages of learning how to build software systems that are inherently understandable. The challenge is compounded by the pace of innovation—we're constantly creating new abstractions and architectures before we fully understand the security implications of existing ones.

Balancing Innovation and Security

[56:22] This tension between innovation and security is particularly acute in regulated industries like medical devices. How do you preserve the speed and creativity that drives breakthrough technologies while ensuring systematic security practices?

Shostack argues that the key lies in understanding when regulation is appropriate and how to implement it thoughtfully. Industries where products can cause direct physical harm—cars, airplanes, medical devices—merit regulatory oversight. But the specific approach matters enormously.

FDA's threat modeling guidance represents a thoughtful approach: rather than prescriptive rules about specific technologies, it requires systematic thinking about security during design. Companies retain the freedom to innovate in their architectural choices while being held accountable for systematic security analysis.

"I think FDA helps drive consistency and fairness and patient centricity into the system in a very, very positive way," Shostack notes. The agency transforms subjective business risk decisions into explicit requirements, ensuring that patient safety considerations aren't left to individual company judgment.

Practical Guidance: Starting with a Napkin

[59:42] Despite the sophisticated frameworks and methodologies that have evolved around threat modeling, Shostack's advice for startup founders remains refreshingly simple: "Start your threat modeling with a napkin."

The message is that threat modeling doesn't require expensive tools or extensive training to begin. What it requires is systematic thinking about potential failures before committing to architectural decisions.

For medical device startups, this means considering both medical and security failure modes early in the design process. What happens if the device loses connectivity? How might an attacker manipulate sensor readings? What are the implications of different authentication approaches?

These questions are much easier and cheaper to address during initial design than after software is written and hardware is manufactured. The napkin-level analysis doesn't need to be perfect—it needs to identify the key security considerations that should inform architectural decisions.

The Four Essential Questions

[1:02:10] As threat modeling practices scale across organizations, Shostack emphasizes four fundamental questions that every team member should be able to answer:

  1. What are we working on?

  2. What can go wrong?

  3. What are we going to do about it?

  4. Did we do a good job?

These questions provide a framework for systematic security thinking that doesn't depend on specialized expertise. They can be applied at any level of detail, from high-level system architecture to specific implementation choices.

The goal isn't to eliminate all security risks—that's impossible. The goal is to ensure that security considerations are systematically incorporated into design decisions and that teams have a shared understanding of the security properties they're trying to achieve.
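The four questions map naturally onto a lightweight record that any team can keep alongside a design document. This is a minimal sketch with a hypothetical record type and example entries, not a prescribed format:

```python
# One record per feature, with a field for each of the four questions.
# Field names and the sample entries are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    working_on: str                                        # 1. What are we working on?
    what_can_go_wrong: list = field(default_factory=list)  # 2. What can go wrong?
    mitigations: list = field(default_factory=list)        # 3. What are we going to do about it?
    reviewed: bool = False                                 # 4. Did we do a good job?

tm = ThreatModel(working_on="remote firmware update")
tm.what_can_go_wrong.append("unsigned update image accepted by device")
tm.mitigations.append("verify image signature before flashing")
tm.reviewed = True  # set after a second person checks questions 2 and 3
```

Even at napkin fidelity, a record like this gives the team a shared artifact to revisit as the design changes.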

Looking Forward: AI and the Future of Threat Modeling

[1:01:24] As our conversation concludes, Shostack touches on emerging work around AI-assisted threat modeling. The potential is significant—AI systems can help teams work through systematic threat analysis more efficiently and might identify patterns that human analysts miss.

But the fundamental challenge remains the same: building systematic thinking about security into the product development process. AI tools may make threat modeling more accessible and comprehensive, but they don't eliminate the need for human judgment about acceptable trade-offs and implementation choices.

The success of threat modeling—from Microsoft's crisis response to FDA's medical device guidance—demonstrates that systematic approaches to security can scale across organizations and industries. The key is starting with simple, systematic questions and building from there.

Whether you're developing the next breakthrough medical device or simply trying to build more secure software, the lesson is clear: don't wait until after you've "poured the concrete" to think about the security properties you need. Start with a napkin, ask systematic questions, and build security into your design from the beginning.

The stakes are too high, and the cost of retrofitting security too great, to approach it any other way.

This content was generated by AI from the original podcast discussion between Shannon Lantzy and Adam Shostack on the Inside MedTech Innovation podcast. Listen to the full conversation [here].


 
 