
Adapt threat modeling for AI safety

Threat modeling could be adapted into a process for developing medical AI with safety-by-design


Medical AI developers need to create products that are safe by design, and regulators need to review those products to evaluate claims of safety. Neither group is yet sure how. If you follow Dr. Robert Califf's public comments at CHAI*, it's clear that the FDA is already doing a lot, but it is also calling for new statutory authorities to regulate AI. In a conversation with a leader of Health AI at one of the largest tech companies in the world, I learned that the company is not yet sure how it can consistently assure safety in medical AI platforms without a human in the loop. This concerns me: starting from the assumption that humans must be in the loop may stifle automation innovation at the outset. (And we already have examples of closed-loop algorithms in healthcare; they are not novel.)


So, we need frameworks that will increase confidence in building safe-by-design AI. I see parallels between cybersafety and AI safety:

  • Need to be secured by design

  • Need continuous updating as new threats emerge

  • Safety risks may be complex for developers to perceive and plan for unilaterally

  • The design phase is the cheapest and easiest opportunity to build for safety, but least incentivized

  • Safe designs are difficult to demonstrate

  • Safety often requires tradeoffs that are difficult to negotiate


Threat modeling can help immediately. It is already used in medtech, it originated in software development, it is simple, and it may well scale. I have personally led R&D teams through the threat modeling process, and they have universally valued the results.


Threat modeling is a design process that addresses several of these issues, and Adam Shostack's Four Question Framework** makes it deceptively simple. At any stage of design, ask and answer the following questions (a sketch of how the answers might be recorded follows the list):

  1. What are we building? (Answer with design diagrams.)

  2. What can go wrong? (Be systematic with tools like STRIDE.)

  3. What are we doing about it? (Design controls.)

  4. Did we do a good job? (Do the controls work? Did they have unintended consequences? Have we tested the most critical controls?)
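
For teams that want to make the four questions concrete, here is a minimal Python sketch of how the answers could be captured as a living record during design reviews. The class names, the fields, and the closed-loop dosing example are my own illustrative assumptions, not an established tool or anything the FDA has endorsed; only the STRIDE category names themselves come from the standard mnemonic.

```python
from dataclasses import dataclass, field
from enum import Enum


class Stride(Enum):
    """Microsoft's STRIDE categories: a systematic prompt for question 2."""
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFO_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial of service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"


@dataclass
class Threat:
    """One 'what can go wrong?' finding and its follow-through."""
    component: str        # element of the design diagram (question 1)
    category: Stride      # category that prompted the finding (question 2)
    description: str      # the specific thing that can go wrong (question 2)
    mitigation: str = ""  # what we're doing about it (question 3)
    tested: bool = False  # did we do a good job? (question 4)


@dataclass
class ThreatModel:
    """A lightweight record of one pass through the four questions."""
    system: str  # question 1: what are we building?
    threats: list[Threat] = field(default_factory=list)

    def open_items(self) -> list[Threat]:
        """Question 4 check: findings with no mitigation or no test yet."""
        return [t for t in self.threats if not t.mitigation or not t.tested]


# Hypothetical example: a closed-loop dosing recommendation service
model = ThreatModel(system="Closed-loop dosing recommendation service")
model.threats.append(Threat(
    component="Glucose telemetry ingest",
    category=Stride.TAMPERING,
    description="Manipulated sensor stream drives an unsafe dose recommendation",
    mitigation="Authenticate and range-check incoming telemetry",
))
print(f"{len(model.open_items())} finding(s) still need mitigation or testing")
```

The value is less in the data structure than in the habit: every design review leaves behind an explicit list of what can go wrong and which controls have actually been tested.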


The FDA adopted threat modeling as a recommended process for designing security into medical devices. It collaborated with industry to create a threat modeling playbook and bootcamps for medical device developers, incorporated threat modeling into its cybersecurity guidance, and stated publicly that the outputs of threat modeling are part of the scientific evidence needed to support cybersafety claims in medical device regulatory submissions.


This level of validation for threat modeling should signal to the industry that something similar is needed for AI safety. With a few added tools (such as an AI-specific analogue to the STRIDE threat list), threat modeling is a viable, easily adopted framework for health AI safety.
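
As a sketch of what that AI-specific threat list might look like, the categories below are purely illustrative assumptions on my part; they borrow from commonly discussed AI failure modes rather than from any published standard.

```python
from enum import Enum


class AIThreat(Enum):
    """Illustrative AI analogue to STRIDE; these names are assumptions, not a standard."""
    DATA_POISONING = "Training data poisoning"
    EVASION = "Adversarial or out-of-spec inputs at inference time"
    DISTRIBUTION_SHIFT = "Performance drift as the patient population changes"
    AUTOMATION_BIAS = "Over-reliance on model output by clinicians"
    PRIVACY_LEAKAGE = "Membership inference or training data leakage"
    UNSAFE_ACTUATION = "Out-of-range actions in a closed-loop setting"


# Such an enum could stand in for STRIDE in the threat record sketched above.
```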


I'd love to hear from people on this topic. Drop me a line.


~Shannon Lantzy, the Optimistic Optimizer


* The Coalition for Health AI (CHAI) is one of several organizations focused on fostering AI in healthcare, including working on the processes, frameworks, and tools needed to assure safety in Health AI.

** Adam is a friend, a brilliant trainer of threat modeling, and an original author. I strongly recommend his courses.
