
For Better Risk Analysis, Stop Keeping Score

Risk scores often distract from the valuable information needed to make decisions. Instead, use simple, elegant models better suited to the task.

We need more scientifically sound cyber risk assessment in medtech. The current practice of assigning a likelihood and severity score to cybersecurity vulnerabilities (among other hazards) leads to mathematically incorrect conclusions, a lack of trust in the scores, and the inability to recover the valuable information used to create the scores in the first place. In turn, these problems engender distrust in the risk assessment and analysis processes.


Here's a story of an attempt at business risk scoring and an alternative approach that proved more effective for decision-making. I've changed the firm's identifying details, but it's a true tale.


Background

Small InJectors (SJ) is a business unit in a large medical device manufacturer with a suite of small, handheld smart devices. The devices are disposable with a relatively short use period. They have firmware, but they were not designed to be updated after manufacturing. A leader in the business unit read the new FDA guidance on postmarket software updates and realized that FDA might raise the bar on the frequency of software patches. To patch more frequently, SJ's products would need over-the-air updates, which neither the company nor the product lines were prepared for. Retrofitting an over-the-air update capability would require major rework, delays, and unplanned costs.


The company's cybersecurity and R&D stakeholders developed a scoring system to determine which products would need the retrofit and which could continue with their original designs and schedules. The scoring system gave each product line a score from 1 through 10: above 6, the retrofit was required; between 4 and 6, there was regulatory risk to weigh; below 4, R&D could continue without added regulatory risk.
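
To make those cutoffs concrete, here is a minimal sketch of the threshold logic the scoring system encoded. The function name and band labels are my own illustration, not SJ's actual implementation:

```python
def retrofit_determination(score: float) -> str:
    """Map a 1-10 product risk score to a retrofit decision band.

    Illustrative only: the bands mirror the cutoffs described above,
    not SJ's actual code.
    """
    if score > 6:
        return "retrofit required"
    if score > 4:
        return "regulatory risk: further review needed"
    return "continue with original design and schedule"
```

Note what happens inside a band: products scoring 6.1 and 9.8 receive the same determination, and everything that produced those numbers is gone.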


SJ wanted my team to assess the scoring system and provide feedback on the determinations.


Our Approach

The scores themselves told us nothing meaningful about whether FDA was likely to reject a submission or whether a product line would have to be pulled from the market until the retrofit was complete. The cutoff points were based on the analysts' preconceived assumptions about the products, plus a number of product attributes that were indeed meaningful for making a determination; but once those inputs were collapsed into a single number, they could not be recovered. The scores didn't carry enough information to be communicative.


We recommended replacing the scoring system with a decision tree. The team met with SJ and drafted a decision tree that essentially replicated the analysis and decision points SJ had already created, adding a few important elements that had been missing. At that stage the results became clear: the scoring system was replaced with the tree, and we closed the project.
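
For flavor, here's a minimal sketch of what a decision tree like this can look like in code. The questions and outcomes are hypothetical stand-ins for SJ's actual decision points, drawn loosely from the background above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    question: Optional[str] = None   # None on leaves
    outcome: Optional[str] = None    # set only on leaves
    yes: Optional["Node"] = None
    no: Optional["Node"] = None

def decide(node: Node, answers: dict) -> str:
    """Walk the tree using yes/no answers keyed by question text."""
    while node.outcome is None:
        node = node.yes if answers[node.question] else node.no
    return node.outcome

# Hypothetical tree: every branch is a question a decision-maker
# can read, challenge, and update.
tree = Node(
    question="Is the use period longer than the expected patch cycle?",
    yes=Node(
        question="Can the firmware be updated after manufacturing?",
        yes=Node(outcome="continue: patch via existing update path"),
        no=Node(outcome="retrofit: over-the-air updates required"),
    ),
    no=Node(outcome="continue: device expires before a patch is due"),
)

answers = {
    "Is the use period longer than the expected patch cycle?": True,
    "Can the firmware be updated after manufacturing?": False,
}
print(decide(tree, answers))  # -> "retrofit: over-the-air updates required"
```

Unlike a single score, every path through the tree preserves exactly why the determination came out the way it did, and changing a question or a cutoff is a visible, reviewable edit.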


Results

The decision tree was more effective because it carried more information, was easy to update, and every decision-maker understood exactly how it worked. Ultimately, the company was able to unify its leaders and more confidently determine which products needed unplanned investment to meet the new requirements. The tree made it easier to reach confident, durable decisions across stakeholder groups.


I have several more examples of less-than-helpful scoring systems that can be replaced with simpler, more accessible, and more elegant alternatives. Stay tuned.


~Shannon, the Optimistic Optimizer
