Understanding the difference between real-world data and real-world evidence is the key to digital medicine and AI success in healthcare
Great accuracy in machine-learned algorithms may say nothing about clinical outcomes
In 2015 I was asked to create a machine-learning competition in healthcare for Booz Allen's second Data Science Bowl. I thought the hard part would be finding data about human health that we could actually post publicly on Kaggle. The harder part turned out to be measuring meaningful outcomes. My collaborators, Drs. Andrew Arai and Michael Hansen at NIH's National Heart, Lung, and Blood Institute, brought a great dataset of MRI images with labeled measurements of ejection fraction. The competitors' algorithms delivered great results: they could estimate ejection fraction better and faster than cardiologists.
But...ejection fraction is a measurement...not an outcome. We didn't have the data to show, for example, that patients would live longer as a result of faster or more accurate diagnoses. To "transform how heart disease is diagnosed," we would need to study how computer vision changes patient outcomes when added to the clinical workflow. We needed to know whether the AI could help the doctor make better decisions about diagnosis and treatment (outcomes!), and for that, we would need completely different data and methods.
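To make the distinction concrete: ejection fraction reduces to arithmetic on two segmented ventricular volumes. Here is a minimal Python sketch with hypothetical example values (a real pipeline would derive these volumes from MRI segmentation); it shows why the number is a measurement, not an outcome.

    # Minimal sketch (hypothetical values): ejection fraction is arithmetic on two volumes.
    def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
        """Left-ventricular ejection fraction (%) from end-diastolic (EDV) and
        end-systolic (ESV) volumes, both in milliliters."""
        if not (0 < esv_ml <= edv_ml):
            raise ValueError("volumes must satisfy 0 < ESV <= EDV")
        return 100.0 * (edv_ml - esv_ml) / edv_ml

    # Example: EDV = 120 mL, ESV = 50 mL -> EF of roughly 58%
    print(f"EF = {ejection_fraction(120.0, 50.0):.1f}%")

Nothing in that calculation tells you whether the patient fared better because the number arrived faster or more accurately.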
Real-world data alone is not enough; we need real-world evidence
At MDIC's Annual Public Forum week, Dr. Califf said, "Real-world data plus the right methods gets us real-world evidence." Also at MDIC, folks celebrated the first FDA approval of a medical device based on real-world evidence. It has taken years of collaboration and investment (e.g., in NESTcc) to achieve this result. The ecosystem has learned a lot from the experience. And there is a lot more work yet to do.
It is hard to get real-world data. It is much harder to "get" real-world evidence. In fact, you can't go "get" evidence at all. You have to create it.
The regulatory ecosystem may not initially be comfortable with some forms of evidence - that shouldn't stop innovators
Digital medical device development feels like it should be easier than hardware medical device development. There's a part that may be easier (initial product development), but the hardest part is still hard (proving that it actually improves lives). We need to capitalize on the unique features of digital, like connectivity, to reduce barriers to evidence collection (like lowering the bar for premarket study in favor of postmarket evidence generation when the safety profile and the business model support it). Personally, I hope to see a lot of postmarket quasi-experiments and simulation-based evidence.
I work with startups and large MDMs alike who are scared that their existence is threatened if they do something that regulators aren't already promising to accept. I get it: regulatory uncertainty matters. But it shouldn't stop development; it should prompt Q-Subs and applications for TAP.
Innovators need to keep pushing the market, despite the barriers, if the real promise of digital and AI in health is to be realized.
~Shannon, the Optimistic Optimizer