By Shannon Lantzy

How not to use AI: Decision support can harm patients when used as a decision override

A recent article (warning: paywall) in the Wall Street Journal illustrated how algorithms in hospitals are changing nursing, specifically how AI can harm patient care. The article describes how a nurse may choose to ignore her own better judgment when it disagrees with an algorithm’s recommendation, for fear of being held accountable if her override was wrong. In a vivid example, the article describes a cancer patient writhing in pain and a nurse standing nearby with a pill in her hand. The nurse knows the pill would safely ease her patient’s suffering, but if she gives it, she could be disciplined for overriding the hospital’s medication management software. Before the hospital implemented the software, she had more autonomy and would have given the meds to the patient.

Who should be responsible for this situation? The nurse? The attending physician? The hospital? The software manufacturer? The patient? The software?

As a thought experiment, let’s assume the doctor and the hospital share responsibility (each can be sued). The hospital provides resources in the form of people, processes, and tools (e.g., nurses, clinical workflows, medical equipment, and software). The doctor provides clinical judgment and the practice of medicine. Put simply, the doctor decides.*

In the pain patient scenario, the nurse had already interacted with the doctor about this patient and was reluctant to go back again. She was also reluctant to override the algorithm’s recommendation, for fear of reprimand or worse. The article provided several other examples of nurses ceding judgment to an algorithm, such as when an algorithm alerted a nurse to a potential case of sepsis and recommended blood tests, even though the nurse knew with certainty that the patient wasn’t septic. She ordered the blood test rather than face scrutiny for overriding the algorithm.

Human judgment is often biased and flawed. So is algorithmic judgment. We need to design use cases that optimize decision-making using the best of both humans and algorithms, and interfaces that help them work together:

  • Design an override path and a feedback loop directly into the algorithm, so that nurses know overrides are expected and their judgments are valued

  • Train staff to understand the algorithm’s strengths and weaknesses. A key example: humans have access to more data when making a decision in a hospital; the algorithm can necessarily use only a subset of the available data

  • Think of algorithms as task-doers with limitations; clinician teams should know which tasks the algorithms are trusted with, which tasks they are not, and when algorithm overrides are expected
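The first recommendation above can be made concrete in software. Below is a minimal, hypothetical sketch (the class and field names are illustrative, not any real hospital system's API) of a decision-support recommendation that treats a clinician override as an expected, first-class outcome: the override requires a reason, and every outcome is logged so it can feed back into model review.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recommendation:
    """One algorithmic recommendation, with override designed in."""
    patient_id: str
    action: str                       # e.g. "order sepsis blood panel"
    confidence: float                 # model confidence, 0.0 to 1.0
    accepted: Optional[bool] = None   # None until the clinician responds
    override_reason: Optional[str] = None
    feedback_log: list = field(default_factory=list)

    def accept(self, clinician: str) -> None:
        self.accepted = True
        self._log(clinician, "accepted")

    def override(self, clinician: str, reason: str) -> None:
        # Overrides are captured, not punished: the stated reason is
        # recorded for model monitoring and retraining review.
        self.accepted = False
        self.override_reason = reason
        self._log(clinician, f"overridden: {reason}")

    def _log(self, clinician: str, outcome: str) -> None:
        self.feedback_log.append({
            "clinician": clinician,
            "outcome": outcome,
            "at": datetime.now(timezone.utc).isoformat(),
        })

# Example: the sepsis scenario from the article, resolved by an
# expected override rather than an unnecessary blood test.
rec = Recommendation("pt-001", "order sepsis blood panel", confidence=0.62)
rec.override("RN Smith", "patient afebrile, no clinical signs of sepsis")
print(rec.accepted, "-", rec.override_reason)
```

The design choice worth noting is that `override` demands a reason string: the workflow signals that the clinician's judgment is data the system wants, which is the opposite of the disciplinary framing described in the article.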

Personally, I love the algorithms that support my daily work, but I override them all the time (e.g., Grammarly's grammar suggestions, Tesla's exit-ramp driving). I’m optimistic that healthcare, as an industry, will learn how to use, and not use, AI quickly enough.

~Shannon, the Optimistic Optimizer

* To be precise, decision-making is shared among the clinician, the patient, and patient representatives. Informed consent is central to healthcare decision-making. I am using a shorthand.


