The Verification Bottleneck: Why AI Needs Human Velocity

In the world of Clinical AI, there is a dangerous misconception: that the hard part is building the model. It isn't. The hard part is building the trust.
At SmarterDx, I worked on an LLM product that could read patient charts (EMRs) and identify missed diagnoses—revenue that hospitals were leaving on the table. The AI was brilliant. It could find a missing "Sepsis" diagnosis buried in a nursing note from day 3 of a 14-day stay.
But when the first version of the review interface shipped, it failed. Not because the AI was wrong, but because the UX was asking the wrong question.
1. The Signal (Detection is Cheap)
The model operated on 100% of discharges. For every 10,000 patients, it might flag 1,000 potential opportunities. That's a massive amount of signal.
From an engineering perspective, this was a triumph. The team created a "Super-Reviewer" that never sleeps. But for the Clinical Documentation Integrity (CDI) nurses who had to review these findings, it was a nightmare. The interface essentially dumped a haystack on their desk and said, "There are needles in here. Good luck."
2. The Noise (Search vs. Review)
The initial UI design followed a traditional "Search" paradigm. It showed a list of patients and a tag: "Potential Sepsis".
When a user clicked, the app opened the patient chart. The user then had to do exactly what they did before AI: read. They had to scroll through days of progress notes, lab results, and vitals to find the evidence that triggered the algorithm.
This was the failure. I realized the tool wasn't saving them time; it was just shifting their attention. The time-to-validate (TTV) was still hovering around 8-12 minutes per chart. In a pre-bill environment, where every hour counts before the claim goes out, this friction was fatal.
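The scale of that friction is worth making concrete. A quick back-of-envelope calculation using the article's own figures (10,000 discharges, ~1,000 flags, roughly 10 minutes per chart as the midpoint of 8-12) shows the reviewer load implied by the old UI; the helper name below is illustrative, not a real SmarterDx API:

```python
def reviewer_hours(discharges: int, flag_rate: float, minutes_per_chart: float) -> float:
    """Total nurse-hours needed to clear every flag at a given review speed."""
    flags = discharges * flag_rate
    return flags * minutes_per_chart / 60

# 10,000 discharges, 10% flagged, ~10 minutes per chart:
hours = reviewer_hours(10_000, 0.10, 10)
print(round(hours, 1))  # 166.7 nurse-hours per 10,000 discharges
```

At that rate, every 10,000 discharges demands more than four full work-weeks of specialized CDI nurse time just to triage the flags.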
3. The Bottleneck (Velocity is Trust)
I identified that the bottleneck wasn't the AI's detection capability—it was the speed of human verification. To scale, we didn't need better AI; we needed a faster human loop.
I led a redesign that shifted the paradigm from "Search" to "Verification".
Instead of just flagging "Sepsis", my new "Synthesis Engine" UI extracted the exact snippets of evidence that triggered the flag:
- White Blood Cell Count: 14.5 (High) @ 02:00 AM
- Vitals: Fever 101.3°F @ 02:15 AM
- Doctor's Note: "Suspect infection, starting Vanc..."
I designed the card to present these three data points front and center. The user didn't have to open the chart. They just had to look at the card and click "Agree" or "Disagree".
The Result
Review time dropped from 12 minutes to 45 seconds.
By respecting the user's attention and doing the "pre-reading" for them, my design turned a specialized forensic task into a high-velocity verification workflow. I helped transform the product from a "Scanner" into a "Revenue Engine."
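Running the same back-of-envelope arithmetic with the article's before/after numbers (12 minutes down to 45 seconds per chart, over the ~1,000 flags per 10,000 discharges cited earlier) shows what that drop means in reviewer capacity; the figures come from the text, the variable names are mine:

```python
flags = 1_000               # flags per 10,000 discharges
before = flags * 12 * 60    # total seconds at 12 min/chart
after = flags * 45          # total seconds at 45 s/chart

print(before / 3600, after / 3600, before / after)
# 200.0 12.5 16.0  -> 200 hours shrinks to 12.5, a 16x speedup
```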
For Product Design Engineers working with AI, the job isn't just to expose the model's output. It's to design the interface of trust that lets humans accept that output at speed.