I'm Sarah Brock. I have 20+ years in healthcare technology, split across two chapters that don't usually exist in the same person: 8 years as a licensed operator running post-acute facilities, and 12+ years as a product leader building the software those facilities depend on. The first chapter is why the second one works. These prototypes show how I think about AI in healthcare — how I identify real clinical and operational problems, scope AI-powered solutions, and build working examples to validate the concept.
Three structural failures repeat across every care setting I've worked in — as an operator and as a product leader. Clinical knowledge disappears between shifts. Episode-level decisions happen without outcome feedback. Regulatory data sits disconnected from the care delivery layer where it's most needed.
AI applied to documentation and chat interfaces doesn't fix those failures. The opportunity is a care intelligence platform that connects operational knowledge, clinical decisions, and outcome data across the full episode of care. The prototypes below explore what three surfaces of that platform could look like — built with Claude and AWS Bedrock, grounded in workflows I've operated inside.
Context windows are finite. Clinical workflows are not. When session pressure causes a clinician to rush or skip clarifying back-and-forth, incomplete information enters the record and corrupts every downstream decision that depends on it. This isn't a UX problem — it's a care quality failure that propagates across all three platform surfaces.
Mid-review, a CHF + COPD comorbidity interaction surfaces. That combination requires shorter therapy sessions spread across more days — a different authorization structure entirely. Under context pressure, there's no clean way to reintroduce that interaction and rerun the authorization logic.
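A toy sketch of the kind of rule that gets dropped under context pressure. The thresholds, the specific comorbidity trigger, and the session-split math here are illustrative assumptions, not real clinical or payer policy:

```python
def auth_structure(minutes_per_week, comorbidities):
    """Illustrative only: distribute a weekly therapy budget into an
    authorization structure, adjusting for a CHF + COPD interaction.

    The rule (6 short sessions vs. 3 long ones) is a made-up example,
    not actual clinical or payer logic.
    """
    if {"CHF", "COPD"} <= set(comorbidities):
        # Interaction present: shorter sessions across more days.
        days = 6
    else:
        # Baseline: longer sessions on fewer days.
        days = 3
    return {"days_per_week": days,
            "minutes_per_session": minutes_per_week // days}
```

Drop the comorbidity interaction from context and the function returns the baseline structure — which is exactly the failure mode: the authorization looks internally consistent, but it was computed from incomplete inputs.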
Patient approved for fewer SNF days than clinically needed. Discharged before reaching functional baseline. Readmission follows.
"No therapy updates for Bob across 3 shifts — unusual for his care plan. Did therapy not occur, or does it need to be documented? Marking did-not-occur vs. undocumented changes his episode metrics, readmission risk, and current authorization coverage."
Proactive gap detection before the handoff closes. The agent notices what's missing and surfaces it while there's still context to act on it.
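A minimal sketch of the gap-detection heuristic behind that prompt. The window size, the discipline labels, and the per-shift data shape are assumptions for illustration, not ShiftIQ's actual implementation:

```python
def detect_therapy_gap(shift_docs, window=3):
    """Flag a documentation gap when a care plan expects therapy but no
    therapy note appears in the last `window` shifts.

    shift_docs: list of sets of documented disciplines, oldest to newest,
    e.g. [{"therapy", "nursing"}, {"nursing"}, ...]
    """
    if len(shift_docs) < window:
        return False  # not enough history to call it a pattern
    # Gap only if therapy is absent from every shift in the window.
    return all("therapy" not in shift for shift in shift_docs[-window:])
```

The point isn't the check itself — it's that the check runs before the handoff closes, while someone who knows whether therapy occurred is still in the building.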
These aren't isolated failures. A missing therapy note in ShiftIQ means ContinuIQ monitors against a flawed baseline. A wrong auth from CarePathIQ becomes a readmission that ContinuIQ flags two weeks later. Context failure in one surface propagates through all three.
Each prototype addresses a distinct failure point in post-acute care. Click any card to explore the full interactive prototype — built with Claude and AWS Bedrock, fictional patient data throughout.
The three prototypes above show clinical product thinking. This shows the production-track AI work behind them — a working churn prediction model built across enterprise post-acute platforms, validated against CareCore EHR behavioral data, and approved for the roadmap.
These aren't prototype metrics — they're the market realities these tools are designed to address.
My 20+ years in healthcare technology break into two distinct chapters, and the combination is genuinely rare. For the first 8 years I was an operator — licensed to run skilled nursing facilities, overseeing all health information technology across every care setting Columbine operated, doing the hiring, the state licensing, the midnight calls when a resident went to the ER. For the next 12+ years I moved to the vendor side, building the software I used to buy, configure, and complain about. That sequence matters. When I write a product spec, I know what it feels like when the software doesn't match how care actually happens at shift change. When I talk to a clinical user, I'm not translating — I've been them. That's the foundation these prototypes are built on.