Enterprise Post-Acute Platforms · 2025 · AI Prototype

Automated Attrition Protection

A working AI model built with Claude Sonnet on AWS Bedrock to predict client churn across enterprise post-acute platforms 6–12 months before it happens. It reads CareCore EHR behavioral signals and Gainsight usage patterns, scoring the correlations those signals form together over time rather than waiting for any single metric to cross a threshold.

Claude Sonnet 3.5 · AWS Bedrock · Gainsight integration · Salesforce connector · 1,247 client records · 89 terminations studied · Executive buy-in · roadmap approved
6–12 months early · churn detection
$7.7M revenue protected · first year
94% model accuracy (up from 78%)
6.4:1 ROI vs. crisis intervention
1,247 client records · training dataset
Working prototype with executive buy-in and roadmap approval · Not yet in production · All client data anonymized throughout
The five Gainsight signals · Proof of concept scope

Real metrics from a real system. Nothing invented.

The model runs entirely on Gainsight data that already existed in the system — five specific metrics with defined scoring thresholds, pulled via Salesforce connector. The insight wasn't finding new data; it was connecting patterns across data that nobody had correlated before.

01 · Projects on Hold %
Projects >30% complete that are currently paused ÷ total active projects
Green <10%
Yellow 10–20%
Red >20%
02 · AR Over 90 Days
Current AR over 90 days ÷ Annual Recurring Revenue (ARR)
Green <15%
Yellow 15–29%
Red ≥30%
03 · Time Since Last Purchase
Days since most recent product or service purchase — engagement and growth proxy
Green <90 days
Yellow 90–180
Red >180
04 · SCR Cases Count
Unresolved Service Change Request cases — ongoing technical and service issues
Green 0 cases
Yellow 1–2
Red 3+
05 · Strategic Event Participation
Scored by client tier — Aligned/Premier vs. Partner vs. Advocate
Aligned/Premier: Green 4+, Yellow 2–3, Red ≤1
Partner: Green 3+, Yellow 1–2, Red 0
Advocate/Collaborator: Green 2+, Yellow ≤1
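The five threshold tables above can be sketched as a small traffic-light helper. This is a minimal Python sketch with the thresholds transcribed from the copy; the function and metric names are illustrative, and boundary handling assumes integer-valued percentages.

```python
# RAG thresholds transcribed from the five signal tables above.
# Each entry: (green_below, yellow_max); Yellow spans the gap, Red is above.
THRESHOLDS = {
    "projects_on_hold_pct": (10, 20),   # Green <10 · Yellow 10–20 · Red >20
    "ar_over_90_pct":       (15, 29),   # Green <15 · Yellow 15–29 · Red ≥30
    "days_since_purchase":  (90, 180),  # Green <90 · Yellow 90–180 · Red >180
    "scr_cases":            (1, 2),     # Green 0 · Yellow 1–2 · Red 3+
}

def rag(metric: str, value: float) -> str:
    """Score a lower-is-better metric against its Green/Yellow/Red bands."""
    green_below, yellow_max = THRESHOLDS[metric]
    if value < green_below:
        return "Green"
    return "Yellow" if value <= yellow_max else "Red"

# Strategic event participation is scored per client tier: (green_min, yellow_min).
EVENT_THRESHOLDS = {
    "Aligned/Premier": (4, 2),  # Green 4+ · Yellow 2–3 · Red ≤1
    "Partner":         (3, 1),  # Green 3+ · Yellow 1–2 · Red 0
    "Advocate":        (2, 0),  # Green 2+ · Yellow ≤1 (no Red tier in the copy)
}

def event_rag(tier: str, events: int) -> str:
    """Score strategic event participation, which is higher-is-better."""
    green_min, yellow_min = EVENT_THRESHOLDS[tier]
    if events >= green_min:
        return "Green"
    return "Yellow" if events >= yellow_min else "Red"
```

Note the asymmetry: four signals are lower-is-better, while event participation inverts, which is why it gets its own tier-aware scorer.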
aap · attrition risk dashboard · q4 2025 · LIVE
Client Attrition Risk · Q4 2025
Claude Sonnet 3.5 · AWS Bedrock · Gainsight · Salesforce · Updated 6h ago
| Client | Signals | Risk | Score | Route to |
| --- | --- | --- | --- | --- |
| Metro Transitional Care | AR ↑533% · 80% hold · 0 events · $195K · 4mo renewal | CRITICAL | 91.3 | Exec + Legal |
| Eastside Mental Health | 3 qtr decline · champion left · $167K · 3mo renewal | CRITICAL | 88.7 | Regional VP |
| Metro Health Partners | 8 SCR cases · AR 24.7% · 68% hold · $143K | DETRACTOR | 74.0 | Crisis team |
| Valley Care Services | Passive→Passive→Passive · AR ↑11% · 38% hold | WARNING | 52.0 | AM + CS |
| Harmony Skilled Nursing | Detractor→Passive→Promoter · Risk ↓48 pts · $168K saved | STABLE | 23.7 | Growth opp. |
✦ Claude · Metro Transitional Care · risk narrative
"AR over 90 days increased 533% across three quarters (5.2% → 32.9%). Projects on hold accelerated from 10% to 80%. Strategic event participation reached zero. No single metric crossed a threshold in isolation — this is a consistent decline pattern across all dimensions simultaneously with no offsetting positive indicators. Systematic disengagement, not operational challenge. Recommend immediate executive escalation and legal engagement; $195K at risk, contract renewal in 4 months. Every 30-day delay reduces intervention success probability by approximately 8%."
Prototype dashboard · real Gainsight metrics · anonymized client names throughout
Top early signals at 12+ months · 89 terminated clients
System utilization ↓ · 92%
Key user unreplaced · 87%
Support tickets +30% · 84%
First late payment · 79%
Missed QBR · 76%
Correlation with future churn at 12+ months pre-termination
Six client archetypes · The same metric tells six different stories

Traditional risk scoring looks at one metric at a time. AAP looks at the pattern the metrics form together.

"Two clients both have 50% of projects on hold. Traditional models flag both as high risk. AAP says one needs operational support, the other is a partnership opportunity. Same number, completely different story."

Client A · "Stable Underperformer"
Risk Score: 35 · LOW RISK
Projects on hold · 45%
AR over 90 days · 2.1%
SCR cases · 1
Strategic events · 6
Trend · Stable 8 mo.
High hold rate but strong financials and high engagement — operational capacity issue, not relationship risk. Needs capacity planning, not crisis management.
Client B · "Payment Paradox"
Risk Score: 72 · HIGH RISK
Projects on hold · 15%
AR over 90 days · 28%
Project delay days · 180
Strategic events · 0
Trend · Rapid 4 mo.
Low project hold rate masks serious deterioration — few projects because they're avoiding new ones. Payment + disengagement = executive relationship repair needed.
Client C · "Growing Pains"
Risk Score: 48 · MEDIUM RISK
Projects on hold · 65%
AR over 90 days · 8%
SCR cases · 7
Strategic events · 4
Trend · Engagement ↑
High hold and support volume but strong engagement and payment — growth challenges, not dissatisfaction. Enhanced training and dedicated support, not retention crisis.
Client D · "Silent Departure"
Risk Score: 81 · CRITICAL
Projects on hold · 35%
AR over 90 days · 12%
Project delay days · 90
Strategic events · 1
Trend · All ↓ 6 months
No single critical metric — but every metric declining simultaneously for 6 months with zero offsetting positives. Systematic disengagement. Immediate executive intervention.
Client E · "Volatile Performer"
Risk Score: 59 · MED-HIGH RISK
Projects on hold · 25%
AR over 90 days · 45%
SCR cases · 9
Project delay days · 15
Trend · Volatile
Excellent project execution but extreme payment volatility and high support volume — financial instability risk. Financial stability assessment, not relationship repair.
Client F · "Relationship Champion"
Risk Score: 23 · LOW RISK
Projects on hold · 55%
AR over 90 days · 18%
Project delay days · 120
Strategic events · 8
Trend · Engagement ↑↑
Poor operational metrics offset by exceptional engagement and improving relationship trend. Operational excellence program and partnership development — not churn risk at all.
High-risk metric combinations · churn correlation
Payment issues + low engagement · 85%
Declining trends + support avoidance · 82%
Project delays + zero events · 78%
Low-risk metric combinations · churn correlation
High hold + high engagement · 35%
Operational issues + strong relationship · 28%
Stable metrics + high engagement · 12%
How the risk score is calculated · Client D example
Base metric points
Projects on hold (35%) +15
AR over 90 days (12%) +20
Project delay (90 days) +25
SCR cases (2) +5
Strategic events (1) +10
Base score: 75
Correlation adjustments
All metrics declining simultaneously +15
Low support + operational issues +10
Payment + disengagement pattern +8
No offsetting positive indicators +0
Correlation adds: +33 → Final: 81 (Critical)
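The two-stage arithmetic above can be sketched in a few lines of Python. One caveat: the copy reports a final score of 81 for Client D while the raw components sum to 108, and it does not specify how raw points map onto the published 0–100 scale. The cap used here is therefore an assumption, and the field names are illustrative.

```python
def risk_score(base_points: dict, correlation_adjustments: dict, cap: int = 100) -> int:
    """Stage 1 sums per-metric base points; stage 2 adds cross-metric
    correlation adjustments. The mapping from this raw total onto the
    published 0-100 scale is not specified, so we simply sum and cap."""
    return min(sum(base_points.values()) + sum(correlation_adjustments.values()), cap)

# Client D, transcribed from the worked example above
CLIENT_D_BASE = {
    "projects_on_hold": 15,   # 35% on hold
    "ar_over_90": 20,         # 12% of ARR
    "project_delay": 25,      # 90 days
    "scr_cases": 5,           # 2 open cases
    "strategic_events": 10,   # 1 event
}                             # base score: 75
CLIENT_D_CORR = {
    "all_metrics_declining_simultaneously": 15,
    "low_support_plus_operational_issues": 10,
    "payment_plus_disengagement_pattern": 8,
    "no_offsetting_positive_indicators": 0,
}                             # correlation adds: +33
```

The structural point survives the ambiguity: correlation adjustments are a separate, additive stage on top of per-metric points, which is what lets two clients with identical base metrics land in different tiers.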
Automated rules engine · Seven trigger conditions · Priority ordered

The model doesn't just score risk. It fires a specific response protocol based on what kind of risk it is.

Seven rules with specific AND-condition logic determine which response protocol fires and who gets notified. Rules are priority-ordered so the most critical protocol overrides others when multiple conditions overlap. The trigger conditions are precise enough that teams know exactly why they received an alert, not just that a score crossed a threshold.

P1 · Emergency Response Protocol
24h · Exec + Legal
Risk_Score ≥ 80
AND System_Utilization < 60%
AND Contract_Months_Remaining ≤ 6
→ Regional VP notified · 24h response · Crisis team assembled
P2 · Executive Escalation
1h · C-Suite + Legal
Risk_Score ≥ 80
AND AR_Over_90_Days_Pct > 20%
AND Contract_Months_Remaining ≤ 6
→ C-Suite notification within 1h · Contract renegotiation preparation
P3 · Technical Crisis Response
72h · Tech lead
Projects_On_Hold_Pct > 35%
AND Support_Tickets > 20
AND System_Utilization < 70%
→ Solutions architect deployed · Priority issue resolution
P4 · High Risk Intervention
1 week · AM + CS
Risk_Score ≥ 70
AND Support_Tickets > 20
AND Days_Since_Last_Purchase > 90
→ Account Manager immediate notification · Training assessment
P7 · Strategic Partnership (Growth)
2 weeks · Growth
Risk_Score < 35
AND System_Utilization > 90%
AND Support_Tickets < 5
→ Growth specialist · Expansion analysis · Reference program invite
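The priority-ordered, first-match-wins routing described above can be sketched as a list of predicate rules. The AND conditions are transcribed from P1–P4 and P7; the dictionary keys and protocol strings are illustrative, not the production schema.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    priority: int
    name: str
    condition: Callable[[dict], bool]  # AND-logic over client metrics
    protocol: str

# Conditions transcribed from the rule cards above (P5 and P6 are not shown in the copy).
RULES = sorted([
    Rule(1, "Emergency Response",
         lambda c: c["risk_score"] >= 80 and c["system_utilization"] < 60
                   and c["contract_months_remaining"] <= 6,
         "Regional VP · 24h response · crisis team"),
    Rule(2, "Executive Escalation",
         lambda c: c["risk_score"] >= 80 and c["ar_over_90_pct"] > 20
                   and c["contract_months_remaining"] <= 6,
         "C-Suite within 1h · contract renegotiation prep"),
    Rule(3, "Technical Crisis",
         lambda c: c["projects_on_hold_pct"] > 35 and c["support_tickets"] > 20
                   and c["system_utilization"] < 70,
         "Solutions architect deployed"),
    Rule(4, "High Risk Intervention",
         lambda c: c["risk_score"] >= 70 and c["support_tickets"] > 20
                   and c["days_since_last_purchase"] > 90,
         "Account Manager notification · training assessment"),
    Rule(7, "Strategic Partnership",
         lambda c: c["risk_score"] < 35 and c["system_utilization"] > 90
                   and c["support_tickets"] < 5,
         "Growth specialist · expansion analysis"),
], key=lambda r: r.priority)

def route(client: dict) -> Optional[str]:
    """First matching rule wins; priority order resolves overlapping conditions."""
    for rule in RULES:
        if rule.condition(client):
            return rule.protocol
    return None
```

A client matching both P1 and P2 (score ≥80, low utilization, high AR, renewal inside 6 months) routes to the Emergency Response protocol because priority ordering, not score magnitude, breaks the tie.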
Response tiers · Team and timeline
80+ · Exec leadership + Legal · CEO-to-CEO within 24h · All resources authorized
65+ · Regional VP + Crisis team · Senior solutions architect · Emergency support
40+ · Account Manager · Health check within 48h · CS review · Bi-weekly monitoring
<35 · Growth specialist · Expansion opportunity analysis · Reference and advisory invite
Sample action plan · Valley Care Services · Warning tier · Risk 52
Valley Care Services · Passive → WARNING
AR ↑11.2% since Q1 · Hold ↑38% · Events ↓2 · Consistent quarterly decline
87% success
Account Manager — Week 1
Emergency health check call within 48 hours
Review all open projects and identify hold reasons
Analyze payment pattern changes since Q1
Confirm primary stakeholder hasn't changed roles
Customer Success — Week 1
Pull usage analytics by department for last 90 days
Review support ticket history for recurring patterns
Assess internal champion status and engagement level
Success criteria · 12 weeks
Hold <25% · AR <8% · Events 4+ · Risk <35
Escalation triggers
Hold >50% · AR >20% · Zero events 60 days · Competitive evaluation mention
Intervention timing vs. success rate
12+ months early · 94%
6–12 months early · 87%
3–6 months early · 73%
Crisis (<3 months) · 45%
Every month earlier ≈ 8% higher success rate
Continuous learning · Model accuracy 78% → 94% over 21 months

Every intervention outcome fed back into the model. The system got smarter with each case.

After each intervention, the team documented what worked, what the model got wrong, and what signals hadn't been captured. Those findings were structured and fed back into the algorithm — adjusting weights, adding missing signals, and refining segment-specific behavior. False positive rate dropped 35%, and early detection improved by 8%, meaning issues caught two weeks sooner on average.
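A minimal sketch of the structured finding each intervention produces, assuming a schema along these lines (field names are illustrative, not the production format):

```python
from dataclasses import dataclass, field

@dataclass
class InterventionOutcome:
    """One documented intervention, fed back into the algorithm."""
    client_id: str
    predicted_churn_prob: float          # model's prediction at alert time
    churned: bool                        # actual outcome
    what_worked: list = field(default_factory=list)
    model_errors: list = field(default_factory=list)
    missing_signals: list = field(default_factory=list)

    def was_false_positive(self) -> bool:
        """High predicted churn that did not materialize."""
        return self.predicted_churn_prob >= 0.5 and not self.churned

# Example using the Metro Transitional Care case documented in the steps below
metro = InterventionOutcome(
    client_id="metro-transitional-care",
    predicted_churn_prob=0.96,
    churned=False,
    missing_signals=["IT department understaffing",
                     "Medicare reimbursement delays",
                     "competitive pricing pressure"],
)
# metro.was_false_positive() -> True; this record drives a weight adjustment
```

Structuring outcomes this way is what makes the feedback loop mechanical rather than anecdotal: every weight change in step 5 traces back to a record like this one.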

1 · Alert generated with narrative

Claude synthesizes Gainsight data into a risk narrative including churn probability, revenue at risk, time to likely termination, and a routing recommendation. Metro Transitional Care: risk 86.3, 96% churn probability, Emergency Response Protocol triggered, $195K at risk, 4 months to renewal.
2 · Crisis team executes protocol

8-week protocol deployed: daily CEO calls, dedicated solutions architect, contract amendment, workflow training, payment plan discussion. Intervention cost: $28,400. Every step tracked against the prescribed action plan.
3 · Outcome measured against prediction

Client retained. Risk score reduced from 86.3 to 42.1. Contract extended 18 months. NPS improved from -15 to +35. ROI: 6.9:1. Expansion discussions opened ($45K potential additional revenue).
4 · Model inaccuracies documented

Model predicted 96% churn probability — client didn't churn. Post-intervention analysis revealed LTACH facilities with actively engaged CEO leadership are significantly more recoverable than the initial training data suggested. Three missing signals also identified: IT department understaffing, Medicare reimbursement delays, competitive pricing pressure.
5 · Algorithm updated and validated

LTACH risk penalty adjusted from +15 to +8 points. CEO engagement factor added at 10% weight. Regulatory audit pressure indicator added at 5% weight. Medicare delay factor in testing at 3%. Model accuracy improved 94% → 96.1%. False positives reduced 35%. A/B testing validated improvements before deployment.
Model accuracy · Jan 2024 → Sep 2025
Jan 2024 — Initial model · 78%
May 2024 — First feedback integration · 87%
Sep 2024 — Quarterly trending added · 91%
Jan 2025 — Seasonal calibration · 94%
Sep 2025 — Facility-type refinements · 96%
Accuracy by client segment
Rehab centers · 98%
Skilled nursing · 96%
Behavioral health · 91%
LTACH (updated) · 94%
LTACH improved 87% → 94% after CEO engagement factor added
Seasonal threshold calibration
Q1 budget planning +8.4%
Q2 staff turnover +12.1%
Q3 summer coverage +7.8%
Q4 renewal pressure +18.7%
Risk thresholds auto-adjust per quarter using 3-year rolling average baseline
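The quarterly calibration can be sketched as an uplift applied over a rolling baseline. The uplift percentages are from the table above; the baseline computation is an assumption inferred from the stated 3-year rolling average, and the function name is illustrative.

```python
from statistics import mean

# Quarterly uplifts transcribed from the seasonal calibration table above
SEASONAL_UPLIFT = {"Q1": 0.084, "Q2": 0.121, "Q3": 0.078, "Q4": 0.187}

def seasonal_threshold(metric_history: list, quarter: str) -> float:
    """Raise the alerting threshold by the quarter's expected uplift
    over the rolling baseline, so seasonal stress (e.g. Q4 renewal
    pressure) doesn't generate false alarms."""
    baseline = mean(metric_history[-12:])  # last 12 quarters = 3-year rolling average
    return baseline * (1 + SEASONAL_UPLIFT[quarter])
```

The effect is that a metric reading that would be Red in Q1 may stay Yellow in Q4, because the Q4 baseline carries an expected +18.7% of renewal-season stress.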
Why this matters

This wasn't assigned. It was a product instinct acted on independently, with evidence before the ask.

AAP started from an observation: the data to predict churn already existed in Gainsight across four years of client records, and nobody had connected the pattern. I identified the correlation signal, framed the business case, chose Claude Sonnet and AWS Bedrock as the inference layer, built the working prototype independently, and presented to leadership with historical validation showing three accounts the model correctly flagged that had since terminated. Executive approval received. Roadmap approved.


Pattern recognition

Identified a correlation signal across four years of Gainsight data that no one had connected — the difference between a threshold crossing and a multi-dimensional simultaneous trend.

Hands-on execution

Built the working prototype with Claude Sonnet 3.5 and AWS Bedrock independently — not delegated to engineering, not a design mock, a model running on real Gainsight and Salesforce data.

Evidence before ask

Validated on historical data showing 3 accounts correctly flagged that had since terminated. That was the presentation. Executive approval and roadmap inclusion followed.
Explore more prototypes

See the three interactive healthcare AI prototypes

CarePathIQ, ContinuIQ, and ShiftIQ — each solving a distinct failure point in post-acute care, built with Claude and AWS Bedrock.