
Voice AI for Healthcare: What Actually Works (And What Doesn't)

HIPAA compliance, appointment scheduling, patient intake — the real constraints of building voice AI for healthcare. Lessons from 3 deployments.

Chris Mott
Founder, ResultantAI
Dec 18, 2025 · 15 min read
⚡ TL;DR
  • Works great: Appointment scheduling, prescription refill requests, general inquiries
  • Works with caveats: Patient intake, insurance verification, symptom triage
  • Doesn't work (yet): Clinical advice, diagnosis support, emergency routing
  • HIPAA reality: Most voice AI platforms aren't compliant out of the box — you need BAAs
  • ROI sweet spot: 40-60% of front desk calls can be automated

I've deployed voice AI systems for three healthcare organizations in the past 18 months: a multi-location dental practice, a specialty clinic network, and a mental health provider group. Each project taught me something different about what works — and what absolutely doesn't — when AI answers the phone for healthcare.

This is the honest breakdown.

The HIPAA Reality Check

Let's get this out of the way first: most voice AI platforms are not HIPAA-compliant by default.

HIPAA (Health Insurance Portability and Accountability Act) requires any technology handling Protected Health Information (PHI) to meet specific security and privacy standards. PHI includes names, addresses, dates of birth, medical records, and — crucially — voice recordings of patients discussing their health.

⚠️ Critical: You Need a BAA

Before deploying ANY voice AI in healthcare, you need a Business Associate Agreement (BAA) with your AI provider. This is a legal contract where the vendor agrees to HIPAA compliance. No BAA = no deployment. Period.

Which Platforms Sign BAAs?

| Platform | BAA Available | HIPAA Hosting | Notes |
| --- | --- | --- | --- |
| Retell AI | ✓ Yes | ✓ Yes | Enterprise plan required |
| Vapi | ✓ Yes | ✓ Yes | HIPAA tier available |
| Bland.ai | ⚠ Limited | ⚠ Limited | Contact sales for BAA |
| OpenAI (direct) | ✓ Yes | ✓ Yes | Enterprise API only |
| Most no-code tools | ✗ No | ✗ No | Check before using |
The dental practice I worked with initially wanted to use a cheaper platform without a BAA. Their compliance officer shut that down immediately — and rightfully so. A single HIPAA violation can cost $100-$50,000 per incident, with annual maximums of $1.5M per violation category.

What Actually Works

📅
Appointment Scheduling
Checking availability, booking new appointments, confirming existing appointments, rescheduling. This is the sweet spot — high volume, repetitive, low clinical risk.
💊
Prescription Refill Requests
Taking refill requests and routing to pharmacy. AI collects medication name, pharmacy preference, and patient verification — then hands off to staff for approval.
ℹ️
General Inquiries
Office hours, location, parking, accepted insurance plans, what to bring to appointments. Information that's already on your website, delivered by voice.
🔔
Appointment Reminders
Outbound calls confirming upcoming appointments, collecting confirmation or rescheduling on the spot. Reduces no-show rates by 20-30%.
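The refill workflow above — collect structured details, then hand off to staff for approval — can be sketched as a simple data contract. This is a hypothetical illustration, not the actual system from these deployments; all field and function names are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class RefillRequest:
    """Structured refill request the AI collects before handing off to staff."""
    patient_name: str
    dob: str                 # used only for patient verification
    medication: str
    pharmacy: str

def to_staff_queue(req: RefillRequest) -> dict:
    """Package the request for the staff approval queue. The AI's job ends
    here -- it never authorizes a refill on its own."""
    return {
        "type": "refill_request",
        "patient": req.patient_name,
        "medication": req.medication,
        "pharmacy": req.pharmacy,
        "needs_staff_approval": True,
    }
```

The key design choice is in the last field: every refill payload is marked for staff approval, so the AI is a data collector, never a decision-maker.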

Case Study: Dental Practice Scheduling

A 4-location dental practice was drowning in phone calls. Two front desk staff spent 60% of their time on the phone — mostly scheduling and rescheduling appointments.

We deployed a voice AI that checked real-time availability, booked new appointments, and handled confirmations and rescheduling, all synced directly with the practice's calendar.

After 90 days, the practice manager summed up the results:

"The AI actually schedules faster than we do. It doesn't have to put anyone on hold to check the calendar."
Practice Manager, Multi-location Dental Group

What Works With Caveats

📋
Patient Intake
Collecting demographic info, insurance details, reason for visit. Works for straightforward intake — but complex medical histories should go to staff.
🏥
Insurance Verification
Collecting insurance info and checking basic eligibility. But actual benefits verification usually requires staff follow-up with payers.
🩺
Basic Symptom Collection
"What brings you in today?" followed by structured questions. Useful for pre-visit prep, but must route to clinical staff for any advice.

The Handoff Problem

The biggest challenge with "works with caveats" use cases is knowing when to transfer to a human. The AI needs clear rules:

Automatic Transfer Triggers (Build These In)
Any mention of: chest pain, difficulty breathing, severe bleeding, suicidal thoughts
Patient asks for medical advice ("Should I go to the ER?")
Patient expresses frustration or asks for a human
Complex insurance questions (prior auth, appeals, coverage disputes)
Anything involving minors that requires parental consent verification
Patient mentions they're calling about test results

What Doesn't Work (Yet)

⚕️
Clinical Advice
"Is this rash serious?" — AI cannot and should not provide clinical guidance. Legal liability is enormous, and it's outside the AI's competence.
🚨
Emergency Triage
Determining if someone needs emergency care is too high-stakes for current AI. Miss one real emergency and the liability is catastrophic.
📊
Test Result Delivery
Patients want to discuss results with clinicians, not AI. Even "normal" results often need context that AI can't provide appropriately.
💬
Mental Health Crisis Support
Crisis situations require human judgment and real-time clinical assessment. AI should immediately transfer, not attempt to handle.
🎯 The Bright Line Rule

If the conversation could influence a clinical decision or patient safety, transfer to a human. AI handles administrative tasks. Humans handle clinical judgment. No exceptions.

Implementation Lessons

1. Start Narrow, Expand Slowly

The mental health provider group wanted to automate everything on day one. We pushed back hard. Started with appointment scheduling only. After 60 days of clean operation, added prescription refill requests. After another 60 days, added intake form collection.

This phased approach let us catch edge cases before they became problems. By month 4, the system was handling 52% of calls — but we got there safely.
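The phased rollout can be enforced in configuration rather than trust. A sketch, with hypothetical dates and capability names mirroring the 60-day cadence described above:

```python
from datetime import date

# Hypothetical go-live schedule -- each capability unlocks only after the
# previous phase has run cleanly for roughly 60 days.
ROLLOUT = [
    (date(2025, 1, 1), "appointment_scheduling"),
    (date(2025, 3, 1), "prescription_refills"),
    (date(2025, 5, 1), "intake_form_collection"),
]

def enabled_capabilities(today: date) -> set[str]:
    """Only capabilities whose go-live date has passed are active; anything
    else falls through to a human."""
    return {cap for go_live, cap in ROLLOUT if today >= go_live}
```

Gating capabilities by date in config means expanding scope is a deliberate change, not something the AI drifts into.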

2. Over-Engineer the Transfers

When AI transfers to staff, the handoff needs to be seamless. The worst patient experience is repeating everything they just told the AI to a human.

Our transfer protocol: the AI passes along everything it has already collected (caller name, reason for calling, any details gathered so far) so the staff member picks up with full context and the patient never has to start over.
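A seamless handoff comes down to a context payload the staff member can read at a glance. This is a hypothetical sketch — the field names are invented for illustration:

```python
def build_handoff(call_state: dict) -> str:
    """Render a one-screen summary for the staff member picking up the
    transfer, so the patient never repeats what they told the AI."""
    lines = [
        f"Caller: {call_state.get('caller_name', 'unknown')}",
        f"Reason: {call_state.get('reason', 'not stated')}",
        f"Collected so far: {', '.join(call_state.get('collected', [])) or 'nothing yet'}",
        f"Transfer trigger: {call_state.get('trigger', 'patient requested human')}",
    ]
    return "\n".join(lines)
```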

3. Train Staff on AI Limitations

Front desk staff need to understand what the AI can and can't do. Otherwise, they'll either over-rely on it (transferring things it can handle) or under-rely (taking calls they could let AI handle).

We run 2-hour training sessions covering what the AI handles on its own, which situations trigger a transfer, and how to pick up a handed-off call without making the patient start over.

4. Monitor Everything

Healthcare AI needs more monitoring than other industries. We track transfer rates, calls the AI couldn't complete, and transcripts of every call that touched a clinical trigger.
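A minimal monitoring harness can be this simple. The outcome categories here are illustrative, not the actual metrics schema from these deployments:

```python
from collections import Counter

class CallMonitor:
    """Minimal monitoring sketch: counts call outcomes and flags any call
    that hit a safety trigger for human transcript review."""
    def __init__(self) -> None:
        self.outcomes: Counter = Counter()
        self.flagged_for_review: list[str] = []

    def record(self, call_id: str, outcome: str, hit_safety_trigger: bool = False) -> None:
        self.outcomes[outcome] += 1
        if hit_safety_trigger:
            # Every trigger hit gets a human transcript review, no sampling.
            self.flagged_for_review.append(call_id)

    def transfer_rate(self) -> float:
        total = sum(self.outcomes.values())
        return self.outcomes["transferred"] / total if total else 0.0
```

A rising transfer rate is an early warning that the AI is fielding calls outside its scope; a falling one after a scope change means the expansion is holding.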

The ROI Case for Healthcare

Healthcare practices typically see 40-60% of calls handled by AI after full deployment. For a typical specialty practice, automating roughly half of inbound calls frees about 54 hours of staff time.

The ROI isn't just about cost savings — it's about capacity. Those 54 hours can go toward patient care, insurance follow-ups, or reducing staff burnout.
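A back-of-envelope version of that math — every input number below is a hypothetical assumption for illustration, not data from the deployments described in this post:

```python
# Assumed inputs for a mid-sized specialty practice (hypothetical).
calls_per_month = 1800
avg_minutes_per_call = 4
automation_rate = 0.45          # within the 40-60% range cited above
hourly_cost = 22.0              # assumed loaded front-desk hourly cost

automated_calls = calls_per_month * automation_rate
hours_freed = automated_calls * avg_minutes_per_call / 60
monthly_value = hours_freed * hourly_cost

print(f"{hours_freed:.0f} staff hours freed, worth ${monthly_value:,.0f}/month")
# → 54 staff hours freed, worth $1,188/month
```

Plug in your own call volume and hourly cost; the structure of the calculation is the point, not these particular numbers.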

What's Coming Next

Healthcare voice AI is improving fast, and capabilities will keep expanding over the next 12-18 months.

But the fundamentals won't change: AI handles administrative, humans handle clinical. That's the bright line that protects patients — and protects you.

Considering Voice AI for Your Practice?

I help healthcare organizations implement voice AI safely. HIPAA-compliant, properly integrated, with the right guardrails. Let's talk about what makes sense for your situation.

Book a Strategy Call →
Chris Mott
Founder, ResultantAI

Chris builds AI systems for service businesses, including healthcare practices navigating the complexity of HIPAA-compliant automation. He's deployed voice AI across dental, specialty care, and mental health settings.