Three months ago, our legal team forwarded me an email from a vendor offering "AI-powered clinical documentation improvement." The product looked promising. The demo impressed our physicians. Then I asked for their Business Associate Agreement.

The response: "Our AI tool doesn't actually access PHI, so a BAA isn't necessary."

This conversation is happening in conference rooms across healthcare right now. AI vendors are moving fast, compliance teams are scrambling to catch up, and the answer to whether you need an AI vendor BAA isn't always clear-cut. After reviewing dozens of these tools and working through the nuances with legal counsel, I can tell you the answer depends on factors most organizations aren't even asking about.

The Basic Rule: If PHI Touches the System, You Need a BAA

Let's start with what HIPAA actually requires. Under the Privacy Rule and Security Rule, if a vendor creates, receives, maintains, or transmits protected health information on your behalf, they're a business associate. Period. You need a signed Business Associate Agreement before any PHI flows to their systems.

This isn't new territory. We've been signing BAAs with billing companies, transcription services, and cloud storage providers for years. AI vendors should be no different.

The problem is that AI tools often operate in ways that make the PHI flow less obvious. A traditional electronic health record system clearly stores patient data. An AI tool that "analyzes clinical notes to suggest billing codes" might argue it only sees de-identified summaries. A chatbot that "helps patients schedule appointments" might claim it doesn't retain conversation history.

In my experience, vendors who resist signing a BAA usually fall into one of three categories: they genuinely don't understand HIPAA requirements, they've architected their system to avoid becoming a business associate, or they're unwilling to accept the liability that comes with BAA obligations.

Your job as a covered entity is to determine which category applies, because the wrong assessment creates risk you own.

When AI Vendors Genuinely Don't Need a BAA

There are legitimate scenarios where an AI vendor doesn't need to sign a BAA. These aren't common, but they exist.

The Tool Processes Data Entirely Within Your Environment

Some AI models can be deployed on-premises or within your own cloud infrastructure. If the vendor provides software that runs entirely within your network perimeter and never transmits data back to their systems, they may not be a business associate under HIPAA.

I've seen this with certain natural language processing tools that install as containerized applications. The model runs locally, processes your clinical notes, generates insights, and the vendor never sees your data. In these cases, the vendor is more like a software licensor than a business associate.

But even here, read the fine print. Does the software phone home with telemetry? Does it upload error logs? Does it send anonymized usage statistics? If any of those data flows could include PHI, you're back to needing a BAA.
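If you want to pressure-test a "nothing leaves your network" claim, it helps to look at what the deployed software actually connects to. Here's a rough sketch of the kind of spot check I mean, using Python's psutil library; the process name is hypothetical, and a real review would rely on network-level egress logging rather than a one-off snapshot like this.

```python
# Sketch: spot-check whether a locally deployed AI tool opens connections
# outside the network perimeter. "clinical-nlp" is a hypothetical process
# name for the vendor's software. Requires: pip install psutil
import ipaddress
import psutil

PROCESS_NAME = "clinical-nlp"  # hypothetical vendor process name

def external_connections(process_name: str):
    """Yield (pid, remote_ip, remote_port) for established connections to non-private hosts."""
    for conn in psutil.net_connections(kind="inet"):
        if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr or conn.pid is None:
            continue
        try:
            if psutil.Process(conn.pid).name() != process_name:
                continue
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue
        remote = ipaddress.ip_address(conn.raddr.ip)
        if not (remote.is_private or remote.is_loopback):
            yield conn.pid, conn.raddr.ip, conn.raddr.port

if __name__ == "__main__":
    findings = list(external_connections(PROCESS_NAME))
    for pid, ip, port in findings:
        print(f"PID {pid} -> {ip}:{port}: external traffic, ask the vendor what this is")
    if not findings:
        print("No external connections seen in this snapshot (a spot check, not proof).")
```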

You're Only Using Publicly Available Information

If you're using an AI tool to analyze published research papers or aggregate publicly available health statistics, and you're not inputting any patient-specific data, no BAA is required. This is straightforward but also limited in utility for most clinical applications.

The Data Is Truly De-Identified

HIPAA's de-identification standards under 45 CFR § 164.514 give you two routes: the Safe Harbor method, which strips all 18 specified identifiers, and expert determination, in which a qualified expert certifies that the risk of re-identification is very small. If you properly de-identify data through either route before sending it to an AI vendor, the data is no longer PHI and the vendor doesn't need a BAA.

The catch is that true de-identification is harder than most people think. I've reviewed vendor proposals that claim their "anonymization process" strips identifying information, but HIPAA's standard is strict. Dates more specific than a year, geographic subdivisions smaller than a state, all ages over 89: these and the rest of the eighteen identifier categories have to go. Most AI tools need rich contextual data to function well, and that richness often includes identifiers.

If you're relying on de-identification to avoid a BAA requirement, document your process thoroughly and consider getting an expert determination. The pattern I see is organizations assuming their data is de-identified when it actually meets the definition of a limited data set, which still requires a data use agreement with similar protections to a BAA.
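To make the Safe Harbor point concrete, here's a deliberately naive sketch of what stripping a few identifier categories from free text looks like. The patterns and the sample note are illustrative only; a real de-identification pipeline has to cover all eighteen categories, handle free-text names and locations, and ideally be backed by an expert determination.

```python
# Sketch: naive Safe Harbor-style scrubbing of a clinical note.
# Illustrative only -- real de-identification covers all 18 identifier
# categories, free-text review, and ideally an expert determination.
import re

PATTERNS = {
    "date": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),   # dates more specific than a year
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),   # telephone numbers
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # Social Security numbers
    "zip": re.compile(r"\b\d{5}(?:-\d{4})?\b"),             # ZIP codes (geographic subdivision)
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),    # email addresses
}

def scrub(note: str) -> str:
    """Replace matched identifiers with bracketed category tags."""
    for category, pattern in PATTERNS.items():
        note = pattern.sub(f"[{category.upper()} REMOVED]", note)
    return note

if __name__ == "__main__":
    sample = "Seen 03/14/2024, lives in 90210, call 555-867-5309 re: follow-up."
    print(scrub(sample))
    # Seen [DATE REMOVED], lives in [ZIP REMOVED], call [PHONE REMOVED] re: follow-up.
```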


The Gray Zone: AI Tools That Claim Technical Separation

This is where most of the confusion lives. Vendors will tell you they've architected their system so they "never see PHI," but the technical details reveal something more complicated.

One vendor told me their AI chatbot for patient engagement doesn't store conversations. Technically true—they didn't retain the full text. But their system logged patient identifiers, timestamps, and conversation metadata to improve the model. That metadata, linked to specific individuals, is still PHI.

Another vendor insisted their clinical decision support tool only received "de-identified case summaries" from our EHR integration. When I asked to see the API payload, it included admission dates, detailed diagnostic codes, and lab result timestamps. A motivated person with access to their database could absolutely re-identify patients.
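That payload review doesn't require special tooling. Here's a small sketch of the kind of script I'd use to flag suspicious fields in a captured integration payload; the field names and sample data are hypothetical, not from any particular vendor.

```python
# Sketch: scan a captured API payload for fields that look like identifiers
# or dates. Field names and the sample payload are hypothetical.
import json
import re

SUSPECT_KEYS = {"mrn", "ssn", "dob", "name", "admission_date", "discharge_date", "zip"}
DATE_LIKE = re.compile(r"\d{4}-\d{2}-\d{2}")  # ISO-style dates and timestamps

def flag_fields(obj, path=""):
    """Recursively yield (path, value) pairs worth a human look."""
    if isinstance(obj, dict):
        for key, value in obj.items():
            here = f"{path}.{key}" if path else key
            if key.lower() in SUSPECT_KEYS:
                yield here, value
            else:
                yield from flag_fields(value, here)
    elif isinstance(obj, list):
        for i, value in enumerate(obj):
            yield from flag_fields(value, f"{path}[{i}]")
    elif isinstance(obj, str) and DATE_LIKE.search(obj):
        yield path, obj

if __name__ == "__main__":
    payload = json.loads(
        '{"case_summary": "chest pain", "admission_date": "2024-03-14", '
        '"labs": [{"code": "TROP", "collected": "2024-03-14T06:12:00"}]}'
    )
    for path, value in flag_fields(payload):
        print(f"{path}: {value!r} -- is this field really de-identified?")
```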

The technical architectures I see vendors using to claim they're not business associates tend to follow a few patterns: ephemeral processing that discards the full input but still logs identifiers, timestamps, and metadata for model improvement; "de-identified" integration payloads that still carry dates, granular codes, and other re-identifiable details; and local or on-device processing paired with cloud telemetry, error logs, or usage analytics that can sweep PHI along with them.

When a vendor describes technical measures that supposedly eliminate the need for an AI vendor BAA, ask for architecture diagrams. Review data flow documentation. Talk to your privacy officer and legal counsel. And remember: the burden of demonstrating that PHI isn't being accessed rests with your organization, not with the vendor's assurances.

Need Help Assessing AI Vendor Risk?

Carl delivers keynotes on HIPAA compliance, AI governance, and privacy risk for healthcare organizations navigating emerging technology. His talks cut through vendor marketing to focus on practical risk assessment frameworks.

Book Carl to Speak

What to Do When a Vendor Refuses to Sign a BAA

You've done your analysis. PHI is clearly flowing to the vendor's systems. You need a BAA. The vendor says no.

This happens more than it should. Sometimes it's because the vendor doesn't want to accept HIPAA's breach notification requirements. Sometimes they're worried about liability exposure. Sometimes they're a general-purpose AI platform that serves multiple industries and doesn't want to be bound by healthcare-specific obligations.

You have three options, and none of them involve using the tool without proper safeguards.

Option One: Walk Away

This is the right answer more often than people want to hear it. If a vendor won't sign a BAA when one is clearly required, they're telling you they don't want to be held accountable for protecting your patients' data. That's a risk signal.

I've had conversations where business stakeholders push back hard on this recommendation. The AI tool promises significant efficiency gains or improved clinical outcomes. Walking away feels like leaving value on the table.

But consider what you're accepting: you're allowing patient data to flow to a vendor who explicitly refuses to commit to HIPAA's safeguard requirements, breach notification obligations, and individual rights provisions. When—not if—OCR audits your business associate relationships or investigates a breach, "the vendor seemed trustworthy" is not a defense.

Option Two: Redesign the Integration

Sometimes you can restructure how the AI tool connects to your systems in a way that eliminates PHI exposure. This requires technical work and might reduce the tool's functionality, but it can be viable.

I've seen organizations implement middleware layers that de-identify data in transit, switch from cloud-based to on-premises deployment models, or change workflows so that AI-generated insights are produced without patient identifiers and matched back to records only within the covered entity's systems.
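One version of that last pattern is a thin middleware layer that swaps record identifiers for opaque tokens before anything leaves your environment, then re-links the vendor's output inside your own systems. A minimal sketch follows; the token map is an in-memory stand-in for a secured datastore, call_vendor is a hypothetical placeholder for the real integration, and the note text itself would still need scrubbing before it goes out.

```python
# Sketch: tokenize-before-send / re-link-after-receive middleware pattern.
# The token map stays inside the covered entity's environment; only tokens
# and scrubbed text cross to the vendor. call_vendor is a hypothetical stand-in.
import uuid

token_map: dict[str, str] = {}  # token -> MRN; in production, a secured datastore

def tokenize(mrn: str) -> str:
    """Mint an opaque token for a record identifier and remember the mapping locally."""
    token = uuid.uuid4().hex
    token_map[token] = mrn
    return token

def call_vendor(token: str, note_text: str) -> dict:
    """Hypothetical vendor call; returns an AI-generated insight keyed by our token."""
    return {"token": token, "suggested_codes": ["I21.4"]}  # placeholder response

def process_note(mrn: str, note_text: str) -> tuple[str, list[str]]:
    """Send tokenized, de-identified text out; match the insight back to the record internally."""
    token = tokenize(mrn)
    result = call_vendor(token, note_text)  # nothing identifying travels with the token
    return token_map[result["token"]], result["suggested_codes"]

if __name__ == "__main__":
    mrn, codes = process_note("MRN-0042", "Substernal chest pain, troponin elevated.")
    print(mrn, codes)  # re-linked only inside the covered entity's systems
```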

This approach works best when you have strong technical resources and the vendor is willing to work with you on custom integration patterns. It doesn't work when the vendor's entire business model depends on aggregating data across customers or when the AI requires identifiable longitudinal data to function.

Option Three: Find an Alternative Vendor

The AI healthcare market is crowded. If one vendor won't sign a BAA, competitors often will. Vendors who specialize in healthcare understand HIPAA requirements and build compliance into their business model from day one.

When I evaluate AI tools that might handle sensitive healthcare data, I actually use BAA willingness as a screening criterion. Vendors who immediately understand why you're asking and have a template ready demonstrate regulatory maturity. Vendors who push back or seem confused about the requirement raise red flags about their overall compliance posture.


What Regulators Are Actually Saying About AI and BAAs

OCR hasn't issued comprehensive AI-specific HIPAA guidance yet, but they've made their position clear through enforcement actions, guidance updates, and public statements.

In December 2022, OCR issued guidance on online tracking technologies clarifying that when covered entities use third-party tracking tools on patient portals or scheduling systems, those vendors may be business associates if they receive PHI. The guidance specifically noted that vendors can't avoid business associate status simply by claiming they "don't look at" the data they collect.

This matters for AI because the same logic applies. If your AI vendor's systems receive PHI—even if they claim automated processes handle it without human review—they're creating, receiving, or maintaining PHI on your behalf.

I've also watched OCR's breach reporting database. When covered entities report breaches involving AI or machine learning tools, OCR asks detailed questions about business associate agreements, risk analyses, and safeguard implementations. They're applying existing HIPAA requirements to new technology, not giving AI vendors special treatment.

The pattern is consistent: OCR expects covered entities to treat AI vendors the same way they treat any other vendor that handles PHI. The burden of compliance rests with the covered entity, and "we didn't think we needed a BAA" doesn't reduce penalties.

The Risk Analysis You Should Be Doing

HIPAA requires covered entities to conduct risk analyses under the Security Rule. For AI vendors, this means going beyond the BAA question to evaluate the actual security posture of the vendor's systems.

Even when a vendor signs a BAA, you need to verify they can meet the obligations they're agreeing to. I use a framework that examines how data is encrypted at rest and in transit, who can access customer data and under what controls, how long data is retained and how it's destroyed, whether subcontractors touch PHI and have their own agreements in place, how the vendor would detect and report a breach, and whether customer data is used to train or improve models.

I've found that vendors with mature regulatory compliance programs can answer these questions quickly and provide documentation. Vendors who struggle to answer or dismiss the questions as excessive are showing you something about their security culture.

Looking for a Keynote Speaker on AI Governance and Healthcare Compliance?

Carl's presentations help compliance, legal, and IT leaders understand emerging risks at the intersection of AI and regulated data. See all keynote speaking topics or reach out about your event.

Book Carl for Your Event

Building an AI Vendor Assessment Process

Rather than evaluating each AI vendor in isolation, build a repeatable process. This saves time and ensures consistent risk evaluation across your organization.

The process I've implemented starts with a questionnaire that goes out before technical evaluation begins. Key questions include whether PHI will flow to the vendor's systems in any form, where data is processed and stored, whether customer data is used to train or fine-tune models, how long data is retained and how deletion requests are handled, which subcontractors and cloud providers are involved, and whether the vendor will sign a BAA.

Vendors who can't answer these questions aren't ready for healthcare deployments. Vendors who provide thorough responses make it easier to conduct your required risk analysis.

After the questionnaire comes the technical review. This includes reviewing API documentation to understand exactly what data fields are transmitted, examining authentication flows, and testing data deletion requests to confirm PHI can be purged when required.
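Testing deletion doesn't have to be elaborate. Here's a hedged sketch of the kind of check I mean, using the requests library against hypothetical vendor endpoints; the URL paths, auth, and response behavior are assumptions to adapt to whatever the vendor actually documents.

```python
# Sketch: verify a vendor honors a deletion request. Endpoints, auth, and
# response shapes are hypothetical -- adapt to the vendor's documented API.
import requests

BASE_URL = "https://api.example-vendor.com/v1"    # hypothetical
HEADERS = {"Authorization": "Bearer <test-environment-token>"}
RECORD_ID = "test-record-123"                      # seeded test record, not real PHI

def verify_deletion(record_id: str) -> bool:
    """Request deletion, then confirm the record is no longer retrievable."""
    delete_resp = requests.delete(f"{BASE_URL}/records/{record_id}", headers=HEADERS, timeout=30)
    delete_resp.raise_for_status()

    get_resp = requests.get(f"{BASE_URL}/records/{record_id}", headers=HEADERS, timeout=30)
    return get_resp.status_code == 404  # gone, not just soft-hidden behind a 200

if __name__ == "__main__":
    if verify_deletion(RECORD_ID):
        print("Record not retrievable after deletion -- note this in the risk analysis.")
    else:
        print("Record still retrievable -- ask the vendor about retention and backups.")
```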

Finally, legal review ensures the BAA (or documentation justifying why one isn't needed) is sound. I loop in our privacy officer, legal counsel, and sometimes outside healthcare attorneys for this step. AI vendor contracts often include provisions that conflict with HIPAA obligations: unlimited license grants to use your data for model training, limitations on liability that don't align with realistic breach costs, and retention terms that keep data far longer than the services require.

The Conversation You Need to Have Internally

The hardest part of AI vendor assessment isn't the technical analysis. It's the internal conversation about risk tolerance.

Clinical leaders see AI tools that could reduce physician burnout, improve diagnostic accuracy, or streamline administrative workflows. They're impatient with compliance processes that slow adoption. I understand that frustration—I've sat in meetings where oncologists describe how an AI tool could help them spend more time with patients and less time on documentation.

But the compliance and security side sees vendors with immature privacy practices, unclear data handling policies, and reluctance to accept contractual obligations. We see regulators increasing enforcement activity and civil rights groups raising concerns about AI bias in healthcare.

The conversation you need to have is about acceptable risk, not whether risk exists. Every AI vendor carries some risk. The question is whether the potential benefit justifies that risk, whether you have safeguards in place to mitigate it, and whether you're prepared to defend your decision if something goes wrong.

I've found it helpful to frame this as a shared accountability model. Clinical leaders own the decision about whether a tool's benefits justify its adoption. Compliance and IT own the assessment of what risks exist and what safeguards are needed. Executive leadership owns the final call on whether the risk-benefit calculation makes sense for the organization.

What doesn't work is letting enthusiasm for AI capabilities override basic due diligence. The organizations that get this right move fast on AI adoption because they have rigorous assessment processes, not in spite of them.

What This Means for Your AI Strategy

The question of whether your AI vendor needs to sign a BAA isn't just a compliance checkbox. It's a forcing function that makes you examine how the tool actually works, what data it touches, and whether the vendor understands regulated industries.

Organizations that treat AI vendor assessment as a compliance burden tend to either move too slowly and miss opportunities or move too fast and create risk. Organizations that treat it as a strategic capability build competitive advantage. They can evaluate and deploy AI tools faster than competitors because they've built repeatable processes. They avoid the cost and reputation damage of breaches because they've done proper due diligence upfront.

If you're a CISO or compliance leader, your job isn't to say no to AI. It's to build the frameworks that let your organization say yes safely. That means understanding when an AI vendor BAA is required, knowing how to evaluate vendors who claim they don't need one, and having the courage to walk away from vendors who won't accept reasonable safeguards.

The regulatory environment around AI and healthcare data is still evolving. OCR will likely issue more specific guidance in the coming years. State privacy laws are adding new requirements. International regulations like the EU AI Act create additional complexity for global vendors.

But the fundamentals won't change: if a vendor handles your patients' protected health information, they need to be accountable for protecting it. A Business Associate Agreement is how HIPAA codifies that accountability. Everything else is details.

Related reading: ChatGPT in Healthcare: HIPAA Risks and How to Manage Them · What Is Regulatory Compliance? A Practical Guide