```mermaid
flowchart TB
  A([Identify\nAffected Communities]) --> B[Early Deliberative\nDialogue Pre-Build]
  B --> C[Co-Design\nDisclosure Language]
  C --> D[Participatory\nRed-Teaming]
  D --> E{Does community\naccept deployment?}
  E -->|Yes, with conditions| F[Pilot with Community\nFeedback Loop]
  E -->|No| G[Redesign or\nDefer Deployment]
  G --> B
  F --> H[Publish\nOutcomes Report]
  H --> I([Ongoing\nAdvisory Relationship])
```

Figure 12.1: Community engagement workflow for AI deployment, from identifying affected communities through an ongoing advisory relationship.
12 Patient and Community Trust
There is a tempting shorthand for the case for transparency in clinical AI: institutions disclose how AI is used in patient care because they are required to, or because it reduces liability exposure. Both motivations are real. Neither is sufficient to explain why transparency actually matters for the safety and effectiveness of care.
The more important reason is that patient trust is a clinical variable. A patient who distrusts an AI tool used in their care will engage with it differently than one who understands and accepts it. They may withhold information, seek care elsewhere, or disregard recommendations they believe were generated by a system optimized for cost reduction rather than their welfare. Trust does not just affect how patients feel about their care; it affects whether the care is effective. An AI system that produces accurate outputs but is deployed in a population that distrusts it will produce worse population-level outcomes than a less accurate system that has earned confidence.
This chapter addresses the social license for AI in healthcare — the degree to which patients and communities actually believe that an AMC is deploying AI on their behalf. It is distinct from the regulatory compliance chapter, which addresses what institutions are required to do, and from the agentic safety chapter (Chapter 11), which addresses how autonomous systems should be governed. Social license addresses a prior question: do the people served by this institution believe it should be deploying AI at all, and in what ways? Institutions that skip this question and proceed directly to deployment governance are building on ground they have not tested.
12.1 The Empirical Trust Landscape
The consistent finding across surveys of patient attitudes toward AI in healthcare is that trust is lower than many technology optimists assume, and more variable across demographic groups than most institutional communications acknowledge.
Pew Research surveys on AI in healthcare have found that most U.S. adults are uncomfortable with their provider relying on AI for their medical care, and that the large majority would prefer a human provider when it comes to accuracy-sensitive decisions (Pew Research Center 2023). The discomfort is not evenly distributed: younger adults, higher-income adults, and those with more experience using digital health tools consistently report more comfort with AI-assisted care than older adults, lower-income adults, and those with less digital access. Insurance status, chronic condition burden, and prior negative experiences with the health system all independently predict lower trust in AI.
The type of AI application matters as much as the population. Patients who are skeptical of AI-assisted diagnosis are often more accepting of AI in administrative functions — scheduling, prescription refill processing, billing. The concern is concentrated in contexts where the patient perceives the AI as making clinical judgments that might otherwise be made by a physician with whom they have a relationship. Ambient documentation, in which an AI listens during a clinical encounter, occupies a particularly sensitive position: surveys consistently find that patients who receive an explanation of what the ambient system does and how the data is used are substantially more accepting than those who are not told.
There is also a gap between what patients know and what they prefer. Many patients do not know that AI tools are already embedded in their care — in the risk scores that determine their appointment priority, in the routing of their portal messages, in the sepsis alerts that flag their deterioration. Studies that ask patients how they would feel about AI use in care that they already receive without knowing it frequently find that awareness increases concern, at least initially. Institutions that interpret patient acceptance of current care as patient acceptance of AI in care are drawing an inference that the data does not support.
12.2 The Historical Roots of Differential Trust
Understanding why trust in healthcare AI is lower in Black, Latino, and Indigenous communities than in white communities requires engaging with history rather than treating differential trust as an artifact to be corrected through better communication. The medical research system has a documented record of using marginalized communities as subjects without genuine consent, without equitable distribution of benefits, and without recourse when harm resulted. The legacy of the Tuskegee syphilis study — in which Black men with syphilis were observed without treatment for decades after effective treatment became available — produced measurable multi-generational effects on participation in medical research and on trust in medical institutions in Black communities that persist decades later.
These are not historical curiosities that have been corrected by better ethical oversight. Contemporary algorithmic systems in healthcare have replicated structural inequities in ways that researchers have documented repeatedly. A widely cited study found that a commercial algorithm used to allocate healthcare resources to high-need patients systematically underestimated the needs of Black patients, because the algorithm used healthcare utilization as a proxy for health need — and Black patients with equivalent illness burden utilized less care, due to structural barriers in access. The algorithm was not explicitly programmed to be discriminatory; it was trained on data that encoded existing inequities, and it reproduced them at scale.
For AMC leaders deploying clinical AI, the practical implication is that the communities with the most to gain from AI-enabled improvements in care access and quality are also the communities with the best-documented reasons to distrust the institutions deploying it. Community engagement for AI governance in these communities is not a communication exercise. It is not “informing” the community about a decision that has already been made. It is bringing the community into the governance process before deployment decisions are finalized — which means starting earlier, accepting that community input may change deployment decisions, and being transparent about that possibility from the outset.
12.3 Meaningful Disclosure vs. Boilerplate
The regulatory minimum for AI disclosure — satisfying California AB 3030 (AB 3030 2024), for example — requires that patients be told when AI has been used to generate health communications. The regulatory minimum and the trust-building minimum are not the same thing.
Boilerplate disclosure — “This message was generated using AI technology” — satisfies the letter of the disclosure requirement. It does not tell the patient which AI system was used, whether a clinician reviewed the output, what the system is and is not capable of, or what the patient can do if they have concerns. Research on patient responses to AI disclosure consistently finds that boilerplate language increases concern without providing the context needed to resolve it. Patients who receive boilerplate disclosure are not better informed than those who receive no disclosure; they are more anxious, without a framework for understanding what their anxiety is about.
Meaningful disclosure has three components. It identifies what the AI did specifically (drafted this message, analyzed this image, flagged this result). It clarifies the human role (a physician reviewed this draft before sending; a radiologist reviewed the AI findings). And it provides an actionable recourse (if you have questions about how this was produced, you can ask your care team). None of this requires technical language or a long disclosure form. The most effective disclosure language found in pilot programs is specific and brief: “Dr. [Name] asked our AI system to draft this response based on your message and your recent visit. Dr. [Name] reviewed and approved it before sending.”
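One way to keep disclosure language from drifting back toward boilerplate is to treat the three components as required fields and refuse to emit text when any of them is empty. The sketch below is a hypothetical Python illustration of that idea, not a description of any vendor's template system; the class, field names, and clinician name are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class DisclosureText:
    """Three components of meaningful disclosure (hypothetical field names)."""
    ai_action: str    # what the AI did, stated specifically
    human_role: str   # who reviewed or approved the output
    recourse: str     # what the patient can do with questions or concerns

    def render(self) -> str:
        # Refuse to render if any component is missing, so a message cannot
        # go out with boilerplate standing in for real disclosure.
        for field_name, value in vars(self).items():
            if not value.strip():
                raise ValueError(f"Missing disclosure component: {field_name}")
        return f"{self.ai_action} {self.human_role} {self.recourse}"


# Example modeled on the pilot-program language above; the clinician name is a placeholder.
portal_reply = DisclosureText(
    ai_action="Dr. Rivera asked our AI system to draft this response based on your message and your recent visit.",
    human_role="Dr. Rivera reviewed and approved it before sending.",
    recourse="If you have questions about how this reply was produced, you can ask your care team.",
)
print(portal_reply.render())
```

The same structure can back every touchpoint type, so that changing the approved language is a template edit rather than a hunt through individual message flows.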
The workflow in Figure 12.1 reflects a principle that is easy to state and difficult to practice: community engagement for AI governance is most valuable before deployment decisions are made, not after. An institution that presents a completed AI deployment to a community advisory group and asks for feedback is not practicing participatory governance; it is practicing consultation theater. The community members who participate in that process will know the difference, and the trust consequences will follow.
12.4 Informed Consent and Its Gaps
Existing patient consent frameworks in most AMCs were designed for interventions that are visible, discrete, and time-bounded: a surgery, a medication, an imaging procedure. Clinical AI tools do not fit this model. A predictive risk score that runs continuously in the background of the EHR, influencing which patients get proactive outreach and which do not, is not an intervention in the usual sense. There is no natural moment at which a patient is informed and asked to consent, because the tool is not doing anything to the patient directly at a specific moment. It is shaping institutional decisions about the patient in ways the patient may never observe.
Ambient documentation is the most visible gap. A patient who arrives for a clinic visit where an ambient scribe is running is being recorded and having their conversation synthesized by an AI system. They may or may not know this before the encounter begins. California AB 3030 requires disclosure in written communications; it does not address the consent architecture for the encounter itself. The institutions that have managed this most effectively have adopted a verbal consent practice — the clinician explains the ambient system at the start of the encounter, tells the patient they can ask for it to be turned off, and documents the consent in the visit record. This is not complicated; it is a thirty-second conversation that most patients appreciate.
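Documenting that thirty-second conversation is easier to audit when the visit record captures a small, structured consent entry rather than free text. The sketch below is illustrative only; the field names and identifiers are assumptions, and a real implementation would live inside the EHR's own documentation model rather than a standalone script.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class AmbientConsent(Enum):
    ACCEPTED = "accepted"
    DECLINED = "declined"               # scribe turned off before the encounter began
    WITHDRAWN = "withdrawn_mid_visit"   # patient asked for it to be turned off during the visit


@dataclass
class AmbientConsentNote:
    """Hypothetical structured consent entry for one ambient-documentation encounter."""
    encounter_id: str
    clinician_id: str
    explanation_given: bool   # clinician confirms the verbal explanation occurred
    decision: AmbientConsent
    recorded_at: datetime


# Example entry for a visit where the patient accepted after the verbal explanation.
note = AmbientConsentNote(
    encounter_id="ENC-2025-000123",
    clinician_id="CLIN-0042",
    explanation_given=True,
    decision=AmbientConsent.ACCEPTED,
    recorded_at=datetime.now(timezone.utc),
)
```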
Predictive risk scoring presents a different consent challenge. The patient does not know whether they scored high or low on the readmission risk model, or whether that score influenced which patients received a follow-up call and which did not. Whether patients should be told their AI risk scores is a question that different institutions have answered differently, and the ethics literature does not offer a clear consensus. What the literature does support is that patients who are told their scores and given an explanation of what the score means and what follows from it are more accepting of AI-influenced care than those who discover after the fact that such scoring was occurring (Jones et al. 2023).
12.5 Patient AI Advisory Councils
A small number of health systems have established Patient AI Advisory Councils — governance bodies distinct from standard patient advisory boards, with a specific mandate to advise on AI tool selection, deployment, and oversight. Vanderbilt University Medical Center’s ADVANCE Center created an AI Patient and Family Advisory Group that participates in “red-teaming” AI tools during the ideation phase — reviewing tools for potential biases and unintended consequences before they reach the pilot stage. The Digital Medicine Society (DiMe) holds that engaging patients as active partners on AI governance bodies is the gold standard for AI program maturity.
The structural distinction between a Patient AI Advisory Council and a standard patient advisory board matters. Standard advisory boards typically advise on service delivery and patient experience; their members are selected to represent satisfied patients. A Patient AI Advisory Council needs members who can engage with questions of algorithmic fairness, data use, and automated decision-making — which means it should deliberately recruit from communities with high stakes in those questions, including communities with documented reasons to distrust institutional AI. It should have access to technical staff who can explain how specific tools work, and it should have a defined relationship to the AI Steering Committee, including clarity about which decisions it is consulted on versus which decisions it has authority over.
| Dimension | Standard Patient Advisory Board | Patient AI Advisory Council |
|---|---|---|
| Membership basis | Satisfied patient representatives | Diverse community members; includes historically marginalized voices |
| Subject matter | Service delivery, patient experience | AI tool selection, deployment, bias audits, disclosure language |
| Phase of engagement | Post-deployment feedback | Pre-deployment ideation and red-teaming |
| Relationship to AISC | Informational only | Formal consultative role; documented input on pilot approvals |
| Authority | Advisory | Consultative; specific veto scope may be defined for high-risk pilots |
| Meeting frequency | Quarterly | Monthly (active deployment periods); quarterly (steady state) |
12.6 The Regulatory Landscape of Disclosure
The legislative landscape for AI disclosure in healthcare is still forming, but two state laws enacted in 2024 — one already in effect, the other taking effect in 2026 — should be treated as the leading edge of a national trend.
California AB 3030, effective January 2025, requires healthcare providers to include explicit disclosure when AI has been used to generate patient communications, unless a licensed human reviews the content first (AB 3030 2024). The law is narrow in scope — it covers written communications, not real-time verbal interactions — but it is the most specific AI disclosure mandate currently in effect at the state level, and AMCs with any California patient population must comply. Legal teams at national AMCs should treat the California standard as the floor for their disclosure practices regardless of the states in which they primarily operate.
Colorado Senate Bill 24-205, effective June 2026, goes further: it requires that any entity using AI in a “consequential decision” about health care provide a “pre-decision notice” that explains the purpose of the AI, the nature of the consequential decision, and how to appeal or opt out (SB 24-205 2024). The Colorado law covers a broader range of AI applications than California’s, including predictive scoring and automated triage, and it places the disclosure obligation on the institution using the AI, not just on the party that communicates the output to the patient.
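For compliance screening, the two summaries above can be reduced to a rough first-pass rule that flags which touchpoints need which artifacts, with counsel making the final determination. The sketch below is a simplified illustration of that screening logic under the statutory summaries in this section, not a legal analysis; the class, attributes, and thresholds are invented for the example.

```python
from dataclasses import dataclass


@dataclass
class AITouchpoint:
    """Simplified attributes of one patient-facing AI use (hypothetical schema)."""
    written_communication: bool     # e.g., portal message, discharge instructions
    reviewed_by_licensed_human: bool
    consequential_decision: bool    # e.g., predictive scoring, automated triage


def obligations(tp: AITouchpoint) -> list[str]:
    """Rough screening rule based on the statutory summaries above; not legal advice."""
    needed = []
    # CA AB 3030, as summarized here: AI-generated written communications need
    # explicit disclosure unless a licensed human reviewed the content first.
    if tp.written_communication and not tp.reviewed_by_licensed_human:
        needed.append("AB 3030 disclosure language")
    # CO SB 24-205, as summarized here: consequential decisions need a pre-decision
    # notice covering purpose, nature of the decision, and appeal or opt-out options.
    if tp.consequential_decision:
        needed.append("SB 24-205 pre-decision notice")
    return needed


# An unreviewed AI-drafted portal reply triggers the AB 3030 obligation only.
print(obligations(AITouchpoint(written_communication=True,
                               reviewed_by_licensed_human=False,
                               consequential_decision=False)))
```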
The FTC’s Operation AI Comply enforcement action in 2024 targeted companies making deceptive claims about AI capabilities in healthcare contexts — claiming human-level accuracy for tools that had not been validated, implying physician oversight for tools that had none (Federal Trade Commission 2024). The enforcement record makes clear that the FTC treats healthcare AI interfaces as covered by existing consumer protection law, and that institutions should expect regulatory attention to the gap between what they claim about AI tools and what the tools actually do.
12.7 Trust Recovery After Adverse AI Events
At any AMC that deploys AI at scale, an AI tool will eventually produce an outcome that harms or nearly harms a patient. How the institution communicates about that event — to the patient, to the clinical staff involved, and to the public — will matter more for long-term trust than whether the event occurred at all. Institutions that respond to adverse AI events with transparency and accountability recover trust more effectively than those that minimize or deflect (Jones et al. 2023).
The trust recovery research in other high-stakes domains — aviation, nuclear power, food safety — is consistent on two principles. First, prompt and specific disclosure of what happened and what is being done in response is more effective than delayed disclosure, even when the complete picture is not yet known. “We are investigating an error in our AI system that may have affected your care, and we will contact you with our findings by [date]” is more trust-preserving than silence followed by a detailed disclosure weeks later. Second, apology without accountability — “we are sorry this happened” without a commitment to specific changes — does not restore trust. Patients want to know that the institution understands what went wrong and has taken concrete steps to prevent recurrence.
For institutions with an AI transparency report program — a public annual or semi-annual report that describes deployed AI tools, their performance metrics, and any documented adverse events — the adverse event entry provides evidence that the reporting is genuine. A transparency report that reports zero adverse events in a large AI deployment is not credible. A report that describes an adverse event, the investigation findings, and the remediation measures is evidence that the institution is actually monitoring.
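A transparency report is easier to keep honest when each deployed tool has a consistent entry shape that includes an adverse-events field, so an empty list is visibly an empty list rather than a silent omission. The sketch below is a hypothetical entry, not a published reporting standard; every name and value is invented for illustration.

```python
# Illustrative shape of one tool's entry in an annual AI transparency report.
transparency_entry = {
    "tool": "Inpatient readmission risk model",
    "deployment_scope": "All adult medicine units",
    "performance": {"monitoring_cadence": "monthly", "last_validation": "2025-Q2"},
    "adverse_events": [
        {
            "summary": "Risk scores unavailable for 36 hours after an interface outage",
            "investigation_findings": "Upstream data feed change was not caught by monitoring",
            "remediation": "Added feed-schema checks and an on-call alert for missing scores",
        }
    ],
}
```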
12.8 Where to Start
12.8.1 Starter Project 1: Patient AI Disclosure Audit and Language Standardization
What it is: An audit of every current patient-facing AI interaction to assess whether disclosure is occurring, what language is being used, and whether that language meets the substantive standard described above — specific, human role clarified, recourse identified.
Why now: California AB 3030 is in effect, and Colorado SB 24-205 takes effect in 2026. An institution that has not audited its disclosure practices is not in a position to certify compliance, and more importantly, is not in a position to know whether patients understand what AI is being used in their care.
How to execute: Map every patient-facing AI touchpoint: portal messages, discharge instructions, appointment communications, chatbot interactions, ambient documentation encounters. For each touchpoint, assess: is disclosure occurring? Is it meaningful or boilerplate? Does it meet AB 3030’s requirement for communications not reviewed by a licensed human? The output is a prioritized remediation plan and a set of standardized disclosure language options for each touchpoint type.
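The audit output lends itself to a simple tabulation: one row per touchpoint, a flag per disclosure component, and a remediation priority derived from them. The sketch below is a minimal illustration under hypothetical field names and example rows, closer to a spreadsheet than to software, and is not a prescribed audit tool.

```python
import csv
from dataclasses import dataclass, asdict


@dataclass
class TouchpointAudit:
    """One row of the disclosure audit; field names are illustrative."""
    touchpoint: str            # e.g., "portal message replies"
    disclosure_present: bool
    names_ai_action: bool      # says specifically what the AI did
    states_human_role: bool
    offers_recourse: bool
    licensed_review: bool      # reviewed by a licensed human before release

    def priority(self) -> str:
        if not self.disclosure_present:
            # Highest exposure: AI-generated content with no licensed review and no disclosure.
            return "remediate first" if not self.licensed_review else "verify AB 3030 exemption"
        if not (self.names_ai_action and self.states_human_role and self.offers_recourse):
            return "upgrade boilerplate"
        return "acceptable"


# Invented example rows for illustration.
rows = [
    TouchpointAudit("portal message replies", True, False, False, False, False),
    TouchpointAudit("appointment reminders", False, False, False, False, True),
]

with open("disclosure_audit.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(rows[0]).keys()) + ["priority"])
    writer.writeheader()
    for row in rows:
        writer.writerow({**asdict(row), "priority": row.priority()})
```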
Buy vs. build: Process and language work, not a technology purchase. Standardized disclosure language templates require legal review, not a vendor relationship.
12.8.2 Starter Project 2: Establish a Patient AI Advisory Council
What it is: A standing advisory council with defined membership, mandate, and relationship to the AI Steering Committee, recruited specifically to advise on AI tool selection and deployment from the patient and community perspective.
Why now: The institutions that deploy AI most sustainably are the ones that have community confidence before adverse events occur, not the ones scrambling to build community relationships afterward. The DiMe clinical AI maturity model treats patient advisory engagement as a gold-standard governance indicator. An institution without a Patient AI Advisory Council is not yet at the governance standard the field is converging toward.
How to execute: Define the charter before recruiting members: what does the council advise on, how does its input reach the AISC, and what decisions (if any) require council consultation before proceeding? Recruit at least half the membership from communities with documented reasons to be concerned about algorithmic bias in healthcare. Build in access to a technical liaison — a clinical informatics team member who can explain, in accessible terms, how specific AI tools work and what the known risks are. The council should meet monthly during active deployment periods and quarterly during steady-state operations.
Buy vs. build: Program design and staff time. No technology purchase required. The significant investment is in the quality of recruitment and the integrity of the governance relationship — specifically, whether the council’s input is genuinely considered or merely documented.