6  Clinical Domain

6.1 Literature

  • Mandl, K.D., Gottlieb, D. and Mandel, J.C. (2024) ‘Integration of AI in healthcare requires an interoperable digital data ecosystem’, Nature Medicine. Available at: https://doi.org/10.1038/s41591-023-02783-w.
  • Tierney, A.A. et al. (2024) ‘Ambient Artificial Intelligence Scribes to Alleviate the Burden of Clinical Documentation’, NEJM Catalyst Innovations in Care Delivery, 5(1), p. CAT.23.0404. Available at: https://doi.org/10.1056/CAT.23.0404.

6.2 Guiding principles table

Table 6.1: Questions that can be used when considering each principle in the AI development process (Badal, Lee, and Esserman 2023)
1. Alleviate healthcare disparities
  • What health disparities are reported for the present AI application?
  • How can the AI tool be designed to be accessible to and improve outcomes for the disadvantaged population?
  • What clinical interventions are needed to realize the benefit, and are these accessible?
  • How can data collection be supported in underserved communities for tool retraining over time?
2. Report clinically meaningful outcomes
  • How is clinical benefit defined in this domain?
  • What is the present threshold for the clinical benefit of existing tools, and how can the AI tool improve upon this threshold?
3. Reduce overdiagnosis and overtreatment
  • What disease state is an overdiagnosis?
  • For every case of overdiagnosis, what are the downstream costs to the patient and healthcare system?
  • How can this AI application reduce the number of overdiagnoses compared to existing approaches?
4. Have high healthcare value
  • Is this AI tool addressing a high-priority healthcare need?
  • What would be the cost to the healthcare system in implementation, maintenance, and update?
  • What would be the cost to the patient who does and does not benefit from this tool?
  • Does this tool have high healthcare value, and if not, how can it be improved?
5. Incorporate biography
  • What biographical data can be collected or carefully coded for the intended population?
  • How do these factors vary in the intended population?
  • How can these factors be included when developing AI tools?
6. Be easily tailored to the local population
  • Can the training features be easily collected in different settings?
  • Are these features reliable for training across different populations?
  • Will the AI/ML workflow be made open-access?
7. Promote a learning healthcare system
  • How will this AI application be evaluated over time, and at what intervals?
  • What are acceptable thresholds for performance?
  • How will the evaluation results contribute to continuous improvement?
8. Facilitate shared decision-making
  • Have AI explainability tools been explored and utilized?
  • Do clinicians and patients find the explainability results helpful?
  • Have simpler, explainable algorithms been tried and compared to ‘black-box’ algorithms to determine if a simpler model performs just as well?
  • How can patient values be easily integrated into the use of the AI tool?

6.2.1 Principle 1: AI tools should aim to alleviate existing health disparities

Reaching health equity requires eliminating the disparities in health outcomes that are closely linked with social, economic, and environmental disadvantage. At their core, AI tools require the collection of specialized, high-quality data, advanced computing infrastructure, the capacity to purchase models from or partner with commercial entities, and unique technical expertise, all of which are least likely to be available to the healthcare systems that serve the most disadvantaged populations.

More careful training and model development that accounts for the unique needs of disadvantaged populations is needed to ensure that AI tools do not exacerbate existing health disparities. Creating equitable AI tools may require prioritizing simpler models for deployment, and the trade-off between accuracy and equity can potentially be resolved by designing AI tools that can be easily tailored to the local population (Principle 6). AI tools designed to serve disadvantaged groups must also not unnecessarily divert resources from higher-priority areas and more effective interventions (Principle 4).
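
One practical starting point is to audit a candidate model's performance separately for each subgroup it will serve, since aggregate metrics can hide poor performance in a disadvantaged population. The sketch below is a minimal illustration of such an audit, not a method from the cited paper; the labels, scores, and group assignments are hypothetical.

```python
# Minimal sketch of a subgroup performance audit (illustrative only;
# the data, group labels, and threshold below are hypothetical).
from collections import defaultdict

def sensitivity_by_group(y_true, y_score, groups, threshold=0.5):
    """Per-group sensitivity (true-positive rate) at a fixed decision threshold."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for truth, score, group in zip(y_true, y_score, groups):
        if truth == 1:
            if score >= threshold:
                tp[group] += 1
            else:
                fn[group] += 1
    return {g: tp[g] / (tp[g] + fn[g])
            for g in tp.keys() | fn.keys() if tp[g] + fn[g] > 0}

# Toy held-out data: true labels, model scores, and a coarse subgroup indicator.
y_true  = [1, 1, 0, 1, 1, 0, 1, 0, 1, 1]
y_score = [0.9, 0.4, 0.2, 0.8, 0.3, 0.6, 0.7, 0.1, 0.35, 0.85]
groups  = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

for group, tpr in sorted(sensitivity_by_group(y_true, y_score, groups).items()):
    print(f"group {group}: sensitivity = {tpr:.2f}")
```

A gap between groups in such an audit is a signal to revisit the training data, features, or threshold before deployment, rather than a result to be reported and set aside.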

6.2.2 Principle 2: AI tools should produce clinically meaningful outcomes

AI tools should be evaluated on their ability to improve clinically meaningful outcomes. The clinical benefit of an AI tool should be defined in the context of the existing standard of care, and the tool should be evaluated against that standard. If AI practitioners do not define metrics for clinical benefit a priori, they risk producing tools that clinicians cannot evaluate or use. Clinician partners of AI researchers should evaluate accuracy, fairness, and the risks of overdiagnosis and overtreatment (Principle 3). They should also evaluate healthcare value (Principle 4), along with the explainability and auditability of AI tools and models (see the questions outlined in Table 6.1).
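
To make this concrete, the a priori definition of clinical benefit can be written down as an explicit, testable criterion before any evaluation is run. The sketch below is a hypothetical illustration of that idea; the confusion-matrix counts and the benefit threshold are invented, not drawn from the cited paper. Note that in this invented example the tool fails the pre-registered criterion despite higher sensitivity, because specificity slips below the standard of care.

```python
# Minimal sketch of an a-priori evaluation against the standard of care
# (illustrative only; the counts and the benefit criterion are hypothetical).

def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Pre-registered definition of clinical benefit, fixed before evaluation:
# the AI tool must improve sensitivity by >= 0.05 without losing specificity.
MIN_SENSITIVITY_GAIN = 0.05

soc_sens, soc_spec = sens_spec(tp=80, fn=20, tn=180, fp=20)  # standard of care
ai_sens,  ai_spec  = sens_spec(tp=90, fn=10, tn=178, fp=22)  # AI tool

meets_benefit = (ai_sens - soc_sens >= MIN_SENSITIVITY_GAIN) and (ai_spec >= soc_spec)
print(f"standard of care: sens={soc_sens:.2f}, spec={soc_spec:.2f}")
print(f"AI tool:          sens={ai_sens:.2f}, spec={ai_spec:.2f}")
print("clinically meaningful benefit:", meets_benefit)
```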

6.2.3 Principle 3: AI tools should reduce overdiagnosis and overtreatment

Particularly in the United States, overdiagnosis and overtreatment are major drivers of healthcare costs and patient harm. Overdiagnosis occurs when a disease is diagnosed that would not have caused symptoms or death in the patient’s lifetime; overtreatment occurs when such a disease is treated. AI tools should therefore be built with careful attention to the full spectrum of disease and available interventions, so that they decrease rather than add to overdiagnosis and overtreatment.
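
The operating threshold of a diagnostic model is one concrete lever here: raising it trades some sensitivity for fewer flagged cases of indolent or absent disease, each of which is a candidate overdiagnosis with downstream costs. The sketch below is a minimal, hypothetical illustration of that trade-off, not an analysis from the cited paper.

```python
# Minimal sketch of the threshold trade-off behind overdiagnosis
# (illustrative only; the labels and scores below are hypothetical).
# Here y_true = 1 marks disease that would go on to cause harm, and
# y_true = 0 marks indolent or absent disease, so a flagged 0 is a
# candidate overdiagnosis.

def counts_at_threshold(y_true, y_score, threshold):
    """Candidate overdiagnoses and true detections at a given threshold."""
    overdx = sum(1 for t, s in zip(y_true, y_score) if t == 0 and s >= threshold)
    truedx = sum(1 for t, s in zip(y_true, y_score) if t == 1 and s >= threshold)
    return overdx, truedx

y_true  = [0, 0, 0, 0, 0, 0, 1, 1, 1, 1]
y_score = [0.2, 0.4, 0.45, 0.5, 0.55, 0.6, 0.5, 0.7, 0.8, 0.9]

for threshold in (0.4, 0.5, 0.6, 0.7):
    overdx, truedx = counts_at_threshold(y_true, y_score, threshold)
    print(f"threshold {threshold:.2f}: {overdx} candidate overdiagnoses, "
          f"{truedx} true detections")
```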

6.2.4 Principle 4: AI tools should have high healthcare value and avoid diverting resources from higher-priority areas

AI tools applied in healthcare should deliver the same outcomes at reduced cost, or better outcomes at a cost comparable to current practice. The costs of gathering inputs and of building, maintaining, updating, interpreting, and deploying a tool in clinical practice must be estimated and included when weighing decisions about its adoption. Note that what is cost-effective, and therefore high-value, in one setting might be extremely cost-ineffective in settings where resources are scarce.
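
One standard way to make this weighing explicit is the incremental cost-effectiveness ratio (ICER): the additional cost per additional unit of health benefit (for example, per quality-adjusted life year, QALY) relative to the current standard of care. The sketch below is a generic illustration with invented figures, not an analysis from the cited paper.

```python
# Minimal sketch of an incremental cost-effectiveness comparison
# (illustrative only; every figure below is hypothetical).

def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit of benefit."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Per-patient cost must include data collection, deployment, maintenance,
# and clinician interpretation time, not just a license fee.
cost_ai, qaly_ai = 1200.0, 8.3   # AI-assisted pathway (hypothetical)
cost_std, qaly_std = 900.0, 8.1  # current standard of care (hypothetical)

ratio = icer(cost_ai, qaly_ai, cost_std, qaly_std)
print(f"ICER = ${ratio:,.0f} per QALY gained")
```

Whether a given ICER counts as high value depends on the local willingness-to-pay threshold, which is exactly why a tool that is cost-effective in one health system can be cost-ineffective in a resource-scarce one.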

6.2.5 Principle 5: AI tools should incorporate social, structural, environmental, emotional, and psychological drivers of health

6.2.6 Principle 6: AI tools should be easily tailored to the local population

6.2.7 Principle 7: AI tools should promote a learning healthcare system

6.2.8 Principle 8: AI tools should facilitate shared decision-making

Badal, K., Lee, C.M. and Esserman, L.J. (2023) ‘Guiding principles for the responsible development of artificial intelligence tools for healthcare’, Communications Medicine, 3(1), p. 47. Available at: https://doi.org/10.1038/s43856-023-00279-9.