Appendix A — AI principles proposed by select organizations
This list is adapted from Badal, Lee, and Esserman (2023), Table 1.
- Ethics and governance of artificial intelligence for health, World Health Organization
  - Human autonomy
  - Human well-being and safety and the public interest
  - Transparency, explainability, and intelligibility
  - Responsibility and accountability
  - Inclusiveness and equity
  - Responsiveness and sustainability
- Medical AI algorithm assessment checklist, FUTURE-AI (an international, multi-stakeholder consortium)
  - Fairness
  - Universality
  - Traceability
  - Usability
  - Robustness
  - Explainability
- Good Machine Learning Practice for Medical Device Development: Guiding Principles, US Food and Drug Administration (FDA), Health Canada, and the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA)
  - Leverage multidisciplinary expertise in development
  - Implement good software engineering and security practices
  - Datasets are representative of intended population
  - Training and test sets are independent
  - Reference datasets are well developed
  - Optimize performance of Human-AI Team
  - Thorough clinical testing
  - Information accessible to users
  - Monitor deployed models and mitigate retraining risk
- Defining AMIA’s artificial intelligence principles, American Medical Informatics Association (AMIA)
  - Autonomy
  - Beneficence
  - Non-maleficence
  - Justice
  - Explainability
  - Interpretability
  - Fairness
  - Dependability
  - Auditability
  - Knowledge management
Badal, Kimberly, Carmen M. Lee, and Laura J. Esserman. 2023. “Guiding Principles for the Responsible Development of Artificial Intelligence Tools for Healthcare.” Communications Medicine 3 (1): 47. https://doi.org/10.1038/s43856-023-00279-9.