Appendix A — AI Principles Across Governance Frameworks
The WHO, the FUTURE-AI consortium, the FDA and its international regulatory partners, AMIA, NIST, and the AMA have each produced AI governance frameworks from different starting points and for different audiences — and they converge on the same small set of concerns: safety, transparency, fairness, human oversight, and accountability. That convergence is meaningful. It suggests these are not arbitrary categories but genuine pressure points where AI systems consistently generate governance problems, regardless of who is doing the governing.
This appendix describes the major frameworks relevant to AMC AI governance, notes what is distinctive about each, and closes with a comparison table for governance teams designing programs that need to work across multiple standards simultaneously.
A.1 The Frameworks
WHO Ethics and Governance of Artificial Intelligence for Health (2021, updated 2024) (World Health Organization 2024) is the most comprehensive international health-sector AI ethics framework. Its six principles — human autonomy, human well-being and safety, transparency and explainability, responsibility and accountability, inclusiveness and equity, and responsiveness and sustainability — are organized around the patient and community as the primary stakeholders, not the health system. The sustainability principle, which asks whether AI deployment is globally equitable and environmentally responsible, is distinctive among health-sector frameworks and is often the least operationalized in AMC governance programs.
FUTURE-AI is an international multi-stakeholder consortium that developed a framework structured around six properties whose initials spell the name: Fairness, Universality, Traceability, Usability, Robustness, and Explainability (Lekadir et al. 2022). The Universality principle — that AI tools should perform equitably across diverse populations and be validated on representative global datasets — goes further than most frameworks in demanding that equity be built into the development process, not added in post-deployment monitoring.
Good Machine Learning Practice for Medical Device Development: Guiding Principles is a joint statement from the FDA, Health Canada, and the UK’s MHRA (U.S. Food and Drug Administration et al. 2021). It is the most operationally specific of the frameworks listed here, specifying practices rather than values: representative datasets, independent training and test sets, thorough clinical testing, and explicit post-deployment monitoring requirements. Of the listed frameworks, it comes closest to a checklist that a vendor validation team can work through. AMC procurement teams evaluating clinical AI vendors should use this framework as a minimum due diligence standard.
AMIA’s Artificial Intelligence Principles (American Medical Informatics Association 2024) map traditional bioethics principles — autonomy, beneficence, non-maleficence, justice — onto an AI governance context and add informatics-specific properties: explainability, interpretability, fairness, dependability, auditability, and knowledge management. The knowledge management principle, which asks how AI systems contribute to or degrade the institution’s knowledge infrastructure, is distinctive and underrepresented in other frameworks.
NIST AI Risk Management Framework (AI RMF 1.0) (National Institute of Standards and Technology 2023) is a voluntary US federal framework organized around four functions — Govern, Map, Measure, Manage — with seven trustworthiness characteristics: valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair with harmful bias managed. The RMF is distinctive in its process orientation: unlike the principle frameworks, it specifies organizational functions and roles, making it useful as an implementation scaffold rather than a values statement. The AISC governance structure described in Chapter 3 maps naturally onto the RMF's Govern function.
AMA Principles for Augmented Intelligence (American Medical Association 2024) are addressed to clinicians and health systems and place particular weight on physician agency: AI tools should support rather than supplant clinical judgment, and physicians must retain meaningful control over AI-assisted decisions. The AMA principles also address the obligation to inform patients when AI plays a role in their care, connecting to the notice and explanation principle in the OSTP Blueprint (Appendix B).
A.2 Where the Frameworks Agree
| Principle Theme | WHO | FUTURE-AI | FDA/MHRA | AMIA | NIST AI RMF | AMA |
|---|---|---|---|---|---|---|
| Safety and non-maleficence | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Transparency and explainability | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Fairness and equity | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ |
| Human oversight and autonomy | ✓ | — | ✓ | ✓ | ✓ | ✓ |
| Accountability | ✓ | — | ✓ | ✓ | ✓ | ✓ |
| Post-deployment monitoring | ✓ | ✓ | ✓ | ✓ | ✓ | — |
| Privacy | — | — | — | — | ✓ | ✓ |
| Global/demographic representativeness | ✓ | ✓ | ✓ | ✓ | ✓ | — |
Safety, transparency, and fairness appear in every framework, and post-deployment monitoring in all but the AMA's. That is the minimum common denominator — the set of themes any credible AMC AI governance program has to address if it wants to be legible to regulators, accreditors, and institutional partners drawing from any of these sources. The divergences are instructive too: privacy, sustainability, and knowledge management each appear in only one or two frameworks. These are not areas of disagreement so much as areas where different communities have different starting assumptions. Governance teams should treat them as prompts for explicit institutional choices rather than gaps to paper over.