3 The Framework
To integrate AI successfully across an academic medical center, start by acknowledging what an AMC actually is. It is not a single organization with a unified mission, a single budget, and a consistent risk tolerance. It is a federation of at least four semi-independent domains — clinical care, research, education, and business operations — that share governance overhead and physical infrastructure but operate under largely independent leadership, funding models, regulatory obligations, and definitions of what success looks like. A tool or policy that makes perfect sense for clinical operations may be irrelevant or actively counterproductive in the research context. An AI use case that is urgent in the business operations domain may be low priority in the education domain, and vice versa.
This structural reality is the organizing principle of the framework. Rather than prescribing a single institutional AI program, the framework recognizes that AI deployment in an AMC has both domain-specific and cross-domain dimensions, and that good governance requires handling each dimension appropriately. The domains are organized vertically, reflecting their semi-independence. The workstreams are organized horizontally, reflecting the shared capabilities and governance functions that cut across all domains. The AI Steering Committee (AISC) sits above both, providing the portfolio management and accountability function that neither the domains nor the workstreams can provide alone.
3.1 The Four Domains
The domain structure reflects a basic fact about how AMCs work: clinical, research, educational, and business operations communities have different relationships to AI risk, different institutional cultures, and different processes for making deployment decisions.
Clinical encompasses patient care — the AI tools used in diagnosis, treatment planning, documentation, patient communication, and care coordination. Clinical AI carries the highest patient safety risk and the most developed regulatory framework. Tools in this domain may be subject to FDA oversight as Software as a Medical Device, must comply with the CMS human-review requirement for coverage decisions, and are now subject to the HHS Section 1557 nondiscrimination mandate. Clinical AI deployment decisions involve the CMO, the CMIO, patient safety leadership, and clinical governance structures that have no parallel in the other domains.
Research encompasses basic, translational, and clinical research programs — the AI tools used in literature synthesis, grant writing, protocol development, data analysis, and research operations. Research AI has a different risk profile: the primary risks are data provenance, IRB compliance, publication integrity, and the integrity of the scientific record, rather than immediate patient harm. Research AI deployment is governed by IRB review, data sharing agreements, federal research compliance frameworks, and professional norms around authorship and reproducibility.
Education encompasses teaching, learning, and assessment across medical school, graduate programs, residency, and continuing education. AI in education introduces academic integrity questions that have no parallel in the clinical or research domains — how to assess student competence in an environment where AI can generate sophisticated responses to most standard assessments, how to design learning experiences that build genuine clinical reasoning rather than delegating it to AI tools, and how to ensure that faculty can teach and evaluate AI-related competencies they are still developing themselves.
Business operations encompasses administrative and operational functions: revenue cycle, supply chain, facilities management, human resources, scheduling, and financial operations. Business operations AI typically has the lowest patient safety risk and the most tractable ROI calculation, but it touches sensitive financial and employment data and introduces process automation that changes staff roles in ways that require careful change management.
3.2 The Five Workstreams
Within each domain, the same five operational questions recur regardless of the specific AI use case. These recurring questions are the workstreams — the cross-cutting capabilities that the institution needs to develop and maintain whether it is deploying clinical AI, research AI, or administrative AI.
Data Access and Governance asks: what data can be used for AI development and deployment, under what conditions, governed by which agreements, and with what privacy protections? The data governance workstream is described in detail in Chapter 17. Its answers determine which AI use cases are feasible and which are not.
IT, Security, and Infrastructure asks: what technical architecture, API management, security controls, and monitoring infrastructure are required to deploy AI responsibly at scale? The infrastructure workstream, described in Chapter 14, is the difference between AI tools that are governed and AI tools that are not — the institutional API gateway is the chokepoint through which governance is enforced.
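The chokepoint idea can be made concrete with a small sketch. The tool identifiers, policy fields, and domain labels below are illustrative assumptions, not an actual institutional schema; the point is only that a gateway can refuse any request whose tool has not passed governance review, or whose data handling violates the tool's declared policy.

```python
# Hypothetical sketch of a gateway-level policy check. Tool names,
# policy fields, and domains are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class ToolPolicy:
    tool_id: str
    domain: str          # "clinical", "research", "education", or "business"
    phi_permitted: bool  # may this tool receive protected health information?
    approved: bool       # has the tool passed governance review?


def authorize(policy: ToolPolicy, request_contains_phi: bool) -> bool:
    """Allow a request through the gateway only if the tool is approved
    and the request respects the tool's data-handling policy."""
    if not policy.approved:
        return False
    if request_contains_phi and not policy.phi_permitted:
        return False
    return True


scribe = ToolPolicy("ambient-scribe", "clinical", phi_permitted=True, approved=True)
summarizer = ToolPolicy("lit-summarizer", "research", phi_permitted=False, approved=True)

assert authorize(scribe, request_contains_phi=True)        # permitted
assert not authorize(summarizer, request_contains_phi=True)  # PHI blocked
```

Because every call passes through one function, tightening a policy means changing one record, not hunting down every integration point — which is the practical argument for the gateway as chokepoint.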
Ethical, Legal, and Social asks: what are the ethical obligations, regulatory requirements, and social implications of deploying this AI tool in this context? The ethics workstream, described in Chapter 16, does not pose the same questions in all four domains. Research AI raises different equity questions than clinical AI; educational AI raises different accountability questions than business operations AI. The workstream provides shared frameworks and governance structures while leaving domain-specific application to the domain teams.
Training and Workforce Development asks: what do the people using, deploying, and overseeing AI tools need to know, and how does the institution build and sustain that capacity? The workforce workstream, described in Chapter 15, includes the four-tier competency model (consumers, translators, developers, governors) and the faculty development programs that prevent AI literacy from degrading as the technology evolves.
Project Management and Support asks: how does the institution manage the AI portfolio across its full lifecycle — from intake through deployment through decommissioning — and how does it maintain the organizational infrastructure (champions, architects, governance committees) that makes deployment sustainable? The project management workstream, described in Chapter 18, is the connective tissue that keeps everything else from being a set of independent good intentions.
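One way to picture that lifecycle is as a state machine. The stage names and transition rules below are assumptions layered onto the intake-through-decommissioning arc described above, not a prescribed process; the sketch shows only that a managed portfolio refuses transitions the process does not define.

```python
# Illustrative sketch: a minimal lifecycle state machine for AI portfolio
# items. Stage names and allowed transitions are assumptions, not a
# prescribed institutional process.
ALLOWED = {
    "intake": {"governance_review"},
    "governance_review": {"pilot", "rejected"},
    "pilot": {"deployed", "decommissioned"},
    "deployed": {"decommissioned"},
    "rejected": set(),
    "decommissioned": set(),
}


def advance(current: str, target: str) -> str:
    """Move a project to a new lifecycle stage, refusing transitions the
    portfolio process does not define (e.g. intake straight to deployed)."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current} -> {target}")
    return target


stage = "intake"
for nxt in ("governance_review", "pilot", "deployed", "decommissioned"):
    stage = advance(stage, nxt)
assert stage == "decommissioned"
```

The design choice worth noticing is that "decommissioned" is a first-class stage with its own entry rules, which is what keeps retirement from being an afterthought.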
3.3 Why This Structure
The value of the matrix structure is not that it is theoretically elegant. It is that it matches the actual decision-making geography of an AMC.
Domain-level decisions — whether to deploy a specific AI tool in the clinical workflow, which research AI capabilities to invest in, how to address academic integrity in medical education — belong to the domain leadership. The CMO is accountable for clinical AI governance. The VP of Research is accountable for research AI governance. The dean’s office is accountable for educational AI governance. These are not decisions that should be made by a central IT committee or a governance function disconnected from operational reality. Domain ownership is what makes AI governance legible to the people who actually work in those domains.
Workstream-level decisions — the data governance framework, the institutional API architecture, the workforce development curriculum, the portfolio management process — belong to centralized institutional functions because the alternatives are worse. An institution where each domain maintains its own data governance policy, its own API infrastructure, and its own workforce development program will produce four incompatible frameworks, four redundant technical stacks, and four sets of training content that conflict with each other. The workstreams are shared services not because central control is desirable in principle but because fragmentation in these specific areas is costly in practice.
The AISC is what makes the matrix function as a governance program rather than a conceptual diagram. Without the AISC — a body with actual authority over AI deployment decisions, portfolio visibility, and the power to terminate projects that fail governance review — the domains will optimize for their own priorities at the expense of institutional coherence, and the workstreams will provide advice that no one is required to follow. The AISC’s role in portfolio management is described in detail in Chapter 18. The point here is structural: the domains and workstreams need the AISC the way a matrix organization needs its executive leadership. Remove it and you have a framework diagram without a governance program.
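One way to see why the AISC needs portfolio visibility is to sketch the portfolio as data. The project names, workstream tags, and sign-off scheme below are hypothetical; the sketch shows only that when every project is tagged by domain and by workstream sign-off, the committee can mechanically surface projects that have outrun their governance review.

```python
# Hypothetical sketch of an AISC portfolio view. Project names and the
# sign-off scheme are illustrative assumptions.
WORKSTREAMS = ("data", "infrastructure", "ethics", "workforce", "pm")


def missing_signoffs(project: dict) -> list:
    """Return the workstreams that have not yet signed off on a project."""
    return [w for w in WORKSTREAMS if w not in project.get("signoffs", set())]


portfolio = [
    {"name": "ambient-scribe", "domain": "clinical",
     "signoffs": {"data", "infrastructure", "ethics", "workforce", "pm"}},
    {"name": "grant-drafting-assistant", "domain": "research",
     "signoffs": {"data", "infrastructure"}},
]

# Projects with any missing sign-off are candidates for the AISC to hold
# back from deployment, whatever their domain priority.
flagged = {p["name"]: missing_signoffs(p) for p in portfolio if missing_signoffs(p)}
assert flagged == {"grant-drafting-assistant": ["ethics", "workforce", "pm"]}
```

Without a body empowered to act on that view, the flag is advice; with the AISC, it is a deployment gate.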
3.4 How to Use This Framework
This framework is a starting point, not a finished organizational design. Most AMCs will need to adapt it to their existing governance structures, their current AI deployment footprint, and their specific regulatory environment. An institution that already has a strong clinical informatics governance program may find that the clinical domain workstream structure maps naturally onto existing committees and roles. An institution that is starting from scratch may find the workstream structure useful primarily as a checklist of the functions it needs to build.
The chapters that follow describe each workstream in detail, grounded in current evidence and the practices of peer institutions that have built working programs. The domain chapters describe the AI use cases, governance considerations, and regulatory requirements specific to each domain. Neither set of chapters is meant to be read cover to cover before acting. The right starting point depends on where your institution is and what problem is most pressing.
If there is one organizing principle I would ask you to take from this framework, it is this: the question of what AI to deploy is secondary to the question of how to govern what you deploy. The deployment pipeline will fill itself. The governance program will not build itself. The work described in this book is the governance program.