Toward a Code of Conduct for Artificial Intelligence Used in Health, Health Care, and Biomedical Science
The Artificial Intelligence Code of Conduct (AICC) project is a pivotal initiative of the National Academy of Medicine (NAM), aimed at providing a guiding framework to ensure that AI algorithms and their application in health, health care, and biomedical science perform accurately, safely, reliably, and ethically in the service of better health for all. Stewarded by the NAM Leadership Consortium, the project will yield a pioneering harmonized AI Code of Conduct framework to serve as a common starting point for follow-on testing, validation, monitoring, and continuous improvement. This project represents a unique opportunity for national leaders across disciplines to work together to advance trustworthy artificial intelligence in health, health care, and biomedical science.
“People are scared of dying, they’re scared of losing their mom, they’re scared of not being able to parent and walk their child down the aisle. How can we start using the power of these tools, not through a lens of fear and reluctance, but to create a culture change from ‘doctor knows best’ or ‘patient knows best’ to ‘person powered by AI knows best’?”
– Grace Cordovano, Chief Executive Officer, Enlightening Results
“The Universal Declaration of Human Rights from 1948 includes the right to enjoy scientific advancement and its benefits. This has been a dormant right – we have failed to operationalize it and use it to promote certain policy approaches, and that is a great shame.”
– Vardit Ravitsky, President, The Hastings Center & Professor, Bioethics Program, University of Montreal
“Historically in times of technology advancements, health care disparity gaps have widened. AI runs the same risk, but it has a much greater opportunity to avoid further exacerbating the disparities among populations. We have a chance to introduce culturally competent care and to understand the determinants that affect the outcomes.”
– Selwyn Vickers, President & Chief Executive Officer, Memorial Sloan Kettering Cancer Center
Project Programming 2024 – 2025
All meeting minutes will be posted on NAM’s website.
- Year 2 Project Programming
- Spring 2024 | Steering Committee Virtual Meeting
- Summer 2024 | Steering Committee Virtual Meeting
- Fall 2024 | Steering Committee Virtual Meeting
- Year 3 Project Programming
- Summer/Fall 2025 | Final In-Person Steering Committee Meeting
In the News
- HealthITAnalytics.com | January 16, 2023
- PharmExec.com | December 22, 2023
- Forbes.com | October 29, 2023
- JAMA Network | October 4, 2023
- Psychiatrictimes.com | August 15, 2023
- Vanderbilt University | June 27, 2023
- Dermatologytimes.com | June 26, 2023
- Washington University School of Medicine in St. Louis | June 23, 2023
- Healthitanalytics.com | June 23, 2023
- Medicaleconomics.com | June 20, 2023
- Healthcare Innovation | June 20, 2023
The AICC Steering Committee’s primary responsibility is providing NAM staff with strategic guidance so that project activities and deliverables achieve their intended aims. Steering Committee members provide thought leadership on issues such as governance, policy development, environmental awareness, risk analysis, and adoption of the Code throughout the industry.
- Grace Cordovano, Enlightening Results
- Andrew Bindman, Kaiser Permanente
- Jodi Daniel, Crowell & Moring
- Wyatt Decker, UnitedHealth Group
- Peter Embí, Vanderbilt University Medical Center
- Gianrico Farrugia, Mayo Clinic
- Kadija Ferryman, Johns Hopkins University
- Sanjay Gupta, Emory University
- Eric Horvitz, Microsoft
- Roy Jakobs, Royal Philips
- Kevin Johnson, University of Pennsylvania
- Kedar Mate, Institute for Healthcare Improvement
- Deven McGraw, Invitae
- Bakul Patel, Google
- Philip R.O. Payne, Washington University
- Vardit Ravitsky, The Hastings Center
- Suchi Saria, Johns Hopkins University | Bayesian Health
- Eric Topol, Scripps Research Translational Institute
- Selwyn M. Vickers, Memorial Sloan Kettering Cancer Center
- Peter Lee, Microsoft Research*
- Kenneth D. Mandl, Harvard Medical School*
*(Digital Health Action Collaborative Co-chairs)
Frequently Asked Questions
What is the vision of AICC?
The AICC vision is to align and catalyze collective action to realize the potential of AI to revolutionize health care delivery, generate groundbreaking advances in health research, and contribute to robust health for all, adhering to the highest standards of ethics, equity, privacy, security, and accountability.
What are the key activities to achieve the project vision?
The AICC activities are to: 1) harmonize the many sets of AI principles/frameworks/blueprints for health care and biomedical science and identify and fill gaps to create a best-practice AI Code of Conduct with “guideline interoperability”; 2) align the field in advancing broad adoption and embedding of the harmonized AI Code of Conduct; 3) identify the roles and responsibilities of each stakeholder at each stage of the AI lifecycle; 4) describe the national architecture needed to support responsible health care AI; and 5) identify ways to increase the speed of learning about how to govern health care AI in service of a learning health system.
What does it mean to “align the field” and/or “embed the Code of Conduct”?
Aligning the field means bringing together a broad stakeholder group to ensure all perspectives are understood throughout the process of developing the Code of Conduct in order to: 1) assure that, to the extent possible, the Code reflects the needs and priorities of all parties; and 2) earn stakeholder support for and commitment to the Code, with the ultimate goal of having the Code woven throughout the fabric of health, health care, and biomedical science.
What is included in the “systems view” of AI?
The systems view of AI in health care and biomedical science considers the phases of the AI lifecycle (e.g., data acquisition, validation, and representativeness; data modeling and testing; systems procurement and implementation; post-implementation vigilance), the stakeholders’ roles in each phase, and the ecosystem needed to support trustworthy AI for health, health care, and biomedical science.
Does the project include both predictive AI and Large Language Model (LLM) AI?
This project will address both predictive AI (e.g., models that identify patients at risk of developing certain conditions, recommend treatment plans, and predict outcomes) and LLM AI (e.g., ChatGPT, Bard, and OPT-IML), as both have significant implications for health, health care, and biomedical science and are increasingly integrated in practice.
What are the outputs of the project?
The AICC outputs are: 1) a harmonized and broadly supported Code of Conduct; 2) a comprehensive landscape assessment that includes a systematic review of the literature, a review of the guidelines/frameworks/blueprints from federal agencies, and a review of the guidelines issued by medical specialty societies; 3) a description of the roles and responsibilities of each stakeholder at each stage of the AI lifecycle; 4) a description of the national architecture needed to support responsible health care AI; and 5) identification of priority actions going forward. The work products that contain the above include: 1) summaries from the AICC Steering Committee meetings; 2) a NAM Commentary paper outlining the landscape assessment and key components of the Code; and 3) a final NAM Special Publication containing a proposed Code of Conduct framework to be deployed for testing, validation, learning, and improvement.
How long will the project last?
The current AICC project will last three years, concluding in December 2025. We anticipate that additional projects may grow out of this work.
Is this for the U.S. only?
The work will draw primarily from the U.S. experience but will be informed by international efforts to ensure the inclusion of best practices and to support global AI guideline harmonization efforts for health, health care, and biomedical science. The stakeholder groups involved in the AICC project and in developing the Code include patients and families; providers; privacy, security, and ethics experts; equity experts; health systems; tech companies; government agencies; biomedical scientists and researchers; health plans; pharma and health care product manufacturers; professional societies; medical societies; and health care AI collaborations. In addition, the NAM will work with these stakeholder groups to embed the Code into their own sets of guidelines and principles and to develop their specific implementation guides. The project will be guided by a Steering Committee of national thought leaders representing all key stakeholders.
What is the role of the AICC Steering Committee?
The AICC Steering Committee provides strategic guidance for the project as a whole and overarching leadership for the development of a Code of Conduct fully informed by stakeholders, ensuring that the process and its outcomes warrant and achieve broad support for and implementation of the Code. The project team identified candidates for Steering Committee membership to ensure broad stakeholder engagement; the committee includes individuals with expertise in ethics, patient advocacy, communications, health systems, technology, and research. The criteria for selection also included: 1) nationally recognized thought leadership; 2) capacity to influence the adoption and embedding of the AI Code of Conduct throughout the industry; and 3) ability to provide thought leadership and strategic guidance on issues such as governance, policy development, environmental awareness, and risk analysis.
How does the AICC align efforts and synergistically reinforce other initiatives in the field?
A foundational principle in the development of the AICC project was the importance of honoring and building upon the work that has already been done to promote trustworthy AI in health, health care, and biomedical science. To that end, an early activity in the project plan is a comprehensive landscape assessment that includes a systematic review of the literature, a review of the guidelines/frameworks/blueprints from federal agencies, and a review of the guidelines issued by medical specialty societies. This environmental scan is serving as the foundation upon which the Code is being developed. However, the AICC will not provide specific implementation guidance on topics already covered by federal agencies or other coalitions, such as those presented in the NIST Risk Management Framework or the Blueprint for Trustworthy AI in Healthcare produced by the Coalition for Health AI (CHAI). Throughout the project, the NAM effort will inform the efforts of CHAI, which is providing robust best-practice technical guidance, including assurance labs and implementation guides, to enable clinical systems to apply the Code of Conduct. Similarly, the efforts of CHAI and other groups addressing responsible AI will inform and clarify areas that will need to be addressed in the NAM Code of Conduct. The work and final deliverables of these projects are mutually reinforcing and coordinated to ensure broad adoption and active implementation of the AICC in the field.
For more information on the project please email NAM_AICC@nas.edu.