Artificial Intelligence and Health Care: Government Action and Best Practices

December 18, 2023

On October 30, the Biden Administration issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In turn, on November 15, a bipartisan group of senators introduced the Artificial Intelligence Research, Innovation, and Accountability Act (AIRIA). AIRIA aims to establish a regulatory framework that will bolster innovation, transparency, accountability, and security in the development of critical and high-impact applications of AI. Although the executive order does not carry the same weight as congressional legislation and AIRIA has not been signed into law, these actions signal the federal government’s intent to establish a regulatory framework that will have a sweeping effect on how AI-enabled technology is developed and used in the future.

Executive order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

The executive order tasks federal agencies, including the U.S. Department of Health and Human Services (HHS), with developing policies related to the development and use of AI. The executive order builds upon the Administration’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. It instructs federal agencies to develop systems that will foster the safe and secure development and use of AI-based technologies based on eight principles and priorities:

  1. Ensure AI is safe and secure.
  2. Promote responsible innovation, competition, and collaboration.
  3. Develop and utilize AI with a focus on supporting American workers.
  4. Advance equity and civil rights.
  5. Protect consumer interests.
  6. Protect Americans’ privacy and civil liberties.
  7. Manage risk from the federal government’s use of AI.
  8. Lead global, societal, economic, and technical progress.

The executive order broadly defines AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” Given such a broad definition, developers and users of even basic machine-based systems in the health care industry should pay careful attention to actions taken by federal agencies pursuant to the executive order.

With regard to HHS, the executive order instructs the agency to encourage continued innovation while balancing the risks of developing and using AI in research, administrative, and clinical settings. The executive order instructs HHS to form an AI task force to develop a strategic plan and consider rulemaking related to AI, develop a plan to promote equitable administration of public benefits, implement quality control strategies, establish a patient safety program, and evaluate a strategy for regulating AI-enabled technologies in the development of pharmaceutical drugs.

Artificial Intelligence Research, Innovation, and Accountability Act

AIRIA seeks to establish a framework that will bolster AI innovation while increasing transparency, accountability, and security in the development and operation of critical, high-impact AI applications.

A critical-impact AI system is one that is used, or intended to be used, to make decisions with a legal or similarly significant effect on the collection of biometric data by biometric identification systems without consent. The definition also includes systems that direct the management and operation of critical infrastructure (including the health care and public health sectors) in a manner that poses a significant risk to safety or Constitutional rights. A high-impact AI system is one specifically developed with the intended purpose of making decisions that have a legal or similarly significant effect on an individual’s access to housing, employment, credit, education, health care, or insurance in a manner that poses a significant risk to safety or Constitutional rights.

AIRIA is the most comprehensive federal AI legislation introduced to date. If passed, AIRIA would establish transparency and certification requirements for critical and high-impact AI systems. It would also create an enforcement mechanism for non-compliance, including monetary penalties, prohibitions, and civil actions.

As it relates to health care, AIRIA would require health care organizations or providers who deploy critical or high-impact AI systems to submit transparency reports to the Secretary of Commerce. It would also require health care organizations or providers who develop critical-impact AI systems to disclose information regarding the system’s data sources, structure, limitations, capabilities, risks, scope of intended use, guidelines for use, and prohibited uses to the Secretary of Commerce for certification.

Best Practices

Organizations and providers should continue to evaluate their risk tolerance with respect to the development, use, or implementation of any AI-enabled technology. That evaluation should include input from a variety of stakeholders, such as information technology, patient safety/quality, legal services, employee education, human resources, billing, and affected clinical professionals. Organizations should also consider whether the development or implementation of an AI-enabled system requires the approval of an organizational committee, and they should update all affected policies and procedures, consent forms, and privacy notices.

To stay ahead of potential regulatory action, organizations should establish a baseline Code of Ethics/AI Governance policy addressing the development, use, and implementation of AI within the organization. A Code of Ethics/AI Governance policy should address the organizational purpose of AI use, data privacy, and non-discrimination and bias. Furthermore, organizations and providers using AI-enabled technology should routinely audit such systems to ensure they are working as intended. Organizations and providers should also establish a reporting system to collect data about patient safety events, unexpected outcomes, care coordination issues, and unintended bias events (including employment issues and benefits administration) related to the use of AI-enabled technology.

This blog was drafted by Kristen Petry, an attorney in the Spencer Fane Houston office. For more information, visit