Artificial Intelligence (AI) has been used within health care since the 1950s, but recent technological advances have expanded how AI is used by health care stakeholders who are keen to deploy AI to reduce administrative burden, as well as improve care and patient experience. Similar to the slow – then rapid – adoption of telehealth, the legal and regulatory framework governing AI lags behind the technology.

The table below outlines key levers available to states as they evaluate how best to legislate and regulate the use of AI in health care.

State Privacy Laws
Purpose: State privacy laws that afford heightened protections to all or sensitive patient data and/or consumer health data (including data that may not be protected by HIPAA).
AI Implications: These laws already regulate how data may be collected and used by AI tools. Amended consumer data laws could include AI-specific requirements such as patient consent, disclosure, opt-outs (e.g., the proposed CCPA amendment), data-sharing transparency requirements, governance structures to address discrimination, etc.

Laws Governing Licensed or Registered Activities
Purpose: Laws that dictate the requirements to obtain and maintain health care entity licenses, registrations, or permits (e.g., hospitals, clinical laboratories, ASCs).
AI Implications: States could issue facility-based requirements for how AI may be used or overseen by licensed hospitals, clinical laboratories, ASCs, clinics, etc.

Laws Governing Professional Conduct
Purpose: State-specific laws that regulate how licensed medical professionals can practice and provide care.
AI Implications: State professional boards (medicine, nursing, etc.) are likely to issue guidelines regarding the use of AI in clinical practice, ranging from expansive (i.e., the board will not regulate the use of AI and will rely on the clinician's discretion) to more limiting (i.e., it is professional misconduct to rely solely on AI for clinical decisions). Unlike laws and regulations, statements and guidelines are easier to issue, retract, and amend as the AI landscape evolves, and are thus an attractive option for states interested in providing guardrails while retaining the flexibility to adapt over time.

Laws and Regulations Governing Insurers
Purpose: State-specific laws governing how insurers perform eligibility determinations and make coverage determinations.
AI Implications: State insurance departments may issue regulations, guidelines, or circulars regarding how AI can be used in eligibility, coverage, and utilization management determinations.
[1] For an overview of federal AI activity, see Manatt Health’s ATA blog post here.

In 2023, state legislative activity related to health AI was limited. The majority of proposed AI bills in 2023 (>30 bills introduced) focused on studying the implications of AI before regulating it, including bills that: outlined requirements for states' own use of AI; directed states to study the impact of AI, make recommendations on its use, or adopt guidelines or guidance regarding the development and deployment of AI tools; or established a task force to do one of the above. States also began to introduce language (~35 bills introduced) related to the use of AI in clinical decision making, anti-discrimination, payer requirements (e.g., when AI may be used for patient determinations), transparency requirements between those who make and those who use AI tools, and patient consent requirements.

State legislatures started 2024 with a flurry of AI-related activity. A few themes have emerged regarding states' focus areas:

Transparency. >50 bills introduced in 2024 included transparency requirements. "Transparency" describes the ability of an entity that uses an AI tool to understand how the tool was trained and where it can best be used. States are beginning to outline the information that developers of AI tools must provide to those who use them. States are also beginning to specify transparency requirements between those who "deploy" tools (e.g., physicians, hospitals, states, payors) and those whom the AI tool would impact (e.g., patients, members); that is, states are legislating when, where, and how disclosure must be provided and if and when consent to use AI must be obtained. We anticipate transparency will continue to be a primary area of focus for states throughout the remainder of the 2024 legislative session and into 2025.

Discrimination. >20 bills introduced in 2024 included anti-discrimination requirements. States are eager to ensure that AI tools generally – not just those specific to health care – do not discriminate against end users. There are several ways an AI tool may be discriminatory: for example, the data used to train a model could be biased in some way, potentially leading the model to generate discriminatory outputs. Alternatively, a tool could be deployed more heavily for certain patient populations than for others. Legislation to date has focused primarily on the first, requiring that the models themselves not discriminate or cause discriminatory outcomes.

Fewer bills were introduced regarding payers' use of AI and the use of AI in clinical decision making, although we expect greater activity in these areas in the future, focused on ensuring some clinician-level review of AI tool outputs. Notably, Utah passed SB149 – the first state AI law addressing health care professionals' disclosure of AI tool use to their end users (see summary here). We expect the introduction of more bills that mirror Utah's law.

In addition to state legislative activity, specialty societies and state medical societies are rapidly developing principles and guidelines for AI use. Some have made their guidelines public (e.g., the American Academy of Dermatology, the Washington State Medical Society), while the majority are still finalizing best practices and guidance for physicians.

This month, Manatt Health is publishing a State Health AI Tracker that will cover in more detail (with examples from states) the trends outlined above, among others. For more information on how to access the tracker, please reach out to

Authors: Randi Seigel, Annie Fox, Jacqueline Marks Smith | Manatt