Health AI Regulatory Framework – What’s Happening and What’s Expected?
Artificial Intelligence (AI) has been used within health care since the 1950s, but recent technological advances have expanded how AI is used by health care stakeholders, who are keen to deploy AI to reduce administrative burden and to improve care and the patient experience. As with the slow, then rapid, adoption of telehealth, the legal and regulatory framework governing AI lags behind the technology.
While the federal AI regulatory landscape remains nascent, federal agencies currently govern, or are expected soon to govern, AI use within health care. We expect a flurry of activity in the second half of 2024 and beyond as deadlines approach under President Biden's Executive Order aimed at promoting responsible AI innovation.
The following table outlines activity across existing federal agencies that have started issuing regulations and guidance affecting the use of AI within health care:
| Federal Agency | Impacted Stakeholder | Health AI Implications | Activities To-Date | Expected Activity |
| --- | --- | --- | --- | --- |
| ONC | Certified HIT (certain EMR vendors)[1] | Certified HIT must provide its software users (hospitals and physicians) with information regarding the development of AI clinical decision support tools[2]; it must also establish an intervention risk management program | HTI-1 Rule (December 2023) | HTI-2 Rule |
| OCR | Many providers and health plans | Prohibits covered entities, including providers, clinics, pharmacies, and health plans, from using AI to discriminate (e.g., racial bias in photo-based AI clinical diagnosis tools) | Proposed 1557 Rule (August 2022) | Rule expected to be finalized Spring 2024 |
| FDA | Software as a medical device (SaMD) and development of drugs and biological products | Issued an Action Plan describing the steps FDA will take to oversee AI/ML in SaMD; provides an overview of current and future uses of AI/ML in drug and biological development | Non-binding guidance on CDS software (September 2022); review/approval of AI/ML devices (ongoing) | Develop a fit-for-purpose regulatory framework |
| CMS | Medicare Advantage (MA) plans | Prohibits MA plans from relying solely on AI outputs to make coverage determinations or terminate a service | Regulatory guidance (April 2023; February 2024) | Issue regulations or guidance on Medicare-enrolled providers' use of AI and other MA use cases |
[2] Predictive DSI is “technology that supports decision-making based on algorithms or models that derive relationships from training data and then produces an output that results in prediction, classification, recommendation, evaluation, or analysis.”
In addition to federal regulatory activity, many states have far-reaching privacy laws, as well as laws governing licensed or registered activities and professional conduct, that already implicate how AI may be used within health care. States are also starting to pass AI-specific laws: Utah recently passed a law requiring health care providers to prominently disclose their use of generative AI when communicating with patients.
Lessons Learned from the Telehealth Regulatory and Policy Experience
Telehealth’s rapid rise in response to COVID-19 resulted in a flurry of federal and state-level activity to enable widespread use of telehealth. There are several lessons from the telehealth experience that policymakers and the industry should consider as the health AI policy landscape evolves:
- If the federal government does not act quickly, states will address these regulatory gaps in a vacuum; this will result in a patchwork of state AI laws with different requirements, making it challenging for health care stakeholders to operate across multiple states.
- The federal government needs to provide CMS the authority to develop a flexible (and realistic) regulatory approach. CMS’ pre-COVID Medicare telehealth rules were not flexible – most notably, they did not allow patients to receive telehealth from their homes. CMS is still extending temporary flexibilities through the Medicare Physician Fee Schedule to enable beneficiaries to receive telehealth from home. Congressional action is required to permanently change this and other impactful Medicare telehealth rules.
- States often look to the federal government to lay the groundwork for their oversight. At the beginning of COVID, nearly all states used CMS’ Medicare telehealth rules as the basis for their own telehealth policy design efforts. States then learned that their residents needed greater flexibility to ensure access to care via telehealth (e.g., more modalities, more types of care delivery).
- Similar to telehealth, we expect early "anchor state" activity related to health AI that will ripple through other states. A handful of states had relatively expansive telehealth policy prior to COVID-19 (e.g., California, Minnesota, Washington); during the PHE, many other states conformed and redesigned their telehealth policies to mirror these anchor states.
- State-by-state approaches to telehealth policymaking have resulted in a complex patchwork of rules that are challenging for providers, digital health companies, payors and patients to navigate. Because of the wide variation that remains across states in telehealth, there is value in establishing federal standards and guidelines for use of health IT and AI to reduce the cost and burden on providers, digital health companies, and payors operating in multiple states.
In the current state of flux, innovators should rely on the core "responsible AI" principles (safety, transparency, privacy, fairness, security, and equity) emerging from agency guidance and proposed state laws, and build and deploy their technology in alignment with those principles. Compliance with the NIST AI Risk Management Framework, which is being updated to address generative AI, will likely be encouraged, if not required. We expect state activity to address AI generally, along with some specific health care use cases, incorporating NIST concepts. At this point, it is unclear how many bills will pass, as several have been introduced in prior years without success. As with telehealth, it is possible that state professional boards (medicine, nursing, etc.) will generate the most state activity by issuing guidelines or statements regarding the use of AI in clinical practice while laws and regulations are still being promulgated.
Authors: Jacqueline Marks Smith, Randi Seigel, Annie Fox, Manatt