The Department of Health and Human Services (HHS) has recently finalized antidiscrimination regulations addressing patient care decision support tools, including clinical algorithms and artificial intelligence (AI) technologies. These regulations, implemented under Section 1557 of the Affordable Care Act (ACA), aim to safeguard against discriminatory practices in healthcare and to promote the responsible deployment of advanced technologies.
The final rule, published in the Federal Register on May 6, 2024, represents a step forward in integrating ethical considerations into the healthcare industry's embrace of innovative tools. By expanding the scope of regulated "patient care decision support tools" to encompass a wide range of automated and non-automated systems, the regulations underscore the government's commitment to ensuring that these technologies are used in ways that uphold principles of non-discrimination and fairness.
The finalized regulations employ the term "patient care decision support tools" to encompass a broad spectrum of mechanisms and technologies used to aid clinical decision-making. This definition includes complex computer algorithms, AI-powered predictive models, and more traditional tools such as flowcharts, formulas, and triage guidance.
Notably, the regulations explicitly cover "predictive decision support interventions" - technologies that use algorithms and training data to generate outputs that inform clinical assessments, diagnoses, and treatment recommendations. This expansive definition ensures that the regulations remain relevant as the healthcare sector embraces the transformative potential of AI and other advanced analytical capabilities.
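To see how broad this definition is, consider a simple, non-automated formula. The sketch below implements a purely hypothetical triage score in Python; the vital-sign thresholds and weights are invented for illustration. Even a rule this simple, with no AI involved, would fall within the rule's definition of a patient care decision support tool.

```python
# Purely hypothetical triage scoring formula -- the weights and
# thresholds below are invented for illustration only. Even a simple
# non-automated rule like this falls within the rule's broad
# definition of a patient care decision support tool.

def triage_score(heart_rate: int, systolic_bp: int, resp_rate: int) -> int:
    """Return a 0-6 urgency score from three vital signs."""
    score = 0
    score += 2 if heart_rate > 120 else (1 if heart_rate > 100 else 0)
    score += 2 if systolic_bp < 90 else (1 if systolic_bp < 100 else 0)
    score += 2 if resp_rate > 24 else (1 if resp_rate > 20 else 0)
    return score

print(triage_score(heart_rate=112, systolic_bp=95, resp_rate=22))  # prints 3
```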
Related: Artificial Intelligence in healthcare
The finalized regulations place clear obligations on covered entities - healthcare providers, insurers, and other organizations receiving federal financial assistance - to proactively address the potential for discrimination within their patient care decision support tools.
Specifically, the regulations require covered entities to:
- Make reasonable efforts to identify patient care decision support tools that use input variables measuring race, color, national origin, sex, age, or disability; and
- Make reasonable efforts to mitigate the risk of discrimination resulting from the use of any tool so identified.
This two-pronged approach reflects HHS's recognition that simply prohibiting discrimination is insufficient. Covered entities must also exercise due diligence in understanding the potential biases and limitations inherent in their tools and take appropriate measures to address them.
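As a concrete illustration of the "identify" prong, a covered entity could audit each tool's input variables against the characteristics protected under Section 1557. The Python sketch below shows one minimal way to do this; the tool inventory, variable names, and audit logic are hypothetical assumptions for illustration, not part of the rule.

```python
# Hypothetical audit sketch: flag decision support tools whose input
# variables measure a characteristic protected under Section 1557.
# The tool names and variable lists are illustrative assumptions.

PROTECTED_ATTRIBUTES = {"race", "color", "national_origin", "sex", "age", "disability"}

# Example inventory mapping each tool to the input variables it uses.
tool_inventory = {
    "sepsis_risk_model": ["heart_rate", "wbc_count", "age", "lactate"],
    "triage_flowchart": ["chief_complaint", "vital_signs"],
    "readmission_predictor": ["prior_admissions", "race", "comorbidity_index"],
}

def flag_tools(inventory: dict[str, list[str]]) -> dict[str, list[str]]:
    """Return the tools whose inputs include a protected characteristic."""
    return {
        tool: hits
        for tool, variables in inventory.items()
        if (hits := [v for v in variables if v in PROTECTED_ATTRIBUTES])
    }

for tool, hits in flag_tools(tool_inventory).items():
    print(f"{tool}: review required (protected inputs: {', '.join(hits)})")
```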
The Office for Civil Rights (OCR) within HHS will be responsible for enforcing compliance with the new regulations on a case-by-case basis. In evaluating a covered entity's efforts, the OCR will consider factors such as the entity's size, resources, and the complexity of the tools in use.
Notably, the regulations do not prescribe specific mitigation measures, acknowledging that the appropriate steps may vary depending on the nature and context of the patient care decision support tools. However, HHS encourages covered entities to establish written policies and procedures governing the use of these tools, implement governance measures, monitor potential impacts, and provide staff training on proper utilization.
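For instance, "monitoring potential impacts" could include periodically checking whether a tool recommends interventions at markedly different rates across demographic groups. The sketch below computes a simple impact ratio over hypothetical records; the sample data and the 0.8 review threshold (borrowed from the familiar four-fifths rule of thumb in disparate-impact analysis) are illustrative assumptions, not requirements of the rule.

```python
# Hypothetical monitoring sketch: compare how often a tool recommends
# an intervention across demographic groups. The sample records and
# the 0.8 review threshold are illustrative assumptions.
from collections import defaultdict

records = [  # (demographic_group, tool_recommended_intervention)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def recommendation_rates(rows):
    """Return the fraction of positive recommendations per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, recommended in rows:
        totals[group] += 1
        positives[group] += recommended
    return {group: positives[group] / totals[group] for group in totals}

rates = recommendation_rates(records)
impact_ratio = min(rates.values()) / max(rates.values())
print(f"recommendation rates: {rates}")
if impact_ratio < 0.8:  # four-fifths rule of thumb, used here as an assumption
    print(f"impact ratio {impact_ratio:.2f} below 0.8 -- flag for human review")
```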
The regulations place the responsibility for addressing discriminatory outcomes squarely on covered entities rather than on the developers of the tools, while still acknowledging the shared accountability between healthcare organizations and technology providers in ensuring the ethical and equitable deployment of advanced analytics in patient care.
Read more: Using AI in patient data analysis
With an approximately one-year compliance timeline, covered entities must act swiftly to ensure they are prepared to meet the new regulatory requirements. Some considerations for healthcare organizations include:
- Taking inventory of the patient care decision support tools currently in use and identifying any that rely on protected characteristics as input variables;
- Establishing written policies, procedures, and governance structures for adopting and using these tools;
- Implementing ongoing monitoring to detect potentially discriminatory impacts; and
- Training staff on the proper use and limitations of each tool.
Read also: Personalized patient education, HIPAA, and AI
On February 20, 2024, U.S. House Speaker Mike Johnson and Democratic Leader Hakeem Jeffries jointly announced the creation of a bipartisan Task Force on Artificial Intelligence (AI). The task force is a strategic initiative to position America as a leader in AI innovation while addressing the complexities and potential threats posed by this transformative technology.
The task force was created against the backdrop of the nuanced challenges and opportunities highlighted by experts such as James Manyika. By bringing together regulatory authorities, policymakers, and industry experts, it aims to explore how AI can drive groundbreaking advances in healthcare, such as improving diagnostic accuracy and treatment efficiency, while also addressing concerns like privacy, bias, and the responsible use of the technology.
See more: U.S. House launches bipartisan AI task force
Yes, HIPAA (Health Insurance Portability and Accountability Act) applies to patient care decision support tools, including AI, when they involve the use and disclosure of protected health information. Compliance with HIPAA regulations is necessary to ensure patient privacy and data security.
In most cases, patient consent is needed to use patient care decision support tools, including AI, especially when the tools involve the collection, use, or disclosure of personal health information. Patient consent ensures transparency and empowers individuals to make informed decisions about their healthcare data.
Patient care decision support tools, including AI, can be developed using advanced machine learning algorithms, natural language processing, and data analytics. These tools often draw on electronic health records, medical literature, and real-time patient data to provide personalized, evidence-based healthcare recommendations.
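As a rough illustration of that workflow, the sketch below trains a toy readmission-risk model on a few EHR-style features using scikit-learn. The features, labels, and library choice are assumptions made for demonstration; production tools involve far more data, validation, and clinical oversight.

```python
# Hypothetical sketch of a predictive decision support model trained on
# EHR-style tabular features. The features and labels are synthetic;
# real tools require far more data, validation, and clinical oversight.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, systolic_bp, hba1c, prior_admissions (synthetic values)
X = np.array([
    [54, 130, 6.1, 0],
    [67, 150, 8.2, 2],
    [45, 118, 5.6, 0],
    [72, 160, 9.0, 3],
    [60, 140, 7.4, 1],
    [38, 115, 5.2, 0],
])
y = np.array([0, 1, 0, 1, 1, 0])  # 1 = readmitted within 30 days

model = LogisticRegression().fit(X, y)

# The model's output informs -- but does not replace -- clinical judgment.
new_patient = np.array([[65, 145, 7.9, 2]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Predicted 30-day readmission risk: {risk:.0%}")
```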
Learn more: HIPAA Compliant Email: The Definitive Guide