HHS finalizes regulations on patient care decision tools, including AI
Farah Amod May 14, 2024
The Department of Health and Human Services (HHS) has recently finalized antidiscrimination regulations addressing patient care decision support tools, including clinical algorithms and artificial intelligence (AI) technologies. These regulations, implemented under Section 1557 of the Affordable Care Act (ACA), aim to safeguard against discriminatory practices in healthcare and to promote the responsible deployment of advanced technologies.
The final rule, published in the Federal Register on May 6, 2024, represents a step forward in integrating ethical considerations into the healthcare industry's embrace of innovative tools. By expanding the scope of regulated "patient care decision support tools" to encompass a wide range of automated and non-automated systems, the regulations underscore the government's commitment to ensuring that these technologies uphold principles of non-discrimination and fairness.
Understanding the scope of patient care decision support tools
The finalized regulations employ the term "patient care decision support tools" to encompass a broad spectrum of mechanisms and technologies used to aid clinical decision-making. This definition includes complex computer algorithms, AI-powered predictive models, and more traditional tools such as flowcharts, formulas, and triage guidance.
Notably, the regulations explicitly cover "predictive decision support interventions," meaning technologies that use algorithms and training data to generate outputs that inform clinical assessments, diagnoses, and treatment recommendations. This expansive definition ensures that the regulations remain relevant as the healthcare sector embraces the transformative potential of AI and other advanced analytical capabilities.
Related: Artificial Intelligence in healthcare
Covered entities' obligations
The finalized regulations place clear obligations on covered entities (healthcare providers, insurers, and other organizations receiving federal financial assistance) to proactively address the potential for discrimination within their patient care decision support tools.
Specifically, the regulations require covered entities to:
- Identification of risk: Make reasonable efforts to identify the patient care decision support tools used in their health programs and activities that employ input variables or factors related to race, color, national origin, sex, age, or disability (a minimal sketch of this screening step follows the list).
- Mitigation of risk: For each identified tool, take reasonable steps to mitigate the risk of discrimination resulting from its use in the entity's health programs or activities.
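The identification step lends itself to a simple illustration. Below is a minimal, hypothetical Python sketch of screening an inventory of tools for inputs tied to the listed characteristics; the tool names, input variables, and flagging logic are assumptions made for illustration, not anything the rule prescribes.

```python
# Hypothetical sketch: screening decision support tools for input variables
# tied to the characteristics named in Section 1557. Tool names, attributes,
# and flagging logic are illustrative assumptions, not part of the rule.
from dataclasses import dataclass

# The input factors the final rule directs covered entities to look for
PROTECTED_FACTORS = {"race", "color", "national_origin", "sex", "age", "disability"}

@dataclass
class DecisionSupportTool:
    name: str
    input_variables: set[str]

    def protected_inputs(self) -> set[str]:
        """Return the tool's input variables that match a protected factor."""
        return self.input_variables & PROTECTED_FACTORS

def identify_at_risk_tools(tools: list[DecisionSupportTool]) -> list[DecisionSupportTool]:
    """Step 1 (identification): flag tools that use any protected factor."""
    return [t for t in tools if t.protected_inputs()]

# Example inventory: a risk score using age and sex is flagged; a triage
# flowchart with no protected inputs is not.
inventory = [
    DecisionSupportTool("sepsis_risk_score", {"age", "sex", "lactate", "heart_rate"}),
    DecisionSupportTool("triage_flowchart", {"chief_complaint", "vital_signs"}),
]
for tool in identify_at_risk_tools(inventory):
    print(tool.name, "uses protected factors:", sorted(tool.protected_inputs()))
```

Flagged tools would then move to the mitigation step, with documented review, monitoring, and training tailored to each tool.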
This two-pronged approach reflects HHS's recognition that simply prohibiting discrimination is insufficient. Covered entities must also exercise due diligence in understanding the potential biases and limitations inherent in their tools and take appropriate measures to address them.
Enforcement and compliance considerations
The Office for Civil Rights (OCR) within HHS will be responsible for enforcing compliance with the new regulations on a case-by-case basis. In evaluating a covered entity's efforts, the OCR will consider factors such as the entity's size, resources, and the complexity of the tools in use.
Notably, the regulations do not prescribe specific mitigation measures, acknowledging that the appropriate steps may vary depending on the nature and context of the patient care decision support tools. However, HHS encourages covered entities to establish written policies and procedures governing the use of these tools, implement governance measures, monitor potential impacts, and provide staff training on proper utilization.
The regulations stress that the responsibility for addressing discriminatory outcomes rests squarely on covered entities rather than solely on the developers of the tools. This approach recognizes the shared accountability between healthcare organizations and technology providers in ensuring the ethical and equitable deployment of advanced analytics in patient care.
Read more: Using AI in patient data analysis
Preparing for compliance
With an approximately one-year compliance timeline, covered entities must act swiftly to ensure they are prepared to meet the new regulatory requirements. Some considerations for healthcare organizations include:
- Detailed inventory: Conducting a thorough audit of all patient care decision support tools used across the organization, including both automated and non-automated systems.
- Risk assessment: Carefully evaluating each tool to identify potential sources of bias or discrimination, drawing on available information from developers, industry resources, and medical literature.
- Mitigation strategies: Developing and implementing policies, procedures, and governance frameworks to address identified risks, including monitoring mechanisms and staff training programs (a simple monitoring sketch follows this list).
- Stakeholder engagement: Fostering open communication and collaboration with patients, clinicians, and other stakeholders to understand their perspectives and incorporate feedback into the organization's approach to ethical AI deployment.
- Continuous improvement: Establishing processes for ongoing monitoring, evaluation, and refinement of patient care decision support tools to ensure they remain aligned with evolving best practices and regulatory requirements.
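One way to make the monitoring piece of these steps concrete is to compare how often a tool recommends an intervention across demographic groups. The sketch below is a minimal, hypothetical example; the record format, group labels, and 0.1 tolerance are illustrative assumptions, and a real monitoring program would be considerably more rigorous.

```python
# Hypothetical monitoring sketch: compare a tool's recommendation rates
# across demographic groups. Field names and the tolerance threshold are
# illustrative assumptions, not requirements from the rule.
from collections import defaultdict

def selection_rates(records: list[dict]) -> dict[str, float]:
    """Rate at which the tool recommended intervention, per group."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        flagged[r["group"]] += int(r["recommended"])
    return {g: flagged[g] / totals[g] for g in totals}

def disparity_alert(records: list[dict], tolerance: float = 0.1) -> bool:
    """Flag the tool for human review if group rates diverge beyond tolerance."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values()) > tolerance

# Toy audit log: each record is one patient the tool scored.
log = [
    {"group": "A", "recommended": True},
    {"group": "A", "recommended": True},
    {"group": "B", "recommended": False},
    {"group": "B", "recommended": True},
]
print(disparity_alert(log))  # True: 100% vs. 50% exceeds the 0.1 tolerance
```

An alert like this would not by itself establish discrimination; it simply routes the tool to the kind of documented review the regulations contemplate.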
Read also: Personalized patient education, HIPAA, and AI
In the news
On February 20, 2024, U.S. House Speaker Mike Johnson and Democratic Leader Hakeem Jeffries jointly announced the creation of a bipartisan Task Force on Artificial Intelligence (AI). The task force is a strategic initiative to position America as a leader in AI innovation while addressing the complexities and potential threats posed by this transformative technology.
The creation of the task force comes amid the nuanced challenges and opportunities highlighted by experts such as James Manyika. By bringing together regulatory authorities, policymakers, and industry experts, the task force aims to understand how AI can drive groundbreaking advancements in healthcare, such as improving diagnostic accuracy and treatment efficiency, while also addressing concerns like privacy, bias, and the responsible use of technology.
See more: U.S. House launches bipartisan AI task force
FAQs
Does HIPAA apply to patient care decision tools, including AI?
Yes, HIPAA (Health Insurance Portability and Accountability Act) applies to patient care decision tools, including AI, when they involve the use or disclosure of protected health information. Compliance with HIPAA regulations is necessary to ensure patient privacy and data security.
Do I need consent to use patient care decision tools, including AI?
In most cases, patient consent is needed to use patient care decision tools, including AI, especially if the tools involve the collection, use, or disclosure of personal health information. Patient consent ensures transparency and empowers individuals to make informed decisions about their healthcare data.
What can I use to develop patient care decision tools, including AI?
Patient care decision tools, including AI, can be developed using advanced machine learning algorithms, natural language processing, and data analytics. These tools often use electronic health records, medical literature, and real-time patient data to provide personalized and evidence-based healthcare recommendations.
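As a rough illustration of the kind of predictive model this answer describes, the sketch below trains a logistic regression on synthetic vital-sign data. The features, labels, and threshold are fabricated for illustration; a real tool would be built on curated clinical data with validation, bias testing, and governance review.

```python
# Minimal illustration of a predictive decision support model: a logistic
# regression on synthetic, fabricated vitals. Not a clinical tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic features: [heart_rate, temperature]; label 1 = escalate care
X = rng.normal(loc=[85.0, 37.2], scale=[15.0, 0.8], size=(200, 2))
y = ((X[:, 0] > 95) & (X[:, 1] > 37.8)).astype(int)

model = LogisticRegression().fit(X, y)
# Score a new patient; the probability informs, not replaces, clinical judgment.
print(model.predict_proba([[110.0, 38.5]])[0, 1])
```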
Learn more: HIPAA Compliant Email: The Definitive Guide