Paubox blog: HIPAA compliant email made easy

The ethical use of AI to create HIPAA email templates

Written by Kirsten Peremore | October 18, 2024

Artificial intelligence (AI) generated templates offer an appealing way to streamline healthcare communications without consuming healthcare workers' time. Their use, however, raises ethical questions about how AI systems are applied and the risks they pose to the content of email communications.

 

The ethics behind the use of AI 

Trust is characterized by the capability to reason and act on moral principles, something that cannot yet be programmed into AI models. AI cannot truly be trustworthy because it lacks the moral agency necessary for trust. A Philosophy & Technology study states, “AI cannot act by following reasons, it cannot explain the outcomes of its decisions by giving reasons, and it is not an appropriate target for moral critique or blame.”

This means that labeling any AI model as trustworthy is a misrepresentation. When AI models are deployed to make decisions that influence people's lives, like sharing sensitive diagnoses or guiding treatment, there’s a risk of bias and harm. The ethical challenge is making sure that the people developing AI take responsibility for the potential outcomes, which makes oversight necessary to prevent the misuse of AI.

Related: Machine learning in healthcare

 

Creating policies to ensure AI is ethically used to create HIPAA compliant email templates 

The use of AI systems like ChatGPT requires the understanding that human oversight remains necessary. Effective policies that account for the human factor, as well as how these models operate, are the only way to use them correctly.

Considerations for creating these policies include: 

Data input controls

  • No protected health information (PHI) should be input directly into AI systems like ChatGPT, as these tools do not meet the requirements for HIPAA compliant email. 
  • Limit AI inputs to nonsensitive, generic data to create generalized templates that staff can later fill with patient information on secure, off-model platforms. 
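The input controls above can be sketched as a pre-prompt screen that blocks anything PHI-like from ever reaching an AI model. This is a minimal illustration only; the regex patterns are hypothetical examples, and real PHI detection must cover all 18 HIPAA identifiers rather than rely on a handful of patterns:

```python
import re

# Illustrative patterns only -- not an exhaustive PHI detector.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn": re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE),
    "dob": re.compile(r"\bDOB[:#]?\s*\d{1,2}/\d{1,2}/\d{2,4}\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of PHI-like patterns found in a draft prompt."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

def safe_to_send(prompt: str) -> bool:
    """Allow the prompt to reach the AI model only if nothing PHI-like appears."""
    return not screen_prompt(prompt)
```

A generic request like "Draft an appointment reminder template" passes the screen, while a prompt containing a medical record number or date of birth is blocked so staff can rewrite it in generic terms.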

Restricted output handling

  • AI generated templates that involve PHI or medical data must never be sent to recipients automatically, without human review. 
  • Make sure that all AI generated templates are flagged for review by assigned staff whose role it is to verify the accuracy, appropriateness, and compliance of the message with HIPAA’s guidelines before dissemination. 

Access controls for AI models 

  • Limit access to AI systems to authorized personnel on a need-to-know basis, so that only staff with HIPAA training and a clear understanding of the AI’s limitations can use them. 
  • Use role-based access controls for employees interacting with AI models, and review access logs regularly to ensure no unauthorized access has occurred. 
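A role-based check of this kind can be sketched in a few lines. The roles and permissions below are hypothetical placeholders; a real deployment would pull them from the organization's identity provider rather than hard-code them:

```python
# Hypothetical roles and permissions, for illustration only.
ROLE_PERMISSIONS = {
    "template_author": {"use_ai_model"},
    "compliance_reviewer": {"use_ai_model", "review_output"},
    "clinical_staff": set(),  # no direct AI access
}

def is_authorized(role: str, action: str) -> bool:
    """Grant an action only if the user's role explicitly includes it.

    Unknown roles default to no permissions (deny by default).
    """
    return action in ROLE_PERMISSIONS.get(role, set())
```

The deny-by-default behavior matters: a role the system does not recognize gets no AI access rather than falling through to some implicit permission.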

Human oversight and final approval

  • AI generated emails should always be subject to human review, especially when dealing with healthcare communication. 
  • Create a workflow where AI drafts are routed to trained healthcare professionals who have the authority to approve or reject the message before it’s sent.

Use HIPAA compliant systems

  • Make use of HIPAA compliant email systems like Paubox to send all templates, ensuring compliance during transmission. 

Bias detection and mitigation

  • Conduct regular evaluations of AI outputs for biases that could disproportionately affect certain patient groups. 
  • Develop procedures for auditing AI generated content for bias, especially regarding race, gender, socioeconomic status, or health conditions. 

Ethical training for AI users

  • Staff interacting with AI models should undergo training that covers HIPAA, ethical AI use, and the limitations of the AI model in use. 
  • Offer mandatory training modules covering how AI can and cannot be used in HIPAA compliant communications.

The limitations of AI models to consider 

  1. AI models may not fully grasp the complexities of HIPAA, leading to noncompliant content or suggestions. 
  2. AI cannot account for the psychological sensitivities of patients, which can lead to hurtful or insensitive messaging. 
  3. AI often reflects biases in its training data, producing biased language in outputs that can affect the fairness and inclusivity of communication. 
  4. AI generated templates are not flexible enough to adapt to evolving conditions and changing legislative requirements. 
  5. The effectiveness of AI is heavily reliant on the quality of its training data, which is rarely up to date. 
  6. AI may generate content that lacks personalization, reducing the effectiveness of communication and failing to meet patients' expectations.
  7. AI lacks moral reasoning capabilities, which can result in ethically questionable content that does not align with patient values. 

FAQs

What are the psychological sensitivities only humans can consider? 

Psychological sensitivities only humans can consider include understanding emotions, empathy, and personal context. 

 

What are the legislative requirements placed upon healthcare emails? 

Healthcare emails are subject to HIPAA requirements for the protection of PHI, including the Privacy and Security Rules' safeguards for transmitting patient information.

 

What is machine learning?

A branch of AI that allows computers to learn from data and improve their performance on specific tasks without being explicitly programmed.