
ChatGPT account breaches raise privacy concerns in healthcare

Written by Dean Levitt | June 21, 2023

In a startling revelation, Group-IB reported the discovery of more than 101,100 compromised OpenAI ChatGPT account credentials traded on dark web marketplaces between June 2022 and May 2023. The credentials were harvested from infected devices by info-stealing malware, primarily the Raccoon info stealer, with Vidar and RedLine also implicated. The findings raise significant privacy and security concerns, particularly for the healthcare sector, where sensitive patient data is involved.


The big picture

The compromised ChatGPT accounts pose a serious threat to privacy and security. Employees often use ChatGPT to optimize proprietary code or draft confidential correspondence, so threat actors who obtain these credentials gain access to a wealth of sensitive information. The stakes are even higher in sectors like healthcare, where the protection of patient data is governed by stringent regulations such as the Health Insurance Portability and Accountability Act (HIPAA).


The HIPAA angle

HIPAA mandates the protection of patient health information, known as protected health information (PHI). Any breach of this information can lead to severe penalties. AI tools like ChatGPT pose a unique challenge in this context: although data sent to ChatGPT is encrypted in transit and at rest, encryption alone does not satisfy HIPAA, and entering PHI into ChatGPT may still constitute a violation.

Related: Safeguarding PHI in ChatGPT


The risks of using PHI with ChatGPT

ChatGPT, like many AI tools, learns from the data it is given, meaning any PHI entered into the system could potentially be used to improve the AI's responses. While OpenAI has policies to anonymize data and prevent it from being used to train its models, the risk of a breach remains. Furthermore, even if the data is later anonymized, entering PHI into the system may still violate HIPAA, because the disclosure to a third party occurs the moment the data is entered.

Moreover, storing PHI with third-party services like ChatGPT that have not signed a business associate agreement (BAA) could violate HIPAA. While generally secure, these services are an attractive target for hackers because of the proprietary information many companies enter into AI chats. One practical safeguard is to screen text for likely PHI before it ever leaves the organization, as sketched below.
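As a rough illustration of that kind of screening, the Python sketch below blocks prompts that match a few common identifier patterns unless a BAA covers the receiving service. Everything here is hypothetical: the patterns, the function names, and the send_to_chat_service placeholder are not part of any OpenAI or Paubox tooling, and a real screening tool would need far broader coverage.

```python
import re

# Hypothetical patterns for a few direct identifiers. A production
# screening tool would need to cover all eighteen HIPAA identifier
# categories, not just these examples.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}


def contains_probable_phi(text: str) -> bool:
    """Return True if the text matches any of the identifier patterns."""
    return any(pattern.search(text) for pattern in PHI_PATTERNS.values())


def send_to_chat_service(text: str) -> str:
    """Placeholder for the actual call to a chat service's API."""
    return f"[sent] {text[:40]}"


def submit_prompt(text: str, baa_in_place: bool = False) -> str:
    """Forward a prompt only if it looks PHI-free or a BAA covers the service."""
    if contains_probable_phi(text) and not baa_in_place:
        raise ValueError("Prompt appears to contain PHI and no BAA is in place.")
    return send_to_chat_service(text)


# Blocked: this prompt matches the MRN pattern and no BAA exists.
# submit_prompt("Summarize the chart for MRN: 84312907")
```

A guard like this is deliberately conservative: a false positive merely forces a human review, while a false negative could send PHI to a service with no BAA in place.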

Read more: A quick guide to using ChatGPT in a HIPAA compliant way


The industry response

In response to these concerns, some companies have taken proactive measures. For instance, Samsung has banned its employees from using ChatGPT due to fears of sensitive corporate information being fed into the AI platform. This move serves as a reminder of the potential risks associated with AI tools and the need for stringent data security measures.


What they're saying

"Employees enter classified correspondences or use the bot to optimize proprietary code. Given that ChatGPT's standard configuration retains all conversations, this could inadvertently offer a trove of sensitive intelligence to threat actors if they obtain account credentials," warns Dmitry Shestakov, head of threat intelligence at Group-IB.


Looking ahead

The increasing prevalence of AI tools necessitates robust privacy measures. In the healthcare sector, it is crucial to ensure these tools are used in a manner compliant with HIPAA regulations: secure data storage and transmission, de-identification of data, robust access control, and adherence to data-sharing agreements and patient consent requirements. De-identification, in particular, can start with automated redaction of direct identifiers, as the sketch below illustrates.
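The following is a minimal sketch of pattern-based redaction, assuming a Safe Harbor-style approach of stripping direct identifiers before text reaches any third-party tool. The rules shown are hypothetical and cover only a handful of HIPAA's eighteen identifier categories; they are a starting point, not a compliant de-identification pipeline.

```python
import re

# Hypothetical redaction rules for a few of HIPAA's eighteen identifier
# categories; real Safe Harbor de-identification requires far broader
# coverage plus review of free-text fields.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b\d{2}/\d{2}/\d{4}\b"), "[DATE]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"), "[PHONE]"),
]


def deidentify(text: str) -> str:
    """Replace each matched identifier with a neutral placeholder."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text


print(deidentify("DOB 04/12/1987, reachable at jane@example.com or 555-867-5309"))
# -> "DOB [DATE], reachable at [EMAIL] or [PHONE]"
```

Pattern matching alone misses identifiers with no rigid format, such as patient names in free text, which is one reason Safe Harbor de-identification demands more than regular expressions.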


The bottom line

The recent breaches of ChatGPT accounts are a wake-up call. In the healthcare sector, where the protection of patient data is paramount, adherence to HIPAA regulations is non-negotiable. The industry must implement robust data security measures that safeguard privacy while still harnessing the potential of AI tools.

Related: HIPAA Compliant Email: The Definitive Guide