As artificial intelligence (AI) continues to make its way into healthcare, the U.S. Department of Health and Human Services (HHS) is working on a plan to help guide its safe use.
At a healthcare conference in Las Vegas, HHS officials shared plans to create a framework for AI in healthcare that focuses on safety and privacy. Part of this effort includes hiring for new roles like a Chief Technology Officer, Chief Data Officer, and Chief AI Officer to help lead the way.
FDA Commissioner Dr. Robert Califf spoke about the need to rethink how healthcare works with AI. He pointed out that the FDA's current oversight mainly covers AI embedded in medical devices or cases where privacy issues arise, and argued that the agency needs to change to keep pace with the technology.
The HHS has taken a flexible approach, favoring guidance over strict rules. This lets the department work more closely with tech companies on building safe and useful AI. Melanie Fontes Rainer, acting director of HHS's Office for Civil Rights, explained that many compliance issues are handled through support rather than enforcement.
To promote transparency, the HHS now requires AI developers seeking federal certification to share more details about their algorithms.
At the conference, officials discussed the importance of industry input. The Coalition for Health AI, whose members include Microsoft and the Mayo Clinic, is pushing for private companies to take part in evaluating AI tools. Dr. Califf acknowledged that the FDA and U.S. health systems don't currently have the capacity to validate every advanced AI tool, underscoring the need for new approaches to AI validation.
The HHS’s plans signal a big step forward in ensuring AI’s safe use in healthcare. By working alongside other agencies and private companies, the department is tackling the challenges of AI with a focus on patient safety and ethics. As AI becomes more common in healthcare, effective regulation will be fundamental to harnessing its benefits without compromising safety.
What is artificial intelligence?
Artificial intelligence (AI) is the simulation of human intelligence in machines that are programmed to think and learn like humans.
Does HIPAA apply to the use of AI in healthcare?
Yes. HIPAA governs the protection of patients' medical records and personal health information, so organizations using AI technologies must ensure compliance with HIPAA regulations to safeguard patient privacy and data security.
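In practice, HIPAA compliance for AI workflows often starts with de-identifying data before it reaches a model. Below is a minimal Python sketch of that idea, in the spirit of HIPAA's Safe Harbor method; the record layout and field names are hypothetical, and a real pipeline would need to cover all 18 identifier categories the rule names.

```python
# Minimal sketch: strip direct identifiers from a patient record before
# it is sent to an AI tool. Field names and record structure are
# hypothetical; Safe Harbor names 18 identifier categories, and this
# example covers only a few of them for illustration.

PHI_FIELDS = {"name", "address", "phone", "email", "ssn", "mrn", "dob"}

def deidentify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {key: value for key, value in record.items() if key not in PHI_FIELDS}

patient = {
    "name": "Jane Doe",            # direct identifier: removed
    "dob": "1984-02-17",           # direct identifier: removed
    "mrn": "A123456",              # medical record number: removed
    "age": 41,                     # retained (Safe Harbor permits ages under 90)
    "diagnosis_codes": ["E11.9"],
    "lab_results": {"hba1c": 7.2},
}

safe_record = deidentify(patient)
print(safe_record)
# {'age': 41, 'diagnosis_codes': ['E11.9'], 'lab_results': {'hba1c': 7.2}}
```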
Do healthcare providers need informed consent from patients before using AI?
Typically, yes. Providers should obtain informed consent before using AI technologies for diagnosis, treatment, or other healthcare purposes, both to ensure transparency and to respect patients' autonomy in AI-driven care.
What technologies can be used to integrate AI into healthcare?
Healthcare professionals can draw on a range of technologies, including machine learning algorithms, natural language processing (NLP), computer vision, and predictive analytics.
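As a concrete illustration of the last category, predictive analytics, here is a minimal Python sketch that trains a logistic regression model to score 30-day readmission risk. The data is synthetic and the feature set is hypothetical; a real clinical model would require the kind of rigorous validation and oversight discussed above.

```python
# Minimal predictive-analytics sketch: logistic regression over synthetic
# data. Features and labels are fabricated for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features: [age, prior_admissions, length_of_stay_days]
X = rng.normal(loc=[65, 2, 5], scale=[10, 1.5, 3], size=(500, 3))
# Synthetic label: readmitted within 30 days, loosely correlated with features
y = (X @ np.array([0.02, 0.6, 0.1]) + rng.normal(0, 1, 500) > 3.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Score a new (hypothetical) patient: 72 years old, 3 prior admissions,
# 7-day stay. predict_proba returns [P(no readmit), P(readmit)].
risk = model.predict_proba([[72, 3, 7]])[0, 1]
print(f"Estimated readmission risk: {risk:.0%}")
```

A model like this only produces a risk score; how that score is validated, explained, and acted on is exactly the kind of question the regulatory frameworks described above are meant to address.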