
On January 23, 2025, President Trump issued Executive Order 14179 on Removing Barriers to American Leadership in Artificial Intelligence. This followed his revocation of the Biden Administration’s Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The actions mark a notable shift in federal oversight of the rapidly advancing technology and indicate the Trump Administration is likely to take a more hands-off approach to regulating AI.

Although the full implications of this change in posture are still unclear, they could be especially significant for the healthcare industry. In December, the Bipartisan Artificial Intelligence Task Force released a report recognizing AI’s potential to reduce administrative burdens in healthcare, accelerate drug development, and enhance clinical diagnosis. The Task Force also recommended developing guidance to encourage risk management in AI technologies, strengthen privacy protections, enhance security, and prevent disparate health outcomes. The Trump Administration’s preference for reducing AI regulations suggests these recommendations and related stakeholder concerns may go unaddressed.

With Executive Order 14110, the Biden Administration committed to overseeing AI development by setting guidelines for safety testing, addressing algorithmic bias, ensuring nondiscrimination, and safeguarding personal data. These principles laid the foundation for the Department of Health and Human Services’ (HHS’s) strategic AI plan, which sought to promote ethical AI practices while balancing innovation with public safety and privacy.

Though short on specifics, Trump’s Executive Order 14179 emphasizes positioning the U.S. as “the global leader in AI.” It also characterizes previously enacted AI policies as a “barrier” to American AI innovation and stresses the importance of developing “AI systems that are free from ideological bias or engineered social agendas.”

In addition, Trump’s executive order directs the Assistant to the President for Science and Technology (APST), the Special Advisor for AI and Crypto, and the Assistant to the President for National Security Affairs (APNSA), in coordination with various other executive departments, to develop an action plan to achieve the “policy of the United States to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The plan must be completed by July 2025.

Meanwhile, President Trump has touted a potential $500 billion AI business venture involving a partnership between OpenAI, Oracle, and SoftBank. Dubbed “Stargate,” the project will begin by building out data centers and the electricity-generation capacity needed to support further AI development. Oracle’s executive chairman, Larry Ellison, has said the project could enhance digital health records, improve disease treatment, and offer the potential to develop customized cancer vaccines. However, Elon Musk, a key advisor to the president and founder of the AI company xAI, has been a notable skeptic of the Stargate announcement.

Even as federal regulations are eased, American AI developers must still comply with state regulations where applicable (Colorado and Utah are among the states that have already enacted AI governance laws). The Trump Administration’s Executive Order signals that further changes are coming to the regulatory landscape for AI development and deployment. In accordance with the order, a notice posted in the Federal Register on February 6, 2025, “requests input from all interested parties on the Development of an Artificial Intelligence (AI) Action Plan,” with comments accepted through March 15.

Applied Policy will be closely monitoring for anticipated changes.