Impact on Healthcare – AI Executive Order 14110 on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence

Apr 30, 2024

Written by Victor Collins

Category: AI/Artificial Intelligence

On October 30, 2023, U.S. President Joe Biden signed Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.

The purpose of the Executive Order (EO) is to establish clear direction for U.S. policy on artificial intelligence. It lays the foundation for a framework for managing AI risks, directs federal action to regulate the use of health AI systems, and guides the development of tools to advance AI innovation across the healthcare sector.

The EO outlines eight key objectives and priorities for advancing and governing the use of AI that apply to healthcare:

  • Ensure safe and secure AI technology

  • Encourage responsible innovation, competition, and collaboration

  • Support American workers

  • Advance equity and civil rights

  • Protect American consumers, patients, passengers, and students

  • Protect privacy and civil liberties

  • Manage the federal government’s use of AI

  • Strengthen U.S. leadership abroad, promoting safeguards so that AI technology is developed and deployed responsibly

Protecting Patients

The EO directs the Secretary of Health and Human Services (HHS) to take several specific actions, such as establishing an AI Task Force within 90 days and developing a strategic plan for the responsible deployment of AI by the end of 2024. Because the strategic plan will likely take a year to complete, healthcare organizations and health technology companies that are developing, considering, or using AI-enabled technologies face a period of uncertainty.

However, HHS is directed to address some specific areas sooner. Most notably, the EO calls for the establishment of an AI Safety Program by October 2024. In partnership with voluntary, federally listed patient safety organizations (PSOs), and building on their previous work, the program would establish a common framework for identifying and capturing clinical errors resulting from AI deployed in healthcare settings. It would also develop specifications for a central tracking repository for associated incidents that cause harm to patients, caregivers, or other parties, analyze the resulting data, and generate evidence to develop informal guidelines for avoiding these harms, which would then be disseminated to appropriate stakeholders.
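The EO leaves the design of that tracking repository to HHS and its PSO partners, so no data model exists yet. Purely as a sketch of the kind of information such a repository might capture, an incident record could look something like the following; the schema and field names are hypothetical placeholders, not anything prescribed by the EO or the forthcoming AI Safety Program.

```python
# Hypothetical sketch only: the EO does not define a schema for the AI Safety
# Program's incident repository. Field names below are illustrative placeholders.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class AIHarmIncident:
    incident_id: str                  # identifier assigned by the reporting organization
    reported_on: date                 # date the incident was reported
    care_setting: str                 # e.g., "inpatient", "ambulatory", "telehealth"
    ai_system: str                    # name and version of the AI-enabled technology involved
    description: str                  # narrative of the clinical error and its context
    harmed_parties: list[str] = field(default_factory=list)        # patients, caregivers, or others
    severity: Optional[str] = None    # e.g., "no harm", "temporary harm", "permanent harm"
    contributing_factors: list[str] = field(default_factory=list)  # workflow, data quality, model drift, etc.
```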

HHS is directed to develop specific strategies on quality, non-discrimination, and drug development to accomplish the following:

  • Development, maintenance, and use of predictive and generative AI-enabled technologies in care delivery

  • Safety and real-world performance monitoring of AI-enabled technologies

  • Incorporation of equity principles, including protection against bias

  • Incorporation of safety, privacy, and security standards into the software development lifecycle

  • Development and availability of documentation to help users determine appropriate and safe uses of AI

  • Coordination with state, local, tribal, and territorial health and human services agencies to advance positive use cases and best practices for the use of AI

  • Identification of uses of AI to promote workplace efficiency and satisfaction

The EO also creates funding opportunities, including a requirement that the National Institutes of Health (NIH) highlight grants and cooperative agreement awards that promote innovation and competition. Furthermore, the broader federal actions called for in the EO on privacy (including the development of privacy-enhancing technologies), cybersecurity, and non-discrimination will also impact healthcare.

Addressing Equity and Fairness

The HHS Secretary is also tasked with forming an AI assurance policy to assess the performance of AI healthcare tools, aiding both pre-market and post-market oversight. The order further requires guidance for health and human services providers that receive federal funds, to ensure compliance with federal nondiscrimination and privacy laws by preventing AI-induced bias or discrimination and protecting sensitive information.
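The EO does not say how such assurance assessments would be performed. One plausible ingredient, consistent with the order's emphasis on nondiscrimination, is a post-market check that compares a deployed model's performance across demographic subgroups and flags large gaps. The sketch below illustrates that general idea; the record schema and threshold are assumptions, not anything specified by HHS.

```python
# Illustrative sketch only: HHS has not specified how AI assurance assessments will
# work. This shows one generic post-market fairness check: compare a model's accuracy
# across subgroups and flag groups that trail the best-performing one by a wide margin.
from collections import defaultdict


def subgroup_accuracy(records: list[dict]) -> dict[str, float]:
    """Each record is assumed to have 'group', 'prediction', and 'outcome' keys."""
    correct, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["group"]] += 1
        correct[r["group"]] += int(r["prediction"] == r["outcome"])
    return {g: correct[g] / total[g] for g in total}


def flag_disparities(accuracies: dict[str, float], threshold: float = 0.05) -> list[str]:
    """Flag subgroups whose accuracy trails the best subgroup by more than the threshold."""
    best = max(accuracies.values())
    return [g for g, acc in accuracies.items() if best - acc > threshold]
```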

The EO highlights the need to prevent AI systems from causing unlawful discrimination or negatively impacting individual opportunities. It particularly addresses AI’s role in the criminal justice system and in determining eligibility for government benefits and programs. Accordingly, federal agencies must adhere to specific guidance on using AI to avoid harmful outcomes.

Moreover, the order emphasizes the importance of training and technical assistance to equip users of AI in critical applications with the required expertise.

The order also proposes enhancing the visa process for AI experts immigrating to the U.S. This initiative aims to reinforce the U.S.’s leadership in AI and foster innovation. It includes making visa appointments more accessible, simplifying the application process, and potentially introducing a visa renewal program. These measures are targeted at those coming to the U.S. to study, work, or conduct research in AI.

Using Synthetic Data

The EO mandates that within 240 days, the Secretary of Commerce, in consultation with pertinent agencies, must submit a report to the Director of the Office of Management and Budget (OMB) and the Assistant to the President for National Security Affairs. This report should detail the current standards, tools, methods, and practices for marking content produced using generative AI tools by or for the federal government. Key focus areas include methods for authenticating and tracking the origin of content, and techniques for labeling and detecting synthetic content.

The report should propose protocols for testing software developed for these purposes. It must also cover the auditing and management of synthetic content. Furthermore, guidelines should be established for government agencies regarding the labeling and authentication of synthetic content they produce or disseminate.
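Those standards are still to be written, and agencies may ultimately adopt existing provenance specifications rather than invent new ones. Purely to illustrate the general pattern the report must address (labeling content as synthetic, attaching provenance information, and letting a recipient authenticate it), a minimal sketch might look like the following; the manifest fields and the use of a shared signing key are assumptions for illustration, not any federal standard.

```python
# Illustrative sketch only: federal standards for marking synthetic content are still
# being defined under the EO. This shows one generic pattern: attach a provenance
# manifest to generated content and let a recipient verify it has not been altered.
import hashlib
import hmac
import json

SIGNING_KEY = b"example-shared-secret"  # hypothetical key; a real system would use PKI, not a shared secret


def label_synthetic_content(text: str, generator: str) -> dict:
    """Create a provenance manifest for AI-generated text."""
    manifest = {
        "content_hash": hashlib.sha256(text.encode("utf-8")).hexdigest(),
        "generator": generator,   # the model or tool that produced the content
        "label": "synthetic",     # explicit disclosure that the content is AI-generated
    }
    payload = json.dumps(manifest, sort_keys=True).encode("utf-8")
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest


def verify_labeled_content(text: str, manifest: dict) -> bool:
    """Check that the text matches its manifest and that the manifest is authentic."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and hashlib.sha256(text.encode("utf-8")).hexdigest() == manifest["content_hash"]
    )
```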

Executive Order Limitations

Executive Orders allow Presidents and their Administrations to coordinate efforts across federal agencies, set overarching policy, and clearly state priorities on complex issues that cut across multiple sectors. EOs direct the actions of these agencies but can only require them to act within their existing authority and budgets. New regulations or legislation would be required to change any requirements for the private sector. For example, the EO describes how federal agencies must protect data privacy but does not provide rules to guide the private sector on data protection. In the healthcare sector, any changes in health data privacy would require HHS to publish guidance or Congress to pass legislation amending existing Health Insurance Portability and Accountability Act (HIPAA) protections, such as those related to consumer-directed health technology.

EOs are fragile in that they are not necessarily enduring. A new President with different priorities could revoke this EO – a constraint that is relevant because the implementation of the objectives in this EO will take time and may not be completed before January 2025, when a new President could be sworn into office. However, many of the issues and policies addressed in the EO have bipartisan support.

What’s Next?

We anticipate significant action by the federal agencies as they implement the EO. The key milestones will be the safety reporting program and regulatory updates, some of which are in progress. The most critical step ahead will be the HHS strategic plan that will set in motion the approach to regulating AI in healthcare. It will also identify the priorities and actions that HHS will take to promote AI innovation and minimize the risks.

Why it Matters

The bottom line is that a wave of new rules and regulations related to AI is coming in the months ahead – and these changes will have significant implications for healthcare organizations. It is clear from the EO that ensuring fairness and equity – and preventing bias – are top priorities.

The focus of any specific reporting requirements or measures that stem from this EO will certainly not be limited to whether healthcare delivery organizations have implemented AI. Instead, there will be far more scrutiny around which AI models a given hospital or health system has deployed and how they are using them. Being prepared and starting to plan now will make the journey ahead significantly easier.