On April 29, 2024, the Department of Health and Human Services (HHS) released a plan for Promoting Responsible Use of Artificial Intelligence in Automated and Algorithmic Systems by State, Local, Tribal, and Territorial Governments in the Administration of Public Benefits. The plan continues the rollout of actions mandated under the Biden Administration’s Executive Order 14110 on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Affected health insurance programs include Medicaid, the Children’s Health Insurance Program (CHIP), and state marketplace plans, which together covered 102,467,946 beneficiaries in 2023. Although the plan’s recommendations are not mandatory, they signal HHS’ views on a number of key issues regarding AI.
Rights and Safety Impacting Use Cases in Benefit Administration
The plan provides examples of uses of AI in benefits administration that are presumed to impact the rights or safety of beneficiaries. Not only will these use cases be subject to minimum risk management practices under the recently released OMB Memorandum M-24-10, but they are also likely to face further regulation as the Administration moves toward establishing a regulatory framework for AI. Among other use cases, the plan calls out the following as impacting beneficiary rights or safety:
- Health screening or risk assessments for referrals to additional services, such as physical therapy or mental health services, or for interventions,
- Determining eligibility for programs,
- Clinical diagnosis or determination of treatment, including decisions regarding medical devices,
- Live translation of interactions that inform agency actions (e.g., an eligibility interview), or translation of program materials without a human translator’s validation, and
- Triaging cases prior to assigning them to caseworkers.
Perhaps more importantly, HHS listed examples of use cases unlikely to impact rights or safety, almost all of which involve human oversight or clerical tasks that do not interact with beneficiaries. Examples include creating first drafts that humans later review; producing high-level summaries of large bodies of narrative information, such as case management files, for humans to explore in more detail; and deploying internal chatbots that guide workers to relevant sections of policy manuals.
Option to Opt Out of AI
The plan also provides insight into how HHS views the importance of giving beneficiaries the option to opt out of AI services. While HHS encourages agencies to offer beneficiaries an opt-out whenever possible, the agency acknowledges that this option is less essential for functions that do not engage the beneficiary, such as back-end processes, enabling functions (e.g., scanning forms), or fraud detection. Agencies are encouraged to provide human alternatives to AI, but HHS also believes that non-AI automated systems could be offered as an alternative when a human is not available, such as after normal business hours. Under the plan, the definition of AI does not include automated systems whose behavior is driven only by human-defined rules, or by “repeating an observed practice exactly as it was conducted.”
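To make that dividing line concrete, the sketch below (a hypothetical illustration, not drawn from the HHS plan) shows an eligibility check driven only by human-defined rules. Because every threshold is explicitly authored by a person rather than learned from data, a system like this would fall outside the plan’s definition of AI; the constants and the is_income_eligible function are assumptions for illustration only.

```python
# Hypothetical sketch: an eligibility check driven only by human-defined
# rules, which the HHS plan's definition of AI would NOT cover.
# All constants below are illustrative assumptions, not actual program rules.

FPL_BASE = 14_580            # illustrative poverty-level figure, 1-person household
FPL_PER_EXTRA_PERSON = 5_140 # illustrative increment per additional person

def federal_poverty_level(household_size: int) -> int:
    """Compute an illustrative poverty-level figure from fixed constants."""
    return FPL_BASE + FPL_PER_EXTRA_PERSON * (household_size - 1)

def is_income_eligible(annual_income: float, household_size: int,
                       fpl_multiplier: float = 1.38) -> bool:
    """Return True if income falls at or below a fixed share of the poverty level.

    Every input to this decision is an explicit, human-authored rule;
    nothing is inferred from historical data, so the system's behavior
    is fully predictable and auditable.
    """
    return annual_income <= fpl_multiplier * federal_poverty_level(household_size)

# Example: a 3-person household earning $32,000 per year.
print(is_income_eligible(32_000, 3))  # True under these illustrative thresholds
```

By contrast, a model that inferred eligibility from patterns in historical case files would count as AI under the plan, and because determining eligibility is listed as a rights-impacting use case, it would trigger the minimum risk management practices discussed above.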