
Medicare to Use AI for Treatment Approvals in Six States

A new federal pilot program will use artificial intelligence to approve or deny certain Medicare treatments in six states, raising concerns among lawmakers.

By David Chen

David Chen is a public policy correspondent for Wealtoro, focusing on healthcare economics, insurance regulation, and their impact on household finances across the United States.


The U.S. government is set to launch a pilot program that will use artificial intelligence to approve or deny certain Medicare treatments. The initiative, named Wasteful and Inappropriate Service Reduction (WISeR), will begin on January 1 and is scheduled to run through 2031. It will impact Medicare beneficiaries in Arizona, Ohio, Oklahoma, New Jersey, Texas, and Washington.

The program expands the use of prior authorization, a process requiring medical professionals to get approval from an insurer before providing specific services. While common in private insurance, this marks a significant step for traditional Medicare, raising concerns among lawmakers and healthcare experts about potential care denials and the transparency of AI-driven decisions.

Key Takeaways

  • A new federal pilot program, WISeR, will use an AI algorithm for Medicare prior authorization decisions starting January 1.
  • The program will affect patients in six states: Arizona, Ohio, Oklahoma, New Jersey, Texas, and Washington, and will run until 2031.
  • The initiative aims to reduce wasteful spending but has drawn criticism for potentially delaying or denying necessary medical care.
  • Lawmakers from both parties and healthcare policy experts have questioned the program's transparency and potential incentives for vendors to deny services.

A Controversial Expansion of Prior Authorization

The federal government's decision to implement the WISeR program has been met with surprise and criticism, particularly because it follows recent statements from the administration condemning similar practices in the private sector. Prior authorization requires doctors to seek pre-approval for certain tests, procedures, and medications, a practice that has long been a point of contention for patients and providers.

Mehmet Oz, the administrator for the Centers for Medicare & Medicaid Services (CMS), recently highlighted the problems with prior authorization in private insurance. He stated that the process can cause significant delays in care and “erodes public trust in the health care system.”

This apparent contradiction has been noted by policy experts and politicians. Vinay Rathi, a doctor and policy researcher at Ohio State University, described the administration's stance as sending mixed messages, simultaneously criticizing private insurers while adopting their cost-cutting methods. Representative Suzan DelBene, a Democrat from Washington, called the move “hugely concerning.”

What is Prior Authorization?

Prior authorization, also known as pre-authorization or pre-certification, is a cost-control process used by health insurance companies. It requires healthcare providers to obtain advance approval from an insurer before a specific service, medication, or procedure is delivered to the patient to qualify for payment coverage. Insurers argue it prevents unnecessary spending, while critics say it creates barriers to care.

Concerns are not limited to one side of the political aisle. Representative Greg Murphy, a Republican from North Carolina and a practicing urologist, criticized what he termed “delay-or-deny tactics” used by insurance companies. “Insurance companies have put it in their mantra that they will take patients’ money and then do their damnedest to deny giving it to the people who deliver care,” Murphy said.

How the WISeR Program Will Work

The WISeR pilot program will use an AI algorithm to make initial prior authorization decisions for a specific list of Medicare services. These include procedures like skin and tissue substitutes, electrical nerve stimulator implants, and knee arthroscopy. The government has identified these services as being vulnerable to “fraud, waste, and abuse.”

Officials have stated that the list of services requiring AI review may expand over time. However, certain categories of care will be exempt from the program. According to the federal announcement, these include inpatient-only services, emergency procedures, and any treatment that “would pose a substantial risk to patients if significantly delayed.”

A July poll from KFF, a health information nonprofit, found that nearly 75% of respondents considered prior authorization a “major” problem within the U.S. healthcare system, highlighting widespread public dissatisfaction with the practice.

A CMS spokesperson, Alexx Pons, said the initiative aims to protect both patients and taxpayer dollars. Pons said no Medicare request would be denied without review by a “qualified human clinician,” and that vendors operating the AI systems “are prohibited from compensation arrangements tied to denial rates.”

Despite these assurances, some experts remain skeptical. Jennifer Brackeen, senior director of government affairs for the Washington State Hospital Association, pointed out that the program includes “shared savings arrangements.” She explained that this structure means “vendors financially benefit when less care is delivered,” which could create a powerful incentive to deny medically necessary treatments.

Concerns Over AI Transparency and Oversight

The use of artificial intelligence in health insurance is not new, but its application within Medicare raises specific questions about oversight and accountability. While AI could theoretically speed up the cumbersome prior authorization process, critics worry about the lack of transparency in how these algorithms make decisions.

Dr. Rathi described the plan as “not fully fleshed out” and reliant on “messy and subjective” metrics. He also noted a potential conflict of interest, as the model depends on contractors to evaluate their own performance, which could lead to questionable results. “I’m not sure they know, even, how they’re going to figure out whether this is helping or hurting patients,” he said.

“CMS remains committed to ensuring that automated tools support, not replace, clinically sound decision-making.”
- Alexx Pons, CMS Spokesperson

The role of human oversight is a central point of debate. Insurers often claim that humans make the final call on coverage decisions, but some researchers are doubtful this review is always thorough. Amy Killelea, an assistant research professor at Georgetown University, suggested there is “a little bit of ambiguity over what constitutes ‘meaningful human review.’”

This issue was highlighted in a 2023 ProPublica report which found that doctors at Cigna spent an average of just 1.2 seconds reviewing each case over a two-month period. Cigna stated the report referenced a simple software process for accelerating payments, not an AI-powered system for prior authorizations.

Potential Impact on Patient Care

The primary concern for patients and providers is that AI systems could be programmed to automatically deny expensive treatments, regardless of medical necessity. Jennifer Oliva, a professor at Indiana University's Maurer School of Law, explained that when a patient has a poor prognosis, an insurer may be “motivated to rely on the algorithm” to deny costly care.

She noted that the appeals process can be lengthy, and that the longer it drags on, the less likely an insurer is to ultimately pay the claim, particularly if the patient's health deteriorates or the patient dies during the process. “The No. 1 thing to do is make it very, very difficult for people to get high-cost services,” Oliva said.

A February survey from the American Medical Association found that 61% of physicians believe AI is “increasing prior authorization denials, exacerbating avoidable patient harms and escalating unnecessary waste.”

The WISeR pilot has prompted a bipartisan response in Congress. Representative DelBene joined other Democrats in an August letter to CMS demanding more information about the program. In a separate action, House members from both parties supported a measure proposed by Representative Lois Frankel, a Florida Democrat, to block funding for the pilot in the fiscal 2026 budget.

Representative Murphy acknowledged the concerns among physicians that the AI could override their medical judgment. While he believes AI in healthcare is inevitable, he remains cautious about the pilot's potential outcomes. “This is a pilot, and I’m open to see what’s going to happen with this,” Murphy said, “but I will always, always err on the side that doctors know what’s best for their patients.”