Humana Used AI Tool From UnitedHealth to Deny Medicare Advantage Medical Needs of Senior Citizens Across the United States
A national coalition of Public Health leaders, investigative journalists, and lawyers continues to expose the Commercial Health Insurance industry's Denial of Care business model | December 13, 2023
May 30, 1996.
Dr. Linda Peeno testified to the United States Congress that Humana was systemically, methodically, and intentionally denying medical care to patients in need.
Testifying publicly as Humana's former Chief Medical Officer in Louisville, Kentucky, she said she had been pressured to deny medical care to patients who needed it.
She also testified that Denial of Care Harm-for-Profit would increase year after year if it remained legal.
As a result of the decades-long legislative inaction, Denial of Care has remained legal. UnitedHealth and several other Big Insurance companies are on track to rake in $400 billion in revenues in 2024. Even as class action lawsuits by patients who have been harmed—and the families of patients who have been killed—by Denial of Care take their cases to the federal bench, the preventable harm and death persist.
A national coalition of ethical Public Health leaders, investigative journalists, and lawyers continues to expose the Commercial Health Insurance industry's Denial of Care business model. Over the last thirty days, the coalition has elevated two historic legal cases against three of the largest Commercial Health Insurance companies.
Read about the latest Denial of Care Harm-for-Profit case here.
Co-published with Becker's Healthcare | Humana used an artificial intelligence tool owned by UnitedHealth Group to wrongfully deny Medicare Advantage members' medical claims, according to a class-action complaint filed Dec. 12.
The lawsuit was filed in the U.S. District Court for the Western District of Kentucky and is the latest legal action against major insurers such as UnitedHealthcare and Cigna for allegedly using automated data tools to wrongfully deny members' claims.
The complaint against Humana, the country's second-largest Medicare Advantage insurer, accuses the company of using an AI tool called nH Predict to determine how long a patient will need to remain in post-acute care, and of overriding physicians' determinations for the patient. The plaintiffs claim Humana set a goal to keep post-acute facility stay lengths for MA members within 1% of nH Predict's estimations. Employees who deviate from the algorithm's estimates are "disciplined and terminated, regardless of whether a patient requires more care," the lawsuit alleges. When decisions made by the algorithm are appealed, they are allegedly overturned 90% of the time.
"Despite the high rate of wrongful denials, Humana continues to systemically use this flawed AI model to deny claims because they know that only a tiny minority of policyholders will appeal denied claims," the plaintiffs' attorneys wrote.
The nH Predict tool was created by naviHealth, a care management company acquired by Optum in 2020. The tool is not used to make coverage determinations, an Optum spokesperson previously told Becker's.
"The tool is used as a guide to help us inform providers, families and other caregivers about what sort of assistance and care the patient may need both in the facility and after returning home," the spokesperson said. "Coverage decisions are based on CMS coverage criteria and the terms of the member's plan."
Becker's has reached out to Humana for comment and will update this article if more information becomes available.
The allegations come amid broader ongoing conversations among policymakers around insurers' use of algorithms and artificial intelligence when processing claims or prior authorization requests.
States are ramping up scrutiny over how payers across industries are deploying AI for underwriting purposes, Bloomberg reported Nov. 30. At the federal level, lawmakers asked CMS in November to increase its oversight of AI and algorithms used in Medicare Advantage prior authorization decisions. In their letter, lawmakers pointed to advocacy group reports that indicate use of AI in Medicare Advantage prior authorization decisions is resulting in care denials that are more restrictive than traditional Medicare. They asked CMS to require MA plans to report prior authorization data, including reasons for denials; compare guidance generated by AI tools to actual Medicare Advantage coverage decisions; and assess if AI-powered algorithms used in prior authorization are self-correcting.