Algorithms cannot decide a patient's healthcare coverage, the US government clarifies

Alfonso Maruccia

A hot potato: The Centers for Medicare & Medicaid Services (CMS) is the US federal agency that manages the Medicare program and sets the corresponding healthcare standards. The agency recently sent insurers a new memo explaining the right way to use AI algorithms, and what will happen if they break the rules.

Insurers offering Medicare Advantage (MA) plans have received a new memo from the CMS, an FAQ-like document that carefully clarifies the use (or abuse) of AI predictions. The agency says insurers cannot deny coverage to ailing people based solely on those predictions, because AI does not take the full picture of a patient's condition into account.

The CMS document comes after patients filed lawsuits against UnitedHealth and Humana, two insurance companies that employed an AI tool known as nH Predict. The lawsuits allege that the firms wrongly denied healthcare coverage to patients under MA plans based on incorrect predictions about their rehabilitation periods.

The estimates nH Predict produces are unreliable, the lawsuits say, and far more restrictive than what the MA plans officially cover. For instance, where a plan was designed to cover up to 100 days in a nursing home after surgery, UnitedHealth reportedly used nH Predict's artificial judgment to cut coverage off after only 14 days.

CMS is now saying that tools like nH Predict are not a sufficient basis for denying coverage to MA patients. The algorithm was reportedly trained on a database of six million patients, so it has limited knowledge of potential health conditions and healthcare needs. The agency states that MA insurers must base their decisions on an "individual patient's circumstances," including medical history, physician recommendations, and clinical notes.

AI algorithms can be used to make predictions and "assist" insurance providers, the CMS states, but they cannot be the basis for terminating post-acute care services. A patient's condition must be thoroughly reassessed before coverage ends, the agency says, and insurers are required to provide a "specific and detailed" explanation of why they will no longer provide the service.

According to the lawsuits filed against UnitedHealth and Humana, patients were never given the reasons for the AI-driven denials of their care. The CMS also provides a precise, albeit deliberately broad, definition of what qualifies as artificial intelligence and algorithm-based prediction, to ensure that insurance companies clearly understand their legal obligations. The agency adds that non-compliant companies could receive warning letters, corrective action plans, or even monetary penalties and sanctions.


 
On the one hand? Great that they nipped this in the bud.

On the other? Mean letters? Really? The STARTING punishment should be a $1 million fine per case, increasing by 10% for every month this issue continues.
 
An algorithm designed by an insurance company arbitrarily cutting expensive medical support?

Noooo, that sounds too unlikely to be true. An insurance company would never do anything to avoid paying their customers xD
 