AMP Expo Chatter: Can AI Algorithms Be LDTs?


In my conversations at the AMP 2024 Annual Meeting & Expo, I stumbled upon a topic that I think will be of increasing importance to the molecular pathology community and the diagnostics community at large—the use of artificial intelligence (AI) as laboratory-developed tests (LDTs).

Perhaps it was mental osmosis from the inescapable, permeating chatter around the FDA’s oversight of LDTs, but the leap from relatively simple, tangible biochemical tests (some as simple as a binary “yes/no” result for a single analyte) to home-brewed code performing analysis in the digital space is a whole new headache, and it calls for a whole new dialogue on regulation.

From pattern recognition in sequencing diagnostics to predictive analytics for patient prognosis and diagnostic report generation with generative AI, there is no doubt that the range of applications with demonstrated utility in molecular pathology is broad. AI models have shown promise in several areas, including improving diagnostic accuracy, automating processes, forecasting patient outcomes, streamlining workflows, and creating individualized treatment plans.

After digging into this topic, I think several intertwined issues must be addressed.

Can an AI algorithm be an LDT?

One important question to sort out is when an AI algorithm is considered part of the LDT as a whole medical device and when it is treated as its own “Software as a Medical Device” (SaMD).

In one conversation I had with a consultant with a legal background, the individual’s perspective was that an AI algorithm cannot itself be an LDT but rather a part of one. Along these lines, the FDA has authorized nearly 1,000 “AI-enabled” medical devices, many of them diagnostics, since 1995 (the agency released an updated list of 951 such devices earlier this year). However, not everyone agreed; others held the perspective that an AI algorithm can be considered an LDT if it is developed and used internally within a single clinical laboratory.

According to Google’s Generative AI-supported search, the answer is:

Yes, an AI algorithm can be considered a Laboratory Developed Test (LDT) if it is designed, developed, and used solely within a single clinical laboratory, meaning the algorithm is not commercially distributed to other labs and is used for clinical analysis within that specific facility; essentially, the AI acts as a diagnostic tool developed in-house by the laboratory itself. 

But that’s like playing a modern, digitized game of telephone, so I’m not sure how much of that to take as fact.

What regulations are there for AI algorithms as LDTs?

The emergence of AI platforms that resemble LDTs poses new challenges for regulators, regardless of whether the AI algorithm is an LDT component or an LDT in and of itself. There is currently no established process for the regulatory evaluation of AI-based tools. Although the FDA has established numerous regulations to guarantee quality in clinical testing laboratories, oversight of LDTs in the U.S. has traditionally been the responsibility of the Clinical Laboratory Improvement Amendments (CLIA) program. That said, all of this is a bit up in the air now.

On the other hand, commercial AI algorithms distributed to multiple laboratories are not considered LDTs. Still, if an AI algorithm is used for clinical work, the laboratory must validate it for its intended use, irrespective of whether it is an LDT, an LDT-like tool, or a non-LDT tool.
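As a purely illustrative aside, here is a minimal sketch (in Python, with hypothetical function and variable names of my own) of the sort of concordance summary a laboratory might compute when validating an AI algorithm’s binary calls against a reference method; it is not drawn from any specific validation protocol.

```python
# Hypothetical sketch: comparing an AI algorithm's binary calls against a
# reference method during an in-house validation study.

def concordance_summary(ai_calls, reference_calls):
    """Return positive/negative percent agreement and overall concordance."""
    if len(ai_calls) != len(reference_calls):
        raise ValueError("Call lists must be the same length")

    tp = sum(1 for a, r in zip(ai_calls, reference_calls) if a and r)
    tn = sum(1 for a, r in zip(ai_calls, reference_calls) if not a and not r)
    fp = sum(1 for a, r in zip(ai_calls, reference_calls) if a and not r)
    fn = sum(1 for a, r in zip(ai_calls, reference_calls) if not a and r)

    ppa = tp / (tp + fn) if (tp + fn) else float("nan")  # positive percent agreement
    npa = tn / (tn + fp) if (tn + fp) else float("nan")  # negative percent agreement
    overall = (tp + tn) / len(ai_calls) if ai_calls else float("nan")

    return {"PPA": ppa, "NPA": npa, "overall": overall}


if __name__ == "__main__":
    # Toy data: True = positive call (e.g., variant detected)
    ai = [True, True, False, False, True, False]
    ref = [True, False, False, False, True, False]
    print(concordance_summary(ai, ref))
```

A real validation study would of course go well beyond a single agreement table, but the point stands: the algorithm’s outputs get checked against a reference before clinical use.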

Reimbursement issues for AI

Of course, you can’t discuss any medical device without considering who will pay for the test. The reimbursement landscape for AI-backed applications in health care is complex and evolving.

One area that is somewhat clear pertains to scenarios in which regulated software devices deliver clinical analytical services to a healthcare practitioner—sometimes referred to as “algorithm-based health care services” (ABHSs). These stand in contrast to AI that simplifies operational tasks or uses generative AI to answer clinical questions in an uncontrolled or informal setting. ABHSs generate clinical outputs to diagnose or treat a patient’s condition through the use of AI.

Due to antiquated reimbursement frameworks, U.S. health payment systems have difficulty integrating ABHS technologies. These frameworks lack specific billing codes as well as standardized criteria for evaluating economic impact and efficacy. Hurdles such as producing strong evidence of better patient outcomes and resolving liability questions make it harder for healthcare systems to evaluate and approve ABHS tools for coverage, so innovation in the field is outpacing payers’ capacity to keep up.

Possible solutions include formalizing Medicare pathways (such as an add-on payment policy for software), piloting alternative payment models (such as performance-linked or episode-based reimbursement), and developing specialized reimbursement codes for ABHS. Collaboration among stakeholders, including developers, healthcare providers, payers, and regulators, is key to establishing evidence-based guidelines. Real-world data and continuous learning systems can help keep ABHS tools accurate and useful for a long time to come.

Human-generated decisions

The fact remains that some patients would rather rely on their primary care physician than engage with AI-assisted healthcare decisions. At the end of the day, healthcare is not a sterile, robotic practice that lives in some virtual universe of ones and zeros.


