
    New AI Model Enhances Cancer Diagnostics through Virtual Staining


    Hand holding histopathology slides stained with Leishman stain, displayed and ready for microscopy.

    New work from researchers in Switzerland demonstrates an advanced artificial intelligence (AI) model that can create virtual stainings of cancer tissue, an advance that could significantly change pathology analysis and diagnostics. The new tool, called the “VirtualMultiplexer,” promises to help solve a common problem: the limited supply of tissue samples available for staining and analysis.

    Details of the VirtualMultiplexer were published recently in Nature Machine Intelligence.

    Leveraging generative AI, the team of investigators from the Universities of Lausanne and Bern shows that the new tool can produce detailed pathology images that mimic what staining for a given cellular marker would look like. Using different dyes for specific markers is an important part of pathology work, helping to determine disease status and to inform more accurate diagnoses.

    “The idea is that you only need one actual tissue coloration that is done in the lab as part of routine pathology, to then simulate which cells in that tissue would dye positive for several other, more specific markers,” said study co-leader Marianna Rapsomaniki, a computer scientist and AI expert at the biomedical data science center of the University of Lausanne and the Lausanne University Hospital.

    First author Pushpak Pati, PhD, a postdoctoral researcher at IBM Europe, added that the model is particularly useful when tissue material is limited or when experimental stainings cannot be performed for other reasons. The VirtualMultiplexer not only conserves valuable tissue samples but also reduces the need for labor-intensive laboratory processes.

    The model operates using a methodology known as contrastive unpaired translation. The technique can be likened to a mobile app that predicts how a person might look as they age: the app draws on data from thousands of photos of older, unrelated people to create a future image of a younger person.

    In the same vein, the VirtualMultiplexer transforms one staining that broadly distinguishes different regions within a cancer tissue sample into a series of generated images showing which cells in that tissue stain positive for a specific marker molecule. The model achieves this based on prior training on images of other tissues that were stained with those dyes in previous experiments. Once it has learned from these actual images of dyed tissues, the VirtualMultiplexer applies the same style to the tissue image presented, generating a virtually stained image.
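    To make the idea of contrastive unpaired translation more concrete, the sketch below shows a toy, heavily simplified version of the approach in PyTorch: a small generator translates a source-stain tile into a virtual marker stain, and a patch-wise contrastive (InfoNCE) loss encourages each generated patch to preserve the content of the corresponding patch in the source. This is not the published VirtualMultiplexer code; the network, tile sizes, and loss details are illustrative assumptions, and a full implementation would also include an adversarial term trained on real target-stain images.

```python
# Toy sketch of contrastive unpaired image-to-image translation.
# NOT the published VirtualMultiplexer code; all sizes and settings are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGenerator(nn.Module):
    """Toy encoder-decoder mapping a source-stain tile to a virtual marker stain."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x, return_features=False):
        feats = self.encoder(x)
        out = self.decoder(feats)
        return (out, feats) if return_features else out

def patch_nce_loss(feat_src, feat_gen, num_patches=64, temperature=0.07):
    """InfoNCE over randomly sampled feature locations: a feature of the generated
    image should match the feature at the SAME location in the source image
    (positive) and differ from other sampled locations (negatives)."""
    b, c, h, w = feat_src.shape
    idx = torch.randperm(h * w)[:num_patches]
    src = F.normalize(feat_src.flatten(2).permute(0, 2, 1)[:, idx], dim=-1)  # (B, P, C)
    gen = F.normalize(feat_gen.flatten(2).permute(0, 2, 1)[:, idx], dim=-1)  # (B, P, C)
    logits = torch.bmm(gen, src.transpose(1, 2)) / temperature               # (B, P, P)
    targets = torch.arange(num_patches, device=logits.device).expand(b, -1)
    return F.cross_entropy(logits.flatten(0, 1), targets.flatten())

# One illustrative training step on unpaired data: the contrastive term ties the
# generated image to the source tissue's structure; in a full model, a GAN
# discriminator trained on real target-stain images would supply the stain "style".
gen_net = TinyGenerator()
optimizer = torch.optim.Adam(gen_net.parameters(), lr=2e-4)

source_tiles = torch.rand(2, 3, 128, 128)                  # toy batch of source-stain tiles
fake_stain, feat_src = gen_net(source_tiles, return_features=True)
_, feat_gen = gen_net(fake_stain, return_features=True)    # re-encode the generated tile

loss = patch_nce_loss(feat_src, feat_gen)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"patch contrastive loss: {loss.item():.4f}")
```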

    To ensure that the virtual images the tool generated were clinically meaningful, the investigators conducted a rigorous validation process to confirm that they were not simply plausible-looking AI outputs, or “hallucinations,” that is, false inventions of the AI tool.

    The researchers tested how well the virtual representations predicted clinical outcomes, including disease progression and survival, and compared them with data from real stained tissues. Their analysis showed that the images generated by the VirtualMultiplexer were not only accurate but also clinically actionable.

    Next, to provide further confidence in their validation work, the team turned to what is commonly called the Turing test, named for the father of modern AI, Alan Turing, to determine whether the tool produced images that are indistinguishable from those created in the real world. The researchers asked expert pathologists to try to tell the AI-generated images apart from those that were traditionally stained, and found that the AI colorations were perceived as close to identical to real stainings.

    A key feature of the VirtualMultiplexer is its multiscale approach. Traditional models typically focus on either the microscopic (cellular) or the macroscopic (overall tissue) scale. In contrast, the VirtualMultiplexer examines cancer tissue at three levels: its overall structure, the relationships between cells, and the detailed characteristics of individual cells. This comprehensive approach yields a more accurate representation of the tissue, as illustrated in simplified form below.
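    As a rough illustration of the multiscale idea, the sketch below compares a generated tile and a reference tile at three resolutions, with one term for overall tissue layout, one for cell neighborhoods, and one for single-cell detail, summed into a single objective. The pooling factors and weights are assumptions for illustration only; the published model works with unpaired data and learned objectives rather than a direct pixel-wise comparison.

```python
# Illustrative multiscale comparison of a generated and a reference tile.
# The scales and weights below are assumptions, not the published model's values.
import torch
import torch.nn.functional as F

def multiscale_loss(generated, reference, weights=(1.0, 1.0, 1.0)):
    """Sum of L1 terms computed at three spatial scales of the same tile pair."""
    w_tissue, w_neigh, w_cell = weights
    # Tissue scale: heavy pooling keeps only the overall architecture of the tile.
    tissue = F.l1_loss(F.avg_pool2d(generated, 16), F.avg_pool2d(reference, 16))
    # Neighborhood scale: moderate pooling captures cell-to-cell context.
    neigh = F.l1_loss(F.avg_pool2d(generated, 4), F.avg_pool2d(reference, 4))
    # Cell scale: full resolution preserves the detail of individual cells.
    cell = F.l1_loss(generated, reference)
    return w_tissue * tissue + w_neigh * neigh + w_cell * cell

# Toy usage with random 256x256 RGB tiles standing in for real stainings.
generated_tile = torch.rand(1, 3, 256, 256)
reference_tile = torch.rand(1, 3, 256, 256)
print(multiscale_loss(generated_tile, reference_tile).item())
```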

    The implications of this new tool are many. By generating high-quality simulated stains, the VirtualMultiplexer can assist researchers in formulating hypotheses, prioritizing experiments, and advancing their understanding of cancer biology.

    “We developed our tool using tissues from people affected by prostate cancer,” said co-lead author Marianna Kruithof-de Julio, PhD, a professor and research group leader at the University of Bern. “In the paper, we also showed that it works similarly well for pancreatic tumors—making us confident that it can be useful for many other disease types.”

    Further, the authors noted, the VirtualMultiplexer has the potential to support foundation AI models in biological studies. These models benefit from processing large amounts of data in a self-supervised manner, allowing them to understand complex structures and perform various tasks. Rapsomaniki said the VirtualMultiplexer can address gaps in data availability for rare tissues by generating realistic images quickly and cost-effectively.


