
    Natural language processing generates CXR captions comparable to those from radiologists


    The new JAMA paper detailing the analysis was co-authored by Yaping Zhang, MD, PhD, Mingqian Liu, MSc, and Lu Zhang, MD. The team's analysis included a training dataset of nearly 75,000 chest radiographs labeled via NLP for 23 abnormal findings, in addition to a retrospective dataset and a prospective dataset of more than 5,000 participants. 

    For the study, radiology residents drafted reports based on randomly assigned captions from three caption sources: a normal template, NLP-generated captions, and rule-based captions. Radiologists, who were blinded to the caption source, finalized the reports. Experts then used these reports to compare accuracy and reporting times across the three approaches. 

    NLP reports achieved AUCs ranging from 0.84 to 0.87 across the datasets. NLP-generated caption reporting time was recorded at 283 seconds, which, the experts note, was significantly shorter than the reporting times for the normal template (347 seconds) and the rule-based model (296 seconds). The NLP-generated CXR captions also showed good consistency with radiologists' reports. 
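    The AUC figures above summarize per-finding detection performance: the probability that a randomly chosen abnormal image is scored higher than a randomly chosen normal one. As a minimal illustration of the metric (not the study's code; the labels and scores below are hypothetical), AUC can be computed by pairwise comparison:

```python
def auc(labels, scores):
    """Probability that a randomly chosen positive case scores higher
    than a randomly chosen negative case (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical per-image scores for one finding (illustrative only):
print(auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]))  # → 0.75
```

    An AUC of 0.84 to 0.87, as reported here, means the model ranks an abnormal radiograph above a normal one roughly 85% of the time for a given finding.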

    The authors conclude that NLP can be used to generate CXR captions and may even make the reporting process more efficient, though they note that additional research on broader datasets is necessary. 



