As I was preparing for my clinic the following day, I noticed one of the patients had already undergone genetic testing through another specialty and been diagnosed with a rare genetic disorder. Given the breadth of conditions we encounter as clinical geneticists, it’s common to come across disorders we’ve never seen before. I did my homework: reviewed the literature, learned the condition, and outlined a management plan.
The next day, I walked into the room to meet a healthy-appearing, intellectually sharp software engineer who worked professionally in artificial intelligence (AI). As I collected his history, he explained that he had self-diagnosed his condition using a large language model (LLM), inputting lab results and clinical features. Initially, the specialist managing the condition (outside of genetics) refused to order the test he requested. Eventually, he convinced them to send the genetic test—and the LLM was right.
After gathering his history and completing the exam, I sat down across from him to explain the condition, the management plan, and its implications for family planning. He listened thoughtfully, then smiled and said, “Great, that’s exactly what the LLM told me.”
Not long ago, we only had to contend with “Dr. Google,” “Dr. Facebook,” and “Dr. TikTok.” The first often fueled anxious patients with unreliable information. The latter two frequently left patients convinced that common symptoms pointed to rare and exotic diseases—and don’t even get me started on methylenetetrahydrofolate reductase. These platforms created confusion, but the physician remained the anchor: someone who could thoughtfully interpret, correct, and guide with grounded medical expertise.
Now, the challenge is different. We’re being compared directly to a digital superintelligence that can rival, and sometimes exceed, our ability to synthesize information. These models have an advantage: they retain and access vast stores of knowledge instantaneously, which is something the human brain can’t do. Yet, we still hold a unique role, particularly in our ability to examine, empathize with, and physically connect with patients.
I’ve always encouraged my patients to research their conditions, offering guidance and trusted sources to avoid misinformation. I often recommend online patient communities to help them learn from others with lived experience while cautioning them to steer clear of miracle cures and consult me before trying any “exotic berry” elixirs.
But how do we talk to patients about AI? Its use is growing rapidly. Whether or not we’re ready, our patients are using it, and not just those with technical backgrounds. Do we encourage its use? Do we feel comfortable with the information it provides?
As I reflect on that patient’s parting words—“that’s exactly what the LLM told me”—what I didn’t share is that I had used AI too. It wasn’t my only tool, but it was part of my process. I’ve been using it for a while now, and I am consistently impressed.
Some modern LLMs go beyond regurgitating memorized content and fabricating references. They can reason, hypothesize, and synthesize insights in useful ways. In one recent case, I saw a patient with an extremely rare disorder, only documented in a few dozen individuals. Unlike reported cases, this patient had hearing loss. I wondered whether this was part of the syndrome or an unrelated finding. I asked an LLM if hearing loss could be associated with the disorder. It found a single case, one I had missed despite thorough searching, and even proposed a plausible biological mechanism based on the gene’s function.
Despite that, I still ordered further testing to rule out other causes. We’re not being replaced just yet. But we must embrace these tools and not fear them. Because whether we like it or not, every day our patients are comparing our knowledge and decisions to those of our digital counterparts.
It’s a high bar. And when you can’t beat them, join them.
Nephi Walton, MD, completed his MD and MS in biomedical informatics, with a focus on machine learning/artificial intelligence, at the University of Utah School of Medicine. He completed a combined residency in pediatrics and genetics at Washington University in St. Louis, Missouri. He is board certified in both clinical genetics and clinical informatics. He has worked with two of the largest population health sequencing programs in the U.S.: MyCode at Geisinger and HerediGene at Intermountain Health. He is a past chair of the American Medical Informatics Association Genomics and Translational Bioinformatics Workgroup and a former program director at the National Human Genome Research Institute, and he has presented at several meetings on translating the use of genomics and artificial intelligence into general medical practice, something he is actively pursuing in clinical practice.