Is getting faster medical test results with Elon Musk’s AI bot Grok safe? Doctors warn ‘buyer beware’


New Delhi: Elon Musk’s AI chatbot, Grok, has gained attention as users upload medical scans, such as MRIs and X-rays, for analysis. Musk, via his platform X (formerly Twitter), encouraged users to test Grok’s abilities, saying the tool is still in its early stages but already showing promise. While some users report useful insights, others cite inaccurate diagnoses, highlighting the risks of relying on experimental AI. The initiative has sparked discussion about the balance between technological innovation, accuracy, and user privacy.

Promise and Pitfalls of AI Diagnostics

Musk urged users to “try submitting x-ray, PET, MRI, or other medical images to Grok for analysis,” adding that the tool “is already quite accurate and will become extremely good.” Many users responded, sharing Grok’s feedback on brain scans, fractures, and more. “Had it check out my brain tumor, not bad at all,” one user posted. However, not all experiences were positive. In one case, Grok misdiagnosed a fractured clavicle as a dislocated shoulder; in another, it mistook a benign breast cyst for testicles.

Such mixed results underline the complexities of using general-purpose AI for medical diagnoses. Medical professionals like Suchi Saria, director of the machine learning and healthcare lab at Johns Hopkins University, stress that accurate AI in healthcare requires robust, high-quality, and diverse datasets. “Anything less,” she warned, “is a bit like a hobbyist chemist mixing ingredients in the kitchen sink.”

The Privacy Quandary: Who Owns Your Data?

A significant concern is the privacy implications of uploading sensitive health information to an AI chatbot. Unlike healthcare providers governed by laws like the Health Insurance Portability and Accountability Act (HIPAA), platforms like X operate without such safeguards. “This is very personal information, and you don’t exactly know what Grok is going to do with it,” said Bradley Malin, professor of biomedical informatics at Vanderbilt University.

X’s privacy policy states that while it doesn’t sell user data to third parties, it shares information with “related companies.” Even xAI, the company behind Grok, advises users against submitting personal or sensitive information in prompts. Yet, Musk’s call to share medical scans contrasts with these warnings. “Posting personal information to Grok is more like, ‘Wheee! Let’s throw this data out there and hope the company is going to do what I want them to do,’” Malin added.

Matthew McCoy, assistant professor of medical ethics at the University of Pennsylvania, echoed these concerns, saying, “As an individual user, would I feel comfortable contributing health data? Absolutely not.”

AI in Healthcare and Musk’s Vision

Grok is developed by xAI, Musk’s AI-focused venture launched in 2023, which describes its mission as advancing “our collective understanding of the universe.” The chatbot positions itself as a conversational AI with fewer guardrails than competitors like OpenAI’s ChatGPT, enabling broader applications but also raising ethical questions.

In healthcare, AI is already transforming areas like radiology and patient data analysis. Specialized tools are used to detect cancer in mammograms and match patients with clinical trials. Musk’s approach with Grok, however, bypasses traditional data collection methods, relying on user contributions without de-identification or structured safeguards. Ryan Tarzy, CEO of health tech startup Avandra Imaging, called this method risky, warning that “personal health information is ‘burned in’ to many images, such as CT scans, and would inevitably be released in this plan.”
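
To illustrate the kind of de-identification step Tarzy is referring to, below is a minimal sketch, assuming the open-source pydicom library and a hypothetical local file named scan.dcm, of how identifying metadata embedded in a DICOM scan can be inspected and blanked before an image is shared. This only covers metadata; identifiers can also be burned into the pixel data itself (for example, as on-image annotations), which requires separate handling.

```python
# Minimal sketch: inspecting and blanking patient identifiers in a DICOM file.
# Assumes the open-source pydicom library and a hypothetical file "scan.dcm".
# Illustrative only; full de-identification follows the DICOM standard's
# confidentiality profiles and also checks the pixel data for burned-in text.

import pydicom

IDENTIFYING_TAGS = [
    "PatientName", "PatientID", "PatientBirthDate",
    "PatientAddress", "InstitutionName", "ReferringPhysicianName",
]

ds = pydicom.dcmread("scan.dcm")

# Show what identifying metadata the file carries before sharing.
for tag in IDENTIFYING_TAGS:
    if hasattr(ds, tag):
        print(f"{tag}: {getattr(ds, tag)}")

# Blank the identifying fields and drop vendor-specific private tags.
for tag in IDENTIFYING_TAGS:
    if hasattr(ds, tag):
        setattr(ds, tag, "")
ds.remove_private_tags()

ds.save_as("scan_deidentified.dcm")
```

Specialized medical AI pipelines typically apply a step like this before data ever leaves a hospital; nothing in the reported Grok workflow suggests user-uploaded scans pass through any comparable safeguard, which is the gap experts are pointing to.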

Risks of Faulty Diagnoses

Experts caution that inaccuracies in Grok’s results could lead to unnecessary tests or missed critical conditions. One doctor testing the chatbot noted that it failed to identify a “textbook case” of spinal tuberculosis, while another found that Grok misinterpreted breast scans, missing clear signs of cancer. “Imperfect answers might be okay for people purely experimenting with the tool,” said Saria, “but getting faulty health information could lead to tests or other costly care you don’t actually need.”

Ethical Concerns: Information Altruism or Risk?

Some users may knowingly share their medical data, believing in the potential benefits of advancing AI healthcare capabilities. Malin referred to this as “information altruism,” where individuals contribute data to support a greater cause. However, he added, “If you strongly believe the information should be out there, even if you have no protections, go ahead. But buyer beware.”

Despite Musk’s optimistic vision, experts urge caution, emphasizing the importance of secure systems and ethical implementation in medical AI. Laws like the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act offer some protections, but loopholes exist: the Genetic Information Nondiscrimination Act, for example, does not cover life, disability, or long-term care insurance, leaving room for potential misuse of health data.

Grok exemplifies the growing intersection of AI and healthcare, but its current implementation raises critical questions about privacy, ethics, and reliability. While the technology holds promise, users must weigh the risks of sharing sensitive medical information on public platforms. Experts recommend exercising extreme caution and prioritizing tools with clear safeguards and accountability. The success of AI in healthcare depends not just on innovation but on ensuring trust and transparency in its application.

  • Published On Nov 19, 2024 at 01:10 PM IST
