Elon Musk's AI Chatbot Raises Health Privacy Concerns

Experts advise caution before sharing sensitive (and especially medical) information with AI tools such as X's Grok.

Over the past few weeks, users on X (formerly Twitter) have been uploading medical images, such as X-rays, to the platform's AI chatbot, Grok, for diagnoses. X's owner, Elon Musk, sparked this trend, claiming the tool is "already quite accurate and will become extremely good."

Grok's diagnoses have piqued the interest of many users. Even some doctors have tested the chatbot, curious to see whether it could confirm their findings. While various mistakes have been reported (it misidentified a broken collarbone as a dislocated shoulder, for instance), some users have praised Grok's performance.

However, this practice has alarmed some medical privacy experts. Unlike healthcare providers, platforms like X aren’t governed by laws such as the Health Insurance Portability and Accountability Act (HIPAA) that prevent your personal health information from being shared without your consent.

Matthew McCoy, assistant professor of medical ethics and health policy at the University of Pennsylvania, told The New York Times he would "absolutely not" feel comfortable contributing health data, even though X may have guardrails around uploaded health information that it hasn't described publicly.

But what’s so bad about sharing any data you want?

Ryan Tarzy, CEO of health tech startup Avandra Imaging, told Fast Company that Grok's approach "has myriad risks, including the accidental sharing of patient identities" and warned that "personal health information is 'burned in' to many images, such as CT scans, and would inevitably be released in this plan."

The problem is that once a chatbot has collected your data, it's no longer entirely in your control and could fall into the wrong hands. Personal medical information could become part of your online footprint, where anyone from future employers to insurance companies might find it. Protecting data is crucial, whether it's sensitive company information whose exposure could harm a business or your own personal details.

Other concerns with Grok's medical capabilities include the hit-and-miss accuracy of its diagnoses, which could steer users toward the wrong care, and the potential for the model to develop biases based on the self-selected data that X users feed it.

Despite these concerns with Grok, AI shows promise in some areas of healthcare; current models can already interpret mammograms, for example. However, experts stress that building reliable AI tools for healthcare requires high-quality, diverse data alongside extensive expertise in medicine, technology, product design, and more.

Experts advise caution for those still willing to share sensitive information with chatbots. As Bradley Malin, a professor of biomedical informatics at Vanderbilt University, told The New York Times: "If you strongly believe the information should be out there, even if you have no protections, go ahead. But buyer beware."

Discover how Narus helps businesses protect their data and adopt GenAI safely.