Following the launch of Meta’s Muse Spark model, its first generative AI model that analyses users’ health data, including lab results, experts have raised concerns over the perceived risks of sharing sensitive health data with such chatbots. Experts are of the view that, beyond the obvious privacy risks, the model lacks the capability to stand in for a real doctor.
According to the company, the Muse Spark, which is already available through the Meta AI app, was designed, in part, to be better at answering questions people have about their health. It also disclosed plans to integrate the model across all of its platforms—including Facebook, Instagram, and WhatsApp—in the coming weeks.
In an announcement blog, Meta said it even worked with “over 1,000 physicians to curate training data that enables more factual and comprehensive responses.” As the new model rolls out to millions of users, I tested Muse Spark to see how it would respond to health-related questions.
When I asked how it could help me, the bot listed off a few basic uses, like building a workout routine or generating questions to ask my doctor, but a direct request for my health data stood out:
“Paste your numbers from a fitness tracker, glucose monitor, or a lab report. I’ll calculate trends, flag patterns, and visualize them,” read the Meta AI output. “Example: ‘Here are my last 10 blood pressure readings—is there a pattern?’” Meta, however, is not alone in nudging users to upload their health data.
OpenAI’s ChatGPT and Anthropic’s Claude both have chatbot modes designed specifically for helping users understand their health and make decisions. For example, you can open Claude and connect it to your Apple or Android health data with just the flip of an in-app toggle.
Then, Claude will use that information as part of its answers. Google also lets you upload medical data to Fitbit for its AI health coach to parse. Nonetheless, experts note that handing over this kind of data to any AI tool is a risky decision, even if it allows users to generate personalised advice. According to Monica Agrawal, an assistant professor at Duke University and co-founder of Layer Health, an AI platform that helps hospitals examine medical charts, “Usage of these models can be really tricky.
“The more information you give it, the more context it has about you and, potentially, it can provide better responses. But on the flip side, there are major privacy concerns to sharing your health data without protections.” The expert expressed concern about users uploading sensitive data to chatbots since these commonly used AI tools are not compliant with HIPAA protections, the landmark US law that guards patients from having their sensitive health information exposed.
Layer Health, by contrast, is HIPAA compliant. HIPAA represents the high standard of privacy people are used to experiencing during doctor visits. The information someone shares with a chatbot is much more loosely regulated, even if it is a clinical lab result. Anything you share in a chat with Meta AI may be stored and used to train future AI models.
“We keep training data for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely, and efficiently,” reads Meta’s privacy policy about generative AI. Meta has also stated it may tailor advertisements for users based on their interactions with the AI features.