Meta Drops Muse Spark: Zuckerberg's First AI Model From Billion-Dollar Superintelligence Push—Now Wants Your Health Data

Image: Bloomberg AI
Main Takeaway
Meta's new Muse Spark matches OpenAI on benchmarks, but it's already pushing users to upload private health records and giving questionable medical advice.
Summary
What Meta just launched
Meta dropped Muse Spark on Wednesday, the first model from its newly minted Superintelligence Labs. According to Meta's own benchmarks, the model matches the performance of leading systems from OpenAI, Anthropic, and Google. It powers the Meta AI app and website in the US right now, with Instagram, Facebook, and Ray-Ban smart glasses getting the upgrade in the coming weeks.
This isn't just another Llama release. Muse Spark represents Zuckerberg's full reboot of Meta's AI efforts after last year's Llama 4 stumble. The company reportedly spent billions staffing and equipping the new division under Alexandr Wang, the Scale AI CEO who joined Meta to lead the superintelligence push.
The performance claims
Meta says Muse Spark beats GPT-4.5 on standard benchmarks and matches Claude 3.5 Sonnet on coding tasks. The company published internal tests showing the model outperforming Google's Gemini 1.5 Pro on reasoning benchmarks. But there's a catch: these results haven't been independently verified yet.
The model appears optimized for Meta's specific use cases. It excels at visual understanding tasks crucial for Instagram's AI features and shows strong performance on multimodal queries that blend text, images, and video. Early access testers report the model feels more "personality-driven" than competitors, which aligns with Zuckerberg's vision of "personal superintelligence."
The health data grab nobody expected
Here's where things get messy. Within days of launch, Muse Spark started asking users to upload their raw health data—lab results, medical records, even genetic tests. A Wired reporter tried the feature and got back advice bad enough to be dangerous.
The model presents itself as a health assistant, claiming it can "analyze your biomarkers and provide personalized wellness recommendations." But when fed actual lab results, it misinterpreted normal cholesterol levels as dangerously high and suggested dietary changes that would've been harmful for someone with the reporter's medical history.
This isn't buried in some advanced menu either. The health analysis prompt appears prominently in the main chat interface, positioned as a core feature rather than an experimental add-on.
Why the walled garden approach
Unlike Meta's previous open-source Llama models, Muse Spark won't be freely available. The model runs exclusively through Meta's own products and services. This represents a fundamental shift in strategy from the company's historically open approach to AI development.
The closed ecosystem gives Meta two advantages: tighter integration with their trillion-parameter knowledge graph of user data, and protection against competitors copying their innovations. But it also means users can't verify the health claims independently or run the model on their own hardware to keep sensitive medical data local.
Privacy implications of the health push
The health data feature raises immediate red flags. Meta's privacy policy already allows broad use of user data for advertising and product improvement. Uploading medical records into this ecosystem means potentially sharing sensitive health information with the same advertising infrastructure that tracks which shoes you looked at last Tuesday.
There's no clear indication whether uploaded health data gets stored permanently, used to train future models, or shared with pharmaceutical advertisers. The model's tendency to give questionable medical advice compounds these concerns—users might act on harmful recommendations while Meta's lawyers point to terms of service disclaimers.
What happens next
Meta hasn't responded to questions about the health feature's safety testing or whether medical professionals reviewed the model's advice. The company's silence is notable given that health misinformation typically triggers immediate regulatory scrutiny.
For now, Muse Spark continues offering health analysis to anyone who clicks the prompt. The feature's prominence suggests this isn't a bug but a deliberate product direction—Zuckerberg's "personal superintelligence" apparently includes playing doctor with your private medical data.
Key Points
Muse Spark actively solicits and analyzes users' private health data including lab results
Early testing shows the model gives potentially harmful medical advice
Health data uploads occur within Meta's closed ecosystem with unclear privacy protections
Feature appears as prominent prompt in main chat interface, not buried in settings
Meta hasn't disclosed safety testing or medical oversight for health analysis feature
FAQs
Can Muse Spark give reliable medical advice?
No. Early testing shows the model misinterprets medical data and gives potentially harmful advice. It's not a substitute for medical professionals.
Will Meta use uploaded health data for advertising?
Unclear. Meta's privacy policy allows broad use of user data, but the company hasn't specified whether health records get stored permanently or used for advertising.
Can users skip the health analysis feature?
Yes. The health analysis is optional, though the prompt appears prominently in the interface.
Is Muse Spark available outside the US?
International availability hasn't been announced. Muse Spark is currently US-only.