According to Windows Central, OpenAI has launched a new dedicated experience called ChatGPT Health, designed to let users connect their medical records and wellness apps like Apple Health directly to the chatbot. The company revealed that a massive 230 million people worldwide ask health-related questions on ChatGPT every week. The new experience is shipping now to a limited number of users on Free, Go, Plus, and Pro plans outside of Europe and the UK, with a broader rollout planned over the next few weeks. OpenAI emphasizes this isn’t for diagnosis or to replace medical care, but to help users understand test results, prepare for appointments, and get wellness advice. Crucially, all data in ChatGPT Health will be stored separately and won’t be used to train AI models.
The Altman Paradox
Here’s the thing that’s impossible to ignore. In the same report, OpenAI CEO Sam Altman delivers a stunningly candid take. He admits that ChatGPT is “most of the time, a better diagnostician than most doctors in the world” and says he has seen the life-saving stories. And yet, his personal stance is clear: “I really do not want to trust my medical fate to ChatGPT with no human doctor in the loop.” He even calls himself a “dinosaur” for feeling that way. This creates a fascinating paradox. The CEO of the company pushing this tech forward is publicly stating his own deep-seated reluctance to fully rely on it for the most critical personal decisions. It’s a powerful piece of context that frames the entire launch not as a replacement, but as an assistant. He’s basically telling you, the user, to have the same healthy skepticism he does.
Privacy Promises and Practical Use
OpenAI is clearly trying to get ahead of the massive privacy concerns this kind of product triggers. Storing health data separately and pledging not to use it for training are essential first steps. But let’s be real—trust in this area is earned over years, not built with a single press release. The practical applications they’re highlighting are smart, though. Helping people decipher confusing lab results or organize their symptoms before a 15-minute doctor’s visit? That’s a genuine pain point. Recommending workout routines based on your Apple Health data? That’s basically what a lot of wellness apps already do, just with a more conversational interface. The real test will be whether it can provide useful, nuanced guidance without crossing the line into dangerous speculation.
A New Layer, Not a Replacement
So what’s the actual impact? This isn’t an AI doctor. It’s more like an AI-powered health scribe and research assistant. For users, it adds a new layer of interaction with their own health data, which can be empowering if the advice is sound. For the healthcare system, it could potentially lead to better-prepared patients, which might make doctor visits more efficient. But it also introduces a new intermediary between a patient and verified medical information. The big risk is the “high degree” of trust Altman says users already place in ChatGPT. If the chatbot hallucinates about your connected bloodwork, the consequences are very different from when it makes up a book title. The company is threading a very fine needle here, and Altman’s own comments suggest they know just how delicate the balance is.
