Jan 08, 2026 | Ravie Lakshmanan | Privacy / Artificial Intelligence
Artificial intelligence (AI) company OpenAI on Wednesday announced the launch of ChatGPT Health, a dedicated space that allows users to have conversations with the chatbot about their health.
To that end, the sandboxed experience offers users the optional ability to securely connect medical records and wellness apps, including Apple Health, Function, MyFitnessPal, Weight Watchers, AllTrails, Instacart, and Peloton, to get tailored responses, lab test insights, nutrition advice, personalized meal ideas, and suggested workout classes.
The new feature is rolling out to users on ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the U.K.
"ChatGPT Health builds on the strong privacy, security, and data controls across ChatGPT with additional, layered protections designed specifically for health, including purpose-built encryption and isolation to keep health conversations protected and compartmentalized," OpenAI said in a statement.
Noting that over 230 million people globally ask health and wellness-related questions on the platform every week, OpenAI emphasized that the tool is designed to support medical care, not replace it or be used as a substitute for diagnosis or treatment.
The company also highlighted the various privacy and security features built into the Health experience:
Health operates in a silo with enhanced privacy and its own memory to safeguard sensitive data using "purpose-built" encryption and isolation
Conversations in Health are not used to train OpenAI's foundation models
Users who attempt to have a health-related conversation in ChatGPT are prompted to switch over to Health for added protections
Health information and memories are not used to contextualize non-Health chats
Conversations outside of Health cannot access files, conversations, or memories created within Health
Apps can only connect to users' health data with their explicit permission, even if they are already connected to ChatGPT for conversations outside of Health
All apps available in Health are required to meet OpenAI's privacy and security requirements, such as collecting only the minimum data needed, and undergo additional security review to be included in Health
Furthermore, OpenAI pointed out that it has evaluated the model that powers Health against medical standards using HealthBench, a benchmark the company released in May 2025 as a way to better measure the capabilities of AI systems for health, with a focus on safety, clarity, and escalation of care.
"This evaluation-driven approach helps ensure the model performs well on the tasks people actually need help with, including explaining lab results in accessible language, preparing questions for an appointment, interpreting data from wearables and wellness apps, and summarizing care instructions," it added.
OpenAI's announcement follows an investigation from The Guardian that found Google AI Overviews to be providing false and misleading health information. OpenAI and Character.AI are also facing multiple lawsuits claiming their tools drove people to suicide and harmful delusions after they confided in the chatbots. A report published by SFGate earlier this week detailed how a 19-year-old died of a drug overdose after trusting ChatGPT for medical advice.
