
Recently, LinkedIn co-founder Reid Hoffman highlighted a viral Reddit post in which a user said that ChatGPT helped him solve, in under a minute, a chronic medical issue he had struggled with for five years, an impressive feat.

According to the Reddit user, his severe jaw clicking (possibly from a boxing injury) had been a chronic problem, unsolved even after several visits to an ear, nose, and throat doctor, two MRIs, and a referral to a maxillofacial specialist. Frustrated, he turned to ChatGPT for a diagnosis and treatment. The AI suggested that a mildly displaced but still mobile disc in his jaw was the cause and proposed a disc-repositioning technique focused on tongue placement and symmetry.

When another user stated that "doctors will hate ChatGPT" for being "1000% more useful than WebMD," the LinkedIn co-founder disagreed. "I am not sure they will hate. If implemented correctly, AI could help doctors diagnose individual patients faster, do less paperwork, and see more patients in a day," he said.

Another user wrote, "I suffered from extreme debilitating TMJ since 2023. Got braces last year, which relieved it to a great extent, but anything from stress, anxiety, or sleep deprivation still triggers it. Hence, like any individual, I find myself on ChatGPT, and touching the tongue to the top palate WORKS."

A third user commented, "Isn’t it weird how every time OpenAI drops a model, there’s a Reddit post like ‘my son walked again after one ChatGPT session’? Every model gets instantly outclassed by others, yet only OpenAI gets this hype. It’s getting tired."

The big message from Hoffman’s story isn’t that ChatGPT is some kind of magic doctor. It’s that more and more people are turning to AI to help them understand and manage their health. That can be a good thing: it gives people more tools to take control of their well-being. But it also comes with risks, especially if the advice is wrong or misunderstood. Like any tool, AI can be helpful, but only when used wisely.