We had Dr. Google…Now we have Dr. AI!


In an era where information on virtually any topic is readily available online, Dr. Dominique Fradin-Read would like to remind her patients of the potential risks and harm that can result from self-diagnosis or self-treatment using AI, especially as more individuals turn to AI to manage their healthcare.

Let’s begin with a few recent anecdotes from our VitaLifeMD practice.

One of our patients—a very kind woman who tends to be naturally anxious—developed back pain and had been on pain medication for a week with little improvement. After observing limited progress, we proceeded with an MRI, which revealed a “cyst” near her L5 vertebra. The report indicated that it was likely a synovial cyst compressing a nerve—a benign yet painful condition.

Soon after receiving her results, the patient called our office multiple times, understandably eager to discuss her diagnosis. Dr. Fradin-Read made every effort to return the call as soon as she was free from consultations. However, by that time, the patient was already on a plane and unable to answer.

While in flight, she turned to AI for answers. She consulted ChatGPT, received a detailed explanation of her condition, and then sent our office a list of ten questions based on what she had read. One part of the AI response even seemed to raise the possibility of cancer, which, though unlikely, added unnecessary stress to an already anxious patient.

The AI consultation presented a wide range of treatment options—from physical therapy to lumbar fusion surgery. However, Dr. Fradin-Read was unable to offer any specific recommendations for this patient without a comprehensive physical exam and a thorough review of the imaging by a spine specialist. At that point, the most appropriate next step was simply to schedule an in-person consultation with a qualified physician as soon as possible after landing.

Reflecting on the situation, Dr. Fradin-Read questioned whether the AI consultation had truly been helpful—or if it had, in fact, caused more harm than good. While AI can offer general information, in this case, it seemed to overwhelm the patient with possibilities, many of which did not apply to her specific condition, ultimately increasing her anxiety rather than providing clarity.

The second case involved a well-meaning patient who submitted his lab results and asked an AI tool to recommend a natural supplement regimen. The response was extensive!!! It recommended a staggering 42 pills per day, along with a so-called “magic powder” to be added to the routine.

Given that this patient already exhibits some obsessive-compulsive tendencies, Dr. Fradin-Read had to carefully apply both her clinical judgment and communication skills to navigate the situation. With patience and empathy, she worked to identify which supplements were truly beneficial and gently persuaded him that a more reasonable regimen—closer to 20 pills per day—was more than sufficient to support his health. 

The third case involved a very anxious patient with a tendency toward hypochondria, who had been managing her hypothyroidism for several years with thyroid medication from a compounding pharmacy. Due to changes in her insurance coverage, she had to switch to a commercial medication—levothyroxine.

Shortly after the change, she came across an article online and emailed Dr. Fradin-Read, alarmed by a study suggesting that levothyroxine could lead to “an array of unwanted side effects,” including bone loss in older adults. What this well-meaning patient didn’t realize is that such risks are well known and taught in medical school. Responsible medical practice involves prescribing the lowest effective dose and regularly monitoring patients—something Dr. Fradin-Read consistently does.

Ironically, the compounded medication the patient had been taking for years carries the same risks if given in excessive doses. Yet, because that formulation wasn’t flagged by AI or by popular articles, she had never worried about it before. Dr. Fradin-Read reassured her that she was on the lowest necessary dose to maintain her health. The patient ultimately acknowledged that this experience highlighted a common issue: how online “news stories” can confuse or mislead lay readers who lack medical context.

These are just a few examples that illustrate the current environment of AI in healthcare and the reasons why Dr. Fradin-Read has real concerns about the rapid pace of AI development and its impact on her interactions with patients.

These are some of the reasons why AI can be harmful to patients who might be tempted to self-diagnose their medical conditions and figure out their own treatment:

Misdiagnosis and Misinterpretation of Results

AI tools may provide generic or oversimplified explanations.

  • Patients may misunderstand normal ranges (e.g., what’s “normal” may vary by age, gender, or context).

  • A "high" or "low" result may be clinically insignificant—but AI might not explain that nuance.

Patients might incorrectly identify symptoms, leading to:

  • Wrong condition: For example, mistaking a heart attack for indigestion.

  • Delayed treatment: If a serious illness is assumed to be minor, care may be delayed.

Overlooking Serious Conditions and False Reassurance

Some may incorrectly believe they are fine and avoid seeing a doctor, missing a chance for early diagnosis and treatment.

Some serious diseases (like cancer, stroke, or infections) may start with mild or vague symptoms. Without proper evaluation of data and appropriate testing, a patient may dismiss something important.

 

Lack of Personalization

  • Lab results need to be interpreted based on the patient's overall clinical picture.

  • AI lacks access to the patient's complete medical history, a physical exam, and a doctor’s judgment.

Unnecessary Anxiety - Misinterpretation or Misuse of Data - Information Overload

  • Using the internet for self-diagnosis (like "Dr. Google" or AI) often leads people to assume the worst-case scenario, causing stress and panic over something minor, such as harmless abnormalities that don’t require treatment.

  • AI can generate large volumes of detailed info, which may overwhelm or confuse patients. Patients might misread AI advice or take it too literally.

  • Medical terms or complex explanations can trigger worry, especially if misunderstood.

Example: a benign symptom could be linked to cancer in a list, causing panic, even though it's statistically unlikely.

  • Risks to mental health have been documented following excessive use of AI for medical information by patients, with symptoms such as anxiety, stress, sleep disturbances, obsessive checking, and second-guessing that can impact one’s everyday life.

Inappropriate Treatment - False Sense of Safety

People might:

  • Use over-the-counter or herbal remedies that are ineffective or harmful.

  • Take medications that interact poorly with others they are already taking.

  • Some "natural" remedies are neither safe nor regulated, and AI does not usually mention these risks. Natural is not the same as safe: many herbs can be toxic or cause side effects, especially in high doses or for vulnerable populations (children, the elderly, pregnant people).

Confirmation Bias

This has become a frequent situation with AI. People often look for answers that match their fears or beliefs rather than balanced medical facts. They want to prove their case and phrase their questions to AI in a way that confirms their preconceived ideas, even when these are not founded on scientific data.

Legal and Privacy Concerns – Ethical Aspects of AI in Healthcare

We should not forget this increasingly urgent issue. Beyond the potential harm that AI can cause to patients’ health, it is critical to examine both the legal and ethical aspects of AI use in healthcare.

Many patients do not fully understand the implications of uploading their medical data online, especially for AI analysis. This feeds directly into ethical and legal challenges, particularly regarding data exploitation, consent, and transparency.

There is a total lack of informed consent:

  • Many AI tools don’t clearly disclose how patient data will be used.

  • Terms of service may be vague or hidden in legalese.

  • Ethical problem: Consent is meaningless if it's not truly informed.

By sharing their results online, patients do not realize that their data may be:

  • Stored indefinitely

  • Sold to third parties

  • Used to train commercial AI models

Example: A patient uploads their chest X-ray to a "free AI diagnosis tool." That image may now be part of a global training dataset owned by a private company. 

Companies may monetize health data to develop AI products — without compensating or even informing the patients who provided it.

Who would want Elon Musk or some other AI guru to gain access to one’s private medical records?

Even when data is anonymized, re-identification is often possible, especially when medical data is combined with other sources, such as information from the devices used to send it. This exposes patients to privacy breaches, insurance discrimination, or identity theft.

To come back to Dr. Fradin-Read’s daily practice: even though she fully recognizes the utility of AI in helping improve the practice of medicine and adding a new scientific approach to diagnosis and treatment, she does not believe it should be a tool in the hands of patients without a medical background.

When her patients come to her with a full report downloaded from some AI site on their computer, she respectfully and humorously answers, “AI has not gone through 13 years of medical school education and residency training!!”

The bottom line is that only trained healthcare professionals can safely diagnose and manage medical conditions using physical exams, lab tests, and experience. The practice of medicine has always been, and will remain, an Art that requires the Human Touch. No kind of intelligence, artificial or otherwise, can replace that; it will always lack the soul and heart that only a human connection can provide.

Dominique Fradin-Read