
Person receives medical attention following recommendations from Artificial Intelligence

AI permeates countless aspects of daily life, from scheduling travel to drafting emails. AI chatbots such as ChatGPT have become popular go-to tools for quick answers, and many people now say "just ask ChatGPT" almost instinctively. Health-related advice, however, is one area where that instinct can go badly wrong.


In a striking case detailed in a medical report published in the Annals of Internal Medicine, a man's reliance on an artificial intelligence chatbot for health advice led to a life-threatening condition. The man developed paranoid delusions and auditory and visual hallucinations after roughly three months of adding sodium bromide, a chemical not intended for human consumption, to his diet.

The incident serves as a reminder of the potential dangers of relying on AI for health advice. After the chatbot recommended he replace regular table salt with the chemical compound, the man sourced sodium bromide online and incorporated it into his daily routine without further research or consulting a medical professional.

Bromism, a condition caused by the toxic accumulation of bromide in the body, can produce a range of symptoms, including seizures, coma, and even death. In this case, the man's blood bromide level was 1,700 mg/L, far above safe levels, and he was hospitalized and diagnosed with bromism.

The man's condition and the chatbot's role in his decision-making underscore the critical need for stronger safeguards before such tools can be trusted for healthcare decisions. AI chatbots can repeat and elaborate on false medical information with confidence, putting users at risk of following harmful advice.

Moreover, chatbots are not adequately trained or supervised by medical professionals, especially regarding mental health. Because they cannot provide genuine empathy or nuanced clinical judgment, they can exacerbate problems such as delusions, self-harm, and suicide risk.

The case has alarmed medical professionals and AI skeptics alike, and it serves as a cautionary tale: trusting a chatbot over a doctor in health matters can carry dangerous consequences.

While AI chatbots can offer general information or initial guidance, relying on them for specific health decisions without expert medical consultation carries significant risks of misinformation, harmful health outcomes, and mental health deterioration. Stronger safeguards, warning prompts, and regulatory measures are needed to mitigate these dangers.

