Daily News
Tech for Good: Unlocking the potential benefits of algorithmic medical insights
While chatbots excel in information, relying on them for medical advice may be a gamble
We expect medical professionals to furnish us with dependable information concerning our health and potential treatments, thereby enabling us to make judicious decisions about medication or other interventions. In instances where medical practitioners, entrusted with delivering authoritative medical counsel, engage in what is academically referred to as "bullshitting" (a form of persuasion with no regard for the truth), decisions founded upon such misleading advice may culminate in adverse consequences, potentially even fatal ones.
Unlike outright lying, bullshitting presents a heightened risk owing to its wholesale disregard for the truth. Fortunately, established ethical standards and legal ramifications typically dissuade medical practitioners from such practices. However, the prospect of misleading medical guidance emanating from sources other than qualified physicians raises pertinent concerns.
Enter ChatGPT, a powerful chatbot designed to mimic human conversation. While such chatbots prove effective for a wide array of informational queries, relying on them for medical advice may be likened to a game of chance. Oriented toward persuasive communication without any commitment to veracity, these chatbots present rhetoric so compelling that it obscures gaps in logic and factual accuracy, producing what may colloquially be termed "bullshit."
The crux of the matter lies in the fact that ChatGPT does not operate as bona fide artificial intelligence: it lacks the capacity to comprehend queries, evaluate evidence, and furnish justified responses. Instead, it predicts responses based on the words in a prompt, prioritising plausibility over truth. While this approach far surpasses conventional predictive text in power, it can yield information that is highly persuasive yet inaccurate, particularly in matters of critical health significance.
Unlike platforms such as Dr Google, which dispenses extensive information without any overt attempt to sound convincing, chatbots like ChatGPT proffer succinct, confident responses. While Dr Google is susceptible to misinformation, employing search engines to access validated health information remains advantageous. Chatbots, by contrast, not only have the potential to mislead users but may also record and actively solicit additional personal health information, raising serious privacy concerns.
For those contemplating seeking medical advice from ChatGPT, the recommended course of action is clear. First, refrain from using it altogether. Second, should usage ensue, validate the accuracy of the chatbot's responses against authoritative sources; Dr Google, for instance, can guide users toward reputable information. Third, furnish minimal personal information to chatbots, as an excess of data may yield more persuasive yet inaccurate medical advice. Despite potential advantages, circumspection is imperative when navigating medical counsel facilitated by chatbots like ChatGPT.
Shalini is an Executive Editor with Apeejay Newsroom. With a PG Diploma in Business Management and Industrial Administration and an MA in Mass Communication, she was formerly an Associate Editor with News9live. She has worked on varied topics, from news-based pieces to feature articles.