
Doctor ChatGPT: heads and tails of artificial intelligence in the consultation

2023-06-05T06:41:49.024Z



A study shows that the 'chatbot' outdoes doctors, offering more empathetic and higher-quality answers to patients' questions, but experts insist that a doctor must always give the final review


A citizen asks about the risk of dying after swallowing a toothpick and receives two answers. The first points out that two to six hours after ingestion the toothpick has probably already passed into the intestines, and that many people swallow toothpicks without anything happening, but warns that if you feel "stomach pain", you should go to the emergency room. The second response is along the same lines, insisting that, although concern is normal, serious harm is unlikely after swallowing a toothpick because it is made of wood, which is neither toxic nor poisonous, and it is a small utensil; however, it adds, if there is "abdominal pain, difficulty swallowing or vomiting", you should see a doctor: "It's understandable that you feel paranoid, but try not to worry too much," it reassures.

The two answers say the same thing in substance but differ in form. One is more aseptic and terse; the other, more empathetic and detailed. The first was written by a doctor, in his own hand; the second, by ChatGPT, the generative artificial intelligence (AI) tool that has revolutionized the planet in recent months. The study in which this experiment is framed, published in the journal JAMA Internal Medicine, set out to explore the role AI assistants could play in medicine, comparing the answers given by real doctors and by the chatbot to health questions posed by citizens in an internet forum. The conclusion, after an external panel of health professionals who did not know who had answered what analyzed the responses, was that ChatGPT's explanations were more empathetic and of higher quality 79% of the time.

The explosion of new AI tools around the world has opened a debate on their potential in the field of health as well. ChatGPT, for example, is seeking its place as a support for health workers in carrying out medical procedures or avoiding bureaucratic tasks and, at street level, is already emerging as an eventual substitute for the imprecise and often misguided doctor Google. The experts consulted say it is a technology with a lot of potential, but one still in its infancy: the regulatory framework for its use in real medical practice still needs to be refined, the ethical doubts it raises need to be resolved and, above all, it must be accepted that it is a fallible tool that can be wrong. Everything that comes out of that chatbot will always require the final review of a health professional.


Paradoxically, the most empathetic voice in the JAMA Internal Medicine study is the machine, not the human. At least in the written reply. Josep Munuera, head of the Diagnostic Imaging Service at Hospital Sant Pau in Barcelona and an expert in digital technologies applied to health, warns that the concept of empathy is broader than this study can capture. Written communication is not the same as a face-to-face conversation, nor are doubts raised on a social network the same as those raised in a consultation. "When we talk about empathy, we are talking about many issues. At the moment, it is difficult to replace non-verbal language, which is very important when a doctor has to talk to a patient or their family," he says. But he does acknowledge the potential of these generative tools to bring medical jargon down to earth: "In written communication, medical technical language can be complex and we may have difficulty translating it into understandable language. These algorithms probably find the equivalence between the technical word and another adapted to the receiver."

Joan Gibert, bioinformatician and a leading figure in the development of AI models at Hospital del Mar in Barcelona, adds another variable when assessing the machine's potential empathy compared with the doctor's. "The study mixes two concepts that enter the equation: ChatGPT itself, which can be useful in certain scenarios and has the ability to string words together in a way that creates the feeling of being more empathetic, and the burnout of doctors, that emotional exhaustion from caring for patients that leaves clinicians without the capacity to be more empathetic," he explains.

The danger of "hallucinations"

In any case, and as with the famous doctor Google, you always have to be careful with the answers ChatGPT gives, however sensitive or friendly it may seem. Experts point out that the chatbot is not a doctor and can fail. Unlike other algorithms, ChatGPT is generative: it creates information from the databases on which it has been trained, but some of the answers it produces may be invented. "We must always bear in mind that it is not an independent entity and cannot serve as a diagnostic tool without supervision," insists Gibert.

These chatbots can suffer from what experts call "hallucinations", explains the Hospital del Mar bioinformatician: "In certain situations, it tells you something that is not true. The chat puts words together in a way that is coherent and, since it has a lot of information, it can be valuable. But it has to be reviewed because, if not, it can feed fake news." Munuera also highlights the importance of "knowing the database that trained the algorithm, because if the databases are insufficient, the answer will be too."

"You have to understand that when you ask him to make a diagnosis, he may invent a disease"

Josep Munuera, Hospital Sant Pau de Barcelona

On the street, the potential uses of ChatGPT in health are limited, since the information it produces can lead to errors. Jose Ibeas, nephrologist at the Parc Taulí Hospital in Sabadell and secretary of the Big Data and Artificial Intelligence Group of the Spanish Society of Nephrology, points out that it is "useful for the first layers of information, because it synthesizes information and helps, but when you enter a more specific area, in more complex pathologies, its usefulness is minimal or erroneous". Munuera agrees and emphasizes that "it is not an algorithm that helps resolve doubts". "You have to understand that when you ask it to make a differential diagnosis, it may invent a disease," he warns. And, in the same way, the algorithm can answer a citizen's question by concluding that it is nothing serious when, in fact, it is: an opportunity for care can be lost because the person is satisfied with the chatbot's answer and does not consult a real doctor.

Where experts see more room for these applications is as a tool to support health professionals; for example, to help answer patients' questions in writing, although always under the doctor's supervision. The JAMA Internal Medicine study posits that this would help "improve workflow" and patient outcomes: "If more patients' questions are answered quickly, empathetically and at a high level, it could reduce unnecessary clinic visits, freeing up resources for those who need them. In addition, messages are a critical resource for fostering patient equity, where people who have mobility limitations or irregular work schedules are more likely to resort to messages," the authors note.

The scientific community is also studying the use of these tools for other repetitive work, such as filling in records and reports. "Starting from the premise that always, always, always, everything will need review by the doctor," says Gibert, support with bureaucratic tasks, repetitive but important, frees up time for doctors to devote to other matters, such as the patient himself. An article published in The Lancet raises, for example, their potential to streamline discharge reports: automating this process could ease workloads and even improve the quality of reports, although the authors say they are aware of the difficulties of training algorithms with large databases and, among other problems, of the risk of "depersonalization of care", something that could generate resistance to this technology.

Ibeas insists that, for any medical use, it will be necessary to "validate" these kinds of tools and to settle the distribution of responsibilities: "The systems will never decide. The one who signs at the end is always the doctor," he says.

Ethical issues

Gibert also points out some ethical considerations that must be taken into account when bringing these tools into clinical practice: "This type of technology needs to be under a legal umbrella, with solutions integrated within the hospital structure, and it must be ensured that patient data are not used to retrain the model. And if someone wants to do the latter, let them do it within a project, with anonymized data and following all controls and regulations. You can't share sensitive patient information recklessly."

The bioinformatician also points out that these AI solutions, such as ChatGPT or models that assist diagnosis, introduce "biases" into the doctor's day-to-day work; for example, they can condition the doctor's decision in one direction or another. "The fact that the professional has the result of an AI model modifies the evaluator himself: the interaction can be very good, but it can introduce problems, especially for professionals with less experience. That is why the process has to be done in parallel: until the professional gives the diagnosis, he cannot look at what the AI says."

A group of researchers from Stanford University also reflected, in an article in JAMA Internal Medicine, on how these tools can help make healthcare more human: "The practice of medicine is much more than processing information and associating words with concepts; it is attributing meaning to those concepts while connecting with patients as a trusted partner in building healthier lives. We can hope that emerging AI systems can help tame the laborious tasks that overwhelm modern medicine and empower physicians to refocus on treating human patients."

While we wait to see how this incipient technology spreads and what its repercussions will be, Munuera's message to the public is clear: "You have to understand that [ChatGPT] is not a medical tool and that there is no health professional confirming the veracity of the answer. You have to be prudent and understand what the limits are." Ibeas sums up: "The system is good, robust, positive and it is the future, but like any tool, you have to know how to handle it so that it does not become a weapon."


Source: El País
