Can the ChatGPT neural network be a diabetes consultant?

In a recent study published in the journal PLoS ONE, scientists investigated whether the popular neural network ChatGPT can answer frequently asked questions about diabetes, and whether people can distinguish an answer written by artificial intelligence from one written by a doctor.

Artificial intelligence (AI), and ChatGPT in particular, has received significant attention for its potential clinical applications. Research has shown that people are more receptive to AI-based solutions in low-risk scenarios. The scientists argue that this calls for further research into understanding and using large language models such as ChatGPT in routine clinical care. In the current study, researchers from Denmark assessed ChatGPT's expertise in diabetes, in particular its ability to answer frequently asked questions about the disease.

They tested whether study participants who were knowledgeable about diabetes could distinguish human-written answers to common questions about diabetes from answers written by ChatGPT. The scientists also assessed which groups of participants were more likely to detect the ChatGPT-generated responses.

The study was a closed, computer-based survey modeled on the Turing test and distributed to all employees of the Steno Diabetes Center Aarhus (SDCA). The survey included 10 questions, each with two answers: one written by a human and the other by ChatGPT. Participants had to identify the AI-generated answer. The questions addressed the pathophysiology, therapy, and complications of diabetes, as well as physical activity and nutrition. The survey ran from January 23 to 27, 2023.

Of the 311 people invited, 183 completed the survey (a 59% response rate). Of the respondents, 70% (n=129) were women, 64% had heard of ChatGPT before, 19% had used it, and 58% (n=107) had interacted with patients with diabetes as healthcare practitioners. Across the 10 questions, the percentage of correct answers (i.e., of participants correctly identifying the AI-generated answer) ranged from 38% to 74%.

On average, participants correctly identified the ChatGPT-generated responses 60% of the time. Men and women recognized the AI response correctly 64% and 58% of the time, respectively. Individuals with prior exposure to diabetes patients correctly recognized the AI responses 61% of the time, compared with 57% for those without such exposure.

Overall, this work serves as a starting point for exploring the capabilities and limitations of ChatGPT in providing patient-centered recommendations for the management of chronic diseases, particularly diabetes. While ChatGPT showed some potential for accurately answering frequently asked questions, problems with misinformation and a lack of detailed, personalized advice were apparent.

As large language models increasingly intersect with healthcare, rigorous studies are needed to evaluate their safety, their effectiveness, and the ethical aspects of their use in patient care, highlighting the need for a robust regulatory framework and ongoing oversight.