Introduction: ChatGPT is an artificial-intelligence-based large language model able to generate human-like responses to text input, and its performance has already been the subject of several studies in different fields. The aim of this study was to evaluate the performance of ChatGPT in the management of maxillofacial clinical cases.
Materials and methods: A total of 38 clinical cases of patients consulting at the Stomatology-Maxillofacial Surgery Department were prospectively recruited and presented to ChatGPT, which was queried for diagnosis, differential diagnosis, management, and treatment. The performance of trainees and of ChatGPT was compared by three blinded board-certified maxillofacial surgeons using the AIPI score.
Results: The average total AIPI score was 18.71 for the practitioners and 16.39 for ChatGPT, a significantly lower value (p < 0.001). According to the experts, ChatGPT was significantly less effective for diagnosis and treatment (p < 0.001). According to two of the three experts, ChatGPT was also significantly less effective in considering patient data (p = 0.001) and in suggesting additional examinations (p < 0.0001). The primary diagnosis proposed by ChatGPT was judged not plausible and/or incomplete in 2.63% to 18% of cases; the suggested additional examinations included inadequate examinations in 2.63% to 21.05% of cases; the proposed therapeutic approach combined pertinent but incomplete findings in 18.42% to 47.37% of cases, while the therapeutic findings were considered pertinent, necessary, and inadequate in 18.42% of cases.
Conclusions: ChatGPT appears less efficient than practitioners in establishing a diagnosis, selecting the most adequate additional examinations, and proposing pertinent and necessary therapeutic approaches.
Keywords: ChatGPT; Comparison; Diagnosis; Maxillofacial surgery.
Copyright © 2024 Elsevier Masson SAS. All rights reserved.