Objective: To evaluate the performance of four Chinese large language models (ERNIE Bot, ChatGLM2, Spark Desk, and Qwen-14B-Chat), all of which have massive user bases and significant public attention, in responding to consultations on prostate cancer (PCa)-related perioperative nursing and health education.
Methods: We designed a questionnaire comprising 15 questions of common concern to patients undergoing radical prostatectomy and 2 typical nursing cases, and entered the questions into each of the four language models for simulated consultation. Three nursing experts rated the model responses on a pre-designed 5-point Likert scale for accuracy, comprehensiveness, understandability, humanistic care, and case analysis. We then evaluated and compared the performance of the four models using visualization tools and statistical analyses.
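The statistical comparison of Likert ratings across the four models could, for instance, be carried out with a Kruskal-Wallis H test, which suits ordinal data; the sketch below illustrates this approach. All scores shown are hypothetical placeholders, not data from the study, and the test choice is an assumption since the abstract does not name the specific statistical method.

```python
# Hypothetical sketch: comparing pooled Likert (1-5) ratings of the four
# models with a Kruskal-Wallis H test (a non-parametric test suitable
# for ordinal scores from multiple independent groups).
# The scores below are illustrative placeholders only.
from scipy.stats import kruskal

scores = {
    "ERNIE Bot":     [3, 4, 3, 4, 3, 4, 3, 4, 3, 4],
    "ChatGLM2":      [4, 4, 3, 5, 4, 4, 3, 4, 4, 5],
    "Spark Desk":    [4, 3, 4, 4, 3, 5, 4, 3, 4, 4],
    "Qwen-14B-Chat": [5, 4, 5, 4, 5, 5, 4, 5, 4, 5],
}

# H statistic and p-value across the four rating distributions
h_stat, p_value = kruskal(*scores.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")
```

A significant p-value would indicate that at least one model's rating distribution differs from the others, after which pairwise post-hoc tests could localize the difference.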
Results: All four models generated high-quality texts with no misleading information and exhibited satisfactory performance. Qwen-14B-Chat scored the highest in all aspects and produced relatively stable outputs across repeated tests compared with ChatGLM2. Spark Desk performed well in understandability but lacked comprehensiveness and humanistic care. Both Qwen-14B-Chat and ChatGLM2 demonstrated excellent performance in case analysis, whereas the overall performance of ERNIE Bot was slightly inferior. All things considered, Qwen-14B-Chat was superior to the other three models in consultations on PCa-related perioperative nursing and health education.
Conclusion: In PCa-related perioperative nursing, large language models represented by Qwen-14B-Chat are expected to become powerful auxiliary tools that provide patients with additional medical expertise and information support, thereby improving patient compliance and the quality of clinical treatment and nursing.
Keywords: prostate cancer; large language model; artificial intelligence; health education.