Objectives: To examine whether comfort with the use of ChatGPT in society differs from comfort with other uses of AI in society, and to identify whether this comfort and other patient characteristics, such as trust, privacy concerns, respect, and tech-savviness, are associated with the expected benefit of using ChatGPT to improve health.
Materials and methods: We analyzed an original survey of U.S. adults using the NORC AmeriSpeak Panel (n = 1787). We conducted paired t-tests to assess differences in comfort with AI applications, and we fit weighted univariable regressions and 2 weighted multivariable logistic regression models to identify predictors of expected benefit with and without accounting for trust in the health system.
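A rough sketch of this kind of analysis is shown below, assuming synthetic data. The variable names (comfort_chatgpt, expect_benefit, weight, and so on) are hypothetical stand-ins for the survey items, not the study's actual instrument or code, and weighting a GLM by the survey weights only approximates full complex-survey (AmeriSpeak) design-based variance estimation.

```python
# Minimal sketch, not the authors' code: data are synthetic and column names are
# illustrative placeholders for the survey measures described in the abstract.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500  # synthetic respondents for illustration only

df = pd.DataFrame({
    "comfort_chatgpt": rng.integers(1, 6, n),   # Likert-style comfort with ChatGPT
    "comfort_other_ai": rng.integers(1, 6, n),  # comfort with another AI application
    "respected": rng.integers(1, 6, n),
    "privacy_concern": rng.integers(1, 6, n),
    "female": rng.integers(0, 2, n),
    "age": rng.integers(18, 90, n),
    "education": rng.integers(1, 5, n),
    "system_trust": rng.integers(1, 6, n),
    "expect_benefit": rng.integers(0, 2, n),    # binary outcome
    "weight": rng.uniform(0.5, 2.0, n),         # survey weight
})

# Paired t-test: comfort with ChatGPT vs. comfort with another AI application.
t_stat, p_val = stats.ttest_rel(df["comfort_chatgpt"], df["comfort_other_ai"])
print(f"paired t-test: t = {t_stat:.2f}, p = {p_val:.3g}")

# Weighted logistic regression, model 1: without system trust.
X1 = sm.add_constant(df[["comfort_chatgpt", "respected", "privacy_concern",
                         "female", "age", "education"]])
m1 = sm.GLM(df["expect_benefit"], X1, family=sm.families.Binomial(),
            freq_weights=df["weight"].to_numpy()).fit()

# Model 2: the same predictors plus trust in the health system.
X2 = sm.add_constant(df[["comfort_chatgpt", "respected", "privacy_concern",
                         "female", "age", "education", "system_trust"]])
m2 = sm.GLM(df["expect_benefit"], X2, family=sm.families.Binomial(),
            freq_weights=df["weight"].to_numpy()).fit()

print(m2.summary())
```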
Results: Comfort with the use of ChatGPT in society was relatively low and differed from comfort with other, common uses of AI. Comfort was strongly associated with expecting benefit. Other statistically significant factors in the multivariable analysis without system trust included feeling respected and low privacy concerns. Females, younger adults, and those with higher levels of education were less likely to expect benefits in the models both with and without system trust; system trust itself was positively associated with expecting benefits (P = 1.6 × 10⁻¹¹). Tech-savviness was not associated with the outcome.
Discussion: Understanding the impact of large language models (LLMs) from the patient perspective is critical to ensuring that expectations align with performance, a form of calibrated trust that acknowledges the dynamic nature of trust.
Conclusion: Including measures of system trust in evaluating LLMs could capture a range of issues critical for ensuring patient acceptance of this technological innovation.
Keywords: artificial intelligence; large language model; patient trust; public opinion.