Artificial intelligence in scientific medical writing: Legitimate and deceptive uses and ethical concerns

Eur J Intern Med. 2024 Sep;127:31-35. doi: 10.1016/j.ejim.2024.07.012. Epub 2024 Jul 24.

Abstract

The debate surrounding the integration of artificial intelligence (AI) into scientific writing has already attracted significant interest in the medical and life sciences. While AI can undoubtedly expedite the process of manuscript creation and correction, it also raises several concerns. The crossover between AI and the health sciences is relatively recent, but the use of AI tools among physicians and other life scientists is growing rapidly. Amid this whirlwind, it is becoming essential to understand where we are heading and where the limits lie, including from an ethical perspective. Modern conversational AIs exhibit a context awareness that enables them to understand and remember a conversation beyond any predefined script. Even more impressively, they learn and adapt as they engage with a growing volume of human language input. They all share neural networks as their underlying mathematical models and differ from older chatbots in their use of a specific network architecture called the transformer model [1]. Some were trained on more than 100 terabytes (TB) of text data (e.g., BLOOM, LaMDA) or even more than 500 TB (e.g., Megatron-Turing NLG); GPT-4, the model behind ChatGPT 4.0, was trained on nearly 45 TB, but it stays up to date through its internet connection and can integrate with various plugins that enhance its functionality, making it multimodal.
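As a brief sketch of the mechanism behind the transformer model mentioned above, its core building block is scaled dot-product attention; in the notation standard in the transformer literature, with query, key, and value matrices \(Q\), \(K\), \(V\) and key dimension \(d_k\):

\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
\]

Each token's output is a weighted average over all tokens' values, with the weights computed from query-key similarity; this ability to attend to an entire input rather than a fixed script underlies the context awareness described above.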

Keywords: Artificial intelligence; ChatGPT; Chatbots; Large language models; Medical writing; Natural language understanding.

MeSH terms

  • Artificial Intelligence* / ethics
  • Humans
  • Medical Writing* / standards