A recent study published in the European Heart Journal – Digital Health by Skalidis et al. demonstrates the potential of artificial intelligence (AI) language models in medical education. The study evaluated the accuracy of the AI language model ChatGPT in answering questions from the European Exam in Core Cardiology (EECC). It utilised a training dataset of more than 30 million words drawn from cardiology-related articles, and the model was evaluated on a set of 300 questions from the EECC.
The results showed that ChatGPT achieved an accuracy of 80.6% on these questions, comparable to the performance of human cardiologists. The model was also able to provide detailed explanations for its answers, highlighting its potential as a tool for enhancing medical education.
However, the authors acknowledge limitations to the use of AI in medical education, including the need for continued development of the technology to ensure its accuracy and reliability, as well as concerns about its potential impact on traditional methods of medical education.
Nonetheless, the study’s findings highlight the potential of AI language models to enhance medical education and improve healthcare outcomes. As AI technology continues to evolve, it may become a valuable tool for medical students and practitioners seeking to expand their knowledge and skills.
Read more here: https://academic.oup.com/ehjdh/advance-article/doi/10.1093/ehjdh/ztad029/7137370?login=false