ChatGPT gets passing score in US medical licensing exam that takes years of training

OpenAI's AI chatbot ChatGPT can score at or near the roughly 60 per cent passing threshold for the United States Medical Licensing Exam (USMLE), with responses that are internally coherent and contain frequent insights, according to a study.

ChatGPT is designed to generate human-like writing by predicting upcoming word sequences. Unlike most chatbots, ChatGPT cannot search the internet. Instead, it generates text using word relationships predicted by its internal processes.

In the study, published in the open-access journal PLOS Digital Health, Tiffany Kung, Victor Tseng, and colleagues at AnsibleHealth tested ChatGPT's performance on the USMLE.

Taken by medical students and physicians-in-training, the USMLE assesses knowledge spanning most medical disciplines, from biochemistry to diagnostic reasoning to bioethics.

After screening to remove image-based questions, the authors tested the software on 350 of the 376 public questions available from the June 2022 USMLE release.

After indeterminate responses were removed, ChatGPT scored between 52.4 per cent and 75 per cent across the three USMLE exams.

The passing threshold each year is approximately 60 per cent.

ChatGPT also demonstrated 94.6 per cent concordance across all its responses and produced at least one significant insight (something that was new, non-obvious, and clinically valid) for 88.9 per cent of its responses.

Notably, ChatGPT outperformed PubMedGPT, a counterpart model trained exclusively on biomedical literature, which scored 50.8 per cent on an older dataset of USMLE-style questions.

"Reaching the passing score for this notoriously difficult expert exam, and doing so without any human reinforcement, marks a notable milestone in clinical AI maturation," said the authors.

"ChatGPT contributed substantially to the writing of our manuscript. We interacted with ChatGPT much like a colleague, asking it to synthesise, simplify, and offer counterpoints to drafts in progress. All of the co-authors valued ChatGPT's input," said Kung.
