ChatGPT scores 46% on ophthalmology board certification practice test

This is editorially independent content

In a new study published in JAMA Ophthalmology, ChatGPT correctly answered 58/125 (46%) text-based multiple-choice practice questions from a test preparation program for board certification in ophthalmology.

Give me some background on ChatGPT, first.

Created by Microsoft-backed OpenAI, ChatGPT is perhaps the best-known example of generative AI.

Other popular examples include Google Bard, Midjourney, and DALL-E (though the latter two are known primarily for image generation).

What is generative AI?

Generative artificial intelligence (AI) refers to systems, including chatbots, that can produce text or images in response to user prompts.

These chatbots are trained on immense amounts of text data but, as many recent studies have shown, cannot judge or verify the accuracy of the content they generate.

About this study . . .

Researchers fed 125 text-based multiple-choice questions to ChatGPT.

These questions were provided by the OphthoQuestions bank of practice questions for ophthalmology board certification.

Tell me more.

In the first round, held in January 2023, ChatGPT correctly answered 58 of these questions (46%). It answered 11 of 14 questions in the general medicine category correctly, but none in the retina and vitreous category.

In the second round, held in February 2023, ChatGPT correctly answered 73 of the 125 multiple-choice questions (58%), and 42 of 78 (54%) stand-alone questions.
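For readers who want to verify the reported percentages, a minimal Python sketch using the figures above (the labels are illustrative, not taken from the study):

```python
# Sanity-check the percentages reported in the article.
rounds = {
    "Round 1 (Jan 2023)": (58, 125),
    "Round 2 (Feb 2023)": (73, 125),
    "Round 2, stand-alone": (42, 78),
}

for label, (correct, total) in rounds.items():
    pct = round(100 * correct / total)  # percentage, rounded to nearest whole number
    print(f"{label}: {correct}/{total} = {pct}%")
```

Running this reproduces the 46%, 58%, and 54% figures cited above.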

Takeaway.

Ultimately, ChatGPT is not yet advanced enough to provide quality assistance in preparing for ophthalmology board certification.

Further, the use of AI in medicine is evolving quickly, which means practitioners should stay up to date on what these tools can (and can't) do.
