Activity

Activity ID

14586

Expires

December 23, 2028

Format Type

Journal-based

CME Credit

1

Fee

$30

CME Provider: JAMA Ophthalmology

Description of CME Course

Importance  Democratizing artificial intelligence (AI) enables model development by clinicians who lack coding expertise, powerful computing resources, and large, well-labeled data sets.

Objective  To determine whether resource-constrained clinicians can use self-training via automated machine learning (ML) and public data sets to design high-performing diabetic retinopathy classification models.

Design, Setting, and Participants  This diagnostic quality improvement study was conducted from January 1, 2021, to December 31, 2021. A self-training method without coding was used on 2 public data sets with retinal images from patients in France (Messidor-2 [n = 1748]) and the UK and US (EyePACS [n = 58 689]) and externally validated on 1 data set with retinal images from patients of a private Egyptian medical retina clinic (Egypt [n = 210]). An AI model was trained to classify referable diabetic retinopathy as an exemplar use case. Messidor-2 images were assigned adjudicated labels available on Kaggle; 4 images were deemed ungradable and excluded, leaving 1744 images. A total of 300 images randomly selected from the EyePACS data set were independently relabeled by 3 blinded retina specialists using the International Classification of Diabetic Retinopathy protocol for diabetic retinopathy grade and diabetic macular edema presence; 19 images were deemed ungradable, leaving 281 images. Data analysis was performed from February 1 to February 28, 2021.

Exposures  Using public data sets, a teacher model was trained with labeled images using supervised learning. Next, the resulting predictions, termed pseudolabels, were used on an unlabeled public data set. Finally, a student model was trained with the existing labeled images and the additional pseudolabeled images.
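The teacher-student loop described above can be sketched in code. This is a minimal illustration using scikit-learn on synthetic data, not the study's actual pipeline (the authors used a no-code automated ML platform); the confidence threshold and model choice here are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for retinal-image features and referable-DR labels.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_lab, X_rest, y_lab, y_rest = train_test_split(X, y, train_size=300, random_state=0)
X_unlab, X_test, _, y_test = train_test_split(X_rest, y_rest, test_size=400, random_state=0)

# 1) Train the teacher on the labeled pool only (supervised learning).
teacher = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)

# 2) Predict on the unlabeled pool; the predictions are the pseudolabels.
#    Keeping only high-confidence predictions is a common (assumed) filter.
proba = teacher.predict_proba(X_unlab)
confident = proba.max(axis=1) >= 0.90
pseudo_y = proba.argmax(axis=1)[confident]

# 3) Train the student on labeled + pseudolabeled examples combined.
X_student = np.vstack([X_lab, X_unlab[confident]])
y_student = np.concatenate([y_lab, pseudo_y])
student = LogisticRegression(max_iter=1000).fit(X_student, y_student)
```

The student sees strictly more training examples than the teacher, which is the mechanism by which self-training can improve performance without additional expert labeling.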

Main Outcomes and Measures  The analyzed metrics for the models included the area under the receiver operating characteristic curve (AUROC), accuracy, sensitivity, specificity, and F1 score. The Fisher exact test was performed, and 2-tailed P values were calculated for failure case analysis.
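For reference, the reported metrics and the Fisher exact test can all be computed with standard scientific Python tools. This sketch uses toy predictions and a hypothetical 2x2 failure-case table; none of the numbers correspond to the study's data.

```python
import numpy as np
from scipy.stats import fisher_exact
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             roc_auc_score)

# Toy ground-truth labels and model scores (1 = referable diabetic retinopathy).
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 0])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.8, 0.7, 0.9, 0.4, 0.95, 0.05])
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "AUROC": roc_auc_score(y_true, y_score),  # threshold-independent
    "accuracy": accuracy_score(y_true, y_pred),
    "sensitivity": tp / (tp + fn),  # true-positive rate
    "specificity": tn / (tn + fp),  # true-negative rate
    "F1": f1_score(y_true, y_pred),
}

# 2-tailed Fisher exact test on a hypothetical failure-case 2x2 table
# (e.g., misclassified vs correctly classified counts for two models).
_, p_value = fisher_exact([[3, 7], [1, 9]], alternative="two-sided")
```

The Fisher exact test is appropriate for failure-case analysis because the cell counts involved (a few misclassified images) are typically too small for a chi-squared approximation.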

Results  For the internal validation data sets, AUROC values for performance ranged from 0.886 to 0.939 for the teacher model and from 0.916 to 0.951 for the student model. For external validation of automated ML model performance, AUROC values and accuracy were 0.964 and 93.3% for the teacher model, 0.950 and 96.7% for the student model, and 0.890 and 94.3% for the manually coded bespoke model, respectively.

Conclusions and Relevance  These findings suggest that self-training using automated ML is an effective method to increase both model performance and generalizability while decreasing the need for costly expert labeling. This approach advances the democratization of AI by enabling clinicians without coding expertise or access to large, well-labeled private data sets to develop their own AI models.

Disclaimers

1. This activity is accredited by the American Medical Association.
2. This activity is free to AMA members.

ABMS Member Board Approvals by Type
Commercial Support?
No

NOTE: If a Member Board has not deemed this activity for MOC approval as an accredited CME activity, this activity may count toward an ABMS Member Board’s general CME requirement. Please refer directly to your Member Board’s MOC Part II Lifelong Learning and Self-Assessment Program Requirements.

Educational Objectives

To identify the key insights or developments described in this article.

Keywords

Diabetic Retinopathy, Ophthalmic Imaging, Ophthalmology, Retinal Disorders, Artificial Intelligence

Competencies

Medical Knowledge

CME Credit Type

AMA PRA Category 1 Credit

DOI

10.1001/jamaophthalmol.2023.4508

The information provided on this page is subject to change. Please refer to the CME Provider’s website to confirm the most current information.