
Novel research supports AI-led screenings for DR


A recent peer-reviewed study published in Clinical Medicine Insights: Endocrinology and Diabetes and supported by Orbis International examined the use of the organization’s artificial intelligence (AI) tool for diagnosing diabetic retinopathy (DR).

Let’s start with some background.

The World Health Organization (WHO) has reported that, until recently, type 2 diabetes mellitus (T2DM) largely manifested only in adults; the condition has been rising rapidly in low- and middle-income countries (LMICs).

Now, however, pediatric patients are increasingly being diagnosed with T2DM.

Case in point: In 2020, the International Diabetes Federation (IDF) reported an estimated 44 million children and adolescents (0-19 years of age) across the globe had T2DM.

How does DR come into play?

DR is one of the most common microvascular complications of DM, and according to researchers, “adolescents [with] T2DM have a higher risk of DR progression compared to adults, especially when glycaemic control is poor.”

And AI?

In recent years, AI has become a promising tool for detecting and screening for DR, “offering the potential to improve access, efficiency, and accuracy of diagnosis,” researchers stated. This is particularly the case for LMICs, where human resources are often limited.

While AI algorithms have been trained on large datasets of adult retinal images, few studies have assessed their performance on children’s retinal images.

Which brings us to this AI tool, right?

Yes! Cybersight AI (by Orbis), a component of the organization’s not-for-profit telemedicine and e-learning platform, is a free, open-access tool for eyecare practitioners (ECPs) to detect and visualize DR, glaucoma, and macular disease.

To note, Cybersight Consult (part of Cybersight AI) is also capable of supporting AI grading of color fundus images attached to consultation cases.

The goal is to support ECPs with an effective tool for providing care to diabetic patients, particularly in LMICs and low-resource settings with limited numbers of trained medical staff.

And this study?

Investigators of the study, which was conducted in partnership with the Diabetic Association of Bangladesh (DABAS), screened 1,274 pediatric and adolescent patients (ages 3 to 36; 53% female) diagnosed with diabetes (type 1 diabetes mellitus [T1DM] or T2DM) at the BIRDEM-2 hospital in Dhaka, Bangladesh.

What was measured?

All participants had gradable fundus images uploaded to Cybersight AI for interpretation, with two main outcomes taken into consideration:

  • Any DR, defined as mild non-proliferative diabetic retinopathy (NPDR) or more severe
  • Referable DR, defined as moderate NPDR or more severe
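The two endpoints are nested cut-points on the standard DR severity scale: everything at or above mild NPDR counts as "any DR," and everything at or above moderate NPDR counts as "referable DR." A minimal sketch of that mapping (the grade labels follow the common ICDR-style scale; this is an illustration, not code from the study):

```python
# Illustrative mapping of DR severity grades to the study's two binary
# endpoints. Grade names follow the common ICDR-style scale; this is a
# sketch, not the study's actual grading code.
GRADES = ["no DR", "mild NPDR", "moderate NPDR", "severe NPDR", "PDR"]

def endpoints(grade: str) -> dict:
    idx = GRADES.index(grade)
    return {
        "any_dr": idx >= 1,        # mild NPDR or more severe
        "referable_dr": idx >= 2,  # moderate NPDR or more severe
    }

# A mild NPDR eye counts as "any DR" but is not yet referable.
print(endpoints("mild NPDR"))
```

Note that every referable case is also an "any DR" case, which is why the referable counts reported below are much smaller.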

And then?

Investigators compared the diagnostic performance of Cybersight AI against a reference standard (grading by an optometrist trained in DR grading) using the following:

  • The Matthews correlation coefficient (MCC)
  • Area under the receiver operating characteristic curve (AUC-ROC)
  • Area under the precision-recall curve (AUC-PR)
  • Sensitivity
  • Specificity
  • Positive and negative predictive values
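All of these metrics (other than the two area-under-curve measures) fall out of a single 2x2 confusion matrix of AI calls against the reference grader. A short sketch, using hypothetical counts chosen only to roughly echo the any-DR figures reported below (212 grader positives, 247 AI positives out of 1,274 patients); these are not the study's actual cell counts:

```python
# Sketch: computing the reported metrics from a 2x2 confusion matrix of
# AI calls vs. the reference grader. The counts are hypothetical, chosen
# only to roughly echo the any-DR figures in the article.
import math

tp, fp, fn, tn = 160, 87, 52, 975  # hypothetical cell counts

sensitivity = tp / (tp + fn)  # true-positive rate
specificity = tn / (tn + fp)  # true-negative rate
ppv = tp / (tp + fp)          # positive predictive value
npv = tn / (tn + fn)          # negative predictive value

# MCC: a balanced single-number summary in [-1, 1] that, unlike raw
# accuracy, is not inflated when one class (here, "no DR") dominates.
mcc = (tp * tn - fp * fn) / math.sqrt(
    (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
)

print(f"Sensitivity: {sensitivity:.1%}")
print(f"Specificity: {specificity:.1%}")
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}, MCC: {mcc:.3f}")
```

MCC is a useful headline metric for screening studies like this one precisely because the dataset is heavily imbalanced toward normal images.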

And the findings?

Out of the 1,274 patients, 19.4% (n = 247) were identified by Cybersight AI as having any DR. Comparatively, 16.6% (n = 212) were identified by the DR grader as having DR.

For referable DR, 2.35% (n = 30) and 1.49% (n = 19) were detected by the AI and the DR grader, respectively.

How did sensitivity and specificity compare for AI?

For any DR:

  • Sensitivity = 75.5% (Confidence interval [CI] 69.7-81.3%)
  • Specificity = 91.8% (CI 90.2-93.5%)

For referable DR:

  • Sensitivity = 84.2% (CI 67.8-100%)
  • Specificity = 98.9% (CI 98.3-99.5%)

And the other outcomes?

For referable DR:

  • MCC = 63.4%
  • AUC-ROC = 91.2%
  • AUC-PR = 76.2%

Did age make a difference in AI diagnosis accuracy?

It did, actually. Investigators found that Cybersight AI tended to make a correct diagnosis in younger patients (mean age = 16.7 years) with a shorter duration of diabetes (SD = 4.85 years).

And before you ask: No, gender did not impact these outcomes.

So what was the conclusion?

The study authors concluded that the Cybersight AI performed well on pediatric and adolescent fundus images, “despite its algorithms being trained on adult eyes.”

They noted that the AI’s high specificity is a key finding for screening pediatric patients, as the majority of images in this population tend to be normal.

And the take home?

Per the authors: “AI may be an effective tool to screen children with DM to identify referable DR, and could help to reduce demands on scarce physician resources in low-resource settings.”
