A systematic review and meta-analysis recently published in the American Journal of Ophthalmology assessed the accuracy of EyeArt for fundus-based detection of diabetic retinopathy (DR).
Give me some background.
Although early detection of DR through regular dilated retinal exams reduces the risk of severe vision loss, fewer than half of people with diabetes obtain the recommended screening, owing to workforce shortages and limited access.
To address this issue: Autonomous artificial intelligence (AI) systems, such as EyeArt, IDx-DR, and AEYE-DS, offer FDA-authorized point-of-care DR screening without oversight from an eyecare provider.
How does AI screening help patients?
The ACCESS randomized trial (NCT05131451) in young people demonstrated marked improvements in screening uptake and follow-up completion when AI-generated results were provided immediately alongside patient education.
- Plus: Health-economic studies have also shown autonomous AI screening to be cost-saving, particularly in children and at the primary care level.
Let’s dig into EyeArt.
Eyenuk’s flagship EyeArt system first received FDA clearance in 2020 and was cleared for expanded use with the Topcon NW400, Canon CR-2 AF, and Canon CR-2 Plus AF retinal cameras in 2023.
What it does: EyeArt autonomously analyzes patients’ retinal images to comprehensively detect signs of disease and returns an easy-to-read report in under 60 seconds.
- The report provides eye- and patient-level DR outputs and also indicates the presence or absence of referrable DR (rDR) or vision-threatening DR.
Any prior supporting clinical data on the system?
Indeed … EyeArt has been validated in a pivotal, prospective, multicenter clinical trial (NCT03112005) against a clinical reference standard using the Early Treatment Diabetic Retinopathy Study (ETDRS) grading scale.
It was also tested in a clinical validation study spanning over 100,000 patient visits, one of the largest real-world data sets used to evaluate any available DR screening technology, with images captured in everyday practice.
Gotcha ... Now talk about this new study.
This systematic review and meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses for Diagnostic Test Accuracy (PRISMA-DTA) guidelines to assess the diagnostic accuracy of EyeArt in detecting referrable DR from color fundus photographs.
Specifically: Searches of PubMed, Embase, and ClinicalTrials.gov through April 2025 identified eligible studies involving adult populations screened with EyeArt.
Findings?
In total: 17 studies comprising 162,695 examinations were included in the analysis.
- EyeArt demonstrated a pooled sensitivity of 95% (95% confidence interval [CI]: 92-97%) and specificity of 81% (95% CI: 74-87%).
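To put those pooled figures in concrete terms, here is a minimal worked example of what 95% sensitivity and 81% specificity would imply for a hypothetical cohort of 1,000 screened patients. The 20% rDR prevalence is an assumption for illustration only, not a figure from the study; actual prevalence varies by population.

```python
# Illustrative only: the 20% prevalence is an assumed value, not from the study.
sensitivity = 0.95   # pooled sensitivity reported in the meta-analysis
specificity = 0.81   # pooled specificity reported in the meta-analysis
prevalence = 0.20    # ASSUMED rDR prevalence among screened patients
n = 1000             # hypothetical screening cohort size

with_rdr = prevalence * n             # 200 patients with referrable DR
without_rdr = n - with_rdr            # 800 patients without

true_pos = sensitivity * with_rdr     # 190 correctly flagged for referral
false_neg = with_rdr - true_pos       # 10 cases missed
true_neg = specificity * without_rdr  # 648 correctly cleared
false_pos = without_rdr - true_neg    # 152 referred unnecessarily

ppv = true_pos / (true_pos + false_pos)  # share of referrals that are true rDR
npv = true_neg / (true_neg + false_neg)  # share of negatives that are truly disease-free
print(round(ppv, 2), round(npv, 2))
```

Under these assumptions, roughly 56% of positive referrals would be true rDR while about 98% of negative results would be correct, which is why the authors frame high sensitivity as the key safety property for autonomous screening and flag specificity (which drives unnecessary referrals) as the metric needing closer scrutiny.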
Subgroup analyses indicated consistent accuracy across several parameters, including:
- Study designs
- Economic settings
- Healthcare contexts
- Device types
- External validation
- Image gradability
To note: Specificity varied slightly depending on vendor involvement.
Expert opinion?
Despite EyeArt’s strong diagnostic performance, real-world uptake of autonomous AI DR screening remains minimal.
- U.S. claims data show that Current Procedural Terminology (CPT) code 92229 (“remote retinal imaging with automated analysis”) was billed only 3,440 times among 154,136 diabetic eye imaging encounters (2.2%) from 2021 to 2023, corresponding to ~0.09% of all adults with diabetes.
Go on …
Researchers also noted that DR screening reimbursement in the United States is modest ($40.28 nationally in 2023, compared with $17.35 for staff-reviewed imaging [92227] and $29.14 for physician-interpreted imaging [92228]) and far below reimbursement for procedure codes covering other AI applications (e.g., stroke CT).
As such: “These economics, combined with upfront camera costs and IT integration expenses, discourage primary-care adoption,” the study authors explained.
Limitations?
These included:
- Reporting was inconsistent across studies: standardized definitions and quantification of ungradable images were lacking, and many studies failed to detail how such images were processed by the AI system
- Specificity showed greater variability across studies, likely influenced by ungradable image management and hybrid workflows (which were inconsistently reported)
- Only 11 studies represented true cross-country external validations, so this analysis may still overestimate specificity for regions whose retinal-image characteristics diverge sharply from the original training distribution
- The majority of included studies originated from high-income settings, with limited data from low-resource environments where screening needs are greatest; this may limit generalizability and underrepresent implementation challenges in resource-constrained contexts
Now to the take home.
These findings suggest that EyeArt exhibits high diagnostic accuracy for detecting rDR (pooled sensitivity 95%, specificity 81%), with high certainty for sensitivity and moderate certainty for specificity.
- Meaning: Its consistently strong sensitivity supports autonomous screening in primary care.
However: Variability in specificity, along with inconsistent reporting and handling of ungradable images, warrants attention and standardized quality assurance, according to the study authors.
And the next steps in this research?
To fully realize the public health potential of AI screening, the authors emphasized that greater attention must also be paid to the post-diagnosis care pathway—as delays in referral or incomplete follow-up can negate clinical benefit.
Meaning: Real-world impact will depend on electronic health record (EHR) connectivity, clear referral processes, sustainable reimbursement, and targeted implementation in underserved populations, with prospective implementation and cost-effectiveness studies needed to guide policy.