
How reliable are VR devices for visual field testing?


A study recently published in the Journal of Glaucoma evaluated the reliability of remote, self-administered visual field (VF) monitoring using a virtual reality VF (VRVF) device in individuals with and without VF defects.

Give me some background.

Glaucoma management requires lifelong follow-up and VF testing to monitor disease progression, with routine VF assessments typically performed every 4 months to 1 year.

However: Ideally, VF testing should be performed more frequently (i.e., 3-20 times per year, depending on the risk of VF loss) to identify early disease or progression.

As such, a research team speculated that remote-based, self-administered VF tests may become a significant facet of telemedicine and sought to understand their efficacy and acceptability.

Enter the VRVF headset.

Developed and validated at Bascom Palmer Eye Institute, the Virtual Eye device (Virtual Vision Health [VVH]) was designed to increase practice efficiency by providing improved testing modalities, reporting accuracy, and repeatability comparable to standard automated perimetry (SAP) tests.

In clinical trials, the suite of Virtual Eye products showed a mean sensitivity comparable to that of SAP for accurate assessment of the VF.

  • In fact: These products demonstrated 26% shorter test times on average than traditional SAP.

Wait a minute … why does this sound familiar?

Probably because earlier this year, Glance reported that VVH added new features to an upgraded version of the standard Virtual Eye device—called the Virtual Eye Pro—which included live eye monitoring and pupillography.

Ah, gotcha. Now talk about the study.

In this pilot study, investigators included 42 eyes from 21 participants: 10 without ocular disease (mean age 63.1 years, 70% female) and 11 with stable VF defects (mean age 51.0 years, 55% female).

  • All participants had a baseline SAP test.

The analysis was completed in two phases at a tertiary eye care institute:

  • Phase 1: A study on individuals without ocular disease was conducted from November 2021 to February 2022 to evaluate the study’s viability in a general population.
  • Phase 2: A similar study, conducted from February 2022 to May 2022, was performed on individuals with stable VF defects or those who would otherwise require clinical VF testing.

Keep going…

Subjects tested remotely on a VRVF device for 4 weeks (examinations V1, V2, V3, and V4)—with the last three performed without assistance.

Then: The mean sensitivities of the VRVF results were compared with each other and to the SAP results to assess reliability.

Findings?

Participants tested consistently despite patient-reported external factors (e.g., ambient noise, distractions, time of testing) that could have impacted outcomes.

Further: VRVF results were in reasonable agreement with the baseline SAP.

  • Patients generally considered the device comfortable and easy to use.

Anything else?

Examinations performed by the cohort with stable defects exhibited better agreement with SAP examinations (V2: P = 0.79; V3: P = 0.39; V4: P = 0.35) than those reported by the cohort without ocular disease (V2: P = 0.02; V3: P = 0.15; V4: P = 0.22).

Additionally: Fixation losses were high and variable in VRVF examinations compared to those of SAP.

Expert opinion?

The study authors suggested that individuals with stable defects were more reliable test takers because of their prior experience with VF testing, whereas the group without ocular disease faced a learning curve.

With this in mind: They recommended that “examiners consider their patient’s personal testing history and use their own clinical judgment to determine whether their patients require supervision with testing.”

Limitations?

These included:

  • A lack of liners and antifog spray in the cohort without ocular disease may have confounded test results and comparisons to the stable defects group, which had access to both
  • The researchers used mean sensitivity as an outcome measure because the VRVF device did not yet have a robust normative database to determine accurate mean deviation (MD) or pattern standard deviation (PSD)
    • Note: Mean sensitivity does not convey spatial information or variability, so the standard metric for comparing perimetry results has traditionally been MD or PSD
  • Analyzing VRVF results by averaging the mean sensitivities (i.e., V2, V3, V4) during the pointwise comparisons diluted the individual VRVF examination data
    • As such: This may have understated the differences between VRVF and SAP pointwise results

Take home?

These findings suggest that self-administered, remote VF tests on a VRVF device demonstrated:

  • Satisfactory test-retest reliability
  • Good inter-test agreement with SAP
  • Acceptability by its users

External factors may impact at-home testing, and age and visual impairment may hinder fixation.

Next steps?

The study authors recommended further studies with expanded sample sizes to understand inconsistencies in fixation losses.
