Document Type
Article
Publication Date
4-2-2024
Abstract
PURPOSE: To establish generalizable pointwise spatial relationships between structure and function through occlusion analysis of a deep-learning (DL) model that predicts visual field (VF) sensitivities from 3-dimensional (3D) OCT scans.
DESIGN: Retrospective cross-sectional study.
PARTICIPANTS: A total of 2151 eyes from 1129 patients.
METHODS: A DL model was trained to predict 52 VF sensitivities of 24-2 standard automated perimetry from 3D spectral-domain OCT images of the optic nerve head (ONH) with 12 915 OCT-VF pairs. Using occlusion analysis, the contribution of each individual cube covering a 240 × 240 × 31.25 μm region of the ONH to the model's prediction was systematically evaluated for each OCT-VF pair in a separate test set of 996 OCT-VF pairs. After simple translation (shifting along the x- and y-axes to match the ONH center), group t-statistic maps were derived to visualize statistically significant ONH regions for each VF test point within a group. This analysis allowed the importance of each super voxel (240 × 240 × 31.25 μm, tiling the entire 4.32 × 4.32 × 1.125 mm ONH cube) in predicting VF test points to be assessed for specific patient groups.
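The cube-wise occlusion procedure described above can be illustrated with a minimal sketch, assuming a trained PyTorch model that maps a single-channel 3D OCT volume to 52 VF sensitivities; the model interface, the cube size in voxels, and the mean-value fill used to occlude each cube are illustrative assumptions, not details specified in the abstract.

import numpy as np
import torch

def occlusion_importance(model, volume, cube=(8, 12, 12), fill=None):
    """Slide a cube over the 3D OCT volume, replace its contents with a fill value,
    and record the change in each of the 52 predicted VF sensitivities.
    volume: (D, H, W) numpy array; model interface is an assumption."""
    model.eval()
    fill = float(volume.mean()) if fill is None else fill
    with torch.no_grad():
        base = model(torch.from_numpy(volume)[None, None].float()).squeeze(0).numpy()  # (52,)
    D, H, W = volume.shape
    dz, dy, dx = cube
    heat = np.zeros((D // dz, H // dy, W // dx, base.shape[0]), dtype=np.float32)
    for iz in range(D // dz):
        for iy in range(H // dy):
            for ix in range(W // dx):
                occluded = volume.copy()
                occluded[iz*dz:(iz+1)*dz, iy*dy:(iy+1)*dy, ix*dx:(ix+1)*dx] = fill
                with torch.no_grad():
                    pred = model(torch.from_numpy(occluded)[None, None].float()).squeeze(0).numpy()
                heat[iz, iy, ix] = base - pred  # drop in predicted sensitivity per VF point
    return heat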
MAIN OUTCOME MEASURES: The ONH region corresponding to each VF test point and the effect of that region on the prediction at the point.
RESULTS: The test set was divided into 2 groups, the healthy-to-early-glaucoma group (792 OCT-VF pairs, VF mean deviation [MD]: -1.32 ± 1.90 decibels [dB]) and the moderate-to-advanced-glaucoma group (204 OCT-VF pairs, VF MD: -17.93 ± 7.68 dB). Two-dimensional group t-statistic maps (x, y projection) were generated for both groups, assigning related ONH regions to VF test points. The structural locations identified as influential for VF sensitivity prediction at each test point aligned well with existing knowledge of structure-function spatial relationships.
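A minimal sketch of how group t-statistic maps and their (x, y) projections might be derived from ONH-center-aligned occlusion maps is shown below, using a one-sample t-test across OCT-VF pairs at each super voxel; the array layout and the use of a maximum projection along depth are assumptions for illustration, not the authors' exact procedure.

import numpy as np
from scipy import stats

def group_t_map(aligned_maps):
    """aligned_maps: (n_pairs, Z, Y, X, 52) occlusion importances after shifting
    each map so the ONH centers coincide (assumed layout). Returns per-supervoxel
    t-statistics testing whether the mean importance differs from zero."""
    t_stat, _ = stats.ttest_1samp(aligned_maps, popmean=0.0, axis=0)
    return t_stat  # (Z, Y, X, 52)

def project_xy(t_map):
    """Collapse the depth axis to obtain a 2D (x, y) map per VF test point;
    a maximum projection is one illustrative choice."""
    return t_map.max(axis=0)  # (Y, X, 52)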
CONCLUSIONS: This study successfully visualized the global trend of point-by-point spatial relationships between OCT-based structure and VF-based function without the need for prior knowledge or segmentation of OCTs. The revealed spatial correlations were consistent with previously published mappings. This demonstrates the possibility of learning from trained machine learning models without applying any prior knowledge, an approach that is potentially robust and free from bias.
FINANCIAL DISCLOSURES: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
Recommended Citation
Chen, Zhiqi; Ishikawa, Hiroshi; Wang, Yao; Wollstein, Gadi; and Schuman, Joel, "Deep-Learning-Based Group Pointwise Spatial Mapping of Structure to Function in Glaucoma" (2024). Wills Eye Hospital Papers. Paper 222.
https://jdc.jefferson.edu/willsfp/222
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
PubMed ID
38881610
Language
English
Comments
This article is the author's final published version in Ophthalmology Science, Volume 4, Issue 5, 2024, Article number 100523.
The published version is available at https://doi.org/10.1016/j.xops.2024.100523.
Copyright © 2024 by the American Academy of Ophthalmology