
Computer-aided recognition of myopic tilted optic disc using deep learning algorithms in fundus photography

Abstract

Background

It is necessary to consider myopic optic disc tilt because it seriously impacts normal ocular parameters. However, ophthalmologic measurements of disc tilt are subject to inter-observer variability and are time-consuming to obtain. This study aimed to develop and evaluate deep learning models that automatically recognize a myopic tilted optic disc in fundus photography.

Methods

This study used 937 fundus photographs of patients with normal or myopic tilted optic discs, collected from Samsung Medical Center between April 2016 and December 2018. We developed an automated computer-aided recognition system for optic disc tilt on color fundus photographs via a deep learning algorithm. All images were preprocessed with two image-resizing techniques, and the GoogleNet Inception-v3 architecture was implemented. The performance of the models was compared with a human examiner's results. Activation map visualization was qualitatively analyzed using a generalized visualization technique based on gradient-weighted class activation mapping (Grad-CAM++).

Results

Nine hundred thirty-seven fundus images were collected and annotated from 509 subjects. In total, 397 images from eyes with tilted optic discs and 540 images from eyes with non-tilted optic discs were analyzed. For most patients, images from both eyes were included and analyzed separately. For comparison, we trained models on two datasets, a simply resized dataset and a dataset preserving the original aspect ratio (AR), and evaluated the impact of augmentation on both. The constructed deep learning models for myopic optic disc tilt achieved the best results with simple image resizing and augmentation: an area under the receiver operating characteristic curve (AUC) of 0.978 ± 0.008, an accuracy of 0.960 ± 0.010, a sensitivity of 0.937 ± 0.023, and a specificity of 0.963 ± 0.015. The heatmaps revealed that the model could effectively identify the locations of the optic discs, the superior retinal vascular arcades, and the retinal maculae.

Conclusions

We developed an automated deep learning-based system to detect optic disc tilt. The model demonstrated excellent agreement with the previous clinical criteria, and the results are promising for developing future programs that identify and adjust for the effect of optic disc tilt on ophthalmic measurements.


Background

Tilted discs can be classified into two groups based on etiology: congenital tilted disc syndrome, an anomaly of the eye characterized by inferior or inferonasal tilting of the optic disc [1, 2], and myopic tilted disc, an acquired change related to the progression of myopia [1]. Previous studies have illustrated the development of optic disc tilt and temporal crescent formation over time; in some patients, disc tilting develops in the relatively early stages of mild myopia.

The prevalence of myopia has increased and is expected to continue increasing globally [3]. The expected population of individuals affected by myopia is reported to be 4758 million by 2050 [3]. The prevalence is significantly higher in countries of the Asia-Pacific region compared with other regions and is dramatically increasing in East Asia [3, 4]. Consequently, the clinical significance of myopic optic disc tilt might also be increasing.

Optic disc tilt can lead to significant changes in optic disc appearance [5,6,7] and affects ocular parameters used in the majority of ophthalmic devices, such as optical coherence tomography [8, 9] and visual field analyzers [10,11,12], which are the most widely used devices in ophthalmology. However, it is difficult to obtain normal measurement results for patients with tilted optic discs using most ophthalmologic instruments, and the ophthalmic measurements in these patients are interpreted based on the supervising physician’s discretion.

Medical image analysis that uses deep learning algorithms has recently gained attention due to the variety of technological applications, including image recognition and speech recognition, as well as medical applications [13, 14]. Numerous studies have used deep learning algorithms to characterize and diagnose several diseases from fundus images [15,16,17,18]. However, to the best of our knowledge, there is limited research that has focused on tilted optic discs using deep learning, even though it is significant for ophthalmologic diagnosis systems or disease progression recognition systems.

To construct an ophthalmologic automatic diagnosis system or disease progression recognition system, it is necessary to consider myopic optic disc tilt as it seriously impacts normal ophthalmological measurements. This will be of greater clinical significance with the increasing population of myopic patients. The first step in developing a technology for a fully automated diagnosis system is to automatically recognize the presence of a tilted disc. This can be the basis of an automated clinical decision support system that enables calibration of the tilted disc to distinguish abnormal from normal ocular measurements.

Thus, this study aimed to develop a fully automated system for detecting tilted discs in fundus photographs using deep learning algorithms. This system can provide a framework for deep learning-based research focused on other tilted disc-related diseases. We evaluated the algorithm's ability to differentiate subjects with tilted optic discs from those without under various experimental settings. Activation map visualizations are also provided and show which parts of the fundus images are related to the algorithm's decision process.

Methods

This study was performed at a single center in accordance with the tenets of the Declaration of Helsinki. It was approved by the Institutional Review Board of Samsung Medical Center (Seoul, Republic of Korea, Approval No.: 2018–11-018). Informed consent was waived for the patients in this study.

Patients

Nine hundred thirty-seven fundus photographs of normal patients and patients with myopic tilted discs were collected from Samsung Medical Center between April 2016 and December 2018. Fundus photographs were acquired using a TRC-50IX digital camera (Topcon, Tokyo, Japan) or Kowa nonmyd 10 megapixel fundus camera (Kowa, Torrance, CA). Enrollment criteria for myopic tilted discs were as follows: an optic disc with a ratio of minimal to maximal disc diameter of 0.75 or less on the fundus photograph, as described in previous studies [19, 20], a white semilunar patch of sclera adjacent to the optic disc [21,22,23], and − 0.5 diopters (D) or more of myopia.

Only temporally tilted discs were considered myopic tilted discs, and discs tilted in another direction, including nasally, superiorly, or inferiorly, were excluded to avoid including tilted discs with a congenital etiology. Tilted discs with axes beyond 45 degrees of the vertical meridian were also excluded. Normal controls had normal optic disc shapes without semilunar patches of sclera adjacent to the optic disc.

Fundus photographs with poor image quality that could not be used for analysis were also excluded. Exclusion criteria for both patients with myopic tilted discs and healthy controls included previous eye trauma or eye surgery and any ocular pathology that may affect fundus photography, with the exception of refractive error. Patient demographics and refractive data were collected from medical records. Figure 1 illustrates the included and excluded anatomical findings of the optic disc.

Fig. 1 Illustration that shows the included and the excluded anatomical findings of the optic disc. a, a representative myopic tilted optic disc (included in the study); b, nasally tilted disc (excluded from the study); c, vertically tilted disc (excluded from the study); d, obliquely tilted disc (excluded from the study)

Data preprocessing

The original dataset included various image sizes because of different device settings. Several photographs contained both eyes in a single image; therefore, we split those images so that each half represented one eye. Some of the original images also contained text with patient information; this was cropped out before further processing.

To obtain a fixed input image size for training, we resized all images to a resolution of 524 × 400 pixels. However, such resizing can distort the aspect ratio (AR) and, inevitably, the locations of objects in the fundus, including the optic disc region. Because the angle between the disc and the other retinal sections is correlated with the disc tilt angle, simple image resizing could affect deep learning model training [24,25,26]. Consequently, another preprocessing method was applied to resize images while preserving the AR of the original images. The image preprocessing for the AR-preserving dataset included the following steps: (1) crop the black borders on both the left and right sides of the fundus images to fit the exact area of the retinal fundus; (2) calculate the AR distribution of the border-cropped dataset; (3) select the most common aspect ratio from the distribution; (4) preserve the original height of each image and either crop or zero-pad the left and right sides to match the selected AR; (5) resize each image to a resolution of 524 × 436 pixels for the AR-preserving dataset.

Figure 2 is an overview of data preprocessing for the AR-preserving dataset. Steps (1)–(3) above are illustrated in Fig. 2b. The original fundus images consist of the actual fundus portion and a black border on both the left and right sides. Thus, we implemented a method that automatically crops the black background on both sides based on the intensity of the middle row of each image, because the background has low intensity. Figure 2c illustrates step (4), in which the background-cropped image is placed into the template. Finally, we resized the template image while preserving the AR of the original image, as in Fig. 2d.
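A minimal sketch of this AR-preserving preprocessing is given below, assuming 8-bit RGB fundus images loaded as NumPy arrays; the intensity threshold, helper names, use of OpenCV, and the (width, height) reading of 524 × 436 are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
import cv2  # OpenCV, used here only for the final resize


def crop_black_borders(img, threshold=10):
    """Crop the dark left/right background based on the middle row's intensity."""
    middle_row = img[img.shape[0] // 2].mean(axis=-1)   # mean intensity per column
    fundus_cols = np.where(middle_row > threshold)[0]   # columns belonging to the fundus
    return img[:, fundus_cols[0]:fundus_cols[-1] + 1]


def fit_to_aspect_ratio(img, target_ar):
    """Crop or zero-pad the left/right sides so that width / height matches target_ar."""
    h, w = img.shape[:2]
    target_w = int(round(h * target_ar))
    if w > target_w:                                     # crop symmetrically
        start = (w - target_w) // 2
        return img[:, start:start + target_w]
    pad = target_w - w                                   # zero-pad symmetrically
    return np.pad(img, ((0, 0), (pad // 2, pad - pad // 2), (0, 0)))


def preprocess_ar_preserving(img, target_ar, size=(524, 436)):
    cropped = crop_black_borders(img)
    fitted = fit_to_aspect_ratio(cropped, target_ar)
    return cv2.resize(fitted, size)                      # size = (width, height)
```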

Fig. 2 Overview of data preprocessing for the AR-preserving dataset. a, data cleansing; b, cropping the black border area and determining the aspect ratio distribution of the actual fundus area; c, cropping or zero-padding while keeping the height intact; d, resizing each image to a resolution of 524 × 436

Deep learning model training

In this study, we used K-fold stratified cross-validation [27, 28] to evaluate the performance accuracy of our proposed model so that each fold has the same distributions of classes as the whole dataset.

The preprocessed input dataset is divided into K non-overlapping subsets, balancing the number of images per class. K−1 subsets are used for training, and the remaining subset is used for validation; this process is repeated for all K partitions. For several subjects, we collected multiple fundus images, and those images could have inconsistent clinical findings because they were acquired from both eyes and on different dates. We therefore split the dataset into 5 subsets with respect to the patient IDs so that data with the same ID would not be distributed to both the training and test sets. The dataset was split patient-wise, keeping the proportion of normal to abnormal classes at approximately 0.6:0.4 in each fold.
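A patient-wise stratified 5-fold split of this kind can be obtained, for example, with scikit-learn's StratifiedGroupKFold (version 1.0 or later); the synthetic labels and patient IDs below are placeholders, not the study data.

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

rng = np.random.default_rng(0)
n_images = 937
labels = rng.integers(0, 2, size=n_images)          # placeholder: 0 = non-tilted, 1 = tilted
patient_ids = rng.integers(0, 509, size=n_images)   # placeholder: same ID for images of one subject
image_indices = np.arange(n_images)

cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(cv.split(image_indices, labels, groups=patient_ids)):
    # Images of one patient never appear in both the training and validation sets,
    # while each fold keeps roughly the same class proportions as the whole dataset.
    assert not set(patient_ids[train_idx]) & set(patient_ids[val_idx])
```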

The images of the training dataset were augmented by mirror flipping, brightness control, and intensity control to enlarge and diversify the training set. Since regions in the fundus have bilateral symmetry, we flipped some images horizontally. We also applied a lighting bias factor between 0.8 and 1.2 and increased pixel values by amounts in the range of 0–4. Even though these transformations are among the most common for image classification problems, we could not guarantee that the labels of the original images would remain unchanged. Therefore, the augmentation options were chosen carefully, and the generated images were reviewed manually by clinicians. Consequently, the amount of training data was increased 50-fold after augmentation.
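The sketch below illustrates an augmentation pipeline consistent with the transformations described above (mirror flip, lighting bias in [0.8, 1.2], additive intensity shift in [0, 4]); the flip probability, per-image sampling scheme, and function names are assumptions.

```python
import numpy as np


def augment_once(img, rng):
    """Apply one random combination of flip, lighting bias, and intensity shift."""
    out = img.astype(np.float32)
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                # horizontal (mirror) flip
    out *= rng.uniform(0.8, 1.2)             # lighting / brightness bias
    out += rng.uniform(0.0, 4.0)             # small additive intensity increase
    return np.clip(out, 0, 255).astype(np.uint8)


def augment_dataset(images, labels, copies=50, seed=0):
    """Expand the training set roughly 50-fold, keeping the original labels."""
    rng = np.random.default_rng(seed)
    aug_images, aug_labels = [], []
    for img, lab in zip(images, labels):
        for _ in range(copies):
            aug_images.append(augment_once(img, rng))
            aug_labels.append(lab)
    return np.stack(aug_images), np.array(aug_labels)
```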

The input dataset included RGB images with pixel values in the range of 0–255 for each channel. We applied global centering by calculating the mean value per pixel of the training data across all color channels and subtracting it from each image. Pixel-wise mean subtraction allowed the distribution of pixel values to be centered at zero [29, 30]. Next, each channel was normalized to the range of 0–1 [31, 32]. Both the simply resized dataset and the original AR-preserving dataset were prepared following the procedures described above.
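The sketch below illustrates these normalization steps (training-set per-pixel mean subtraction followed by scaling to 0–1); because the exact scaling method is not specified in the text, the per-channel min-max scaling shown here is an assumption.

```python
import numpy as np


def fit_pixel_mean(train_images):
    """Mean value per pixel location and channel, estimated on the training data only."""
    return train_images.astype(np.float32).mean(axis=0)


def center_and_scale(images, pixel_mean):
    """Subtract the training-set pixel mean, then rescale each channel to [0, 1]."""
    centered = images.astype(np.float32) - pixel_mean
    mins = centered.min(axis=(0, 1, 2), keepdims=True)
    maxs = centered.max(axis=(0, 1, 2), keepdims=True)
    return (centered - mins) / (maxs - mins + 1e-8)
```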

The GoogleNet Inception-v3 architecture was implemented as the base model [33]. The model was initialized with ImageNet pre-trained weights [34, 35] and fine-tuned with our datasets [36,37,38]. The pre-trained weights were used to initialize each layer of the model and were updated as training proceeded. The performance of the algorithm was compared with that of a human examiner, who is an expert in identifying myopic disc appearance. Areas under the receiver operating characteristic curves, sensitivity, and specificity were computed for each of the models [39,40,41].
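A sketch of this transfer-learning setup in Keras is shown below; the classification head (global average pooling, 50% dropout, He-normal-initialized softmax layer) and the (height, width) reading of the input size are assumptions consistent with the reported settings, not the authors' exact code.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

# Inception-v3 backbone initialized with ImageNet weights (simply resized dataset assumed).
base = InceptionV3(weights="imagenet", include_top=False, input_shape=(400, 524, 3))

x = layers.GlobalAveragePooling2D()(base.output)
x = layers.Dropout(0.5)(x)                                   # 50% dropout, as reported
outputs = layers.Dense(2, activation="softmax",
                       kernel_initializer="he_normal")(x)    # tilted vs. non-tilted
model = models.Model(inputs=base.input, outputs=outputs)

# All pre-trained weights remain trainable so they are updated during fine-tuning.
for layer in base.layers:
    layer.trainable = True
```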

Results

Nine hundred thirty-seven fundus images were collected and annotated. Among those, 397 images were from eyes with tilted optic discs and 540 images were from eyes with non-tilted optic discs. A total of 509 subjects participated in this study. The mean age in the cases with myopic tilted disc was 10.8 (standard deviation, SD = 3.5; range, 1–55) years, and that in normal cases was 10.3 (SD = 5.2; range, 2–70) years. There was no statistically significant difference in age between the two groups (p = 0.45). The mean spherical equivalent refraction (SER) of the cases with tilted optic discs was −5.57 ± 3.74 D (range, −19.5 to 3.5 D), and the mean SER of cases with non-tilted optic discs was −1.17 ± 1.76 D (range, −11 to 4 D).

Training was performed for 50 epochs in each experiment with a mini-batch size of 16. We achieved the best fine-tuning result with He normal initialization, the Adam optimizer [42], a learning rate of 1e-4, and a learning rate decay of 1e-3 [43, 44]. The dropout rate was set at 50% [45]. Categorical cross-entropy was used as the loss function for model training and validation. Our implementation used the Keras and TensorFlow frameworks.
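As a sketch of this training configuration, the following assumes the `model` from the previous sketch, one-hot encoded labels, and preprocessed fold arrays `x_train`, `y_train`, `x_val`, and `y_val`; note that the `decay` argument is accepted by older tf.keras optimizers, while newer Keras versions expect a learning-rate schedule instead.

```python
from tensorflow.keras.optimizers import Adam

model.compile(optimizer=Adam(learning_rate=1e-4, decay=1e-3),   # lr 1e-4, decay 1e-3
              loss="categorical_crossentropy",
              metrics=["accuracy"])

history = model.fit(x_train, y_train,                           # augmented training fold
                    validation_data=(x_val, y_val),             # held-out fold
                    epochs=50,
                    batch_size=16)
```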

Table 1 summarizes the performance of the deep learning models. The two preprocessing approaches were compared: the simply resized dataset and the original AR-preserving dataset. The areas under the receiver operating characteristic curves (AUCs) of the models that used simple image resizing (0.960 ± 0.017) were better than those of the models that used preprocessing for original aspect ratio preservation (0.927 ± 0.083). The impact of augmentation was also evaluated: for both ARs, the AUCs were higher for models trained on the 50-fold augmented data than for those trained on the non-augmented dataset. The best results were obtained with simple image resizing and augmentation: an AUC of 0.978 ± 0.008, an accuracy of 0.960 ± 0.010, a sensitivity of 0.937 ± 0.023, and a specificity of 0.963 ± 0.015. Figure 3 shows the mean receiver operating characteristic curve for the 5-fold cross-validation results of the best models.
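The per-fold metrics reported above can be computed, for example, with scikit-learn as follows; `y_true` (binary labels, 1 = tilted) and `y_prob` (predicted probability of the tilted class) are assumed NumPy arrays.

```python
from sklearn.metrics import confusion_matrix, roc_auc_score


def evaluate_fold(y_true, y_prob, threshold=0.5):
    """AUC, accuracy, sensitivity, and specificity for one cross-validation fold."""
    auc = roc_auc_score(y_true, y_prob)
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return auc, accuracy, sensitivity, specificity
```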

Table 1 Cross-validation results of the proposed models for myopic tilted discs

Fig. 3 The mean receiver operating characteristic (ROC) curve derived from the stratified 5-fold cross-validation and the area under the curve (AUC) of the deep learning myopic tilted disc algorithm

Our model generates both a classification result based on the presence of the tilted optic disc and a heatmap that highlights where the deep learning model focuses. We used Grad-CAM++ [46], a generalized visualization technique based on gradient-weighted class activation mapping [47], to generate the heatmaps. This is a powerful tool for identifying the visual features in input images that help interpret the results of the trained model. The generated heatmap is a single-channel image whose intensity values are normalized. The last convolutional layer of GoogleNet Inception-v3 was used as the gradient layer for the activation map. Figure 4 shows the Grad-CAM++ heatmaps and the corresponding original input images. When the model classifies an input image as an abnormal case, it focuses on the optic disc and retinal macula, as illustrated in Fig. 4a (true positive) and c (false positive). In contrast, when it identifies an image as a normal case, the heatmap highlights a wider area around the optic disc, often including the superior retinal vascular arcades, as in Fig. 4b (false negative) and d (true negative). Accordingly, for images with the same prediction result, the model concentrated on similar areas, and the heatmaps showed similar shapes. However, it was difficult to understand why the model attended to those areas in the incorrectly classified cases.
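The study used Grad-CAM++; the sketch below implements the simpler, closely related Grad-CAM procedure in TensorFlow to illustrate how a class activation heatmap is derived from the last convolutional block. The target layer name "mixed10" (the final mixed block of Keras' Inception-v3) and the reuse of the `model` from the earlier sketches are assumptions.

```python
import numpy as np
import tensorflow as tf


def grad_cam(model, image, class_index, target_layer="mixed10"):
    """Return a normalized single-channel heatmap for one preprocessed image."""
    grad_model = tf.keras.models.Model(
        inputs=model.input,
        outputs=[model.get_layer(target_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, predictions = grad_model(image[np.newaxis, ...])
        class_score = predictions[:, class_index]
    grads = tape.gradient(class_score, conv_out)              # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))              # global-average-pooled gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)
    cam = tf.nn.relu(cam)[0]
    cam = cam / (tf.reduce_max(cam) + 1e-8)                   # normalize to [0, 1]
    return cam.numpy()                                        # upsample for overlay on the fundus image
```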

Fig. 4 A confusion matrix of representative heatmaps. a, images correctly classified as myopic tilted discs (true positives); b, abnormal cases that the classifier predicted as normal (false negatives); c, the opposite (false positives); d, images correctly classified as normal optic discs (true negatives). Note that a threshold of 0.5 was used to generate the prediction outputs and heatmaps

Discussion

In this study, we implemented and tested a deep learning approach to detect optic disc tilt using fundus photographs. We demonstrated that the proposed algorithm showed excellent agreement with the case definition of optic disc tilt in this study.

The algorithm showed reliable results for tilted optic disc classification (Table 1). Conventionally, the role and impact of image size have been emphasized, as has keeping the aspect ratio (AR) of a dataset consistent so that the shapes of objects of interest remain intact, which is considered crucial for accuracy [25]. In this study, we established a process to resize the dataset while preserving the original AR. However, the models showed unexpected results: simple image resizing yielded higher scores than preprocessing that preserved the original AR. We therefore propose that the preprocessing required to preserve the original AR may hinder feature extraction during model training and, in turn, reduce accuracy.

Our deep learning approach exports heatmaps derived from the activation maps to visualize the features used to determine the presence of a tilted optic disc. In this study, Grad-CAM++ visualization revealed that the models were able to identify the location of the optic disc in most photographs, even though they were trained without additional information about anatomical locations. The model seemed not only to highlight the optic disc but also to trace significant retinal sections such as the superior retinal vascular arcade and the retinal macula. One interesting finding was that the heatmaps revealed differences between the two classes. The heatmap showed a ring-shaped region along the rim of the disc for cases predicted as negative (Fig. 4b and d), whereas it showed a round shape for cases classified as abnormal (Fig. 4a and c). The model seemed to focus on the superior retinal vascular arcade for most of the abnormal cases, even when the prediction was false. Meanwhile, the retinal macula was emphasized in some of the cases with a positive prediction. Importantly, these retinal sections play a significant role in clinicians' interpretation of the tilted optic disc. Thus, we expect that heatmap visualization can help clinicians understand the result of the deep learning model [48]. However, interpretation of Grad-CAM++ visualization is subjective, and we cannot entirely trust the heatmaps to locate anatomical findings. The primary goal was to demonstrate whether the machine learning approach can distinguish myopic optic disc tilt in the fundus image. Our approach used weakly annotated data and tried to identify the areas the model used for its decision. Therefore, further research is needed to verify the relationship between interpretation results and actual ophthalmological anatomy; a segmentation-based future study will help provide a clearer interpretation.

Developing an algorithm that automatically discriminates disc tilt should precede the development of an algorithm that corrects the significant effects of disc tilt on ophthalmic instrumentation. Given the rapidly increasing prevalence of myopia [3, 4], these types of algorithms can be integral parts of automated ophthalmologic diagnostic programs. The combined alterations of the optic nerve head in myopic optic disc tilt can vary and can include stretched vertical and/or horizontal dimensions with larger and shallower cups [49]. The degree and direction of disc tilt can also vary [49]. Nakazawa et al. reported one nasally tilted optic disc among 10 patients with mild or moderate myopia [22]. Although a peripapillary crescent develops gradually in the optic disc in most myopic patients [22], it is not a pathognomonic finding for myopic optic disc tilt. However, this study considered only temporally tilted discs with white semilunar patches of sclera adjacent to the optic disc as myopic tilted discs, and discs tilted in other directions were excluded. In addition, an optic disc with a ratio of minimal to maximal disc diameter of 0.75 or less was regarded as a definite tilted disc and was included in this study. As mentioned earlier, the appearance of the tilted disc may vary and its definition varies among studies, so it may be difficult to discriminate the anatomical findings excluded from this study from the myopic tilted disc using the developed algorithm. In photographs with false positives, the ratio of minimal to maximal disc diameter was greater than 0.75, but the discs were enlarged and deviated from the shape of the normal optic nerve head. Based on the 0.75 ratio criterion, cases in which the optic nerve was ovoid and accompanied by slight peripapillary atrophy were visually labeled as tilted discs; however, the algorithm did not classify them as tilted discs. It may be that the algorithm detected the rotation of a three-dimensional disc more intrinsically than a deliberate research criterion does. Further investigation of a wider range of myopic configurations of the optic disc using diverse devices, such as optical coherence tomography, is needed to enhance the accuracy and broaden the application of the program. Finally, a program that can analyze ophthalmic images and measurement values while correcting for the effect of various degrees of optic disc tilt is needed for this patient population.

An ophthalmologic automatic diagnosis system can be of great help even for non-experts, for example, in routine checkups or at non-tertiary hospitals. It could be effective and time-saving if an AI system could indicate or pre-select cases with non-tilted optic discs at a high confidence level as an automated screening tool. This research showed the possibility of AI-based automated tilted optic disc recognition. Since we have already trained a tilted optic disc detection model, we can fine-tune an advanced model with additional data covering other anatomical changes as well as the cases excluded here. We can also adopt various few-shot learning approaches [50] even if the amount of data is small. This could reduce the workload of ophthalmologists and non-experts. Further research could also use a model that effectively employs a confidence level, such as SelectiveNet [51].

There were several limitations to this study. First, we compared the accuracy of the algorithm with results based on previously published criteria for tilted optic disc [19, 20]. In the literature to date, optic disc tilt has been classified based on observations from fundus photography [52,53,54]. Several studies have examined the optic nerve head of patients with myopic tilted discs using three-dimensional optical coherence tomography [52,53,54] or observed vascular abnormalities in patients with tilted discs using angiography [55, 56]. However, these approaches are not used as diagnostic standards. Therefore, because there are no accurate diagnostic criteria based on an objective device, this study used one of the previously published criteria: an optic disc with a ratio of minimal to maximal disc diameter of 0.75 or less on the fundus photograph [19, 20] with a white semilunar patch of sclera adjacent to the optic disc. Second, we analyzed only temporally tilted discs, and the results of our study might not be valid for non-temporally tilted discs. Third, there may be limitations associated with using data from both eyes due to possible inter-eye correlation; in future studies, the use of data from a single eye per subject would be more desirable.

Conclusions

In conclusion, we developed an automated system that detects optic disc tilt. The approach demonstrated excellent agreement with the previous clinical criteria, which is promising for future programs that help identify this condition. In addition, approaches for adjusting for the effect of optic disc tilt on ophthalmic measurements can be built on this work.

Availability of data and materials

The datasets used in the current study are available from the corresponding author upon reasonable request.

Abbreviations

AUCs:

areas under receiver operating characteristic curves

AR:

aspect ratio

D:

diopters

Grad-CAM:

gradient-weighted class activation mapping

SER:

spherical equivalent refraction

References

  1. Apple DJ, Rabb MF, Walsh PM. Congenital anomalies of the optic disc. Surv Ophthalmol. 1982;27(1):3–41.

  2. You Q, Xu L, Jonas J. Tilted optic discs: the Beijing eye study. Eye. 2008;22(5):728.

  3. Holden BA, Fricke TR, Wilson DA, Jong M, Naidoo KS, Sankaridurg P, Wong TY, Naduvilath TJ, Resnikoff S. Global prevalence of myopia and high myopia and temporal trends from 2000 through 2050. Ophthalmology. 2016;123(5):1036–42.

  4. Pan C-W, Dirani M, Cheng C-Y, Wong T-Y, Saw S-M. The age-specific prevalence of myopia in Asia: a meta-analysis. Optom Vis Sci. 2015;92(3):258–66.

  5. Jonas JB, Dichtl A. Optic disc morphology in myopic primary open-angle glaucoma. Graefes Arch Clin Exp Ophthalmol. 1997;235(10):627–33.

  6. Jonas JB, Gusek GC, Naumann GO. Optic disk morphometry in high myopia. Graefes Arch Clin Exp Ophthalmol. 1988;226(6):587–90.

  7. Samarawickrama C, Mitchell P, Tong L, Gazzard G, Lim L, Wong T-Y, Saw S-M. Myopia-related optic disc and retinal changes in adolescent children from Singapore. Ophthalmology. 2011;118(10):2050–7.

  8. Hwang YH, Yoo C, Kim YY. Characteristics of peripapillary retinal nerve fiber layer thickness in eyes with myopic optic disc tilt and rotation. J Glaucoma. 2012;21(6):394–400.

  9. Law SK, Tamboli DA, Giaconi J, Caprioli J. Characterization of retinal nerve fiber layer in nonglaucomatous eyes with tilted discs. Arch Ophthalmol. 2010;128(1):141–2.

  10. Vuori ML, Mäntyjärvi M. Tilted disc syndrome may mimic false visual field deterioration. Acta Ophthalmol. 2008;86(6):622–5.

  11. Brazitikos PD, Safran AB, Simona F, Zulauf M. Threshold perimetry in tilted disc syndrome. Arch Ophthalmol. 1990;108(12):1698–700.

  12. Shoeibi N, Moghadas Sharif N, Daneshvar R, Ehsaei A. Visual field assessment in high myopia with and without tilted optic disc. Clin Exp Optom. 2017;100(6):690–4.

  13. Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, Van Der Laak JA, Van Ginneken B, Sánchez CI. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.

  14. Shen D, Wu G, Suk H-I. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221–48.

  15. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. Jama. 2016;316(22):2402–10.

  16. Abràmoff MD, Lou Y, Erginay A, Clarida W, Amelon R, Folk JC, Niemeijer M. Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning. Invest Ophthalmol Vis Sci. 2016;57(13):5200–6.

  17. Ting DSW, Cheung CY-L, Lim G, Tan GSW, Quang ND, Gan A, Hamzah H, Garcia-Franco R, San Yeo IY, Lee SY. Development and validation of a deep learning system for diabetic retinopathy and related eye diseases using retinal images from multiethnic populations with diabetes. Jama. 2017;318(22):2211–23.

  18. Son J, Shin JY, Kim HD, Jung KH, Park KH, Park SJ. Development and validation of deep learning models for screening multiple abnormal findings in retinal fundus images. Ophthalmology. 2020;127(1):85-94.

  19. How AC, Tan GS, Chan Y-H, Wong TT, Seah SK, Foster PJ, Aung T. Population prevalence of tilted and torted optic discs among an adult Chinese population in Singapore: the Tanjong Pagar study. Arch Ophthalmol. 2009;127(7):894–9.

  20. Jonas JB, Kling F, Gründler AE. Optic disc shape, corneal astigmatism, and amblyopia. Ophthalmology. 1997;104(11):1934–7.

  21. Grossniklaus HE, Green WR. Pathologic findings in pathologic myopia. Retina (Philadelphia, Pa). 1992;12(2):127–33.

  22. Nakazawa M, Kurotaki J, Ruike H. Longterm findings in peripapillary crescent formation in eyes with mild or moderate myopia. Acta Ophthalmol. 2008;86(6):626–9.

  23. Yasuzumi K, Ohno-Matsui K, Yoshida T, Kojima A, Shimada N, Futagami S, Tokoro T, Mochizuki M. Peripapillary crescent enlargement in highly myopic eyes evaluated by fluorescein and indocyanine green angiography. Br J Ophthalmol. 2003;87(9):1088–90.

  24. He K, Zhang X, Ren S, Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans Pattern Anal Mach Intell. 2015;37(9):1904–16.

  25. Zheng L, Zhao Y, Wang S, Wang J, Tian Q: Good practice in CNN feature transfer. arXiv preprint arXiv:160400133 2016.

  26. Esmaeili SA, Singh B, Davis LS: Fast-at: fast automatic thumbnail generation using deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 2017; 2017: 4622–4630.

  27. Kohavi R: A study of cross-validation and bootstrap for accuracy estimation and model selection. In: Ijcai: 1995: Montreal, Canada; 1995: 1137–1145.

  28. Arlot S, Celisse A. A survey of cross-validation procedures for model selection. Statistics surveys. 2010;4:40–79.

  29. Bro R, Smilde AK. Centering and scaling in component analysis. J Chemom. 2003;17(1):16–33.

  30. Ioffe S, Szegedy C: Batch normalization: Accelerating deep network training by reducing internal covariate shift. arXiv preprint arXiv:150203167 2015.

  31. Sola J, Sevilla J. Importance of input data normalization for the application of neural networks to complex industrial problems. IEEE Trans Nucl Sci. 1997;44(3):1464–8.

  32. Alom MZ, Taha TM, Yakopcic C, Westberg S, Sidike P, Nasrin MS, Hasan M, Van Essen BC, Awwal AA, Asari VK. A state-of-the-art survey on deep learning theory and architectures. Electronics. 2019;8(3):292.

  33. Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z: Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition: 2016; 2016: 2818–2826.

  34. Krizhevsky A, Sutskever I, Hinton GE: Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems: 2012; 2012: 1097–1105.

  35. Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M. Imagenet large scale visual recognition challenge. Int J Comput Vis. 2015;115(3):211–52.

  36. Agrawal P, Girshick R, Malik J: Analyzing the performance of multilayer neural networks for object recognition. In: European conference on computer vision: 2014: Springer; 2014: 329–344.

  37. Shin H-C, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016;35(5):1285–98.

  38. Burlina PM, Joshi N, Pekala M, Pacheco KD, Freund DE, Bressler NM. Automated grading of age-related macular degeneration from color fundus images using deep convolutional neural networks. JAMA Ophthalmol. 2017;135(11):1170–6.

  39. Hajian-Tilaki K. Receiver operating characteristic (ROC) curve analysis for medical diagnostic test evaluation. Caspian J Internal Med. 2013;4(2):627.

  40. Theodorou-Kanakari A, Karampitianis S, Karageorgou V, Kampourelli E, Kapasakis E, Theodossiadis P, Chatziralli I. Current and emerging treatment modalities for Leber's hereditary optic neuropathy: a review of the literature. Adv Ther. 2018;35(10):1510–8.

  41. Christopher M, Belghith A, Bowd C, Proudfoot JA, Goldbaum MH, Weinreb RN, Girkin CA, Liebmann JM, Zangwill LM. Performance of deep learning architectures and transfer learning for detecting glaucomatous optic neuropathy in fundus photographs. Sci Rep. 2018;8(1):16685.

  42. Kingma DP, Ba J: Adam: A method for stochastic optimization. arXiv preprint arXiv:14126980 2014.

  43. He K, Zhang X, Ren S, Sun J: Delving deep into rectifiers: surpassing human-level performance on imagenet classification. In: Proceedings of the IEEE international conference on computer vision: 2015; 2015: 1026–1034.

  44. Glorot X, Bengio Y: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the thirteenth international conference on artificial intelligence and statistics: 2010; 2010: 249–256.

  45. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Machine Learning Res. 2014;15(1):1929–58.

  46. Chattopadhay A, Sarkar A, Howlader P, Balasubramanian VN: Grad-cam++: Generalized gradient-based visual explanations for deep convolutional networks. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV): 2018: IEEE; 2018: 839–847.

  47. Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D: Grad-cam: visual explanations from deep networks via gradient-based localization. In: Proceedings of the IEEE International Conference on Computer Vision: 2017; 2017: 618–626.

  48. Montavon G, Samek W, Müller K-R. Methods for interpreting and understanding deep neural networks. Digital Signal Processing. 2018;73:1–15.

  49. Tan NY, Sng CC, Ang M. Myopic optic disc changes and its role in glaucoma. Curr Opin Ophthalmol. 2019;30(2):89–96.

  50. Chen W-Y, Liu Y-C, Kira Z, Wang Y-CF, Huang J-B: A closer look at few-shot classification. arXiv preprint arXiv:190404232 2019.

  51. Geifman Y, El-Yaniv R: Selectivenet: A deep neural network with an integrated reject option. arXiv preprint arXiv:190109192 2019.

  52. Park H-YL, Choi SI, Choi J-A, Park CK. Disc torsion and vertical disc tilt are related to subfoveal scleral thickness in open-angle glaucoma patients with myopia. Invest Ophthalmol Vis Sci. 2015;56(8):4927–35.

  53. Kim YC, Jung Y, Park H-YL, Park CK. The location of the deepest point of the eyeball determines the optic disc configuration. Sci Rep. 2017;7(1):5881.

  54. Kim YC, Moon J-S, Park H-YL, Park CK. Three dimensional evaluation of posterior pole and optic nerve head in tilted disc. Sci Rep. 2018;8(1):1121.

  55. Sung MS, Lee TH, Heo H, Park SW. Clinical features of superficial and deep peripapillary microvascular density in healthy myopic eyes. PLoS One. 2017;12(10):e0187160.

  56. Aizawa N, Kunikata H, Shiga Y, Yokoyama Y, Omodaka K, Nakazawa T. Correlation between structure/function and optic disc microcirculation in myopic glaucoma, measured with laser speckle flowgraphy. BMC Ophthalmol. 2014;14(1):113.


Acknowledgements

None.

Funding

This research was supported by the Biological & Medical Technology Development Program of the National Research Foundation of Korea (NRF) funded by the Korean government, MSIT (NRF-2017M3A9E1064784), the Basic Science Research Program through the NRF funded by the Ministry of Science and ICT (NRF-2019R1F1A1048920) to KAP, and the Basic Science Research Program through the NRF funded by the Ministry of Science and ICT (NRF-2020R1F1A1049248) to SYO. The funding offered support in the design of the study and collection, analysis, interpretation of data, and publication fee. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information

Contributions

K.A.P., S.Y.O., M.J.C., and B.H.C. contributed to conception and design. G.I.L., H.N., J.K.C., and M.C.K. contributed to acquisition of data. B.H.C., D.Y.L., and J.H.M. were involved in analysis and interpretation of data. K.A.P. and D.Y.L. were involved in drafting and revising the manuscript. K.A.P. and S.Y.O. supervised the study and contributed as co-corresponding authors. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Kyung-Ah Park or Sei Yeul Oh.

Ethics declarations

Ethics approval and consent to participate

This study was approved by the Institutional Review Board of Samsung Medical Center (Seoul, Republic of Korea, Approval No.: 2018–11-018); the informed consent was waived. All clinical investigations were conducted according to the principles expressed in the Declaration of Helsinki.

Consent for publication

Not applicable.

Competing interests

None of the authors have financial or other conflicts of interest to disclose.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Cho, B.H., Lee, D.Y., Park, KA. et al. Computer-aided recognition of myopic tilted optic disc using deep learning algorithms in fundus photography. BMC Ophthalmol 20, 407 (2020). https://doi.org/10.1186/s12886-020-01657-w


Keywords