1
Hasan MM, Phu J, Sowmya A, Meijering E, Kalloniatis M. Artificial intelligence in the diagnosis of glaucoma and neurodegenerative diseases. Clin Exp Optom 2024; 107:130-146. [PMID: 37674264] [DOI: 10.1080/08164622.2023.2235346]
Abstract
Artificial intelligence is a rapidly expanding field within computer science that encompasses the emulation of human intelligence by machines. Machine learning and deep learning - two primary data-driven pattern analysis approaches under the umbrella of artificial intelligence - have generated considerable interest in the last few decades. The evolution of technology has resulted in a substantial amount of artificial intelligence research on ophthalmic and neurodegenerative disease diagnosis using retinal images. Various artificial intelligence-based techniques have been used for diagnostic purposes, including traditional machine learning, deep learning, and their combinations. Presented here is a review of the literature covering the last 10 years on this topic, discussing the use of artificial intelligence in analysing data from different modalities, and their combinations, for the diagnosis of glaucoma and neurodegenerative diseases. The performance of published artificial intelligence methods varies due to several factors, yet the results suggest that such methods can potentially facilitate clinical diagnosis. Generally, the accuracy of artificial intelligence-assisted diagnosis ranges from 67% to 98%, and the area under the sensitivity-specificity curve (AUC) ranges from 0.71 to 0.98, which outperforms typical human performance of 71.5% accuracy and 0.86 area under the curve. This indicates that artificial intelligence-based tools can provide clinicians with useful information that would assist in improving diagnosis. The review suggests that there is room for improvement of existing artificial intelligence-based models using retinal imaging modalities before they are incorporated into clinical practice.
Affiliation(s)
- Md Mahmudul Hasan
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Jack Phu
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- Centre for Eye Health, University of New South Wales, Sydney, New South Wales, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
- Arcot Sowmya
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Erik Meijering
- School of Computer Science and Engineering, University of New South Wales, Kensington, New South Wales, Australia
- Michael Kalloniatis
- School of Optometry and Vision Science, University of New South Wales, Kensington, Australia
- School of Medicine (Optometry), Deakin University, Waurn Ponds, Victoria, Australia
2
Shroff S, Rao DP, Savoy FM, Shruthi S, Hsu CK, Pradhan ZS, Jayasree PV, Sivaraman A, Sengupta S, Shetty R, Rao HL. Agreement of a Novel Artificial Intelligence Software With Optical Coherence Tomography and Manual Grading of the Optic Disc in Glaucoma. J Glaucoma 2023; 32:280-286. [PMID: 36730188] [DOI: 10.1097/ijg.0000000000002147]
Abstract
PRCIS The offline artificial intelligence (AI) on a smartphone-based fundus camera shows good agreement and correlation with the vertical cup-to-disc ratio (vCDR) from the spectral-domain optical coherence tomography (SD-OCT) and manual grading by experts. PURPOSE The purpose of this study is to assess the agreement of vCDR measured by a new AI software from optic disc images obtained using a validated smartphone-based imaging device, with SD-OCT vCDR measurements, and manual grading by experts on a stereoscopic fundus camera. METHODS In a prospective, cross-sectional study, participants above 18 years (glaucoma and normal) underwent a dilated fundus evaluation, followed by optic disc imaging including a 42-degree monoscopic disc-centered image (Remidio NM-FOP-10), a 30-degree stereoscopic disc-centered image (Kowa nonmyd WX-3D desktop fundus camera), and disc analysis (Cirrus SD-OCT). Remidio FOP images were analyzed for vCDR using the new AI software, and Kowa stereoscopic images were manually graded by 3 fellowship-trained glaucoma specialists. RESULTS We included 473 eyes of 244 participants. The vCDR values from the new AI software showed strong agreement with SD-OCT measurements [95% limits of agreement (LoA)=-0.13 to 0.16]. The agreement with SD-OCT was marginally better in eyes with higher vCDR (95% LoA=-0.15 to 0.12 for vCDR>0.8). The intraclass correlation coefficient was 0.90 (95% CI, 0.88-0.91). The vCDR values from AI software showed a good correlation with the manual segmentation by experts (intraclass correlation coefficient=0.89, 95% CI, 0.87-0.91) on stereoscopic images (95% LoA=-0.18 to 0.11) with agreement better for eyes with vCDR>0.8 (LoA=-0.12 to 0.08). CONCLUSIONS The new AI software vCDR measurements had an excellent agreement and correlation with the SD-OCT and manual grading. The ability of the Medios AI to work offline, without requiring cloud-based inferencing, is an added advantage.
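The 95% limits of agreement (LoA) reported in this abstract come from a Bland-Altman analysis: the mean of the paired differences plus or minus 1.96 standard deviations. A minimal sketch, using made-up paired vCDR readings (the function and data names are illustrative, not from the study):

```python
import statistics

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two sets of
    paired measurements (e.g. AI vCDR vs. SD-OCT vCDR)."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_diff = statistics.fmean(diffs)
    sd_diff = statistics.stdev(diffs)  # sample standard deviation
    return mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff

# Illustrative (made-up) paired vCDR readings from two methods
ai_vcdr = [0.55, 0.62, 0.80, 0.45, 0.70, 0.90]
oct_vcdr = [0.50, 0.65, 0.78, 0.47, 0.72, 0.88]
lo, hi = limits_of_agreement(ai_vcdr, oct_vcdr)
print(f"95% LoA: {lo:.3f} to {hi:.3f}")
```

A narrow interval straddling zero, as in the study's -0.13 to 0.16, indicates that the two methods rarely disagree by a clinically meaningful amount.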
Affiliation(s)
- Sujani Shroff
- Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- Divya P Rao
- Remidio Innovative Solution Inc., Glen Allen, VA
- Florian M Savoy
- Medios Technologies, Remidio Innovative Solutions Pvt Ltd, Singapore
- S Shruthi
- Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- Chao-Kai Hsu
- Medios Technologies, Remidio Innovative Solutions Pvt Ltd, Singapore
- Zia S Pradhan
- Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- P V Jayasree
- Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- Anand Sivaraman
- Remidio Innovative Solution Pvt Ltd, Bengaluru, Karnataka, India
- Rohit Shetty
- Department of Glaucoma, Narayana Nethralaya, Rajajinagar
- Harsha L Rao
- Department of Glaucoma, Narayana Nethralaya, Bannerghatta Road
3
Coan LJ, Williams BM, Krishna Adithya V, Upadhyaya S, Alkafri A, Czanner S, Venkatesh R, Willoughby CE, Kavitha S, Czanner G. Automatic detection of glaucoma via fundus imaging and artificial intelligence: A review. Surv Ophthalmol 2023; 68:17-41. [PMID: 35985360] [DOI: 10.1016/j.survophthal.2022.08.005]
Abstract
Glaucoma is a leading cause of irreversible vision impairment globally, and cases are continuously rising worldwide. Early detection is crucial, allowing timely intervention that can prevent further visual field loss. To detect glaucoma an examination of the optic nerve head via fundus imaging can be performed, at the center of which is the assessment of the optic cup and disc boundaries. Fundus imaging is noninvasive and low-cost; however, image examination relies on subjective, time-consuming, and costly expert assessments. A timely question to ask is: "Can artificial intelligence mimic glaucoma assessments made by experts?" Specifically, can artificial intelligence automatically find the boundaries of the optic cup and disc (providing a so-called segmented fundus image) and then use the segmented image to identify glaucoma with high accuracy? We conducted a comprehensive review on artificial intelligence-enabled glaucoma detection frameworks that produce and use segmented fundus images and summarized the advantages and disadvantages of such frameworks. We identified 36 relevant papers from 2011 to 2021 and 2 main approaches: 1) logical rule-based frameworks, based on a set of rules; and 2) machine learning/statistical modeling-based frameworks. We critically evaluated the state of the art of the 2 approaches, identified gaps in the literature, and pointed to areas for future research.
Affiliation(s)
- Lauren J Coan
- School of Computer Science and Mathematics, Liverpool John Moores University, UK
- Bryan M Williams
- School of Computing and Communications, Lancaster University, UK
- Swati Upadhyaya
- Department of Glaucoma, Aravind Eye Hospital, Pondicherry, India
- Ala Alkafri
- School of Computing, Engineering & Digital Technologies, Teesside University, UK
- Silvester Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, UK; Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
- Rengaraj Venkatesh
- Department of Glaucoma and Chief Medical Officer, Aravind Eye Hospital, Pondicherry, India
- Gabriela Czanner
- School of Computer Science and Mathematics, Liverpool John Moores University, UK; Faculty of Informatics and Information Technologies, Slovak University of Technology, Slovakia
4
Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review. NPJ Digit Med 2022; 5:156. [PMID: 36261476] [PMCID: PMC9581990] [DOI: 10.1038/s41746-022-00699-2]
Abstract
Transparency in Machine Learning (ML), often also referred to as interpretability or explainability, attempts to reveal the working mechanisms of complex models. From a human-centered design perspective, transparency is not a property of the ML model but an affordance, i.e., a relationship between algorithm and users. Thus, prototyping and user evaluations are critical to attaining solutions that afford transparency. Following human-centered design principles in highly specialized and high stakes domains, such as medical image analysis, is challenging due to the limited access to end users and the knowledge imbalance between those users and ML designers. To investigate the state of transparent ML in medical image analysis, we conducted a systematic review of the literature from 2012 to 2021 in PubMed, EMBASE, and Compendex databases. We identified 2508 records and 68 articles met the inclusion criteria. Current techniques in transparent ML are dominated by computational feasibility and barely consider end users, e.g. clinical stakeholders. Despite the different roles and knowledge of ML developers and end users, no study reported formative user research to inform the design and development of transparent ML models. Only a few studies validated transparency claims through empirical user evaluations. These shortcomings put contemporary research on transparent ML at risk of being incomprehensible to users, and thus, clinically irrelevant. To alleviate these shortcomings in forthcoming research, we introduce the INTRPRT guideline, a design directive for transparent ML systems in medical image analysis. The INTRPRT guideline suggests human-centered design principles, recommending formative user research as the first step to understand user needs and domain requirements. Following these guidelines increases the likelihood that the algorithms afford transparency and enable stakeholders to capitalize on the benefits of transparent ML.
5
Wu JH, Nishida T, Weinreb RN, Lin JW. Performances of Machine Learning in Detecting Glaucoma Using Fundus and Retinal Optical Coherence Tomography Images: A Meta-Analysis. Am J Ophthalmol 2022; 237:1-12. [PMID: 34942113] [DOI: 10.1016/j.ajo.2021.12.008]
Abstract
PURPOSE To evaluate the performance of machine learning (ML) in detecting glaucoma using fundus and retinal optical coherence tomography (OCT) images. DESIGN Meta-analysis. METHODS PubMed and EMBASE were searched on August 11, 2021. A bivariate random-effects model was used to pool ML's diagnostic sensitivity, specificity, and area under the curve (AUC). Subgroup analyses were performed based on ML classifier categories and dataset types. RESULTS One hundred and five studies (3.3%) were retrieved. Seventy-three (69.5%), 30 (28.6%), and 2 (1.9%) studies tested ML using fundus, OCT, and both image types, respectively. Total testing data numbers were 197,174 for fundus and 16,039 for OCT. Overall, ML showed excellent performances for both fundus (pooled sensitivity = 0.92 [95% CI, 0.91-0.93]; specificity = 0.93 [95% CI, 0.91-0.94]; and AUC = 0.97 [95% CI, 0.95-0.98]) and OCT (pooled sensitivity = 0.90 [95% CI, 0.86-0.92]; specificity = 0.91 [95% CI, 0.89-0.92]; and AUC = 0.96 [95% CI, 0.93-0.97]). ML performed similarly using all data and external data for fundus and the external test result of OCT was less robust (AUC = 0.87). When comparing different classifier categories, although support vector machine showed the highest performance (pooled sensitivity, specificity, and AUC ranges, 0.92-0.96, 0.95-0.97, and 0.96-0.99, respectively), results by neural network and others were still good (pooled sensitivity, specificity, and AUC ranges, 0.88-0.93, 0.90-0.93, 0.95-0.97, respectively). When analyzed based on dataset types, ML demonstrated consistent performances on clinical datasets (fundus AUC = 0.98 [95% CI, 0.97-0.99] and OCT AUC = 0.95 [95% 0.93-0.97]). CONCLUSIONS Performance of ML in detecting glaucoma compares favorably to that of experts and is promising for clinical application. Future prospective studies are needed to better evaluate its real-world utility.
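The pooling described above uses a bivariate random-effects model, which requires specialist software; a simpler univariate DerSimonian-Laird random-effects pooling of logit-transformed sensitivities illustrates the underlying idea. All study numbers below are hypothetical, not values from this meta-analysis:

```python
import math

def dersimonian_laird(effects, variances):
    """Univariate DerSimonian-Laird random-effects pooling.
    `effects` are per-study logit-transformed sensitivities and
    `variances` their within-study variances."""
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    return pooled, tau2

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical per-study sensitivities and logit-scale variances
sens = [0.91, 0.94, 0.89, 0.93]
var = [0.02, 0.03, 0.025, 0.015]
pooled_logit, tau2 = dersimonian_laird([logit(s) for s in sens], var)
print(f"pooled sensitivity ~ {inv_logit(pooled_logit):.3f}")
```

The logit transform keeps the pooled proportion inside (0, 1); the bivariate model used in the paper additionally models the correlation between sensitivity and specificity across studies.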
6
Segmentation and Classification of Glaucoma Using U-Net with Deep Learning Model. J Healthc Eng 2022; 2022:1601354. [PMID: 35222876] [PMCID: PMC8866016] [DOI: 10.1155/2022/1601354]
Abstract
Glaucoma is the second most common cause of blindness worldwide and the third most common in Europe and the USA. Around 78 million people were living with glaucoma as of 2020, and 111.8 million people are expected to have glaucoma by the year 2040. In developing nations, 90% of glaucoma cases go undetected, so it is essential to develop a glaucoma detection system for early diagnosis. In this research, early prediction of glaucoma using a deep learning technique is proposed. The ORIGA dataset is used for the evaluation of glaucoma images. A U-Net architecture is implemented for optic cup segmentation, a pretrained transfer learning model (DenseNet-201) is used for feature extraction, and a deep convolutional neural network (DCNN) performs the final classification, indicating whether or not the eye is affected by glaucoma. The primary objective of this research is to detect glaucoma from retinal fundus images, which can be used to determine whether a patient is affected by glaucoma. The model is evaluated using parameters such as accuracy, precision, recall, specificity, and F-measure, and a comparative analysis is conducted to validate the proposed model against other current deep learning models used for CNN classification, such as VGG-19, Inception ResNet, ResNet 152v2, and DenseNet-169. The proposed model achieved 98.82% accuracy in training and 96.90% in testing, outperforming the compared models across all analyses.
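The evaluation metrics named in this abstract (accuracy, precision, recall, specificity, F-measure) all derive from the binary confusion matrix. A minimal sketch with hypothetical counts, not the study's data:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)            # also called sensitivity
    specificity = tn / (tn + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "specificity": specificity,
            "f_measure": f_measure}

# Hypothetical counts for a glaucoma / non-glaucoma test set
m = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print({k: round(v, 3) for k, v in m.items()})
```

Reporting recall and specificity together matters in screening: a model can reach high accuracy on an imbalanced test set while missing many diseased eyes.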
7
Addis V, Chen M, Zorger R, Salowe R, Daniel E, Lee R, Pistilli M, Gao J, Maguire MG, Chan L, Gudiseva HV, Zenebe-Gete S, Merriam S, Smith EJ, Martin R, Parker Ostroff C, Gee JC, Cui QN, Miller-Ellis E, O'Brien JM, Sankar PS. A Precise Method to Evaluate 360 Degree Measures of Optic Cup and Disc Morphology in an African American Cohort and Its Genetic Applications. Genes (Basel) 2021; 12:1961. [PMID: 34946910] [PMCID: PMC8701339] [DOI: 10.3390/genes12121961]
Abstract
(1) Background: Vertical cup-to-disc ratio (CDR) is an important measure for evaluating damage to the optic nerve head (ONH) in glaucoma patients. However, this measure often does not fully capture the irregular cupping observed in glaucomatous nerves. We developed and evaluated a method to measure cup-to-disc ratio (CDR) at all 360 degrees of the ONH. (2) Methods: Non-physician graders from the Scheie Reading Center outlined the cup and disc on digital stereo color disc images from African American patients enrolled in the Primary Open-Angle African American Glaucoma Genetics (POAAGG) study. After converting the resultant coordinates into polar representation, the CDR at each 360-degree location of the ONH was obtained. We compared grader VCDR values with clinical VCDR values, using Spearman correlation analysis, and validated significant genetic associations with clinical VCDR, using grader VCDR values. (3) Results: Graders delineated outlines of the cup contour and disc boundaries twice in each of 1815 stereo disc images. For both cases and controls, the mean CDR was highest at the horizontal bisector, particularly in the temporal region, as compared to other degree locations. There was a good correlation between grader CDR at the vertical bisector and clinical VCDR (Spearman Correlation OD: r = 0.78 [95% CI: 0.76–0.79]). An SNP in the MPDZ gene, associated with clinical VCDR in a prior genome-wide association study, showed a significant association with grader VCDR (p = 0.01) and grader CDR area ratio (p = 0.02). (4) Conclusions: The CDR of both glaucomatous and non-glaucomatous eyes varies by degree location, with the highest measurements in the temporal region of the eye. This method can be useful for capturing innate eccentric ONH morphology, tracking disease progression, and identifying genetic associations.
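The conversion described above — cup and disc outlines mapped into polar coordinates around a common center so that a cup-to-disc ratio can be read off at every degree — can be sketched as follows. Function names and the toy concentric-circle contours are illustrative, not the study's implementation:

```python
import math

def radius_by_degree(contour, center):
    """Map contour points (x, y) to the nearest whole degree around
    `center`, keeping the radius measured at each angle."""
    cx, cy = center
    radii = {}
    for x, y in contour:
        theta = math.degrees(math.atan2(y - cy, x - cx)) % 360
        radii[round(theta) % 360] = math.hypot(x - cx, y - cy)
    return radii

def cdr_by_degree(cup, disc, center):
    """Per-degree cup-to-disc ratio wherever both contours have a point."""
    rc, rd = radius_by_degree(cup, center), radius_by_degree(disc, center)
    return {d: rc[d] / rd[d] for d in rc if d in rd and rd[d] > 0}

# Toy contours: concentric circles, cup radius 2 and disc radius 5,
# sampled once per degree (a real optic nerve head is irregular)
center = (0.0, 0.0)
angles = [math.radians(d) for d in range(360)]
cup = [(2 * math.cos(a), 2 * math.sin(a)) for a in angles]
disc = [(5 * math.cos(a), 5 * math.sin(a)) for a in angles]
ratios = cdr_by_degree(cup, disc, center)
print(min(ratios.values()), max(ratios.values()))
```

For these concentric circles every degree gives the same ratio (0.4); on a glaucomatous nerve the per-degree profile reveals the eccentric cupping that a single vertical CDR misses.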
Affiliation(s)
- Victoria Addis
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Min Chen
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Richard Zorger
- Penn Vision Research Center, University of Pennsylvania, Philadelphia, PA 19104, USA
- Rebecca Salowe
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Ebenezer Daniel
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Roy Lee
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Maxwell Pistilli
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Jinpeng Gao
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Maureen G. Maguire
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Lilian Chan
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Harini V. Gudiseva
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Selam Zenebe-Gete
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Sayaka Merriam
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Eli J. Smith
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Revell Martin
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Candace Parker Ostroff
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- James C. Gee
- Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA
- Qi N. Cui
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Eydie Miller-Ellis
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Joan M. O'Brien
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
- Correspondence: Joan.O'; Tel.: +1-215-662-8657; Fax: +1-215-662-9676
- Prithvi S. Sankar
- Scheie Eye Institute, University of Pennsylvania, Philadelphia, PA 19104, USA
8
Burke J, King S. Edge Tracing Using Gaussian Process Regression. IEEE Trans Image Process 2021; 31:138-148. [PMID: 34807828] [DOI: 10.1109/tip.2021.3128329]
Abstract
We introduce a novel edge tracing algorithm using Gaussian process regression. Our edge-based segmentation algorithm models an edge of interest using Gaussian process regression and iteratively searches the image for edge pixels in a recursive Bayesian scheme. This procedure combines local edge information from the image gradient and global structural information from posterior curves, sampled from the model's posterior predictive distribution, to sequentially build and refine an observation set of edge pixels. This accumulation of pixels converges the distribution to the edge of interest. Hyperparameters can be tuned by the user at initialisation and optimised given the refined observation set. This tunable approach does not require any prior training and is not restricted to any particular type of imaging domain. Due to the model's uncertainty quantification, the algorithm is robust to artefacts and occlusions which degrade the quality and continuity of edges in images. Our approach also has the ability to efficiently trace edges in image sequences by using previous-image edge traces as a priori information for consecutive images. Various applications to medical imaging and satellite imaging are used to validate the technique and comparisons are made with two commonly used edge tracing algorithms.
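The paper's full recursive Bayesian edge-tracing scheme is beyond a snippet, but the Gaussian process regression at its core can be sketched: the posterior mean at a new location is k_*^T (K + sigma^2 I)^{-1} y. A minimal 1D sketch with an RBF kernel and hypothetical edge observations (all names and values are illustrative, not the authors' code):

```python
import math

def rbf(x1, x2, length=1.0, var=1.0):
    """Squared-exponential (RBF) covariance between two scalar inputs."""
    return var * math.exp(-0.5 * ((x1 - x2) / length) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior_mean(xs, ys, x_star, noise=1e-6):
    """GP posterior mean at x_star: k_*^T (K + noise*I)^{-1} y."""
    K = [[rbf(a, b2) + (noise if i == j else 0.0)
          for j, b2 in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    return sum(rbf(x_star, xi) * ai for xi, ai in zip(xs, alpha))

# Hypothetical edge observations: image column -> row of an edge pixel
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.8, 0.9, 0.1]
print(gp_posterior_mean(xs, ys, 1.5))  # interpolated edge position
```

In the paper, candidate edge pixels scored against such a posterior (and its predictive variance) are iteratively added to the observation set, which is what makes the tracer robust to occlusions and broken edges.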
9
Kollia E, Patsea E, Georgalas I, Brouzas D, Papaconstantinou D. Correlation Between Central Corneal Thickness and Radial Peripapillary Capillary Density in Patients With Ocular Hypertension. Cureus 2021; 13:e17138. [PMID: 34408962] [PMCID: PMC8362868] [DOI: 10.7759/cureus.17138]
Abstract
Purpose To investigate any possible relationship between the central corneal thickness and the radial peripapillary capillary density detected by optical coherence tomography (OCT) angiography in eyes with ocular hypertension. Materials and methods In this observational study, 135 eyes were examined. OCT angiography of the optic disc (4.5 mm) and ultrasound corneal pachymetry were performed in all cases. Age, medical treatment for ocular hypertension, sex, and retinal nerve fiber layer thickness were evaluated. The main indices of blood flow were also examined. Spearman correlation coefficients were used to explore the association between two continuous variables. Results A significant positive correlation between central corneal thickness and radial peripapillary capillary density was found in eyes with ocular hypertension (p = .036). Conclusions Central corneal thickness and radial peripapillary capillary density constitute two essential screening parameters for patients with ocular hypertension.
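Spearman's rho, used above, is simply the Pearson correlation computed on rank vectors (with tied values sharing their average rank). A stdlib sketch with made-up pachymetry and capillary-density values, purely illustrative:

```python
def ranks(values):
    """Average ranks (1-based); ties share the mean of their positions."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Toy data: central corneal thickness (um) vs. capillary density (%)
cct = [520, 540, 555, 560, 580, 600]
rpc = [47.0, 48.5, 48.0, 50.1, 51.3, 52.0]
print(round(spearman(cct, rpc), 3))
```

Being rank-based, rho only assumes a monotonic relationship, which suits clinical measurements whose joint distribution is unknown.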
Affiliation(s)
- Elpida Kollia
- Ophthalmology, National and Kapodistrian University of Athens School of Medicine, Athens, GRC
- Eleni Patsea
- Ophthalmology/Glaucoma, Ophthalmiatreion Athinon, Athens, GRC
- Ilias Georgalas
- Ophthalmology, National and Kapodistrian University of Athens School of Medicine, Athens, GRC
- Dimitrios Brouzas
- Ophthalmology, "G. Gennimatas" Hospital, National and Kapodistrian University of Athens School of Medicine, Athens, GRC
- Dimitrios Papaconstantinou
- Ophthalmology, "G. Gennimatas" Hospital, National and Kapodistrian University of Athens School of Medicine, Athens, GRC
10
Aggarwal R, Sounderajah V, Martin G, Ting DSW, Karthikesalingam A, King D, Ashrafian H, Darzi A. Diagnostic accuracy of deep learning in medical imaging: a systematic review and meta-analysis. NPJ Digit Med 2021; 4:65. [PMID: 33828217] [PMCID: PMC8027892] [DOI: 10.1038/s41746-021-00438-z]
Abstract
Deep learning (DL) has the potential to transform medical diagnostics. However, the diagnostic accuracy of DL is uncertain. Our aim was to evaluate the diagnostic accuracy of DL algorithms to identify pathology in medical imaging. Searches were conducted in Medline and EMBASE up to January 2020. We identified 11,921 studies, of which 503 were included in the systematic review. Eighty-two studies in ophthalmology, 82 in breast disease and 115 in respiratory disease were included for meta-analysis. Two hundred twenty-four studies in other specialities were included for qualitative review. Peer-reviewed studies that reported on the diagnostic accuracy of DL algorithms to identify pathology using medical imaging were included. Primary outcomes were measures of diagnostic accuracy, study design and reporting standards in the literature. Estimates were pooled using random-effects meta-analysis. In ophthalmology, AUCs ranged between 0.933 and 1 for diagnosing diabetic retinopathy, age-related macular degeneration and glaucoma on retinal fundus photographs and optical coherence tomography. In respiratory imaging, AUCs ranged between 0.864 and 0.937 for diagnosing lung nodules or lung cancer on chest X-ray or CT scan. For breast imaging, AUCs ranged between 0.868 and 0.909 for diagnosing breast cancer on mammogram, ultrasound, MRI and digital breast tomosynthesis. Heterogeneity was high between studies and extensive variation in methodology, terminology and outcome measures was noted. This can lead to an overestimation of the diagnostic accuracy of DL algorithms on medical imaging. There is an immediate need for the development of artificial intelligence-specific EQUATOR guidelines, particularly STARD, in order to provide guidance around key issues in this field.
Affiliation(s)
- Ravi Aggarwal
- Institute of Global Health Innovation, Imperial College London, London, UK
- Guy Martin
- Institute of Global Health Innovation, Imperial College London, London, UK
- Daniel S W Ting
- Singapore Eye Research Institute, Singapore National Eye Center, Singapore, Singapore
- Dominic King
- Institute of Global Health Innovation, Imperial College London, London, UK
- Hutan Ashrafian
- Institute of Global Health Innovation, Imperial College London, London, UK
- Ara Darzi
- Institute of Global Health Innovation, Imperial College London, London, UK
11
Oh S, Park Y, Cho KJ, Kim SJ. Explainable Machine Learning Model for Glaucoma Diagnosis and Its Interpretation. Diagnostics (Basel) 2021; 11:510. [PMID: 33805685] [PMCID: PMC8001225] [DOI: 10.3390/diagnostics11030510]
Abstract
The aim is to develop a machine learning prediction model for the diagnosis of glaucoma and an explanation system for a specific prediction. Clinical data of the patients based on a visual field test, a retinal nerve fiber layer optical coherence tomography (RNFL OCT) test, a general examination including an intraocular pressure (IOP) measurement, and fundus photography were provided for the feature selection process. Five selected features (variables) were used to develop a machine learning prediction model. The support vector machine, C5.0, random forest, and XGBoost algorithms were tested for the prediction model. The performance of the prediction models was tested with 10-fold cross-validation. Statistical charts, such as gauge, radar, and Shapley Additive Explanations (SHAP), were used to explain the prediction case. All four models achieved similarly high diagnostic performance, with accuracy values ranging from 0.903 to 0.947. The XGBoost model performed best, with an accuracy of 0.947, sensitivity of 0.941, specificity of 0.950, and AUC of 0.945. Three statistical charts were established to explain the prediction based on the characteristics of the XGBoost model. Higher diagnostic performance was achieved with the XGBoost model. These three statistical charts can help us understand why the machine learning model produces a specific prediction result. This may be the first attempt to apply "explainable artificial intelligence" to eye disease diagnosis.
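SHAP, used in this study to explain individual XGBoost predictions, approximates Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution over all coalitions of the other features. Neither SHAP nor XGBoost is reproduced here; instead, exact Shapley values for a toy risk function over a few hypothetical glaucoma features (all names and coefficients invented) show the attribution idea:

```python
from itertools import combinations
from math import factorial

def shapley_values(features, predict, baseline):
    """Exact Shapley attribution for a small feature set.
    `predict` maps a full feature dict to a model output; features
    absent from a coalition are imputed from `baseline`."""
    names = list(features)
    n = len(names)

    def value(subset):
        x = dict(baseline)
        x.update({k: features[k] for k in subset})
        return predict(x)

    phi = {}
    for f in names:
        others = [k for k in names if k != f]
        total = 0.0
        for size in range(n):
            for s in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                total += w * (value(set(s) | {f}) - value(set(s)))
        phi[f] = total
    return phi

# Toy linear "risk score" over hypothetical glaucoma features
def risk(x):
    return 0.5 * x["iop"] + 2.0 * x["vcdr"] + 0.1 * x["rnfl_loss"]

patient = {"iop": 24, "vcdr": 0.8, "rnfl_loss": 30}
baseline = {"iop": 15, "vcdr": 0.4, "rnfl_loss": 5}
phi = shapley_values(patient, risk, baseline)
print(phi)
```

For a linear model each attribution reduces to coefficient times the feature's deviation from baseline, and the attributions sum to the difference between the patient's score and the baseline score — the additivity property that makes SHAP charts readable.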
Affiliation(s)
- Sejong Oh
- Software Science, College of Software Convergence, Jukjeon Campus, Dankook University, Yongin 16890, Korea
- Yuli Park
- Department of Ophthalmology, College of Medicine, Dankook University, 119, Dandae-ro, Dongnam-gu, Cheonan-si, Chungnam 31116, Korea
- Kyong Jin Cho
- Department of Ophthalmology, College of Medicine, Dankook University, 119, Dandae-ro, Dongnam-gu, Cheonan-si, Chungnam 31116, Korea
- Seong Jae Kim
- Department of Ophthalmology, Institute of Health Sciences, Gyeongsang National University School of Medicine and Gyeongsang National University Hospital, Jinju 52727, Korea
12
Campbell CG, Ting DSW, Keane PA, Foster PJ. The potential application of artificial intelligence for diagnosis and management of glaucoma in adults. Br Med Bull 2020; 134:21-33. [PMID: 32518944 DOI: 10.1093/bmb/ldaa012]
Abstract
BACKGROUND Glaucoma is the most frequent cause of irreversible blindness worldwide. There is no cure, but early detection and treatment can slow progression and prevent loss of vision. It has been suggested that artificial intelligence (AI) has potential applications in the detection and management of glaucoma. SOURCES OF DATA This literature review is based on articles published in peer-reviewed journals. AREAS OF AGREEMENT There have been significant advances in both AI and the imaging techniques able to identify early signs of glaucomatous damage. Machine and deep learning algorithms show capabilities equivalent, if not superior, to those of human experts. AREAS OF CONTROVERSY There are concerns that increased reliance on AI may lead to deskilling of clinicians. GROWING POINTS AI has potential to be used in virtual review clinics, telemedicine and as a training tool for junior doctors. Unsupervised AI techniques offer the potential of uncovering currently unrecognized patterns of disease. If this promise is fulfilled, AI may then be of use in challenging cases or where a second opinion is desirable. AREAS TIMELY FOR DEVELOPING RESEARCH There is a need to determine the external validity of deep learning algorithms and to better understand how 'black box' models reach their results.
Affiliation(s)
- Cara G Campbell
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Daniel S W Ting
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- Pearse A Keane
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- National Institute for Health Research Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 2/12 Wolfson Building and UCL Institute of Ophthalmology, 11-43 Bath Street, London EC1V 9EL, UK
- Paul J Foster
- UCL Institute of Ophthalmology, Faculty of Brain Science, University College London, 11-43 Bath Street, London EC1V 9EL, UK
- Medical Retina Service, Moorfields Eye Hospital NHS Foundation Trust, 162 City Road, London EC1V 2PD, UK
- National Institute for Health Research Biomedical Research Centre, Moorfields Eye Hospital NHS Foundation Trust, 2/12 Wolfson Building and UCL Institute of Ophthalmology, 11-43 Bath Street, London EC1V 9EL, UK
13
Normando EM, Yap TE, Maddison J, Miodragovic S, Bonetti P, Almonte M, Mohammad NG, Ameen S, Crawley L, Ahmed F, Bloom PA, Cordeiro MF. A CNN-aided method to predict glaucoma progression using DARC (Detection of Apoptosing Retinal Cells). Expert Rev Mol Diagn 2020; 20:737-748. [PMID: 32310684 PMCID: PMC7115906 DOI: 10.1080/14737159.2020.1758067]
Abstract
BACKGROUND A key objective in glaucoma is to identify those at risk of rapid progression and blindness. Recently, a novel first-in-man method for visualising apoptotic retinal cells, called DARC (Detection of Apoptosing Retinal Cells), was reported. The aim was to develop an automatic CNN-aided method of DARC spot detection to enable prediction of glaucoma progression. METHODS Anonymised DARC images were acquired from healthy control (n=40) and glaucoma (n=20) Phase 2 clinical trial subjects (ISRCTN10751859), from which 5 observers manually counted spots. The CNN-aided algorithm was trained and validated using manual counts from control subjects, and then tested on glaucoma eyes. RESULTS The algorithm achieved 97.0% accuracy, 91.1% sensitivity and 97.1% specificity for spot detection when compared with manual grading on the held-out 50% of controls. It was next tested on glaucoma patient eyes defined as progressing or stable based on a significant (p<0.05) rate of progression on OCT retinal nerve fibre layer measurements at 18 months. It demonstrated 85.7% sensitivity and 91.7% specificity with an AUC of 0.89, and a significantly (p=0.0044) greater DARC count in those patients who later progressed. CONCLUSION This CNN-enabled algorithm provides an automated and objective measure of DARC, promoting its use as an AI-aided biomarker for predicting glaucoma progression and testing new drugs.
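The spot-counting task at the heart of DARC grading can be illustrated independently of the CNN. The sketch below counts bright spots as 4-connected components above an intensity threshold on a synthetic frame — an assumed stand-in for the counting step only, not the paper's trained detector:

```python
def count_spots(image, threshold):
    """Count bright spots as 4-connected components above a threshold.
    A stand-in for the counting step only; the paper's detector is a CNN."""
    rows, cols = len(image), len(image[0])
    seen = [[False] * cols for _ in range(rows)]
    spots = 0
    for r in range(rows):
        for c in range(cols):
            if image[r][c] >= threshold and not seen[r][c]:
                spots += 1
                seen[r][c] = True
                stack = [(r, c)]  # flood-fill the rest of this component
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and \
                           image[ny][nx] >= threshold and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return spots

# Synthetic 6x6 frame with two separated bright spots
frame = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 9, 0, 0, 0, 0],
    [0, 0, 0, 0, 8, 0],
    [0, 0, 0, 8, 8, 0],
    [0, 0, 0, 0, 0, 0],
]
print(count_spots(frame, threshold=5))  # → 2
```

A CNN replaces the fixed threshold with a learned per-pixel spot probability, which is what lets the published method match human graders on noisy retinal images.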
Affiliation(s)
- Eduardo M Normando
- ICORG, Imperial College London, London, UK
- Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Tim E Yap
- ICORG, Imperial College London, London, UK
- Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Philip A Bloom
- ICORG, Imperial College London, London, UK
- Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- Maria Francesca Cordeiro
- ICORG, Imperial College London, London, UK
- Western Eye Hospital, Imperial College Healthcare NHS Trust, London, UK
- UCL Institute of Ophthalmology, London, UK
14
Zhu W, Kolamunnage-Dona R, Zheng Y, Harding S, Czanner G. Spatial and spatio-temporal statistical analyses of retinal images: a review of methods and applications. BMJ Open Ophthalmol 2020; 5:e000479. [PMID: 32537517 PMCID: PMC7264837 DOI: 10.1136/bmjophth-2020-000479]
Abstract
Background Clinical research and management of retinal diseases depend heavily on the interpretation of retinal images, which are often collected longitudinally. Retinal images provide context for spatial data, namely the location of specific pathologies within the retina. Longitudinally collected images can show how clinical events at one point can affect the retina over time. In this review, we aimed to assess statistical approaches to spatial and spatio-temporal data in retinal images. We also review the spatio-temporal modelling approaches used in other medical image types. Methods We conducted a comprehensive literature review of both spatial or spatio-temporal approaches and non-spatial approaches to the statistical analysis of retinal images. The key methodological and clinical characteristics of published papers were extracted. We also investigated whether clinical variables and spatial correlation were accounted for in the analysis. Results Thirty-four papers that included retinal imaging data were identified for full-text information extraction. Only 11 (32.4%) papers used spatial or spatio-temporal statistical methods to analyse images; the remaining 23 (67.6%) used non-spatial methods. Twenty-eight (82.4%) papers reported images collected cross-sectionally, while 6 (17.6%) reported analyses of images collected longitudinally. In imaging areas outside of ophthalmology, 19 papers with spatio-temporal analysis were identified, using multiple statistical methods. Conclusions In future statistical analyses of retinal images, it will be beneficial to clearly define and report the spatial distributions studied, report the spatial correlations, combine imaging data with clinical variables where available, and clearly state the software or packages used.
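One spatial correlation measure that the review's conclusions ask authors to report is global spatial autocorrelation. A minimal sketch of Moran's I on a small grid with rook adjacency is given below; the grids are synthetic illustrations, and the function is a generic implementation rather than anything from the reviewed papers:

```python
def morans_i(grid):
    """Moran's I spatial autocorrelation on a 2D grid, rook adjacency.
    I > 0: neighbouring values are similar; I < 0: they alternate."""
    rows, cols = len(grid), len(grid[0])
    vals = [v for row in grid for v in row]
    n = len(vals)
    mean = sum(vals) / n
    dev = {(r, c): grid[r][c] - mean for r in range(rows) for c in range(cols)}
    num, w_sum = 0.0, 0
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0), (0, -1), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    num += dev[(r, c)] * dev[(rr, cc)]
                    w_sum += 1
    denom = sum(d * d for d in dev.values())
    return (n / w_sum) * (num / denom)

smooth = [[r + c for c in range(4)] for r in range(4)]         # gradient: positive autocorrelation
checker = [[(r + c) % 2 for c in range(4)] for r in range(4)]  # checkerboard: negative
print(morans_i(smooth), morans_i(checker))
```

Reporting a statistic like this alongside a thickness or pathology map quantifies exactly the spatial correlation that, per the review, most non-spatial analyses ignore; libraries such as PySAL provide production implementations with inference.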
Affiliation(s)
- Wenyue Zhu
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK
- Ruwanthi Kolamunnage-Dona
- Department of Health Data Science, Institute of Population Health Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK
- Yalin Zheng
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK; St Paul's Eye Unit, Liverpool University Hospitals Foundation Trust, a member of Liverpool Health Partners, Liverpool, UK
- Simon Harding
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK; St Paul's Eye Unit, Liverpool University Hospitals Foundation Trust, a member of Liverpool Health Partners, Liverpool, UK
- Gabriela Czanner
- Department of Eye and Vision Science, Institute of Life Course and Medical Sciences, University of Liverpool, a member of Liverpool Health Partners, Liverpool, UK; St Paul's Eye Unit, Liverpool University Hospitals Foundation Trust, a member of Liverpool Health Partners, Liverpool, UK; Department of Applied Mathematics, Liverpool John Moores University, Liverpool, UK
15
Spatial Linear Mixed Effects Modelling for OCT Images: SLME Model. J Imaging 2020; 6:jimaging6060044. [PMID: 34460590 PMCID: PMC8321139 DOI: 10.3390/jimaging6060044]
Abstract
Much recent research focuses on how to make disease detection more accurate as well as “slimmer”, i.e., allowing analysis with smaller datasets. Explanatory models are a hot research topic because they explain how the data are generated. We propose a spatial explanatory modelling approach that combines Optical Coherence Tomography (OCT) retinal imaging data with clinical information. Our model consists of a spatial linear mixed effects inference framework, which innovatively models the spatial topography of key information via mixed effects and spatial error structures, thus effectively modelling the shape of the thickness map. We show that our spatial linear mixed effects (SLME) model outperforms traditional analysis-of-variance approaches in the analysis of Heidelberg OCT retinal thickness data from a prospective observational study, involving 300 participants with diabetes and 50 age-matched controls. Our SLME model has a higher power for detecting the difference between disease groups, and it shows where the shape of retinal thickness profiles differs between the eyes of participants with diabetes and the eyes of healthy controls. In simulated data, the SLME model demonstrates how incorporating spatial correlations can increase the accuracy of the statistical inferences. This model is crucial in the understanding of the progression of retinal thickness changes in diabetic maculopathy to aid clinicians for early planning of effective treatment. It can be extended to disease monitoring and prognosis in other diseases and with other imaging technologies.
16
Farnell DJJ, Richmond S, Galloway J, Zhurov AI, Pirttiniemi P, Heikkinen T, Harila V, Matthews H, Claes P. Multilevel principal components analysis of three-dimensional facial growth in adolescents. Comput Methods Programs Biomed 2020; 188:105272. [PMID: 31865094 DOI: 10.1016/j.cmpb.2019.105272]
Abstract
BACKGROUND AND OBJECTIVES The study of age-related facial shape changes across different populations and sexes requires new multivariate tools to disentangle the different sources of variation present in 3D facial images. Here we use a multivariate technique called multilevel principal components analysis (mPCA) to study three-dimensional facial growth in adolescents. METHODS Facial shapes were captured for Welsh and Finnish subjects (both male and female) at multiple ages from 12 to 17 years old (i.e., repeated-measures data). 1000 "dense" 3D points were defined regularly for each shape by using a deformable template via "meshmonk" software. A three-level model was used, namely: level 1, sex/ethnicity; level 2, all "subject" variations excluding sex, ethnicity, and age; and level 3, age. The technicalities underpinning the mPCA method are presented in Appendices. RESULTS Eigenvalues via mPCA indicated that level 1 (ethnicity/sex) contained 7.9% of variation, level 2 contained 71.5%, and level 3 (age) contained 20.6%. The eigenvalue results via mPCA followed a similar pattern to those of single-level PCA. The modes of variation made sense, with effects due to ethnicity, sex, and age reflected in modes at the appropriate levels of the model. Standardised scores at level 1 via mPCA showed much stronger differentiation between sex and ethnicity groups than the results of single-level PCA. Standardised scores from both single-level PCA and mPCA at level 3 indicated that females had different average "trajectories" with respect to these scores than males, which suggests that facial shape matures in different ways for males and females. No strong evidence of differences in growth patterns between Finnish and Welsh subjects was observed. CONCLUSIONS The mPCA results agree with existing research on the general process of facial change in adolescents with respect to age. They support previous evidence that males demonstrate larger changes over a longer period of time than females, especially in the lower third of the face. These calculations are therefore an excellent initial test that multivariate multilevel methods such as mPCA can describe such age-related changes for "dense" 3D point data.
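The per-level variance percentages quoted in this abstract rest on partitioning total variation into between-group and within-group components (the law of total variance). A minimal scalar sketch of that partition is shown below; the grouped values are hypothetical, and the real mPCA operates on high-dimensional 3D shape vectors rather than scalars:

```python
def level_variances(groups):
    """Split total variance of grouped scalar data into a between-group
    component (level-1-style, e.g. sex/ethnicity) and a within-group
    component (level-2-style, subject variation)."""
    all_vals = [v for g in groups for v in g]
    n = len(all_vals)
    grand_mean = sum(all_vals) / n
    between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups) / n
    within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups) / n
    total = sum((v - grand_mean) ** 2 for v in all_vals) / n
    return between, within, total

# Hypothetical groups (e.g., two sex/ethnicity cells of one facial measurement)
groups = [[10.0, 12.0, 11.0], [14.0, 16.0, 15.0]]
b, w, t = level_variances(groups)
print(b, w, t)  # law of total variance: b + w == t
```

In mPCA the same decomposition is applied to covariance matrices, and a separate PCA at each level yields the per-level eigenvalue percentages reported above.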
Affiliation(s)
- D J J Farnell
- School of Dentistry, Cardiff University, Heath Park, Cardiff CF14 4XY, United Kingdom
- S Richmond
- School of Dentistry, Cardiff University, Heath Park, Cardiff CF14 4XY, United Kingdom
- J Galloway
- School of Dentistry, Cardiff University, Heath Park, Cardiff CF14 4XY, United Kingdom
- A I Zhurov
- School of Dentistry, Cardiff University, Heath Park, Cardiff CF14 4XY, United Kingdom
- P Pirttiniemi
- Research Unit of Oral Health Sciences, Faculty of Medicine, University of Oulu, Oulu, Finland; Medical Research Center Oulu (MRC Oulu), Oulu University Hospital, Oulu, Finland
- T Heikkinen
- Research Unit of Oral Health Sciences, Faculty of Medicine, University of Oulu, Oulu, Finland; Medical Research Center Oulu (MRC Oulu), Oulu University Hospital, Oulu, Finland
- V Harila
- Research Unit of Oral Health Sciences, Faculty of Medicine, University of Oulu, Oulu, Finland; Medical Research Center Oulu (MRC Oulu), Oulu University Hospital, Oulu, Finland
- H Matthews
- Medical Imaging Research Center, UZ Leuven, 3000 Leuven, Belgium; Department of Human Genetics, KU Leuven, 3000 Leuven, Belgium; OMFS IMPATH Research Group, Department of Imaging and Pathology, Faculty of Medicine, KU Leuven, Leuven, Belgium; Facial Sciences Research Group, Murdoch Children's Research Institute, Melbourne, Australia; Department of Paediatrics, University of Melbourne, Melbourne, Australia
- P Claes
- Medical Imaging Research Center, UZ Leuven, 3000 Leuven, Belgium; Department of Human Genetics, KU Leuven, 3000 Leuven, Belgium; Department of Electrical Engineering, ESAT/PSI, KU Leuven, 3000 Leuven, Belgium
17
Requirements and Limitations of Thermal Drones for Effective Search and Rescue in Marine and Coastal Areas. Drones 2019. [DOI: 10.3390/drones3040078]
Abstract
Search and rescue (SAR) is a vital line of defense against unnecessary loss of life. However, in a potentially hazardous environment, it is important to balance the risks associated with SAR action. Drones have the potential to help with the efficiency, success rate and safety of SAR operations as they can cover large or hard-to-access areas quickly. The addition of thermal cameras to the drones provides the potential for automated and reliable detection of people in need of rescue. We performed a pilot study with a thermal-equipped drone for SAR applications in Morecambe Bay. In a variety of realistic SAR scenarios, we found that we could detect humans who would be in need of rescue, both by the naked eye and by a simple automated method. We explore the current advantages and limitations of thermal drone systems, and outline the future path to a useful system for deployment in real-life SAR.
18
Correction: Accurate, fast, data efficient and interpretable glaucoma diagnosis with automated spatial analysis of the whole cup to disc profile. PLoS One 2019; 14:e0215056. [PMID: 30943257 PMCID: PMC6447221 DOI: 10.1371/journal.pone.0215056]
Abstract
[This corrects the article DOI: 10.1371/journal.pone.0209409.].