1. Kang DW, Park GH, Ryu WS, Schellingerhout D, Kim M, Kim YS, Park CY, Lee KJ, Han MK, Jeong HG, Kim DE. Strengthening deep-learning models for intracranial hemorrhage detection: strongly annotated computed tomography images and model ensembles. Front Neurol 2023;14:1321964. PMID: 38221995; PMCID: PMC10784380; DOI: 10.3389/fneur.2023.1321964. Open access. Received 10/15/2023; accepted 12/11/2023.
Abstract
Background and purpose: Multiple attempts at intracranial hemorrhage (ICH) detection using deep-learning techniques have been plagued by clinical failures. We aimed to compare the performance of a deep-learning algorithm for ICH detection trained on strongly versus weakly annotated datasets, and to assess whether a weighted ensemble model that integrates separate models trained on datasets with different ICH subtypes improves performance.
Methods: We used brain CT scans from the Radiological Society of North America (27,861 CT scans, 3,528 ICHs) and AI-Hub (53,045 CT scans, 7,013 ICHs) for training. DenseNet121, InceptionResNetV2, MobileNetV2, and VGG19 were trained on strongly and weakly annotated datasets and compared using independent external test datasets. We then developed a weighted ensemble model combining separate models trained on all ICH cases, subdural hemorrhage (SDH), subarachnoid hemorrhage (SAH), and small-lesion ICH cases. The final weighted ensemble model was compared with four well-known deep-learning models. After external testing, six neurologists reviewed 91 ICH cases that were difficult for both AI and humans.
Results: The InceptionResNetV2, MobileNetV2, and VGG19 models performed better when trained on strongly annotated datasets. A weighted ensemble model combining models trained on SDH, SAH, and small-lesion ICH cases had a higher AUC than a model trained on all ICH cases only, and outperformed the four deep-learning models (AUC [95% CI]: ensemble model, 0.953 [0.938-0.965]; InceptionResNetV2, 0.852 [0.828-0.873]; DenseNet121, 0.875 [0.852-0.895]; VGG19, 0.796 [0.770-0.821]; MobileNetV2, 0.650 [0.620-0.680]; p < 0.0001). In addition, the case review showed that a better understanding and management of difficult cases may facilitate the clinical use of ICH detection algorithms.
Conclusion: We propose a weighted ensemble model for ICH detection, trained on large-scale, strongly annotated CT scans, as no single model can capture all aspects of complex tasks.
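The weighted ensemble described in this abstract combines per-model hemorrhage probabilities into one score. A minimal sketch of that idea follows; the weights and probabilities are illustrative only, not values from the study:

```python
import numpy as np

def weighted_ensemble(probs, weights):
    """Combine per-model ICH probabilities with a weighted average.

    probs: list of 1-D arrays, one per sub-model (e.g. all-ICH, SDH,
    SAH, small-lesion), each giving P(hemorrhage) per scan.
    weights: one non-negative weight per sub-model; normalized here.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()               # normalize so weights sum to 1
    stacked = np.stack(probs)     # shape: (n_models, n_scans)
    return w @ stacked            # weighted average per scan

# Hypothetical probabilities from four sub-models for three scans.
p_all   = np.array([0.90, 0.20, 0.55])
p_sdh   = np.array([0.80, 0.10, 0.70])
p_sah   = np.array([0.95, 0.30, 0.40])
p_small = np.array([0.85, 0.15, 0.60])

combined = weighted_ensemble([p_all, p_sdh, p_sah, p_small],
                             weights=[0.4, 0.2, 0.2, 0.2])
print(combined.round(3))  # → [0.88 0.19 0.56]
```

How the weights were actually chosen in the study is not stated in the abstract; in practice they would be tuned on a validation set.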
Affiliation(s)
- Dong-Wan Kang
  - Department of Public Health, Seoul National University Bundang Hospital, Seongnam, Republic of Korea
  - Department of Neurology, Gyeonggi Provincial Medical Center, Icheon Hospital, Icheon, Republic of Korea
  - Department of Neurology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Gi-Hun Park
  - JLK Inc., Artificial Intelligence Research Center, Seoul, Republic of Korea
- Wi-Sun Ryu
  - JLK Inc., Artificial Intelligence Research Center, Seoul, Republic of Korea
- Dawid Schellingerhout
  - Department of Neuroradiology and Imaging Physics, The University of Texas M.D. Anderson Cancer Center, Houston, TX, United States
- Museong Kim
  - Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
  - Hospital Medicine Center, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Yong Soo Kim
  - Department of Neurology, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul, Republic of Korea
- Chan-Young Park
  - Department of Neurology, Chung-Ang University Hospital, Seoul, Republic of Korea
- Keon-Joo Lee
  - Department of Neurology, Korea University Guro Hospital, Seoul, Republic of Korea
- Moon-Ku Han
  - Department of Neurology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Han-Gil Jeong
  - Department of Neurology, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
  - Department of Neurosurgery, Seoul National University Bundang Hospital, Seoul National University College of Medicine, Seongnam, Republic of Korea
- Dong-Eog Kim
  - Department of Neurology, Dongguk University Ilsan Hospital, Goyang, Republic of Korea
  - National Priority Research Center for Stroke, Goyang, Republic of Korea
2. Zhao XF, Yang SQ, Wen XH, Huang QW, Qiu PF, Wei TR, Zhang H, Wang JC, Zhang DW, Shi X, Lu HL. A Fully Flexible Intelligent Thermal Touch Panel Based on Intrinsically Plastic Ag2S Semiconductor. Adv Mater 2022;34:e2107479. PMID: 35040221; DOI: 10.1002/adma.202107479. Received 09/19/2021; revised 12/26/2021.
Abstract
Wearable touch panels, a typical class of flexible electronic devices, recognize and feed back information on finger touch and movement. A good wearable touch panel must monitor finger-movement signals accurately and quickly while withstanding various types of deformation. High-performance thermistor materials are a key functional component, but a long-standing bottleneck is that inorganic semiconductors are typically brittle, while organic semiconductors have poor electrical properties. Herein, a high-performance flexible temperature sensor is reported using plastic Ag2S, with an ultrahigh temperature coefficient of resistance of -4.7% K⁻¹, a resolution of 0.05 K, and a rapid response/recovery time of 0.11/0.11 s. Moreover, the temperature sensor shows excellent durability, with no performance loss under force-stimulus tests. In addition, a fully flexible intelligent touch panel, composed of a 16 × 10 Ag2S-film-based temperature-sensor array, a flexible printed circuit board, and a deep-learning algorithm, is designed to perceive finger-touch signals in real time and intelligently feed back Chinese characters and letters in an app. These results strongly suggest that high-performance flexible inorganic semiconductors can be widely used in flexible electronics.
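The reported temperature coefficient of resistance (TCR, -4.7% K⁻¹) relates a resistance change to a temperature change. A minimal sketch of how a readout could invert that relation, using a linearized model; the reference resistance and temperature are illustrative values, not from the paper:

```python
TCR = -0.047   # per kelvin, from the reported -4.7% K^-1
R0 = 1000.0    # ohms at reference temperature T0 (illustrative)
T0 = 298.15    # reference temperature in kelvin (illustrative)

def resistance(temp_k):
    """Linearized thermistor model: R(T) = R0 * (1 + TCR * (T - T0))."""
    return R0 * (1.0 + TCR * (temp_k - T0))

def temperature(r_ohm):
    """Invert the linear model to recover temperature from resistance."""
    return T0 + (r_ohm / R0 - 1.0) / TCR

# The reported 0.05 K resolution corresponds, at this illustrative R0,
# to resolving a resistance change of about 2.35 ohms.
delta_r = abs(resistance(T0 + 0.05) - resistance(T0))
print(round(delta_r, 3))  # → 2.35
```

The large (negative) TCR is what makes the material attractive: the bigger |TCR| is, the larger the resistance swing per kelvin and the easier small temperature changes are to resolve.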
Affiliation(s)
- Xue-Feng Zhao
  - State Key Laboratory of ASIC and System, Shanghai Institute of Intelligent Electronics & Systems, School of Microelectronics, Fudan University, Shanghai, 200433, China
- Shi-Qi Yang
  - State Key Laboratory of High-Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai, 200050, China
- Xiao-Hong Wen
  - State Key Laboratory of ASIC and System, Shanghai Institute of Intelligent Electronics & Systems, School of Microelectronics, Fudan University, Shanghai, 200433, China
- Qi-Wei Huang
  - School of Information Science and Technology, Fudan University, Shanghai, 200433, China
- Peng-Fei Qiu
  - State Key Laboratory of High-Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai, 200050, China
- Tian-Ran Wei
  - State Key Laboratory of Metal Matrix Composites, School of Materials Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Hao Zhang
  - Key Laboratory of Micro and Nano Photonic Structures, Department of Optical Science and Engineering, Fudan University, Shanghai, 200433, China
- Jia-Cheng Wang
  - State Key Laboratory of High-Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai, 200050, China
- David Wei Zhang
  - State Key Laboratory of ASIC and System, Shanghai Institute of Intelligent Electronics & Systems, School of Microelectronics, Fudan University, Shanghai, 200433, China
- Xun Shi
  - State Key Laboratory of High-Performance Ceramics and Superfine Microstructure, Shanghai Institute of Ceramics, Chinese Academy of Sciences, Shanghai, 200050, China
- Hong-Liang Lu
  - State Key Laboratory of ASIC and System, Shanghai Institute of Intelligent Electronics & Systems, School of Microelectronics, Fudan University, Shanghai, 200433, China
3. Keel S, Li Z, Scheetz J, Robman L, Phung J, Makeyeva G, Aung K, Liu C, Yan X, Meng W, Guymer R, Chang R, He M. Development and validation of a deep-learning algorithm for the detection of neovascular age-related macular degeneration from colour fundus photographs. Clin Exp Ophthalmol 2019;47:1009-1018. PMID: 31215760; DOI: 10.1111/ceo.13575. Received 01/22/2019; revised 05/24/2019; accepted 06/13/2019.
Abstract
Importance: Detection of early-onset neovascular age-related macular degeneration (AMD) is critical to protecting vision.
Background: To describe the development and validation of a deep-learning algorithm (DLA) for the detection of neovascular AMD.
Design: Development and validation of a DLA using retrospective datasets.
Participants: We developed and trained the DLA using 56,113 retinal images, with an additional 86,162 images from an independent dataset used to externally validate the DLA. All images were non-stereoscopic and retrospectively collected.
Methods: The internal validation dataset was derived from real-world clinical settings in China. Gold-standard grading was assigned when three individual ophthalmologists reached consensus. The DLA classified 31,247 images as gradable and 24,866 as ungradable (poor quality or poor field definition); the ungradable images were used to create a classification model for image quality. Efficiency and diagnostic accuracy were tested using the 86,162 images derived from the Melbourne Collaborative Cohort Study. Neovascular AMD and/or an ungradable outcome in one or both eyes was considered referable.
Main outcome measures: Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.
Results: In the internal validation dataset, the AUC, sensitivity, and specificity of the DLA for neovascular AMD were 0.995, 96.7%, and 96.4%, respectively. Testing against the independent external dataset achieved an AUC, sensitivity, and specificity of 0.967, 100%, and 93.4%, respectively. More than 60% of false-positive cases displayed other macular pathologies. Among the false-negative cases (internal validation dataset only), over half (57.2%) proved to be undetected detachment of the neurosensory retina or retinal pigment epithelium (RPE) layer.
Conclusions and relevance: This DLA shows robust performance for the detection of neovascular AMD among retinal images from a multi-ethnic sample and under different imaging protocols. Further research is warranted to investigate where this technology could best be utilized within screening and research settings.
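The sensitivity and specificity figures reported in abstracts like this one come directly from the confusion-matrix counts. A minimal sketch with made-up counts (not data from the study):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only: 300 diseased eyes, 3,000 healthy eyes.
sens, spec = sensitivity_specificity(tp=290, fn=10, tn=2820, fp=180)
print(f"sensitivity={sens:.1%} specificity={spec:.1%}")
# → sensitivity=96.7% specificity=94.0%
```

The AUC, by contrast, sweeps the decision threshold over all values and summarizes the resulting sensitivity/specificity trade-off in a single number, which is why the abstract reports it alongside the point estimates.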
Affiliation(s)
- Stuart Keel
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Zhixi Li
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, China
- Jane Scheetz
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Liubov Robman
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
  - Monash University Melbourne, Melbourne, Victoria, Australia
- James Phung
  - Monash University Melbourne, Melbourne, Victoria, Australia
- Galina Makeyeva
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- KhinZaw Aung
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Chi Liu
  - Healgoo Interactive Medical Technology Co. Ltd., Guangzhou, China
- Xixi Yan
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Wei Meng
  - Healgoo Interactive Medical Technology Co. Ltd., Guangzhou, China
- Robyn Guymer
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
- Robert Chang
  - Department of Ophthalmology, Byers Eye Institute at Stanford University, Palo Alto, California
- Mingguang He
  - Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, University of Melbourne, Melbourne, Victoria, Australia
  - State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-Sen University, Guangzhou, China