1
Phipps B, Hadoux X, Sheng B, Campbell JP, Liu TYA, Keane PA, Cheung CY, Tham YC, Wong TY, van Wijngaarden P. AI image generation technology in ophthalmology: Use, misuse and future applications. Prog Retin Eye Res 2025; 106:101353. PMID: 40107410. DOI: 10.1016/j.preteyeres.2025.101353.
Abstract
BACKGROUND AI-powered image generation technology holds the potential to reshape medical practice, yet it remains unfamiliar to medical researchers and clinicians alike. Given that the adoption of this technology relies on clinician understanding and acceptance, we sought to demystify its use in ophthalmology. To this end, we present a literature review on image generation technology in ophthalmology, examining both its theoretical applications and its future role in clinical practice. METHODS First, we consider the key model designs used for image synthesis, including generative adversarial networks, autoencoders, and diffusion models. We then survey the literature on image generation technology in ophthalmology published before September 2024, reporting both the type of model used and its clinical application. Finally, we discuss the limitations of this technology, the risks of its misuse, and future directions for research in this field. RESULTS Applications of this technology include improving AI diagnostic models, inter-modality image transformation, more accurate treatment and disease prognostication, image denoising, and individualised education. Key barriers to its adoption include bias in generative models, risks to patient data security, computational and logistical barriers to development, challenges with model explainability, inconsistent use of validation metrics between studies, and misuse of synthetic images. Looking forward, researchers are placing further emphasis on clinically grounded metrics, the development of image generation foundation models, and methods to ensure data provenance. CONCLUSION Compared with other medical applications of AI, image generation is still in its infancy. Yet it holds the potential to revolutionise ophthalmology across research, education and clinical practice. This review aims to guide ophthalmic researchers wanting to leverage this technology, while also offering clinicians insight into how it may change ophthalmic practice in the future.
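For readers new to the model families this review surveys, the following is a minimal, illustrative PyTorch sketch (not taken from the review itself) of the adversarial objective that all GAN-based methods in this listing build on: a discriminator learns to separate real from synthetic images while a generator learns to fool it.

```python
# Illustrative GAN objective only; architectures and training details vary
# across the papers listed here.
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()

def discriminator_loss(d_real_logits, d_fake_logits):
    # D is trained to score real images as 1 and synthetic images as 0.
    real_loss = bce(d_real_logits, torch.ones_like(d_real_logits))
    fake_loss = bce(d_fake_logits, torch.zeros_like(d_fake_logits))
    return real_loss + fake_loss

def generator_loss(d_fake_logits):
    # G is trained to make D score its synthetic images as real (label 1).
    return bce(d_fake_logits, torch.ones_like(d_fake_logits))
```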
Affiliation(s)
- Benjamin Phipps
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, 3002, VIC, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Parkville, 3010, VIC, Australia
- Xavier Hadoux
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, 3002, VIC, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Parkville, 3010, VIC, Australia
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- J Peter Campbell
- Department of Ophthalmology, Casey Eye Institute, Oregon Health and Science University, Portland, USA
- T Y Alvin Liu
- Retina Division, Wilmer Eye Institute, Johns Hopkins University, Baltimore, MD, 21287, USA
- Pearse A Keane
- NIHR Biomedical Research Centre at Moorfields Eye Hospital NHS Foundation Trust, London, UK; Institute of Ophthalmology, University College London, London, UK
- Carol Y Cheung
- Department of Ophthalmology and Visual Sciences, The Chinese University of Hong Kong, Hong Kong, China
- Tham Yih Chung
- Yong Loo Lin School of Medicine, National University of Singapore, Singapore; Centre for Innovation and Precision Eye Health, Department of Ophthalmology, Yong Loo Lin School of Medicine, National University of Singapore and National University Health System, Singapore; Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Eye Academic Clinical Program (Eye ACP), Duke-NUS Medical School, Singapore
- Tien Y Wong
- Singapore Eye Research Institute, Singapore National Eye Centre, Singapore; Tsinghua Medicine, Tsinghua University, Beijing, China; Beijing Visual Science and Translational Eye Research Institute, Beijing Tsinghua Changgung Hospital, Beijing, China
- Peter van Wijngaarden
- Centre for Eye Research Australia, Royal Victorian Eye and Ear Hospital, East Melbourne, 3002, VIC, Australia; Ophthalmology, Department of Surgery, University of Melbourne, Parkville, 3010, VIC, Australia; Florey Institute of Neuroscience & Mental Health, Parkville, VIC, Australia
2
Veeramani N, Jayaraman P. A promising AI based super resolution image reconstruction technique for early diagnosis of skin cancer. Sci Rep 2025; 15:5084. PMID: 39934265. PMCID: PMC11814132. DOI: 10.1038/s41598-025-89693-8.
Abstract
Skin cancer can occur in people of any age group who are exposed to ultraviolet (UV) radiation. Among its types, melanoma, a malignant cancer arising from melanocytes, is notably severe and can be fatal, making early detection essential. Typically, skin lesions are classified as either benign or malignant. However, some lesions do not show clear signs of cancer, making them suspicious. If left unnoticed, these suspicious lesions can develop into severe melanoma requiring invasive treatment later on. Such intermediate or suspicious skin lesions are completely curable if diagnosed at an early stage. To tackle this, researchers have sought to improve the image quality of lesions captured by dermoscopy through image reconstruction techniques. Analyzing reconstructed super-resolution (SR) images allows early detection, fine feature extraction, and treatment planning. Despite advancements in machine learning, deep learning, and complex neural networks for enhancing skin lesion image quality, a key challenge remains unresolved: how can intricate textures be preserved during the substantial upscaling required in medical image reconstruction? To address this, an artificial intelligence (AI) based reconstruction algorithm is proposed to recover fine features of intermediate skin lesions from dermoscopic images for early, non-invasive diagnosis. Specifically, a novel melanoma information improvised generative adversarial network (MELIIGAN) framework is proposed for the expedited diagnosis of intermediate skin lesions. The framework also includes a stacked residual block that handles larger scaling factors and the reconstruction of fine-grained details, and a hybrid loss function that combines a total variation (TV) regularization term with the Charbonnier loss function, a robust substitute for the mean squared error loss. On the benchmark dataset, the method achieves a structural similarity index (SSIM) of 0.946 and a peak signal-to-noise ratio (PSNR) of 40.12 dB, preserving the most texture information compared with other state-of-the-art methods.
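As an illustration of the loss components named in this abstract, here is a hedged PyTorch sketch of a Charbonnier term combined with total variation (TV) regularization; the epsilon and weighting values are placeholders, not the MELIIGAN settings.

```python
import torch

def charbonnier_loss(pred, target, eps=1e-3):
    # mean(sqrt((x - y)^2 + eps^2)): smooth near zero, robust to outliers,
    # a drop-in substitute for the mean squared error loss.
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()

def tv_loss(img):
    # Penalize differences between neighboring pixels to suppress noise.
    dh = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    dw = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    return dh + dw

def hybrid_loss(pred, target, tv_weight=1e-4):  # tv_weight is illustrative
    return charbonnier_loss(pred, target) + tv_weight * tv_loss(pred)
```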
Affiliation(s)
- Nirmala Veeramani
- School of Computing, SASTRA University, Thirumalaisamudram, Thanjavur, 613401, Tamil Nadu, India
- Premaladha Jayaraman
- School of Computing, SASTRA University, Thirumalaisamudram, Thanjavur, 613401, Tamil Nadu, India
3
Xie X, Jiachu D, Liu C, Xie M, Guo J, Cai K, Li X, Mi W, Ye H, Luo L, Yang J, Zhang M, Zheng C. Generating Synthesized Fluorescein Angiography Images From Color Fundus Images by Generative Adversarial Networks for Macular Edema Assessment. Transl Vis Sci Technol 2024; 13:26. PMID: 39312216. PMCID: PMC11423947. DOI: 10.1167/tvst.13.9.26.
Abstract
Purpose To assess the feasibility of generating synthetic fluorescein angiography (FA) images from color fundus (CF) images using a pix2pix generative adversarial network (pix2pix GAN) for clinical applications. The research questions addressed whether the images appear realistic to retinal specialists and whether they are useful for assessing macular edema (ME) in eyes with retinal vein occlusion (RVO). Methods We used a registration-guided pix2pix GAN trained on the CF-FA dataset from Kham Eye Centre, Kandze Prefecture People's Hospital. A visual Turing test was used to confirm the realism of the synthetic images and the absence of novel artifacts. We then assessed the synthetic FA images for ME assessment. Finally, we evaluated the synthetic images quantitatively using the Fréchet inception distance (FID) and the structural similarity index measure (SSIM). Results The raw development dataset comprised 881 image pairs from 349 subjects. Our approach generates realistic FA images in which small vessels are clearly visible and sharp within one optic disc diameter of the macula. Two retinal specialists agreed that more than 85% of the synthetic FA images had good or excellent image quality. For ME detection, accuracy was similar for real and synthetic images. The FID showed a 38.9% improvement over the previous state-of-the-art (SOTA), and the SSIM reached 0.78, compared with 0.67 for the previous SOTA. Conclusions We developed a pix2pix GAN model that translates label-free CF images into reliable synthetic FA images, suggesting the potential for noninvasive evaluation of ME in RVO eyes. Translational Relevance Pix2pix GAN techniques have the potential to assist in the noninvasive clinical assessment of ME in RVO eyes.
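For reference, the Fréchet inception distance reported above is conventionally computed from Inception feature statistics; a minimal NumPy/SciPy sketch (assuming the feature vectors have already been extracted) is:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(feats_real, feats_fake):
    # feats_*: (n_images, feature_dim) arrays of Inception activations.
    mu1, mu2 = feats_real.mean(0), feats_fake.mean(0)
    s1 = np.cov(feats_real, rowvar=False)
    s2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(s1 @ s2)
    if np.iscomplexobj(covmean):   # numerical noise can produce tiny
        covmean = covmean.real     # imaginary parts; discard them
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(s1 + s2 - 2 * covmean))
```

Lower values indicate that the synthetic and real feature distributions are closer.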
Affiliation(s)
- Xiaoling Xie
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Danba Jiachu
- Kham Eye Centre, Kandze Prefecture People's Hospital, Kangding, China
- Chang Liu
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Meng Xie
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Jinming Guo
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Kebo Cai
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xiangbo Li
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Wei Mi
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Hehua Ye
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Li Luo
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Jianlong Yang
- School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Mingzhi Zhang
- Joint Shantou International Eye Center of Shantou University and the Chinese University of Hong Kong, Shantou University Medical College, Shantou, Guangdong, China
- Ce Zheng
- Department of Ophthalmology, Xinhua Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
4
Chen R, Zhang W, Song F, Yu H, Cao D, Zheng Y, He M, Shi D. Translating color fundus photography to indocyanine green angiography using deep-learning for age-related macular degeneration screening. NPJ Digit Med 2024; 7:34. PMID: 38347098. PMCID: PMC10861476. DOI: 10.1038/s41746-024-01018-7.
Abstract
Age-related macular degeneration (AMD) is the leading cause of central vision impairment among the elderly, and effective, accurate AMD screening tools are urgently needed. Indocyanine green angiography (ICGA) is a well-established technique for detecting chorioretinal diseases, but its invasive nature and potential risks impede its routine clinical application. Here, we developed a deep-learning model capable of generating realistic ICGA images from color fundus photography (CF) using generative adversarial networks (GANs) and evaluated its performance in AMD classification. The model was developed with 99,002 CF-ICGA pairs from a tertiary center. The quality of the generated ICGA images underwent objective evaluation using the mean absolute error (MAE), peak signal-to-noise ratio (PSNR), structural similarity measures (SSIM), and related metrics, as well as subjective evaluation by two experienced ophthalmologists. The model generated realistic early, mid and late-phase ICGA images, with SSIM ranging from 0.57 to 0.65. Subjective quality scores ranged from 1.46 to 2.74 on a five-point scale (where 1 denotes quality equal to real ICGA images; kappa 0.79-0.84). Moreover, we assessed the application of translated ICGA images in AMD screening on an external dataset (n = 13,887) by calculating the area under the ROC curve (AUC) for AMD classification. Combining generated ICGA with real CF images improved the accuracy of AMD classification, with the AUC increasing from 0.93 to 0.97 (P < 0.001). These results suggest that CF-to-ICGA translation can serve as a cross-modal data augmentation method to address the data hunger often encountered in deep-learning research, and as a promising add-on for population-based AMD screening. Real-world validation is warranted before clinical use.
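A minimal sketch of the kind of AUC comparison described above, using scikit-learn; the labels and classifier scores below are made-up placeholders, not study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 1, 0, 1])                    # ground-truth AMD labels
scores_cf = np.array([0.2, 0.7, 0.6, 0.4, 0.8])       # classifier on CF alone
scores_cf_icga = np.array([0.1, 0.9, 0.8, 0.3, 0.9])  # CF + generated ICGA

print("CF only        AUC:", roc_auc_score(y_true, scores_cf))
print("CF + synthetic AUC:", roc_auc_score(y_true, scores_cf_icga))
```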
Affiliation(s)
- Ruoyu Chen
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Weiyi Zhang
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Fan Song
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Honghua Yu
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Dan Cao
- Department of Ophthalmology, Guangdong Academy of Medical Sciences, Guangdong Provincial People's Hospital, Southern Medical University, Guangzhou, China
- Yingfeng Zheng
- State Key Laboratory of Ophthalmology, Zhongshan Ophthalmic Center, Sun Yat-sen University, Guangdong Provincial Key Laboratory of Ophthalmology and Visual Science, Guangdong Provincial Clinical Research Center for Ocular Diseases, Guangzhou, China
- Mingguang He
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Centre for Eye and Vision Research (CEVR), 17W Hong Kong Science Park, Hong Kong SAR, China
- Danli Shi
- Experimental Ophthalmology, School of Optometry, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Research Centre for SHARP Vision, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
5
Heger KA, Waldstein SM. Artificial intelligence in retinal imaging: current status and future prospects. Expert Rev Med Devices 2024; 21:73-89. PMID: 38088362. DOI: 10.1080/17434440.2023.2294364.
Abstract
INTRODUCTION The steadily growing and aging world population, together with the continuously increasing prevalence of vision-threatening retinal diseases, is placing an ever greater burden on the global healthcare system. The main challenges in retinology are identifying the comparatively few patients requiring therapy within the large screened population, ensuring comprehensive screening for retinal disease, and planning individualized therapy. In order to sustain high-quality ophthalmic care in the future, the incorporation of artificial intelligence (AI) technologies into clinical practice represents a potential solution. AREAS COVERED This review sheds light on already realized and promising future applications of AI techniques in retinal imaging. The main attention is directed at applications in diabetic retinopathy and age-related macular degeneration. The principles of use in disease screening, grading, therapy planning and prediction of future developments are explained based on the currently available literature. EXPERT OPINION The recent accomplishments of AI in retinal imaging indicate that its implementation into daily practice is likely to fundamentally change the ophthalmic healthcare system and bring us one step closer to the goal of individualized treatment. However, it must be emphasized that the aim is to optimally support clinicians by gradually incorporating AI approaches, rather than to replace ophthalmologists.
Affiliation(s)
- Katharina A Heger
- Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
- Sebastian M Waldstein
- Department of Ophthalmology, Landesklinikum Mistelbach-Gaenserndorf, Mistelbach, Austria
6
Hua K, Fang X, Tang Z, Cheng Y, Yu Z. DCAM-NET: A novel domain generalization optic cup and optic disc segmentation pipeline with multi-region and multi-scale convolution attention mechanism. Comput Biol Med 2023; 163:107076. PMID: 37379616. DOI: 10.1016/j.compbiomed.2023.107076.
Abstract
Fundus images are an essential basis for diagnosing ocular diseases, and convolutional neural networks have shown promising results in accurate fundus image segmentation. However, differences between the training data (source domain) and the testing data (target domain) can significantly degrade final segmentation performance. This paper proposes a novel framework named DCAM-NET for domain-generalized fundus segmentation, which substantially improves the generalization of the segmentation model to target-domain data and enhances the extraction of detailed information from source-domain data, effectively overcoming the poor performance of cross-domain segmentation. To enhance adaptability to target-domain data, this paper proposes a multi-scale attention (MSA) module that operates at the feature-extraction level: features with different attributes enter the attention module at the corresponding scale, which further captures critical features across channel, position, and spatial regions. The MSA module also incorporates characteristics of the self-attention mechanism, capturing dense contextual information; this aggregation of multi-feature information effectively enhances the generalization of the model on unknown domains. In addition, this paper proposes the multi-region weight fusion convolution (MWFC) module, which is essential for the segmentation model to extract feature information from source-domain data accurately. Fusing multiple region weights with convolutional kernel weights across the image enhances the model's adaptability to information at different image locations, and the fusion of weights deepens the model's capacity and depth, enhancing its ability to learn from multiple regions of the source domain. Our experiments on fundus cup/disc segmentation show that introducing the MSA and MWFC modules effectively improves segmentation on unknown domains, and the proposed method significantly outperforms current domain-generalization methods for optic cup/disc segmentation.
Affiliation(s)
- Kaiwen Hua
- School of Computer Science and Engineering, Anhui University of Science and Technology, 232001, Huainan, Anhui, China
- Xianjin Fang
- School of Computer Science and Engineering, Anhui University of Science and Technology, 232001, Huainan, Anhui, China
- Zhiri Tang
- Academy for Engineering and Technology, Fudan University, 200433, Shanghai, China
- Ying Cheng
- School of Artificial Intelligence Academy, Anhui University of Science and Technology, 232001, Huainan, Anhui, China
- Zekuan Yu
- Academy for Engineering and Technology, Fudan University, 200433, Shanghai, China
7
Wang Z, Lim G, Ng WY, Tan TE, Lim J, Lim SH, Foo V, Lim J, Sinisterra LG, Zheng F, Liu N, Tan GSW, Cheng CY, Cheung GCM, Wong TY, Ting DSW. Synthetic artificial intelligence using generative adversarial network for retinal imaging in detection of age-related macular degeneration. Front Med (Lausanne) 2023; 10:1184892. PMID: 37425325. PMCID: PMC10324667. DOI: 10.3389/fmed.2023.1184892.
Abstract
Introduction Age-related macular degeneration (AMD) is one of the leading causes of vision impairment globally, and early detection is crucial to prevent vision loss. However, screening for AMD is resource-dependent and demands experienced healthcare providers. Recently, deep learning (DL) systems have shown potential for effective detection of various eye diseases from retinal fundus images, but the development of such robust systems requires large datasets, which may be limited by disease prevalence and patient privacy. In the case of AMD, images of the advanced phenotype are often too scarce for DL analysis, a problem that may be tackled by generating synthetic images using generative adversarial networks (GANs). This study aims to develop GAN-synthesized fundus photos with AMD lesions and to assess the realness of these images with an objective scale. Methods To build our GAN models, a total of 125,012 fundus photos were used from a real-world non-AMD phenotypical dataset. StyleGAN2 and a human-in-the-loop (HITL) method were then applied to synthesize fundus images with AMD features. To objectively assess the quality of the synthesized images, we proposed a novel realness scale based on the frequency of broken vessels observed in the fundus photos. Four residents conducted two rounds of grading on 300 images to distinguish real from synthetic images, based on their subjective impression and the objective scale, respectively. Results and discussion The introduction of HITL training increased the percentage of synthetic images with AMD lesions, despite the limited number of AMD images in the initial training dataset. Qualitatively, the synthesized images proved robust: the residents had limited ability to distinguish real from synthetic images, with an overall accuracy of 0.66 (95% CI: 0.61-0.66) and a Cohen's kappa of 0.320. For the non-referable AMD classes (no or early AMD), the accuracy was only 0.51. With the objective scale, the overall accuracy improved to 0.72. In conclusion, GAN models built with HITL training can produce realistic-looking fundus images that fool human experts, while our objective realness scale based on broken vessels can help identify synthetic fundus photos.
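A minimal sketch of the agreement statistics reported above (overall accuracy and Cohen's kappa for real-versus-synthetic gradings), using scikit-learn; the gradings shown are invented.

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

truth  = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = real image, 0 = synthetic
grades = [1, 0, 0, 1, 1, 0, 1, 0]   # a grader's calls on the same images

print("accuracy:", accuracy_score(truth, grades))
print("kappa:   ", cohen_kappa_score(truth, grades))
```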
Affiliation(s)
- Zhaoran Wang
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Gilbert Lim
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Wei Yan Ng
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Tien-En Tan
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Jane Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Sing Hui Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Valencia Foo
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Joshua Lim
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Feihui Zheng
- Singapore Eye Research Institute, Singapore, Singapore
- Nan Liu
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Gavin Siew Wei Tan
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Ching-Yu Cheng
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Gemmy Chui Ming Cheung
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
- Tien Yin Wong
- Singapore National Eye Centre, Singapore, Singapore
- School of Medicine, Tsinghua University, Beijing, China
- Daniel Shu Wei Ting
- Duke-NUS Medical School, National University of Singapore, Singapore, Singapore
- Singapore Eye Research Institute, Singapore, Singapore
- Singapore National Eye Centre, Singapore, Singapore
8
Ataş İ. Comparison of deep convolution and least squares GANs for diabetic retinopathy image synthesis. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08482-4.
9
Goceri E. Medical image data augmentation: techniques, comparisons and interpretations. Artif Intell Rev 2023; 56:1-45. PMID: 37362888. PMCID: PMC10027281. DOI: 10.1007/s10462-023-10453-z.
Abstract
Designing deep learning based methods for medical images has long been an attractive area of research to assist clinicians in rapid examination and accurate diagnosis. Such methods need large datasets covering all relevant variations during their training stages. However, medical images are always scarce for several reasons: too few patients with some diseases, patients unwilling to allow their images to be used, lack of medical equipment, or inability to obtain images that meet the desired criteria. This scarcity leads to biased datasets, overfitting, and inaccurate results. Data augmentation is a common solution, and various augmentation techniques have been applied to different image types in the literature. However, it is unclear which augmentation technique is most effective for which image type, since published studies address different diseases, use different network architectures, and train and test these architectures on datasets of different sizes. Therefore, this work examines the augmentation techniques used to improve deep learning based diagnosis of diseases in different organs (brain, lung, breast, and eye) from different imaging modalities (MR, CT, mammography, and fundoscopy). The most commonly used augmentation methods have also been implemented, and their effectiveness in classification with a deep network is discussed based on quantitative performance evaluations. The experiments indicate that augmentation techniques should be chosen carefully according to image type.
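As a concrete illustration of the kind of classical augmentation pipeline this review compares, here is a short torchvision sketch; the specific transforms and parameter values are illustrative choices, not the paper's protocol.

```python
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.RandomResizedCrop(size=224, scale=(0.9, 1.0)),
    transforms.ToTensor(),
])
# Applied on the fly during training: augmented = augment(pil_image)
```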
Affiliation(s)
- Evgin Goceri
- Department of Biomedical Engineering, Engineering Faculty, Akdeniz University, Antalya, Turkey
10
Li P, He Y, Wang P, Wang J, Shi G, Chen Y. Synthesizing multi-frame high-resolution fluorescein angiography images from retinal fundus images using generative adversarial networks. Biomed Eng Online 2023; 22:16. PMID: 36810105. PMCID: PMC9945680. DOI: 10.1186/s12938-023-01070-6.
Abstract
BACKGROUND Fundus fluorescein angiography (FA) can be used to diagnose fundus diseases by observing dynamic fluorescein changes that reflect vascular circulation in the fundus. As FA may pose a risk to patients, generative adversarial networks have been used to convert retinal fundus images into fluorescein angiography images. However, available methods focus on generating single-phase FA images, and the resolution of the generated images is too low for accurate diagnosis of fundus diseases. METHODS We propose a network that generates multi-frame, high-resolution FA images. It consists of a low-resolution GAN (LrGAN) and a high-resolution GAN (HrGAN): LrGAN generates low-resolution, full-size FA images with global intensity information, while HrGAN takes the images generated by LrGAN as input and generates multi-frame, high-resolution FA patches. Finally, the patches are merged into full-size FA images. RESULTS Our approach combines supervised and unsupervised learning and achieves better quantitative and qualitative results than either method alone. Structural similarity (SSIM), normalized cross-correlation (NCC) and peak signal-to-noise ratio (PSNR) were used as quantitative metrics. The experimental results show that our method achieves a structural similarity of 0.7126, a normalized cross-correlation of 0.6799, and a peak signal-to-noise ratio of 15.77. Ablation experiments also demonstrate that the shared encoder and residual channel attention module in HrGAN help in generating high-resolution images. CONCLUSIONS Overall, our method performs better at generating retinal vessel details and leakage structures across multiple critical phases, showing promising clinical diagnostic value.
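For reference, the normalized cross-correlation metric reported above can be sketched in a few lines of NumPy (zero-mean correlation between a generated and a real image):

```python
import numpy as np

def ncc(a, b):
    # a, b: same-sized image arrays; returns ~1.0 for identical images.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())
```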
Affiliation(s)
- Ping Li
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Yi He
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Pinghe Wang
- School of Optoelectronic Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Jing Wang
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Guohua Shi
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China
- Yiwei Chen
- Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
11
Qiu D, Cheng Y, Wang X. Improved generative adversarial network for retinal image super-resolution. Comput Methods Programs Biomed 2022; 225:106995. PMID: 35970055. DOI: 10.1016/j.cmpb.2022.106995.
Abstract
BACKGROUND AND OBJECTIVE The retina is the only organ in the body that can be observed non-invasively using visible light. By analyzing retinal images, we can achieve early screening, diagnosis, and prevention of many ophthalmic and systemic diseases, helping patients avoid the risk of blindness. Owing to their powerful feature extraction capabilities, many deep learning super-resolution networks have been applied to retinal image analysis with excellent results. METHODS Given the lack of high-frequency information and poor visual perception in current reconstruction results at large scale factors, we present an improved generative adversarial network (IGAN) for retinal image super-resolution. First, we construct a novel residual attention block to improve reconstructions that lack high-frequency information and texture detail at large scale factors. Second, we remove the batch normalization layer, which degrades image generation quality, from the residual network. Finally, we replace the mean squared error loss with the more robust Charbonnier loss and use a TV regularization term to smooth the training results. RESULTS Experimental results show that our proposed method significantly improves objective evaluation metrics such as peak signal-to-noise ratio and structural similarity. The reconstructed images have rich texture details and better visual quality than state-of-the-art image super-resolution methods. CONCLUSION Our proposed method better learns the mapping between low-resolution and high-resolution retinal images. It can be applied effectively and stably to retinal image analysis, providing an effective basis for early clinical treatment.
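A hedged PyTorch sketch of a residual block with the batch normalization layers removed, as the abstract describes; the channel count and activation are illustrative rather than the paper's exact configuration.

```python
import torch.nn as nn

class ResidualBlockNoBN(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # No BN: avoids the range/contrast artifacts BN can introduce
        # in super-resolution generators.
        return x + self.conv2(self.act(self.conv1(x)))
```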
Affiliation(s)
- Defu Qiu
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Yuhu Cheng
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Xuesong Wang
- Engineering Research Center of Intelligent Control for Underground Space, Ministry of Education, China University of Mining and Technology, Xuzhou 221116, China; School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221116, China
12
Jeon M, Park H, Kim HJ, Morley M, Cho H. k-SALSA: k-anonymous synthetic averaging of retinal images via local style alignment. In: Computer Vision - ECCV 2022. Lecture Notes in Computer Science 13681; 2022:661-678. PMID: 37525827. PMCID: PMC10388376. DOI: 10.1007/978-3-031-19803-8_39.
Abstract
The application of modern machine learning to retinal image analysis offers valuable insights into a broad range of human health conditions beyond ophthalmic diseases. Additionally, data sharing is key to fully realizing the potential of machine learning models by providing a rich and diverse collection of training data. However, the personally identifying nature of retinal images, which encompass the unique vascular structure of each individual, often prevents this data from being shared openly. While prior works have explored image de-identification strategies based on synthetic averaging of images in other domains (e.g., facial images), existing techniques face difficulty in preserving both privacy and clinical utility in retinal images, as we demonstrate in our work. We therefore introduce k-SALSA, a generative adversarial network (GAN)-based framework for synthesizing retinal fundus images that summarize a given private dataset while satisfying the privacy notion of k-anonymity. k-SALSA brings together state-of-the-art techniques for training and inverting GANs to achieve practical performance on retinal images. Furthermore, k-SALSA leverages a new technique, called local style alignment, to generate a synthetic average that maximizes the retention of fine-grain visual patterns in the source images, thus improving the clinical utility of the generated images. On two benchmark datasets of diabetic retinopathy (EyePACS and APTOS), we demonstrate our improvement upon existing methods with respect to image fidelity, classification performance, and mitigation of membership inference attacks. Our work represents a step toward broader sharing of retinal images for scientific collaboration. Code is available at https://github.com/hcholab/k-salsa.
Affiliation(s)
- Minkyu Jeon
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
- Korea University, Seoul, Republic of Korea
- Michael Morley
- Harvard Medical School, Boston, MA, USA
- Ophthalmic Consultants of Boston, Boston, MA, USA
- Hyunghoon Cho
- Broad Institute of MIT and Harvard, Cambridge, MA, USA
13
Zhang Z, Cheng N, Liu Y, Song J, Liu X, Zhang S, Zhang G. Prediction of corneal astigmatism based on corneal tomography after femtosecond laser arcuate keratotomy using a pix2pix conditional generative adversarial network. Front Public Health 2022; 10:1012929. PMID: 36187623. PMCID: PMC9523441. DOI: 10.3389/fpubh.2022.1012929.
Abstract
Purpose This study aimed to develop a deep learning model that generates postoperative corneal axial curvature maps after femtosecond laser arcuate keratotomy (FLAK) from corneal tomography, using a pix2pix conditional generative adversarial network (pix2pix cGAN), for surgical planning. Methods A total of 451 eyes of 318 nonconsecutive patients underwent FLAK for corneal astigmatism correction during cataract surgery. Paired or single anterior penetrating FLAKs were performed at an 8.0-mm optical zone to a depth of 90% using a femtosecond laser (LenSx laser, Alcon Laboratories, Inc.). Corneal tomography was acquired with the Oculus Pentacam HR (Oculus Optikgeräte GmbH, Wetzlar, Germany) before and 3 months after surgery. The raw data used for analysis consisted of the anterior corneal curvature within ±3.5 mm of the corneal apex in 0.1-mm steps, from which the pseudo-color corneal curvature maps were synthesized. The deep learning model was a pix2pix conditional generative adversarial network. The prediction accuracy of synthetic postoperative corneal astigmatism in zones of different diameters centered on the corneal apex was assessed using vector analysis. The synthetic postoperative corneal axial curvature maps were compared with the real postoperative maps using the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR). Results A total of 386 pairs of preoperative and postoperative corneal tomography data were included in the training set, and 65 preoperative datasets were retrospectively included in the test set. The correlation coefficient between synthetic and real postoperative astigmatism (difference vector) in the 3-mm zone was 0.89, and that for surgically induced astigmatism (SIA) was 0.93. The mean absolute errors of SIA between real and synthetic postoperative corneal axial curvature maps in the 1-, 3-, and 5-mm zones were 0.20 ± 0.25, 0.12 ± 0.17, and 0.09 ± 0.13 diopters, respectively. The average SSIM and PSNR in the 3-mm zone were 0.86 ± 0.04 and 18.24 ± 5.78, respectively. Conclusion Our results show that pix2pix cGAN can synthesize plausible postoperative corneal tomography for FLAK, demonstrating the possibility of using GANs to predict corneal tomography, with the potential of applying artificial intelligence to construct surgical planning models.
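For readers unfamiliar with astigmatism vector analysis, the sketch below shows the standard double-angle representation in which astigmatism vectors can be meaningfully subtracted; it is a generic illustration with made-up values, not the authors' exact pipeline.

```python
import math

def to_vector(cyl, axis_deg):
    # A cylinder (magnitude, axis) maps to Cartesian components at twice
    # the axis angle, because astigmatism axes repeat every 180 degrees.
    rad = math.radians(2 * axis_deg)
    return cyl * math.cos(rad), cyl * math.sin(rad)

def difference_vector(cyl1, axis1, cyl2, axis2):
    x1, y1 = to_vector(cyl1, axis1)
    x2, y2 = to_vector(cyl2, axis2)
    dx, dy = x2 - x1, y2 - y1
    magnitude = math.hypot(dx, dy)
    axis = (math.degrees(math.atan2(dy, dx)) / 2) % 180
    return magnitude, axis

# e.g., real vs. synthetic postoperative astigmatism (hypothetical values):
print(difference_vector(1.50, 90, 1.25, 95))
```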
Affiliation(s)
- Zhe Zhang
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China; Department of Cataract, Shanxi Eye Hospital, Taiyuan, China; First Hospital of Shanxi Medical University, Taiyuan, China
- Nan Cheng
- College of Biomedical Engineering, Taiyuan University of Technology, Taiyuan, China
- Yunfang Liu
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
- Junyang Song
- Department of Ophthalmology, First Affiliated Hospital of Huzhou University, Huzhou, China
- Xinhua Liu
- Shenzhen Eye Hospital, Jinan University, Shenzhen Eye Institute, Shenzhen, China
- Suhua Zhang
- Department of Cataract, Shanxi Eye Hospital, Taiyuan, China; Taiyuan Central Hospital of Shanxi Medical University, Taiyuan, China
- Guanghua Zhang
- Department of Intelligence and Automation, Taiyuan University, Taiyuan, China; Graphics and Imaging Laboratory, University of Girona, Girona, Spain
14
Guo X, Lu X, Lin Q, Zhang J, Hu X, Che S. A novel retinal image generation model with the preservation of structural similarity and high resolution. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.104004.
15
Abdelmotaal H, Sharaf M, Soliman W, Wasfi E, Kedwany SM. Bridging the resources gap: deep learning for fluorescein angiography and optical coherence tomography macular thickness map image translation. BMC Ophthalmol 2022; 22:355. PMID: 36050661. PMCID: PMC9434904. DOI: 10.1186/s12886-022-02577-7.
Abstract
Background To assess the ability of the pix2pix generative adversarial network (pix2pix GAN) to synthesize clinically useful optical coherence tomography (OCT) color-coded macular thickness maps from a modest-sized original fluorescein angiography (FA) dataset, and the reverse, as a plausible alternative to either imaging technique in patients with diabetic macular edema (DME). Methods Original images of 1,195 eyes of 708 nonconsecutive diabetic patients with or without DME were retrospectively analyzed. OCT macular thickness maps and corresponding FA images were preprocessed for use in training and testing the proposed pix2pix GAN. The best-quality synthesized images from the test set were selected based on the Fréchet inception distance score, and their quality was studied subjectively by image readers and objectively by calculating the peak signal-to-noise ratio, structural similarity index, and Hamming distance. We also used original and synthesized images in a trained deep convolutional neural network (DCNN) to plot the difference between synthesized images and their ground-truth analogues and to calculate the learned perceptual image patch similarity metric. Results The pix2pix GAN-synthesized images showed plausible quality on both subjective and objective assessment and could provide a clinically useful alternative to either image modality. Conclusion Using the pix2pix GAN to synthesize mutually dependent OCT color-coded macular thickness maps or FA images can overcome machine unavailability or clinical situations that preclude either imaging technique. Trial registration ClinicalTrials.gov identifier: NCT05105620, November 2021. Retrospectively registered.
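A minimal sketch of the objective metrics listed above, using scikit-image for PSNR and SSIM plus a simple Hamming distance on binarized images; the binarization threshold is an illustrative assumption.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def compare(real, fake):
    # real, fake: same-sized grayscale uint8 arrays.
    psnr = peak_signal_noise_ratio(real, fake, data_range=255)
    ssim = structural_similarity(real, fake, data_range=255)
    # Hamming distance here: fraction of differing pixels after binarization.
    hamming = float(np.mean((real > 127) != (fake > 127)))
    return psnr, ssim, hamming
```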
Affiliation(s)
- Hazem Abdelmotaal
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt
- Mohamed Sharaf
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt
- Wael Soliman
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt
- Ehab Wasfi
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt
- Salma M Kedwany
- Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, 71515, Egypt
16
Zhao J, Hou X, Pan M, Zhang H. Attention-based generative adversarial network in medical imaging: A narrative review. Comput Biol Med 2022; 149:105948. PMID: 35994931. DOI: 10.1016/j.compbiomed.2022.105948.
Abstract
As a popular probabilistic generative model, the generative adversarial network (GAN) has been used successfully not only in natural image processing but also in medical image analysis and computer-aided diagnosis. Despite its advantages, the application of GANs in medical image analysis faces new challenges. The introduction of attention mechanisms, which resemble the human visual system in focusing on task-related local image areas for information extraction, has drawn increasing interest. Recently proposed transformer-based architectures, which leverage the self-attention mechanism, encode long-range dependencies and learn highly expressive representations. This motivated us to summarize applications of transformer-based GANs in medical image analysis. We reviewed recent advances in techniques that combine various attention modules with different adversarial training schemes, and their applications in medical segmentation, synthesis and detection. Several recent studies have shown that attention modules can be effectively incorporated into a GAN model to detect lesion areas and extract diagnosis-related features precisely, providing a useful tool for medical image processing and diagnosis. This review indicates that, despite the great potential, research on attention mechanisms in GANs for medical image analysis is still at an early stage. We highlight attention-based generative adversarial networks as an efficient and promising computational model for advancing future research and applications in medical image analysis.
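A hedged PyTorch sketch of a SAGAN-style self-attention block, the kind of module such attention-based GANs insert between convolutional layers; this is a generic illustration, not a specific model from the review (it assumes the channel count is at least 8).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // 8, 1)
        self.k = nn.Conv2d(channels, channels // 8, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.k(x).flatten(2)                   # (b, c//8, hw)
        attn = F.softmax(q @ k, dim=-1)            # (b, hw, hw) affinities
        v = self.v(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                # residual connection
```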
Affiliation(s)
- Jing Zhao
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Xiaoyuan Hou
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Meiqing Pan
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, 100191, China
- Hui Zhang
- School of Engineering Medicine, Beihang University, Beijing, 100191, China; Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Beijing, 100191, China; Key Laboratory of Big Data-Based Precision Medicine (Beihang University), Ministry of Industry and Information Technology of the People's Republic of China, Beijing, 100191, China
17
Gao J, Zhao W, Li P, Huang W, Chen Z. LEGAN: A Light and Effective Generative Adversarial Network for medical image synthesis. Comput Biol Med 2022; 148:105878. PMID: 35863249. DOI: 10.1016/j.compbiomed.2022.105878.
Abstract
Medical image synthesis plays an important role in clinical diagnosis by providing auxiliary pathological information. However, previous methods usually adopt the one-step strategy designed for natural image synthesis, which is not sensitive to the local details of tissues within medical images. In addition, these methods consume a great deal of computing resources when generating medical images, which seriously limits their applicability in clinical diagnosis. To address these issues, a Light and Effective Generative Adversarial Network (LEGAN) is proposed to generate high-fidelity medical images in a lightweight manner. In particular, a coarse-to-fine paradigm within a two-stage generative adversarial network imitates the human painting process, guaranteeing sensitivity to local information in medical images. Furthermore, a low-rank convolutional layer is introduced into LEGAN for lightweight synthesis, using the principal components of full-rank convolutional kernels to reduce model redundancy. Additionally, multi-stage mutual information distillation is devised to maximize the dependency between the distributions of generated and real medical images during training. Finally, extensive experiments are conducted on two typical tasks: retinal fundus image synthesis and proton-density-weighted MR image synthesis. The results demonstrate that LEGAN outperforms the comparison methods by a significant margin in terms of Fréchet inception distance (FID) and number of parameters (NoP).
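A hedged sketch of a low-rank convolutional layer in the spirit described here: a full k x k convolution is factored through a rank-r bottleneck, shrinking the parameter count; the rank value is illustrative.

```python
import torch.nn as nn

def low_rank_conv(in_ch, out_ch, k=3, rank=8):
    # Parameters drop from in_ch*out_ch*k*k to roughly
    # rank*(in_ch*k*k + out_ch).
    return nn.Sequential(
        nn.Conv2d(in_ch, rank, kernel_size=k, padding=k // 2, bias=False),
        nn.Conv2d(rank, out_ch, kernel_size=1, bias=True),
    )
```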
Affiliation(s)
- Jing Gao
- School of Software Technology, Dalian University of Technology, Economic and Technological Development Zone Tuqiang Street No. 321, Dalian, 116620, Liaoning, China; Key Laboratory for Ubiquitous Network and Service Software of Liaoning, Economic and Technological Development Zone Tuqiang Street No. 321, Dalian, 116620, Liaoning, China
- Wenhan Zhao
- School of Software Technology, Dalian University of Technology, Economic and Technological Development Zone Tuqiang Street No. 321, Dalian, 116620, Liaoning, China
- Peng Li
- School of Software Technology, Dalian University of Technology, Economic and Technological Development Zone Tuqiang Street No. 321, Dalian, 116620, Liaoning, China
- Wei Huang
- Department of Scientific Research, First Affiliated Hospital of Dalian Medical University, Zhongshan Road No. 222, Dalian, 116012, Liaoning, China
- Zhikui Chen
- School of Software Technology, Dalian University of Technology, Economic and Technological Development Zone Tuqiang Street No. 321, Dalian, 116620, Liaoning, China; Key Laboratory for Ubiquitous Network and Service Software of Liaoning, Economic and Technological Development Zone Tuqiang Street No. 321, Dalian, 116620, Liaoning, China
18
Abazari MA, Soltani M, Moradi Kashkooli F, Raahemifar K. Synthetic 18F-FDG PET Image Generation Using a Combination of Biomathematical Modeling and Machine Learning. Cancers (Basel) 2022; 14:2786. PMID: 35681767. PMCID: PMC9179454. DOI: 10.3390/cancers14112786.
Abstract
No previous works have attempted to combine generative adversarial network (GAN) architectures and the biomathematical modeling of positron emission tomography (PET) radiotracer uptake in tumors to generate extra training samples. Here, we developed a novel computational model to produce synthetic 18F-fluorodeoxyglucose (18F-FDG) PET images of solid tumors in different stages of progression and angiogenesis. First, a comprehensive biomathematical model is employed for creating tumor-induced angiogenesis, intravascular and extravascular fluid flow, as well as modeling of the transport phenomena and reaction processes of 18F-FDG in a tumor microenvironment. Then, a deep convolutional GAN (DCGAN) model is employed for producing synthetic PET images using 170 input images of 18F-FDG uptake in each of 10 different tumor microvascular networks. The interstitial fluid parameters and spatiotemporal distribution of 18F-FDG uptake in tumor and healthy tissues have been compared against previously published numerical and experimental studies, indicating the accuracy of the model. The structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) of the generated PET sample and the experimental one are 0.72 and 28.53, respectively. Our results demonstrate that a combination of biomathematical modeling and GAN-based augmentation models provides a robust framework for the non-invasive and accurate generation of synthetic PET images of solid tumors in different stages.
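A hedged PyTorch sketch of a DCGAN-style generator of the kind employed above, mapping a latent vector to a synthetic image; the layer sizes (64 x 64 output) are illustrative, not the paper's configuration.

```python
import torch.nn as nn

def dcgan_generator(z_dim=100, base=64, out_ch=1):
    # Input: latent tensor of shape (batch, z_dim, 1, 1).
    block = lambda i, o: nn.Sequential(
        nn.ConvTranspose2d(i, o, 4, stride=2, padding=1, bias=False),
        nn.BatchNorm2d(o), nn.ReLU(True))
    return nn.Sequential(
        nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),  # 1 -> 4
        nn.BatchNorm2d(base * 8), nn.ReLU(True),
        block(base * 8, base * 4),                                 # 4 -> 8
        block(base * 4, base * 2),                                 # 8 -> 16
        block(base * 2, base),                                     # 16 -> 32
        nn.ConvTranspose2d(base, out_ch, 4, 2, 1), nn.Tanh())      # 32 -> 64
```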
Affiliation(s)
- Mohammad Amin Abazari
  - Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran 19967-15433, Iran
- Madjid Soltani
  - Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran 19967-15433, Iran
  - Faculty of Science, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
  - Advanced Bioengineering Initiative Center, Multidisciplinary International Complex, K. N. Toosi University of Technology, Tehran 14176-14411, Iran
  - Center for Biotechnology and Bioengineering (CBB), University of Waterloo, Waterloo, ON N2L 3G1, Canada
  - Department of Electrical and Computer Engineering, Faculty of Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Farshad Moradi Kashkooli
  - Department of Mechanical Engineering, K. N. Toosi University of Technology, Tehran 19967-15433, Iran
- Kaamran Raahemifar
  - Faculty of Science, School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
  - Data Science and Artificial Intelligence Program, College of Information Sciences and Technology (IST), Penn State University, State College, PA 16801, USA
  - Department of Chemical Engineering, University of Waterloo, 200 University Avenue West, Waterloo, ON N2L 3G1, Canada
19
Selim M, Zhang J, Fei B, Zhang GQ, Ge GY, Chen J. Cross-Vendor CT Image Data Harmonization Using CVH-CT. AMIA ... ANNUAL SYMPOSIUM PROCEEDINGS. AMIA SYMPOSIUM 2022; 2021:1099-1108. [PMID: 35308983 PMCID: PMC8861670] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
While remarkable advances have been made in computed tomography (CT), most existing efforts focus on image enhancement and radiation dose reduction. Harmonizing CT image data captured with different scanners is vital to large-scale cross-center radiomics studies, yet it remains largely unexplored. Furthermore, the lack of paired training images makes it challenging to adopt existing deep learning models. We propose a novel deep learning approach called CVH-CT for harmonizing CT images captured using scanners from different vendors. The generator of CVH-CT uses a self-attention mechanism to learn scanner-related information. We also propose a VGG feature-based domain loss to effectively extract texture properties from unpaired image data and to learn scanner-specific texture distributions. The experimental results show that CVH-CT clearly outperforms the baselines owing to the proposed domain loss, and that it can effectively reduce scanner-related variability in terms of radiomic features.
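The VGG feature-based domain loss is described only at a high level in the abstract; one common way to realize such a texture loss on unpaired data is to match Gram matrices of pretrained VGG feature maps, as in the hedged PyTorch sketch below (the layer cutoff and L1 Gram matching are illustrative assumptions, not the authors' exact design).

```python
# Hedged sketch of a VGG-feature texture (domain) loss for unpaired images;
# the relu4_1 cutoff and L1 Gram-matrix matching are assumptions.
import torch.nn.functional as F
from torchvision.models import vgg19

features = vgg19(weights="IMAGENET1K_V1").features[:21].eval()  # up to relu4_1
for p in features.parameters():
    p.requires_grad_(False)

def vgg_feats(x):
    if x.shape[1] == 1:           # CT slices are single-channel;
        x = x.repeat(1, 3, 1, 1)  # repeat to the 3 channels VGG expects
    return features(x)

def gram(f):
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)  # batch of Gram matrices

def domain_loss(generated, target_domain_batch):
    # Matching Gram (texture) statistics rather than pixels means the two
    # batches need not be paired across scanners.
    return F.l1_loss(gram(vgg_feats(generated)),
                     gram(vgg_feats(target_domain_batch)))
```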
Affiliation(s)
- Md Selim
  - Department of Computer Science, University of Kentucky, Lexington, KY
  - Institute for Biomedical Informatics, University of Kentucky, Lexington, KY
- Baowei Fei
  - Department of Bioengineering, University of Texas at Dallas, Richardson, TX
  - Department of Radiology, UT Southwestern Medical Center, Dallas, TX
- Guo-Qiang Zhang
  - Department of Neurology, University of Texas Health Science Center at Houston, Houston, TX
- Jin Chen
  - Department of Computer Science, University of Kentucky, Lexington, KY
  - Institute for Biomedical Informatics, University of Kentucky, Lexington, KY
  - Department of Internal Medicine, University of Kentucky, Lexington, KY
20
You A, Kim JK, Ryu IH, Yoo TK. Application of generative adversarial networks (GAN) for ophthalmology image domains: a survey. EYE AND VISION (LONDON, ENGLAND) 2022; 9:6. [PMID: 35109930 PMCID: PMC8808986 DOI: 10.1186/s40662-022-00277-3] [Citation(s) in RCA: 57] [Impact Index Per Article: 19.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/15/2021] [Accepted: 01/11/2022] [Indexed: 12/12/2022]
Abstract
BACKGROUND Recent advances in deep learning techniques have led to improved diagnostic abilities in ophthalmology. A generative adversarial network (GAN), which consists of two competing deep neural networks, a generator and a discriminator, has demonstrated remarkable performance in image synthesis and image-to-image translation. The adoption of GANs for medical imaging is increasing for image generation and translation, but the technique is not yet familiar to researchers in the field of ophthalmology. In this work, we present a literature review on the application of GANs in ophthalmology image domains to discuss important contributions and to identify potential future research directions. METHODS We performed a survey of studies using GANs published before June 2021 and introduce the various applications of GANs in ophthalmology image domains. The search identified 48 peer-reviewed papers for the final review. The type of GAN used in each analysis, the task, the imaging domain, and the outcome were collected to verify the usefulness of the GAN. RESULTS In ophthalmology image domains, GANs can perform segmentation, data augmentation, denoising, domain transfer, super-resolution, post-intervention prediction, and feature extraction. GAN techniques have enabled an extension of datasets and modalities in ophthalmology. GANs have several limitations, such as mode collapse, spatial deformities, unintended changes, and the generation of high-frequency noise and checkerboard artifacts. CONCLUSIONS The use of GANs has benefited various tasks in ophthalmology image domains. Based on our observations, the adoption of GANs in ophthalmology is still at a very early stage of clinical validation compared with deep learning classification techniques, because several problems need to be overcome for practical use. However, proper selection of the GAN technique and statistical modeling of ocular imaging will greatly improve the performance of each image analysis. Finally, this survey should enable researchers to select the appropriate GAN technique to maximize the potential of ophthalmology datasets for deep learning research.
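As background for the generator-discriminator competition described above, the original GAN objective (the standard formulation from Goodfellow et al., 2014, not specific to this survey) is the minimax game

$$\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right] + \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right],$$

where the discriminator D learns to separate real from generated samples while the generator G learns to fool it; several of the failure modes listed above (e.g., mode collapse) stem from instabilities in this game.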
Affiliation(s)
- Aram You
  - School of Architecture, Kumoh National Institute of Technology, Gumi, Gyeongbuk, South Korea
- Jin Kuk Kim
  - B&VIIT Eye Center, Seoul, South Korea
  - VISUWORKS, Seoul, South Korea
- Ik Hee Ryu
  - B&VIIT Eye Center, Seoul, South Korea
  - VISUWORKS, Seoul, South Korea
- Tae Keun Yoo
  - B&VIIT Eye Center, Seoul, South Korea
  - Department of Ophthalmology, Aerospace Medical Center, Republic of Korea Air Force, 635 Danjae-ro, Namil-myeon, Cheongwon-gun, Cheongju, Chungcheongbuk-do, 363-849, South Korea
21
Jeong JJ, Tariq A, Adejumo T, Trivedi H, Gichoya JW, Banerjee I. Systematic Review of Generative Adversarial Networks (GANs) for Medical Image Classification and Segmentation. J Digit Imaging 2022; 35:137-152. [PMID: 35022924 PMCID: PMC8921387 DOI: 10.1007/s10278-021-00556-w] [Citation(s) in RCA: 38] [Impact Index Per Article: 12.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 11/23/2021] [Accepted: 11/26/2021] [Indexed: 11/28/2022] Open
Abstract
In recent years, generative adversarial networks (GANs) have gained tremendous popularity for various imaging-related tasks, such as artificial image generation to support AI training. GANs are especially useful for medical imaging tasks, where training datasets are usually limited in size and heavily imbalanced against the diseased class. We present a systematic review, following the PRISMA guidelines, of recent GAN architectures used for medical image analysis, to help readers make an informed decision before employing GANs in developing medical image classification and segmentation models. We extracted 54 papers published between January 2015 and August 2020 that highlight the capabilities and applications of GANs in medical imaging and that met our inclusion criteria for meta-analysis. Our results show four main GAN architectures that are used for segmentation or classification in medical imaging. We provide a comprehensive overview of recent trends in the application of GANs in clinical diagnosis through medical image segmentation and classification, and ultimately share experiences for task-based GAN implementations.
Affiliation(s)
- Jiwoong J Jeong
  - Department of Biomedical Informatics, Emory School of Medicine, Atlanta, USA
- Amara Tariq
  - Department of Biomedical Informatics, Emory School of Medicine, Atlanta, USA
- Hari Trivedi
  - Department of Radiology, Emory School of Medicine, Atlanta, USA
- Judy W Gichoya
  - Department of Radiology, Emory School of Medicine, Atlanta, USA
- Imon Banerjee
  - Department of Biomedical Informatics, Emory School of Medicine, Atlanta, USA
  - Department of Radiology, Emory School of Medicine, Atlanta, USA
22
Chen JS, Coyner AS, Chan RP, Hartnett ME, Moshfeghi DM, Owen LA, Kalpathy-Cramer J, Chiang MF, Campbell JP. Deepfakes in Ophthalmology. OPHTHALMOLOGY SCIENCE 2021; 1:100079. [PMID: 36246951 PMCID: PMC9562356 DOI: 10.1016/j.xops.2021.100079] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/09/2021] [Revised: 10/01/2021] [Accepted: 10/29/2021] [Indexed: 02/06/2023]
Abstract
Purpose Generative adversarial networks (GANs) are deep learning (DL) models that can create and modify realistic-appearing synthetic images, or deepfakes, from real images. The purpose of our study was to evaluate the ability of experts to discern synthesized retinal fundus images from real fundus images and to review the current uses and limitations of GANs in ophthalmology. Design Development and expert evaluation of a GAN and an informal review of the literature. Participants A total of 4282 image pairs of fundus images and retinal vessel maps acquired from a multicenter ROP screening program. Methods Pix2Pix HD, a high-resolution GAN, was first trained and validated on fundus and vessel map image pairs and subsequently used to generate 880 images from a held-out test set. Fifty synthetic images from this test set and 50 different real images were presented to 4 expert ROP ophthalmologists using a custom online system for evaluation of whether the images were real or synthetic. Literature was reviewed on PubMed and Google Scholar using combinations of the terms ophthalmology, GANs, generative adversarial networks, images, deepfakes, and synthetic. Ancestor search was performed to broaden results. Main Outcome Measures Expert ability to discern real versus synthetic images was evaluated using percent accuracy. Statistical significance was evaluated using a Fisher exact test, with P values ≤ 0.05 thresholded for significance. Results The expert majority correctly identified 59% of images as being real or synthetic (P = 0.1). Experts 1 to 4 correctly identified 54%, 58%, 49%, and 61% of images (P = 0.505, 0.158, 1.000, and 0.043, respectively). These results suggest that the majority of experts could not discern between real and synthetic images. Additionally, we identified 20 implementations of GANs in the ophthalmology literature, with applications in a variety of imaging modalities and ophthalmic diseases. Conclusions Generative adversarial networks can create synthetic fundus images that are indiscernible from real fundus images by expert ROP ophthalmologists. Synthetic images may improve dataset augmentation for DL, may be used in trainee education, and may have implications for patient privacy.
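To make the statistics concrete, the sketch below runs the kind of Fisher exact test reported here on a single expert's 2x2 table of true image type versus the expert's call; the counts are hypothetical, not the study's data.

```python
# Hedged sketch: Fisher exact test for one expert's real-vs-synthetic calls.
# The 2x2 counts are hypothetical, not taken from the study.
from scipy.stats import fisher_exact

#            called real  called synthetic
table = [[30, 20],   # truly real images (n = 50)
         [21, 29]]   # truly synthetic images (n = 50)

odds_ratio, p_value = fisher_exact(table)
accuracy = (table[0][0] + table[1][1]) / 100
print(f"accuracy = {accuracy:.0%}, p = {p_value:.3f}")
# p > 0.05 is consistent with the expert being unable to discern the two.
```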
Affiliation(s)
- Jimmy S. Chen
  - Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- Aaron S. Coyner
  - Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
- R.V. Paul Chan
  - Department of Ophthalmology and Visual Sciences, University of Illinois at Chicago, Chicago, Illinois
- M. Elizabeth Hartnett
  - Department of Ophthalmology, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah
- Darius M. Moshfeghi
  - Byers Eye Institute, Horngren Family Vitreoretinal Center, Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, California
- Leah A. Owen
  - Department of Ophthalmology, John A. Moran Eye Center, University of Utah, Salt Lake City, Utah
- Jayashree Kalpathy-Cramer
  - Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, Massachusetts
  - Massachusetts General Hospital & Brigham and Women's Hospital Center for Clinical Data Science, Boston, Massachusetts
- Michael F. Chiang
  - National Eye Institute, National Institutes of Health, Bethesda, Maryland
- J. Peter Campbell
  - Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon
  - Correspondence: J. Peter Campbell, MD, MPH, Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, 515 SW Campus Drive, Portland, OR 97239.
23
Chen H, Shi Y, Bo B, Zhao D, Miao P, Tong S, Wang C. Real-Time Cerebral Vessel Segmentation in Laser Speckle Contrast Image Based on Unsupervised Domain Adaptation. Front Neurosci 2021; 15:755198. [PMID: 34916898 PMCID: PMC8669333 DOI: 10.3389/fnins.2021.755198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2021] [Accepted: 10/20/2021] [Indexed: 12/02/2022] Open
Abstract
Laser speckle contrast imaging (LSCI) is a full-field, high-spatiotemporal-resolution and low-cost optical technique for measuring blood flow, which has been successfully used for neurovascular imaging. However, due to the low signal-to-noise ratio and the relatively small vessel sizes, segmenting the cerebral vessels in LSCI has always been a technical challenge. Recently, deep learning has shown its advantages in vascular segmentation. Nonetheless, ground truth from manual labeling is usually required for training the network, which makes it difficult to implement in practice. In this manuscript, we propose a deep learning-based method for real-time cerebral vessel segmentation of LSCI without ground truth labels, which could be further integrated into an intraoperative blood vessel imaging system. Synthetic LSCI images were obtained with a synthesis network from LSCI images and the public labeled Digital Retinal Images for Vessel Extraction dataset, and were then used to train the segmentation network. Using matching strategies to reduce the size discrepancy between retinal images and laser speckle contrast images, we could further significantly improve image synthesis and segmentation performance. On the test LSCI images of rodent cerebral vessels, the proposed method achieved a Dice similarity coefficient of over 75%.
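The Dice similarity coefficient reported above is the standard overlap measure between a predicted and a reference vessel mask; a minimal NumPy version follows.

```python
# Minimal sketch: Dice similarity coefficient between two binary masks.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Toy 2x2 example (hypothetical): one of two predicted pixels overlaps.
print(dice(np.array([[1, 1], [0, 0]]), np.array([[1, 0], [0, 0]])))  # ~0.67
```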
Affiliation(s)
- Heping Chen
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  - School of Technology and Health, KTH Royal Institute of Technology, Stockholm, Sweden
- Yan Shi
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Bin Bo
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Denghui Zhao
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Peng Miao
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Shanbao Tong
  - School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Chunliang Wang
  - School of Technology and Health, KTH Royal Institute of Technology, Stockholm, Sweden
24
Qiu B, Zeng S, Meng X, Jiang Z, You Y, Geng M, Li Z, Hu Y, Huang Z, Zhou C, Ren Q, Lu Y. Comparative study of deep neural networks with unsupervised Noise2Noise strategy for noise reduction of optical coherence tomography images. JOURNAL OF BIOPHOTONICS 2021; 14:e202100151. [PMID: 34383390 DOI: 10.1002/jbio.202100151] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/08/2021] [Revised: 08/09/2021] [Accepted: 08/09/2021] [Indexed: 06/13/2023]
Abstract
As a powerful diagnostic tool, optical coherence tomography (OCT) has been widely used in various clinical settings. However, OCT images are susceptible to inherent speckle noise that may contaminate subtle structural information, owing to the low-coherence interferometric imaging procedure. Many supervised learning-based models have achieved impressive performance in reducing the speckle noise of OCT images when trained with large numbers of noisy-clean paired OCT images, which are not commonly available in clinical practice. In this article, we conducted a comparative study to investigate the denoising performance of different deep neural networks on OCT images through an unsupervised Noise2Noise (N2N) strategy, which trains only with noisy OCT samples. Four representative network architectures, including a U-shaped model, a multi-information stream model, a straight-information stream model and a GAN-based model, were investigated on an OCT image dataset acquired from healthy human eyes. The results demonstrated that all four unsupervised N2N models offered denoised OCT images with performance comparable to that of supervised learning models, illustrating the effectiveness of unsupervised N2N models in denoising OCT images. Furthermore, U-shaped models and GAN-based models using a UNet as the generator are two preferred and suitable architectures for reducing the speckle noise of OCT images while preserving the fine structural information of retinal layers under unsupervised N2N circumstances.
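The core of the Noise2Noise strategy compared here is that the denoiser's training target is simply a second noisy acquisition of the same scene; with zero-mean noise, the expected optimum coincides with clean-target training. A hedged PyTorch training-step sketch follows (the model and data are placeholders, not the paper's architectures).

```python
# Hedged sketch of one Noise2Noise training step: the target is another
# noisy OCT B-scan of the same tissue, not a clean image. `model` is a
# placeholder for any of the four architectures compared in the paper.
import torch.nn.functional as F

def n2n_step(model, optimizer, noisy_a, noisy_b):
    # noisy_a, noisy_b: two independently noisy B-scans of the same scene.
    optimizer.zero_grad()
    loss = F.mse_loss(model(noisy_a), noisy_b)
    loss.backward()
    optimizer.step()
    return loss.item()
```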
Affiliation(s)
- Bin Qiu
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
- Shuang Zeng
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China
- Xiangxi Meng
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Key Laboratory for Research and Evaluation of Radiopharmaceuticals (National Medical Products Administration), Department of Nuclear Medicine, Peking University Cancer Hospital & Institute, Beijing, China
- Zhe Jiang
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
- Yunfei You
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
- Mufeng Geng
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
- Ziyuan Li
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
- Yicheng Hu
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
- Zhiyu Huang
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
- Chuanqing Zhou
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Qiushi Ren
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing, China
  - Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
- Yanye Lu
  - Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing, China
  - Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China
25
Updates in deep learning research in ophthalmology. Clin Sci (Lond) 2021; 135:2357-2376. [PMID: 34661658 DOI: 10.1042/cs20210207] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2021] [Revised: 09/14/2021] [Accepted: 09/29/2021] [Indexed: 12/13/2022]
Abstract
Ophthalmology has been one of the early adopters of artificial intelligence (AI) within the medical field. Deep learning (DL), in particular, has garnered significant attention due to the availability of large amounts of data and digitized ocular images. Currently, AI in ophthalmology is mainly focused on improving disease classification and supporting decision-making when treating ophthalmic diseases such as diabetic retinopathy, age-related macular degeneration (AMD), glaucoma and retinopathy of prematurity (ROP). However, most of the DL systems (DLSs) developed thus far remain in the research stage and only a handful have achieved clinical translation. This phenomenon is due to a combination of factors including concerns over security and privacy, poor generalizability, trust and explainability issues, unfavorable end-user perceptions and uncertain economic value. Overcoming this challenge will require a combined approach. Firstly, emerging techniques such as federated learning (FL), generative adversarial networks (GANs), autonomous AI and blockchain will play an increasingly critical role in enhancing privacy, collaboration and DLS performance. Next, compliance with reporting and regulatory guidelines, such as CONSORT-AI and STARD-AI, will be required to improve transparency, minimize abuse and ensure reproducibility. Thirdly, frameworks will be required to obtain patient consent, perform ethical assessments and evaluate end-user perceptions. Lastly, proper health economic assessment (HEA) must be performed to provide financial visibility during the early phases of DLS development. This is necessary to manage resources prudently and to guide the development of DLSs.
26
Hu L, Zhou DW, Zha YF, Li L, He H, Xu WH, Qian L, Zhang YK, Fu CX, Hu H, Zhao JG. Synthesizing High- b-Value Diffusion-weighted Imaging of the Prostate Using Generative Adversarial Networks. Radiol Artif Intell 2021; 3:e200237. [PMID: 34617025 DOI: 10.1148/ryai.2021200237] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Revised: 04/11/2021] [Accepted: 05/18/2021] [Indexed: 11/11/2022]
Abstract
Purpose To develop and evaluate a diffusion-weighted imaging (DWI) deep learning framework based on the generative adversarial network (GAN) to generate synthetic high-b-value (b = 1500 sec/mm2) DWI (SYNb1500) sets from acquired standard-b-value (b = 800 sec/mm2) DWI (ACQb800) and acquired standard-b-value (b = 1000 sec/mm2) DWI (ACQb1000) sets. Materials and Methods This retrospective multicenter study included 395 patients who underwent prostate multiparametric MRI. This cohort was split into internal training (96 patients) and external testing (299 patients) datasets. To create SYNb1500 sets from ACQb800 and ACQb1000 sets, a deep learning model based on the GAN (M0) was developed by using the internal dataset. M0 was trained and compared with a conventional model based on the cycle GAN (Mcyc). M0 was further optimized by using denoising and edge-enhancement techniques (optimized version of the M0 [Opt-M0]). SYNb1500 and Opt-SYNb1500 sets were then synthesized from the ACQb800 and ACQb1000 sets of the external testing dataset by using the M0 and the Opt-M0, respectively. For comparison, traditional calculated (b = 1500 sec/mm2) DWI (CALb1500) sets were also obtained. Reader ratings for image quality and prostate cancer detection were performed on the acquired high-b-value (b = 1500 sec/mm2) DWI (ACQb1500), CALb1500, SYNb1500, and Opt-SYNb1500 sets. Wilcoxon signed rank tests were used to compare the readers' scores. A multiple-reader multiple-case receiver operating characteristic curve was used to compare the diagnostic utility of each DWI set. Results When compared with the Mcyc, the M0 yielded a lower mean squared difference and higher mean scores for the peak signal-to-noise ratio, structural similarity, and feature similarity (P < .001 for all). Opt-SYNb1500 resulted in significantly better image quality (P ≤ .001 for all) and a higher mean area under the curve than ACQb1500 and CALb1500 (P ≤ .042 for all). Conclusion A deep learning framework based on the GAN is a promising method to synthesize realistic high-b-value DWI sets with good image quality and accuracy in prostate cancer detection. Keywords: Prostate Cancer, Abdomen/GI, Diffusion-weighted Imaging, Deep Learning Framework, High b Value, Generative Adversarial Networks. © RSNA, 2021. Supplemental material is available for this article.
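For context, "calculated" high-b-value DWI such as CALb1500 is conventionally extrapolated from the mono-exponential diffusion model rather than acquired (standard DWI physics, not a detail taken from this paper):

$$S(b) = S_0\, e^{-b\,\mathrm{ADC}} \quad\Rightarrow\quad S_{1500} = S_{1000}\, e^{-500\,\mathrm{ADC}}, \qquad \mathrm{ADC} = \frac{\ln\left(S_{800}/S_{1000}\right)}{200},$$

where the apparent diffusion coefficient (ADC) is estimated from the two acquired b-values. The GAN approach above aims to improve on this extrapolation, which tends to amplify noise and any misfit of the mono-exponential assumption.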
Affiliation(s)
- Lei Hu
  - Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China
- Da-Wei Zhou
  - State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University, Xi'an, China
- Yun-Fei Zha
  - Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China
- Liang Li
  - Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China
- Huan He
  - Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China
- Wen-Hao Xu
  - Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China
- Li Qian
  - Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China
- Yi-Kun Zhang
  - Department of Radiology, Renmin Hospital, Wuhan University, Wuhan, China
- Cai-Xia Fu
  - MR Application Development, Siemens Shenzhen MR, Shenzhen, China
- Hui Hu
  - Department of Radiology, The Affiliated Renmin Hospital of Jiangsu University, Zhenjiang, China
- Jun-Gong Zhao
  - Department of Diagnostic and Interventional Radiology, Shanghai Jiao Tong University Affiliated Sixth People's Hospital, 600 Yi Shan Road, Shanghai 200233, China
27
Wang Z, Lim G, Ng WY, Keane PA, Campbell JP, Tan GSW, Schmetterer L, Wong TY, Liu Y, Ting DSW. Generative adversarial networks in ophthalmology: what are these and how can they be used? Curr Opin Ophthalmol 2021; 32:459-467. [PMID: 34324454 PMCID: PMC10276657 DOI: 10.1097/icu.0000000000000794] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/18/2022]
Abstract
PURPOSE OF REVIEW The development of deep learning (DL) systems requires a large amount of data, which may be limited by costs, protection of patient information and the low prevalence of some conditions. Recent developments in artificial intelligence techniques have provided an innovative alternative to this challenge via the synthesis of biomedical images within a DL framework known as generative adversarial networks (GANs). This paper aims to introduce how GANs can be deployed for image synthesis in ophthalmology and to discuss the potential applications of GAN-produced images. RECENT FINDINGS Image synthesis is the most relevant function of GANs to the medical field, and it has been widely used for generating 'new' medical images of various modalities. In ophthalmology, GANs have mainly been utilized for augmenting classification and predictive tasks, by synthesizing fundus images and optical coherence tomography images with and without pathologies such as age-related macular degeneration and diabetic retinopathy. Despite their ability to generate high-resolution images, the development of GANs remains data intensive, and there is a lack of consensus on how best to evaluate the outputs produced by GANs. SUMMARY Although the problem of artificial biomedical data generation is of great interest, image synthesis by GANs represents an innovation with as yet unclear relevance for ophthalmology.
Affiliation(s)
- Zhaoran Wang
  - Duke-NUS Medical School, National University of Singapore
- Gilbert Lim
  - Duke-NUS Medical School, National University of Singapore
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Centre, Singapore
- Wei Yan Ng
  - Duke-NUS Medical School, National University of Singapore
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Centre, Singapore
- Pearse A. Keane
  - Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- J. Peter Campbell
  - Department of Ophthalmology, Casey Eye Institute, Oregon Health & Science University, Portland, Oregon, USA
- Gavin Siew Wei Tan
  - Duke-NUS Medical School, National University of Singapore
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Centre, Singapore
- Leopold Schmetterer
  - Duke-NUS Medical School, National University of Singapore
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Centre, Singapore
  - SERI-NTU Advanced Ocular Engineering (STANCE)
  - School of Chemical and Biomedical Engineering, Nanyang Technological University, Singapore
  - Institute of Molecular and Clinical Ophthalmology Basel, Basel, Switzerland
  - Department of Clinical Pharmacology
  - Center for Medical Physics and Biomedical Engineering, Medical University of Vienna, Vienna, Austria
- Tien Yin Wong
  - Duke-NUS Medical School, National University of Singapore
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Centre, Singapore
- Yong Liu
  - Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore
- Daniel Shu Wei Ting
  - Duke-NUS Medical School, National University of Singapore
  - Singapore Eye Research Institute, Singapore
  - Singapore National Eye Centre, Singapore
28
Kim S, Jang H, Hong S, Hong YS, Bae WC, Kim S, Hwang D. Fat-saturated image generation from multi-contrast MRIs using generative adversarial networks with Bloch equation-based autoencoder regularization. Med Image Anal 2021; 73:102198. [PMID: 34403931 DOI: 10.1016/j.media.2021.102198] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Revised: 07/18/2021] [Accepted: 07/23/2021] [Indexed: 11/28/2022]
Abstract
Obtaining multiple series of magnetic resonance (MR) images with different contrasts is useful for the accurate diagnosis of human spinal conditions. However, this can be time-consuming and a burden on both the patient and the hospital. We propose a Bloch equation-based autoencoder regularization generative adversarial network (BlochGAN) to generate a fat-saturation T2-weighted (T2 FS) image from T1-weighted (T1-w) and T2-weighted (T2-w) images of the human spine. Our approach utilizes the relationship between the contrasts through the Bloch equation, since it is a fundamental principle of MR physics and serves as the physical basis of each contrast. BlochGAN properly generates the target-contrast images by using Bloch equation-based autoencoder regularization to identify the physical basis of the contrasts. BlochGAN consists of four sub-networks: an encoder, a decoder, a generator, and a discriminator. The encoder extracts features from the multi-contrast input images, and the generator creates target T2 FS images using the features extracted by the encoder. The discriminator assists network learning by providing an adversarial loss, and the decoder reconstructs the input multi-contrast images and regularizes the learning process by providing a reconstruction loss. The discriminator and the decoder are used only in the training process. Our results demonstrate that BlochGAN achieved quantitatively and qualitatively superior performance compared to conventional medical image synthesis methods in generating spine T2 FS images from T1-w and T2-w images.
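The "physical basis of each contrast" that BlochGAN exploits can be illustrated with the textbook spin-echo signal approximation that follows from the Bloch equations (an illustration, not the paper's exact formulation):

$$S \;\propto\; \mathrm{PD}\left(1 - e^{-TR/T_1}\right) e^{-TE/T_2},$$

so T1-w, T2-w, and T2 FS images are different projections of the same underlying tissue parameters (proton density, T1, T2), which is what makes cross-contrast synthesis physically plausible.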
Affiliation(s)
- Sewon Kim
  - School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Hanbyol Jang
  - School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Seokjun Hong
  - School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
- Yeong Sang Hong
  - Center for Clinical Imaging Data Science Center, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
  - Department of Radiology, Gangnam Severance Hospital, 211, Eonju-ro, Gangnam-gu, Seoul 06273, Republic of Korea
- Won C Bae
  - Department of Radiology, Veterans Affairs San Diego Healthcare System, 3350 La Jolla Village Drive, San Diego, CA 92161-0114, USA
  - Department of Radiology, University of California-San Diego, La Jolla, CA 92093-0997, USA
- Sungjun Kim
  - Center for Clinical Imaging Data Science Center, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
  - Department of Radiology, Gangnam Severance Hospital, 211, Eonju-ro, Gangnam-gu, Seoul 06273, Republic of Korea
- Dosik Hwang
  - School of Electrical and Electronic Engineering, Yonsei University, 50, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
  - Center for Clinical Imaging Data Science Center, Research Institute of Radiological Science, Department of Radiology, Yonsei University College of Medicine, 50-1, Yonsei-ro, Seodaemun-gu, Seoul 03722, Republic of Korea
29
Saeed AQ, Sheikh Abdullah SNH, Che-Hamzah J, Abdul Ghani AT. Accuracy of Using Generative Adversarial Networks for Glaucoma Detection During the COVID-19 Pandemic: A Systematic Review and Bibliometric Analysis. J Med Internet Res 2021; 23:e27414. [PMID: 34236992 PMCID: PMC8493455 DOI: 10.2196/27414] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2021] [Revised: 05/11/2021] [Accepted: 07/05/2021] [Indexed: 01/19/2023] Open
Abstract
Background Glaucoma leads to irreversible blindness. Globally, it is the second most common retinal disease that leads to blindness, slightly less common than cataracts. Therefore, there is a great need to avoid the silent growth of this disease using recently developed generative adversarial networks (GANs). Objective This paper aims to introduce a GAN technology for the diagnosis of eye disorders, particularly glaucoma. This paper illustrates deep adversarial learning as a potential diagnostic tool and the challenges involved in its implementation. This study describes and analyzes many of the pitfalls and problems that researchers will need to overcome to implement this kind of technology. Methods To organize this review comprehensively, articles and reviews were collected using the following keywords: (“Glaucoma,” “optic disc,” “blood vessels”) and (“receptive field,” “loss function,” “GAN,” “Generative Adversarial Network,” “Deep learning,” “CNN,” “convolutional neural network” OR encoder). The records were identified from 5 highly reputed databases: IEEE Xplore, Web of Science, Scopus, ScienceDirect, and PubMed. These libraries broadly cover the technical and medical literature. Publications within the last 5 years, specifically 2015-2020, were included because the target GAN technique was invented only in 2014 and the publishing date of the collected papers was not earlier than 2016. Duplicate records were removed, and irrelevant titles and abstracts were excluded. In addition, we excluded papers that used optical coherence tomography and visual field images, except for those with 2D images. A large-scale systematic analysis was performed, and then a summarized taxonomy was generated. Furthermore, the results of the collected articles were summarized and a visual representation of the results was presented on a T-shaped matrix diagram. This study was conducted between March 2020 and November 2020. Results We found 59 articles after conducting a comprehensive survey of the literature. Among the 59 articles, 30 present actual attempts to synthesize images and provide accurate segmentation/classification using single/multiple landmarks or share certain experiences. The other 29 articles discuss the recent advances in GANs, do practical experiments, and contain analytical studies of retinal disease. Conclusions Recent deep learning techniques, namely GANs, have shown encouraging performance in retinal disease detection. Although this methodology involves an extensive computing budget and optimization process, it saturates the greedy nature of deep learning techniques by synthesizing images and solves major medical issues. This paper contributes to this research field by offering a thorough analysis of existing works, highlighting current limitations, and suggesting alternatives to support other researchers and participants in further improving and strengthening future work. Finally, new directions for this research have been identified.
Affiliation(s)
- Ali Q Saeed
  - Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia
  - Computer Center, Northern Technical University, Ninevah, Iraq
- Siti Norul Huda Sheikh Abdullah
  - Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia
- Jemaima Che-Hamzah
  - Department of Ophthalmology, Faculty of Medicine, Universiti Kebangsaan Malaysia (UKM), Cheras, Kuala Lumpur, Malaysia
- Ahmad Tarmizi Abdul Ghani
  - Faculty of Information Science & Technology (FTSM), Universiti Kebangsaan Malaysia (UKM), 43600 Bangi, Selangor, Malaysia
30
Abdelmotaal H, Abdou AA, Omar AF, El-Sebaity DM, Abdelazeem K. Pix2pix Conditional Generative Adversarial Networks for Scheimpflug Camera Color-Coded Corneal Tomography Image Generation. Transl Vis Sci Technol 2021; 10:21. [PMID: 34132759 PMCID: PMC8242686 DOI: 10.1167/tvst.10.7.21] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Abstract
Purpose To assess the ability of a pix2pix conditional generative adversarial network (pix2pix cGAN) to create plausible synthesized Scheimpflug camera color-coded corneal tomography images from a modest-sized original dataset, for use in image augmentation when training a deep convolutional neural network (DCNN) to classify keratoconus and normal corneal images. Methods Original images of 1778 eyes of 923 nonconsecutive patients with or without keratoconus were retrospectively analyzed. Images were labeled and preprocessed for use in training the proposed pix2pix cGAN. The best-quality synthesized images were selected based on the Fréchet inception distance score, and their quality was studied by calculating the mean square error, structural similarity index, and peak signal-to-noise ratio. We used original images, traditionally augmented original images, and synthesized images to train a DCNN for image classification and compared classification performance metrics. Results The pix2pix cGAN-synthesized images showed plausible quality on both subjective and objective assessment. Training the DCNN with a combination of real and synthesized images allowed better classification performance than training with original images only or with traditional augmentation. Conclusions Using the pix2pix cGAN to synthesize corneal tomography images can overcome issues related to small datasets and class imbalance when training computer-aided diagnostic models. Translational Relevance The pix2pix cGAN can provide an unlimited supply of plausible synthetic Scheimpflug camera color-coded corneal tomography images at levels useful for experimental and clinical applications.
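The Fréchet inception distance used above to select the best synthesized images compares Gaussian fits to Inception-feature statistics of the real and generated sets (standard definition; lower is better):

$$\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2 + \operatorname{Tr}\left(\Sigma_r + \Sigma_g - 2\left(\Sigma_r \Sigma_g\right)^{1/2}\right),$$

where $(\mu_r, \Sigma_r)$ and $(\mu_g, \Sigma_g)$ are the means and covariances of Inception features computed over the real and generated images, respectively.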
Affiliation(s)
- Hazem Abdelmotaal
  - Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Ahmed A Abdou
  - Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Ahmed F Omar
  - Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
- Khaled Abdelazeem
  - Department of Ophthalmology, Faculty of Medicine, Assiut University, Assiut, Egypt
31
Yu Z, Yan R, Yu Y, Ma X, Liu X, Liu J, Ren Q, Lu Y. Multiple Lesions Insertion: boosting diabetic retinopathy screening through Poisson editing. BIOMEDICAL OPTICS EXPRESS 2021; 12:2773-2789. [PMID: 34123503 PMCID: PMC8176793 DOI: 10.1364/boe.420776] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2021] [Revised: 03/20/2021] [Accepted: 04/02/2021] [Indexed: 06/12/2023]
Abstract
Deep neural networks have made incredible progress in many computer vision tasks, owing to access to great amounts of data. However, collecting ground truth for large medical image datasets is extremely inconvenient and difficult to implement in practical applications, due to the high professional requirements. Synthesis can generate meaningful supplementary samples to enlarge an insufficient medical image dataset. In this study, we propose a new data augmentation method, Multiple Lesions Insertion (MLI), that simulates new diabetic retinopathy (DR) fundus images by inserting real lesion templates, such as exudates, hemorrhages, and microaneurysms, into healthy fundus images with Poisson editing. The synthetic fundus images can be generated according to clinical rules, i.e., the numbers of exudates, hemorrhages, and microaneurysms differ across DR grades. The DR fundus images generated by our MLI method are realistic, with real texture features and rich details, and without black spots, artifacts, or discontinuities. We first demonstrate the feasibility of this method in a DR computer-aided diagnosis (CAD) system, which judges whether a patient should be referred for treatment. Our results indicate that the MLI method outperforms most traditional augmentation methods, i.e., oversampling, under-sampling, cropping, rotation, and adding other real samples, in the DR screening task.
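Poisson editing, the core of the MLI augmentation, blends a lesion patch into a healthy fundus image by transferring the patch's gradients rather than its raw pixels, which is what avoids the seams and artifacts mentioned above. OpenCV exposes this operation directly, as in the hedged sketch below (file names and the insertion point are placeholders, not the paper's data).

```python
# Hedged sketch: inserting a lesion template into a healthy fundus image
# via Poisson (seamless) cloning. Paths and coordinates are placeholders.
import cv2
import numpy as np

fundus = cv2.imread("healthy_fundus.png")        # destination image
lesion = cv2.imread("hemorrhage_template.png")   # lesion patch to insert

# White mask covering the template region to be cloned.
mask = 255 * np.ones(lesion.shape[:2], dtype=np.uint8)
center = (420, 310)  # hypothetical insertion point (x, y) in the fundus

# seamlessClone solves a Poisson equation so that gradients, not absolute
# colors, are transferred into the destination image.
blended = cv2.seamlessClone(lesion, fundus, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("synthetic_dr_fundus.png", blended)
```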
Affiliation(s)
- Zekuan Yu
  - Department of Biomedical Engineering, College of Engineering, Peking University, Beijing 100871, China
  - Key Laboratory of Industrial Dust Prevention and Control and Occupational Health and Safety, Ministry of Education, Anhui, China
  - Anhui Province Engineering Laboratory of Occupational Health and Safety, Anhui, China
  - Key Laboratory of Industrial Dust Deep Reduction and Occupational Health and Safety of Anhui Higher Education Institutes, Anhui, China
- Rongyao Yan
  - School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100000, China
- Yuanyuan Yu
  - School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100000, China
- Xiao Ma
  - School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100000, China
- Xiao Liu
  - School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100000, China
- Jie Liu
  - School of Computer and Information Technology, Beijing Jiaotong University, Beijing 100000, China
- Qiushi Ren
  - Department of Biomedical Engineering, College of Engineering, Peking University, Beijing 100871, China
- Yanye Lu
  - Department of Biomedical Engineering, College of Engineering, Peking University, Beijing 100871, China
32
Jiang Z, Huang Z, Qiu B, Meng X, You Y, Liu X, Geng M, Liu G, Zhou C, Yang K, Maier A, Ren Q, Lu Y. Weakly Supervised Deep Learning-Based Optical Coherence Tomography Angiography. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:688-698. [PMID: 33136539 DOI: 10.1109/tmi.2020.3035154] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Optical coherence tomography angiography (OCTA) is a promising imaging modality for microvasculature studies. Deep learning networks have been widely applied in the field of OCTA reconstruction, benefiting from their powerful image-to-image mapping capability. However, these existing deep learning-based methods depend on high-quality labels, which are hard to acquire given imaging hardware limitations and practical data acquisition conditions. In this article, we propose an unprecedented weakly supervised deep learning-based pipeline for the OCTA reconstruction task in the absence of high-quality training labels. The proposed pipeline was investigated on an in vivo animal dataset and a human eye dataset using a cross-validation strategy. Compared with supervised learning approaches, the proposed approach demonstrated similar or even better performance on the OCTA reconstruction task. These investigations indicate that the proposed weakly supervised learning strategy is well capable of performing OCTA reconstruction and has potential for clinical applications.
33
Selim M, Zhang J, Fei B, Zhang GQ, Chen J. STAN-CT: Standardizing CT Image using Generative Adversarial Networks. AMIA ... ANNUAL SYMPOSIUM PROCEEDINGS. AMIA SYMPOSIUM 2021; 2020:1100-1109. [PMID: 33936486 PMCID: PMC8075475] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Computed Tomography (CT) plays an important role in lung malignancy diagnostics, therapy assessment, and facilitating precision medicine delivery. However, the use of personalized imaging protocols poses a challenge in large-scale cross-center CT image radiomic studies. We present an end-to-end solution called STAN-CT for CT image standardization and normalization, which effectively reduces discrepancies in image features caused by using different imaging protocols or using different CT scanners with the same imaging protocol. STAN-CT consists of two components: 1) a Generative Adversarial Networks (GAN) model, in which a latent-feature-based loss function is adopted to learn the data distribution of standard images within a few rounds of generator training, and 2) an automatic DICOM reconstruction pipeline with systematic image quality control that ensures the generation of high-quality standard DICOM images. Experimental results indicate that the training efficiency and model performance of STAN-CT are significantly improved compared with state-of-the-art CT image standardization and normalization algorithms.
Affiliation(s)
- Md Selim
  - Department of Computer Science, University of Kentucky, Lexington, KY
  - Institute for Biomedical Informatics, University of Kentucky, Lexington, KY
- Jie Zhang
  - Department of Radiology, University of Kentucky, Lexington, KY
- Baowei Fei
  - Department of Bioengineering, University of Texas at Dallas, Richardson, TX
  - Department of Radiology, UT Southwestern Medical Center, Dallas, TX
- Guo-Qiang Zhang
  - The University of Texas Health Science Center at Houston, Houston, TX
- Jin Chen
  - Department of Computer Science, University of Kentucky, Lexington, KY
  - Institute for Biomedical Informatics, University of Kentucky, Lexington, KY
  - Department of Internal Medicine, University of Kentucky, Lexington, KY
34
Coyner AS, Chen J, Campbell JP, Ostmo S, Singh P, Kalpathy-Cramer J, Chiang MF. Diagnosability of Synthetic Retinal Fundus Images for Plus Disease Detection in Retinopathy of Prematurity. AMIA ... ANNUAL SYMPOSIUM PROCEEDINGS. AMIA SYMPOSIUM 2021; 2020:329-337. [PMID: 33936405 PMCID: PMC8075515] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Advances in generative adversarial networks have allowed for engineering of highly-realistic images. Many studies have applied these techniques to medical images. However, evaluation of generated medical images often relies upon image quality and reconstruction metrics, and subjective evaluation by laypersons. This is acceptable for generation of images depicting everyday objects, but not for medical images, where there may be subtle features experts rely upon for diagnosis. We implemented the pix2pix generative adversarial network for retinal fundus image generation, and evaluated the ability of experts to identify generated images as such and to form accurate diagnoses of plus disease in retinopathy of prematurity. We found that, while experts could discern between real and generated images, the diagnoses between image sets were similar. By directly evaluating and confirming physicians' abilities to diagnose generated retinal fundus images, this work supports conclusions that generated images may be viable for dataset augmentation and physician training.
Affiliation(s)
- Jimmy Chen
  - Ophthalmology, Oregon Health & Science University, Portland, OR, United States
- J Peter Campbell
  - Ophthalmology, Oregon Health & Science University, Portland, OR, United States
- Susan Ostmo
  - Ophthalmology, Oregon Health & Science University, Portland, OR, United States
- Praveer Singh
  - Radiology, MGH/Harvard Medical School, Charlestown, MA, United States
  - MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Jayashree Kalpathy-Cramer
  - Radiology, MGH/Harvard Medical School, Charlestown, MA, United States
  - MGH & BWH Center for Clinical Data Science, Boston, MA, United States
- Michael F Chiang
  - Medical Informatics & Clinical Epidemiology, Oregon Health & Science University, Portland, OR, United States
  - Ophthalmology, Oregon Health & Science University, Portland, OR, United States
35
Qiu B, You Y, Huang Z, Meng X, Jiang Z, Zhou C, Liu G, Yang K, Ren Q, Lu Y. N2NSR-OCT: Simultaneous denoising and super-resolution in optical coherence tomography images using semisupervised deep learning. J Biophotonics 2021; 14:e202000282. [PMID: 33025760 DOI: 10.1002/jbio.202000282] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/14/2020] [Revised: 09/21/2020] [Accepted: 09/29/2020] [Indexed: 06/11/2023]
Abstract
Optical coherence tomography (OCT) imaging shows significant potential for clinical routine owing to its noninvasive nature. However, the quality of OCT images is generally limited by the inherent speckle noise of OCT imaging and by low sampling rates. To obtain high signal-to-noise ratio (SNR), high-resolution (HR) OCT images within a short scanning time, we present a learning-based method to recover high-quality OCT images from noisy, low-resolution inputs. We propose a semisupervised learning approach, N2NSR-OCT, that performs denoising and super-resolution simultaneously using up- and down-sampling networks (U-Net (Semi) and DBPN (Semi)). Additionally, separate super-resolution and denoising models with different upscale factors (2× and 4×) were trained to recover high-quality OCT images at the corresponding down-sampling rates. The semisupervised approach achieves results comparable with those of supervised learning using the same up- and down-sampling networks, and outperforms other related state-of-the-art methods in preserving subtle, fine retinal structures.
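As a rough picture of joint denoising and super-resolution, the sketch below wires a small convolutional body to a pixel-shuffle upsampler and trains it, Noise2Noise-style, against an independent noisy high-resolution scan rather than a clean label. This toy architecture is a stand-in, not U-Net (Semi) or DBPN (Semi); all names and sizes are assumptions.

import torch
import torch.nn as nn

class UpsampleDenoiser(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Pixel-shuffle upsampling to the requested factor (2x or 4x).
        self.up = nn.Sequential(
            nn.Conv2d(64, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x):
        return self.up(self.body(x))

model = UpsampleDenoiser(scale=2)
loss_fn = nn.L1Loss()
# Semisupervised pairing: a noisy low-resolution input against an
# independent noisy high-resolution scan of the same region, so no
# clean ground truth is required.
noisy_lr = torch.rand(1, 1, 128, 128)
noisy_hr = torch.rand(1, 1, 256, 256)
loss = loss_fn(model(noisy_lr), noisy_hr)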
Affiliation(s)
- Bin Qiu: Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Yunfei You: Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Zhiyu Huang: Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Xiangxi Meng: Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China; Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Nuclear Medicine, Peking University Cancer Hospital & Institute, Beijing, China
- Zhe Jiang: Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Chuanqing Zhou: Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Gangjun Liu: Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Kun Yang: College of Quality and Technical Supervision, Hebei University, Baoding, China
- Qiushi Ren: Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China
- Yanye Lu: Department of Biomedical Engineering, College of Engineering, Peking University, Beijing, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, Shenzhen, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory, Shenzhen, China; Department of Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
36
Shaga Devan K, Walther P, von Einem J, Ropinski T, Kestler HA, Read C. Improved automatic detection of herpesvirus secondary envelopment stages in electron microscopy by augmenting training data with synthetic labelled images generated by a generative adversarial network. Cell Microbiol 2020; 23:e13280. [PMID: 33073426 DOI: 10.1111/cmi.13280] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2020] [Revised: 10/01/2020] [Accepted: 10/14/2020] [Indexed: 12/16/2022]
Abstract
Detailed analysis of secondary envelopment of the herpesvirus human cytomegalovirus (HCMV) by transmission electron microscopy (TEM) is crucial for understanding the formation of infectious virions. Here, we present a convolutional neural network (CNN) that automatically recognises cytoplasmic capsids and distinguishes between three HCMV capsid envelopment stages in TEM images. 315 TEM images containing 2,610 expert-labelled capsids of the three classes were available for CNN training. To overcome the limitation of small training datasets, and the resulting poor CNN performance, we used a generative adversarial network (GAN) to automatically augment our labelled training dataset with 500 synthetic images, increasing it to 9,192 labelled capsids. The synthetic TEM images were added to the ground-truth dataset to train a Faster R-CNN deep learning-based object detector. Training with the 315 ground-truth images yielded an average precision (AP) of 53.81% for detection, whereas adding the 500 synthetic training images increased the AP to 76.48%. This shows that generating and using synthetic labelled images for detector training is an inexpensive way to improve detector performance. This work combines the gold standard of secondary envelopment research with state-of-the-art deep learning technology to accelerate automatic image analysis even when large labelled training datasets are not available.
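The augmentation recipe itself is simple enough to sketch: concatenate the GAN-generated labelled set with the real one and train a standard detector on the merged data. Below is a hedged torchvision illustration; DummyTEMDataset is a placeholder for the real and synthetic data, and the hyperparameters are illustrative, not the authors' settings.

import torch
from torch.utils.data import ConcatDataset, DataLoader
from torchvision.models.detection import fasterrcnn_resnet50_fpn

class DummyTEMDataset(torch.utils.data.Dataset):
    """Stand-in for real or GAN-generated labelled TEM images."""
    def __init__(self, n):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, i):
        image = torch.rand(3, 256, 256)
        target = {"boxes": torch.tensor([[30.0, 30.0, 90.0, 90.0]]),
                  "labels": torch.tensor([1])}  # one of three envelopment stages
        return image, target

real_ds = DummyTEMDataset(315)       # expert-labelled ground truth
synthetic_ds = DummyTEMDataset(500)  # GAN-generated labelled images
train_ds = ConcatDataset([real_ds, synthetic_ds])
loader = DataLoader(train_ds, batch_size=2, shuffle=True,
                    collate_fn=lambda batch: tuple(zip(*batch)))

model = fasterrcnn_resnet50_fpn(num_classes=4)  # background + 3 stages
model.train()
images, targets = next(iter(loader))
loss_dict = model(list(images), list(targets))  # detector training losses
loss = sum(loss_dict.values())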
Affiliation(s)
- Paul Walther: Central Facility for Electron Microscopy, Ulm University, Ulm, Germany
- Jens von Einem: Institute of Virology, Ulm University Medical Center, Ulm, Germany
- Timo Ropinski: Institute of Media Informatics, Ulm University, Ulm, Germany
- Clarissa Read: Central Facility for Electron Microscopy, Ulm University, Ulm, Germany; Institute of Virology, Ulm University Medical Center, Ulm, Germany
37
Jiang Z, Huang Z, Qiu B, Meng X, You Y, Liu X, Liu G, Zhou C, Yang K, Maier A, Ren Q, Lu Y. Comparative study of deep learning models for optical coherence tomography angiography. Biomed Opt Express 2020; 11:1580-1597. [PMID: 32206430 PMCID: PMC7075619 DOI: 10.1364/boe.387807] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2020] [Revised: 02/21/2020] [Accepted: 02/21/2020] [Indexed: 05/10/2023]
Abstract
Optical coherence tomography angiography (OCTA) is a promising imaging modality for microvasculature studies, and deep learning has developed rapidly for image-to-image translation tasks. Some studies have applied deep learning models to OCTA reconstruction and obtained preliminary results, but most are limited to a few specific neural networks. In this paper, we conducted a comparative study of OCTA reconstruction using deep learning models. Four representative network architectures, including single-path models, U-shaped models, generative adversarial network (GAN)-based models, and multi-path models, were investigated on a dataset of OCTA images acquired from rat brains. Three potential refinements were also investigated to study the feasibility of improving performance. The results showed that U-shaped models and multi-path models are two suitable architectures for OCTA reconstruction, and that incorporating phase information is a promising direction for further research.
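The comparison protocol reduces to training each candidate architecture on identical OCT-to-OCTA pairs and scoring the reconstructions with a shared metric. A hedged skeleton of that loop is shown below; the single-path stub and the PSNR helper are illustrative stand-ins for the four model families and the metrics actually used in the study.

import torch
import torch.nn as nn

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio, one common shared reconstruction metric.
    mse = torch.mean((pred - target) ** 2)
    return 10 * torch.log10(max_val ** 2 / mse)

candidates = {
    "single_path": nn.Sequential(
        nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 1, 3, padding=1)),
    # "u_shaped", "gan_based", and "multi_path" models would be added here.
}

oct_input = torch.rand(1, 1, 128, 128)    # structural OCT B-scans
octa_target = torch.rand(1, 1, 128, 128)  # ground-truth OCTA

for name, model in candidates.items():
    with torch.no_grad():
        print(name, psnr(model(oct_input), octa_target).item())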
Affiliation(s)
- Zhe Jiang: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Zhiyu Huang: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Bin Qiu: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Xiangxi Meng: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Nuclear Medicine, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Yunfei You: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Xi Liu: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China
- Gangjun Liu: Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Chuanqing Zhou: Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Kun Yang: College of Quality and Technical Supervision, Hebei University, No. 2666 Qiyidong Road, Baoding 071000, China
- Andreas Maier: Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstrasse 3, 91058 Erlangen, Germany
- Qiushi Ren: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Yanye Lu: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China; Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstrasse 3, 91058 Erlangen, Germany
38
Qiu B, Huang Z, Liu X, Meng X, You Y, Liu G, Yang K, Maier A, Ren Q, Lu Y. Noise reduction in optical coherence tomography images using a deep neural network with perceptually-sensitive loss function. Biomed Opt Express 2020; 11:817-830. [PMID: 32133225 PMCID: PMC7041484 DOI: 10.1364/boe.379551] [Citation(s) in RCA: 58] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/04/2019] [Revised: 01/07/2020] [Accepted: 01/08/2020] [Indexed: 05/02/2023]
Abstract
Optical coherence tomography (OCT) is susceptible to coherent (speckle) noise, which degrades contrast and fine structural detail in OCT images, imposing significant limitations on the diagnostic capability of OCT. In this paper, we propose a novel OCT image denoising method using an end-to-end deep learning network with a perceptually-sensitive loss function. The method was validated on OCT images acquired from the eyes of healthy volunteers. The label images used to train and evaluate the denoising models were generated by averaging 50 registered B-scans acquired from the same region, with scans occurring in one direction. The results showed that the new approach outperforms other related denoising methods in preserving the detailed structure of retinal layers and in improving perceptual metrics aligned with human visual perception.
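One common way to make a denoising loss "perceptually sensitive" is to penalise distances between deep features of the denoised output and of the frame-averaged label, on top of a pixel-level term. The sketch below assumes VGG16 features for that term; the exact feature extractor and weights used by the authors are not specified here, so treat every name and weight as an assumption.

import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

# Frozen feature extractor (through relu3_3); an illustrative choice only.
vgg_features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def perceptual_denoising_loss(denoised, label, pixel_w=1.0, perc_w=0.1):
    # Pixel-level fidelity to the 50-frame-averaged ground truth.
    pixel = F.l1_loss(denoised, label)
    # Perceptual term on deep features; grayscale OCT is replicated to
    # the 3 channels VGG expects.
    f_d = vgg_features(denoised.repeat(1, 3, 1, 1))
    f_l = vgg_features(label.repeat(1, 3, 1, 1))
    perc = F.l1_loss(f_d, f_l)
    return pixel_w * pixel + perc_w * perc

# Toy usage with random stand-ins for a noisy output and its averaged label.
denoised = torch.rand(1, 1, 64, 64)
label = torch.rand(1, 1, 64, 64)
loss = perceptual_denoising_loss(denoised, label)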
Affiliation(s)
- Bin Qiu: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China
- Zhiyu Huang: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China
- Xi Liu: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China
- Xiangxi Meng: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Nuclear Medicine, Peking University Cancer Hospital & Institute, Beijing 100142, China
- Yunfei You: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China
- Gangjun Liu: Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Kun Yang: College of Quality and Technical Supervision, Hebei University, No. 2666 Qiyidong Road, Baoding 071000, China
- Andreas Maier: Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstrasse 3, 91058 Erlangen, Germany
- Qiushi Ren: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Institute of Biomedical Engineering, Shenzhen Bay Laboratory 5F, No. 9 Duxue Road, Nanshan District, Shenzhen 518071, China
- Yanye Lu: Department of Biomedical Engineering, College of Engineering, Peking University, No. 5 Yihe Yuan Road, Haidian District, Beijing 100871, China; Institute of Biomedical Engineering, Peking University Shenzhen Graduate School, No. 2199 Lishui Road, Nanshan District, Shenzhen 518055, China; Pattern Recognition Lab, Department of Computer Science, Friedrich-Alexander-University Erlangen-Nuremberg, Martensstrasse 3, 91058 Erlangen, Germany