1
Yue G, Zhang L, Du J, Zhou T, Zhou W, Lin W. Subjective and Objective Quality Assessment of Colonoscopy Videos. IEEE Transactions on Medical Imaging 2025; 44:841-854. PMID: 39283779. DOI: 10.1109/tmi.2024.3461737.
Abstract
Captured colonoscopy videos usually suffer from multiple real-world distortions, such as motion blur, low brightness, abnormal exposure, and object occlusion, which impede visual interpretation. However, existing works mainly investigate the impact of synthesized distortions, which differ greatly from real-world ones. This research carries out an in-depth study of colonoscopy Video Quality Assessment (VQA), advancing the topic with both subjective and objective solutions. First, we collect 1,000 colonoscopy videos exhibiting the visual quality degradations typical in practice and construct a multi-attribute VQA database. The quality of each video is annotated through subjective experiments along five distortion attributes (i.e., temporal-spatial visibility, brightness, specular reflection, stability, and utility), as well as from an overall perspective. Second, we propose a Distortion Attribute Reasoning Network (DARNet) for automatic VQA. DARNet includes two streams that extract features related to spatial and temporal distortions, respectively. It adaptively aggregates the attribute-related features through a multi-attribute association module to predict the quality score of each distortion attribute. Motivated by the observation that rating behaviors differ across attributes, a behavior-guided reasoning module then fuses the attribute-aware features to produce the overall quality. Experimental results on the constructed database show that DARNet correlates well with subjective ratings and outperforms nine state-of-the-art methods.
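As an illustration of the final fusion step, the per-attribute scores that DARNet predicts must be combined into an overall quality value. The sketch below is a deliberately simplified stand-in (a fixed weighted average over the paper's five attributes, with hypothetical scores and uniform weights), not the paper's learned behavior-guided reasoning module:

```python
# Conceptual sketch only -- NOT the authors' DARNet. Illustrates fusing
# per-attribute quality scores (range [0, 1]) into one overall score.
# Attribute names follow the paper; score values here are made up.

ATTRIBUTES = ["visibility", "brightness", "specular_reflection",
              "stability", "utility"]

def fuse_attribute_scores(scores, weights=None):
    """Weighted fusion of per-attribute quality scores into an overall score."""
    if weights is None:  # default: uniform weighting across attributes
        weights = {a: 1.0 / len(ATTRIBUTES) for a in ATTRIBUTES}
    total_w = sum(weights[a] for a in ATTRIBUTES)
    return sum(scores[a] * weights[a] for a in ATTRIBUTES) / total_w

# Hypothetical per-attribute predictions for one video
scores = {"visibility": 0.8, "brightness": 0.6, "specular_reflection": 0.9,
          "stability": 0.7, "utility": 0.75}
overall = fuse_attribute_scores(scores)  # uniform-weight mean of the five scores
```

In the paper the fusion weights are effectively learned per attribute from rating behavior; a fixed average is only the degenerate baseline of that idea.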
2
Wang KN, Wang H, Zhou GQ, Wang Y, Yang L, Chen Y, Li S. TSdetector: Temporal-Spatial self-correction collaborative learning for colonoscopy video detection. Med Image Anal 2025; 100:103384. PMID: 39579624. DOI: 10.1016/j.media.2024.103384.
Abstract
CNN-based object detection models that strike a balance between performance and speed have gradually been adopted for polyp detection. Nevertheless, accurately locating polyps in complex colonoscopy video scenes remains challenging because existing methods ignore two key issues: intra-sequence distribution heterogeneity and the precision-confidence discrepancy. To address these challenges, we propose a novel Temporal-Spatial self-correction detector (TSdetector), which integrates temporal-level consistency learning and spatial-level reliability learning to detect objects continuously. Technically, we first propose a global temporal-aware convolution that assembles preceding information to dynamically guide the current convolution kernel toward global features across sequences. In addition, we design a hierarchical queue integration mechanism that combines multi-temporal features through progressive accumulation, fully leveraging contextual consistency while retaining long-sequence-dependency features. Meanwhile, at the spatial level, we propose position-aware clustering to explore the spatial relationships among candidate boxes and adaptively recalibrate prediction confidence, thus efficiently eliminating redundant bounding boxes. Experimental results on three publicly available polyp video datasets show that TSdetector achieves the highest polyp detection rate and outperforms other state-of-the-art methods. The code is available at https://github.com/soleilssss/TSdetector.
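The hierarchical-queue idea, accumulating features from preceding frames to stabilize the current prediction, can be caricatured with scalars in place of feature tensors. This is a hedged sketch of progressive accumulation under assumed parameters (queue length and momentum are made-up values), not the TSdetector implementation:

```python
from collections import deque

# Illustrative sketch only -- not the TSdetector code. Shows progressive
# accumulation over a bounded queue of per-frame features; real features
# would be tensors, scalars keep the example self-contained.

class FeatureQueue:
    def __init__(self, maxlen=4, momentum=0.6):
        self.queue = deque(maxlen=maxlen)  # oldest frames drop out automatically
        self.momentum = momentum           # hypothetical blending coefficient

    def push(self, feat):
        """Enqueue the newest frame's feature."""
        self.queue.append(feat)

    def aggregate(self):
        """Fold the queued features oldest-to-newest with momentum blending,
        so every retained frame contributes to the current context."""
        acc = None
        for feat in self.queue:
            acc = feat if acc is None else (
                self.momentum * acc + (1 - self.momentum) * feat)
        return acc
```

A real detector would aggregate convolutional feature maps this way and feed the result to the detection head; the folding order and momentum control how fast old context decays.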
Affiliation(s)
- Kai-Ni Wang
  School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Haolin Wang
  School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Guang-Quan Zhou
  School of Biological Science and Medical Engineering, Southeast University, Nanjing, China; Jiangsu Key Laboratory of Biomaterials and Devices, Southeast University, Nanjing, China
- Ling Yang
  Institute of Medical Technology, Peking University Health Science Center, China
- Yang Chen
  Laboratory of Image Science and Technology, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration, Southeast University, Nanjing, China
- Shuo Li
  Department of Computer and Data Science and Department of Biomedical Engineering, Case Western Reserve University, USA
3
Tang Y, Lyu T, Jin H, Du Q, Wang J, Li Y, Li M, Chen Y, Zheng J. Domain adaptive noise reduction with iterative knowledge transfer and style generalization learning. Med Image Anal 2024; 98:103327. PMID: 39191093. DOI: 10.1016/j.media.2024.103327.
Abstract
Low-dose computed tomography (LDCT) denoising tasks face significant challenges in practical imaging scenarios. Supervised methods encounter difficulties in real-world settings because no paired data are available for training. Moreover, when applied to datasets with varying noise patterns, these methods may suffer decreased performance owing to the domain gap. Conversely, unsupervised methods do not require paired data and can be trained directly on real-world data, but they often perform worse than supervised methods. Addressing this issue requires leveraging the strengths of both. In this paper, we propose a novel domain adaptive noise reduction framework (DANRF), which integrates knowledge transfer and style generalization learning to effectively tackle the domain gap problem. Specifically, an iterative knowledge transfer method with knowledge distillation is employed to train the target model using unlabeled target data and a source model pre-trained on paired simulation data. Meanwhile, we introduce the mean teacher mechanism to update the source model, enabling it to adapt to the target domain. Furthermore, an iterative style generalization learning process is designed to enrich the style diversity of the training dataset. We evaluate the performance of our approach through experiments on multi-source datasets. The results demonstrate the feasibility and effectiveness of the proposed DANRF model in multi-source LDCT image processing tasks. Given its hybrid nature, which combines the advantages of supervised and unsupervised learning, and its ability to bridge domain gaps, our approach is well suited to improving practical low-dose CT imaging in clinical settings. Code for our proposed approach is publicly available at https://github.com/tyfeiii/DANRF.
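The mean teacher mechanism mentioned above is a standard exponential-moving-average (EMA) update and can be stated concisely. The decay value below is a common default, not one reported in the paper, and plain parameter dictionaries stand in for network weights:

```python
# Generic mean-teacher EMA update, shown as a sketch of the mechanism
# DANRF-style frameworks use to adapt the source (teacher) model.
# Dicts of named scalars stand in for actual network parameters.

def ema_update(teacher, student, decay=0.999):
    """In-place EMA: teacher <- decay * teacher + (1 - decay) * student."""
    for name, w_student in student.items():
        teacher[name] = decay * teacher[name] + (1 - decay) * w_student
    return teacher
```

Called once per training step, this lets the teacher track a smoothed trajectory of the student, which is what allows it to adapt gradually to the target domain without collapsing onto noisy individual updates.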
Affiliation(s)
- Yufei Tang
  School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Tianling Lyu
  Research Center of Augmented Intelligence, Zhejiang Lab, Hangzhou, 310000, China
- Haoyang Jin
  School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Qiang Du
  School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jiping Wang
  School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Yunxiang Li
  Nanovision Technology Co., Ltd., Beiqing Road, Haidian District, Beijing, 100094, China
- Ming Li
  School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Yang Chen
  Laboratory of Image Science and Technology, the School of Computer Science and Engineering, Southeast University, Nanjing, 210096, China
- Jian Zheng
  School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Medical Imaging Department, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; Shandong Laboratory of Advanced Biomaterials and Medical Devices in Weihai, Weihai, 264200, China
4
Garbaz A, Oukdach Y, Charfi S, El Ansari M, Koutti L, Salihoun M. MLFA-UNet: A multi-level feature assembly UNet for medical image segmentation. Methods 2024; 232:52-64. PMID: 39481818. DOI: 10.1016/j.ymeth.2024.10.010. Open access.
Abstract
Medical image segmentation is crucial for accurate diagnosis and treatment in medical image analysis. Among the various methods employed, fully convolutional networks (FCNs) have emerged as a prominent approach for segmenting medical images, with the U-Net architecture and its variants gaining widespread adoption in this domain. This paper introduces MLFA-UNet, an innovative architectural framework aimed at advancing medical image segmentation. MLFA-UNet adopts a U-shaped architecture and integrates two pivotal modules: multi-level feature assembly (MLFA) and multi-scale information attention (MSIA), complemented by a pixel-vanishing (PV) attention mechanism. These modules synergistically enhance the segmentation process, fostering both robustness and precision. MLFA operates in both the network encoder and decoder, facilitating the extraction of local information crucial for accurately segmenting lesions. Furthermore, the MSIA module at the bottleneck replaces stacked modules, thereby expanding the receptive field and augmenting feature diversity, fortified by the PV attention mechanism. Together, these mechanisms boost segmentation performance by capturing both detailed local features and broader contextual information, improving accuracy and resilience in identifying lesions. To assess the versatility of the network, we evaluated MLFA-UNet across a range of medical image segmentation datasets spanning diverse imaging modalities, including wireless capsule endoscopy (WCE), colonoscopy, and dermoscopic images. Our results consistently demonstrate that MLFA-UNet outperforms state-of-the-art algorithms, achieving Dice coefficients of 91.42%, 82.43%, 90.8%, and 88.68% on the MICCAI 2017 (Red Lesion), ISIC 2017, PH2, and CVC-ClinicalDB datasets, respectively.
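The Dice coefficients reported above are the standard segmentation overlap metric and can be computed in a few lines; this sketch is framework-independent and uses nested Python lists of 0/1 values rather than tensors:

```python
# Dice coefficient for binary masks: 2|A ∩ B| / (|A| + |B|).
# Framework-independent sketch; pred and target are nested lists of 0/1.

def dice_coefficient(pred, target, eps=1e-7):
    """Flatten both masks and compute Dice; eps avoids division by zero
    when both masks are empty."""
    p = [v for row in pred for v in row]
    t = [v for row in target for v in row]
    intersection = sum(a * b for a, b in zip(p, t))
    return (2.0 * intersection + eps) / (sum(p) + sum(t) + eps)
```

Identical masks score 1.0 and disjoint masks score approximately 0, so the percentages quoted in the abstract correspond to Dice values of 0.9142, 0.8243, 0.908, and 0.8868.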
Affiliation(s)
- Anass Garbaz
  Laboratory of Computer Systems and Vision, Faculty of Science, Ibn Zohr University, Agadir, 80000, Morocco
- Yassine Oukdach
  Laboratory of Computer Systems and Vision, Faculty of Science, Ibn Zohr University, Agadir, 80000, Morocco
- Said Charfi
  Laboratory of Computer Systems and Vision, Faculty of Science, Ibn Zohr University, Agadir, 80000, Morocco
- Mohamed El Ansari
  Informatics and Applications Laboratory, Department of Computer Science, Faculty of Sciences, Moulay Ismail University, Meknes, 50000, Morocco
- Lahcen Koutti
  Laboratory of Computer Systems and Vision, Faculty of Science, Ibn Zohr University, Agadir, 80000, Morocco
- Mouna Salihoun
  Faculty of Medicine and Pharmacy, Mohammed V University, Rabat, 10100, Morocco
5
Liu X, Li W, Yuan Y. Decoupled Unbiased Teacher for Source-Free Domain Adaptive Medical Object Detection. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:7287-7298. PMID: 37224362. DOI: 10.1109/tnnls.2023.3272389.
Abstract
Source-free domain adaptation (SFDA) aims to adapt a lightweight pretrained source model to unlabeled new domains without the original labeled source data. Because of patient privacy and storage consumption concerns, SFDA is a more practical setting for building a generalized model in medical object detection. Existing methods usually apply the vanilla pseudo-labeling technique while neglecting the bias issues in SFDA, leading to limited adaptation performance. To this end, we systematically analyze the biases in SFDA medical object detection by constructing a structural causal model (SCM) and propose an unbiased SFDA framework dubbed decoupled unbiased teacher (DUT). Based on the SCM, we derive that the confounding effect causes biases in the SFDA medical object detection task at the sample, feature, and prediction levels. To prevent the model from emphasizing easy object patterns in the biased dataset, a dual invariance assessment (DIA) strategy is devised to generate counterfactual synthetics based on samples that are unbiased and invariant from both discrimination and semantic perspectives. To alleviate overfitting to domain-specific features in SFDA, we design a cross-domain feature intervention (CFI) module that explicitly deconfounds the domain-specific prior with feature intervention to obtain unbiased features. In addition, we establish a correspondence supervision prioritization (CSP) strategy that addresses the prediction bias caused by coarse pseudo-labels through sample prioritizing and robust box supervision. Through extensive experiments on multiple SFDA medical object detection scenarios, DUT yields superior performance over previous state-of-the-art unsupervised domain adaptation (UDA) and SFDA counterparts, demonstrating the significance of addressing the bias issues in this challenging task. The code is available at https://github.com/CUHK-AIM-Group/Decoupled-Unbiased-Teacher.
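While DUT's correspondence supervision prioritization is considerably more elaborate, its core ingredient, keeping only reliable pseudo-boxes and supervising with the most confident ones first, reduces to something like the following sketch (the detection tuple format and the threshold are hypothetical, not taken from the paper):

```python
# Hedged sketch of confidence-based pseudo-label selection, the general
# mechanism underlying prioritization strategies like DUT's CSP (which is
# more involved). A detection is a (box, score) tuple with a made-up format.

def select_pseudo_labels(detections, threshold=0.7):
    """Drop low-confidence detections, then order the survivors so the
    most reliable pseudo-boxes supervise the student first."""
    kept = [d for d in detections if d[1] >= threshold]
    return sorted(kept, key=lambda d: d[1], reverse=True)
```

The threshold trades pseudo-label precision against recall; bias-aware methods like DUT exist precisely because naive confidence thresholding inherits the biases of the coarse pseudo-labels.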
6
Fang Y, Yap PT, Lin W, Zhu H, Liu M. Source-free unsupervised domain adaptation: A survey. Neural Netw 2024; 174:106230. PMID: 38490115. PMCID: PMC11015964. DOI: 10.1016/j.neunet.2024.106230.
Abstract
Unsupervised domain adaptation (UDA) via deep learning has attracted considerable attention for tackling domain-shift problems caused by distribution discrepancy across different domains. Existing UDA approaches depend heavily on the accessibility of source domain data, which is usually limited in practical scenarios due to privacy protection, data storage and transmission costs, and computation burden. To tackle this issue, many source-free unsupervised domain adaptation (SFUDA) methods have been proposed recently, which perform knowledge transfer from a pre-trained source model to the unlabeled target domain with the source data inaccessible. A comprehensive review of these works on SFUDA is of great significance. In this paper, we provide a timely and systematic literature review of existing SFUDA approaches from a technical perspective. Specifically, we categorize current SFUDA studies into two groups, i.e., white-box SFUDA and black-box SFUDA, and further divide them into finer subcategories based on the learning strategies they use. We also investigate the challenges of the methods in each subcategory, discuss the advantages and disadvantages of white-box and black-box SFUDA methods, catalog the commonly used benchmark datasets, and summarize popular techniques for improving the generalizability of models learned without source data. We finally discuss several promising future directions in this field.
Affiliation(s)
- Yuqi Fang
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
| | - Pew-Thian Yap
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
| | - Weili Lin
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
| | - Hongtu Zhu
- Department of Biostatistics and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States
| | - Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599, United States.
| |
7
Li H, Lin Z, Qiu Z, Li Z, Niu K, Guo N, Fu H, Hu Y, Liu J. Enhancing and Adapting in the Clinic: Source-Free Unsupervised Domain Adaptation for Medical Image Enhancement. IEEE Transactions on Medical Imaging 2024; 43:1323-1336. PMID: 38015687. DOI: 10.1109/tmi.2023.3335651.
Abstract
Medical imaging provides many valuable clues involving anatomical structure and pathological characteristics. However, image degradation is a common issue in clinical practice, which can adversely impact observation and diagnosis by physicians and algorithms. Although extensive enhancement models have been developed, these models require thorough pre-training before deployment and fail to exploit the potential value of inference data after deployment. In this paper, we propose an algorithm for source-free unsupervised domain adaptive medical image enhancement (SAME), which adapts and optimizes enhancement models using test data in the inference phase. A structure-preserving enhancement network is first constructed to learn a robust source model from synthesized training data. Then a teacher-student model is initialized with the source model and conducts source-free unsupervised domain adaptation (SFUDA) by knowledge distillation with the test data. Additionally, a pseudo-label picker is developed to boost the knowledge distillation of enhancement tasks. Experiments were conducted on ten datasets from three medical image modalities to validate the advantage of the proposed algorithm, and setting analyses and ablation studies were also carried out to interpret the effectiveness of SAME. The remarkable enhancement performance and benefits for downstream tasks demonstrate the potential and generalizability of SAME. The code is available at https://github.com/liamheng/Annotation-free-Medical-Image-Enhancement.
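A pseudo-label picker of the kind described can be caricatured as an agreement test between teacher and student outputs: only targets the two models roughly agree on are kept for distillation. This is an illustrative stand-in for SAME's picker (which is more sophisticated), with scalars in place of enhanced images and a made-up gap threshold:

```python
# Conceptual sketch only -- not SAME's actual pseudo-label picker.
# Keeps teacher outputs as distillation targets when teacher and student
# predictions agree within a tolerance (an assumed reliability criterion).

def pick_pseudo_labels(teacher_out, student_out, max_gap=0.2):
    """Return (index, teacher_value) pairs judged reliable enough to
    distil from, i.e. where |teacher - student| <= max_gap."""
    return [(i, t) for i, (t, s) in enumerate(zip(teacher_out, student_out))
            if abs(t - s) <= max_gap]
```

In a teacher-student loop, the student's loss would then be computed only against the picked targets, so unreliable teacher outputs never drive the adaptation.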
8
Kumari S, Singh P. Deep learning for unsupervised domain adaptation in medical imaging: Recent advancements and future perspectives. Comput Biol Med 2024; 170:107912. PMID: 38219643. DOI: 10.1016/j.compbiomed.2023.107912.
Abstract
Deep learning has demonstrated remarkable performance across various tasks in medical imaging. However, these approaches primarily focus on supervised learning, assuming that the training and testing data are drawn from the same distribution. Unfortunately, this assumption may not always hold true in practice. To address these issues, unsupervised domain adaptation (UDA) techniques have been developed to transfer knowledge from a labeled domain to a related but unlabeled domain. In recent years, significant advancements have been made in UDA, resulting in a wide range of methodologies, including feature alignment, image translation, self-supervision, and disentangled representation methods, among others. In this paper, we provide a comprehensive literature review of recent deep UDA approaches in medical imaging from a technical perspective. Specifically, we categorize current UDA research in medical imaging into six groups and further divide them into finer subcategories based on the different tasks they perform. We also discuss the respective datasets used in the studies to assess the divergence between the different domains. Finally, we discuss emerging areas and provide insights and discussions on future research directions to conclude this survey.
Affiliation(s)
- Suruchi Kumari
  Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
- Pravendra Singh
  Department of Computer Science and Engineering, Indian Institute of Technology Roorkee, India
9
Khan ZF, Ramzan M, Raza M, Khan MA, Alasiry A, Marzougui M, Shin J. Real-Time Polyp Detection From Endoscopic Images Using YOLOv8 With YOLO-Score Metrics for Enhanced Suitability Assessment. IEEE Access 2024; 12:176346-176362. DOI: 10.1109/access.2024.3505619. Open access.
Affiliation(s)
- Zahid Farooq Khan
  Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Ramzan
  Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Mudassar Raza
  Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, Pakistan
- Muhammad Attique Khan
  Department of AI, College of Computer Engineering and Science, Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia
- Areej Alasiry
  College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Mehrez Marzougui
  College of Computer Science, King Khalid University, Abha, Saudi Arabia
- Jungpil Shin
  School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, Japan
10
Wang Z, Zhu H, Huang B, Wang Z, Lu W, Chen N, Wang Y. M-MSSEU: source-free domain adaptation for multi-modal stroke lesion segmentation using shadowed sets and evidential uncertainty. Health Inf Sci Syst 2023; 11:46. PMID: 37780536. PMCID: PMC10539264. DOI: 10.1007/s13755-023-00247-6. Open access.
Abstract
Because source domain data are unavailable in unsupervised domain adaptation, there has been an increasing number of studies on source-free domain adaptation (SFDA) in recent years. To better solve the SFDA problem and effectively leverage the multi-modal information in medical images, this paper presents a novel SFDA method for multi-modal stroke lesion segmentation that uses evidential deep learning in place of a conventional convolutional neural network. Specifically, for multi-modal stroke images, we design a multi-modal opinion fusion module that uses Dempster-Shafer evidence theory for decision fusion across modalities. For the SFDA problem itself, we use pseudo-label learning, obtaining pseudo labels from the pre-trained source model to perform the adaptation process. To address the unreliability of pseudo labels caused by domain shift, we propose a pseudo-label filtering scheme based on shadowed sets theory and a pseudo-label refining scheme based on evidential uncertainty. These two schemes automatically extract the unreliable parts of pseudo labels and jointly improve their quality at low computational cost. Experiments on two multi-modal stroke lesion datasets demonstrate the superiority of our method over other state-of-the-art SFDA methods.
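The Dempster-Shafer decision-fusion step can be made concrete for a minimal frame of discernment {lesion, background} plus total uncertainty Θ. The mass values below are toy numbers; in the paper they come from the evidential deep learning outputs of each modality:

```python
# Minimal Dempster-Shafer fusion sketch for two modalities over the frame
# {lesion, background}, with "uncertain" standing for the full frame Θ.
# Illustrates the decision-fusion idea only; not the paper's implementation.

def ds_combine(m1, m2):
    """Dempster's rule for two mass functions over
    {'lesion', 'background', 'uncertain'}."""
    hyps = ["lesion", "background", "uncertain"]
    combined = {h: 0.0 for h in hyps}
    conflict = 0.0
    for a in hyps:
        for b in hyps:
            product = m1[a] * m2[b]
            if a == b:
                combined[a] += product          # agreement on the same hypothesis
            elif "uncertain" in (a, b):
                target = a if b == "uncertain" else b
                combined[target] += product     # Θ refines to the specific hypothesis
            else:
                conflict += product             # lesion vs background: conflicting mass
    norm = 1.0 - conflict                       # renormalize away the conflict
    return {h: v / norm for h, v in combined.items()}
```

Fusing two modalities that both lean toward "lesion" concentrates mass on "lesion" while shrinking the residual uncertainty, which is exactly the behavior the opinion fusion module exploits.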
Affiliation(s)
- Zhicheng Wang
  School of Information Science and Engineering, East China University of Science and Technology, No.130 Meilong Road, Shanghai, 200237 China
- Hongqing Zhu
  School of Information Science and Engineering, East China University of Science and Technology, No.130 Meilong Road, Shanghai, 200237 China
- Bingcang Huang
  Department of Radiology, Gongli Hospital of Shanghai Pudong New Area, Shanghai, 200135 China
- Ziying Wang
  School of Information Science and Engineering, East China University of Science and Technology, No.130 Meilong Road, Shanghai, 200237 China
- Weiping Lu
  Department of Radiology, Gongli Hospital of Shanghai Pudong New Area, Shanghai, 200135 China
- Ning Chen
  School of Information Science and Engineering, East China University of Science and Technology, No.130 Meilong Road, Shanghai, 200237 China
- Ying Wang
  Shanghai Health Commission Key Lab of Artificial Intelligence (AI)-Based Management of Inflammation and Chronic Diseases, Sino-French Cooperative Central Lab, Gongli Hospital of Shanghai Pudong New Area, Shanghai, 200135 China