1. Heinrich A, Hubig M, Mall G, Teichgräber U. Computer vision-based personal identification using 2D maximum intensity projection CT images. Eur Radiol 2025. doi: 10.1007/s00330-025-11630-0. PMID: 40287870.
Abstract
OBJECTIVES: Computer vision (CV) mimics human vision, enabling the automatic comparison of radiological images from recent examinations with a vast image database for unique identification. This method offers significant potential in emergencies involving unknown individuals. This study assesses whether maximum intensity projection (MIP) images from thoracic computed tomography (CT) examinations are suitable for automated CV-based personal identification.
METHODS: The study analyzed 12,465 native thoracic CT examinations from 8177 individuals, focusing on MIP images to assess their potential for CV-based personal identification in 300 cases. CV automatically identifies and describes features in images, which are then matched to reference images. The number of matching points was used as an indicator of identification accuracy.
RESULTS: The identification rate was 98.67% (296/300) at rank 1 and 99.67% (299/300) at rank 10, among 8177 potential identities. Matching points were higher for images of the same individual (7.43 ± 5.83%) than for different individuals (0.16 ± 0.14%), with 100% representing the maximum possible matching points. Reliable matching points were mainly found in the thoracic skeleton, sternum, and spine. Challenges arose when the patient was curved on the table or when medical equipment was present in the image.
CONCLUSION: Unambiguous identification based on MIP images from thoracic CT examinations is highly reliable, even for large CV databases. The method is applicable to various 2D reconstructions, provided anatomical structures are comparably represented. Radiology offers extensive reference images for CV databases, enhancing automated personal identification in emergencies.
KEY POINTS: Question: Computer vision-based personal identification holds great potential, but it remains unclear whether maximum intensity projection images from thoracic CT scans are suitable for this purpose. Findings: Maximum intensity projection images of the thorax are highly individual, with computer vision-based identification achieving nearly 100% rank-1 accuracy across 8177 potential identities. Clinical relevance: Radiology holds a vast collection of reference images for a computer vision database, enabling automated personal identification in emergency examinations. This improves patient care and communication with relatives by providing access to medical history.
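The abstract describes a generic feature detection-and-matching pipeline at a high level only. As a rough illustration (the specific detector, descriptor, matcher, and threshold below are assumptions, not the study's published method), counting matching points between a query MIP image and one reference image could look like this in OpenCV:

```python
# Hedged sketch: generic keypoint matching between two 2D projection images.
# ORB and a brute-force Hamming matcher are illustrative choices, not necessarily
# the ones used in the study.
import cv2

def count_matching_points(query_path: str, reference_path: str) -> int:
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    reference = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=2000)           # detect and describe local features
    kp_q, des_q = orb.detectAndCompute(query, None)
    kp_r, des_r = orb.detectAndCompute(reference, None)
    if des_q is None or des_r is None:
        return 0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_q, des_r)           # one-to-one candidate correspondences
    good = [m for m in matches if m.distance < 40]  # simple, arbitrary distance threshold
    return len(good)
```

Ranking identities would then amount to sorting all reference images in the database by this count and reporting the rank of the correct individual.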
Affiliation(s)
- Andreas Heinrich
- Department of Radiology, Jena University Hospital-Friedrich Schiller University, Jena, Germany.
- Michael Hubig
- Institute of Forensic Medicine, Jena University Hospital-Friedrich Schiller University, Jena, Germany
- Gita Mall
- Institute of Forensic Medicine, Jena University Hospital-Friedrich Schiller University, Jena, Germany
- Ulf Teichgräber
- Department of Radiology, Jena University Hospital-Friedrich Schiller University, Jena, Germany
2. Godinez F, Mingels C, Bayerlein R, Mehadji B, Nardo L. Total Body PET/CT: Future Aspects. Semin Nucl Med 2025;55:107-115. doi: 10.1053/j.semnuclmed.2024.10.011. PMID: 39542814. PMCID: PMC11977673.
Abstract
Total-body (TB) positron emission tomography (PET) scanners are classified by their axial field of view (FOV). Long axial field of view (LAFOV) PET scanners can capture images from eyes to thighs in a single bed position, covering all major organs with an axial FOV of about 100 cm. However, they often miss essential areas such as the distal lower extremities, limiting their use beyond oncology. The term TB-PET is reserved for scanners with an axial FOV of 180 cm or longer, allowing coverage of most of the body. LAFOV PET technology emerged about 40 years ago but gained traction recently due to advancements in data acquisition and cost. Early research highlighted its benefits, leading to the first FDA-cleared TB-PET/CT device in 2019 at UC Davis. Since then, various LAFOV scanners with enhanced capabilities have been developed, improving image quality, reducing acquisition times, and allowing for dynamic imaging. The uEXPLORER, the first LAFOV scanner, has a 194 cm active PET AFOV, far exceeding traditional scanners. The Panorama GS and others have followed suit in optimizing FOVs. Despite slow adoption due to the COVID pandemic and costs, over 50 LAFOV scanners are now in use globally. This review explores the future of LAFOV technology based on recent literature and experiences, covering its clinical applications, implications for radiation oncology, challenges in managing PET data, and expectations for technological advancements.
Affiliation(s)
- Felipe Godinez
- Department of Radiology, University of California Davis, Sacramento, CA.
- Clemens Mingels
- Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Switzerland
- Reimund Bayerlein
- Department of Biomedical Engineering, University of California Davis, Davis, CA
- Brahim Mehadji
- Department of Radiology, University of California Davis, Sacramento, CA
- Lorenzo Nardo
- Department of Radiology, University of California Davis, Sacramento, CA
3. Lindholz M, Ruppel R, Schulze-Weddige S, Baumgärtner GL, Schobert I, Panten A, Schmidt R, Auer TA, Nawabi J, Haack AM, Stepansky L, Poggi L, Hosch R, Hamm CA, Penzkofer T. Analyzing the TotalSegmentator for facial feature removal in head CT scans. Radiography (Lond) 2025;31:372-378. doi: 10.1016/j.radi.2024.12.018. PMID: 39754865.
Abstract
BACKGROUND: Facial recognition technology in medical imaging, particularly with head scans, poses privacy risks due to identifiable facial features. This study evaluates the use of facial recognition software in identifying facial features from head CT scans and explores a defacing pipeline using TotalSegmentator to reduce re-identification risks while preserving data integrity for research.
METHODS: A total of 1404 high-quality renderings from the UCLH EIT Stroke dataset, with and without defacing, were analysed. The performance of defacing with the face mask created by TotalSegmentator was compared to a state-of-the-art CT defacing algorithm. Face detection was performed using deep learning models. The cosine similarity between facial embeddings was compared for intra- and inter-patient image pairs. A support vector machine was trained on cosine similarity values to assess defacing performance, determining whether two renderings came from the same patient. This analysis was conducted on defaced and non-defaced images using 5-fold cross-validation.
RESULTS: Faces were detected in 76.5% of non-defaced images. Intra-patient image pairs exhibited a median cosine similarity of 0.65 (IQR: 0.47-0.80), compared to 0.50 (IQR: 0.39-0.62) for inter-patient pairs. A binary classifier performed moderately on non-defaced images, achieving a ROC-AUC of 0.69 (SD = 0.01) and an accuracy of 0.65 (SD = 0.01) in distinguishing whether two renderings belonged to the same or different individuals. Following defacing, performance declined markedly: defacing with the TotalSegmentator decreased the ROC-AUC to 0.55 (SD = 0.02) and the accuracy to 0.56 (SD = 0.01), whereas the CTA-DEFACE algorithm brought performance down to a ROC-AUC of 0.60 (SD = 0.02) and an accuracy of 0.59 (SD = 0.01). These results demonstrate the effectiveness of defacing algorithms in mitigating re-identification risks, with the TotalSegmentator providing slightly superior privacy protection.
CONCLUSION: Facial recognition software can identify facial features from partial and complete head CT scan renderings. However, using the TotalSegmentator to deface images reduces re-identification risks to a near-chance level. We offer code to implement this privacy-preserving pipeline.
IMPLICATIONS FOR PRACTICE: Utilizing the TotalSegmentator framework, the proposed pipeline efficiently removes facial features from CT images, making it well suited for multi-site research and data sharing. It is a useful tool for radiographers and radiologists who must comply with medico-legal requirements necessitating the removal of facial features.
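The evaluation step described in the methods, training a classifier on cosine similarities between face embeddings under 5-fold cross-validation, can be sketched as follows. This is an illustrative reconstruction, not the authors' released code; the embedding arrays and variable names are assumptions (embeddings would come from a face-recognition network applied to pairs of renderings).

```python
# Hedged sketch: quantify re-identifiability from cosine similarities of face embeddings.
# `embeddings_a`, `embeddings_b` hold one embedding per rendering in each pair;
# `same_patient` is 1 when both renderings belong to the same individual.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pairwise_cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)                      # one similarity value per pair

def evaluate_defacing(embeddings_a, embeddings_b, same_patient):
    X = pairwise_cosine_similarity(embeddings_a, embeddings_b).reshape(-1, 1)
    y = np.asarray(same_patient)
    clf = SVC(kernel="linear")
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
    return auc.mean(), acc.mean()
```

A ROC-AUC near 0.5 on defaced renderings, as reported for the TotalSegmentator-based pipeline, indicates that re-identification is close to chance level.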
Affiliation(s)
- M Lindholz
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany.
- R Ruppel
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- S Schulze-Weddige
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- G L Baumgärtner
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- I Schobert
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- A Panten
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- R Schmidt
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- T A Auer
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany; Berlin Institute of Health, Berlin, Germany
- J Nawabi
- Department of Neuroradiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- A-M Haack
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany
- L Stepansky
- Institute of Radiology, University Hospital Erlangen, Erlangen, Germany; University of Erlangen-Nuremberg, Erlangen, Germany
- L Poggi
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- R Hosch
- Institute for Artificial Intelligence in Medicine, University Hospital Essen, Essen, Germany
- C A Hamm
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany; Berlin Institute of Health, Berlin, Germany
- T Penzkofer
- Department of Radiology, Charité Universitätsmedizin Berlin, Berlin, Germany; Berlin Institute of Health, Berlin, Germany
4. Alsaigh R, Mehmood R, Katib I, Liang X, Alshanqiti A, Corchado JM, See S. Harmonizing AI governance regulations and neuroinformatics: perspectives on privacy and data sharing. Front Neuroinform 2024;18:1472653. doi: 10.3389/fninf.2024.1472653. PMID: 39741922. PMCID: PMC11685213.
Affiliation(s)
- Roba Alsaigh
- Department of Computer Science, Faculty of Computing and Information Technology (FCIT), King Abdulaziz University, Jeddah, Saudi Arabia
- Rashid Mehmood
- Faculty of Computer and Information Systems, Islamic University of Madinah, Madinah, Saudi Arabia
- Iyad Katib
- Department of Computer Science, Faculty of Computing and Information Technology (FCIT), King Abdulaziz University, Jeddah, Saudi Arabia
- Xiaohui Liang
- Department of Computer Science, University of Massachusetts, Boston, MA, United States
- Abdullah Alshanqiti
- Faculty of Computer and Information Systems, Islamic University of Madinah, Madinah, Saudi Arabia
- Juan M. Corchado
- BISITE Research Group, University of Salamanca, Salamanca, Spain
- Air Institute, IoT Digital Innovation Hub, Salamanca, Spain
- Department of Electronics, Information and Communication, Faculty of Engineering, Osaka Institute of Technology, Osaka, Japan
- Simon See
- NVIDIA AI Technology Center, NVIDIA Corporation, Santa Clara, CA, United States
5. Mahmutoglu MA, Rastogi A, Schell M, Foltyn-Dumitru M, Baumgartner M, Maier-Hein KH, Deike-Hofmann K, Radbruch A, Bendszus M, Brugnara G, Vollmuth P. Deep learning-based defacing tool for CT angiography: CTA-DEFACE. Eur Radiol Exp 2024;8:111. doi: 10.1186/s41747-024-00510-9. PMID: 39382818. PMCID: PMC11465008.
Abstract
The growing use of artificial neural network (ANN) tools for computed tomography angiography (CTA) data analysis underscores the necessity for elevated data protection measures. We aimed to establish an automated defacing pipeline for CTA data. In this retrospective study, CTA data from multi-institutional cohorts were utilized to annotate facemasks (n = 100) and train an ANN model, subsequently tested on an external institution's dataset (n = 50) and compared to a publicly available defacing algorithm. Face detection (MTCNN) and verification (FaceNet) networks were applied to measure the similarity between the original and defaced CTA images. Dice similarity coefficient (DSC), face detection probability, and face similarity measures were calculated to evaluate model performance. The CTA-DEFACE model effectively segmented soft face tissue in CTA data, achieving a DSC of 0.94 ± 0.02 (mean ± standard deviation) on the test set. Our model was benchmarked against a publicly available defacing algorithm. After applying face detection and verification networks, our model showed substantially reduced face detection probability (p < 0.001) and similarity to the original CTA image (p < 0.001). The CTA-DEFACE model enabled robust and precise defacing of CTA data. The trained network is publicly accessible at www.github.com/neuroAI-HD/CTA-DEFACE.
RELEVANCE STATEMENT: The ANN model CTA-DEFACE, developed for automatic defacing of CT angiography images, achieves significantly lower face detection probabilities and greater dissimilarity from the original images compared to a publicly available model. The algorithm has been externally validated and is publicly accessible.
KEY POINTS: The developed ANN model (CTA-DEFACE) automatically generates facemasks for CT angiography images. CTA-DEFACE offers superior deidentification capabilities compared to a publicly available model. By means of graphics processing unit optimization, our model ensures rapid processing of medical images. Our model underwent external validation, underscoring its reliability for real-world application.
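The Dice similarity coefficient used to score the predicted facemasks is a standard overlap measure between a predicted and a reference segmentation. A minimal NumPy version (illustrative only; array names are hypothetical and this is not the CTA-DEFACE repository code) is:

```python
# Hedged sketch: Dice similarity coefficient between a predicted and a reference facemask.
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray) -> float:
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denominator = pred.sum() + ref.sum()
    if denominator == 0:            # both masks empty: treat overlap as perfect
        return 1.0
    return 2.0 * intersection / denominator
```

A DSC of 0.94, as reported on the test set, means the predicted soft-tissue facemask overlaps the manual annotation almost completely.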
Affiliation(s)
- Mustafa Ahmed Mahmutoglu
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany.
- Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany.
- Aditya Rastogi
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Marianne Schell
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Martha Foltyn-Dumitru
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Michael Baumgartner
- Division for Medical Image Computing, German Cancer Research Center, Heidelberg, Germany
- Helmholtz Imaging, Heidelberg, Germany
- Faculty of Mathematics and Computer Science, Heidelberg University, Heidelberg, Germany
- Katerina Deike-Hofmann
- Department of Neuroradiology, Bonn University Hospital, Bonn, Germany
- Clinical Neuroimaging Group, German Center for Neurodegenerative Diseases, DZNE, Bonn, Germany
- Alexander Radbruch
- Department of Neuroradiology, Bonn University Hospital, Bonn, Germany
- Clinical Neuroimaging Group, German Center for Neurodegenerative Diseases, DZNE, Bonn, Germany
- Martin Bendszus
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Gianluca Brugnara
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
- Philipp Vollmuth
- Department of Neuroradiology, Heidelberg University Hospital, Heidelberg, Germany
- Division for Computational Neuroimaging, Heidelberg University Hospital, Heidelberg, Germany
6. Bou Hanna E, Partarrieu S, Berenbaum A, Allassonnière S, Besson FL. Exploring de-anonymization risks in PET imaging: Insights from a comprehensive analysis of 853 patient scans. Sci Data 2024;11:932. doi: 10.1038/s41597-024-03800-4. PMID: 39198445. PMCID: PMC11358492.
Abstract
Due to their high resolution, anonymized CT scans can be re-identified using face recognition tools. However, little is known regarding PET de-anonymization because of its lower resolution. In this study, we analysed PET/CT scans of 853 patients from a TCIA-restricted dataset (AutoPET). First, we built denoised 2D morphological reconstructions of both PET and CT scans, and then we determined how frequently a PET reconstruction could be matched to the correct CT reconstruction with no other metadata. Using the CT morphological reconstructions as ground truth allows us to frame the problem as a face recognition problem and to quantify our performance using traditional metrics (top-k accuracies) without any use of patient pictures. Using our denoised PET 2D reconstructions, we achieved 72% top-10 accuracy after the realignment of all CTs in the same reference frame, and 71% top-10 accuracy after realignment and mixing within a larger face dataset of 10,168 pictures. This highlights the need to consider face identification issues when dealing with PET imaging data.
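The top-k matching metric reported here can be illustrated with a small sketch: given a similarity matrix between PET-derived reconstructions and CT-derived reference reconstructions (how those similarities are computed is the paper's contribution and is not reproduced here), top-k accuracy counts how often the correct CT appears among the k best matches. The layout below, with row i's correct match in column i, is an assumption for illustration.

```python
# Hedged sketch: top-k accuracy from an (n_pet x n_ct) similarity matrix,
# assuming row i's correct counterpart is column i (one CT reconstruction per patient).
import numpy as np

def top_k_accuracy(similarity: np.ndarray, k: int = 10) -> float:
    n = similarity.shape[0]
    hits = 0
    for i in range(n):
        ranked = np.argsort(similarity[i])[::-1]   # best-matching CT first
        if i in ranked[:k]:
            hits += 1
    return hits / n
```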
Affiliation(s)
- Arnaud Berenbaum
- Université Paris-Saclay, Commissariat à l'énergie atomique et aux énergies alternatives (CEA), Centre National de la Recherche Scientifique (CNRS), Inserm, BioMaps, Orsay, France
- Stéphanie Allassonnière
- CRC, HeKA, Parisanté Campus, Université Paris Cité, Inria, Inserm, Sorbonne Université, Paris, France
- Florent L Besson
- Université Paris-Saclay, Commissariat à l'énergie atomique et aux énergies alternatives (CEA), Centre National de la Recherche Scientifique (CNRS), Inserm, BioMaps, Orsay, France.
- Department of Nuclear Medicine-Molecular Imaging, Hôpitaux Universitaires Paris-Saclay, Assistance Publique-Hôpitaux de Paris, DMU SMART IMAGING, CHU Bicêtre, Le Kremlin-Bicêtre, France.
- Université Paris-Saclay, School of Medicine, Le Kremlin-Bicêtre, France.
7. Bayerlein R, Swarnakar V, Selfridge A, Spencer BA, Nardo L, Badawi RD. Cloud-based serverless computing enables accelerated Monte Carlo simulations for nuclear medicine imaging. Biomed Phys Eng Express 2024;10. doi: 10.1088/2057-1976/ad5847. PMID: 38876087. PMCID: PMC11254166.
Abstract
Objective: This study investigates the potential of cloud-based serverless computing to accelerate Monte Carlo (MC) simulations for nuclear medicine imaging tasks. MC simulations can pose a high computational burden, even when executed on modern multi-core computing servers. Cloud computing allows simulation tasks to be highly parallelized and considerably accelerated.
Approach: We investigate the computational performance of a cloud-based serverless MC simulation of radioactive decays for positron emission tomography imaging using the Amazon Web Services (AWS) Lambda serverless computing platform, for the first time in the scientific literature. We compare the computational performance of AWS with that of a modern on-premises multi-thread reconstruction server by measuring execution times for between 10⁵ and 2·10¹⁰ simulated decays. We deployed two popular MC simulation frameworks, SimSET and GATE, within the AWS computing environment. Containerized application images were used as the basis for an AWS Lambda function, and local (non-cloud) scripts were used to orchestrate the deployment of simulations. The task was broken down into smaller parallel runs launched on concurrently running AWS Lambda instances, and the results were post-processed and downloaded via the Simple Storage Service.
Main results: Our implementation of cloud-based MC simulations with SimSET outperforms local server-based computations by more than an order of magnitude. However, the GATE implementation creates more and larger output files and reveals that the internet connection speed can become the primary bottleneck for data transfers. Simulating 10⁹ decays using SimSET is possible within 5 min and accrues computation costs of about $10 on AWS, whereas GATE would have to run in batches for more than 100 min at considerably higher cost.
Significance: Adopting a cloud-based serverless computing architecture in medical imaging research facilities can considerably improve processing times and overall workflow efficiency, with future research exploring additional enhancements through optimized configurations and computational methods.
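The orchestration described in the approach, splitting a large simulation into many small runs and launching them on concurrently running Lambda instances, might be sketched as below. The function name, payload fields, chunk size, and bucket name are hypothetical placeholders; the actual deployment uses containerized SimSET/GATE images and is not reproduced here.

```python
# Hedged sketch: fan out a Monte Carlo workload across concurrent AWS Lambda invocations.
# "mc-simulation-worker" and the payload schema are hypothetical, not the paper's code.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def launch_chunk(chunk_id: int, decays_per_chunk: int) -> int:
    response = lambda_client.invoke(
        FunctionName="mc-simulation-worker",
        InvocationType="Event",                      # asynchronous fire-and-forget
        Payload=json.dumps({"chunk_id": chunk_id,
                            "num_decays": decays_per_chunk,
                            "output_bucket": "example-results-bucket"}),
    )
    return response["StatusCode"]

def launch_simulation(total_decays: int, decays_per_chunk: int = 10**7) -> None:
    n_chunks = total_decays // decays_per_chunk
    with ThreadPoolExecutor(max_workers=64) as pool:
        list(pool.map(lambda i: launch_chunk(i, decays_per_chunk), range(n_chunks)))
    # Results would afterwards be collected from the S3 output bucket and post-processed.
```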
Affiliation(s)
- Reimund Bayerlein
- Department of Biomedical Engineering, University of California Davis, Davis, CA, USA
- Vivek Swarnakar
- Department of Radiology, University of California Davis, Davis, CA, USA
- Aaron Selfridge
- Department of Biomedical Engineering, University of California Davis, Davis, CA, USA
- Benjamin A Spencer
- Department of Biomedical Engineering, University of California Davis, Davis, CA, USA
- Department of Radiology, University of California Davis, Davis, CA, USA
- Lorenzo Nardo
- Department of Radiology, University of California Davis, Davis, CA, USA
- Ramsey D Badawi
- Department of Biomedical Engineering, University of California Davis, Davis, CA, USA
- Department of Radiology, University of California Davis, Davis, CA, USA
8. Sun Y, Cheng Z, Qiu J, Lu W. Performance and application of the total-body PET/CT scanner: a literature review. EJNMMI Res 2024;14:38. doi: 10.1186/s13550-023-01059-1. PMID: 38607510. PMCID: PMC11014840.
Abstract
BACKGROUND: The total-body positron emission tomography/computed tomography (PET/CT) system, with a long axial field of view, represents the state-of-the-art PET imaging technique and has recently become commercially available. The total-body PET/CT system enables high-resolution whole-body imaging, even under extreme conditions such as ultra-low dose, extremely fast imaging speed, delayed imaging more than 10 h after tracer injection, and total-body dynamic scanning. The system provides a real-time picture of the tracer distribution across all organs of the body, which not only helps to explain normal human physiological processes but also facilitates the comprehensive assessment of systemic diseases. In addition, the total-body PET/CT system may play critical roles in other medical fields, including cancer imaging, drug development, and immunology.
MAIN BODY: It is therefore worthwhile to summarize the existing studies of total-body PET/CT systems and point out future directions. This review collected research literature from the PubMed database from the advent of commercially available total-body PET/CT systems to the present and is organized as follows: first, a brief introduction to the total-body PET/CT system; second, a summary of the literature on performance evaluation of the total-body PET/CT; third, a discussion of research and clinical applications of the total-body PET/CT; fourth, a review of deep learning studies based on total-body PET imaging; and finally, a discussion of the shortcomings of existing research and future directions for the total-body PET/CT.
CONCLUSION: Due to its technical advantages, the total-body PET/CT system is bound to play a greater role in clinical practice in the future.
Affiliation(s)
- Yuanyuan Sun
- Department of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Taian, 271016, China
- Zhaoping Cheng
- Department of PET-CT, The First Affiliated Hospital of Shandong First Medical University, Shandong Provincial Qianfoshan Hospital Affiliated to Shandong University, Jinan, 250014, China
- Jianfeng Qiu
- Department of Radiology, Shandong First Medical University & Shandong Academy of Medical Sciences, Taian, 271016, China
- Weizhao Lu
- Department of Radiology, The Second Affiliated Hospital of Shandong First Medical University, No. 366 Taishan Street, Taian, 271000, China.