1. Sultana J, Naznin M, Faisal TR. SSDL-an automated semi-supervised deep learning approach for patient-specific 3D reconstruction of proximal femur from QCT images. Med Biol Eng Comput 2024; 62:1409-1425. [PMID: 38217823] [DOI: 10.1007/s11517-023-03013-8]
Abstract
Deep learning (DL) techniques have recently been used in medical image segmentation and the reconstruction of 3D anatomies of the human body. In this work, we propose a semi-supervised DL (SSDL) approach utilizing a CNN-based 3D U-Net model for femur segmentation from sparsely annotated quantitative computed tomography (QCT) slices. Specifically, QCT slices at the proximal end of the femur, which forms the ball-and-socket joint with the acetabulum, were annotated, and a binary segmentation mask generated by the 3D U-Net model was used to segment the femur accurately. A total of 5474 QCT slices were considered for training, of which 2316 were annotated. 3D femurs were then reconstructed from the segmented slices using polynomial spline interpolation. Both the qualitative and quantitative performance of segmentation and 3D reconstruction were satisfactory, with more than 90% accuracy achieved for all of the standard performance metrics considered. The Dice Similarity Coefficient, a spatial overlap and reproducibility metric for segmentation, was 91.8% for unseen patients and 99.2% for validated patients. Average relative errors of 12.02% and 10.75% were computed for the volume and surface area, respectively, of the 3D reconstructed femurs. The proposed approach demonstrates its effectiveness in accurately segmenting and reconstructing 3D femurs from QCT slices.
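The overlap metric reported above, the Dice Similarity Coefficient, is straightforward to compute from binary masks; a minimal sketch (not the authors' code) might look like this:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice Similarity Coefficient between two binary masks (2D slices
    or full 3D volumes); returns a value in [0, 1]."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```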
Affiliation(s)
- Jamalia Sultana
- Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
- Mahmuda Naznin
- Department of Computer Science and Engineering, Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
- Tanvir R Faisal
- Department of Mechanical Engineering, University of Louisiana at Lafayette, Lafayette, LA, 70503, USA
2. Marsilio L, Moglia A, Rossi M, Manzotti A, Mainardi L, Cerveri P. Combined Edge Loss UNet for Optimized Segmentation in Total Knee Arthroplasty Preoperative Planning. Bioengineering (Basel) 2023; 10:1433. [PMID: 38136024] [PMCID: PMC10740423] [DOI: 10.3390/bioengineering10121433]
Abstract
Bone segmentation and 3D reconstruction are crucial for total knee arthroplasty (TKA) surgical planning with Personalized Surgical Instruments (PSIs). Traditional semi-automatic approaches provide reliable outcomes but are time-consuming and operator-dependent. Moreover, the recent expansion of artificial intelligence (AI) tools into various medical domains is transforming modern healthcare. Accordingly, this study introduces an automated AI-based pipeline to replace the current operator-based tibia and femur 3D reconstruction procedure, enhancing TKA preoperative planning. Leveraging a dataset of 822 CT images, a novel patch-based method and an improved segmentation label generation algorithm were coupled to a Combined Edge Loss UNet (CEL-UNet), a novel CNN architecture featuring an additional decoding branch to boost bone boundary segmentation. Root mean squared errors and Hausdorff distances between the predicted surfaces and the reference bones showed median (interquartile) values of 0.26 (0.19-0.36) mm and 0.24 (0.18-0.32) mm, and of 1.06 (0.73-2.15) mm and 1.43 (0.82-2.86) mm, for the tibia and femur, respectively, outperforming the group's previous results, state-of-the-art methods, and baseline UNet models. A feasibility analysis for a PSI-based surgical plan revealed sub-millimetric distance errors and sub-angular alignment uncertainties in the PSI contact areas and the two cutting planes. Finally, operational environment testing underscored the pipeline's efficiency. More than half of the processed cases complied with the PSI prototyping requirements, reducing the overall time from 35 min to 13.1 s, while the remaining cases underwent a manual refinement step to meet those requirements, still completing the procedure four to eleven times faster than the manufacturer standards. To conclude, this research advocates the real-world applicability and optimization of AI solutions in orthopedic surgical practice.
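The abstract does not spell out the loss itself; as a hedged illustration of the general idea of pairing a region term with an edge term (one plausible reading, not the published CEL-UNet formulation), a boundary target can be derived from the label mask with a morphological gradient:

```python
import torch
import torch.nn.functional as F

def boundary_mask(labels: torch.Tensor) -> torch.Tensor:
    """Approximate object boundaries of a binary mask (N, 1, H, W) as a
    morphological gradient: dilation minus erosion via max-pooling."""
    m = labels.float()
    dilated = F.max_pool2d(m, kernel_size=3, stride=1, padding=1)
    eroded = -F.max_pool2d(-m, kernel_size=3, stride=1, padding=1)
    return (dilated - eroded).clamp(0, 1)

def combined_edge_loss(region_logits, edge_logits, labels, edge_weight=0.5):
    """Illustrative combined loss: a Dice-style region term plus BCE on
    the boundary-branch output (edge_weight is an assumed value)."""
    probs = torch.sigmoid(region_logits)
    m = labels.float()
    inter = (probs * m).sum()
    dice_loss = 1 - (2 * inter + 1e-7) / (probs.sum() + m.sum() + 1e-7)
    edge_loss = F.binary_cross_entropy_with_logits(edge_logits, boundary_mask(labels))
    return dice_loss + edge_weight * edge_loss
```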
Affiliation(s)
- Luca Marsilio
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
- Andrea Moglia
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
- Matteo Rossi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
- Luca Mainardi
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
- Pietro Cerveri
- Department of Electronics, Information and Bioengineering, Politecnico di Milano, 20133 Milan, Italy
3. Liu J, Xing F, Shaikh A, French B, Linguraru MG, Porras AR. Joint Cranial Bone Labeling and Landmark Detection in Pediatric CT Images Using Context Encoding. IEEE Trans Med Imaging 2023; 42:3117-3126. [PMID: 37216247] [PMCID: PMC10760565] [DOI: 10.1109/tmi.2023.3278493]
Abstract
Image segmentation, labeling, and landmark detection are essential tasks for pediatric craniofacial evaluation. Although deep neural networks have recently been adopted to segment cranial bones and locate cranial landmarks from computed tomography (CT) or magnetic resonance (MR) images, they can be hard to train and may provide suboptimal results in some applications. First, they seldom leverage global contextual information that can improve object detection performance. Second, most methods rely on multi-stage algorithm designs that are inefficient and prone to error accumulation. Third, existing methods often target simple segmentation tasks and have shown low reliability in more challenging scenarios, such as multiple cranial bone labeling in highly variable pediatric datasets. In this paper, we present a novel end-to-end neural network architecture based on DenseNet that incorporates context regularization to jointly label cranial bone plates and detect cranial base landmarks from CT images. Specifically, we designed a context-encoding module that encodes global context information as landmark displacement vector maps and uses them to guide feature learning for both bone labeling and landmark identification. We evaluated our model on a highly diverse pediatric CT image dataset of 274 normative subjects and 239 patients with craniosynostosis (age 0.63 ± 0.54 years, range 0-2 years). Our experiments demonstrate improved performance compared to state-of-the-art approaches.
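For intuition on the displacement-map encoding mentioned above, a dense map assigning every voxel its offset to a landmark can be built in a few lines; this is an illustrative sketch, not the published module:

```python
import numpy as np

def displacement_map(shape, landmark):
    """Encode one landmark as a dense displacement vector map: for each
    voxel, the offset (landmark - voxel position). Output shape is
    (ndim, *shape), e.g., (3, z, y, x) for a CT volume."""
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
    lm = np.asarray(landmark, dtype=float).reshape(-1, *([1] * len(shape)))
    return lm - grid

# e.g., a (64, 128, 128) volume with a hypothetical landmark at (30, 60, 70)
dmap = displacement_map((64, 128, 128), (30, 60, 70))
```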
4. Li L, Liu H, Li Q, Tian Z, Li Y, Geng W, Wang S. Near-Infrared Blood Vessel Image Segmentation Using Background Subtraction and Improved Mathematical Morphology. Bioengineering (Basel) 2023; 10:726. [PMID: 37370657] [DOI: 10.3390/bioengineering10060726]
Abstract
The precise display of blood vessel information is crucial for doctors, not only for facilitating intravenous injection but also for the diagnosis and analysis of disease. Currently, infrared cameras can be used to capture images of superficial blood vessels. However, their image quality suffers from noise, vessel breaks, and uneven vascular information. To overcome these problems, this paper proposes an image segmentation algorithm based on background subtraction and improved mathematical morphology. The algorithm regards the image as a superposition of blood vessels onto the background, removes noise by evaluating the size of connected domains, achieves uniform blood vessel width, and smooths edges so that they reflect the actual state of the blood vessels. The algorithm is evaluated both subjectively and objectively to provide a basis for vascular image quality assessment. Extensive experimental results demonstrate that the proposed method can effectively extract accurate and clear vascular information.
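As a rough sketch of this family of techniques (background subtraction, thresholding, morphology, and connected-domain noise removal), the following illustrates the pipeline; the kernel sizes and area threshold are assumptions, not the paper's tuned values:

```python
import cv2
import numpy as np

def segment_vessels(gray: np.ndarray, bg_kernel: int = 51, min_area: int = 200) -> np.ndarray:
    """Sketch: estimate the background with a large median blur, subtract
    it (vessels assumed darker than background in NIR images), threshold
    with Otsu, close small breaks, then drop small connected domains."""
    background = cv2.medianBlur(gray, bg_kernel)
    diff = cv2.subtract(background, gray)
    _, binary = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    n, cc, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    out = np.zeros_like(closed)
    for i in range(1, n):  # component 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[cc == i] = 255
    return out
```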
Affiliation(s)
- Ling Li
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Haoting Liu
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Qing Li
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zhen Tian
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Yajie Li
- Beijing Engineering Research Center of Industrial Spectrum Imaging, School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Wenjia Geng
- Department of Traditional Chinese Medicine, Peking University People's Hospital, Beijing 100044, China
- Song Wang
- Department of Nephrology, Peking University Third Hospital, Beijing 100191, China
5. Yousef R, Khan S, Gupta G, Siddiqui T, Albahlal BM, Alajlan SA, Haq MA. U-Net-Based Models towards Optimal MR Brain Image Segmentation. Diagnostics (Basel) 2023; 13:1624. [PMID: 37175015] [PMCID: PMC10178263] [DOI: 10.3390/diagnostics13091624]
Abstract
Brain tumor segmentation from MRIs has always been a challenging task for radiologists; therefore, an automatic and generalized system to address this task is needed. Among all the deep learning techniques used in medical imaging, U-Net-based variants are the most widely used models in the literature for segmenting medical images across different modalities. Therefore, the goal of this paper is to examine the numerous advancements and innovations in the U-Net architecture, as well as recent trends, with the aim of highlighting U-Net's ongoing potential for improving brain tumor segmentation. Furthermore, we provide a quantitative comparison of different U-Net architectures to highlight the performance and evolution of this network from an optimization perspective. In addition, we experimented with four U-Net architectures (3D U-Net, Attention U-Net, R2 Attention U-Net, and modified 3D U-Net) on the BraTS 2020 dataset for brain tumor segmentation, to provide a better overview of this architecture's performance in terms of Dice score and 95th-percentile Hausdorff distance (HD95). Finally, we analyze the limitations and challenges of medical image analysis to provide a critical discussion about the importance of developing new architectures in terms of optimization.
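The HD95 metric used in that comparison can be computed from boundary distance transforms; a minimal sketch follows (assuming non-empty masks and isotropic voxels):

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def hd95(pred: np.ndarray, target: np.ndarray) -> float:
    """95th-percentile Hausdorff distance between two binary masks,
    measured on their boundary voxels via Euclidean distance transforms."""
    pred, target = pred.astype(bool), target.astype(bool)
    pred_border = pred ^ binary_erosion(pred)
    target_border = target ^ binary_erosion(target)
    # distance from each border voxel to the nearest border voxel of the other mask
    d_to_target = distance_transform_edt(~target_border)[pred_border]
    d_to_pred = distance_transform_edt(~pred_border)[target_border]
    return max(np.percentile(d_to_target, 95), np.percentile(d_to_pred, 95))
```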
Affiliation(s)
- Rammah Yousef
- Yogananda School of AI, Computers and Data Sciences, Shoolini University, Solan 173229, India
- Shakir Khan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Department of Computer Science and Engineering, University Centre for Research and Development, Chandigarh University, Mohali 140413, India
- Gaurav Gupta
- Yogananda School of AI, Computers and Data Sciences, Shoolini University, Solan 173229, India
- Tamanna Siddiqui
- Department of Computer Science, Aligarh Muslim University, Aligarh 202001, India
- Bader M Albahlal
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Saad Abdullah Alajlan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia
- Mohd Anul Haq
- Department of Computer Science, College of Computer and Information Sciences, Majmaah University, Al-Majmaah 11952, Saudi Arabia
6. Yang M, Wohlfahrt P, Shen C, Bouchard H. Dual- and multi-energy CT for particle stopping-power estimation: current state, challenges and potential. Phys Med Biol 2023; 68. [PMID: 36595276] [DOI: 10.1088/1361-6560/acabfa]
Abstract
Range uncertainty has been a key factor preventing particle radiotherapy from reaching its full physical potential. One of the main contributing sources is the uncertainty in estimating particle stopping power (ρs) within patients. Currently, the ρs distribution in a patient is derived from a single-energy CT (SECT) scan acquired for treatment planning by converting the CT number, expressed in Hounsfield units (HU), of each voxel to ρs using a Hounsfield look-up table (HLUT), also known as the CT calibration curve. HU and ρs share a linear relationship with electron density but differ in their additional dependence on elemental composition through different physical properties, i.e., effective atomic number and mean excitation energy, respectively. Because of that, the HLUT approach is particularly sensitive to differences in elemental composition between real human tissues and tissue surrogates, as well as to tissue variations within and among individual patients. The use of dual-energy CT (DECT) for ρs prediction has been shown to be effective in reducing the uncertainty in ρs estimation compared to SECT. The acquisition of CT data over different x-ray spectra yields additional information on the material elemental composition. Recently, multi-energy CT (MECT) has been explored to deduce material-specific information with higher dimensionality, which has the potential to further improve the accuracy of ρs estimation. Even though various DECT and MECT methods have been proposed and evaluated over the years, these approaches are still only scarcely implemented in routine clinical practice. In this topical review, we aim at accelerating this translation process by providing: (1) a comprehensive review of the existing DECT/MECT methods for ρs estimation with their respective strengths and weaknesses; (2) a general review of uncertainties associated with DECT/MECT methods; (3) a general review of different aspects related to the clinical implementation of DECT/MECT methods; and (4) other potential advanced DECT/MECT applications beyond ρs estimation.
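The SECT/HLUT conversion described above is a piecewise-linear mapping; a minimal sketch follows, with calibration points that are purely illustrative placeholders (a clinical HLUT is fitted from site-specific tissue-surrogate scans):

```python
import numpy as np

# Hypothetical (HU, relative stopping power) calibration points -- NOT a
# clinical calibration curve, only roughly plausible values for air,
# adipose, water, muscle, and bone.
HLUT_HU = np.array([-1000.0, -100.0, 0.0, 50.0, 1000.0, 2000.0])
HLUT_RSP = np.array([0.001, 0.95, 1.00, 1.04, 1.60, 2.10])

def hu_to_stopping_power(hu_volume: np.ndarray) -> np.ndarray:
    """Map CT numbers to relative stopping power by piecewise-linear
    interpolation of the calibration curve (the SECT/HLUT approach)."""
    return np.interp(hu_volume, HLUT_HU, HLUT_RSP)
```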
Affiliation(s)
- Ming Yang
- The University of Texas MD Anderson Cancer Center, Department of Radiation Physics, 1515 Holcombe Blvd, Houston, TX 77030, United States of America
- Patrick Wohlfahrt
- Massachusetts General Hospital and Harvard Medical School, Department of Radiation Oncology, Boston, MA 02115, United States of America
- Chenyang Shen
- University of Texas Southwestern Medical Center, Department of Radiation Oncology, 2280 Inwood Rd, Dallas, TX 75235, United States of America
- Hugo Bouchard
- Département de physique, Université de Montréal, Complexe des sciences, 1375 Avenue Thérèse-Lavoie-Roux, Montréal, Québec H2V 0B3, Canada
- Centre de recherche du Centre hospitalier de l'Université de Montréal, 900 Rue Saint-Denis, Montréal, Québec H2X 0A9, Canada
- Département de radio-oncologie, Centre hospitalier de l'Université de Montréal, 1051 Rue Sanguinet, Montréal, Québec H2X 3E4, Canada
7. Bonaldi L, Pretto A, Pirri C, Uccheddu F, Fontanella CG, Stecco C. Deep Learning-Based Medical Images Segmentation of Musculoskeletal Anatomical Structures: A Survey of Bottlenecks and Strategies. Bioengineering (Basel) 2023; 10:137. [PMID: 36829631] [PMCID: PMC9952222] [DOI: 10.3390/bioengineering10020137]
Abstract
By leveraging the recent development of artificial intelligence algorithms, several medical sectors have benefited from automatic tools for segmenting anatomical structures in bioimages. Segmentation of the musculoskeletal system is key for studying alterations in anatomical tissue and supporting medical interventions. The clinical use of such tools requires an understanding of the proper method for interpreting data and evaluating their performance. The current systematic review aims to present the common bottlenecks in the analysis of musculoskeletal structures (e.g., small sample size, data inhomogeneity) and the related strategies utilized by different authors. A search was performed in the PubMed database with the following keywords: deep learning, musculoskeletal system, segmentation. A total of 140 articles published up until February 2022 were obtained and analyzed according to the PRISMA framework in terms of anatomical structures, bioimaging techniques, pre-/post-processing operations, training/validation/testing subset creation, network architecture, loss functions, and performance indicators. Several common trends emerged from this survey; however, the different methods need to be compared and discussed based on each specific case study (anatomical region, medical imaging acquisition setting, study population, etc.). These findings can be used to guide clinicians (as end users) to better understand the potential benefits and limitations of these tools.
Affiliation(s)
- Lorenza Bonaldi
- Department of Civil, Environmental and Architectural Engineering, University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Andrea Pretto
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Carmelo Pirri
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Francesca Uccheddu
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Chiara Giulia Fontanella
- Department of Industrial Engineering, University of Padova, Via Venezia 1, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
- Carla Stecco
- Department of Neuroscience, University of Padova, Via A. Gabelli 65, 35121 Padova, Italy
- Centre for Mechanics of Biological Materials (CMBM), University of Padova, Via F. Marzolo 9, 35131 Padova, Italy
8. Mămuleanu M, Urhuț CM, Săndulescu LD, Kamal C, Pătrașcu AM, Ionescu AG, Șerbănescu MS, Streba CT. Deep Learning Algorithms in the Automatic Segmentation of Liver Lesions in Ultrasound Investigations. Life (Basel) 2022; 12:1877. [PMID: 36431012] [PMCID: PMC9695234] [DOI: 10.3390/life12111877]
Abstract
BACKGROUND Ultrasound is one of the most widely used medical imaging investigations worldwide. It is non-invasive and effective in assessing liver tumors and other types of parenchymal changes. METHODS The aim of the study was to build a deep learning model for image segmentation in ultrasound video investigations. The dataset used in the study was provided by the University of Medicine and Pharmacy of Craiova, Romania, and contained 50 video examinations from 49 patients. The mean age of the patients in the cohort was 69.57 years. Regarding the presence of underlying liver disease, 36.73% had liver cirrhosis and 16.32% had chronic viral hepatitis (five patients with chronic hepatitis C and three patients with chronic hepatitis B). Frames were extracted and cropped from each examination, and an expert gastroenterologist labelled the lesions in each frame. After labelling, the labels were exported as binary images. A deep learning segmentation model (U-Net) was trained with the focal Tversky loss as its loss function. Two models were obtained, using two different sets of parameters for the loss function. The performance metrics observed were intersection over union, recall, and precision. RESULTS On the intersection over union metric, the first segmentation model performed better than the second: 0.8392 (model 1) vs. 0.7990 (model 2). The inference time for both models was between 32.15 and 77.59 milliseconds. CONCLUSIONS Two segmentation models were obtained in the study. The models performed similarly during training and validation; however, one model was trained to focus on hard-to-predict labels. The proposed segmentation models can represent a first step towards automatically extracting time-intensity curves from CEUS examinations.
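The focal Tversky loss named above has a standard closed form; a hedged sketch for the binary case follows (the α/β/γ values are common defaults from the literature, not necessarily the two parameter sets tuned in this study):

```python
import torch

def focal_tversky_loss(logits, targets, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-7):
    """Focal Tversky loss for binary segmentation: alpha/beta weight
    false negatives/false positives in the Tversky index, and gamma < 1
    focuses training on hard, low-overlap examples."""
    probs = torch.sigmoid(logits).reshape(-1)
    t = targets.float().reshape(-1)
    tp = (probs * t).sum()
    fn = ((1 - probs) * t).sum()
    fp = (probs * (1 - t)).sum()
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return (1 - tversky) ** gamma
```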
Affiliation(s)
- Mădălin Mămuleanu
- Department of Automatic Control and Electronics, University of Craiova, 200585 Craiova, Romania
- Oncometrics S.R.L., 200677 Craiova, Romania
- Larisa Daniela Săndulescu
- Department of Gastroenterology, Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Constantin Kamal
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of Pulmonology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Ana-Maria Pătrașcu
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of Hematology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Alin Gabriel Ionescu
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of History of Medicine, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Mircea-Sebastian Șerbănescu
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of Medical Informatics and Statistics, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Costin Teodor Streba
- Oncometrics S.R.L., 200677 Craiova, Romania
- Department of Gastroenterology, Research Center of Gastroenterology and Hepatology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
- Department of Pulmonology, University of Medicine and Pharmacy of Craiova, 200349 Craiova, Romania
9. Efficient lower-limb segmentation for large-scale volumetric CT by using projection view and voxel group attention. Med Biol Eng Comput 2022; 60:2201-2216. [DOI: 10.1007/s11517-022-02598-w]
10. Convolution Neural Networks for the Automatic Segmentation of 18F-FDG PET Brain as an Aid to Alzheimer’s Disease Diagnosis. Electronics 2022; 11:2260. [DOI: 10.3390/electronics11142260]
Abstract
Our work aims to exploit deep learning (DL) models to automatically segment diagnostic regions involved in Alzheimer’s disease (AD) in 18F-fluorodeoxyglucose positron emission tomography (18F-FDG PET) volumetric scans, in order to provide a more objective diagnosis of this disease and to reduce the variability induced by manual segmentation. The dataset used in this study consists of 102 volumes (40 controls, 39 with established AD, and 23 with established mild cognitive impairment (MCI)). The ground truth was generated by an expert user who identified six regions in the original scans, including the temporal, parietal, and frontal lobes. The implemented architectures are the U-Net3D and V-Net networks, which were appropriately adapted to our data to optimize performance. All trained segmentation networks were tested on 22 subjects using the Dice similarity coefficient (DSC) and other similarity indices, namely the overlapping area coefficient (AOC) and the extra area coefficient (EAC), to evaluate the automatic segmentation. The results for each labeled brain region demonstrate an improvement of about 50%, with the DSC rising from about 0.50 for V-Net-based networks to about 0.77 for U-Net3D-based networks. The best performance was achieved using U-Net3D, with DSC on average equal to 0.76 for the frontal lobes, 0.75 for the parietal lobes, and 0.76 for the temporal lobes. U-Net3D is very promising and is able to segment each region and each class of subjects without being influenced by the presence of hypometabolic regions.
11. The Development of an Automatic Rib Sequence Labeling System on Axial Computed Tomography Images with 3-Dimensional Region Growing. Sensors (Basel) 2022; 22:4530. [PMID: 35746310] [PMCID: PMC9230858] [DOI: 10.3390/s22124530]
Abstract
This paper proposes an automatic rib sequence labeling system for axial chest computed tomography (CT) images, based on two suggested methods and three-dimensional (3D) region growing. In clinical practice, radiologists usually define anatomical terms of location based on rib number. Thus, with the manual process of labeling the 12 pairs of ribs and counting their sequence, it is necessary to refer to the annotations every time radiologists read a chest CT. The process is tedious, repetitive, and time-consuming, and the demand for chest CT-based medical readings has increased. To handle the task efficiently, we propose an automatic rib sequence labeling system and implement a comparative analysis of two methods. With 50 collected chest CT images, we implemented intensity-based image processing (IIP) and a convolutional neural network (CNN) for rib segmentation in this system. Additionally, 3D region growing was used to assign each rib's sequence label. The IIP-based method achieved a 92.0% success rate and the CNN-based method a 98.0% success rate, where the success rate is the proportion of appropriately labeled rib sequences over all pairs (1st to 12th) for all slices. We anticipate the applicability of this efficient automatic rib sequence labeling system in clinical diagnostic environments.
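A hedged sketch of the 3D region-growing step, here realized as 26-connected component labeling with scipy (one common implementation, not necessarily the paper's):

```python
import numpy as np
from scipy import ndimage

def label_ribs(rib_mask: np.ndarray):
    """Grow each rib as a 3D connected component (26-connectivity) and
    order the components superior-to-inferior by centroid z so they can
    be numbered; a full system would also split left/right pairs using
    the centroid x coordinate.

    rib_mask: binary volume (z, y, x) containing only rib voxels.
    """
    structure = np.ones((3, 3, 3), dtype=bool)  # 26-connected neighborhood
    labels, n = ndimage.label(rib_mask, structure=structure)
    centroids = ndimage.center_of_mass(rib_mask, labels, range(1, n + 1))
    order = sorted(range(1, n + 1), key=lambda i: centroids[i - 1][0])
    return labels, order
```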
12. Aljabri M, AlAmir M, AlGhamdi M, Abdel-Mottaleb M, Collado-Mesa F. Towards a better understanding of annotation tools for medical imaging: a survey. Multimed Tools Appl 2022; 81:25877-25911. [PMID: 35350630] [PMCID: PMC8948453] [DOI: 10.1007/s11042-022-12100-1]
Abstract
Medical imaging refers to several different technologies that are used to view the human body in order to diagnose, monitor, or treat medical conditions. It requires significant expertise to efficiently and correctly interpret the images generated by each of these technologies, which include, among others, radiography, ultrasound, and magnetic resonance imaging. Deep learning and machine learning techniques provide different solutions for medical image interpretation, including those associated with detection and diagnosis. Despite the huge success of deep learning algorithms in image analysis, training algorithms to reach human-level performance in these tasks depends on the availability of large amounts of high-quality training data, including high-quality annotations to serve as ground truth. Different annotation tools have been developed to assist with the annotation process. In this survey, we present the currently available annotation tools for medical imaging, including descriptions of graphical user interfaces (GUIs) and supporting instruments. The main contribution of this study is an intensive review of popular annotation tools, showing their successful use in annotating medical imaging datasets, to guide researchers in this area.
Affiliation(s)
- Manar Aljabri
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlAmir
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Manal AlGhamdi
- Department of Computer Science, Umm Al-Qura University, Mecca, Saudi Arabia
- Fernando Collado-Mesa
- Department of Radiology, University of Miami Miller School of Medicine, Miami, FL, USA
13. Lim HK, Jung SK, Kim SH, Cho Y, Song IS. Deep semi-supervised learning for automatic segmentation of inferior alveolar nerve using a convolutional neural network. BMC Oral Health 2021; 21:630. [PMID: 34876105] [PMCID: PMC8650351] [DOI: 10.1186/s12903-021-01983-5]
Abstract
BACKGROUND The inferior alveolar nerve (IAN) innervates and regulates the sensation of the mandibular teeth and lower lip. The position of the IAN should be monitored prior to surgery. Therefore, a study using artificial intelligence (AI) was planned to automatically image and track the position of the IAN for quicker and safer surgery. METHODS A total of 138 cone-beam computed tomography datasets (internal: 98, external: 40) collected from multiple centers (three hospitals) were used in the study. A customized 3D nnU-Net was used for image segmentation. Active learning, consisting of three steps, was carried out in iterations for 83 datasets, with cumulative additions after each step. Subsequently, the accuracy of the model for IAN segmentation was evaluated using 50 datasets. The accuracy, derived as the Dice similarity coefficient (DSC), and the segmentation time were compared for each learning step. In addition, visual scoring was used to comparatively evaluate manual and automatic segmentation. RESULTS Over the learning steps, the DSC gradually increased from 0.48 ± 0.11 to 0.50 ± 0.11, and then to 0.58 ± 0.08. The DSC for the external dataset was 0.49 ± 0.12. The times required for segmentation were 124.8, 143.4, and 86.4 s, showing a large decrease at the final stage. In visual scoring, the accuracy of manual segmentation was found to be higher than that of automatic segmentation. CONCLUSIONS The deep active learning framework can serve as a fast, accurate, and robust clinical tool for demarcating IAN location.
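The iteration scheme described (train, predict, expertly correct, retrain on the cumulatively grown set) can be summarized in a few lines; this is a hypothetical sketch with placeholder interfaces (`fit`, `predict`, `confidence`, and `annotate` are not a real nnU-Net API):

```python
def active_learning_loop(model, labeled, unlabeled, annotate, steps=3):
    """Hedged sketch of a cumulative active-learning loop: retrain,
    rank unlabeled scans by model confidence, have an expert correct
    the weakest predictions, and add them to the training pool."""
    for _ in range(steps):
        model.fit(labeled)                                # retrain on all labels so far
        ranked = sorted(unlabeled, key=model.confidence)  # least confident first
        batch = ranked[: max(1, len(ranked) // steps)]
        labeled += [annotate(scan, model.predict(scan)) for scan in batch]
        unlabeled = [s for s in unlabeled if not any(s is b for b in batch)]
    return model
```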
Affiliation(s)
- Ho-Kyung Lim
- Department of Oral and Maxillofacial Surgery, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
- Seok-Ki Jung
- Department of Orthodontics, Korea University Guro Hospital, 148, Gurodong-ro, Guro-gu, Seoul, 08308, Republic of Korea
- Seung-Hyun Kim
- Department of Medical Humanities, Korea University College of Medicine, 46, Gaeunsa 2-gil, Seongbuk-gu, Seoul, 02842, Republic of Korea
- Yongwon Cho
- Department of Radiology and AI Center, Korea University College of Medicine, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
- In-Seok Song
- Department of Oral and Maxillofacial Surgery, Korea University Anam Hospital, 73, Goryeodae-ro, Seongbuk-gu, Seoul, 02841, Republic of Korea
14. Leydon P, O'Connell M, Greene D, Curran KM. Bone segmentation in contrast enhanced whole-body computed tomography. Biomed Phys Eng Express 2021; 8. [PMID: 34749353] [DOI: 10.1088/2057-1976/ac37ab]
Abstract
Segmentation of bone regions allows for enhanced diagnostics, disease characterisation, and treatment monitoring in CT imaging. In contrast-enhanced whole-body scans, accurate automatic segmentation is particularly difficult, as low-dose whole-body protocols reduce image quality and make contrast-enhanced regions harder to separate when relying on differences in pixel intensities. This paper outlines a U-Net architecture with novel preprocessing techniques, based on the windowing of training data and modified sigmoid activation threshold selection, to successfully segment bone and bone-marrow regions from low-dose contrast-enhanced whole-body CT scans. The proposed method achieved mean Dice coefficients of 0.979 ± 0.02, 0.965 ± 0.03, and 0.934 ± 0.06 on two internal datasets and one external test dataset, respectively. We have demonstrated that appropriate preprocessing is important for differentiating between bone and contrast dye, and that excellent results can be achieved with limited data.
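Windowing a CT volume before training is a one-liner; a minimal sketch follows, with bone-window level/width values that are illustrative defaults rather than the thresholds tuned in the paper:

```python
import numpy as np

def window_hu(volume: np.ndarray, level: float = 400.0, width: float = 1800.0) -> np.ndarray:
    """Clip a CT volume to a bone-centred HU window and rescale to [0, 1];
    this compresses soft tissue and contrast-enhanced vessels while
    preserving bone gray-level detail."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(volume, lo, hi) - lo) / (hi - lo)
```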
Affiliation(s)
- Patrick Leydon
- Applied Science, Limerick Institute of Technology, Moylish, Limerick, Ireland
- Martin O'Connell
- School of Medicine, University College Dublin, Dublin 4, Ireland
- Derek Greene
- School of Computer Science, University College Dublin, Dublin 4, Ireland
- Kathleen M Curran
- School of Medicine, University College Dublin, Dublin 4, Ireland
15. Kruis MF. Improving radiation physics, tumor visualisation, and treatment quantification in radiotherapy with spectral or dual-energy CT. J Appl Clin Med Phys 2021; 23:e13468. [PMID: 34743405] [PMCID: PMC8803285] [DOI: 10.1002/acm2.13468]
Abstract
Over the past decade, spectral or dual-energy CT has gained relevancy, especially in oncological radiology. Nonetheless, its use in the radiotherapy (RT) clinic remains limited. This review article aims to give an overview of the current state of spectral CT and to explore opportunities for applications in RT. In this article, three groups of benefits of spectral CT over conventional CT in RT are recognized. Firstly, spectral CT provides more information on the physical properties of the body, which can improve dose calculation. Furthermore, it improves the visibility of tumors for a wide variety of malignancies, as well as of organs-at-risk (OARs), which could reduce treatment uncertainty. Finally, spectral CT provides quantitative physiological information, which can be used to personalize and quantify treatment.
16. Lartaud PJ, Dupont C, Hallé D, Schleef A, Dessouky R, Vlachomitrou AS, Rouet JM, Nempont O, Boussel L. A conventional-to-spectral CT image translation augmentation workflow for robust contrast injection-independent organ segmentation. Med Phys 2022; 49:1108-1122. [PMID: 34689353] [DOI: 10.1002/mp.15310]
Abstract
PURPOSE In cardiovascular imaging, the numerous contrast injection protocols used to enhance structures make it difficult to gather training datasets for deep learning applications supporting diverse protocols. Moreover, creating annotations on non-contrast scans is extremely tedious. Recently, spectral CT's virtual non-contrast (VNC) images have been used as data augmentation to train segmentation networks that perform on enhanced and true non-contrast (TNC) scans alike, while improving results on protocols absent from their training dataset. However, spectral data are not widely available, making it difficult to gather specific datasets for each task. As a solution, we present a data augmentation workflow based on a trained image translation network, bringing spectral-like augmentation to any conventional CT dataset. METHOD The HU-to-spectral image translation network (HUSpecNet) was first trained to generate VNC from HU images, using an unannotated spectral dataset of 1830 patients. It was then tested on a second dataset of 300 spectral CT scans by comparing the generated VNC images (VNC-DL) to their true counterparts. To illustrate our workflow's efficiency and compare it with true spectral augmentation, HUSpecNet was applied to a third dataset of 112 spectral scans to generate VNC-DL alongside the HU and VNC images. Three different 3D networks (U-Net, X-Net, U-Net++) were trained for multi-label heart segmentation, following four augmentation strategies. As baselines, trainings were performed on contrasted images without (HUonly) and with conventional gray-value augmentation (HUaug). Then, the same networks were trained using a proportion of contrasted and VNC/VNC-DL images (TrueSpec/GenSpec). Each training strategy, applied to each architecture, was evaluated using Dice coefficients on a fourth multi-centric, multi-vendor, single-energy CT dataset of 121 patients, including different contrast injection protocols and unenhanced scans. The U-Net++ results were further explored with distance metrics on every label. RESULTS Tested on 300 full scans, our HUSpecNet translation network shows a mean absolute error of 6.70 ± 2.83 HU between VNC-DL and VNC, while the peak signal-to-noise ratio reaches 43.89 dB. GenSpec and TrueSpec show very close results regardless of the protocol and architecture used: mean Dice coefficients (DSCmean) are equal within a margin of 0.006, ranging from 0.879 to 0.938. Their performance increases significantly on TNC scans (p-values < 0.017 for all architectures) compared to HUonly and HUaug, with DSCmean of 0.448/0.770/0.879/0.885 for HUonly/HUaug/TrueSpec/GenSpec using the U-Net++ architecture. Significant improvements are also noted for all architectures on chest-abdomen-pelvis scans (p-values < 0.007) compared to HUonly, and for pulmonary embolism scans (p-values < 0.039) compared to HUaug. Using U-Net++, DSCmean reaches 0.892/0.901/0.903 for HUonly/TrueSpec/GenSpec on pulmonary embolism scans and 0.872/0.896/0.896 for HUonly/TrueSpec/GenSpec on chest-abdomen-pelvis scans. CONCLUSION Using the proposed workflow, we trained versatile heart segmentation networks on a dataset of conventional enhanced CT scans, providing robust predictions on both enhanced scans with different contrast injection protocols and TNC scans. The performance obtained was not significantly inferior to training the model on a genuine spectral CT dataset, regardless of the architecture implemented. Using a general-purpose conventional-to-spectral CT translation network as data augmentation could therefore help reduce data collection and annotation requirements for machine learning-based CT studies, while extending their range of application.
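The translation fidelity metrics reported above (MAE in HU and PSNR in dB) are simple to reproduce; a minimal sketch follows, where the HU data range is an assumed value, not the one used in the paper:

```python
import numpy as np

def mae_psnr(generated: np.ndarray, reference: np.ndarray, data_range: float = 2000.0):
    """Mean absolute error (HU) and peak signal-to-noise ratio (dB)
    between a generated and a true virtual-non-contrast volume."""
    mae = float(np.abs(generated - reference).mean())
    mse = float(((generated - reference) ** 2).mean())
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    return mae, psnr
```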
Affiliation(s)
- Pierre-Jean Lartaud
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Philips Research France, Suresnes, France
- Riham Dessouky
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Radiology Department, Faculty of Medicine, Zagazig University, Zagazig, Egypt
- Loïc Boussel
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Hospices Civils de Lyon, Lyon, France
17. Jeuthe J, Sánchez JCG, Magnusson M, Sandborg M, Tedgren ÅC, Malusek A. Semi-automated 3D segmentation of pelvic region bones in CT volumes for the annotation of machine learning datasets. Radiat Prot Dosimetry 2021; 195:172-176. [PMID: 34037238] [PMCID: PMC8507443] [DOI: 10.1093/rpd/ncab073]
Abstract
Automatic segmentation of bones in computed tomography (CT) images is used, for instance, in beam-hardening correction algorithms, where it improves the accuracy of the resulting CT numbers. Of special interest are the pelvic bones, which, because of their strong attenuation, affect the accuracy of brachytherapy in this region. This work compared the performance of the JJ2016 algorithm with that of the MK2014v2 and JS2018 algorithms; all three algorithms were developed by the authors. Visual comparison and, in the latter case, Dice similarity coefficients derived from the ground truth were used. It was found that the 3D-based JJ2016 performed better than the 2D-based MK2014v2, mainly because of its more accurate hole filling, which benefitted from information in adjacent slices. The neural network-based JS2018 outperformed both traditional algorithms. It was, however, limited to a resolution of 128³ voxels owing to the limited memory of the graphics processing unit (GPU).
Affiliation(s)
- Julius Jeuthe
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Maria Magnusson
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Michael Sandborg
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Åsa Carlsson Tedgren
- Department of Health, Medicine and Caring Sciences, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Department of Medical Radiation Physics and Nuclear Medicine, Karolinska University Hospital, Stockholm, Sweden
18. Lartaud PJ, Hallé D, Schleef A, Dessouky R, Vlachomitrou AS, Douek P, Rouet JM, Nempont O, Boussel L. Spectral augmentation for heart chambers segmentation on conventional contrasted and unenhanced CT scans: an in-depth study. Int J Comput Assist Radiol Surg 2021; 16:1699-1709. [PMID: 34363582] [DOI: 10.1007/s11548-021-02468-0]
Abstract
PURPOSE Recently, machine learning has outperformed established tools for automated segmentation in medical imaging. However, segmentation of the cardiac chambers still proves challenging due to the variety of contrast agent injection protocols used in clinical practice, which induce disparities of contrast between cavities. Hence, training a generalist network requires large training datasets representative of these protocols. Furthermore, segmentation of unenhanced CT scans is further hindered by the difficulty of obtaining ground truths from these images. Newly available spectral CT scanners allow innovative image reconstructions such as virtual non-contrast (VNC) imaging, which mimics non-contrasted conventional CT studies from a contrasted scan. Recent publications have demonstrated that networks can be trained using VNC to segment contrasted and unenhanced conventional CT scans, reducing annotated data requirements and the need for annotations on unenhanced scans. We propose an extensive evaluation of this claim. METHOD We undertake multiple trainings of a 3D multi-label heart segmentation network with (HU-VNC) and without (HUonly) VNC as augmentation, using decreasing training dataset sizes (114, 76, 57, 38, 29, 19 patients). At each step, both networks are tested on a multi-vendor, multi-centric dataset of 122 patients, including different protocols: pulmonary embolism (PE), chest-abdomen-pelvis (CAP), heart CT angiography (CTA), and true non-contrast (TNC) scans. An in-depth comparison of the resulting Dice coefficients and distance metrics is performed for the networks trained on the largest dataset. RESULTS HU-VNC trained on 57 patients significantly outperforms HUonly trained on 114 on CAP and TNC scans (mean Dice coefficients of 0.881/0.835 and 0.882/0.416, respectively). When trained on the largest dataset, significant improvements in all labels are noted for TNC and CAP scans (mean Dice coefficients of 0.882/0.416 and 0.891/0.835, respectively). CONCLUSION Adding VNC images as training augmentation allows the network to perform on unenhanced scans and improves segmentation on other imaging protocols, while using a reduced training dataset.
Affiliation(s)
- Pierre-Jean Lartaud
- Philips Research France, Suresnes, France
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Riham Dessouky
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Philippe Douek
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Hospices Civils de Lyon, Lyon, France
- Loïc Boussel
- CREATIS UMR5220, INSERM U1044, INSA, Université de Lyon, Lyon, France
- Hospices Civils de Lyon, Lyon, France
19. Mao JZ, Khan A, Soliman MAR, Levy BR, McGuire MJ, Starling RV, Hess RM, Agyei JO, Meyers JE, Mullin JP, Pollina J. Use of the Scan-and-Plan Workflow in Next-Generation Robot-Assisted Pedicle Screw Insertion: Retrospective Cohort Study and Literature Review. World Neurosurg 2021; 151:e10-e18. [PMID: 33684584] [DOI: 10.1016/j.wneu.2021.02.119]
Abstract
OBJECTIVE To report our experience using the scan-and-plan workflow and review current literature on surgical efficiency, safety, and accuracy of next-generation robot-assisted (RA) spine surgery. METHODS The records of patients who underwent RA pedicle screw fixation were reviewed. The accuracy of pedicle screw placement was determined based on the Ravi classification system. To evaluate workflow efficiency, 3 demographically matched cohorts were created to analyze differences in time per screw placement (defined as operating room [OR] time divided by number of screws placed). Group A had <4 screws placed, Group B had 4 screws placed, and Group C had >4 screws placed. Intraoperative errors and postoperative complications were collected to elucidate safety. RESULTS Eighty-four RA cases (306 pedicle screws) were included for analysis. The mean number of screws placed was 2.1 ± 0.3 in Group A and 6.4 ± 1.2 in Group C; 4 screws were placed in Group B patients. The accuracy rate (Ravi grade I) was 98.4%. Screw placement time was significantly longer in Group A (101 ± 37.7 minutes) than Group B (50.5 ± 25.4 minutes) or C (43.6 ± 14.7 minutes). There were no intraoperative complications, robot failures, or in-hospital complications requiring a return to the OR. CONCLUSIONS The scan-and-plan workflow allowed for a high degree of accuracy. It was a safe method that provided a smooth and efficient OR workflow without registration errors or robotic failures. After the placement of 4 pedicle screws, the per-screw time remained constant. Further studies regarding efficiency and utility in multilevel procedures are necessary.
Affiliation(s)
- Jennifer Z Mao
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA; Department of Biomedical Sciences, Philadelphia College of Osteopathic Medicine, Philadelphia, Pennsylvania, USA
- Asham Khan
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA
- Mohamed A R Soliman
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA; Department of Neurosurgery, Faculty of Medicine, Cairo University, Cairo, Egypt; Schulich School of Medicine and Dentistry, Western University, Ontario, Canada
- Bennett R Levy
- George Washington School of Medicine and Health Sciences, Washington, DC, USA
- Matthew J McGuire
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA
- Robert V Starling
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA
- Ryan M Hess
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA
- Justice O Agyei
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA
- Joshua E Meyers
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA
- Jeffrey P Mullin
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA
- John Pollina
- Department of Neurosurgery, Jacobs School of Medicine and Biomedical Sciences, University at Buffalo, New York, USA; Department of Neurosurgery, Buffalo General Medical Center, Kaleida Health, Buffalo, New York, USA