1
Abadi E, Segars WP, Felice N, Sotoudeh-Paima S, Hoffman EA, Wang X, Wang W, Clark D, Ye S, Jadick G, Fryling M, Frush DP, Samei E. AAPM Truth-based CT (TrueCT) reconstruction grand challenge. Med Phys 2025; 52:1978-1990. [PMID: 39807653 PMCID: PMC11973969 DOI: 10.1002/mp.17619]
Abstract
BACKGROUND This Special Report summarizes the 2022 AAPM grand challenge on Truth-based CT image reconstruction.
PURPOSE To provide an objective framework for evaluating CT reconstruction methods using virtual imaging resources consisting of a library of simulated CT projection images of a population of human models with various diseases.
METHODS Two hundred unique anthropomorphic computational models were created with varied diseases consisting of 67 emphysema, 67 lung lesion, and 66 liver lesion cases. The organs were modeled based on clinical CT images of real patients. The emphysematous regions were modeled using segmentations from patient CT cases in the COPDGene Phase I dataset. For the lung and liver lesion cases, 1-6 malignant lesions were created and inserted into the human models, with lesion diameters ranging from 5.6 to 21.9 mm for lung lesions and 3.9 to 14.9 mm for liver lesions. The contrast defined between the liver lesions and liver parenchyma was 82 ± 12 HU, ranging from 50 to 110 HU. Similarly, the contrast between the lung lesions and the lung parenchyma was defined as 781 ± 11 HU, ranging from 725 to 805 HU. For the emphysematous regions, the defined HU values were -950 ± 17 HU, ranging from -918 to -979 HU. The developed human models were imaged with a validated CT simulator. The resulting CT sinograms were shared with the participants. The participants reconstructed CT images from the sinograms and sent back their reconstructed images. The reconstructed images were then scored by comparing the results against the corresponding ground truth values. The scores included both task-generic (root mean square error [RMSE] and structural similarity index measure [SSIM]) and task-specific (detectability index [d'] and lesion volume accuracy) metrics. For the cases with multiple lesions, the measured metric was averaged across all the lesions. To combine the metrics with each other, each metric was normalized to a range of 0 to 1 per disease type, with "0" and "1" being the worst and best measured values across all cases of the disease type for all received reconstructions.
RESULTS The TrueCT challenge attracted 52 participants, of whom 5 successfully completed the challenge and submitted the requested 200 reconstructions. Across all participants and disease types, SSIM absolute values ranged from 0.22 to 0.90, RMSE from 77.6 to 490.5 HU, d' from 0.1 to 64.6, and volume accuracy from 1.2 to 753.1 mm³. The overall scores demonstrated that participant "A" had the best performance in all categories, except for the metrics of d' for lung lesions and RMSE for liver lesions. Participant "A" had an average normalized score of 0.41 ± 0.22, 0.48 ± 0.32, and 0.42 ± 0.33 for the emphysema, lung lesion, and liver lesion cases, respectively.
CONCLUSIONS The TrueCT challenge successfully enabled objective assessment of CT reconstructions with the unique advantage of access to a diverse population of diseased human models with known ground truth. This study highlights the significant potential of virtual imaging trials in the objective assessment of medical imaging technologies.
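To illustrate the scoring scheme described above, the per-disease-type normalization of each metric to a 0-1 range (worst to best observed value) can be sketched as follows. This is an illustrative reimplementation with made-up numbers, not the challenge's released scoring code; the assumption that RMSE is oriented lower-is-better while SSIM is higher-is-better follows from the metric definitions.

```python
import numpy as np

# Hypothetical raw scores: one row per submitted reconstruction of a disease type.
metrics = {
    "rmse": np.array([77.6, 120.3, 490.5]),   # lower is better
    "ssim": np.array([0.90, 0.61, 0.22]),     # higher is better
}

def normalize_per_disease(values, higher_is_better):
    """Min-max normalize one metric across all reconstructions of a disease type,
    so that 0 is the worst and 1 is the best observed value."""
    lo, hi = values.min(), values.max()
    scaled = (values - lo) / (hi - lo)
    return scaled if higher_is_better else 1.0 - scaled

normalized = {
    name: normalize_per_disease(vals, higher_is_better=(name != "rmse"))
    for name, vals in metrics.items()
}
# Combined score per reconstruction = mean of the normalized metrics.
overall = np.mean(np.stack(list(normalized.values())), axis=0)
print(overall)
```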
Affiliation(s)
- Ehsan Abadi: Center for Virtual Imaging Trial, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA; Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA; Department of Electrical & Computer Engineering, Duke University, Durham, North Carolina, USA
- W. Paul Segars: Center for Virtual Imaging Trial, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA; Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA; Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA
- Nicholas Felice: Center for Virtual Imaging Trial, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA; Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA
- Saman Sotoudeh-Paima: Center for Virtual Imaging Trial, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA; Department of Electrical & Computer Engineering, Duke University, Durham, North Carolina, USA
- Eric A. Hoffman: Department of Radiology, Internal Medicine and Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Xiao Wang: Computational Science and Engineering Division, Oak Ridge National Laboratories, Oak Ridge, Tennessee, USA
- Wei Wang: Institute of Applied Mathematics, Shenzhen Polytechnic, Shenzhen, Guangdong, China
- Darin Clark: Center for Virtual Imaging Trial, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA; Quantitative Imaging and Analysis Lab, Department of Radiology, Duke University, Durham, North Carolina, USA
- Siqi Ye: Department of Radiation Oncology, Stanford University, Stanford, California, USA
- Giavanna Jadick: Department of Radiology, University of Chicago, Chicago, Illinois, USA
- Milo Fryling: Center for Virtual Imaging Trial, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Donald P. Frush: Center for Virtual Imaging Trial, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA
- Ehsan Samei: Center for Virtual Imaging Trial, Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University School of Medicine, Durham, North Carolina, USA; Medical Physics Graduate Program, Duke University, Durham, North Carolina, USA; Department of Electrical & Computer Engineering, Duke University, Durham, North Carolina, USA; Department of Biomedical Engineering, Duke University, Durham, North Carolina, USA; Department of Physics, Duke University, Durham, North Carolina, USA
2
Kaftan P, Heinrich MP, Hansen L, Rasche V, Kestler HA, Bigalke A. Sparse keypoint segmentation of lung fissures: efficient geometric deep learning for abstracting volumetric images. Int J Comput Assist Radiol Surg 2025; 20:465-473. [PMID: 39775630 PMCID: PMC11929708 DOI: 10.1007/s11548-024-03310-z]
Abstract
PURPOSE Lung fissure segmentation on CT images often relies on 3D convolutional neural networks (CNNs). However, 3D CNNs are inefficient for detecting thin structures like the fissures, which make up a tiny fraction of the entire image volume. We propose to make lung fissure segmentation more efficient by using geometric deep learning (GDL) on sparse point clouds.
METHODS We abstract image data with sparse keypoint (KP) clouds. We train GDL models to segment the point cloud, comparing three major model paradigms (PointNets, graph convolutional networks [GCNs], and PointTransformers). From the sparse point segmentations, 3D meshes of the objects are reconstructed to obtain a dense surface. The state-of-the-art Poisson surface reconstruction (PSR) accounts for most of the runtime in our pipeline. We therefore propose an efficient point-cloud-to-mesh autoencoder (PC-AE) that deforms a template mesh to fit a point cloud in a single forward pass. Our pipeline is evaluated extensively and compared to the 3D-CNN gold standard nnU-Net on diverse clinical and pathological data.
RESULTS GCNs yield the best trade-off between inference time and accuracy, being 21× faster with only 1.4× the error of the nnU-Net. Our PC-AE also achieves a favorable trade-off, being 3× faster at 1.5× the error compared to the PSR.
CONCLUSION We present a KP-based fissure segmentation pipeline that is more efficient than 3D CNNs and can greatly speed up large-scale analyses. A novel PC-AE for efficient mesh reconstruction from sparse point clouds is introduced, showing promise beyond fissure segmentation. Source code is available at https://github.com/kaftanski/fissure-segmentation-IJCARS.
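As a rough illustration of the kind of sparse point-cloud processing described above, the sketch below builds a k-nearest-neighbor graph over a toy keypoint cloud and applies one simplified graph-convolution step (mean neighbor aggregation plus a shared linear map). It is not the authors' GDL models; the cloud, features, and layer are placeholders.

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy keypoint cloud (N x 3) abstracted from a CT volume; the features could be
# intensities or handcrafted descriptors sampled at each keypoint.
rng = np.random.default_rng(0)
points = rng.uniform(0, 100, size=(2048, 3))
features = rng.normal(size=(2048, 8))

def knn_graph(pts, k=16):
    """Return an index array (N, k) of each point's k nearest neighbors."""
    tree = cKDTree(pts)
    _, idx = tree.query(pts, k=k + 1)   # the first neighbor is the point itself
    return idx[:, 1:]

def graph_conv(feat, neighbors, weight):
    """One simplified graph-convolution step: average neighbor features and
    apply a shared linear transform followed by ReLU."""
    aggregated = feat[neighbors].mean(axis=1)        # (N, F_in)
    return np.maximum(0.0, aggregated @ weight)      # (N, F_out)

neigh = knn_graph(points, k=16)
w = rng.normal(scale=0.1, size=(8, 4))
hidden = graph_conv(features, neigh, w)
print(hidden.shape)  # (2048, 4) per-point embeddings for fissure / non-fissure labeling
```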
Affiliation(s)
- Paul Kaftan: Institute of Medical Systems Biology, Ulm University, Albert-Einstein-Allee 11, 89081, Ulm, Germany; Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany; International Graduate School in Molecular Medicine, Ulm University, Albert-Einstein-Allee 11, 89081, Ulm, Germany; MoMAN Center for Translational Imaging, Ulm University, Albert-Einstein-Allee 23, 89081, Ulm, Germany
- Mattias P Heinrich: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
- Lasse Hansen: EchoScout GmbH, Maria-Goeppert-Str. 3, 23562, Lübeck, Germany
- Volker Rasche: MoMAN Center for Translational Imaging, Ulm University, Albert-Einstein-Allee 23, 89081, Ulm, Germany
- Hans A Kestler: Institute of Medical Systems Biology, Ulm University, Albert-Einstein-Allee 11, 89081, Ulm, Germany
- Alexander Bigalke: Institute of Medical Informatics, University of Lübeck, Ratzeburger Allee 160, 23562, Lübeck, Germany
3
Jin Q, Zhang Z, Zhou T, Zhou X, Jiang X, Xia Y, Guan Y, Liu S, Fan L. Preserved ratio impaired spirometry: clinical, imaging and artificial intelligence perspective. J Thorac Dis 2025; 17:450-460. [PMID: 39975722 PMCID: PMC11833564 DOI: 10.21037/jtd-24-1582]
Abstract
Preserved ratio impaired spirometry (PRISm) is a pulmonary function pattern characterized by a forced expiratory volume in one second (FEV1) to forced vital capacity ratio greater than 0.70, with an FEV1 that is below 80% of the predicted value, even after the use of bronchodilators. PRISm is considered a form of "Pre-Chronic Obstructive Pulmonary Disease (Pre-COPD)" within the broader scope of COPD. Clinically, it presents with respiratory symptoms and is more commonly observed in individuals with high body mass index, females, and those who are current smokers. Additionally, it is frequently associated with metabolic disorders and cardiovascular diseases. Regarding prognosis, PRISm shows considerable variation, ranging from improvement in lung function to the development of COPD. In this article, we review the epidemiology, comorbidities, and clinical outcomes of PRISm, with a particular emphasis on the crucial role of imaging assessments, especially computed tomography scans and magnetic resonance imaging (MRI) technology, in diagnosing, evaluating, and predicting the prognosis of PRISm. Comprehensive imaging provides a quantitative evaluation of lung volume, density, airways, and vasculature, while MRI technology can directly quantify ventilation function and pulmonary blood flow. We also emphasize the future potential of X-ray technology in this field. Moreover, the article discusses the application of artificial intelligence, including its role in predicting PRISm subtypes and modeling ventilation function.
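The spirometric definition summarized above lends itself to a short sketch. The function below is purely illustrative (not a clinical tool); the strict-versus-inclusive handling of the 0.70 and 80% thresholds is an assumption based on the wording of the abstract.

```python
def classify_spirometry(fev1_pct_predicted, fev1_fvc_ratio):
    """Classify a post-bronchodilator spirometry result using the definition
    summarized above: preserved FEV1/FVC ratio with a reduced FEV1 = PRISm.
    The abstract phrases the ratio cutoff as "greater than 0.70"; GOLD-style
    definitions typically use >= 0.70, which is used here."""
    preserved_ratio = fev1_fvc_ratio >= 0.70
    low_fev1 = fev1_pct_predicted < 80.0
    if preserved_ratio and low_fev1:
        return "PRISm"
    if not preserved_ratio:
        return "airflow obstruction (COPD pattern)"
    return "normal spirometry"

print(classify_spirometry(fev1_pct_predicted=72, fev1_fvc_ratio=0.78))  # -> PRISm
```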
Affiliation(s)
- Qianxi Jin, Ziwei Zhang, Taohu Zhou, Xiuxiu Zhou, Xin'ang Jiang, Yi Xia, Yu Guan, Shiyuan Liu, Li Fan: Department of Radiology, Second Affiliated Hospital of Naval Medical University, Shanghai, China
4
Xie K, Yang J, Wei D, Weng Z, Fua P. Efficient anatomical labeling of pulmonary tree structures via deep point-graph representation-based implicit fields. Med Image Anal 2025; 99:103367. [PMID: 39437582 DOI: 10.1016/j.media.2024.103367]
Abstract
Pulmonary diseases rank prominently among the principal causes of death worldwide. Curing them will require, among other things, a better understanding of the complex 3D tree-shaped structures within the pulmonary system, such as airways, arteries, and veins. Traditional approaches using high-resolution image stacks and standard CNNs on dense voxel grids face challenges in computational efficiency, limited resolution, local context, and inadequate preservation of shape topology. Our method addresses these issues by shifting from dense voxel to sparse point representation, offering better memory efficiency and global context utilization. However, the inherent sparsity in point representation can lead to a loss of crucial connectivity in tree-shaped structures. To mitigate this, we introduce graph learning on skeletonized structures, incorporating differentiable feature fusion for improved topology and long-distance context capture. Furthermore, we employ an implicit function for efficient conversion of sparse representations into dense reconstructions end-to-end. The proposed method not only delivers state-of-the-art performance in labeling accuracy, both overall and at key locations, but also enables efficient inference and the generation of closed surface shapes. Addressing data scarcity in this field, we have also curated a comprehensive dataset to validate our approach. Data and code are available at https://github.com/M3DV/pulmonary-tree-labeling.
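A common preprocessing step for this kind of point-graph approach, skeletonizing a segmented tree and linking neighboring skeleton voxels into graph edges, might look like the sketch below. This is not the authors' pipeline; it assumes a scikit-image version whose skeletonize handles 3D input and uses a toy tube in place of a real airway segmentation.

```python
import numpy as np
from skimage.morphology import skeletonize

# Toy binary "airway" volume; in practice this would be a segmented pulmonary tree.
volume = np.zeros((64, 64, 64), dtype=bool)
volume[10:54, 30:34, 30:34] = True            # a simple tube-like structure

skeleton = skeletonize(volume)                 # 1-voxel-wide centerline
coords = np.argwhere(skeleton)                 # (N, 3) skeleton point cloud

# Build graph edges between 26-connected skeleton voxels.
index_of = {tuple(c): i for i, c in enumerate(coords)}
offsets = [o for o in np.ndindex(3, 3, 3) if o != (1, 1, 1)]
edges = set()
for i, c in enumerate(coords):
    for off in offsets:
        nb = tuple(c + np.array(off) - 1)      # neighboring voxel coordinate
        j = index_of.get(nb)
        if j is not None:
            edges.add((min(i, j), max(i, j)))

print(len(coords), "skeleton nodes,", len(edges), "edges")
```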
Affiliation(s)
- Kangxian Xie: Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne 1015, Switzerland; Boston College, Chestnut Hill, MA 02467, USA
- Jiancheng Yang: Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne 1015, Switzerland
- Donglai Wei: Boston College, Chestnut Hill, MA 02467, USA
- Ziqiao Weng: University of Sydney, Camperdown NSW 2050, Australia
- Pascal Fua: Computer Vision Laboratory, Swiss Federal Institute of Technology Lausanne (EPFL), Lausanne 1015, Switzerland
5
Fufin M, Makarov V, Alfimov VI, Ananev VV, Ananeva A. Pulmonary Fissure Segmentation in CT Images Using Image Filtering and Machine Learning. Tomography 2024; 10:1645-1664. [PMID: 39453038 PMCID: PMC11510873 DOI: 10.3390/tomography10100121]
Abstract
BACKGROUND Both lung lobe segmentation and lung fissure segmentation are useful in the clinical diagnosis and evaluation of lung disease. It is often of clinical interest to quantify each lobe separately because many diseases are associated with specific lobes. Fissure segmentation is important for a significant proportion of lung lobe segmentation methods, as well as for assessing fissure completeness, since there is an increasing requirement for the quantification of fissure integrity.
METHODS We propose a method for the fully automatic segmentation of pulmonary fissures on lung computed tomography (CT) based on U-Net and PAN models, using a Derivative of Stick (DoS) filter for data preprocessing. Model ensembling is also used to improve prediction accuracy.
RESULTS Our method achieved an F1 score of 0.916 for right-lung fissures and 0.933 for left-lung fissures, which are significantly higher than the standalone DoS results (0.724 and 0.666, respectively). We also performed lung lobe segmentation using fissure segmentation. The lobe segmentation algorithm shows results close to those of state-of-the-art methods, with an average Dice score of 0.989.
CONCLUSIONS The proposed method segments pulmonary fissures efficiently and has low memory requirements, which makes it suitable for further research in this field involving rapid experimentation.
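The model ensembling and F1 evaluation mentioned above can be illustrated with a minimal sketch; the probability maps here are synthetic stand-ins for U-Net/PAN outputs, not the authors' models.

```python
import numpy as np

def ensemble_fissure_probability(prob_maps, threshold=0.5):
    """Average per-voxel fissure probabilities from several models
    (e.g., U-Net and PAN variants) and threshold to a binary mask."""
    mean_prob = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return mean_prob >= threshold

def f1_score(pred, ref):
    """Voxel-wise F1 (equivalent to Dice for binary masks)."""
    tp = np.logical_and(pred, ref).sum()
    fp = np.logical_and(pred, ~ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

rng = np.random.default_rng(0)
reference = rng.random((32, 128, 128)) > 0.98                 # sparse "fissure" voxels
model_outputs = [np.clip(reference + rng.normal(0, 0.3, reference.shape), 0, 1)
                 for _ in range(3)]                            # three noisy probability maps
prediction = ensemble_fissure_probability(model_outputs)
print(round(f1_score(prediction, reference), 3))
```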
Affiliation(s)
- Mikhail Fufin: Medical Informatics Laboratory, Yaroslav-the-Wise Novgorod State University, 41 B. St. Petersburgskaya, Veliky Novgorod 173003, Russia (V.M.); (V.I.A.); (V.V.A.); (A.A.)
6
Spina S, Mantz L, Xin Y, Moscho DC, Ribeiro De Santis Santiago R, Grassi L, Nova A, Gerard SE, Bittner EA, Fintelmann FJ, Berra L, Cereda M. The pleural gradient does not reflect the superimposed pressure in patients with class III obesity. Crit Care 2024; 28:306. [PMID: 39285477 PMCID: PMC11406718 DOI: 10.1186/s13054-024-05097-6]
Abstract
BACKGROUND The superimposed pressure is the primary determinant of the pleural pressure gradient. Obesity is associated with elevated end-expiratory esophageal pressure, regardless of lung disease severity, and the superimposed pressure might not be the only determinant of the pleural pressure gradient. The study aims to measure partitioned respiratory mechanics and superimposed pressure in a cohort of patients admitted to the ICU with and without class III obesity (BMI ≥ 40 kg/m²), and to quantify the amount of thoracic adipose tissue and muscle through advanced imaging techniques.
METHODS This is a single-center observational study including ICU-admitted patients with acute respiratory failure who underwent a chest computed tomography scan within three days before/after esophageal manometry. The superimposed pressure was calculated from lung density and the height of the largest axial lung slice. Automated deep-learning pipelines segmented lung parenchyma and quantified thoracic adipose tissue and skeletal muscle.
RESULTS N = 18 participants (50% female, age 60 [30-66] years) were included, 9 with BMI < 30 kg/m² and 9 with BMI ≥ 40 kg/m². Groups showed no significant differences in age, sex, clinical severity scores, or mortality. Patients with BMI ≥ 40 kg/m² exhibited higher esophageal pressure (15.8 ± 2.6 vs. 8.3 ± 4.9 cmH2O, p = 0.001) and a higher pleural pressure gradient (11.1 ± 4.5 vs. 6.3 ± 4.9 cmH2O, p = 0.04), while the superimposed pressure did not differ (6.8 ± 1.1 vs. 6.5 ± 1.5 cmH2O, p = 0.59). Subcutaneous and intrathoracic adipose tissue were significantly higher in subjects with BMI ≥ 40 kg/m² and correlated positively with esophageal pressure and pleural pressure gradient (p < 0.05). Muscle areas did not differ between groups.
CONCLUSIONS In patients with class III obesity, the superimposed pressure does not approximate the pleural pressure gradient, which is higher than in patients with lower BMI. The quantity and distribution of subcutaneous and intrathoracic adiposity also contribute to increased pleural pressure gradients in individuals with BMI ≥ 40 kg/m². This study introduces a novel physiological concept that provides a solid rationale for tailoring mechanical ventilation in patients with high BMI, where specific guideline recommendations are lacking.
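The superimposed-pressure calculation mentioned in the methods (lung density times lung height) can be sketched as below. The linear HU-to-density conversion and the example numbers are assumptions for illustration, not the study's exact implementation.

```python
import numpy as np

def ct_to_density_g_per_ml(hu):
    """Approximate tissue density from CT numbers, assuming a linear scale
    between air (-1000 HU, ~0 g/mL) and water (0 HU, 1 g/mL)."""
    return np.clip((np.asarray(hu, dtype=float) + 1000.0) / 1000.0, 0.0, None)

def superimposed_pressure_cmh2o(lung_hu_values, lung_height_cm):
    """Superimposed pressure ~ mean lung density (g/mL) x lung height (cm),
    since a 1 cm column of a 1 g/mL fluid exerts roughly 1 cmH2O."""
    mean_density = ct_to_density_g_per_ml(lung_hu_values).mean()
    return mean_density * lung_height_cm

# Hypothetical example: mean lung density of -700 HU over a 20 cm ventrodorsal height.
print(round(superimposed_pressure_cmh2o([-700.0], 20.0), 1))  # ~6.0 cmH2O
```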
Affiliation(s)
- Stefano Spina: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
- Lea Mantz: Department of Radiology, Massachusetts General Hospital, Boston, USA; Department of Diagnostic and Interventional Radiology, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Yi Xin: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
- David C Moscho: Department of Radiology, Massachusetts General Hospital, Boston, USA; Department of Diagnostic and Interventional Radiology, Medical Faculty, University Clinic Duesseldorf, Heinrich-Heine University Duesseldorf, Düsseldorf, Germany
- Roberta Ribeiro De Santis Santiago: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
- Luigi Grassi: Anestesia Rianimazione Donna-Bambino, Ospedale Maggiore Policlinico, Milan, Italy
- Alice Nova: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
- Sarah E Gerard: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
- Edward A Bittner: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
- Florian J Fintelmann: Harvard Medical School, Boston, USA; Department of Radiology, Massachusetts General Hospital, Boston, USA
- Lorenzo Berra: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
- Maurizio Cereda: Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, USA; Harvard Medical School, Boston, USA
7
Lagier D, Zeng C, Kaczka DW, Zhu M, Grogg K, Gerard SE, Reinhardt JM, Ribeiro GCM, Rashid A, Winkler T, Vidal Melo MF. Mechanical ventilation guided by driving pressure optimizes local pulmonary biomechanics in an ovine model. Sci Transl Med 2024; 16:eado1097. [PMID: 39141699 DOI: 10.1126/scitranslmed.ado1097]
Abstract
Mechanical ventilation exposes the lung to injurious stresses and strains that can negatively affect clinical outcomes in acute respiratory distress syndrome or cause pulmonary complications after general anesthesia. Excess global lung strain, estimated as increased respiratory system driving pressure, is associated with mortality related to mechanical ventilation. The role of small-dimension biomechanical factors underlying this association and their spatial heterogeneity within the lung are currently unknown. Using four-dimensional computed tomography with a voxel resolution of 2.4 cubic millimeters and a multiresolution convolutional neural network for whole-lung image segmentation, we dynamically measured voxel-wise lung inflation and tidal parenchymal strains. Healthy or injured ovine lungs were evaluated as the mechanical ventilation positive end-expiratory pressure (PEEP) was titrated from 20 to 2 centimeters of water. The PEEP of minimal driving pressure (PEEPDP) optimized local lung biomechanics. We observed a greater rate of change in nonaerated lung mass with respect to PEEP below PEEPDP compared with PEEP values above this threshold. PEEPDP similarly characterized a breaking point in the relationships between PEEP and SD of local tidal parenchymal strain, the 95th percentile of local strains, and the magnitude of tidal overdistension. These findings advance the understanding of lung collapse, tidal overdistension, and strain heterogeneity as local triggers of ventilator-induced lung injury in large-animal lungs similar to those of humans and could inform the clinical management of mechanical ventilation to improve local lung biomechanics.
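Selecting the PEEP of minimal driving pressure (PEEPDP) from a decremental titration reduces to a small computation, sketched below with hypothetical pressures (driving pressure = plateau pressure minus PEEP); this is not the study's analysis code.

```python
# Hypothetical decremental PEEP titration: at each PEEP step the plateau
# pressure is recorded and driving pressure = plateau - PEEP (all in cmH2O).
titration = [
    {"peep": 20, "plateau": 32},
    {"peep": 16, "plateau": 27},
    {"peep": 12, "plateau": 22},
    {"peep": 8,  "plateau": 19},
    {"peep": 4,  "plateau": 18},
    {"peep": 2,  "plateau": 17},
]

for step in titration:
    step["driving_pressure"] = step["plateau"] - step["peep"]

peep_dp = min(titration, key=lambda s: s["driving_pressure"])["peep"]
print(f"PEEP of minimal driving pressure: {peep_dp} cmH2O")  # -> 12 cmH2O for these values
```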
Affiliation(s)
- David Lagier: Experimental Interventional Imaging Laboratory (LIIE), European Center for Research in Medical Imaging (CERIMED), Aix Marseille University, Marseille 13005, France; Department of Anesthesia and Critical Care, University Hospital La Timone, APHM, Marseille 13005, France
- Congli Zeng: Department of Anesthesiology, Vagelos College of Physicians and Surgeons, Columbia University, New York City, NY 10032, USA
- David W Kaczka: Departments of Anesthesia and Radiology, University of Iowa, Iowa City, IA 52242, USA; Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242, USA
- Min Zhu: Guizhou University South Campus, Guiyang City 550025, China
- Kira Grogg: Yale PET Center, Department of Radiology and Biomedical Imaging, Yale University, New Haven, CT 06520, USA
- Sarah E Gerard: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242, USA
- Joseph M Reinhardt: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242, USA
- Gabriel C Motta Ribeiro: Biomedical Engineering Program, Alberto Luiz Coimbra Institute for Graduate Studies and Research in Engineering, Universidade Federal do Rio de Janeiro, Rio de Janeiro 21941-594, Brazil
- Azman Rashid: Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Tilo Winkler: Department of Anesthesia, Critical Care, and Pain Medicine, Massachusetts General Hospital and Harvard Medical School, Boston, MA 02114, USA
- Marcos F Vidal Melo: Department of Anesthesiology, Vagelos College of Physicians and Surgeons, Columbia University, New York City, NY 10032, USA
8
Gerard SE, Dougherty TM, Nagpal P, Jin D, Han MK, Newell JD, Saha PK, Comellas AP, Cooper CB, Couper D, Fortis S, Guo J, Hansel NN, Kanner RE, Kazeroni EA, Martinez FJ, Motahari A, Paine R, Rennard S, Schroeder JD, Woodruff PG, Barr RG, Smith BM, Hoffman EA. Vessel and Airway Characteristics in One-Year Computed Tomography-defined Rapid Emphysema Progression: SPIROMICS. Ann Am Thorac Soc 2024; 21:1022-1033. [PMID: 38530051 PMCID: PMC11284327 DOI: 10.1513/annalsats.202304-383oc]
Abstract
Rationale: Rates of emphysema progression vary in chronic obstructive pulmonary disease (COPD), and the relationships with vascular and airway pathophysiology remain unclear.
Objectives: We sought to determine if indices of peripheral (segmental and beyond) pulmonary arterial dilation measured on computed tomography (CT) are associated with a 1-year index of emphysema (EI; percentage of voxels < -950 Hounsfield units) progression.
Methods: Five hundred ninety-nine former and never-smokers (Global Initiative for Chronic Obstructive Lung Disease stages 0-3) were evaluated from the SPIROMICS (Subpopulations and Intermediate Outcome Measures in COPD Study) cohort: rapid emphysema progressors (RPs; n = 188, 1-year ΔEI > 1%), nonprogressors (n = 301, 1-year ΔEI within ±0.5%), and never-smokers (n = 110). Segmental pulmonary arterial cross-sectional areas were standardized to associated airway luminal areas (segmental pulmonary artery-to-airway ratio [PAARseg]). Full-inspiratory CT scan-derived total (arteries and veins) pulmonary vascular volume (TPVV) was compared with small vessel volume (radius smaller than 0.75 mm). Ratios of airway to lung volume (an index of dysanapsis and COPD risk) were compared with ratios of TPVV to lung volume.
Results: Compared with nonprogressors, RPs exhibited significantly larger PAARseg (0.73 ± 0.29 vs. 0.67 ± 0.23; P = 0.001), lower ratios of TPVV to lung volume (3.21 ± 0.42% vs. 3.48 ± 0.38%; P = 5.0 × 10⁻¹²), lower ratios of airway to lung volume (0.031 ± 0.003 vs. 0.034 ± 0.004; P = 6.1 × 10⁻¹³), and larger ratios of small vessel volume to TPVV (37.91 ± 4.26% vs. 35.53 ± 4.89%; P = 1.9 × 10⁻⁷). In adjusted analyses, an increment of 1 standard deviation in PAARseg was associated with a 98.4% higher rate of severe exacerbations (95% confidence interval, 29-206%; P = 0.002) and 79.3% higher odds of being in the RP group (95% confidence interval, 24-157%; P = 0.001). At 2-year follow-up, the CT-defined RP group demonstrated a significant decline in postbronchodilator percentage predicted forced expiratory volume in 1 second.
Conclusions: Rapid one-year progression of emphysema was associated with indices indicative of higher peripheral pulmonary vascular resistance and a possible role played by pulmonary vascular-airway dysanapsis.
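The emphysema index and the progressor grouping described above reduce to simple voxel counting and thresholding; a minimal sketch with synthetic HU values (not SPIROMICS data or code) follows.

```python
import numpy as np

def emphysema_index(lung_hu, threshold=-950):
    """Emphysema index: percentage of lung voxels below the HU threshold."""
    return 100.0 * np.mean(np.asarray(lung_hu) < threshold)

def progression_group(ei_baseline, ei_one_year):
    """Grouping as described above: rapid progressor if the 1-year change in EI
    exceeds 1 percentage point; nonprogressor if the change stays within 0.5."""
    delta = ei_one_year - ei_baseline
    if delta > 1.0:
        return "rapid progressor"
    if abs(delta) <= 0.5:
        return "nonprogressor"
    return "intermediate"

rng = np.random.default_rng(1)
baseline_hu = rng.normal(-860, 60, size=100_000)   # synthetic voxel HU values
followup_hu = rng.normal(-872, 60, size=100_000)
print(progression_group(emphysema_index(baseline_hu), emphysema_index(followup_hu)))
```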
Affiliation(s)
- Prashant Nagpal: Department of Radiology, University of Wisconsin–Madison, Madison, Wisconsin
- Dakai Jin: Department of Electrical and Computer Engineering
- John D. Newell: Roy J. Carver Department of Biomedical Engineering; Department of Radiology
- Punam K. Saha: Department of Electrical and Computer Engineering; Department of Radiology
- Christopher B. Cooper: Department of Medicine, University of California, Los Angeles, Los Angeles, California
- David Couper: Department of Biostatistics, University of North Carolina, Chapel Hill, North Carolina
- Junfeng Guo: Roy J. Carver Department of Biomedical Engineering; Department of Radiology
- Nadia N. Hansel: Department of Medicine, The Johns Hopkins University, Baltimore, Maryland
- Ella A. Kazeroni: Department of Radiology, Medical School, University of Michigan, Ann Arbor, Michigan
- Stephen Rennard: Department of Internal Medicine, University of Nebraska, Omaha, Nebraska
- Prescott G. Woodruff: Department of Medicine, University of California, San Francisco, San Francisco, California
- R. Graham Barr: Department of Medicine; Department of Epidemiology, College of Medicine, Columbia University, New York, New York
- Benjamin M. Smith: Department of Medicine; Department of Epidemiology, College of Medicine, Columbia University, New York, New York; Department of Medicine, McGill University, Montreal, Quebec, Canada
- Eric A. Hoffman: Roy J. Carver Department of Biomedical Engineering; Department of Radiology; Department of Medicine, University of Iowa, Iowa City, Iowa
9
Chaudhary MFA, Gerard SE, Christensen GE, Cooper CB, Schroeder JD, Hoffman EA, Reinhardt JM. LungViT: Ensembling Cascade of Texture Sensitive Hierarchical Vision Transformers for Cross-Volume Chest CT Image-to-Image Translation. IEEE Trans Med Imaging 2024; 43:2448-2465. [PMID: 38373126 PMCID: PMC11227912 DOI: 10.1109/tmi.2024.3367321]
Abstract
Chest computed tomography (CT) at inspiration is often complemented by an expiratory CT to identify peripheral airways disease. Additionally, co-registered inspiratory-expiratory volumes can be used to derive various markers of lung function. Expiratory CT scans, however, may not be acquired due to dose or scan time considerations or may be inadequate due to motion or insufficient exhale, leading to a missed opportunity to evaluate underlying small airways disease. Here, we propose LungViT, a generative adversarial learning approach using hierarchical vision transformers for translating inspiratory CT intensities to corresponding expiratory CT intensities. LungViT addresses several limitations of traditional generative models, including slicewise discontinuities, the limited size of generated volumes, and their inability to model texture transfer at the volumetric level. We propose a shifted-window hierarchical vision transformer architecture with squeeze-and-excitation decoder blocks for modeling dependencies between features. We also propose a multiview texture similarity distance metric for texture and style transfer in 3D. To incorporate global information into the training process and refine the output of our model, we use ensemble cascading. LungViT is able to generate large 3D volumes of size 320 × 320 × 320. We train and validate our model using a diverse cohort of 1500 subjects with varying disease severity. To assess model generalizability beyond the development set biases, we evaluate our model on an out-of-distribution external validation set of 200 subjects. Clinical validation on internal and external testing sets shows that synthetic volumes could be reliably adopted for deriving clinical endpoints of chronic obstructive pulmonary disease.
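The squeeze-and-excitation decoder blocks mentioned above follow a standard channel-attention pattern; the sketch below is a generic 3D SE block in PyTorch, not the LungViT source code, and the channel and reduction sizes are arbitrary.

```python
import torch
import torch.nn as nn

class SqueezeExcite3D(nn.Module):
    """Generic squeeze-and-excitation block for 3D feature maps: global-average-pool
    each channel, pass through a small bottleneck MLP, and rescale channels by the
    resulting sigmoid gates."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c = x.shape[:2]
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * gates

feat = torch.randn(1, 32, 8, 16, 16)           # (batch, channels, D, H, W)
print(SqueezeExcite3D(32)(feat).shape)          # torch.Size([1, 32, 8, 16, 16])
```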
10
Tada DK, Teng P, Vyapari K, Banola A, Foster G, Diaz E, Kim GHJ, Goldin JG, Abtin F, McNitt-Gray M, Brown MS. Quantifying lung fissure integrity using a three-dimensional patch-based convolutional neural network on CT images for emphysema treatment planning. J Med Imaging (Bellingham) 2024; 11:034502. [PMID: 38817711 PMCID: PMC11135203 DOI: 10.1117/1.jmi.11.3.034502]
Abstract
Purpose Evaluation of lung fissure integrity is required to determine whether emphysema patients have complete fissures and are candidates for endobronchial valve (EBV) therapy. We propose a deep learning (DL) approach to segment fissures using a three-dimensional patch-based convolutional neural network (CNN) and quantitatively assess fissure integrity on CT to evaluate it in subjects with severe emphysema.
Approach From an anonymized image database of patients with severe emphysema, 129 CT scans were used. Lung lobe segmentations were performed to identify lobar regions, and the boundaries among these regions were used to construct approximate interlobar regions of interest (ROIs). The interlobar ROIs were annotated by expert image analysts to identify voxels where the fissure was present and create a reference ROI that excluded non-fissure voxels (where the fissure is incomplete). A CNN configured by nnU-Net was trained using 86 CT scans and their corresponding reference ROIs to segment the ROIs of the left oblique fissure (LOF), right oblique fissure (ROF), and right horizontal fissure (RHF). For an independent test set of 43 cases, fissure integrity was quantified by mapping the segmented fissure ROI along the interlobar ROI. A fissure integrity score (FIS) was then calculated as the percentage of voxels in the interlobar ROI labeled as fissure. The predicted FIS (p-FIS) was quantified from the CNN output, and statistical analyses were performed comparing the p-FIS and the reference FIS (r-FIS).
Results The absolute percent error mean (±SD) between r-FIS and p-FIS for the test set was 4.0% (±4.1%), 6.0% (±9.3%), and 12.2% (±12.5%) for the LOF, ROF, and RHF, respectively.
Conclusions A DL approach was developed to segment lung fissures on CT images and accurately quantify the FIS. It has potential to assist in the identification of emphysema patients who would benefit from EBV treatment.
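The fissure integrity score defined above is a straightforward ratio; a minimal sketch on a toy interlobar ROI (not the authors' implementation) follows.

```python
import numpy as np

def fissure_integrity_score(fissure_mask, interlobar_roi):
    """FIS as described above: percentage of voxels in the interlobar ROI
    that are labeled as fissure."""
    roi = np.asarray(interlobar_roi, dtype=bool)
    fissure = np.asarray(fissure_mask, dtype=bool) & roi
    return 100.0 * fissure.sum() / roi.sum()

# Toy example: a 100-voxel interlobar surface with fissure absent over 30 voxels.
roi = np.zeros((10, 10, 10), dtype=bool)
roi[:, :, 4] = True
fissure = roi.copy()
fissure[7:, :, 4] = False                       # incomplete fissure region
print(fissure_integrity_score(fissure, roi))    # -> 70.0
```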
Affiliation(s)
- Dallas K. Tada, Pangyu Teng, Kalyani Vyapari, Ashley Banola, George Foster, Esteban Diaz, Grace Hyun J. Kim, Jonathan G. Goldin, Fereidoun Abtin, Michael McNitt-Gray, Matthew S. Brown: The University of California, Los Angeles (UCLA), David Geffen School of Medicine at UCLA, Center for Computer Vision and Imaging Biomarkers, Department of Radiological Sciences, Los Angeles, California, United States
11
Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: Current challenges and future directions. Neural Netw 2024; 169:637-659. [PMID: 37972509 DOI: 10.1016/j.neunet.2023.11.006]
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues. Hence, detecting cancer at an early stage is highly essential. Currently, medical images play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automatic decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also includes a brief discussion of state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, the datasets used for cancer detection, the limitations of existing solutions, and future trends and challenges in this domain are discussed. The ultimate goal of this paper is to provide comprehensive and insightful information to researchers who have a keen interest in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma: School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak: Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray: Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer: Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak: School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
12
Saha PK, Nadeem SA, Comellas AP. A Survey on Artificial Intelligence in Pulmonary Imaging. Wiley Interdiscip Rev Data Min Knowl Discov 2023; 13:e1510. [PMID: 38249785 PMCID: PMC10796150 DOI: 10.1002/widm.1510]
Abstract
Over the last decade, deep learning (DL) has contributed a paradigm shift in computer vision and image recognition, creating widespread opportunities for using artificial intelligence in research as well as industrial applications. DL has been extensively studied in medical imaging applications, including those related to pulmonary diseases. Chronic obstructive pulmonary disease, asthma, lung cancer, pneumonia, and, more recently, COVID-19 are common lung diseases affecting nearly 7.4% of the world population. Pulmonary imaging has been widely investigated toward improving our understanding of disease etiologies and early diagnosis and assessment of disease progression and clinical outcomes. DL has been broadly applied to solve various pulmonary image processing challenges including classification, recognition, registration, and segmentation. This paper presents a survey of pulmonary diseases, roles of imaging in translational and clinical pulmonary research, and applications of different DL architectures and methods in pulmonary imaging, with emphasis on DL-based segmentation of major pulmonary anatomies such as lung volumes, lung lobes, pulmonary vessels, and airways, as well as thoracic musculoskeletal anatomies related to pulmonary diseases.
Affiliation(s)
- Punam K Saha: Departments of Radiology and Electrical and Computer Engineering, University of Iowa, Iowa City, IA, 52242
13
Gerard SE, Chaudhary MFA, Herrmann J, Christensen GE, Estépar RSJ, Reinhardt JM, Hoffman EA. Direct estimation of regional lung volume change from paired and single CT images using residual regression neural network. Med Phys 2023; 50:5698-5714. [PMID: 36929883 PMCID: PMC10743098 DOI: 10.1002/mp.16365]
Abstract
BACKGROUND Chest computed tomography (CT) enables characterization of pulmonary diseases by producing high-resolution and high-contrast images of the intricate lung structures. Deformable image registration is used to align chest CT scans at different lung volumes, yielding estimates of local tissue expansion and contraction.
PURPOSE We investigated the utility of deep generative models for directly predicting local tissue volume change from lung CT images, bypassing computationally expensive iterative image registration and providing a method that can be utilized in scenarios where either one or two CT scans are available.
METHODS A residual regression convolutional neural network, called Reg3DNet+, is proposed for directly regressing high-resolution images of local tissue volume change (i.e., Jacobian) from CT images. Image registration was performed between lung volumes at total lung capacity (TLC) and functional residual capacity (FRC) using a tissue mass- and structure-preserving registration algorithm. The Jacobian image was calculated from the registration-derived displacement field and used as the ground truth for local tissue volume change. Four separate Reg3DNet+ models were trained to predict Jacobian images using a multifactorial study design to compare the effects of network input (i.e., single image vs. paired images) and output space (i.e., FRC vs. TLC). The models were trained and evaluated on image datasets from the COPDGene study. Models were evaluated against the registration-derived Jacobian images using local, regional, and global evaluation metrics.
RESULTS Statistical analysis revealed that both factors, network input and output space, were significant determinants of change in evaluation metrics. Paired-input models performed better than single-input models, and model performance was better in the output space of FRC rather than TLC. Mean structural similarity index for paired-input models was 0.959 and 0.956 for FRC and TLC output spaces, respectively, and for single-input models was 0.951 and 0.937. Global evaluation metrics demonstrated correlation between registration-derived Jacobian mean and predicted Jacobian mean: the coefficient of determination (r²) for paired-input models was 0.974 and 0.938 for FRC and TLC output spaces, respectively, and for single-input models was 0.598 and 0.346. After correcting for effort, registration-derived lobar volume change was strongly correlated with the predicted lobar volume change: for paired-input models r² was 0.899 for both FRC and TLC output spaces, and for single-input models r² was 0.803 and 0.862, respectively.
CONCLUSIONS Convolutional neural networks can be used to directly predict local tissue mechanics, eliminating the need for computationally expensive image registration. Networks that use paired CT images acquired at TLC and FRC allow for more accurate prediction of local tissue expansion compared to networks that use a single image. Networks that only require a single input image still show promising results, particularly after correcting for effort, and allow for local tissue expansion estimation in cases where multiple CT scans are not available. For single-input networks, the FRC image is more predictive of local tissue volume change compared to the TLC image.
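The ground-truth local volume change described above, the Jacobian determinant of the registration displacement field, can be computed with finite differences. The sketch below assumes a displacement field given in voxel units on a regular grid and is not the authors' registration code.

```python
import numpy as np

def jacobian_determinant(displacement):
    """Voxel-wise determinant of the Jacobian of the transform x -> x + u(x).
    `displacement` has shape (3, Z, Y, X) in voxel units; values > 1 indicate
    local expansion, < 1 local contraction."""
    grads = [np.gradient(displacement[i]) for i in range(3)]   # du_i / d(z, y, x)
    J = np.empty(displacement.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

# Synthetic example: uniform 10% expansion along each axis -> determinant ~1.331.
shape = (32, 32, 32)
grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"))
disp = 0.1 * grid.astype(float)
print(jacobian_determinant(disp).mean().round(3))   # ~1.331
```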
Affiliation(s)
- Sarah E. Gerard: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA; Department of Radiology, University of Iowa, Iowa City, Iowa, USA
- Jacob Herrmann: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Gary E. Christensen: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA; Department of Radiation Oncology, University of Iowa, Iowa City, Iowa, USA
- Joseph M. Reinhardt: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA; Department of Radiology, University of Iowa, Iowa City, Iowa, USA
- Eric A. Hoffman: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA; Department of Radiology, University of Iowa, Iowa City, Iowa, USA
14
Althof ZW, Gerard SE, Eskandari A, Galizia MS, Hoffman EA, Reinhardt JM. Attention U-net for automated pulmonary fissure integrity analysis in lung computed tomography images. Sci Rep 2023; 13:14135. [PMID: 37644125 PMCID: PMC10465516 DOI: 10.1038/s41598-023-41322-y]
Abstract
Computed tomography (CT) imaging is routinely used for imaging the lungs. Deep learning can effectively automate complex and laborious tasks in medical imaging. In this work, a deep learning technique is utilized to assess lobar fissure completeness (also known as fissure integrity) from pulmonary CT images. The human lungs are divided into five separate lobes, delineated by the lobar fissures. Fissure integrity assessment is important for endobronchial valve treatment screening. Fissure integrity is known to be a biomarker of collateral ventilation between lobes, impacting the efficacy of valves designed to block airflow to diseased lung regions. Fissure integrity is also likely to impact lobar sliding, which has recently been shown to affect lung biomechanics. Further widescale study of fissure integrity's impact on disease susceptibility and progression requires rapid, reproducible, and noninvasive fissure integrity assessment. In this paper we describe IntegrityNet, an attention U-Net based automatic fissure integrity analysis tool. IntegrityNet is able to predict fissure integrity with an accuracy of 95.8%, 96.1%, and 89.8% for the left oblique, right oblique, and right horizontal fissures, respectively, compared to manual analysis on a dataset of 82 subjects. We also show that our method is robust to COPD severity and reproducible across subject scans acquired at different time points.
Affiliation(s)
- Zachary W Althof: 5601 Seamans Center for the Engineering Arts and Sciences, University of Iowa Roy J. Carver Department of Biomedical Engineering, Iowa City, IA, 52242, USA
- Sarah E Gerard: University of Iowa Department of Radiology, Iowa City, IA, USA
- Ali Eskandari: University of Iowa Department of Radiology, Iowa City, IA, USA
- Eric A Hoffman: 5601 Seamans Center for the Engineering Arts and Sciences, University of Iowa Roy J. Carver Department of Biomedical Engineering, Iowa City, IA, 52242, USA; University of Iowa Department of Radiology, Iowa City, IA, USA
- Joseph M Reinhardt: 5601 Seamans Center for the Engineering Arts and Sciences, University of Iowa Roy J. Carver Department of Biomedical Engineering, Iowa City, IA, 52242, USA; University of Iowa Department of Radiology, Iowa City, IA, USA
15
Wallat EM, Wuschner AE, Flakus MJ, Gerard SE, Christensen GE, Reinhardt JM, Bayouth JE. Predicting pulmonary ventilation damage after radiation therapy for nonsmall cell lung cancer using a ResNet generative adversarial network. Med Phys 2023; 50:3199-3209. [PMID: 36779695 DOI: 10.1002/mp.16311]
Abstract
BACKGROUND Functional lung avoidance radiation therapy (RT) is a technique being investigated to preferentially avoid specific regions of the lung that are predicted to be more susceptible to radiation-induced damage. Reducing the dose delivered to high-functioning regions may reduce the occurrence of radiation-induced lung injuries (RILIs) and toxicities. However, in order to develop effective lung function-sparing plans, accurate predictions of post-RT ventilation change are needed to determine which regions of the lung should be spared.
PURPOSE To predict pulmonary ventilation change following RT for nonsmall cell lung cancer using machine learning.
METHODS A conditional generative adversarial network (cGAN) was developed with data from 82 human subjects enrolled in a randomized clinical trial approved by the institution's IRB to predict post-RT pulmonary ventilation change. The inputs to the network were the pre-RT pulmonary ventilation map and the radiation dose distribution. The loss function was a combination of the binary cross-entropy loss and an asymmetrical structural similarity index measure (aSSIM) function designed to increase the penalization of under-prediction of ventilation damage. Network performance was evaluated against a previously developed polynomial regression model using a paired-sample t-test for comparison. Evaluation was performed using eight-fold cross-validation.
RESULTS From the eight-fold cross-validation, we found that relative to the polynomial model, the cGAN model significantly improved prediction of regions of ventilation damage following radiotherapy based on true positive rate (TPR), 0.14 ± 0.15 to 0.72 ± 0.21, and Dice similarity coefficient (DSC), 0.19 ± 0.16 to 0.46 ± 0.14, but significantly declined in true negative rate, 0.97 ± 0.05 to 0.62 ± 0.21, and accuracy, 0.79 ± 0.08 to 0.65 ± 0.14. Additionally, the average true positive volume increased from 104 ± 119 cc in the POLY model to 565 ± 332 cc in the cGAN model, and the average false negative volume decreased from 654 ± 361 cc in the POLY model to 193 ± 163 cc in the cGAN model.
CONCLUSIONS The proposed cGAN model demonstrated significant improvement in TPR and DSC. The higher sensitivity of the cGAN model can improve the clinical utility of functional lung avoidance RT by identifying larger volumes of functional lung that can be spared and thus decrease the probability of the patient developing RILIs.
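The TPR and DSC endpoints used above are simple overlap statistics between predicted and reference damage masks; a minimal sketch with synthetic masks (not the study's evaluation code) follows.

```python
import numpy as np

def tpr_and_dice(predicted_damage, reference_damage):
    """True positive rate and Dice similarity coefficient between predicted and
    reference binary maps of post-RT ventilation damage."""
    pred = np.asarray(predicted_damage, dtype=bool)
    ref = np.asarray(reference_damage, dtype=bool)
    tp = np.logical_and(pred, ref).sum()
    fn = np.logical_and(~pred, ref).sum()
    tpr = tp / (tp + fn) if (tp + fn) else float("nan")
    total = pred.sum() + ref.sum()
    dice = 2 * tp / total if total else 1.0
    return tpr, dice

rng = np.random.default_rng(2)
reference = rng.random((64, 64, 64)) > 0.9                       # synthetic damage map
predicted = np.logical_or(reference & (rng.random(reference.shape) > 0.3),
                          rng.random(reference.shape) > 0.97)    # imperfect prediction
tpr, dice = tpr_and_dice(predicted, reference)
print(round(tpr, 2), round(dice, 2))
```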
Affiliation(s)
- Eric M Wallat: Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Antonia E Wuschner: Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Mattison J Flakus: Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin, USA
- Sarah E Gerard: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA
- Gary E Christensen: Department of Electrical and Computer Engineering, University of Iowa, Iowa City, Iowa, USA; Department of Radiation Oncology, University of Iowa, Iowa City, Iowa, USA
- Joseph M Reinhardt: Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa, USA; Department of Radiology, University of Iowa, Iowa City, Iowa, USA
- John E Bayouth: Department of Radiation Medicine, Oregon Health & Science University, Portland, Oregon, USA
16
He L, Meng Y, Zhong J, Tang L, Chui C, Zhang J. Preoperative path planning algorithm for lung puncture biopsy based on path constraint and multidimensional space distance optimization. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104304]
17
Hsia CCW, Bates JHT, Driehuys B, Fain SB, Goldin JG, Hoffman EA, Hogg JC, Levin DL, Lynch DA, Ochs M, Parraga G, Prisk GK, Smith BM, Tawhai M, Vidal Melo MF, Woods JC, Hopkins SR. Quantitative Imaging Metrics for the Assessment of Pulmonary Pathophysiology: An Official American Thoracic Society and Fleischner Society Joint Workshop Report. Ann Am Thorac Soc 2023; 20:161-195. [PMID: 36723475 PMCID: PMC9989862 DOI: 10.1513/annalsats.202211-915st]
Abstract
Multiple thoracic imaging modalities have been developed to link structure to function in the diagnosis and monitoring of lung disease. Volumetric computed tomography (CT) renders three-dimensional maps of lung structures and may be combined with positron emission tomography (PET) to obtain dynamic physiological data. Magnetic resonance imaging (MRI) using ultrashort-echo time (UTE) sequences has improved signal detection from lung parenchyma; contrast agents are used to deduce airway function, ventilation-perfusion-diffusion, and mechanics. Proton MRI can measure regional ventilation-perfusion ratio. Quantitative imaging (QI)-derived endpoints have been developed to identify structure-function phenotypes, including air-blood-tissue volume partition, bronchovascular remodeling, emphysema, fibrosis, and textural patterns indicating architectural alteration. Coregistered landmarks on paired images obtained at different lung volumes are used to infer airway caliber, air trapping, gas and blood transport, compliance, and deformation. This document summarizes fundamental "good practice" stereological principles in QI study design and analysis; evaluates technical capabilities and limitations of common imaging modalities; and assesses major QI endpoints regarding underlying assumptions and limitations, ability to detect and stratify heterogeneous, overlapping pathophysiology, and monitor disease progression and therapeutic response, correlated with and complementary to, functional indices. The goal is to promote unbiased quantification and interpretation of in vivo imaging data, compare metrics obtained using different QI modalities to ensure accurate and reproducible metric derivation, and avoid misrepresentation of inferred physiological processes. The role of imaging-based computational modeling in advancing these goals is emphasized. Fundamental principles outlined herein are critical for all forms of QI irrespective of acquisition modality or disease entity.
18
Carmo D, Ribeiro J, Dertkigil S, Appenzeller S, Lotufo R, Rittner L. A Systematic Review of Automated Segmentation Methods and Public Datasets for the Lung and its Lobes and Findings on Computed Tomography Images. Yearb Med Inform 2022; 31:277-295. [PMID: 36463886] [PMCID: PMC9719778] [DOI: 10.1055/s-0042-1742517]
Abstract
OBJECTIVES Automated computational segmentation of the lung, its lobes, and related findings in X-ray-based computed tomography (CT) images is a challenging problem with important applications, including medical research, surgical planning, and diagnostic decision support. With the increase in large imaging cohorts and the need for fast and robust evaluation of normal and abnormal lungs and their lobes, several authors have proposed automated methods for lung assessment on CT images. In this paper we intend to provide a comprehensive summarization of these methods. METHODS We used a systematic approach to perform an extensive review of automated lung segmentation methods. We searched PubMed and Scopus to conduct our review and included methods that perform segmentation of the lung parenchyma, lobes, or internal disease-related findings. The review was not limited by date, but rather by only including methods providing quantitative evaluation. RESULTS We organized and classified all 234 included articles into various categories according to methodological similarities among them. We provide summarizations of quantitative evaluations, public datasets, evaluation metrics, and overall statistics indicating recent research directions of the field. CONCLUSIONS We noted the rise of data-driven models in the last decade, especially due to the deep learning trend, increasing the demand for high-quality data annotation. This has instigated an increase in semi-supervised and uncertainty-guided works that try to be less dependent on human annotation. In addition, the question of how to evaluate the robustness of data-driven methods remains open, given that evaluations derived from specific datasets are not general.
Affiliation(s)
- Diedre Carmo
- School of Electrical and Computer Engineering, University of Campinas, Brazil
- Jean Ribeiro
- School of Electrical and Computer Engineering, University of Campinas, Brazil
- Roberto Lotufo
- School of Electrical and Computer Engineering, University of Campinas, Brazil
- Leticia Rittner
- School of Electrical and Computer Engineering, University of Campinas, Brazil. Correspondence to: Leticia Rittner, Av. Albert Einstein, 400, Cidade Universitária Zeferino Vaz, Barão Geraldo, Campinas, SP 13083-852, Brazil
19
Wang L. Deep Learning Techniques to Diagnose Lung Cancer. Cancers (Basel) 2022; 14:5569. [PMID: 36428662] [PMCID: PMC9688236] [DOI: 10.3390/cancers14225569]
Abstract
Medical imaging tools are essential in early-stage lung cancer diagnostics and the monitoring of lung cancer during treatment. Various medical imaging modalities, such as chest X-ray, magnetic resonance imaging, positron emission tomography, computed tomography, and molecular imaging techniques, have been extensively studied for lung cancer detection. These techniques have limitations, however, including the inability to classify cancer images automatically, which makes them unsuitable for patients with other pathologies. It is urgently necessary to develop a sensitive and accurate approach to the early diagnosis of lung cancer. Deep learning is one of the fastest-growing topics in medical imaging, with rapidly emerging applications spanning medical image-based and textual data modalities. With the help of deep learning-based medical imaging tools, clinicians can detect and classify lung nodules more accurately and quickly. This paper presents the recent development of deep learning-based imaging techniques for early lung cancer detection.
Affiliation(s)
- Lulu Wang
- Biomedical Device Innovation Center, Shenzhen Technology University, Shenzhen 518118, China
20
Xue M, Han L, Song Y, Rao F, Peng D. A Fissure-Aided Registration Approach for Automatic Pulmonary Lobe Segmentation Using Deep Learning. Sensors (Basel, Switzerland) 2022; 22:8560. [PMID: 36366258] [PMCID: PMC9656539] [DOI: 10.3390/s22218560]
Abstract
The segmentation of pulmonary lobes is important in clinical assessment, lesion location, and surgical planning. Automatic lobe segmentation is challenging, mainly due to incomplete fissures or the morphological variation resulting from lung disease. In this work, we propose a learning-based approach that incorporates information from the local fissures, the whole lung, and a priori pulmonary anatomical knowledge to separate the lobes robustly and accurately. The prior pulmonary atlas is registered to the test CT images with the aid of the detected fissures. The result of the lobe segmentation is obtained by mapping the deformation function onto the lobe-annotated atlas. The proposed method is evaluated on a custom COPD dataset. Twenty-four CT scans randomly selected from the custom dataset were segmented manually and are available to the public. The experiments showed that the average Dice coefficients were 0.95, 0.90, 0.97, 0.97, and 0.97, respectively, for the right upper, right middle, right lower, left upper, and left lower lobes. Moreover, a comparison with a former learning-based segmentation approach suggests that the presented method achieves comparable segmentation accuracy and behaves more robustly in cases with morphological specificity.
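The per-lobe scores quoted above are Dice coefficients. For reference, a minimal sketch of this overlap metric between a predicted and a reference lobe mask is shown below; the array names and the label convention are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Hypothetical usage with integer lobe label maps (labels 1-5 for the five lobes):
# scores = {lobe: dice_coefficient(pred_labels == lobe, ref_labels == lobe)
#           for lobe in range(1, 6)}
```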
Affiliation(s)
- Mengfan Xue
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
- Lu Han
- Philips Healthcare, Shanghai 200072, China
- Yiran Song
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Fan Rao
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
- Dongliang Peng
- School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China
- Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou 311121, China
21
Herrmann J, Kollisch-Singule M, Satalin J, Nieman GF, Kaczka DW. Assessment of Heterogeneity in Lung Structure and Function During Mechanical Ventilation: A Review of Methodologies. Journal of Engineering and Science in Medical Diagnostics and Therapy 2022; 5:040801. [PMID: 35832339] [PMCID: PMC9132008] [DOI: 10.1115/1.4054386]
Abstract
The mammalian lung is characterized by heterogeneity in both its structure and function, by incorporating an asymmetric branching airway tree optimized for maintenance of efficient ventilation, perfusion, and gas exchange. Despite potential benefits of naturally occurring heterogeneity in the lungs, there may also be detrimental effects arising from pathologic processes, which may result in deficiencies in gas transport and exchange. Regardless of etiology, pathologic heterogeneity results in the maldistribution of regional ventilation and perfusion, impairments in gas exchange, and increased work of breathing. In extreme situations, heterogeneity may result in respiratory failure, necessitating support with a mechanical ventilator. This review will present a summary of measurement techniques for assessing and quantifying heterogeneity in respiratory system structure and function during mechanical ventilation. These methods have been grouped according to four broad categories: (1) inverse modeling of heterogeneous mechanical function; (2) capnography and washout techniques to measure heterogeneity of gas transport; (3) measurements of heterogeneous deformation on the surface of the lung; and finally (4) imaging techniques used to observe spatially-distributed ventilation or regional deformation. Each technique varies with regard to spatial and temporal resolution, degrees of invasiveness, risks posed to patients, as well as suitability for clinical implementation. Nonetheless, each technique provides a unique perspective on the manifestations and consequences of mechanical heterogeneity in the diseased lung.
Affiliation(s)
- Jacob Herrmann
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242
- Joshua Satalin
- Department of Surgery, SUNY Upstate Medical University, Syracuse, NY 13210
- Gary F. Nieman
- Department of Surgery, SUNY Upstate Medical University, Syracuse, NY 13210
- David W. Kaczka
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA 52242; Department of Anesthesia, University of Iowa, Iowa City, IA 52242; Department of Radiology, University of Iowa, Iowa City, IA 52242
22
Doraiswami PR, Sarveshwaran V, Swamidason ITJ, Sorna SCD. Jaya-tunicate swarm algorithm based generative adversarial network for COVID-19 prediction with chest computed tomography images. Concurrency and Computation: Practice & Experience 2022; 34:e7211. [PMID: 35945987] [PMCID: PMC9353441] [DOI: 10.1002/cpe.7211]
Abstract
A novel coronavirus disease (COVID-19) has emerged as a respiratory syndrome in recent years. Chest computed tomography scanning is a significant technology for monitoring and predicting COVID-19, and predicting COVID-19 in patients at an early stage remains an open challenge for the research community. Therefore, an effective prediction mechanism named the Jaya-tunicate swarm algorithm driven generative adversarial network (Jaya-TSA with GAN) is proposed in this research to identify patients with COVID-19 infection. The developed Jaya-TSA is the incorporation of the Jaya algorithm with the tunicate swarm algorithm (TSA). Lung lobes are segmented using Bayesian fuzzy clustering, which effectively finds the boundary regions of the lung lobes. Based on the extracted features, COVID-19 prediction is accomplished using a GAN. The optimal solution is obtained by training the GAN with the proposed Jaya-TSA with respect to a fitness measure. The dimensionality of the features is reduced by extracting the optimal features, which increases the speed of the training process. The developed Jaya-TSA-based GAN attained a specificity, accuracy, and sensitivity of 0.8857, 0.8727, and 0.85, respectively, when varying the training data.
Affiliation(s)
- Velliangiri Sarveshwaran
- Department of Computational Intelligence, SRM Institute of Science and Technology, Kattankulathur Campus, Chennai, India
23
A semi-supervised learning approach for COVID-19 detection from chest CT scans. Neurocomputing 2022; 503:314-324. [PMID: 35765410] [PMCID: PMC9221925] [DOI: 10.1016/j.neucom.2022.06.076]
Abstract
COVID-19 has spread rapidly all over the world and has infected more than 200 countries and regions. Early screening of suspected infected patients is essential for preventing and combating COVID-19. Computed Tomography (CT) is a fast and efficient tool which can quickly provide chest scan results. To reduce the burden on doctors of reading CTs, in this article, a high precision diagnosis algorithm of COVID-19 from chest CTs is designed for intelligent diagnosis. A semi-supervised learning approach is developed to solve the problem when only small amount of labelled data is available. While following the MixMatch rules to conduct sophisticated data augmentation, we introduce a model training technique to reduce the risk of model over-fitting. At the same time, a new data enhancement method is proposed to modify the regularization term in MixMatch. To further enhance the generalization of the model, a convolutional neural network based on an attention mechanism is then developed that enables to extract multi-scale features on CT scans. The proposed algorithm is evaluated on an independent CT dataset of the chest from COVID-19 and achieves the area under the receiver operating characteristic curve (AUC) value of 0.932, accuracy of 90.1%, sensitivity of 91.4%, specificity of 88.9%, and F1-score of 89.9%. The results show that the proposed algorithm can accurately diagnose whether a chest CT belongs to a positive or negative indication of COVID-19, and can help doctors to diagnose rapidly in the early stages of a COVID-19 outbreak.
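The accuracy, sensitivity, specificity, and F1-score reported above all derive from the binary confusion matrix. A minimal sketch of how such metrics are typically computed from predicted and true labels follows; the variable names and the label encoding (1 = COVID-19 positive) are assumptions for illustration, not the authors' code.

```python
import numpy as np

def binary_classification_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, sensitivity, specificity, and F1-score from binary labels."""
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    sensitivity = tp / (tp + fn)          # recall on the positive (COVID-19) class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": sensitivity,
        "specificity": specificity,
        "f1": 2 * precision * sensitivity / (precision + sensitivity),
    }
```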
24
Boubnovski MM, Chen M, Linton-Reid K, Posma JM, Copley SJ, Aboagye EO. Development of a multi-task learning V-Net for pulmonary lobar segmentation on CT and application to diseased lungs. Clin Radiol 2022; 77:e620-e627. [PMID: 35636974] [DOI: 10.1016/j.crad.2022.04.012]
Abstract
AIM To develop a multi-task learning (MTL) V-Net for pulmonary lobar segmentation on computed tomography (CT) and application to diseased lungs. MATERIALS AND METHODS The described methodology utilises tracheobronchial tree information to enhance segmentation accuracy through the algorithm's spatial familiarity to define lobar extent more accurately. The method undertakes parallel segmentation of lobes and auxiliary tissues simultaneously by employing MTL in conjunction with V-Net-attention, a popular convolutional neural network in the imaging realm. Its performance was validated by an external dataset of patients with four distinct lung conditions: severe lung cancer, COVID-19 pneumonitis, collapsed lungs, and chronic obstructive pulmonary disease (COPD), even though the training data included none of these cases. RESULTS The following Dice scores were achieved on a per-segment basis: normal lungs 0.97, COPD 0.94, lung cancer 0.94, COVID-19 pneumonitis 0.94, and collapsed lung 0.92, all at p<0.05. CONCLUSION Despite severe abnormalities, the model provided good performance at segmenting lobes, demonstrating the benefit of tissue learning. The proposed model is poised for adoption in the clinical setting as a robust tool for radiologists and researchers to define the lobar distribution of lung diseases and aid in disease treatment planning.
Affiliation(s)
- M M Boubnovski
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK
- M Chen
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK; Department of Radiology, Hammersmith Hospital, Imperial College Healthcare NHS Trust, London W12 0HS, UK
- K Linton-Reid
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK
- J M Posma
- Department of Metabolism, Digestion and Reproduction, South Kensington, London SW7 2AZ, UK
- S J Copley
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK; Department of Radiology, Hammersmith Hospital, Imperial College Healthcare NHS Trust, London W12 0HS, UK
- E O Aboagye
- Comprehensive Cancer Imaging Centre, Department of Surgery & Cancer, Imperial College London, Hammersmith Hospital, London W12 0NN, UK.
25
Pang H, Wu Y, Qi S, Li C, Shen J, Yue Y, Qian W, Wu J. A fully automatic segmentation pipeline of pulmonary lobes before and after lobectomy from computed tomography images. Comput Biol Med 2022; 147:105792. [PMID: 35780601] [DOI: 10.1016/j.compbiomed.2022.105792]
Abstract
BACKGROUND AND OBJECTIVE Lobectomy is a curative treatment for localized lung cancer. The study aims to construct an automatic pipeline for segmenting pulmonary lobes before and after lobectomy from CT images. MATERIALS AND METHODS Six datasets (D1 to D6) of 865 CT scans were collected from two hospitals and public resources. Four nnU-Net-based segmentation models were trained. A lobectomy classification was proposed to automatically recognize the category of the input CT images: before lobectomy or one of five types after lobectomy. Finally, the lobe segmentation before and after lobectomy was realized by integrating the four models and lobectomy classification. The dice similarity coefficient (DSC), 95% Hausdorff distance (HD95) and average symmetric surface distance (ASSD) were used to evaluate the segmentations. RESULTS The pre-operative model achieved an average DSC of 0.964, 0.929, 0.934, and 0.891 in the four datasets. In D1 and D2, the average HD95 was 4.18 and 7.74 mm and the average ASSD was 0.86 and 1.32 mm, respectively. The lobectomy classification achieved an accuracy of 100%. After lobectomy, an average DSC of 0.973 and 0.936, an average HD95 of 2.70 and 6.92 mm, an average ASSD of 0.57 and 1.78 mm were obtained in D1 and D2, respectively. The postoperative segmentation pipeline outperformed other counterparts and training strategies. CONCLUSIONS The proposed pipeline can automatically segment pulmonary lobes before and after lobectomy from CT images and be applied to manage patients with lung cancer after lobectomy.
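For context on the distance-based metrics quoted above (ASSD and the 95% Hausdorff distance), the sketch below shows one common way of computing them from two binary masks with SciPy; the mask names, voxel spacing, and the percentile-based HD95 variant are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy import ndimage

def surface_distances(pred, ref, spacing=(1.0, 1.0, 1.0)):
    """Symmetric surface-to-surface distances (in mm) between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    pred_surf = pred ^ ndimage.binary_erosion(pred)   # boundary voxels of the prediction
    ref_surf = ref ^ ndimage.binary_erosion(ref)      # boundary voxels of the reference
    dist_to_ref = ndimage.distance_transform_edt(~ref_surf, sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(~pred_surf, sampling=spacing)
    return np.concatenate([dist_to_ref[pred_surf], dist_to_pred[ref_surf]])

# Hypothetical usage with one lobe mask pair and anisotropic voxel spacing:
# d = surface_distances(lobe_pred, lobe_ref, spacing=(0.7, 0.7, 1.0))
# assd = d.mean()               # average symmetric surface distance
# hd95 = np.percentile(d, 95)   # widely used 95th-percentile Hausdorff variant
```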
Affiliation(s)
- Haowen Pang
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Yanan Wu
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Shouliang Qi
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Chen Li
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Jing Shen
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, China.
- Yong Yue
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China.
- Wei Qian
- College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
- Jianlin Wu
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, China.
26
Wang JM, Ram S, Labaki WW, Han MK, Galbán CJ. CT-Based Commercial Software Applications: Improving Patient Care Through Accurate COPD Subtyping. Int J Chron Obstruct Pulmon Dis 2022; 17:919-930. [PMID: 35502294] [PMCID: PMC9056100] [DOI: 10.2147/copd.s334592]
Abstract
Chronic obstructive pulmonary disease (COPD) is heterogenous in its clinical manifestations and disease progression. Patients often have disease courses that are difficult to predict with readily available data, such as lung function testing. The ability to better classify COPD into well-defined groups will allow researchers and clinicians to tailor novel therapies, monitor their effects, and improve patient-centered outcomes. Different modalities of assessing these COPD phenotypes are actively being studied, and an area of great promise includes the use of quantitative computed tomography (QCT) techniques focused on key features such as airway anatomy, lung density, and vascular morphology. Over the last few decades, companies around the world have commercialized automated CT software packages that have proven immensely useful in these endeavors. This article reviews the key features of several commercial platforms, including the technologies they are based on, the metrics they can generate, and their clinical correlations and applications. While such tools are increasingly being used in research and clinical settings, they have yet to be consistently adopted for diagnostic work-up and treatment planning, and their full potential remains to be explored.
Affiliation(s)
- Jennifer M Wang
- Division of Pulmonary and Critical Care Medicine, University of Michigan, Ann Arbor, MI, USA
- Sundaresh Ram
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA
- Wassim W Labaki
- Division of Pulmonary and Critical Care Medicine, University of Michigan, Ann Arbor, MI, USA
- MeiLan K Han
- Division of Pulmonary and Critical Care Medicine, University of Michigan, Ann Arbor, MI, USA
- Craig J Galbán
- Department of Radiology, University of Michigan, Ann Arbor, MI, USA. Correspondence: Craig J Galbán, Department of Radiology, University of Michigan, BSRB, Room A506, 109 Zina Pitcher Place, Ann Arbor, MI 48109-2200, USA; Tel +1 734-764-8726; Fax +1 734-615-1599
27
Overview of Deep Learning Models in Biomedical Domain with the Help of R Statistical Software. Serbian Journal of Experimental and Clinical Research 2022. [DOI: 10.2478/sjecr-2018-0063]
Abstract
With the increase in the volume of data and the presence of structured and unstructured data in the biomedical field, there is a need to build models which can handle complex and non-linear relations in the data and also predict and classify outcomes with higher accuracy. Deep learning models are one such class of models: they can handle complex and nonlinear data and have been used increasingly in the biomedical field in recent years. Deep learning methodology evolved from artificial neural networks, which process the input data through multiple hidden layers with a higher level of abstraction. Deep learning networks are used in various fields such as image processing, speech recognition, fraud detection, classification, and prediction. The objective of this paper is to provide an overview of deep learning models and their application in the biomedical domain using the R statistical software. Deep learning concepts are illustrated using the R statistical software package, and X-ray images from NIH datasets are used to explain the prediction accuracy of the deep learning models. Deep learning models helped to classify the outcomes under study with 91% accuracy. The paper provides an overview of deep learning models, their types, and their applications in the biomedical domain, and demonstrates the effect of a deep learning network in classifying images into normal and diseased classes with 91% accuracy with the help of the R statistical package.
28
San José Estépar R. Artificial intelligence in functional imaging of the lung. Br J Radiol 2022; 95:20210527. [PMID: 34890215] [PMCID: PMC9153712] [DOI: 10.1259/bjr.20210527]
Abstract
Artificial intelligence (AI) is transforming the way we perform advanced imaging. From high-resolution image reconstruction to predicting functional response from clinically acquired data, AI is promising to revolutionize clinical evaluation of lung performance, pushing the boundary in pulmonary functional imaging for patients suffering from respiratory conditions. In this review, we overview the current developments and expound on some of the encouraging new frontiers. We focus on the recent advances in machine learning and deep learning that enable reconstructing images, quantitating, and predicting functional responses of the lung. Finally, we shed light on the potential opportunities and challenges ahead in adopting AI for functional lung imaging in clinical settings.
Affiliation(s)
- Raúl San José Estépar
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, United States
29
Hoffman EA. Origins of and lessons from quantitative functional X-ray computed tomography of the lung. Br J Radiol 2022; 95:20211364. [PMID: 35193364] [PMCID: PMC9153696] [DOI: 10.1259/bjr.20211364]
Abstract
Functional CT of the lung has emerged from quantitative CT (qCT). Structural details extracted at multiple lung volumes offer indices of function. Additionally, single volumetric images, if acquired at standardized lung volumes and body posture, can be used to model function by employing such engineering techniques as computational fluid dynamics. With the emergence of multispectral CT imaging including dual energy from energy integrating CT scanners and multienergy binning using the newly released photon counting CT technology, function is tagged via use of contrast agents. Lung disease phenotypes have previously been lumped together by the limitations of spirometry and plethysmography. QCT and its functional embodiment have been imbedded into studies seeking to characterize chronic obstructive pulmonary disease, severe asthma, interstitial lung disease and more. Reductions in radiation dose by an order of magnitude or more have been achieved. At the same time, we have seen significant increases in spatial and density resolution along with methodologic validations of extracted metrics. Together, these have allowed attention to turn towards more mild forms of disease and younger populations. In early applications, clinical CT offered anatomic details of the lung. Functional CT offers regional measures of lung mechanics, the assessment of functional small airways disease, as well as regional ventilation-perfusion matching (V/Q) and more. This paper will focus on the use of quantitative/functional CT for the non-invasive exploration of dynamic three-dimensional functioning of the breathing lung and beating heart within the unique negative pressure intrathoracic environment of the closed chest.
Affiliation(s)
- Eric A Hoffman
- Departments of Radiology, Internal Medicine, and Biomedical Engineering, University of Iowa, Iowa, United States
30
Astley JR, Wild JM, Tahir BA. Deep learning in structural and functional lung image analysis. Br J Radiol 2022; 95:20201107. [PMID: 33877878] [PMCID: PMC9153705] [DOI: 10.1259/bjr.20201107]
Abstract
The recent resurgence of deep learning (DL) has dramatically influenced the medical imaging field. Medical image analysis applications have been at the forefront of DL research efforts applied to multiple diseases and organs, including those of the lungs. The aims of this review are twofold: (i) to briefly overview DL theory as it relates to lung image analysis; (ii) to systematically review the DL research literature relating to the lung image analysis applications of segmentation, reconstruction, registration and synthesis. The review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. 479 studies were initially identified from the literature search with 82 studies meeting the eligibility criteria. Segmentation was the most common lung image analysis DL application (65.9% of papers reviewed). DL has shown impressive results when applied to segmentation of the whole lung and other pulmonary structures. DL has also shown great potential for applications in image registration, reconstruction and synthesis. However, the majority of published studies have been limited to structural lung imaging with only 12.9% of reviewed studies employing functional lung imaging modalities, thus highlighting significant opportunities for further research in this field. Although the field of DL in lung image analysis is rapidly expanding, concerns over inconsistent validation and evaluation strategies, intersite generalisability, transparency of methodological detail and interpretability need to be addressed before widespread adoption in clinical lung imaging workflow.
Affiliation(s)
- Jim M Wild
- Department of Oncology and Metabolism, The University of Sheffield, Sheffield, United Kingdom
31
Pompe E, Mohamed Hoesein FAA. Role of visual assessment of chronic obstructive pulmonary disease on chest CT: beauty is in the eye of the beholder. J Thorac Dis 2022; 13:6936-6939. [PMID: 35070377] [PMCID: PMC8743402] [DOI: 10.21037/jtd-21-1527]
Affiliation(s)
- Esther Pompe
- Department of Radiology, University Medical Center Utrecht, Utrecht, The Netherlands
32
Song J, Ding C, Huang Q, Luo T, Xu X, Chen Z, Li S. Deep learning predicts epidermal growth factor receptor mutation subtypes in lung adenocarcinoma. Med Phys 2021; 48:7891-7899. [PMID: 34669994] [DOI: 10.1002/mp.15307]
Abstract
PURPOSE This study aimed to explore the predictive ability of deep learning (DL) for the common epidermal growth factor receptor (EGFR) mutation subtypes in patients with lung adenocarcinoma. METHODS A total of 665 patients with lung adenocarcinoma (528/137) were recruited from two different institutions. In the training set, an 18-layer convolutional neural network (CNN) and fivefold cross-validation strategy were used to establish a CNN model. Subsequently, an independent external validation cohort from the other institution was used to evaluate the predictive efficacy of the CNN model. Grad-weighted class activation mapping (Grad-CAM) technology was used for the visual interpretation of the CNN model. In addition, this study also compared the prediction abilities of the radiomics and CNN models. Receiver operating characteristic (ROC) curves, accuracy and precision values, and recall and F1-score were used to evaluate the effectiveness of the CNN model and compare its performance with that of the radiomics model. RESULTS In the validation set, the micro- and macroaverage values of the area under the ROC curve of the CNN model to identify the three EGFR subtypes were 0.78 and 0.79, respectively. All evaluation indicators of the CNN model were better than those of the radiomics model. CONCLUSIONS Our study confirmed the potential of DL for predicting the EGFR mutation status in lung adenocarcinoma. The imaging phenotypes of the three mutation subtypes were found to be different, which can provide a basis for choosing more accurate and personalized treatment in patients with lung adenocarcinoma.
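The validation results above are summarized as micro- and macro-averaged areas under the ROC curve for a three-class problem. The sketch below shows one common way to compute these averages with scikit-learn; the class encoding and array names are assumptions for illustration, not the authors' code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def multiclass_auc(y_true: np.ndarray, y_prob: np.ndarray) -> dict:
    """Micro- and macro-averaged one-vs-rest AUC for three classes (e.g., EGFR subtypes 0, 1, 2)."""
    y_bin = label_binarize(y_true, classes=[0, 1, 2])   # indicator matrix, shape (n_samples, 3)
    return {
        "macro_auc": roc_auc_score(y_true, y_prob, multi_class="ovr", average="macro"),
        "micro_auc": roc_auc_score(y_bin, y_prob, average="micro"),
    }

# y_prob would be the per-class probabilities from the CNN, shape (n_samples, 3).
```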
Affiliation(s)
- Jiangdian Song
- School of Medical Informatics, China Medical University, Shenyang, China
- Changwei Ding
- Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Qinlai Huang
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Ting Luo
- Department of Radiology, Liaoning Cancer Hospital & Institute, Shenyang, China
- Xiaoman Xu
- Department of Pulmonary and Critical Care Medicine, Shengjing Hospital of China Medical University, Shenyang, China
- Zongjian Chen
- School of Medical Informatics, China Medical University, Shenyang, China
- Shu Li
- School of Medical Informatics, China Medical University, Shenyang, China
33
Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, Prince JL, Rueckert D, Summers RM. A review of deep learning in medical imaging: Imaging traits, technology trends, case studies with progress highlights, and future promises. Proceedings of the IEEE 2021; 109:820-838. [PMID: 37786449] [PMCID: PMC10544772] [DOI: 10.1109/jproc.2021.3054390]
Abstract
Since its renaissance, deep learning has been widely used in various medical imaging tasks and has achieved remarkable success in many medical imaging applications, thereby propelling us into the so-called artificial intelligence (AI) era. It is known that the success of AI is mostly attributed to the availability of big data with annotations for a single task and the advances in high performance computing. However, medical imaging presents unique challenges that confront deep learning approaches. In this survey paper, we first present traits of medical imaging, highlight both clinical needs and technical challenges in medical imaging, and describe how emerging trends in deep learning are addressing these issues. We cover the topics of network architecture, sparse and noisy labels, federating learning, interpretability, uncertainty quantification, etc. Then, we present several case studies that are commonly found in clinical practice, including digital pathology and chest, brain, cardiovascular, and abdominal imaging. Rather than presenting an exhaustive literature survey, we instead describe some prominent research highlights related to these case study applications. We conclude with a discussion and presentation of promising future directions.
Affiliation(s)
- S Kevin Zhou
- School of Biomedical Engineering, University of Science and Technology of China and Institute of Computing Technology, Chinese Academy of Sciences
- Hayit Greenspan
- Biomedical Engineering Department, Tel-Aviv University, Israel
- Christos Davatzikos
- Radiology Department and Electrical and Systems Engineering Department, University of Pennsylvania, USA
- James S Duncan
- Departments of Biomedical Engineering and Radiology & Biomedical Imaging, Yale University
- Anant Madabhushi
- Department of Biomedical Engineering, Case Western Reserve University and Louis Stokes Cleveland Veterans Administration Medical Center, USA
- Jerry L Prince
- Electrical and Computer Engineering Department, Johns Hopkins University, USA
- Daniel Rueckert
- Klinikum rechts der Isar, TU Munich, Germany and Department of Computing, Imperial College, UK
34
RPLS-Net: pulmonary lobe segmentation based on 3D fully convolutional networks and multi-task learning. Int J Comput Assist Radiol Surg 2021; 16:895-904. [PMID: 33846890] [DOI: 10.1007/s11548-021-02360-x]
Abstract
PURPOSE The robust and automatic segmentation of the pulmonary lobe is vital to surgical planning and regional image analysis of pulmonary related diseases in real-time Computer Aided Diagnosis systems. While a number of studies have examined this issue, the segmentation of unclear borders of the five lobes of the lung remains challenging because of incomplete fissures, the diversity of anatomical pulmonary information, and obstructive lesions caused by pulmonary diseases. This study proposes a model called Regularized Pulmonary Lobe Segmentation Network to accurately predict the lobes as well as the borders. METHODS First, a 3D fully convolutional network is constructed to extract contextual features from computed tomography images. Second, multi-task learning is employed to learn the segmentations of the lobes and the borders between them to train the neural network to better predict the borders via shared representation. Third, a 3D depth-wise separable de-convolution block is proposed for deep supervision to efficiently train the network. We also propose a hybrid loss function by combining cross-entropy loss with focal loss using adaptive parameters to focus on the tissues and the borders of the lobes. RESULTS Experiments are conducted on a dataset annotated by experienced clinical radiologists. A 4-fold cross-validation result demonstrates that the proposed approach can achieve a mean dice coefficient of 0.9421 and average symmetric surface distance of 1.3546 mm, which is comparable to state of the art methods. The proposed approach has the capability to accurately segment voxels that are near the lung wall and fissure. CONCLUSION In this paper, a 3D fully convolutional networks framework is proposed to segment pulmonary lobes in chest CT images accurately. Experimental results show the effectiveness of the proposed approach in segmenting the tissues as well as the borders of the lobes.
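The hybrid loss described above combines cross-entropy with a focal term. A minimal PyTorch sketch of such a combination is given below; the fixed weighting and parameter values are illustrative assumptions and stand in for the adaptive scheme proposed in the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_ce_focal_loss(logits: torch.Tensor, target: torch.Tensor,
                         gamma: float = 2.0, alpha: float = 0.5) -> torch.Tensor:
    """Weighted sum of cross-entropy and focal loss for multi-class voxel labels.

    logits: (N, C, D, H, W) raw network outputs; target: (N, D, H, W) integer labels.
    """
    ce = F.cross_entropy(logits, target, reduction="none")  # per-voxel cross-entropy
    pt = torch.exp(-ce)                                     # probability of the true class
    focal = ((1.0 - pt) ** gamma) * ce                      # down-weights easy voxels
    return alpha * ce.mean() + (1.0 - alpha) * focal.mean()
```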
35
Hasenstab KA, Yuan N, Retson T, Conrad DJ, Kligerman S, Lynch DA, Hsiao A. Automated CT Staging of Chronic Obstructive Pulmonary Disease Severity for Predicting Disease Progression and Mortality with a Deep Learning Convolutional Neural Network. Radiol Cardiothorac Imaging 2021; 3:e200477. [PMID: 33969307] [DOI: 10.1148/ryct.2021200477]
Abstract
Purpose To develop a deep learning-based algorithm to stage the severity of chronic obstructive pulmonary disease (COPD) through quantification of emphysema and air trapping on CT images and to assess the ability of the proposed stages to prognosticate 5-year progression and mortality. Materials and Methods In this retrospective study, an algorithm using co-registration and lung segmentation was developed in-house to automate quantification of emphysema and air trapping from inspiratory and expiratory CT images. The algorithm was then tested in a separate group of 8951 patients from the COPD Genetic Epidemiology study (date range, 2007-2017). With measurements of emphysema and air trapping, bivariable thresholds were determined to define CT stages of severity (mild, moderate, severe, and very severe) and were evaluated for their ability to prognosticate disease progression and mortality using logistic regression and Cox regression. Results On the basis of CT stages, the odds of disease progression were greatest among patients with very severe disease (odds ratio [OR], 2.67; 95% CI: 2.02, 3.53; P < .001) and were elevated in patients with moderate disease (OR, 1.50; 95% CI: 1.22, 1.84; P = .001). The hazard ratio of mortality for very severe disease at CT was 2.23 times the normal ratio (95% CI: 1.93, 2.58; P < .001). When combined with Global Initiative for Chronic Obstructive Lung Disease (GOLD) staging, patients with GOLD stage 2 disease had the greatest odds of disease progression when the CT stage was severe (OR, 4.48; 95% CI: 3.18, 6.31; P < .001) or very severe (OR, 4.72; 95% CI: 3.13, 7.13; P < .001). Conclusion Automated CT algorithms can facilitate staging of COPD severity, have diagnostic performance comparable with that of spirometric GOLD staging, and provide further prognostic value when used in conjunction with GOLD staging.Supplemental material is available for this article.© RSNA, 2021See also commentary by Kalra and Ebrahimian in this issue.
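The progression results above are reported as odds ratios from logistic regression. A minimal sketch of how odds ratios and their confidence intervals can be obtained from a fitted model is shown below; the toy data frame and variable names are hypothetical and are not the COPDGene analysis itself.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: binary progression outcome and a categorical CT severity stage.
df = pd.DataFrame({
    "progressed": [0, 0, 1, 0, 1, 1, 0, 1, 1, 1, 0],
    "ct_stage": ["mild", "mild", "mild", "moderate", "moderate", "severe",
                 "severe", "severe", "very_severe", "very_severe", "very_severe"],
})
X = pd.get_dummies(df["ct_stage"], drop_first=True).astype(float)  # "mild" as reference level
X = sm.add_constant(X)
result = sm.Logit(df["progressed"], X).fit(disp=0)

odds_ratios = np.exp(result.params)    # exponentiated coefficients are odds ratios
conf_int = np.exp(result.conf_int())   # 95% CI on the odds-ratio scale
print(odds_ratios, conf_int, sep="\n")
```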
Affiliation(s)
- Kyle A Hasenstab, Nancy Yuan, Tara Retson, Douglas J Conrad, Seth Kligerman, David A Lynch, Albert Hsiao
- Department of Radiology (K.A.H., N.Y., T.R., S.K., A.H.) and Department of Medicine (D.J.C.), University of California San Diego, 9452 Medical Center Dr, La Jolla, CA 92037; Department of Mathematics and Statistics, San Diego State University, San Diego, Calif (K.A.H.); and Department of Radiology, National Jewish Health, Denver, Colo (D.A.L.)
36
Barragán-Montero A, Javaid U, Valdés G, Nguyen D, Desbordes P, Macq B, Willems S, Vandewinckele L, Holmström M, Löfman F, Michiels S, Souris K, Sterpin E, Lee JA. Artificial intelligence and machine learning for medical imaging: A technology review. Phys Med 2021; 83:242-256. [PMID: 33979715] [PMCID: PMC8184621] [DOI: 10.1016/j.ejmp.2021.04.016]
Abstract
Artificial intelligence (AI) has recently become a very popular buzzword, as a consequence of disruptive technical advances and impressive experimental results, notably in the field of image analysis and processing. In medicine, specialties where images are central, like radiology, pathology or oncology, have seized the opportunity and considerable efforts in research and development have been deployed to transfer the potential of AI to clinical applications. With AI becoming a more mainstream tool for typical medical imaging analysis tasks, such as diagnosis, segmentation, or classification, the key for a safe and efficient use of clinical AI applications relies, in part, on informed practitioners. The aim of this review is to present the basic technological pillars of AI, together with the state-of-the-art machine learning methods and their application to medical imaging. In addition, we discuss the new trends and future research directions. This will help the reader to understand how AI methods are now becoming an ubiquitous tool in any medical image analysis workflow and pave the way for the clinical implementation of AI-based solutions.
Affiliation(s)
- Ana Barragán-Montero
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium.
- Umair Javaid
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Gilmer Valdés
- Department of Radiation Oncology, Department of Epidemiology and Biostatistics, University of California, San Francisco, USA
- Dan Nguyen
- Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, UT Southwestern Medical Center, USA
- Paul Desbordes
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Benoit Macq
- Information and Communication Technologies, Electronics and Applied Mathematics (ICTEAM), UCLouvain, Belgium
- Siri Willems
- ESAT/PSI, KU Leuven, Belgium & MIRC, UZ Leuven, Belgium
- Steven Michiels
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Kevin Souris
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
- Edmond Sterpin
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium; KU Leuven, Department of Oncology, Laboratory of Experimental Radiotherapy, Belgium
- John A Lee
- Molecular Imaging, Radiation and Oncology (MIRO) Laboratory, UCLouvain, Belgium
37
Nagpal P, Guo J, Shin KM, Lim JK, Kim KB, Comellas AP, Kaczka DW, Peterson S, Lee CH, Hoffman EA. Quantitative CT imaging and advanced visualization methods: potential application in novel coronavirus disease 2019 (COVID-19) pneumonia. BJR Open 2021; 3:20200043. [PMID: 33718766] [PMCID: PMC7931412] [DOI: 10.1259/bjro.20200043]
Abstract
Increasingly, quantitative lung computed tomography (qCT)-derived metrics are providing novel insights into chronic inflammatory lung diseases, including chronic obstructive pulmonary disease, asthma, interstitial lung disease, and more. Metrics related to parenchymal, airway, and vascular anatomy together with various measures associated with lung function including regional parenchymal mechanics, air trapping associated with functional small airways disease, and dual-energy derived measures of perfused blood volume are offering the ability to characterize disease phenotypes associated with the chronic inflammatory pulmonary diseases. With the emergence of COVID-19, together with its widely varying degrees of severity, its rapid progression in some cases, and the potential for lengthy post-COVID-19 morbidity, there is a new role in applying well-established qCT-based metrics. Based on the utility of qCT tools in other lung diseases, previously validated supervised classical machine learning methods, and emerging unsupervised machine learning and deep-learning approaches, we are now able to provide desperately needed insight into the acute and the chronic phases of this inflammatory lung disease. The potential areas in which qCT imaging can be beneficial include improved accuracy of diagnosis, identification of clinically distinct phenotypes, improvement of disease prognosis, stratification of care, and early objective evaluation of intervention response. There is also a potential role for qCT in evaluating an increasing population of post-COVID-19 lung parenchymal changes such as fibrosis. In this work, we discuss the basis of various lung qCT methods, using case-examples to highlight their potential application as a tool for the exploration and characterization of COVID-19, and offer scanning protocols to serve as templates for imaging the lung such that these established qCT analyses have the best chance at yielding the much needed new insights.
Affiliation(s)
- Prashant Nagpal
- Department of Radiology, University of Iowa, Carver College of Medicine, Iowa City, IA, USA
- Jae-Kwang Lim
- Department of Radiology, School of Medicine, Kyungpook National University, Daegu, South Korea
- Ki Beom Kim
- Department of Radiology, Daegu Fatima Hospital, Daegu, South Korea
- Alejandro P Comellas
- Department of Internal Medicine, University of Iowa, Carver College of Medicine, Iowa City, IA, USA
38
Gerard SE, Herrmann J, Xin Y, Martin KT, Rezoagli E, Ippolito D, Bellani G, Cereda M, Guo J, Hoffman EA, Kaczka DW, Reinhardt JM. CT image segmentation for inflamed and fibrotic lungs using a multi-resolution convolutional neural network. Sci Rep 2021; 11:1455. [PMID: 33446781] [PMCID: PMC7809065] [DOI: 10.1038/s41598-020-80936-4]
Abstract
The purpose of this study was to develop a fully-automated segmentation algorithm, robust to various density enhancing lung abnormalities, to facilitate rapid quantitative analysis of computed tomography images. A polymorphic training approach is proposed, in which both specifically labeled left and right lungs of humans with COPD, and nonspecifically labeled lungs of animals with acute lung injury, were incorporated into training a single neural network. The resulting network is intended for predicting left and right lung regions in humans with or without diffuse opacification and consolidation. Performance of the proposed lung segmentation algorithm was extensively evaluated on CT scans of subjects with COPD, confirmed COVID-19, lung cancer, and IPF, despite no labeled training data of the latter three diseases. Lobar segmentations were obtained using the left and right lung segmentation as input to the LobeNet algorithm. Regional lobar analysis was performed using hierarchical clustering to identify radiographic subtypes of COVID-19. The proposed lung segmentation algorithm was quantitatively evaluated using semi-automated and manually-corrected segmentations in 87 COVID-19 CT images, achieving an average symmetric surface distance of [Formula: see text] mm and Dice coefficient of [Formula: see text]. Hierarchical clustering identified four radiographical phenotypes of COVID-19 based on lobar fractions of consolidated and poorly aerated tissue. Lower left and lower right lobes were consistently more afflicted with poor aeration and consolidation. However, the most severe cases demonstrated involvement of all lobes. The polymorphic training approach was able to accurately segment COVID-19 cases with diffuse consolidation without requiring COVID-19 cases for training.
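The radiographic subtypes mentioned above come from hierarchical clustering of per-lobe fractions of consolidated and poorly aerated tissue. A minimal SciPy sketch of that kind of clustering is shown below; the feature matrix is a random stand-in, not the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in feature matrix: one row per patient, columns are lobar fractions
# of consolidated and poorly aerated tissue (ten features per patient here).
rng = np.random.default_rng(0)
features = rng.uniform(0.0, 0.6, size=(87, 10))

tree = linkage(features, method="ward")                 # agglomerative clustering
phenotype = fcluster(tree, t=4, criterion="maxclust")   # cut into four groups
print(np.bincount(phenotype)[1:])                       # patients per phenotype
```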
Affiliation(s)
- Sarah E Gerard
- Department of Radiology, University of Iowa, Iowa City, IA, USA.
- Jacob Herrmann
- Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Yi Xin
- Department of Radiology, University of Pennsylvania, Philadelphia, PA, USA
- Kevin T Martin
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, PA, USA
- Emanuele Rezoagli
- Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Department of Emergency and Intensive Care, San Gerardo Hospital, Monza, Italy
- Davide Ippolito
- Department of Diagnostic and Interventional Radiology, San Gerardo Hospital, Monza, Italy
- Giacomo Bellani
- Department of Medicine and Surgery, University of Milano-Bicocca, Monza, Italy
- Department of Emergency and Intensive Care, San Gerardo Hospital, Monza, Italy
- Maurizio Cereda
- Department of Anesthesiology and Critical Care, University of Pennsylvania, Philadelphia, PA, USA
- Junfeng Guo
- Department of Radiology, University of Iowa, Iowa City, IA, USA
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
- Eric A Hoffman
- Department of Radiology, University of Iowa, Iowa City, IA, USA
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
- David W Kaczka
- Department of Radiology, University of Iowa, Iowa City, IA, USA
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
- Department of Anesthesia, University of Iowa, Iowa City, IA, USA
- Joseph M Reinhardt
- Department of Radiology, University of Iowa, Iowa City, IA, USA
- Roy J. Carver Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA
39
Abdel-Basset M, Chang V, Hawash H, Chakrabortty RK, Ryan M. FSS-2019-nCov: A deep learning architecture for semi-supervised few-shot segmentation of COVID-19 infection. Knowl Based Syst 2021; 212:106647. [PMID: 33519100] [PMCID: PMC7836902] [DOI: 10.1016/j.knosys.2020.106647]
Abstract
The newly discovered coronavirus (COVID-19) pneumonia is providing major challenges to research in terms of diagnosis and disease quantification. Deep-learning (DL) techniques allow extremely precise image segmentation; yet, they necessitate huge volumes of manually labeled data to be trained in a supervised manner. Few-Shot Learning (FSL) paradigms tackle this issue by learning a novel category from a small number of annotated instances. We present an innovative semi-supervised few-shot segmentation (FSS) approach for efficient segmentation of 2019-nCov infection (FSS-2019-nCov) from only a few amounts of annotated lung CT scans. The key challenge of this study is to provide accurate segmentation of COVID-19 infection from a limited number of annotated instances. For that purpose, we propose a novel dual-path deep-learning architecture for FSS. Every path contains encoder-decoder (E-D) architecture to extract high-level information while maintaining the channel information of COVID-19 CT slices. The E-D architecture primarily consists of three main modules: a feature encoder module, a context enrichment (CE) module, and a feature decoder module. We utilize the pre-trained ResNet34 as an encoder backbone for feature extraction. The CE module is designated by a newly introduced proposed Smoothed Atrous Convolution (SAC) block and Multi-scale Pyramid Pooling (MPP) block. The conditioner path takes the pairs of CT images and their labels as input and produces a relevant knowledge representation that is transferred to the segmentation path to be used to segment the new images. To enable effective collaboration between both paths, we propose an adaptive recombination and recalibration (RR) module that permits intensive knowledge exchange between paths with a trivial increase in computational complexity. The model is extended to multi-class labeling for various types of lung infections. This contribution overcomes the limitation of the lack of large numbers of COVID-19 CT scans. It also provides a general framework for lung disease diagnosis in limited data situations.
Affiliation(s)
- Mohamed Abdel-Basset
- Faculty of Computers and Informatics, Zagazig University, Zagazig, Sharqiyah, 44519, Egypt
- Victor Chang
- School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough, UK
- Hossam Hawash
- Faculty of Computers and Informatics, Zagazig University, Zagazig, Sharqiyah, 44519, Egypt
- Ripon K Chakrabortty
- Capability Systems Centre, School of Engineering and IT, UNSW Canberra, Australia
- Michael Ryan
- Capability Systems Centre, School of Engineering and IT, UNSW Canberra, Australia
40
Khadidos A, Khadidos AO, Kannan S, Natarajan Y, Mohanty SN, Tsaramirsis G. Analysis of COVID-19 Infections on a CT Image Using DeepSense Model. Front Public Health 2020; 8:599550. [PMID: 33330341 PMCID: PMC7714903 DOI: 10.3389/fpubh.2020.599550] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Accepted: 10/16/2020] [Indexed: 11/17/2022] Open
Abstract
In this paper, a data mining model built on a hybrid deep learning framework is designed to diagnose the medical conditions of patients infected with coronavirus disease 2019 (COVID-19). The hybrid model combines a convolutional neural network (CNN) and a recurrent neural network (RNN) and is named the DeepSense method. It is designed as a series of layers to extract and classify features of COVID-19 infection from the lungs. Computed tomography images are used as input data, and the classifier is designed to ease the classification process by learning the multidimensional input data through the expert hidden layers. The model is validated against medical image datasets to predict infections using deep learning classifiers. The results show that the DeepSense classifier offers improved accuracy over conventional deep and machine learning classifiers. The proposed method is validated against three different datasets with 70%, 80%, and 90% training splits. It specifically characterizes the quality of the diagnostic method adopted for predicting COVID-19 infection in a patient.
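A common way to realize a CNN-plus-RNN hybrid like the one described is to encode each CT slice with a small CNN and aggregate the slice sequence with an LSTM. The sketch below, with made-up layer sizes and tensor shapes, shows that pattern in PyTorch; it is not the published DeepSense architecture.

```python
# Hedged sketch of a generic CNN+RNN classifier over a stack of CT slices.
import torch
import torch.nn as nn

class CnnRnnClassifier(nn.Module):
    def __init__(self, num_classes=2, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.proj = nn.Linear(32, feat_dim)
        self.rnn = nn.LSTM(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, num_classes)

    def forward(self, volume):                            # volume: (B, slices, 1, H, W)
        b, s = volume.shape[:2]
        feats = self.cnn(volume.flatten(0, 1)).flatten(1)  # per-slice CNN features
        feats = self.proj(feats).view(b, s, -1)            # (B, S, feat_dim)
        _, (h, _) = self.rnn(feats)                        # aggregate slices with LSTM
        return self.head(h[-1])                            # patient-level logits

model = CnnRnnClassifier()
logits = model(torch.randn(2, 8, 1, 64, 64))  # 2 toy volumes of 8 slices each
print(logits.shape)                           # torch.Size([2, 2])
```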
Affiliation(s)
- Adil Khadidos
- Department of Information Technology, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Alaa O Khadidos
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia
- Srihari Kannan
- Department of Computer Science and Engineering, SNS College of Engineering, Coimbatore, India
- Yuvaraj Natarajan
- Research and Development, Information Communication Technology Academy, Chennai, India
- Sachi Nandan Mohanty
- Department of Computer Science and Engineering, Institute of Chartered Financial Analysts of India Foundation of Higher Education, Hyderabad, India
41
Shankar K, Perumal E. A novel hand-crafted with deep learning features based fusion model for COVID-19 diagnosis and classification using chest X-ray images. COMPLEX INTELL SYST 2020; 7:1277-1293. [PMID: 34777955 PMCID: PMC7659408 DOI: 10.1007/s40747-020-00216-6] [Citation(s) in RCA: 40] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/07/2020] [Accepted: 10/06/2020] [Indexed: 11/25/2022]
Abstract
The COVID-19 pandemic is growing at an exponential rate, while access to rapid test kits remains restricted. The design and implementation of COVID-19 testing kits therefore remain an open research problem. Several findings obtained using radiological imaging suggest that the images contain important information related to the coronavirus. The application of recently developed artificial intelligence (AI) techniques, integrated with radiological imaging, is helpful in the precise diagnosis and classification of the disease. In this view, the paper presents a novel fusion model of hand-crafted and deep learning features (FM-HCF-DLF) for diagnosis and classification of COVID-19. The proposed FM-HCF-DLF model comprises three major processes: Gaussian filtering-based preprocessing, fusion-model (FM) feature extraction, and classification. The FM fuses handcrafted features, obtained with local binary patterns (LBP), and deep learning (DL) features extracted with the convolutional neural network (CNN)-based Inception v3 model. To further improve the performance of the Inception v3 model, a learning rate scheduler with the Adam optimizer is applied. Finally, a multilayer perceptron (MLP) carries out the classification. The proposed FM-HCF-DLF model was experimentally validated using a chest X-ray dataset. The experimental outcomes showed that the proposed model yielded superior performance, with a maximum sensitivity of 93.61%, specificity of 94.56%, precision of 94.85%, accuracy of 94.08%, F score of 93.2%, and kappa value of 93.5%.
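The fusion step can be illustrated by concatenating an LBP histogram with a deep feature vector and feeding the result to an MLP. The sketch below uses scikit-image and scikit-learn with random stand-ins for the images and the Inception v3 features; names and sizes are illustrative only, not the authors' pipeline.

```python
# Minimal sketch of handcrafted + deep feature fusion followed by an MLP classifier.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.neural_network import MLPClassifier

def lbp_histogram(image, points=8, radius=1):
    # Uniform LBP codes take values 0..points+1, hence points+2 histogram bins.
    lbp = local_binary_pattern(image, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))        # toy stand-ins for chest X-rays
labels = rng.integers(0, 2, size=40)     # toy COVID / non-COVID labels
deep_feats = rng.random((40, 128))       # stand-in for Inception v3 features

handcrafted = np.stack([lbp_histogram(img) for img in images])
fused = np.concatenate([handcrafted, deep_feats], axis=1)   # feature-level fusion

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500).fit(fused, labels)
print(clf.score(fused, labels))
```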
Affiliation(s)
- K Shankar
- Department of Computer Applications, Alagappa University, Karaikudi, India
- Eswaran Perumal
- Department of Computer Applications, Alagappa University, Karaikudi, India
42
Xie W, Jacobs C, Charbonnier JP, van Ginneken B. Relational Modeling for Robust and Efficient Pulmonary Lobe Segmentation in CT Scans. IEEE TRANSACTIONS ON MEDICAL IMAGING 2020; 39:2664-2675. [PMID: 32730216 PMCID: PMC7393217 DOI: 10.1109/tmi.2020.2995108] [Citation(s) in RCA: 56] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/07/2023]
Abstract
Pulmonary lobe segmentation in computed tomography scans is essential for regional assessment of pulmonary diseases. Recent works based on convolutional neural networks have achieved good performance on this task; however, they remain limited in capturing structured relationships because of the local nature of convolution. The shapes of the pulmonary lobes affect one another, and their borders relate to the appearance of other structures such as vessels, airways, and the pleural wall. We argue that such structural relationships play a critical role in the accurate delineation of pulmonary lobes when the lungs are affected by diseases such as COVID-19 or COPD. In this paper, we propose a relational approach (RTSU-Net) that leverages structured relationships by introducing a novel non-local neural network module. The proposed module learns both visual and geometric relationships among all convolution features to produce self-attention weights. Because only a limited amount of training data is available from COVID-19 subjects, we initially train and validate RTSU-Net on a cohort of 5000 subjects from the COPDGene study (4000 for training and 1000 for evaluation). Using models pre-trained on COPDGene, we apply transfer learning to retrain and evaluate RTSU-Net on 470 COVID-19 suspects (370 for retraining and 100 for evaluation). Experimental results show that RTSU-Net outperforms three baselines and performs robustly on cases with severe lung infection due to COVID-19.
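For readers unfamiliar with non-local modules, the sketch below shows the standard embedded-Gaussian self-attention block over 2D feature maps, which is the family of operations such a relational module builds on; it is not the RTSU-Net implementation and omits the geometric relationship terms.

```python
# Generic non-local (self-attention) block over convolutional feature maps.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels, inner=None):
        super().__init__()
        inner = inner or channels // 2
        self.theta = nn.Conv2d(channels, inner, 1)   # query projection
        self.phi = nn.Conv2d(channels, inner, 1)     # key projection
        self.g = nn.Conv2d(channels, inner, 1)       # value projection
        self.out = nn.Conv2d(inner, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, inner)
        k = self.phi(x).flatten(2)                     # (B, inner, HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, inner)
        attn = torch.softmax(q @ k, dim=-1)            # pairwise relation weights
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

x = torch.randn(1, 32, 24, 24)
print(NonLocalBlock(32)(x).shape)  # torch.Size([1, 32, 24, 24])
```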
43
Raajan NR, Lakshmi VSR, Prabaharan N. Non-Invasive Technique-Based Novel Corona(COVID-19) Virus Detection Using CNN. NATIONAL ACADEMY SCIENCE LETTERS-INDIA 2020; 44:347-350. [PMID: 32836613 PMCID: PMC7391230 DOI: 10.1007/s40009-020-01009-8] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/14/2020] [Revised: 06/18/2020] [Accepted: 07/17/2020] [Indexed: 12/24/2022]
Abstract
The novel human coronavirus 2 (SARS-CoV-2) causes a severe acute respiratory syndrome that was first reported in Wuhan, China, in the latter half of 2019. Most of its primary epidemiological aspects are not well understood, which directly affects monitoring, practices, and controls. The main objective of this work is to propose a fast, accurate, and highly sensitive CT-scan-based approach for the diagnosis of COVID-19. The CT scan images display several small patches of shadows and interstitial shifts, particularly in the lung periphery. The proposed method uses a ResNet-architecture convolutional neural network, trained on CT scan images, to diagnose coronavirus-affected patients effectively. By comparing the testing images with the training images, the affected patient is identified accurately. The accuracy and specificity obtained are 95.09% and 81.89%, respectively, on the sample CT image dataset, without including other data such as geographical location or population density. The sensitivity obtained with this method is 100%. Based on the results, it is evident that COVID-19-positive patients can be classified perfectly using the proposed method.
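The reported figures follow directly from a confusion matrix; the snippet below shows how sensitivity, specificity, and accuracy are computed, using made-up labels rather than the paper's data.

```python
# How accuracy, sensitivity, and specificity are derived from a confusion matrix.
import numpy as np

def binary_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sensitivity = tp / (tp + fn)        # recall on positive (COVID-19) cases
    specificity = tn / (tn + fp)        # recall on negative cases
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

y_true = [1, 1, 1, 0, 0, 0, 0, 1]       # illustrative labels, not the paper's data
y_pred = [1, 1, 1, 0, 1, 0, 0, 1]
print(binary_metrics(y_true, y_pred))   # (1.0, 0.75, 0.875)
```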
Affiliation(s)
- N R Raajan
- Present Address: School of EEE, SASTRA Deemed University, Thanjavur, Tamil Nadu, India
- V S Ramya Lakshmi
- Present Address: School of EEE, SASTRA Deemed University, Thanjavur, Tamil Nadu, India
- Natarajan Prabaharan
- Present Address: School of EEE, SASTRA Deemed University, Thanjavur, Tamil Nadu, India
44
Farhat H, Sakr GE, Kilany R. Deep learning applications in pulmonary medical imaging: recent updates and insights on COVID-19. MACHINE VISION AND APPLICATIONS 2020; 31:53. [PMID: 32834523 PMCID: PMC7386599 DOI: 10.1007/s00138-020-01101-5] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Revised: 06/21/2020] [Accepted: 07/07/2020] [Indexed: 05/07/2023]
Abstract
Shortly after deep learning algorithms were applied to image analysis, and more importantly to medical imaging, their applications increased significantly and became a trend. Likewise, deep learning (DL) applications on pulmonary medical images have achieved remarkable advances, leading to promising clinical trials. Yet the coronavirus may be the real trigger that opens the route to fast integration of DL into hospitals and medical centers. This paper reviews the development of deep learning applications in medical image analysis targeting pulmonary imaging and gives insight into contributions to COVID-19. It covers more than 160 contributions and surveys in this field, all published between February 2017 and May 2020 inclusive, highlighting various deep learning tasks such as classification, segmentation, and detection, as well as different pulmonary pathologies such as airway diseases, lung cancer, COVID-19, and other infections. It summarizes and discusses the current state-of-the-art approaches in this research domain, highlighting the challenges, especially in the context of the ongoing COVID-19 pandemic.
Affiliation(s)
- Hanan Farhat
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- George E. Sakr
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
- Rima Kilany
- Saint Joseph University of Beirut, Mar Roukos, Beirut, Lebanon
45
Wang M, Jin R, Jiang N, Liu H, Jiang S, Li K, Zhou X. Automated labeling of the airway tree in terms of lobes based on deep learning of bifurcation point detection. Med Biol Eng Comput 2020; 58:2009-2024. [PMID: 32613598 DOI: 10.1007/s11517-020-02184-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/26/2019] [Accepted: 05/01/2020] [Indexed: 12/19/2022]
Abstract
This paper presents an automatic lobe-based airway tree labeling method that detects the bifurcation points used to reconstruct and label the airway tree from a computed tomography image. A deep learning-based network structure is designed to identify the four key bifurcation points. Then, based on the detected bifurcation points, the entire airway tree is reconstructed by a new region-growing method. Finally, using basic knowledge of airway tree anatomy and topology, individual branches of the airway tree are classified into categories corresponding to the pulmonary lobes. Our method has several advantages: detection of the bifurcation points does not depend on airway tree segmentation, and only four bifurcation points need to be manually labeled per sample to prepare the training dataset. The segmentation of the airway tree is guided by the detected points, which overcomes the difficulty of manual seed selection in conventional region-growing algorithms. In addition, the bifurcation points help analyze the tree structure, which provides a basis for effective airway tree labeling. Experimental results show that our method is fast and stable, with an accuracy of 97.85%, which is higher than that of the traditional skeleton-based method. Graphical Abstract: The pipeline of the proposed lobe-based airway tree labeling method. Given a raw CT volume, a neural network predicts the major bifurcation points of the airway tree; based on the detected points, the airway tree is reconstructed and labeled in terms of lobes.
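As a rough illustration of how detected seed points can drive the growing step, the sketch below implements plain intensity-threshold region growing on a toy volume; the paper's network and its specific growing rule are not reproduced here, and the HU threshold is only an assumption.

```python
# Illustrative seeded region growing: grow a 6-connected region of air-like voxels.
import numpy as np
from collections import deque

def region_grow(volume, seeds, threshold=-400):
    """Grow from seed voxels, accepting voxels with intensity <= threshold (HU)."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque(seeds)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x] or volume[z, y, x] > threshold:
            continue
        mask[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask

# Toy volume: an air-filled tube (-1000 HU) inside soft tissue (+40 HU).
vol = np.full((20, 20, 20), 40.0)
vol[:, 9:11, 9:11] = -1000.0
airway = region_grow(vol, seeds=[(0, 10, 10)])   # seed plays the role of a detected point
print(airway.sum())                              # 80 voxels: the 2x2 tube over 20 slices
```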
Affiliation(s)
- Manyang Wang
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligence Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Renchao Jin
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligence Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Nanchuan Jiang
- Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, 430022, China; Hubei Province Key Laboratory of Molecular Imaging, Huazhong University of Science and Technology, Wuhan, 430022, China
- Hong Liu
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligence Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Shan Jiang
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligence Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- Kang Li
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligence Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
- XueXin Zhou
- School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China; Key Laboratory of Education Ministry for Image Processing and Intelligence Control, Huazhong University of Science and Technology, Wuhan, Hubei, 430074, China
46
Ross JC, Nardelli P, Onieva J, Gerard SE, Harmouche R, Okajima Y, Diaz AA, Washko G, San José Estépar R. An open-source framework for pulmonary fissure completeness assessment. Comput Med Imaging Graph 2020; 83:101712. [PMID: 32115275 PMCID: PMC7363554 DOI: 10.1016/j.compmedimag.2020.101712] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/13/2019] [Revised: 12/02/2019] [Accepted: 02/17/2020] [Indexed: 11/20/2022]
Abstract
We present an open-source framework for pulmonary fissure completeness assessment. Fissure incompleteness has been shown to associate with emphysema treatment outcomes, motivating the development of tools that facilitate completeness estimation. Generally, the task of fissure completeness assessment requires accurate detection of fissures and definition of the boundary surfaces separating the lung lobes. The framework we describe acknowledges a) the modular nature of fissure detection and lung lobe segmentation (lobe boundary detection), and b) that methods to address these challenges are varied and continually developing. It is designed to be readily deployable on existing lung lobe segmentation and fissure detection data sets. The framework consists of multiple components: a flexible quality control module that enables rapid assessment of lung lobe segmentations, an interactive lobe segmentation tool exposed through 3D Slicer for handling challenging cases, a flexible fissure representation using particles-based sampling that can handle fissure feature-strength or binary fissure detection images, and a module that performs fissure completeness estimation using voxel counting and a novel surface area estimation approach. We demonstrate the use of the proposed framework by deploying it on 100 cases exhibiting various levels of fissure completeness. We compare the two completeness estimation approaches and also compare them with visual reads. The code is available to the community via GitHub as part of the Chest Imaging Platform and as a 3D Slicer extension module.
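Under the simplifying assumption that the lobar boundary and the detected fissure are available as aligned binary volumes, the voxel-counting idea reduces to a ratio of counts, as in the sketch below; the function and array names are illustrative and do not mirror the Chest Imaging Platform API.

```python
# Voxel-counting sketch of fissure completeness:
# completeness = detected-fissure voxels on the lobe boundary / all boundary voxels.
import numpy as np

def fissure_completeness(boundary_mask, fissure_mask):
    """Both inputs are boolean volumes of identical shape."""
    boundary_voxels = np.count_nonzero(boundary_mask)
    if boundary_voxels == 0:
        return 0.0
    detected = np.count_nonzero(boundary_mask & fissure_mask)
    return detected / boundary_voxels

rng = np.random.default_rng(1)
boundary = np.zeros((10, 50, 50), dtype=bool)
boundary[5] = True                                      # a flat lobar boundary surface
fissure = boundary & (rng.random((10, 50, 50)) < 0.7)   # ~70% of it carries a fissure
print(round(fissure_completeness(boundary, fissure), 2))  # roughly 0.7
```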
Affiliation(s)
- James C Ross
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Pietro Nardelli
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Jorge Onieva
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States; Biomedical Image Technologies Laboratory (BIT), ETSI Telecomunicación, Universidad Politécnica de Madrid and CIBER-BBN, Madrid, Spain
- Sarah E Gerard
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Rola Harmouche
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Yuka Okajima
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Alejandro A Diaz
- Division of Pulmonary and Critical Care Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- George Washko
- Division of Pulmonary and Critical Care Medicine, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
- Raúl San José Estépar
- Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States
47
Roy R, Mazumdar S, Chowdhury AS. MDL-IWS: Multi-view Deep Learning with Iterative Watershed for Pulmonary Fissure Segmentation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:1282-1285. [PMID: 33018222 DOI: 10.1109/embc44109.2020.9175310] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Pulmonary fissure segmentation is important for localizing lung lesions, including nodules, within their respective lobar territories. This can be very useful for diagnosis as well as treatment planning. In this paper, we propose a novel coarse-to-fine fissure segmentation approach, the Multi-View Deep Learning driven Iterative WaterShed algorithm (MDL-IWS). The coarse fissure segmentation obtained from multi-view deep learning yields an incomplete fissure volume of interest (VOI) with additional false positives. An iterative watershed algorithm (IWS) is presented to achieve fine segmentation of fissure surfaces. As part of the IWS algorithm, surface fitting is used to generate a more accurate fissure VOI with a substantial reduction in false positives. Additionally, a weight map is used to reduce watershed over-segmentation in subsequent iterations. Experiments on the publicly available LOLA11 dataset clearly reveal that our method outperforms several state-of-the-art competitors.
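The refinement stage builds on marker-controlled watershed segmentation; the sketch below shows a single, generic watershed pass with scikit-image on a toy fissure-strength image, without the surface fitting or iterative weight map described above.

```python
# Generic marker-controlled watershed pass (not the MDL-IWS implementation).
import numpy as np
from skimage.segmentation import watershed

# Toy "fissure strength" image: two flat basins separated by a bright ridge at row 32.
image = np.zeros((64, 64))
image[32, :] = 1.0

markers = np.zeros_like(image, dtype=int)
markers[8, 32] = 1      # seed for the upper region
markers[56, 32] = 2     # seed for the lower region

labels = watershed(image, markers)   # floods from the markers, stopping at the ridge
print(np.unique(labels))             # [1 2]
print((labels[:32] == 1).all(), (labels[33:] == 2).all())  # True True
```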
48
Improving Neural Network Detection Accuracy of Electric Power Bushings in Infrared Images by Hough Transform. SENSORS 2020; 20:s20102931. [PMID: 32455742 PMCID: PMC7287725 DOI: 10.3390/s20102931] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2020] [Revised: 05/15/2020] [Accepted: 05/18/2020] [Indexed: 01/23/2023]
Abstract
To improve the neural network detection accuracy of electric power bushings in infrared images, a modified algorithm based on the You Only Look Once version 2 (YOLOv2) network is proposed to achieve better recognition results. Specifically, YOLOv2 is a convolutional neural network (CNN), but its rotation invariance is poor and some bounding boxes (BBs) exhibit certain deviations. To solve this problem, the standard Hough transform and image rotation are used to determine the optimal recognition angle for target detection, so that YOLOv2 achieves an optimal recognition effect on inclined objects (for example, bushings). To address biased BBs, the shape feature of the bushing is extracted with the Gap statistic algorithm based on K-means clustering; thereafter, a sliding window (SW) is used to determine the optimal recognition area. Experimental verification indicates that the proposed image rotation method improves recognition and that the SW can further refine the BB. The accuracy of target detection increases to 97.33%, and the recall increases to 95%.
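The angle-estimation idea can be sketched with a straight-line Hough transform: find the dominant line in an edge image and rotate by the corresponding angle before running the detector. The snippet below uses scikit-image on a synthetic edge map rather than an infrared frame, and the sign convention of the rotation would need to be matched to the detector's frame in practice.

```python
# Hedged sketch: estimate a dominant line angle with the Hough transform, then rotate.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks, rotate

# Synthetic edge map containing a single inclined line.
edges = np.zeros((100, 100), dtype=bool)
for i in range(100):
    edges[i, int(0.5 * i) + 20] = True

hspace, angles, dists = hough_line(edges)
_, best_angles, _ = hough_line_peaks(hspace, angles, dists, num_peaks=1)
tilt_deg = np.rad2deg(best_angles[0])     # angle of the detected line's normal
print(round(tilt_deg, 1))

# Rotate by the estimated angle before detection; the exact sign/offset convention
# depends on how the downstream detector expects the object to be aligned.
upright = rotate(edges.astype(float), tilt_deg, resize=False)
```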
49
Pathak Y, Shukla PK, Tiwari A, Stalin S, Singh S, Shukla PK. Deep Transfer Learning Based Classification Model for COVID-19 Disease. Ing Rech Biomed 2020; 43:87-92. [PMID: 32837678 PMCID: PMC7238986 DOI: 10.1016/j.irbm.2020.05.003] [Citation(s) in RCA: 155] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Revised: 05/10/2020] [Accepted: 05/15/2020] [Indexed: 12/15/2022]
Abstract
COVID-19 infections are increasing at a rapid rate, while only a limited number of testing kits are available. The development of COVID-19 testing kits therefore remains an open area of research. Recently, many studies have shown that chest computed tomography (CT) images can be used for COVID-19 testing, as chest CT images show bilateral changes in COVID-19-infected patients. However, classifying COVID-19 patients from chest CT images is not an easy task, as predicting the bilateral change is an ill-posed problem. Therefore, in this paper, a deep transfer learning technique is used to classify COVID-19-infected patients. Additionally, a top-2 smooth loss function with cost-sensitive attributes is used to handle noisy and imbalanced COVID-19 datasets. Experimental results reveal that the proposed deep transfer learning-based COVID-19 classification model performs efficiently compared with other supervised learning models.
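A generic version of the transfer-learning recipe is sketched below with torchvision: reuse a pretrained backbone, replace the classification head, and fine-tune only the head. The paper's top-2 smooth loss is not reproduced; a standard label-smoothed cross-entropy stands in for it, and the backbone weights are left unloaded here to keep the snippet offline.

```python
# Transfer-learning sketch: frozen backbone, new head, label-smoothed cross-entropy.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)   # in practice, load e.g. weights="IMAGENET1K_V1"
for p in backbone.parameters():
    p.requires_grad = False                # freeze the transferred layers
backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new COVID vs non-COVID head

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # stand-in for the paper's loss
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)

images = torch.randn(4, 3, 224, 224)       # toy batch standing in for CT slices
labels = torch.tensor([0, 1, 0, 1])
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```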
Affiliation(s)
- Y Pathak
- Department of Information Technology, Indian Institute of Information Technology (IIIT-Bhopal), Bhopal (MP), 462003, India
- P K Shukla
- Department of Computer Science & Engineering, School of Engineering & Technology, Jagran Lake City University (JLU), Bhopal-462044 (MP), India
- A Tiwari
- Department of CSE & IT, Madhav Institute of Technology and Science, Gwalior-474005 (MP), India
- S Stalin
- Department of CSE, Maulana Azad National Institute of Technology (MANIT), Bhopal, MP, 462003, India
- S Singh
- Department of Computer Science & Engineering, Jabalpur Engineering College, Jabalpur-482001 (MP), India
- P K Shukla
- Department of Computer Science & Engineering, University Institute of Technology, RGPV, Bhopal (MP), 462033, India
50
Estépar RSJ. Artificial Intelligence in COPD: New Venues to Study a Complex Disease. BARCELONA RESPIRATORY NETWORK REVIEWS 2020; 6:144-160. [PMID: 33521399 PMCID: PMC7842269 DOI: 10.23866/brnrev:2019-0014] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2020] [Accepted: 09/02/2020] [Indexed: 06/12/2023]
Abstract
Chronic obstructive pulmonary disease (COPD) is a complex and heterogeneous disease that can benefit from novel approaches to understanding its evolution and divergent trajectories. Artificial intelligence (AI) has revolutionized how we can use clinical, imaging, and molecular data to understand and model complex systems. AI has shown impressive results in areas related to automated clinical decision making, radiological interpretation and prognostication. The unique nature of COPD and the accessibility to well-phenotyped populations result in an ideal scenario for AI development. This review provides an introduction to AI and deep learning and presents some recent successes in applying AI in COPD. Finally, we will discuss some of the opportunities, challenges, and limitations for AI applications in the context of COPD.
Affiliation(s)
- Raúl San José Estépar
- Applied Chest Imaging Laboratory, Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA