1. Yeats E, Gupta D, Xu Z, Hall TL. Effects of phase aberration on transabdominal focusing for a large aperture, low f-number histotripsy transducer. Phys Med Biol 2022; 67. PMID: 35772383; PMCID: PMC9396534; DOI: 10.1088/1361-6560/ac7d90.
Abstract
Objective. Soft tissue phase aberration may be particularly severe for histotripsy due to large aperture, low f-number transducer geometries. This study investigated how phase aberration from human abdominal tissue affects focusing of a large, strongly curved histotripsy transducer. Approach. A computational model (k-Wave) was experimentally validated with ex vivo porcine abdominal tissue and used to simulate focusing a histotripsy transducer (radius: 14.2 cm, f-number: 0.62, central frequency fc: 750 kHz) through the human abdomen. Abdominal computed tomography images from 10 human subjects were segmented to create three-dimensional acoustic property maps. Simulations were performed focusing at 3 target locations in the liver of each subject with ideal phase correction, without phase correction, and after separately matching the sound speed of water and fat to non-fat soft tissue. Main results. Experimental validation in porcine abdominal tissue showed that simulated and measured arrival time differences agreed well (average error, ∼0.10 acoustic cycles at fc). In simulations with human tissue, aberration created arrival time differences of 0.65 μs (∼0.5 cycles) at the target and shifted the focus from the target by 6.8 mm (6.4 mm pre-focally along the depth direction), on average. Ideal phase correction increased maximum pressure amplitude by 95%, on average. Matching the sound speed of water and fat to non-fat soft tissue decreased the average pre-focal shift by 3.6 and 0.5 mm and increased pressure amplitude by 2% and 69%, respectively. Significance. Soft tissue phase aberration of large aperture, low f-number histotripsy transducers is substantial despite low therapeutic frequencies. Phase correction could potentially recover substantial pressure amplitude for transabdominal histotripsy. Additionally, different heterogeneity sources distinctly affect focusing quality: the water path strongly affects the focal shift, while irregular tissue boundaries (e.g. fat) dominate pressure loss.
Affiliation(s)
- Ellen Yeats
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, United States of America
- Dinank Gupta
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, United States of America
- Zhen Xu
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, United States of America
- Timothy L Hall
- Department of Biomedical Engineering, University of Michigan, Ann Arbor, United States of America
2. Wang H, Yi F, Wang J, Yi Z, Zhang H. RECISTSup: Weakly-Supervised Lesion Volume Segmentation Using RECIST Measurement. IEEE Trans Med Imaging 2022; 41:1849-1861. PMID: 35120001; DOI: 10.1109/tmi.2022.3149168.
Abstract
Lesion volume segmentation in medical imaging is an effective tool for assessing lesion/tumor sizes and monitoring changes in growth. Since manual segmentation of lesion volume is not only time-consuming but also requires radiological experience, current practice relies on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Although RECIST measurement is coarse compared with voxel-level annotation, it reflects the lesion's location, length, and width, raising the possibility of segmenting lesion volume directly from RECIST measurement. In this study, a novel weakly-supervised method called RECISTSup is proposed to automatically segment lesion volume from RECIST measurement. A new RECIST measurement propagation algorithm is proposed to generate pseudo masks, which are then used to train the segmentation networks. Two new losses are also designed to make full use of the spatial prior knowledge provided by RECIST measurement. In addition, the automatically segmented lesion results are used to supervise model training iteratively, further improving segmentation performance. A series of experiments is carried out on three datasets to evaluate the proposed method, including ablation experiments, comparison of various methods, annotation cost analyses, and visualization of results. Experimental results show that the proposed RECISTSup achieves state-of-the-art results compared with other weakly-supervised methods. The results also demonstrate that RECIST measurement can produce performance similar to voxel-level annotation while significantly reducing annotation cost.
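As a rough illustration of how a RECIST measurement (long- and short-axis endpoints) can seed a pseudo mask, the sketch below rasterizes a rotated ellipse from the two axes. This ellipse fill is a common simplification, not the propagation algorithm proposed in the paper; the function name and parameters are illustrative.

```python
import numpy as np

def recist_pseudo_mask(shape, long_axis, short_axis):
    """Rasterize a rotated ellipse from RECIST-style long/short axis
    endpoints as a crude pseudo ground-truth mask (boolean array).

    shape:      (H, W) of the output mask
    long_axis:  ((r0, c0), (r1, c1)) endpoints of the lesion's long axis
    short_axis: ((r0, c0), (r1, c1)) endpoints of the short axis
    """
    (r0, c0), (r1, c1) = long_axis
    center_r, center_c = (r0 + r1) / 2.0, (c0 + c1) / 2.0
    a = 0.5 * np.hypot(r1 - r0, c1 - c0)       # semi-major axis length
    (s0, t0), (s1, t1) = short_axis
    b = 0.5 * np.hypot(s1 - s0, t1 - t0)       # semi-minor axis length
    theta = np.arctan2(r1 - r0, c1 - c0)       # orientation of the long axis

    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    dr, dc = rr - center_r, cc - center_c
    # Rotate pixel coordinates into the ellipse's own frame.
    u = dc * np.cos(theta) + dr * np.sin(theta)
    v = -dc * np.sin(theta) + dr * np.cos(theta)
    return (u / a) ** 2 + (v / b) ** 2 <= 1.0
```

A mask like this could then serve as a noisy training target for a segmentation network, in the spirit of (but simpler than) the paper's pseudo-mask generation.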
3. Iuga AI, Carolus H, Höink AJ, Brosch T, Klinder T, Maintz D, Persigehl T, Baeßler B, Püsken M. Automated detection and segmentation of thoracic lymph nodes from CT using 3D foveal fully convolutional neural networks. BMC Med Imaging 2021; 21:69. PMID: 33849483; PMCID: PMC8045346; DOI: 10.1186/s12880-021-00599-z.
Abstract
BACKGROUND In oncology, the correct determination of nodal metastatic disease is essential for patient management, as patient treatment and prognosis are closely linked to the stage of the disease. The aim of the study was to develop a tool for automatic 3D detection and segmentation of lymph nodes (LNs) in computed tomography (CT) scans of the thorax using a fully convolutional neural network based on 3D foveal patches. METHODS The training dataset was collected from the Computed Tomography Lymph Nodes Collection of the Cancer Imaging Archive, containing 89 contrast-enhanced CT scans of the thorax. A total of 4275 LNs were segmented semi-automatically by a radiologist, assessing the entire 3D volume of the LNs. Using this data, a fully convolutional neural network based on 3D foveal patches was trained with fourfold cross-validation. Testing was performed on an unseen dataset containing 15 contrast-enhanced CT scans of patients who were referred upon suspicion or for staging of bronchial carcinoma. RESULTS The algorithm achieved a good overall performance, with a total detection rate for enlarged LNs of 76.9% during fourfold cross-validation in the training dataset (10.3 false-positives per volume) and of 69.9% in the unseen testing dataset. In the training dataset, a better detection rate was observed for enlarged LNs than for smaller LNs: the detection rates for LNs with a short-axis diameter (SAD) ≥ 20 mm and with a SAD of 5-10 mm were 91.6% and 62.2% (p < 0.001), respectively. The best detection rates were obtained for LNs located in Level 4R (83.6%) and Level 7 (80.4%). CONCLUSIONS The proposed 3D deep learning approach achieves an overall good performance in the automatic detection and segmentation of thoracic LNs and shows reasonable generalizability, yielding the potential to facilitate detection during routine clinical work and to enable radiomics research without observer bias.
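The "foveal patch" idea, concentric fields of view that grow while spatial resolution drops, can be sketched as below. This is a minimal numpy illustration of the sampling scheme, not the authors' network input pipeline, and it assumes the center voxel lies far enough from the volume border for every level to fit.

```python
import numpy as np

def foveal_patches(volume, center, size=16, levels=3):
    """Extract concentric multi-resolution 3D patches around `center`.

    Each level covers twice the field of view of the previous one but is
    subsampled (by striding) back to `size` voxels per axis, so the input
    stays sharp near the center and coarse in the periphery, like a fovea.
    """
    patches = []
    z, y, x = center
    for lvl in range(levels):
        half = (size // 2) * (2 ** lvl)   # growing half-width of the field of view
        stride = 2 ** lvl                 # coarser sampling at outer levels
        sl = tuple(slice(c - half, c + half, stride) for c in (z, y, x))
        patches.append(volume[sl])
    return patches
```

A network would then process each level with its own branch and fuse the features, which is the general shape of foveal architectures.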
Affiliation(s)
- Andra-Iza Iuga
- Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Heike Carolus
- Philips Research, Röntgenstraße 24, 22335 Hamburg, Germany
- Anna J. Höink
- Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Tom Brosch
- Philips Research, Röntgenstraße 24, 22335 Hamburg, Germany
- Tobias Klinder
- Philips Research, Röntgenstraße 24, 22335 Hamburg, Germany
- David Maintz
- Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Thorsten Persigehl
- Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Bettina Baeßler
- Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
- Institute of Diagnostic and Interventional Radiology, University Hospital Zürich, Zürich, Switzerland
- Michael Püsken
- Institute of Diagnostic and Interventional Radiology, Medical Faculty and University Hospital Cologne, University of Cologne, Kerpener Str. 62, 50937 Cologne, Germany
4. Li Z, Xia Y. Deep Reinforcement Learning for Weakly-Supervised Lymph Node Segmentation in CT Images. IEEE J Biomed Health Inform 2021; 25:774-783. PMID: 32749988; DOI: 10.1109/jbhi.2020.3008759.
Abstract
Accurate and automated lymph node segmentation is pivotal for quantitatively assessing disease progression and potential therapeutics. The complex variation of lymph node morphology and the difficulty of acquiring voxel-wise manual annotations make lymph node segmentation a challenging task. Since the Response Evaluation Criteria in Solid Tumors (RECIST) annotation, which indicates the location, length, and width of a lymph node, is commonly available in hospital data archives, we advocate using RECIST annotations as the supervision, and thus formulate this segmentation task as a weakly-supervised learning problem. In this paper, we propose a deep reinforcement learning-based lymph node segmentation (DRL-LNS) model. Based on RECIST annotations, we segment RECIST-slices in an unsupervised way to produce pseudo ground truths, which are then used to train a U-Net as the segmentation network. Next, we train a DRL model in which the segmentation network interacts with the policy network to optimize the lymph node bounding boxes and segmentation results simultaneously. The proposed DRL-LNS model was evaluated against three widely used image segmentation networks on a public thoracoabdominal computed tomography (CT) dataset containing 984 3D lymph nodes, and achieved a mean Dice similarity coefficient (DSC) of 77.17% and a mean Intersection over Union (IoU) of 64.78% in four-fold cross-validation. Our results suggest that the DRL-based bounding box prediction strategy outperforms the label propagation strategy and that the proposed DRL-LNS model achieves state-of-the-art performance on this weakly-supervised lymph node segmentation task.
5. Summers RM. Are we at a crossroads or a plateau? Radiomics and machine learning in abdominal oncology imaging. Abdom Radiol (NY) 2019; 44:1985-1989. PMID: 29730736; DOI: 10.1007/s00261-018-1613-1.
Abstract
Advances in radiomics and machine learning have driven a technology boom in the automated analysis of radiology images. For the past several years, expectations have been nearly boundless for these new technologies to revolutionize radiology image analysis and interpretation. In this editorial, I compare the expectations with the realities, with particular attention to applications in abdominal oncology imaging. I explore whether these technologies will leave us at a crossroads leading to an exciting future, or at a sustained plateau and disillusionment.
Affiliation(s)
- Ronald M Summers
- Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences Department, National Institutes of Health Clinical Center, Bldg. 10 Room 1C224D, MSC 1182, Bethesda, MD, 20892-1182, USA.
6. Voulodimos A, Doulamis N, Doulamis A, Protopapadakis E. Deep Learning for Computer Vision: A Brief Review. Comput Intell Neurosci 2018; 2018:7068349. PMID: 29487619; PMCID: PMC5816885; DOI: 10.1155/2018/7068349.
Abstract
Over the last few years, deep learning methods have been shown to outperform previous state-of-the-art machine learning techniques in several fields, with computer vision being one of the most prominent cases. This review paper provides a brief overview of some of the most significant deep learning schemes used in computer vision problems, namely Convolutional Neural Networks, Deep Boltzmann Machines and Deep Belief Networks, and Stacked Denoising Autoencoders. A brief account of their history, structure, advantages, and limitations is given, followed by a description of their applications in various computer vision tasks, such as object detection, face recognition, action and activity recognition, and human pose estimation. Finally, a brief overview is given of future directions in designing deep learning schemes for computer vision problems and the challenges involved therein.
Affiliation(s)
- Athanasios Voulodimos
- Department of Informatics, Technological Educational Institute of Athens, 12210 Athens, Greece
- National Technical University of Athens, 15780 Athens, Greece
7. Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, Seong YK. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol 2017; 62:7714-7728. PMID: 28753132; DOI: 10.1088/1361-6560/aa82ec.
Abstract
In this research, we exploited a deep learning framework to differentiate the distinctive types of lesions and nodules in the breast acquired with ultrasound imaging. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 breast ultrasound images of semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping, and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors. Networks were trained on the data both with and without augmentation, and both showed an area under the curve of over 0.9. The networks showed an accuracy of about 0.9 (90%), a sensitivity of 0.86, and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If used by radiologists in clinical situations, this method can classify malignant lesions in a short time and support radiologists' diagnoses in discriminating malignant lesions. Therefore, the proposed method can work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
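The reported accuracy, sensitivity, and specificity follow directly from confusion-matrix counts. A minimal sketch, with hypothetical counts chosen only to produce figures of the same order as those reported:

```python
def binary_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall on the malignant class), and
    specificity (recall on the benign class) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Hypothetical counts for a 200-case test set (not the paper's data):
# 86 true positives, 14 false negatives, 96 true negatives, 4 false positives.
acc, sens, spec = binary_metrics(tp=86, fp=4, tn=96, fn=14)
```

With these counts, accuracy is 0.91, sensitivity 0.86, and specificity 0.96, matching the magnitudes quoted in the abstract.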
Affiliation(s)
- Seokmin Han
- Korea National University of Transportation, Uiwang-si, Kyunggi-do, Republic of Korea
8. Three Aspects on Using Convolutional Neural Networks for Computer-Aided Detection in Medical Imaging. In: Deep Learning and Convolutional Neural Networks for Medical Image Computing; 2017. DOI: 10.1007/978-3-319-42999-1_8.
9.
Abstract
OBJECTIVE Automated analysis of abdominal CT has advanced markedly over just the last few years. Fully automated assessment of organs, lymph nodes, adipose tissue, muscle, bowel, spine, and tumors is one area where tremendous progress has been made. Computer-aided detection of lesions has also improved dramatically. CONCLUSION This article reviews the progress and provides insights into what is in store in the near future for automated analysis of abdominal CT, ultimately leading to fully automated interpretation.
10. Automatic Lymph Node Cluster Segmentation Using Holistically-Nested Neural Networks and Structured Optimization in CT Images. In: Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016; 2016. DOI: 10.1007/978-3-319-46723-8_45.