1
Mi J, Wang R, Feng Q, Han L, Zhuang Y, Chen K, Chen Z, Hua Z, Luo Y, Lin J. Three-dimensional visualization of thyroid ultrasound images based on multi-scale features fusion and hierarchical attention. Biomed Eng Online 2024;23:31. PMID: 38468262; PMCID: PMC10926618; DOI: 10.1186/s12938-024-01215-1. Received 10/24/2023; accepted 02/02/2024. Open access.
Abstract
BACKGROUND Ultrasound three-dimensional visualization, a cutting-edge technology in medical imaging, enhances diagnostic accuracy by providing a more comprehensive and readable portrayal of anatomical structures than traditional two-dimensional ultrasound. Crucial to this visualization is the segmentation of multiple targets. However, multi-target segmentation of ultrasound images faces challenges such as noise interference, inaccurate boundaries, and difficulty segmenting small structures. Using neck ultrasound images, this study investigates multi-target segmentation methods for the thyroid and surrounding tissues. METHOD We improved U-Net++ to propose PA-Unet++, which enhances multi-target segmentation accuracy for the thyroid and its surrounding tissues while addressing ultrasound noise interference. A pyramid pooling module integrates multi-scale feature information to facilitate segmentation of structures of various sizes, and an attention gate mechanism applied to each decoding layer progressively highlights target tissues and suppresses the impact of background pixels. RESULTS Video data obtained from serial 2D ultrasound scans of the thyroid served as the dataset for this paper. A total of 4600 images containing 23,000 annotated regions were divided into training and test sets at a ratio of 9:1. Compared with U-Net++, the Dice of our model increased from 78.78% to 81.88% (+3.10%), the mIoU increased from 73.44% to 80.35% (+6.91%), and the PA increased from 92.95% to 94.79% (+1.84%). CONCLUSIONS Accurate segmentation is fundamental for various clinical applications, including disease diagnosis, treatment planning, and monitoring. This study should positively impact 3D visualization capabilities, clinical decision-making, and research on ultrasound images.
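The results above are reported in Dice, mIoU, and pixel accuracy (PA). As a reference for how these overlap metrics are conventionally computed on binary masks, here is a minimal NumPy sketch; the function names and the toy masks are illustrative, not taken from the paper:

```python
import numpy as np

def dice(pred, gt):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection-over-union for one class; mIoU averages this over classes."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def pixel_accuracy(pred, gt):
    """Fraction of pixels labeled identically in prediction and ground truth."""
    return (pred == gt).mean()

pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt   = np.array([[1, 0], [0, 0]], dtype=bool)
print(round(dice(pred, gt), 3), iou(pred, gt), pixel_accuracy(pred, gt))  # → 0.667 0.5 0.75
```

Note that Dice weights the intersection twice, so it is always at least as large as IoU for the same pair of masks, which is why the paper's Dice and mIoU gains differ in magnitude.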
Affiliation(s)
- Junyu Mi
  - College of Biomedical Engineering, Sichuan University, Chengdu, Sichuan, China
- Rui Wang
  - Department of Ultrasound, General Hospital of Western Theater Command, Chengdu, Sichuan, China
- Qian Feng
  - Department of Ultrasound, General Hospital of Western Theater Command, Chengdu, Sichuan, China
- Lin Han
  - College of Biomedical Engineering, Sichuan University, Chengdu, Sichuan, China
  - Highong Intellimage Medical Technology (Tianjin) Co., Ltd, Tianjin, China
- Yan Zhuang
  - College of Biomedical Engineering, Sichuan University, Chengdu, Sichuan, China
- Ke Chen
  - College of Biomedical Engineering, Sichuan University, Chengdu, Sichuan, China
- Zhong Chen
  - Department of Ultrasound, General Hospital of Western Theater Command, Chengdu, Sichuan, China
- Zhan Hua
  - China-Japan Friendship Hospital, Beijing, China
- Yan Luo
  - Department of Ultrasound, West China Hospital of Sichuan University, Chengdu, Sichuan, China
- Jiangli Lin
  - College of Biomedical Engineering, Sichuan University, Chengdu, Sichuan, China
2
Luan S, Ou-Yang J, Yang X, Wei W, Xue X, Zhu B. A multi-modal vision-language pipeline strategy for contour quality assurance and adaptive optimization. Phys Med Biol 2024;69:065005. PMID: 38373347; DOI: 10.1088/1361-6560/ad2a97. Received 11/16/2023; accepted 02/19/2024.
Abstract
Objective. Accurate delineation of organs-at-risk (OARs) is a critical step in radiotherapy. Deep-learning-generated segmentations usually need to be reviewed and corrected manually by oncologists, which is time-consuming and operator-dependent. Therefore, an automated quality assurance (QA) and adaptive optimization correction strategy was proposed to identify and optimize 'incorrect' auto-segmentations. Approach. A total of 586 CT images and labels from nine institutions were used. The OARs included the brainstem, parotid, and mandible. The deep-learning-generated contours were compared with the manual ground-truth delineations. We proposed a novel contour quality assurance and adaptive optimization (CQA-AO) strategy with three main components: (1) a contour QA module that classifies the deep-learning-generated contours as either accepted or unaccepted; (2) an unacceptable-contour category analysis module that provides potential error reasons (five unacceptable categories) and locations (attention heatmaps); and (3) an adaptive correction module that integrates vision-language representations and uses convex optimization algorithms to adaptively correct 'incorrect' contours. Main results. In the contour QA tasks, the sensitivity (accuracy, precision) of the CQA-AO strategy reached 0.940 (0.945, 0.948), 0.962 (0.937, 0.913), and 0.967 (0.962, 0.957) for the brainstem, parotid, and mandible, respectively. In the unacceptable-contour category analysis, the (F_I, Acc_I, F_micro, F_macro) of the CQA-AO strategy reached (0.901, 0.763, 0.862, 0.822), (0.855, 0.737, 0.837, 0.784), and (0.907, 0.762, 0.858, 0.821) for the brainstem, parotid, and mandible, respectively. After adaptive optimization correction, the DSC values of the brainstem, parotid, and mandible improved by 9.4%, 25.9%, and 13.5%, and the Hausdorff distance values decreased by 62%, 70.6%, and 81.6%, respectively. Significance. The proposed CQA-AO strategy, which combines contour QA with adaptive optimization correction for OAR contouring, demonstrated superior performance compared to conventional methods. It can be implemented in clinical contouring procedures and improve the efficiency of the delineation and review workflow.
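The QA results above are given as sensitivity, accuracy, and precision. For reference, a minimal sketch of how these follow from binary accept/unaccept confusion counts; the counts below are illustrative, not the paper's:

```python
def qa_metrics(tp, fp, tn, fn):
    """Sensitivity, accuracy, and precision from binary confusion counts
    (tp = correctly flagged, fn = missed, etc.)."""
    sensitivity = tp / (tp + fn)                 # recall of the positive class
    accuracy = (tp + tn) / (tp + fp + tn + fn)   # overall fraction correct
    precision = tp / (tp + fp)                   # reliability of positive calls
    return sensitivity, accuracy, precision

sen, acc, prec = qa_metrics(tp=94, fp=5, tn=95, fn=6)
print(sen, acc, round(prec, 3))  # → 0.94 0.945 0.949
```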
Affiliation(s)
- Shunyao Luan
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Jun Ou-Yang
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Xiaofei Yang
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Wei Wei
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
- Xudong Xue
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
- Benpeng Zhu
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
3
Luan S, Ding Y, Shao J, Zou B, Yu X, Qin N, Zhu B, Wei W, Xue X. Deep learning for head and neck semi-supervised semantic segmentation. Phys Med Biol 2024;69:055008. PMID: 38306968; DOI: 10.1088/1361-6560/ad25c2. Received 09/15/2023; accepted 02/01/2024.
Abstract
Objective. Radiation therapy (RT) is a prevalent therapeutic modality for head and neck (H&N) cancer. A crucial phase in RT planning is the precise delineation of organs-at-risk (OARs) on computed tomography (CT) scans. However, manual delineation of OARs is labor-intensive, requiring individual scrutiny of each CT slice, and a standard CT scan comprises hundreds of slices. Furthermore, there is a significant domain shift between different institutions' H&N data, which makes traditional semi-supervised learning strategies susceptible to confirmation bias. Effectively using unlabeled datasets to support annotated datasets during model training has therefore become critical for preventing domain shift and confirmation bias. Approach. We proposed an innovative cross-domain orthogon-based-perspective consistency (CD-OPC) strategy within a two-branch collaborative training framework, which compels the two sub-networks to acquire valuable features from unrelated perspectives. Specifically, a novel generative pretext task, cross-domain prediction (CDP), was designed to learn inherent properties of CT images. This prior knowledge was then used to promote independent learning of distinct features by the two sub-networks from identical inputs, enhancing their perceptual capabilities through orthogon-based pseudo-labeling knowledge transfer. Main results. Our CD-OPC model was trained on H&N datasets from nine different institutions and validated on four local institutions' H&N datasets. Across all datasets, CD-OPC outperformed other semi-supervised semantic segmentation algorithms. Significance. The CD-OPC method successfully mitigates domain shift and prevents network collapse. In addition, it enhances the network's perceptual abilities and generates more reliable predictions, further addressing the confirmation bias issue.
Affiliation(s)
- Shunyao Luan
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
- Yi Ding
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
- Jiakang Shao
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Bing Zou
  - Department of Oncology, The Second Affiliated Hospital of Nanchang University, Nanchang, People's Republic of China
- Xiao Yu
  - Department of Radiation Oncology, The First Affiliated Hospital of USTC, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, People's Republic of China
- Nannan Qin
  - The First Affiliated Hospital of Bengbu Medical College, Bengbu, People's Republic of China
- Benpeng Zhu
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Wei Wei
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
- Xudong Xue
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
4
Luan S, Wu K, Wu Y, Zhu B, Wei W, Xue X. Accurate and robust auto-segmentation of head and neck organ-at-risks based on a novel CNN fine-tuning workflow. J Appl Clin Med Phys 2024;25:e14248. PMID: 38128058; PMCID: PMC10795444; DOI: 10.1002/acm2.14248. Received 09/13/2023; revised 12/08/2023; accepted 12/11/2023. Open access.
Abstract
PURPOSE Obvious inconsistencies exist among the auto-segmentations of various AI software packages. In this study, we developed a novel convolutional neural network (CNN) fine-tuning workflow to achieve precise and robust localized segmentation. METHODS The datasets included the Hubei Cancer Hospital dataset, the Cetuximab Head and Neck Public Dataset, and the Québec Public Dataset. Seven organs-at-risk (OARs) were selected: brainstem, left parotid gland, esophagus, left optic nerve, optic chiasm, mandible, and pharyngeal constrictor. The auto-segmentation results from four commercial AI software packages were first compared with the manual delineations. A new multi-scale lightweight residual CNN model with an attention module (named HN-Net) was then trained and tested on 40 and 10 samples from Hubei Cancer Hospital, respectively. To enhance the network's accuracy and generalization ability, the fine-tuning workflow used an uncertainty estimation method to automatically select worthwhile candidate samples from the Cetuximab Head and Neck Public Dataset for further training. Segmentation performance was evaluated on the Hubei Cancer Hospital dataset and/or the entire Québec Public Dataset. RESULTS Maximum differences of 0.13 in average Dice value and 0.7 mm in Hausdorff distance for the seven OARs were observed across the four AI software packages. The proposed HN-Net achieved an average Dice value 0.14 higher than that of the AI software, and it also outperformed other popular CNN models (HN-Net: 0.79, U-Net: 0.78, U-Net++: 0.78, U-Net-Multi-scale: 0.77, AI software: 0.65). Additionally, fine-tuning HN-Net with the local and external public datasets further improved the average Dice value by 0.02. CONCLUSION The delineations of commercial AI software need to be carefully reviewed, and localized further training is necessary for clinical practice. The proposed fine-tuning workflow can feasibly be adopted to build an accurate and robust auto-segmentation model from local datasets and external public datasets.
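The workflow above selects candidate public-dataset samples by uncertainty estimation, but the abstract does not specify the estimator. A common choice is mean predictive entropy over several stochastic forward passes (e.g. Monte Carlo dropout); the sketch below uses that assumption, and all names are hypothetical:

```python
import numpy as np

def predictive_entropy(prob_maps):
    """Mean per-pixel entropy of class-probability maps averaged over T
    stochastic forward passes (shape: T x C x H x W). A hypothetical
    stand-in for the paper's unspecified uncertainty estimate."""
    p = prob_maps.mean(axis=0)                  # average over passes
    ent = -(p * np.log(p + 1e-12)).sum(axis=0)  # entropy per pixel
    return float(ent.mean())

def select_candidates(uncertainties, k):
    """Return indices of the k most uncertain samples for further training."""
    order = np.argsort(uncertainties)[::-1]     # most uncertain first
    return order[:k].tolist()

print(select_candidates([0.12, 0.91, 0.47, 0.05], k=2))  # → [1, 2]
```

Selecting high-entropy samples concentrates annotation and training effort where the current model is least reliable, which is the usual rationale for uncertainty-driven sample selection.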
Affiliation(s)
- Shunyao Luan
  - Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Kun Wu
  - Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yuan Wu
  - Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Benpeng Zhu
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, China
- Wei Wei
  - Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Xudong Xue
  - Department of Radiation Oncology, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
5
Luan S, Yu X, Lei S, Ma C, Wang X, Xue X, Ding Y, Ma T, Zhu B. Deep learning for fast super-resolution ultrasound microvessel imaging. Phys Med Biol 2023;68:245023. PMID: 37934040; DOI: 10.1088/1361-6560/ad0a5a. Received 08/28/2023; accepted 11/07/2023.
Abstract
Objective. Ultrasound localization microscopy (ULM) enables microvascular reconstruction by localizing microbubbles (MBs). Although ULM can obtain microvascular images beyond the resolution limit imposed by ultrasound (US) diffraction, it requires long data-processing times, and its imaging accuracy is sensitive to MB density. Deep learning (DL)-based ULM has been proposed to alleviate these limitations: it simulates MBs at low resolution and maps them to high-resolution coordinates by centroid localization. However, traditional DL-based ULMs are imprecise and computationally complex, and DL performance depends heavily on the training datasets, which are difficult to simulate realistically. Approach. A novel architecture called the adaptive matching network (AM-Net) and a dataset generation method named multi-mapping (MMP) are proposed to overcome these challenges. The imaging performance and processing time of AM-Net were assessed by simulation and in vivo experiments. Main results. Simulation results show that at high density (20 MBs/frame), AM-Net achieves higher localization accuracy in the lateral/axial direction than other DL-based ULM methods. In vivo results show that AM-Net can reconstruct ~24.3 μm diameter micro-vessels and separate two ~28.3 μm diameter micro-vessels. Furthermore, when processing a 128 × 128 pixel image in simulation and an 896 × 1280 pixel image in vivo, AM-Net's processing times are ~13 s and ~33 s, respectively, 0.3-0.4 orders of magnitude faster than other DL-based ULM methods. Significance. We propose a promising solution for ULM with low computing cost and high imaging performance.
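The centroid-localization step this abstract refers to recovers a sub-pixel microbubble position from a low-resolution patch and maps it onto a finer grid. A minimal sketch of that generic step (not the AM-Net itself; all names and the upsampling factor are illustrative):

```python
import numpy as np

def centroid(patch):
    """Intensity-weighted centroid (row, col) of a microbubble patch."""
    ys, xs = np.indices(patch.shape)
    w = patch.sum()
    return float((ys * patch).sum() / w), float((xs * patch).sum() / w)

def to_highres(rc, factor):
    """Map a low-resolution sub-pixel position onto a grid `factor`x finer."""
    return rc[0] * factor, rc[1] * factor

patch = np.zeros((5, 5))
patch[2, 3] = 4.0  # single bright microbubble
print(to_highres(centroid(patch), factor=8))  # → (16.0, 24.0)
```

The weighted centroid attains sub-pixel precision when a bubble's point-spread function spans several pixels, which is what lets ULM place vessel positions on a grid finer than the diffraction limit.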
Affiliation(s)
- Shunyao Luan
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
- Xiangyang Yu
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Shuang Lei
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Chi Ma
  - Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, United States of America
- Xiao Wang
  - Department of Radiation Oncology, Rutgers-Cancer Institute of New Jersey, Rutgers-Robert Wood Johnson Medical School, New Brunswick, NJ, United States of America
- Xudong Xue
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
- Yi Ding
  - Department of Radiation Oncology, Hubei Cancer Hospital, TongJi Medical College, Huazhong University of Science and Technology, Wuhan, Hubei, People's Republic of China
- Teng Ma
  - The Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, People's Republic of China
- Benpeng Zhu
  - School of Integrated Circuits, Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan, People's Republic of China