1
Sha Q, Sun K, Jiang C, Xu M, Xue Z, Cao X, Shen D. Detail-preserving image warping by enforcing smooth image sampling. Neural Netw 2024; 178:106426. PMID: 38878640. DOI: 10.1016/j.neunet.2024.106426.
Abstract
Registration of multi-phase dynamic contrast-enhanced magnetic resonance imaging images makes a substantial contribution to medical image analysis. However, existing methods (e.g., VoxelMorph, CycleMorph) often encounter the problem of image information misalignment in deformable registration tasks, posing challenges to practical application. To address this issue, we propose a novel smooth image sampling method that aligns full organ information to realize detail-preserving image warping. In this paper, we show that image information mismatch is attributable to imbalanced sampling. A sampling frequency map, constructed by sampling frequency estimators, is then used to guide smooth sampling by reducing both the spatial gradient of the map and the discrepancy between it and an all-ones matrix. Our estimator determines the sampling frequency of a grid voxel in the moving image by aggregating the interpolation weights of warped non-grid sampling points in its vicinity, and constructs the sampling frequency map through projection and scattering. We evaluate the effectiveness of our approach through experiments on two in-house datasets. The results show that our method preserves nearly complete details with ideal registration accuracy compared with several state-of-the-art registration methods. Additionally, our method exhibits a statistically significant difference in the regularity of the registration field compared to other methods, at a significance level of p < 0.05. Our code will be released at https://github.com/QingRui-Sha/SFM.
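The sampling-frequency estimator described in the abstract can be illustrated with a small 2D sketch: each warped sampling point scatters its bilinear interpolation weights onto the surrounding grid voxels of the moving image, and the accumulated weights form the sampling frequency map. The function name and the 2D simplification are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def sampling_frequency_map(points, shape):
    """Scatter the bilinear interpolation weights of warped sampling
    points (N x 2, in grid coordinates) onto the moving-image grid.
    A uniform (all-ones) map indicates balanced sampling."""
    freq = np.zeros(shape)
    x0 = np.floor(points).astype(int)           # lower grid corner per point
    frac = points - x0                          # fractional offsets in [0, 1)
    for dy in (0, 1):
        for dx in (0, 1):
            # bilinear weight for this corner: (1 - frac) or frac per axis
            w = (np.abs(1 - dy - frac[:, 0]) *
                 np.abs(1 - dx - frac[:, 1]))
            yy = np.clip(x0[:, 0] + dy, 0, shape[0] - 1)
            xx = np.clip(x0[:, 1] + dx, 0, shape[1] - 1)
            np.add.at(freq, (yy, xx), w)        # unbuffered scatter-add
    return freq
```

A regularizer in the spirit of the paper would then penalize both the spatial gradient of this map and its deviation from an all-ones matrix, e.g. `np.mean((freq - 1.0) ** 2)`.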
Affiliation(s)
- Qingrui Sha
- School of Biomedical Engineering, ShanghaiTech, Shanghai, China.
- Kaicong Sun
- School of Biomedical Engineering, ShanghaiTech, Shanghai, China.
- Caiwen Jiang
- School of Biomedical Engineering, ShanghaiTech, Shanghai, China.
- Mingze Xu
- School of Science and Engineering, Chinese University of Hong Kong-Shenzhen, Guangdong, China.
- Zhong Xue
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
- Xiaohuan Cao
- Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China.
- Dinggang Shen
- School of Biomedical Engineering, ShanghaiTech, Shanghai, China; Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China.
2
Gou F, Liu J, Xiao C, Wu J. Research on Artificial-Intelligence-Assisted Medicine: A Survey on Medical Artificial Intelligence. Diagnostics (Basel) 2024; 14:1472. PMID: 39061610. PMCID: PMC11275417. DOI: 10.3390/diagnostics14141472. Open access.
Abstract
With improving economic conditions and rising living standards, people are paying ever more attention to their health. They are beginning to place their hopes on machines, expecting artificial intelligence (AI) to provide a more humanized medical environment and personalized services, thereby greatly expanding supply and bridging the gap between healthcare resource supply and demand. The development of IoT technology, the arrival of the 5G and 6G communication era, and, in particular, the enhancement of computing capabilities have further promoted the development and application of AI-assisted healthcare. Research on, and application of, artificial intelligence in medical assistance is continuously deepening and expanding. AI holds immense economic value and has many potential applications for medical institutions, patients, and healthcare professionals. It can enhance medical efficiency, reduce healthcare costs, improve the quality of healthcare services, and provide a more intelligent and humanized service experience for healthcare professionals and patients. This study elaborates on the history and timeline of AI development in the medical field, the types of AI technologies in healthcare informatics, the applications of AI in medicine, and the opportunities and challenges of AI in medicine. The combination of healthcare and artificial intelligence has a profound impact on human life, improving health and quality of life and changing lifestyles.
Affiliation(s)
- Fangfang Gou
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Jun Liu
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Chunwen Xiao
- The Second People's Hospital of Huaihua, Huaihua 418000, China
- Jia Wu
- State Key Laboratory of Public Big Data, College of Computer Science and Technology, Guizhou University, Guiyang 550025, China
- Research Center for Artificial Intelligence, Monash University, Melbourne, Clayton, VIC 3800, Australia
3
Rao F, Lyu T, Feng Z, Wu Y, Ni Y, Zhu W. A landmark-supervised registration framework for multi-phase CT images with cross-distillation. Phys Med Biol 2024; 69:115059. PMID: 38768601. DOI: 10.1088/1361-6560/ad4e01.
Abstract
Objective. Multi-phase computed tomography (CT) has become a leading modality for identifying hepatic tumors. Nevertheless, misalignment between the images of different phases poses a challenge to accurately identifying and analyzing the patient's anatomy. Conventional registration methods typically concentrate on either intensity-based features or landmark-based features in isolation, thus limiting the accuracy of the registration process. Method. We establish a nonrigid cycle-registration network that leverages semi-supervised learning, in which a point-distance term based on the Euclidean distance between registered landmark points is introduced into the loss function. Additionally, a cross-distillation strategy incorporating response-based knowledge about the distances between feature points is proposed in network training to further improve registration performance. Results. We conducted experiments on multi-center liver CT datasets to evaluate the proposed method. The results demonstrate that our method outperforms baseline methods in terms of target registration error. Dice scores of the warped tumor masks were also calculated; our method consistently achieved the highest scores among all compared methods, reaching 82.9% and 82.5% on the hepatocellular carcinoma and intrahepatic cholangiocarcinoma datasets, respectively. Significance. The superior registration performance indicates the method's potential to serve as an important tool in hepatic tumor identification and analysis.
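The point-distance term can be sketched as the mean Euclidean distance between fixed landmarks and moving landmarks carried through a dense displacement field. This is a minimal NumPy sketch; the function name and the nearest-voxel field lookup are simplifying assumptions, not the paper's exact formulation:

```python
import numpy as np

def landmark_distance_loss(moving_pts, fixed_pts, disp_field, spacing=(1.0, 1.0, 1.0)):
    """Mean Euclidean distance (in physical units) between fixed
    landmarks and moving landmarks warped by a dense displacement
    field of shape (D, H, W, 3)."""
    idx = np.round(moving_pts).astype(int)              # nearest voxel per landmark
    disp = disp_field[idx[:, 0], idx[:, 1], idx[:, 2]]  # displacement at each landmark
    warped = moving_pts + disp                          # warped landmark positions
    diff = (warped - fixed_pts) * np.asarray(spacing)   # convert to mm
    return np.linalg.norm(diff, axis=1).mean()
```

With paired annotations, the same quantity evaluated after registration is the target registration error reported in the results.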
Affiliation(s)
- Fan Rao
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Tianling Lyu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Zhan Feng
- Department of Radiology, College of Medicine, The First Affiliated Hospital, Zhejiang University, Hangzhou 311100, People's Republic of China
- Yuanfeng Wu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Yangfan Ni
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
- Wentao Zhu
- Research Center for Augmented Intelligence, Zhejiang Lab, Hangzhou 310000, People's Republic of China
4
Zhu Y, Zhuang L, Lin Y, Zhang T, Tabatabaei H, Aberle DR, Prosper AE, Chien A, Hsu W. DART: Deformable Anatomy-Aware Registration Toolkit for Lung CT Registration with Keypoints Supervision. Proc IEEE Int Symp Biomed Imaging 2024; 2024:10.1109/ISBI56570.2024.10635326. PMID: 39309597. PMCID: PMC11412684. DOI: 10.1109/isbi56570.2024.10635326.
Abstract
Spatially aligning two computed tomography (CT) scans of the lung using automated image registration techniques is a challenging task due to the deformable nature of the lung. However, existing deep-learning-based lung CT registration models are not trained with explicit anatomical knowledge. We propose the deformable anatomy-aware registration toolkit (DART), a masked autoencoder (MAE)-based approach, to improve the keypoint-supervised registration of lung CTs. Our method incorporates features from multiple decoders of networks trained to segment anatomical structures, including the lung, ribs, vertebrae, lobes, vessels, and airways, to ensure that the MAE learns features relevant to lung anatomy. The pretrained weights of the transformer encoder and patch embeddings are then used as the initialization for training the downstream registration model. We compare DART to existing state-of-the-art registration models. Our experiments show that DART outperforms the baselines (VoxelMorph, ViT-V-Net, and MAE-TransRNet) in target registration error, with relative improvements of 17%, 13%, and 9%, respectively, on corrField-generated keypoints, and of 27%, 10%, and 4%, respectively, on nodule bounding-box centers. Our implementation is available at https://github.com/yunzhengzhu/DART.
Affiliation(s)
- Yunzheng Zhu
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- Department of Electrical & Computer Engineering, UCLA Samueli School of Engineering
- Luoting Zhuang
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- Medical Informatics Home Area, UCLA Graduate Programs in Bioscience
- Yannan Lin
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- Tengyue Zhang
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- Department of Computer Science, UCLA Samueli School of Engineering
- Hossein Tabatabaei
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- Denise R Aberle
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- Ashley E Prosper
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- Aichi Chien
- Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- William Hsu
- Medical & Imaging Informatics, Department of Radiological Sciences, David Geffen School of Medicine at UCLA
- Department of Bioengineering, UCLA Samueli School of Engineering
5
Sun Y, Gu Y, Shi F, Liu J, Li G, Feng Q, Shen D. Coarse-to-fine registration and time-intensity curves constraint for liver DCE-MRI synthesis. Comput Med Imaging Graph 2024; 111:102319. PMID: 38147798. DOI: 10.1016/j.compmedimag.2023.102319.
Abstract
Image registration plays a crucial role in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI), serving as a fundamental step for the subsequent diagnosis of benign and malignant tumors. However, the registration process faces significant challenges due to the substantial intensity changes among different time points caused by the injection of contrast agents. Furthermore, previous studies have often overlooked the alignment of small structures, such as tumors and vessels. In this work, we propose a novel DCE-MRI registration framework that can effectively align the DCE-MRI time series. Specifically, our framework consists of two steps, i.e., a de-enhancement synthesis step and a coarse-to-fine registration step. In the de-enhancement synthesis step, a disentanglement network separates DCE-MRI images into a content component representing the anatomical structures and a style component indicating the presence or absence of contrast agents. This step generates synthetic images in which the contrast agents are removed from the original images, alleviating the negative effects of intensity changes on the subsequent registration. In the registration step, we utilize a coarse registration network followed by a refined registration network, which estimate the coarse and refined displacement vector fields (DVFs) in a pairwise and a groupwise registration manner, respectively. In addition, to enhance alignment accuracy for small structures, a voxel-wise constraint is further imposed by assessing the smoothness of the time-intensity curves (TICs). Experimental results on liver DCE-MRI demonstrate that our proposed method outperforms state-of-the-art approaches, offering more robust and accurate alignment results.
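The voxel-wise TIC constraint can be sketched as a penalty on the temporal second differences of each voxel's intensity curve across the registered series. The function name and the exact penalty form are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def tic_smoothness_penalty(series):
    """Mean squared second temporal difference of voxel-wise
    time-intensity curves in a registered DCE-MRI series of
    shape (T, D, H, W); zero for curves that are linear in time."""
    d2 = series[2:] - 2.0 * series[1:-1] + series[:-2]  # second temporal difference
    return np.mean(d2 ** 2)
```

Well-aligned voxels trace smooth contrast-uptake curves, so this penalty is small; misaligned voxels mix tissues across time points and produce jagged TICs, which the second-difference term punishes.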
Affiliation(s)
- Yuhang Sun
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China; School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Yuning Gu
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Feng Shi
- Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China
- Jiameng Liu
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Guoqiang Li
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou, China.
- Dinggang Shen
- School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China; Department of Research and Development, Shanghai United Imaging Intelligence Co., Ltd., Shanghai, China; Shanghai Clinical Research and Trial Center, Shanghai, China.
6
Xiao H, Xue X, Zhu M, Jiang X, Xia Q, Chen K, Li H, Long L, Peng K. Deep learning-based lung image registration: A review. Comput Biol Med 2023; 165:107434. PMID: 37696177. DOI: 10.1016/j.compbiomed.2023.107434.
Abstract
Lung image registration can effectively describe the relative motion of lung tissues, thereby helping to solve a series of problems in clinical applications. Since the lungs are soft and fairly passive organs, they are influenced by respiration and heartbeat, resulting in discontinuous lung motion and large deformation of anatomic features. This poses great challenges for accurate registration of lung images and its applications. The recent application of deep learning (DL) methods to medical image registration has brought promising results. However, a versatile registration framework has not yet emerged due to the diverse challenges of registering different regions of interest (ROI); DL-based registration methods designed for other ROIs cannot achieve satisfactory results in the lungs. In addition, few review articles are available on DL-based lung image registration. In this review, the development of conventional methods for lung image registration is briefly described, and a more comprehensive survey of DL-based methods is presented. The DL-based methods are classified by supervision type: fully supervised, weakly supervised, and unsupervised. The contributions of researchers in addressing various challenges are described, as well as the limitations of these approaches. This review also presents a comprehensive statistical analysis of the cited papers in terms of evaluation metrics and loss functions. Publicly available datasets for lung image registration are also summarized. Finally, the remaining challenges and potential trends in DL-based lung image registration are discussed.
Affiliation(s)
- Hanguang Xiao
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Xufeng Xue
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Mi Zhu
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China.
- Xin Jiang
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Qingling Xia
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Kai Chen
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Huanqi Li
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Li Long
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China
- Ke Peng
- College of Artificial Intelligent, Chongqing University of Technology, Chongqing 401135, China.
7
Registration on DCE-MRI images via multi-domain image-to-image translation. Comput Med Imaging Graph 2023; 104:102169. PMID: 36586196. DOI: 10.1016/j.compmedimag.2022.102169.
Abstract
Registration of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is challenging, as rapid intensity changes caused by a contrast agent lead to large registration errors. To address this problem, we propose a novel multi-domain image-to-image translation (MDIT) network based on image disentangling for separating motion from contrast changes before registration. In particular, the DCE images are disentangled into a domain-invariant content space (motion) and a domain-specific attribute space (contrast changes). The disentangled representations are then used to generate images in which the contrast changes have been removed from the motion. After that, the resulting deformations can be derived directly from the generated images using free-form deformation (FFD) registration. The method is tested on 10 lung DCE-MRI cases. The proposed method reaches an average root mean squared error of 0.3 ± 0.41, and separation takes about 2.4 s per case. Results show that the proposed method improves registration efficiency without losing registration accuracy compared with several state-of-the-art registration methods.
8
Nguyen V, Alves Pereira LF, Liang Z, Mielke F, Van Houtte J, Sijbers J, De Beenhouwer J. Automatic landmark detection and mapping for 2D/3D registration with BoneNet. Front Vet Sci 2022; 9:923449. PMID: 36061115. PMCID: PMC9434378. DOI: 10.3389/fvets.2022.923449. Open access.
Abstract
The 3D musculoskeletal motion of animals is of interest for various biological studies and can be derived from X-ray fluoroscopy acquisitions by means of image matching or manual landmark annotation and mapping. While the image matching method requires a robust similarity measure (intensity-based) or an expensive computation (tomographic reconstruction-based), the manual annotation method depends on the experience of operators. In this paper, we tackle these challenges with a strategic approach that consists of two building blocks: an automated 3D landmark extraction technique and a deep neural network for 2D landmark detection. For 3D landmark extraction, we propose a technique based on the shortest voxel coordinate variance to extract the 3D landmarks from the 3D tomographic reconstruction of an object. For 2D landmark detection, we propose a customized ResNet18-based neural network, BoneNet, to automatically detect geometrical landmarks on X-ray fluoroscopy images. With a deeper network architecture than the original ResNet18 model, BoneNet can extract and propagate feature vectors for accurate 2D landmark inference. The 3D poses of the animal are then reconstructed by aligning the extracted 2D landmarks from X-ray radiographs with the corresponding 3D landmarks in a 3D object reference model. Our proposed method is validated on X-ray images simulated from a real piglet hindlimb 3D computed tomography scan and does not require manual annotation of landmark positions. The simulation results show that BoneNet accurately detects the 2D landmarks in simulated, noisy 2D X-ray images, resulting in promising rigid and articulated parameter estimations.
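The pose-reconstruction step, fitting a transform that maps reference landmarks onto detected ones, rests on least-squares rigid alignment of corresponding point sets. A minimal 2D Kabsch-style sketch of that underlying idea (the paper's full 2D/3D articulated fit is considerably more involved; the function name is illustrative):

```python
import numpy as np

def kabsch_2d(src, dst):
    """Least-squares rigid transform (rotation R, translation t)
    such that R @ src[i] + t best matches dst[i] for 2D point sets."""
    cs, cd = src.mean(0), dst.mean(0)            # centroids
    H = (src - cs).T @ (dst - cd)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With exact, non-degenerate correspondences the rotation and translation are recovered exactly; with noisy detections the result is the least-squares optimum.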
Affiliation(s)
- Van Nguyen
- Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Correspondence: Van Nguyen
- Luis F. Alves Pereira
- Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Departamento de Ciência da Computação, Universidade Federal do Agreste de Pernambuco, Garanhuns, Brazil
- Zhihua Liang
- Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Falk Mielke
- Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Department of Biology, University of Antwerp, Antwerp, Belgium
- Jeroen Van Houtte
- Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Jan Sijbers
- Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
- Jan De Beenhouwer
- Imec—Vision Lab, Department of Physics, University of Antwerp, Antwerp, Belgium
9
Vadivu NS, Gupta G, Naveed QN, Rasheed T, Singh SK, Dhabliya D. Correlation-Based Mutual Information Model for Analysis of Lung Cancer CT Image. Biomed Res Int 2022; 2022:6451770. PMID: 35958823. PMCID: PMC9363227. DOI: 10.1155/2022/6451770.
Abstract
Lung cancer is a deadly disease that kills many people worldwide every day, and early diagnosis is a necessary prerequisite for improving a patient's chances of survival. Existing methods of tumor detection, most commonly CT scans, are used to recognize infected regions, but CT imaging presents certain obstacles, so this paper proposes a novel correlation-based model for the analysis of lung cancer. When registering images of thoracic and abdominal organs with sliding motion, a total variation regularization term can accommodate discontinuous displacement fields at organ boundaries, but it cannot maintain the local characteristics of the image and loses registration accuracy. The thin-plate spline energy operator and the total variation operator are therefore spatially weighted by the positional weight of each pixel to construct an adaptive thin-plate spline total variation regularization term for single-modality CT and dual-modality CT/PET lung image registration. This regularization term is then combined with the CRMI similarity measure and the L-BFGS optimization approach to create a nonrigid registration procedure. According to experimental findings on the DIR-Lab 4D-CT public dataset and a clinical CT/PET dataset, the proposed method ensures smoothness in the interior of the image while permitting discontinuous motion at the boundary, and achieves greater registration accuracy.
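The intensity-based similarity measure at the core of such multimodal registration, mutual information, can be computed from the joint intensity histogram. This is generic MI, not the paper's correlation-weighted CRMI variant; names are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from
    their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint probability
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

MI is maximal when the two images are well aligned, which is why an optimizer such as L-BFGS can drive a nonrigid transform by maximizing it (subject to the regularization term).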
Affiliation(s)
- N. Shanmuga Vadivu
- Department of Electronics and Communications Engineering, RVS College of Engineering and Technology, Coimbatore, India
- Gauri Gupta
- Department of Biomedical Engineering, SGSITS, Indore, India
- Tariq Rasheed
- Department of English, College of Science and Humanities, Al-Kharj, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
- Sitesh Kumar Singh
- Department of Civil Engineering, Wollega University, Nekemte, Oromia, Ethiopia
- Dharmesh Dhabliya
- Department of Information Technology, Vishwakarma Institute of Information Technology, Pune, Maharashtra, India
10
He Y, Wang A, Li S, Hao A. Hierarchical anatomical structure-aware based thoracic CT images registration. Comput Biol Med 2022; 148:105876. PMID: 35863247. DOI: 10.1016/j.compbiomed.2022.105876.
Abstract
Accurate thoracic CT image registration remains challenging due to complex joint deformations and different motion patterns in multiple organs/tissues during breathing. To combat this, we devise a hierarchical anatomical structure-aware registration framework. It affords a coordination scheme necessary for constraining a general free-form deformation (FFD) during thoracic CT registration. The key is to integrate the deformations of different anatomical structures in a divide-and-conquer way. Specifically, a deformation-ability-aware dissimilarity metric is proposed for complex joint deformations comprising large-scale flexible deformation of the lung region, rigid displacement of the bone region, and small-scale flexible deformation of the remaining regions. Furthermore, a motion pattern-aware regularization is devised to handle different motion patterns, which include sliding motion along the lung surface, almost no displacement of the spine, and smooth deformation of other regions. Moreover, to accommodate large-scale deformation, a novel hierarchical strategy, wherein different anatomical structures are fused on the same control lattice, registers images from coarse to fine via elaborate Gaussian pyramids. Extensive experiments and comprehensive evaluations have been executed on the 4D-CT DIR and 3D DIR COPD datasets. They confirm that the proposed method is locally comparable to state-of-the-art registration methods specializing in local deformations, while guaranteeing overall accuracy. Additionally, in contrast to current popular learning-based methods, which typically require dozens of hours or more of pre-training on powerful graphics cards, our method takes only an average of 63 s to register a case on an ordinary RTX 2080 SUPER graphics card, making it well worth promoting. Our code is available at https://github.com/heluxixue/Structure_Aware_Registration/tree/master.
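The motion-pattern-aware idea can be caricatured as a spatially weighted smoothness penalty on the displacement field: small weights where sliding should be permitted (e.g., along the lung surface), large weights where motion should stay smooth. The function name and the first-order form are assumptions for illustration, not the paper's regularizer:

```python
import numpy as np

def weighted_smoothness(disp, weight):
    """Spatially weighted first-order smoothness penalty on a 3D
    displacement field `disp` of shape (D, H, W, 3), with a
    per-voxel weight map of shape (D, H, W)."""
    grads = np.gradient(disp, axis=(0, 1, 2))   # spatial gradient per axis
    g2 = sum(g ** 2 for g in grads).sum(-1)     # squared gradient magnitude
    return float(np.mean(weight * g2))
```

Driving `weight` toward zero in a narrow band around the lung surface lets the optimizer keep discontinuous (sliding) displacements there without paying a smoothness cost elsewhere.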
Affiliation(s)
- Yuanbo He
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Aoyu Wang
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China.
- Shuai Li
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
- Aimin Hao
- State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing, 100191, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, 100191, China; Peng Cheng Laboratory, Shenzhen, 518055, China.
11
Ye G, He S, Pan R, Zhu L, Zhou D, Lu R. Research on DCE-MRI Images Based on Deep Transfer Learning in Breast Cancer Adjuvant Curative Effect Prediction. J Healthc Eng 2022; 2022:4477099. PMID: 35251566. PMCID: PMC8890845. DOI: 10.1155/2022/4477099.
Abstract
Breast cancer is a serious threat to women's physical and mental health. In recent years its incidence has been rising, and it has become the most common malignant tumor among women in China. Adjuvant chemotherapy has become a standard mode of breast cancer treatment, but the response can usually be assessed only after chemotherapy has been completed, while optimizing the treatment plan and implementing breast-conserving therapy depend on accurate estimation of the pathological response. Predicting the efficacy of adjuvant chemotherapy therefore requires a method that supports the individualized choice of chemotherapy regimens. This article presents research on using DCE-MRI images with deep transfer learning to predict the curative effect of adjuvant therapy for breast cancer. Deep transfer learning algorithms are used to process the images, features of breast cancer after adjuvant chemotherapy are collected through image feature extraction, and predictions are made. The results show that prediction accuracy reaches 70%.
Affiliation(s)
- Guolin Ye
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
- Suqun He
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
- Ruilin Pan
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
- Lewei Zhu
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
- Dan Zhou
- Department of Breast Surgery, The First People's Hospital of Foshan, Foshan 528000, China
- RuiLiang Lu
- MRI Room, The First People's Hospital of Foshan, Foshan 528000, China