1. Kubota Y, Kodera S, Hirata A. A novel transfer learning framework for non-uniform conductivity estimation with limited data in personalized brain stimulation. Phys Med Biol 2025; 70:105002. PMID: 40280154. DOI: 10.1088/1361-6560/add105.
Abstract
Objective. Personalized transcranial magnetic stimulation (TMS) requires individualized head models that incorporate non-uniform conductivity to enable target-specific stimulation. Accurately estimating non-uniform conductivity in individualized head models remains a challenge due to the difficulty of obtaining precise ground-truth data. To address this issue, we developed a novel transfer learning-based approach for automatically estimating non-uniform conductivity in a human head model with limited data. Approach. The proposed method addresses the limitations of the previous conductivity network (CondNet) and improves conductivity estimation accuracy. It generates a segmentation model from T1- and T2-weighted magnetic resonance images, which is then used for conductivity estimation via transfer learning. To enhance the model's representation capability, a Transformer was incorporated into the segmentation model, while the conductivity estimation model combines Attention Gates and Residual Connections, enabling efficient learning even from a small amount of data. Main results. The proposed method was evaluated on 1494 images, demonstrating a 2.4% improvement in segmentation accuracy and a 29.1% increase in conductivity estimation accuracy compared with CondNet. Furthermore, it achieved superior conductivity estimation accuracy even with only three training cases, outperforming CondNet trained on an adequate number of cases. The conductivity maps generated by the proposed method also yielded better results in brain electrical field simulations than those from CondNet. Significance. These findings demonstrate the high utility of the proposed method in brain electrical field simulations and suggest its potential applicability to other medical image analysis tasks and simulations.
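The transfer step this abstract describes — reusing a representation learned from plentiful segmentation data so that a downstream estimator can be fitted from only a few cases — can be illustrated with a deliberately simplified numerical sketch. The linear "encoder" and "head" below are our own toy stand-ins, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source "segmentation" task: plenty of data to learn a shared representation.
W_true = rng.normal(size=(8, 2))        # ground-truth encoder weights
X_src = rng.normal(size=(500, 8))
Y_src = X_src @ W_true                  # abundant source-task targets

# Pretrain the encoder by least squares on the source task.
W_enc, *_ = np.linalg.lstsq(X_src, Y_src, rcond=None)

# Target "conductivity" task: only 3 training cases, built on the same features.
v_true = rng.normal(size=2)
X_tgt = rng.normal(size=(3, 8))
y_tgt = (X_tgt @ W_true) @ v_true

# Transfer: freeze the pretrained encoder, fit only the small head.
H_tgt = X_tgt @ W_enc                   # frozen features, shape (3, 2)
v_head, *_ = np.linalg.lstsq(H_tgt, y_tgt, rcond=None)

# The frozen representation lets 3 cases generalize to unseen data.
X_test = rng.normal(size=(50, 8))
err = np.mean(np.abs((X_test @ W_enc) @ v_head - (X_test @ W_true) @ v_true))
print(f"mean abs error on unseen cases: {err:.2e}")
```

Because the frozen encoder already captures the shared structure, three target cases suffice to determine the small head, which is the intuition behind the limited-data result reported above.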
Affiliation(s)
- Yoshiki Kubota
- Department of Electrical and Mechanical Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Sachiko Kodera
- Department of Electrical and Mechanical Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
- Akimasa Hirata
- Department of Electrical and Mechanical Engineering, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya 466-8555, Japan
2. Bahloul MA, Jabeen S, Benoumhani S, Alsaleh HA, Belkhatir Z, Al‐Wabil A. Advancements in synthetic CT generation from MRI: a review of techniques and trends in radiation therapy planning. J Appl Clin Med Phys 2024; 25:e14499. PMID: 39325781. PMCID: PMC11539972. DOI: 10.1002/acm2.14499.
Abstract
BACKGROUND Magnetic resonance imaging (MRI) and computed tomography (CT) are crucial imaging techniques in both diagnostic imaging and radiation therapy. MRI provides excellent soft-tissue contrast but lacks the direct electron density data needed to calculate dose. CT remains the gold standard in radiation therapy planning (RTP) because of its accurate electron density information, but it exposes patients to ionizing radiation. Synthetic CT (sCT) generation from MRI has become a focus of study in recent years, both for cost-effectiveness and to minimize the side effects of using more than one imaging modality for treatment simulation. It offers significant time and cost efficiencies, bypasses the complexities of co-registration, and can improve treatment accuracy by minimizing registration-related errors. PURPOSE This review provides an overview of the most recent advancements in sCT generation from MRI, with a particular focus on its use within RTP, emphasizing techniques, performance evaluation, clinical applications, future research trends, and open challenges in the field. METHODS A thorough search strategy was employed to conduct a systematic literature review across major scientific databases. Focusing on the past decade's advancements, the review critically examines approaches introduced from 2013 to 2023 for generating sCT from MRI, analyzing their methodologies, highlighting significant contributions, identifying challenges, and summarizing successes within RTP. The synthesis classifies the identified approaches, contrasts their advantages and disadvantages, and identifies broad trends. RESULTS The review identifies various sCT generation approaches: atlas-based, segmentation-based, multi-modal fusion, hybrid, and machine learning (ML)- and deep learning (DL)-based techniques. These approaches are evaluated for image quality, dosimetric accuracy, and clinical acceptability, and are used for MRI-only radiation treatment, adaptive radiotherapy, and MR/PET attenuation correction. Each methodology has its own advantages and limitations. Emerging trends incorporate advanced imaging modalities, including MRI sequences such as Dixon, T1-weighted (T1W), and T2-weighted (T2W), as well as hybrid approaches for enhanced accuracy. CONCLUSIONS The study reviews 2013-2023 work on MRI-based sCT generation, which aims to improve RTP by reducing the use of ionizing radiation, minimizing the negative effects of acquiring both modalities, and improving patient outcomes. It provides insights for researchers and practitioners, emphasizing the need for standardized validation procedures and collaborative efforts to refine methods and address limitations, and anticipates the continued evolution of techniques to improve the precision of sCT in RTP.
Affiliation(s)
- Mohamed A. Bahloul
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Saima Jabeen
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Translational Biomedical Engineering Research Lab, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Sara Benoumhani
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- Zehor Belkhatir
- School of Electronics and Computer Science, University of Southampton, Southampton, UK
- Areej Al‐Wabil
- College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
- AI Research Center, College of Engineering, Alfaisal University, Riyadh, Saudi Arabia
3. Zhong L, Chen Z, Shu H, Zheng K, Li Y, Chen W, Wu Y, Ma J, Feng Q, Yang W. Multi-scale tokens-aware Transformer network for multi-region and multi-sequence MR-to-CT synthesis in a single model. IEEE Trans Med Imaging 2024; 43:794-806. PMID: 37782590. DOI: 10.1109/tmi.2023.3321064.
Abstract
The superiority of magnetic resonance (MR)-only radiotherapy treatment planning (RTP) has been well demonstrated, benefiting from the synthesis of computed tomography (CT) images, which supplies electron density information and eliminates the errors of multi-modal image registration. An increasing number of methods have been proposed for MR-to-CT synthesis. However, synthesizing CT images of different anatomical regions from MR images with different sequences using a single model is challenging, owing to the large differences between these regions and the limitations of convolutional neural networks in capturing global context. In this paper, we propose a multi-scale tokens-aware Transformer network (MTT-Net) for multi-region and multi-sequence MR-to-CT synthesis in a single model. Specifically, we develop a multi-scale image tokens Transformer to capture multi-scale global spatial information between anatomical structures in different regions. Besides, to address the limited attention areas of tokens in the Transformer, we introduce a multi-shape window self-attention that enlarges the receptive fields for learning multi-directional spatial representations. Moreover, we adopt a domain classifier in the generator to introduce domain knowledge for distinguishing the MR images of different regions and sequences. MTT-Net was evaluated on a multi-center dataset and an unseen region, achieving an MAE of 69.33 ± 10.39 HU, SSIM of 0.778 ± 0.028, and PSNR of 29.04 ± 1.32 dB in the head & neck region, and an MAE of 62.80 ± 7.65 HU, SSIM of 0.617 ± 0.058, and PSNR of 25.94 ± 1.02 dB in the abdomen. MTT-Net outperforms state-of-the-art methods in both accuracy and visual quality.
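The MAE (in HU), PSNR and SSIM figures quoted here and in several later entries can be computed as follows. This is a generic sketch: the single-window SSIM below simplifies the usual windowed SSIM, and the 4000 HU data range is an assumed choice, not taken from any of these papers:

```python
import numpy as np

def mae_hu(ct_true, ct_syn):
    """Mean absolute error in Hounsfield units."""
    return float(np.mean(np.abs(ct_true - ct_syn)))

def psnr(ct_true, ct_syn, data_range=4000.0):
    """Peak signal-to-noise ratio in dB; data_range spans the assumed HU window."""
    mse = np.mean((ct_true - ct_syn) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))

def ssim_global(x, y, data_range=4000.0):
    """SSIM over a single whole-image window (the standard form is windowed)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2)) /
                 ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

rng = np.random.default_rng(1)
ct = rng.uniform(-1000.0, 3000.0, size=(64, 64))   # toy "CT" slice in HU
syn = ct + rng.normal(0.0, 50.0, size=ct.shape)    # synthetic CT with ~50 HU noise
print(mae_hu(ct, syn), psnr(ct, syn), ssim_global(ct, syn))
```

With 50 HU Gaussian noise the MAE lands near 40 HU and the PSNR near 38 dB, while the SSIM stays close to 1 because the structure is unchanged.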
4. Sharma P, Nayak DR, Balabantaray BK, Tanveer M, Nayak R. A survey on cancer detection via convolutional neural networks: current challenges and future directions. Neural Netw 2024; 169:637-659. PMID: 37972509. DOI: 10.1016/j.neunet.2023.11.006.
Abstract
Cancer is a condition in which abnormal cells divide uncontrollably and damage body tissues, so detecting it at an early stage is essential. Medical images currently play an indispensable role in detecting various cancers; however, manual interpretation of these images by radiologists is observer-dependent, time-consuming, and tedious. An automated decision-making process is thus an essential need for cancer detection and diagnosis. This paper presents a comprehensive survey on automated cancer detection in various human body organs, namely the breast, lung, liver, prostate, brain, skin, and colon, using convolutional neural networks (CNNs) and medical imaging techniques. It also briefly discusses state-of-the-art deep learning-based cancer detection methods, their outcomes, and the medical imaging data used. Finally, it describes the datasets used for cancer detection and discusses the limitations of existing solutions, future trends, and challenges in this domain. The goal of this paper is to provide comprehensive and insightful information to researchers interested in developing CNN-based models for cancer detection.
Affiliation(s)
- Pallabi Sharma
- School of Computer Science, UPES, Dehradun, 248007, Uttarakhand, India
- Deepak Ranjan Nayak
- Department of Computer Science and Engineering, Malaviya National Institute of Technology, Jaipur, 302017, Rajasthan, India
- Bunil Kumar Balabantaray
- Computer Science and Engineering, National Institute of Technology Meghalaya, Shillong, 793003, Meghalaya, India
- M Tanveer
- Department of Mathematics, Indian Institute of Technology Indore, Simrol, 453552, Indore, India
- Rajashree Nayak
- School of Applied Sciences, Birla Global University, Bhubaneswar, 751029, Odisha, India
5. Krokos G, MacKewn J, Dunn J, Marsden P. A review of PET attenuation correction methods for PET-MR. EJNMMI Phys 2023; 10:52. PMID: 37695384. PMCID: PMC10495310. DOI: 10.1186/s40658-023-00569-0.
Abstract
Despite thirteen years having passed since the installation of the first PET-MR system, these scanners constitute a very small proportion of the total hybrid PET systems installed. This is in stark contrast to the rapid expansion of PET-CT, which quickly established its importance in patient diagnosis within a similar timeframe. One of the main hurdles is the development of an accurate, reproducible and easy-to-use method for attenuation correction. Quantitative discrepancies in PET images between the manufacturer-provided MR methods and the more established CT- or transmission-based attenuation correction methods have led the scientific community into a continuous effort to develop a robust and accurate alternative. These methods can be divided into four broad categories: (i) MR-based, (ii) emission-based, (iii) atlas-based and (iv) machine learning-based attenuation correction, the last of which is rapidly gaining momentum. The first segments the MR images into various tissues and allocates a predefined attenuation coefficient to each tissue. Emission-based methods utilise the PET emission data by simultaneously reconstructing the radioactivity distribution and the attenuation image. Atlas-based methods predict a CT or transmission image given an MR image of a new patient, using databases containing CT or transmission images from the general population. Finally, machine learning methods build a model that predicts the required image from the acquired MR or non-attenuation-corrected PET image by exploiting the underlying features of the images; deep learning dominates this category, making direct use of the acquired images to identify underlying features rather than relying on structured data as more traditional machine learning does. This up-to-date review categorises the attenuation correction approaches in PET-MR and goes through the literature of each. After exploring each category separately, a general overview is given of the current status and potential future approaches, along with a comparison of the four outlined categories.
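The segmentation-based category the review describes — assign each tissue class a predefined attenuation coefficient — reduces to a lookup table. A minimal sketch, with illustrative 511 keV coefficients; published values vary, and these particular numbers are not taken from the review:

```python
import numpy as np

# Illustrative linear attenuation coefficients at 511 keV (cm^-1);
# exact values differ across publications and scanners.
MU_511 = {0: 0.0,      # air / background
          1: 0.0975,   # soft tissue (approximately water)
          2: 0.151}    # bone

def segmentation_based_mu_map(labels):
    """Map a tissue-label image to a mu-map by table lookup."""
    lut = np.zeros(max(MU_511) + 1)
    for tissue, mu in MU_511.items():
        lut[tissue] = mu
    return lut[labels]

labels = np.array([[0, 1, 1],
                   [1, 2, 1],
                   [0, 1, 0]])
mu_map = segmentation_based_mu_map(labels)
# Attenuation of a 511 keV photon along one image row (assumed 0.4 cm pixels):
path = np.exp(-np.sum(mu_map[1] * 0.4))
print(mu_map)
print(f"transmitted fraction along row 1: {path:.3f}")
```

The simplicity is the appeal of this category; the review's point is that a handful of discrete tissue classes cannot capture continuous or patient-specific attenuation, which motivates the other three categories.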
Affiliation(s)
- Georgios Krokos
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Jane MacKewn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Joel Dunn
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
- Paul Marsden
- School of Biomedical Engineering and Imaging Sciences, The PET Centre at St Thomas' Hospital London, King's College London, 1st Floor Lambeth Wing, Westminster Bridge Road, London, SE1 7EH, UK
6. Wang J, Wu QMJ, Pourpanah F. DC-cycleGAN: bidirectional CT-to-MR synthesis from unpaired data. Comput Med Imaging Graph 2023; 108:102249. PMID: 37290374. DOI: 10.1016/j.compmedimag.2023.102249.
Abstract
Magnetic resonance (MR) and computed tomography (CT) images are two typical types of medical images that provide mutually complementary information for accurate clinical diagnosis and treatment. However, obtaining both may be limited by considerations such as cost, radiation dose and missing modalities. Medical image synthesis has recently gained research interest as a way to cope with this limitation. In this paper, we propose a bidirectional learning model, dual contrast cycleGAN (DC-cycleGAN), to synthesize medical images from unpaired data. Specifically, a dual contrast loss is introduced into the discriminators to indirectly build constraints between real source and synthetic images, by treating samples from the source domain as negative samples and encouraging the synthetic images to lie far from the source domain. In addition, cross-entropy and the structural similarity index (SSIM) are integrated into DC-cycleGAN to account for both the luminance and the structure of samples when synthesizing images. The experimental results indicate that DC-cycleGAN produces promising results compared with other cycleGAN-based medical image synthesis methods such as cycleGAN, RegGAN, DualGAN, and NiceGAN. Code is available at https://github.com/JiayuanWang-JW/DC-cycleGAN.
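A rough sketch of the two loss ideas in this abstract: cycle consistency (CT → MR → CT should return the input) and the dual-contrast notion of pushing synthetic images away from the source domain. The toy linear "generators" and the hinge form of the contrast term are our own illustrative choices, not the paper's formulation:

```python
import numpy as np

# Toy invertible "generators": exact inverses of each other.
def g_ct2mr(x):
    return 0.5 * x + 10.0

def g_mr2ct(y):
    return 2.0 * (y - 10.0)

def cycle_loss(x):
    """L1 cycle-consistency: CT -> MR -> CT should reproduce the input."""
    return float(np.mean(np.abs(g_mr2ct(g_ct2mr(x)) - x)))

def dual_contrast_term(d_syn, d_src):
    """Sketch of the dual-contrast idea: the discriminator score of a
    synthetic image should exceed that of real *source*-domain images,
    pushing synthesized outputs away from the source domain.
    The hinge margin here is our own illustrative choice."""
    return float(np.mean(np.maximum(0.0, 1.0 - (d_syn - d_src))))

ct = np.linspace(-100, 100, 5)
print(cycle_loss(ct))                                        # exact inverse -> 0
print(dual_contrast_term(np.array([2.0]), np.array([0.5])))  # margin met -> 0
```

When the margin is violated the contrast term grows, penalizing synthetic images whose discriminator scores do not clearly separate from the source domain.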
Affiliation(s)
- Jiayuan Wang
- Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON, Canada
- Q M Jonathan Wu
- Department of Electrical and Computer Engineering, University of Windsor, Windsor, ON, Canada
- Farhad Pourpanah
- Department of Electrical and Computer Engineering, Queen's University, Kingston, ON, Canada
7. Zhong L, Huang P, Shu H, Li Y, Zhang Y, Feng Q, Wu Y, Yang W. United multi-task learning for abdominal contrast-enhanced CT synthesis through joint deformable registration. Comput Methods Programs Biomed 2023; 231:107391. PMID: 36804266. DOI: 10.1016/j.cmpb.2023.107391.
Abstract
Synthesizing abdominal contrast-enhanced computed tomography (CECT) images from non-enhanced CT (NECT) images is of great importance in the delineation of radiotherapy target volumes: it reduces the risk posed by iodinated contrast agents and avoids the NECT-CECT registration error incurred when transferring delineations. NECT images contain structural information that can reflect the contrast difference between lesions and surrounding tissues. However, existing methods treat synthesis and registration as two separate tasks, which neglects their collaboration and fails to address the misalignment that remains between images after standard pre-processing when training a CECT synthesis model. We therefore propose united multi-task learning (UMTL) for joint synthesis and deformable registration of abdominal CECT. Specifically, UMTL is an end-to-end multi-task framework that integrates a deformation field learning network for reducing misalignment errors with a 3D generator for synthesizing CECT images. Furthermore, learning of enhanced component images and a multi-loss function are adopted to enhance the quality of the synthetic CECT images. The proposed method was evaluated on two datasets of different resolutions and a separate test dataset from another center. The synthetic venous-phase CECT images of the separate test dataset yield a mean absolute error (MAE) of 32.78±7.27 HU, a mean MAE of 24.15±5.12 HU over the liver region, a mean peak signal-to-noise ratio (PSNR) of 27.59±2.45 dB, and a mean structural similarity (SSIM) of 0.96±0.01. The Dice similarity coefficients of the liver region between true and synthetic venous-phase CECT images are 0.96±0.05 (high resolution) and 0.95±0.07 (low resolution), respectively. The proposed method has great potential for aiding the delineation of radiotherapy target volumes.
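The deformation-field branch of such a framework ultimately drives a resampling step: each voxel of the moving image is looked up at a displaced coordinate. A minimal 1-D sketch of that warp, for illustration only; the paper's network and 3-D interpolation are far more involved:

```python
import numpy as np

def warp_1d(moving, displacement):
    """Resample a 1-D 'image' at x + u(x) with linear interpolation --
    the basic operation a learned deformation field drives."""
    n = moving.shape[0]
    coords = np.clip(np.arange(n) + displacement, 0, n - 1)
    lo = np.floor(coords).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    frac = coords - lo
    return (1 - frac) * moving[lo] + frac * moving[hi]

moving = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
u = np.full(5, 1.0)              # sample one voxel to the right everywhere
print(warp_1d(moving, u))        # values slide left; the edge is clamped
```

A registration network learns the displacement field `u` so that the warped moving image matches the fixed image, while the synthesis branch is trained on the now-aligned pairs.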
Affiliation(s)
- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Pinyu Huang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Hai Shu
- Department of Biostatistics, School of Global Public Health, New York University, New York, NY, 10003, United States
- Yin Li
- Department of Information, the Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou 510515, China
- Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
- Yuankui Wu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou 510515, China
8. Li Y, Xu S, Chen H, Sun Y, Bian J, Guo S, Lu Y, Qi Z. CT synthesis from multi-sequence MRI using adaptive fusion network. Comput Biol Med 2023; 157:106738. PMID: 36924728. DOI: 10.1016/j.compbiomed.2023.106738.
Abstract
OBJECTIVE To investigate a method using multi-sequence magnetic resonance imaging (MRI) to synthesize computed tomography (CT) for MRI-only radiation therapy. APPROACH We proposed an adaptive multi-sequence fusion network (AMSF-Net) that exploits both voxel- and context-wise cross-sequence correlations from multiple MRI sequences, via element- and patch-wise fusions respectively, to synthesize CT. The element- and patch-wise fusion feature spaces were combined, and the most representative features were selected for modeling. Finally, a densely connected convolutional decoder used the selected features to produce synthetic CT images. MAIN RESULTS This study used T1-weighted MRI, T2-weighted MRI and CT data from a total of 90 patients. AMSF-Net reduced the average mean absolute error (MAE) from 52.88-57.23 HU to 49.15 HU, increased the peak signal-to-noise ratio (PSNR) from 24.82-25.32 dB to 25.63 dB, increased the structural similarity index measure (SSIM) from 0.857-0.869 to 0.878, and increased the Dice coefficient of bone from 0.886-0.896 to 0.903 compared with three existing multi-sequence learning models. The improvements were statistically significant according to a two-tailed paired t-test. In addition, AMSF-Net reduced the intensity difference from real CT in five organs at risk, four types of normal tissue and tumor compared with the baseline models. The MAE decreases in the parotid and spinal cord exceeded 8% and 16% of the mean intensity value of the corresponding organ, respectively. Qualitative evaluations further confirmed that AMSF-Net produced superior structural image quality for synthesized bone and for small organs such as the eye lens. SIGNIFICANCE The proposed method improves the intensity and structural image quality of synthetic CT and has potential for clinical application.
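Element-wise (voxel-wise) fusion of two MRI sequences can be sketched as a per-voxel gate that weights T1 against T2 features. The softmax gate below is a generic illustration of adaptive fusion, not AMSF-Net's actual mechanism:

```python
import numpy as np

def softmax(a, axis=0):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def elementwise_fusion(feat_t1, feat_t2, gate_logits):
    """Voxel-wise adaptive fusion: a per-voxel softmax gate weights the
    two sequences, so each voxel draws on whichever sequence is more
    informative at that location."""
    w = softmax(gate_logits, axis=0)    # shape (2, H, W), sums to 1 per voxel
    return w[0] * feat_t1 + w[1] * feat_t2

t1 = np.full((2, 2), 1.0)
t2 = np.full((2, 2), 3.0)
logits = np.zeros((2, 2, 2))            # equal logits -> equal 0.5/0.5 weights
fused = elementwise_fusion(t1, t2, logits)
print(fused)                            # every voxel is (1 + 3) / 2 = 2
```

In a trained network the gate logits would themselves be predicted from the input features; patch-wise fusion applies the same idea over patches instead of single voxels.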
Affiliation(s)
- Yan Li
- School of Data and Computer Engineering, Sun Yat-sen University, Guangzhou, PR China
- Sisi Xu
- Department of Radiation Oncology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital & Shenzhen Hospital, Chinese Academy of Medical Sciences, Peking Union Medical College, Shenzhen, PR China
- Ying Sun
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, PR China
- Jing Bian
- School of Data and Computer Engineering, Sun Yat-sen University, Guangzhou, PR China
- Shuanshuan Guo
- The Fifth Affiliated Hospital of Sun Yat-sen University, Cancer Center, Guangzhou, PR China
- Yao Lu
- School of Computer Science and Engineering, Sun Yat-sen University, Guangdong Province Key Laboratory of Computational Science, Guangzhou, PR China
- Zhenyu Qi
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, State Key Laboratory of Oncology in South China, Collaborative Innovation Center of Cancer Medicine, Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, PR China
9. Poonkodi S, Kanchana M. 3D-MedTranCSGAN: 3D medical image transformation using CSGAN. Comput Biol Med 2023; 153:106541. PMID: 36652868. DOI: 10.1016/j.compbiomed.2023.106541.
Abstract
Computer vision techniques for transforming medical images are a rapidly growing area with many specific medical applications. This paper proposes an end-to-end 3D medical image transformation model using a cyclic synthesized GAN, named 3D-MedTranCSGAN. The model integrates non-adversarial loss components with the cyclic synthesized generative adversarial networks: it uses PatchGAN's discriminator network to penalize the difference between the synthesized and original images, and also computes non-adversarial loss functions such as content, perception, and style transfer losses. 3DCascadeNet, a new generator architecture introduced in the paper, enhances the perceptiveness of the transformed medical image through encoding-decoding pairs. We apply 3D-MedTranCSGAN to various tasks without task-specific modification: PET-to-CT image transformation, reconstruction of CT to PET, correction of motion artefacts in MR images, and removal of noise in PET images. In our experiments, 3D-MedTranCSGAN outperformed other transformation methods. For the first task, the proposed model yields an SSIM of 0.914, PSNR of 26.12, MSE of 255.5, VIF of 0.4862, UQI of 0.9067 and LPIPS of 0.2284. For the second task the model yields 0.9197, 25.7, 257.56, 0.4962, 0.9027 and 0.2262; for the third task, 0.8862, 24.94, 0.4071, 0.6410 and 0.2196; and for the final task, 0.9521, 33.67, 33.57, 0.6091, 0.9255 and 0.0244.
Affiliation(s)
- S Poonkodi
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
- M Kanchana
- Department of Computing Technologies, School of Computing, SRM Institute of Science and Technology, Kattankulathur, India
10. Zhong L, Chen Z, Shu H, Zheng Y, Zhang Y, Wu Y, Feng Q, Li Y, Yang W. QACL: quartet attention aware closed-loop learning for abdominal MR-to-CT synthesis via simultaneous registration. Med Image Anal 2023; 83:102692. PMID: 36442293. DOI: 10.1016/j.media.2022.102692.
Abstract
Synthesis of computed tomography (CT) images from magnetic resonance (MR) images is an important task for overcoming the lack of electron density information in MR-only radiotherapy treatment planning (RTP). Innovative methods have been proposed for abdominal MR-to-CT synthesis, but the task remains challenging because of the large misalignment between preprocessed abdominal MR and CT images and the insufficient feature information learned by models. Although several studies have used MR-to-CT synthesis to alleviate the difficulty of multi-modal registration, this misalignment remains unsolved when training the synthesis model. In this paper, we propose an end-to-end quartet attention aware closed-loop learning (QACL) framework for MR-to-CT synthesis via simultaneous registration. Specifically, the proposed quartet attention generator and a mono-modal registration network form a closed loop that improves MR-to-CT synthesis through simultaneous registration. In particular, a quartet-attention mechanism enlarges the receptive fields of the networks to extract long-range and cross-dimension spatial dependencies. Experimental results on two independent abdominal datasets demonstrate that QACL achieves an MAE of 55.30±10.59 HU, PSNR of 22.85±1.43 dB, and SSIM of 0.83±0.04 for synthesis, and a Dice of 0.799±0.129 for registration, outperforming state-of-the-art MR-to-CT synthesis and multi-modal registration methods.
Affiliation(s)
- Liming Zhong
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Zeli Chen
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Hai Shu
- Department of Biostatistics, School of Global Public Health, New York University, New York, NY, 10003, United States
- Yikai Zheng
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Yiwen Zhang
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Yuankui Wu
- Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Qianjin Feng
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
- Yin Li
- Department of Information, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510655, China
- Wei Yang
- School of Biomedical Engineering, Southern Medical University, China; Guangdong Provincial Key Laboratory of Medical Image Processing, Guangzhou, 510515, China
11
Bi-MGAN: Bidirectional T1-to-T2 MRI images prediction using multi-generative multi-adversarial nets. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103994] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
12
He L. Non-rigid Multi-Modal Medical Image Registration Based on Improved Maximum Mutual Information PV Image Interpolation Method. Front Public Health 2022; 10:863307. [PMID: 35719652 PMCID: PMC9198292 DOI: 10.3389/fpubh.2022.863307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2022] [Accepted: 03/21/2022] [Indexed: 11/13/2022] Open
Abstract
With the continuous improvement of medical imaging equipment, CT, MRI and PET can each provide accurate anatomical information for the same patient site. However, because physiological structures in medical images are fuzzy and the imaged objects are incompletely characterized, many registration methods do not perform well. Therefore, building on registration models based on the Partial Volume (PV) interpolation method and on rigid medical image registration, this paper establishes a non-rigid registration model using maximum mutual information with a Novel Partial Volume (NPV) interpolation method. The proposed NPV method uses the Davidon-Fletcher-Powell (DFP) optimization algorithm to solve for the transformation parameter matrix and accurately transform the floating image. In addition, a cubic B-spline is used as the interpolation kernel, which effectively improves the accuracy of the registered image. Finally, the proposed NPV method is compared with the PV interpolation method on human brain CT-MRI-PET images. The results show that the NPV method is more accurate, more robust, and easier to implement. The model may also be applicable to face recognition and fingerprint recognition.
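The objective maximized in such registration is the mutual information between the fixed and floating images, estimated from a joint intensity histogram. The sketch below is a minimal numpy version of that measure only; the PV/NPV-specific part (how non-grid-aligned voxels distribute weight into the joint histogram) is deliberately omitted, and the function name is illustrative.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images via their joint histogram.

    This is the similarity measure maximized in mutual-information
    registration; interpolation schemes such as PV or the NPV variant
    differ in how this joint histogram is filled, not in this formula.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()          # joint probability P(a, b)
    px = pxy.sum(axis=1)               # marginal P(a)
    py = pxy.sum(axis=0)               # marginal P(b)
    nz = pxy > 0                       # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```

As expected for a KL-divergence, the plug-in estimate is non-negative, and an image shares far more information with itself than with a scrambled copy.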
Affiliation(s)
- Liting He
- School of Computer and Information Science, Southwest University, Chongqing, China
13
Wang J, Xiang K, Chen K, Liu R, Ni R, Zhu H, Xiong Y. Medical Image Registration Algorithm Based on Bounded Generalized Gaussian Mixture Model. Front Neurosci 2022; 16:911957. [PMID: 35720703 PMCID: PMC9201218 DOI: 10.3389/fnins.2022.911957] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2022] [Accepted: 05/04/2022] [Indexed: 11/13/2022] Open
Abstract
In this paper, a method for medical image registration based on the bounded generalized Gaussian mixture model is proposed. The bounded generalized Gaussian mixture model is used to approximate the joint intensity distribution of the source medical images. The mixture model is formulated in a maximum-likelihood framework and solved by an expectation-maximization algorithm. The registration performance of the proposed approach on different medical images is verified through extensive computer simulations. Empirical findings confirm that the proposed approach is significantly better than conventional ones.
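The expectation-maximization loop underlying such mixture fitting can be sketched with plain (unbounded) Gaussian components; the paper's bounded generalized Gaussian model changes the component density and its M-step updates, not the E/M structure shown in this minimal 1-D sketch.

```python
import numpy as np

def gmm_em(x, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # crude initialization
    var = np.array([x.var(), x.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = P(component k | x_i)
        d = x[:, None] - mu[None, :]
        pdf = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]
        var = (r * d**2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

In the registration setting, `x` would be (2-D) joint intensity samples rather than a scalar signal, but the alternation is identical.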
Affiliation(s)
- Jingkun Wang
- Department of Orthopaedics, Daping Hospital, Army Medical University, Chongqing, China
- Kun Xiang
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- Kuo Chen
- School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, China
- Rui Liu
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- Ruifeng Ni
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- Hao Zhu
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing, China
- Yan Xiong
- Department of Orthopaedics, Daping Hospital, Army Medical University, Chongqing, China
14
Deng L, Hu J, Wang J, Huang S, Yang X. Synthetic CT generation based on CBCT using respath-cycleGAN. Med Phys 2022; 49:5317-5329. [PMID: 35488299 DOI: 10.1002/mp.15684] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 04/08/2022] [Accepted: 04/13/2022] [Indexed: 11/10/2022] Open
Abstract
PURPOSE Cone-beam computed tomography (CBCT) plays an important role in radiotherapy, but the presence of a large number of artifacts limits its application. The purpose of this study was to use respath-cycleGAN to synthesize CT (sCT) images similar to planning CT (pCT) from CBCT for future clinical practice. METHODS The method integrates the respath concept into the original cycleGAN, called respath-cycleGAN, to map CBCT to pCT. Thirty patients were used for training and 15 for testing. RESULTS The mean absolute error (MAE), root mean square error (RMSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and spatial non-uniformity (SNU) were calculated to assess the quality of sCT generated from CBCT. Compared with CBCT images, the MAE improved from 197.72 to 140.7, the RMSE from 339.17 to 266.51, and the PSNR from 22.07 to 24.44, while the SSIM increased from 0.948 to 0.964. Both visually and quantitatively, sCT with respath is superior to sCT without respath. We also performed a generalization test of the head-and-neck (H&N) model on a pelvic dataset; the results again showed that our model was superior. CONCLUSION We developed a respath-cycleGAN method to synthesize good-quality CT from CBCT. In future clinical practice, this method may be used to develop radiotherapy plans.
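The MAE/RMSE/PSNR figures reported above are standard voxelwise comparisons between the synthetic and planning CT. A minimal numpy version is below; the `data_range` of 4096 HU is an illustrative assumption for 12-bit CT, not a value taken from the paper.

```python
import numpy as np

def ct_metrics(sct, pct, data_range=4096.0):
    """MAE, RMSE, and PSNR between a synthetic CT and planning CT (in HU).

    data_range is the assumed dynamic range used in the PSNR definition;
    published studies vary in this choice, so compare like with like.
    """
    err = sct.astype(float) - pct.astype(float)
    mae = np.abs(err).mean()
    rmse = np.sqrt((err**2).mean())
    psnr = 20 * np.log10(data_range / rmse) if rmse > 0 else np.inf
    return mae, rmse, psnr
```

A constant 10 HU offset, for instance, yields MAE = RMSE = 10 and a PSNR fixed by the chosen dynamic range.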
Affiliation(s)
- Liwei Deng
- Heilongjiang Provincial Key Laboratory of Complex Intelligent System and Integration, School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
- Jie Hu
- School of Automation, Harbin University of Science and Technology, Harbin, Heilongjiang, 150080, China
- Jing Wang
- School of Biomedical Engineering, Guangzhou Xinhua University, Guangzhou, Guangdong, 510520, China
- Sijuan Huang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
- Xin Yang
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center; State Key Laboratory of Oncology in South China; Collaborative Innovation Center for Cancer Medicine; Guangdong Key Laboratory of Nasopharyngeal Carcinoma Diagnosis and Therapy, Guangzhou, Guangdong, 510060, China
15
Performance Evaluation of Feature Matching Techniques for Detecting Reinforced Soil Retaining Wall Displacement. REMOTE SENSING 2022. [DOI: 10.3390/rs14071697] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
Image registration technology is widely applied in various matching methods. In this study, we aim to evaluate the feature matching performance and to find an optimal technique for detecting three types of behaviors—facing displacement, settlement, and combined displacement—in reinforced soil retaining walls (RSWs). For a single block with an artificial target and a multiblock structure with artificial and natural targets, five popular detectors and descriptors—KAZE, SURF, MinEigen, ORB, and BRISK—were used to evaluate the resolution performance. For comparison, the repeatability, matching score, and inlier matching features were analyzed based on the number of extracted and matched features. The axial registration error (ARE) was used to verify the accuracy of the methods by comparing the position between the estimated and real features. The results showed that the KAZE method was the best detector and descriptor for RSWs (block shape target), with the highest probability of successfully matching features. In the multiblock experiment, the block used as a natural target showed similar matching performance to that of the block with an artificial target attached. Therefore, the behaviors of RSW blocks can be analyzed using the KAZE method without installing an artificial target.
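The matching-score and inlier counts evaluated above rest on nearest-neighbor descriptor matching. The sketch below is a generic numpy implementation with Lowe's ratio test, applicable to any of the compared descriptors (KAZE, SURF, ORB, BRISK); the function name and the 0.8 ratio are illustrative defaults, not the study's exact protocol.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor descriptor matching with a ratio test.

    Returns (i, j) index pairs where the best match in desc_b is
    sufficiently better than the second best; matching score and
    repeatability are then ratios of such matches to detected features.
    """
    dist = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    order = np.argsort(dist, axis=1)
    matches = []
    for i in range(len(desc_a)):
        j1, j2 = order[i, 0], order[i, 1]
        if dist[i, j1] < ratio * dist[i, j2]:   # best clearly beats runner-up
            matches.append((i, j1))
    return matches
```

Binary descriptors such as ORB or BRISK would substitute Hamming distance for the Euclidean norm used here.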
16
Yang Z, Leng L, Li M, Chu J. A computer-aid multi-task light-weight network for macroscopic feces diagnosis. MULTIMEDIA TOOLS AND APPLICATIONS 2022; 81:15671-15686. [PMID: 35250359 PMCID: PMC8884099 DOI: 10.1007/s11042-022-12565-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 06/15/2021] [Accepted: 01/31/2022] [Indexed: 06/14/2023]
Abstract
Abnormal traits and colors of feces typically indicate that a patient may be suffering from a tumor or a digestive-system disease. A fast, accurate, and automatic feces-based health diagnosis system is therefore needed to improve examination speed and reduce infection risk. The rarity of pathological images degrades the accuracy of trained models; to alleviate this problem, we employ augmentation and over-sampling to expand the samples of under-represented classes in each training batch. To achieve strong recognition performance and to leverage the latent correlation between the traits and colors of pathological samples, a multi-task network is developed to recognize the colors and traits of macroscopic feces images. The parameter count of a single multi-task network is generally much smaller than the total parameter count of multiple single-task networks, so storage cost is reduced. The loss function of the multi-task network is the weighted sum of the losses of the two tasks; in this paper, the task weights are determined according to difficulty levels measured by fitted linear functions. Extensive experiments confirm that the proposed method yields higher accuracy and improved efficiency.
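The weighted-sum multi-task objective described above is easy to make concrete. This is a minimal numpy sketch; the fixed `w_trait` weight stands in for the paper's difficulty-derived weights, and the argument names are illustrative.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean softmax cross-entropy over a batch."""
    z = logits - logits.max(axis=1, keepdims=True)       # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def multitask_loss(trait_logits, trait_y, color_logits, color_y, w_trait=0.6):
    """Weighted sum of the trait and color task losses.

    In the paper the weights come from fitted difficulty curves; here
    w_trait is a fixed illustrative value.
    """
    return (w_trait * cross_entropy(trait_logits, trait_y)
            + (1 - w_trait) * cross_entropy(color_logits, color_y))
```

Because the two heads share a backbone, only the loss weighting (not the architecture) changes when task difficulty estimates are updated.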
Affiliation(s)
- Ziyuan Yang
- School of Software, Nanchang Hangkong University, Nanchang, 330063 People's Republic of China
- College of Computer Science, Sichuan University, Chengdu, 610065 People's Republic of China
- Lu Leng
- School of Software, Nanchang Hangkong University, Nanchang, 330063 People's Republic of China
- School of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, 120749 Republic of Korea
- Ming Li
- School of Information Engineering, Nanchang Hangkong University, Nanchang, 330063 People's Republic of China
- Jun Chu
- School of Software, Nanchang Hangkong University, Nanchang, 330063 People's Republic of China
17
Abstract
Attenuation correction has been one of the main methodological challenges in the integrated positron emission tomography and magnetic resonance imaging (PET/MRI) field. As standard transmission or computed tomography approaches are not available in integrated PET/MRI scanners, MR-based attenuation correction approaches had to be developed. Aspects that have to be considered for implementing accurate methods include the need to account for attenuation in bone tissue, normal and pathological lung and the MR hardware present in the PET field-of-view, to reduce the impact of subject motion, to minimize truncation and susceptibility artifacts, and to address issues related to the data acquisition and processing both on the PET and MRI sides. The standard MR-based attenuation correction techniques implemented by the PET/MRI equipment manufacturers and their impact on clinical and research PET data interpretation and quantification are first discussed. Next, the more advanced methods, including the latest generation deep learning-based approaches that have been proposed for further minimizing the attenuation correction related bias are described. Finally, a future perspective focused on the needed developments in the field is given.
Affiliation(s)
- Ciprian Catana
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, United States of America
18
Mecheter I, Alic L, Abbod M, Amira A, Ji J. MR Image-Based Attenuation Correction of Brain PET Imaging: Review of Literature on Machine Learning Approaches for Segmentation. J Digit Imaging 2020; 33:1224-1241. [PMID: 32607906 PMCID: PMC7573060 DOI: 10.1007/s10278-020-00361-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Recent emerging hybrid technology of positron emission tomography/magnetic resonance (PET/MR) imaging has generated a great need for an accurate MR image-based PET attenuation correction. MR image segmentation, as a robust and simple method for PET attenuation correction, has been clinically adopted in commercial PET/MR scanners. The general approach in this method is to segment the MR image into different tissue types, each assigned an attenuation constant as in an X-ray CT image. Machine learning techniques such as clustering, classification and deep networks are extensively used for brain MR image segmentation. However, only limited work has been reported on using deep learning in brain PET attenuation correction. In addition, there is a lack of clinical evaluation of machine learning methods in this application. The aim of this review is to study the use of machine learning methods for MR image segmentation and its application in attenuation correction for PET brain imaging. Furthermore, challenges and future opportunities in MR image-based PET attenuation correction are discussed.
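The segmentation-based correction the review describes, assigning each tissue class a constant attenuation value, reduces at inference time to a label-to-coefficient lookup. The sketch below uses approximate textbook linear attenuation coefficients at 511 keV; the specific values and label scheme are illustrative, not from the reviewed literature.

```python
import numpy as np

# Illustrative linear attenuation coefficients at 511 keV (cm^-1);
# approximate textbook figures, assumed here for demonstration.
MU_511 = {0: 0.0,      # air / background
          1: 0.096,    # soft tissue (water-like)
          2: 0.151,    # bone
          3: 0.022}    # lung

def labels_to_mu_map(seg, table=MU_511):
    """Convert a tissue-label segmentation into a PET attenuation map."""
    mu = np.zeros(seg.shape, dtype=float)
    for label, value in table.items():
        mu[seg == label] = value
    return mu
```

The machine-learning work surveyed in the review concerns producing the label map `seg` (or a continuous pseudo-CT) from MR images; this final lookup step is the same regardless.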
Affiliation(s)
- Imene Mecheter
- Department of Electronic and Computer Engineering, Brunel University London, Uxbridge, UK
- Department of Electrical and Computer Engineering, Texas A & M University at Qatar, Doha, Qatar
- Lejla Alic
- Magnetic Detection and Imaging Group, Faculty of Science and Technology, University of Twente, Enschede, Netherlands
- Maysam Abbod
- Department of Electronic and Computer Engineering, Brunel University London, Uxbridge, UK
- Abbes Amira
- Institute of Artificial Intelligence, De Montfort University, Leicester, UK
- Jim Ji
- Department of Electrical and Computer Engineering, Texas A & M University at Qatar, Doha, Qatar
- Department of Electrical and Computer Engineering, Texas A & M University, College Station, TX, USA
19
Xu L, Zeng X, Zhang H, Li W, Lei J, Huang Z. BPGAN: Bidirectional CT-to-MRI prediction using multi-generative multi-adversarial nets with spectral normalization and localization. Neural Netw 2020; 128:82-96. [DOI: 10.1016/j.neunet.2020.05.001] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2019] [Revised: 03/21/2020] [Accepted: 05/02/2020] [Indexed: 01/18/2023]
20
Tripartite-GAN: Synthesizing liver contrast-enhanced MRI to improve tumor detection. Med Image Anal 2020; 63:101667. [DOI: 10.1016/j.media.2020.101667] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2019] [Revised: 01/15/2020] [Accepted: 02/15/2020] [Indexed: 01/08/2023]
21
Hu Y, Zhang L. MRI-only Radiation Therapy: Pseudo-CT Based on Cubic-Feature Extraction and Alternative Regression Forest. INT J PATTERN RECOGN 2020. [DOI: 10.1142/s0218001420540336] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Magnetic resonance imaging (MRI) has attracted extensive attention in radiation therapy, yet computed tomography (CT) has been retained by researchers: although the electron-density information obtained from CT scans is debated when calculating 3D dose distributions in tissue, the bone anatomy that CT provides is accurate for constructing reference radiographs. Recently, combined MRI/CT workflows have united the soft-tissue contrast offered by MRI with the advantages of CT imaging. However, disadvantages remain in the MRI/CT workflow because voxel intensities are unbalanced between the MRI and CT scans. Here, based on a CT-MRI mapping method, the potential of pseudo-CT (PCT) to replace CT planning was studied. The PCT is estimated from the corresponding MRI alone using patch-based random forest regression. The CT voxel target is trained on 3D Gabor features and Local Binary Patterns (LBP) extracted from MRI cubes, and the regression task is solved by an alternative regression forest. Experiments show that the method performs better than current dictionary-learning-based (DLB) and atlas-based (AB) methods.
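The LBP texture features mentioned above are simple to compute: each pixel gets a byte whose bits encode comparisons against its eight neighbors. This is a basic 2-D numpy sketch (the paper works on 3-D MRI cubes and combines LBP with Gabor responses); the neighbor ordering is one common convention, not the paper's exact one.

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbor Local Binary Pattern codes for interior pixels.

    Each interior pixel receives a byte whose bits record whether each
    of its 8 neighbors is >= the center value.
    """
    c = img[1:-1, 1:-1]                       # interior (center) pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy,
                 1 + dx:img.shape[1] - 1 + dx]  # neighbor at offset (dy, dx)
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code
```

On a constant image every neighbor ties with the center, so every interior code is 255; texture shows up as variation in these codes.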
Affiliation(s)
- Yongsheng Hu
- School of Electrical and Information Engineering, Tianjin University, Tianjin, P. R. China
- School of Information Engineering, Binzhou University, Shandong, P. R. China
- Liyi Zhang
- School of Electrical and Information Engineering, Tianjin University, Tianjin, P. R. China
- School of Information Engineering, Tianjin University of Commerce, Tianjin, P. R. China
22
Liu X, Li JB, Pan JS. Feature Point Matching Based on Distinct Wavelength Phase Congruency and Log-Gabor Filters in Infrared and Visible Images. SENSORS 2019; 19:s19194244. [PMID: 31569596 PMCID: PMC6806253 DOI: 10.3390/s19194244] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/29/2019] [Revised: 09/20/2019] [Accepted: 09/24/2019] [Indexed: 11/17/2022]
Abstract
Infrared and visible image matching methods have been rising in popularity with the emergence of more kinds of sensors, enabling applications in visual navigation, precision guidance, image fusion, and medical image analysis, where image matching is utilized for localization, fusion, and image analysis. In this paper, an infrared and visible image matching approach based on distinct wavelength phase congruency (DWPC) and log-Gabor filters is proposed, and is further modified for non-linear image matching across different physical wavelengths. Phase congruency (PC) theory is utilized to obtain PC images with intrinsic and rich image features for images containing complex intensity changes or noise. The maximum and minimum moments of the PC images are then computed to obtain corners in the matched images. To build the descriptors, log-Gabor filters are applied and overlapping subregions are extracted in the neighborhood of selected pixels. To improve accuracy, the moments of PCs in the original image and in a Gaussian-smoothed image are combined to detect corners. Meanwhile, since the two matched images have different physical wavelengths, it is inappropriate for them to share the same PC wavelength; the PC wavelength is therefore varied per modality in the experiments. For realistic application, the BiDimRegression method is used to compute the similarity between two point sets in infrared and visible images. The proposed approach is evaluated on four data sets with 237 pairs of visible and infrared images, and its performance is compared with state-of-the-art approaches: the edge-oriented histogram descriptor (EHD), phase congruency edge-oriented histogram descriptor (PCEHD), and log-Gabor histogram descriptor (LGHD) algorithms. The experimental results indicate that the accuracy rate of the proposed approach is 50% higher than the traditional approaches on infrared and visible images.
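The log-Gabor filters used for both phase congruency and the descriptors are defined directly in the frequency domain. Below is a minimal numpy sketch of the radial component only (orientation selectivity omitted); the `sigma_ratio` of 0.55 is a common default, not necessarily the paper's setting.

```python
import numpy as np

def log_gabor_radial(shape, wavelength, sigma_ratio=0.55):
    """Radial part of a log-Gabor transfer function in the frequency domain.

    Log-Gabor filters have no DC component by construction, which is why
    they suit phase-congruency computation; wavelength sets the center
    frequency and sigma_ratio the bandwidth.
    """
    rows, cols = shape
    u = np.fft.fftfreq(cols)
    v = np.fft.fftfreq(rows)
    radius = np.sqrt(u[None, :]**2 + v[:, None]**2)
    radius[0, 0] = 1.0                      # avoid log(0) at the DC bin
    f0 = 1.0 / wavelength                   # center frequency
    lg = np.exp(-(np.log(radius / f0))**2 / (2 * np.log(sigma_ratio)**2))
    lg[0, 0] = 0.0                          # explicitly zero the DC response
    return lg
```

Varying `wavelength` per modality is exactly the "distinct wavelength" adjustment the abstract describes for images with different physical wavelengths.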
Affiliation(s)
- Xiaomin Liu
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Information and Electronic Technology Institute, Jiamusi University, Jiamusi 154002, China
- Jun-Bao Li
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
- Jeng-Shyang Pan
- College of Computer Science and Engineering, Shandong University of Science and Technology, Qingdao 266510, China
- Fujian Provincial Key Laboratory of Big Data Mining and Applications, Fujian University of Technology, Fuzhou 350118, China
- College of Informatics, Chaoyang University of Science and Technology, Taichung 413, Taiwan
23
Zhong L, Chen Y, Zhang X, Liu S, Wu Y, Liu Y, Lin L, Feng Q, Chen W, Yang W. Flexible Prediction of CT Images From MRI Data Through Improved Neighborhood Anchored Regression for PET Attenuation Correction. IEEE J Biomed Health Inform 2019; 24:1114-1124. [PMID: 31295129 DOI: 10.1109/jbhi.2019.2927368] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
Given the complicated relationship between the magnetic resonance imaging (MRI) signals and the attenuation values, the attenuation correction in hybrid positron emission tomography (PET)/MRI systems remains a challenging task. Currently, existing methods are either time-consuming or require sufficient samples to train the models. In this paper, an efficient approach for predicting pseudo computed tomography (CT) images from T1- and T2-weighted MRI data with limited data is proposed. The proposed approach uses improved neighborhood anchored regression (INAR) as a baseline method to pre-calculate projected matrices to flexibly predict the pseudo CT patches. Techniques, including the augmentation of the MR/CT dataset, learning of the nonlinear descriptors of MR images, hierarchical search for nearest neighbors, data-driven optimization, and multi-regressor ensemble, are adopted to improve the effectiveness of the proposed approach. In total, 22 healthy subjects were enrolled in the study. The pseudo CT images obtained using INAR with multi-regressor ensemble yielded mean absolute error (MAE) of 92.73 ± 14.86 HU, peak signal-to-noise ratio of 29.77 ± 1.63 dB, Pearson linear correlation coefficient of 0.82 ± 0.05, dice similarity coefficient of 0.81 ± 0.03, and the relative mean absolute error (rMAE) in PET attenuation correction of 1.30 ± 0.20% compared with true CT images. Moreover, our proposed INAR method, without any refinement strategies, can achieve considerable results with only seven subjects (MAE 106.89 ± 14.43 HU, rMAE 1.51 ± 0.21%). The experiments prove the superior performance of the proposed method over the six innovative methods. Moreover, the proposed method can rapidly generate the pseudo CT images that are suitable for PET attenuation correction.
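The speed of anchored regression comes from moving all the heavy lifting offline: each anchor stores a precomputed projection matrix, so online prediction is a nearest-anchor search plus one matrix-vector product. The sketch below is an illustrative simplification of that idea, not the INAR method itself; the single-nearest-anchor rule and names are assumptions.

```python
import numpy as np

def anchored_predict(feature, anchors, projections):
    """Predict a CT patch from an MR feature via anchored regression.

    Offline, each anchor k gets a projection matrix P_k (e.g. a ridge-
    regression solution over its neighborhood of training MR/CT pairs).
    Online, prediction is: pick the nearest anchor, then apply P_k.
    """
    k = np.argmax(anchors @ feature)        # nearest anchor by correlation
    return projections[k] @ feature
```

Refinements such as hierarchical anchor search and multi-regressor ensembling, as in the paper, change how `k` and `projections` are obtained but keep this constant-time prediction step.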
24
Pang S, Su Z, Leung S, Nachum IB, Chen B, Feng Q, Li S. Direct automated quantitative measurement of spine by cascade amplifier regression network with manifold regularization. Med Image Anal 2019; 55:103-115. [DOI: 10.1016/j.media.2019.04.012] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/30/2018] [Revised: 02/25/2019] [Accepted: 04/17/2019] [Indexed: 11/30/2022]