1.
Hou C, Huang T, Hu K, Ye Z, Guo J, Zhou H. Artificial intelligence-assisted multimodal imaging for the clinical applications of breast cancer: a bibliometric analysis. Discov Oncol 2025;16:537. [PMID: 40237900; PMCID: PMC12003249; DOI: 10.1007/s12672-025-02329-1]
Abstract
BACKGROUND Breast cancer (BC) remains a leading cause of cancer-related mortality among women globally, with increasing incidence rates posing significant public health challenges. Recent advancements in artificial intelligence (AI) have revolutionized medical imaging, particularly by enhancing diagnostic accuracy and prognostic capabilities for BC. While multimodal imaging combined with AI has shown remarkable potential, a comprehensive analysis is needed to synthesize current research and identify emerging trends and hotspots in AI-assisted multimodal imaging for BC. METHODS This study analyzed literature on AI-assisted multimodal imaging in BC published from January 2010 to November 2024 in the Web of Science Core Collection (WoSCC). Bibliometric and visualization tools, including VOSviewer, CiteSpace, and the Bibliometrix R package, were employed to assess countries, institutions, authors, journals, and keywords. RESULTS A total of 80 publications were included, revealing a steady increase in annual publications and citations, with a notable surge after 2021. China led in productivity and citations, while Germany exhibited the highest average citations per publication. The United States demonstrated the strongest international collaboration. The most productive institution and author were Radboud University Nijmegen and Xi, Xiaoming, respectively. Publications appeared most frequently in Computerized Medical Imaging and Graphics, with Qian, XJ's 2021 study on BC risk prediction under deep learning frameworks being the most influential. Keyword analysis highlighted themes such as "breast cancer", "classification", and "deep learning". CONCLUSIONS AI-assisted multimodal imaging has significantly advanced BC diagnosis and management, with promising future developments. This study offers researchers a comprehensive overview of current frameworks and emerging research directions. Future efforts are expected to focus on improving diagnostic precision and refining therapeutic strategies through optimized imaging techniques and AI algorithms, with international collaboration emphasized to drive innovation and clinical translation.
Affiliation(s)
- Chenke Hou
- Hangzhou TCM Hospital of Zhejiang Chinese Medical University (Hangzhou Hospital of Traditional Chinese Medicine), Hangzhou, 310007, Zhejiang, China
- Ting Huang
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Keke Hu
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Zhifeng Ye
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Junhua Guo
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China
- Heran Zhou
- Department of Oncology, Hangzhou TCM Hospital Affiliated to Zhejiang Chinese Medical University, No. 453 Stadium Road, Xihu District, Hangzhou, 310007, Zhejiang, China.
2.
Wang H, Wang T, Hao Y, Ding S, Feng J. Breast tumor segmentation via deep correlation analysis of multi-sequence MRI. Med Biol Eng Comput 2024;62:3801-3814. [PMID: 39031329; DOI: 10.1007/s11517-024-03166-0]
Abstract
Precise segmentation of breast tumors from MRI is crucial for breast cancer diagnosis, as it allows detailed calculation of tumor characteristics such as shape, size, and edges. Current segmentation methodologies face significant challenges in accurately modeling the complex interrelationships inherent in multi-sequence MRI data. This paper presents a hybrid deep network framework with three interconnected modules, aimed at efficiently integrating and exploiting the spatial-temporal features among multiple MRI sequences for breast tumor segmentation. The first module is an advanced multi-sequence encoder with a densely connected architecture that separates the encoding pathway into multiple streams, one per MRI sequence. To harness the intricate correlations between different sequence features, the second, multi-scale feature embedding module applies a sequence-aware and temporal-aware method that adeptly fuses the spatial-temporal features of MRI. Finally, the decoder module upsamples the feature maps, refining the resolution to achieve highly precise segmentation of breast tumors. In contrast to other popular methods, the proposed method learns the interrelationships inherent in multi-sequence MRI. Extensive experiments justify the proposed method: it achieves notable improvements in segmentation performance, with Dice Similarity Coefficient (DSC), Intersection over Union (IoU), and Positive Predictive Value (PPV) scores of 80.57%, 74.08%, and 84.74%, respectively.
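The DSC, IoU, and PPV metrics reported in this abstract have standard definitions for binary masks; a minimal NumPy sketch of those definitions (illustrative only, not the authors' code):

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Compute DSC, IoU, and PPV for binary segmentation masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    tp = np.logical_and(pred, target).sum()    # true positive pixels
    fp = np.logical_and(pred, ~target).sum()   # false positive pixels
    fn = np.logical_and(~pred, target).sum()   # false negative pixels
    dsc = 2 * tp / (2 * tp + fp + fn)          # Dice similarity coefficient
    iou = tp / (tp + fp + fn)                  # intersection over union (Jaccard)
    ppv = tp / (tp + fp)                       # positive predictive value (precision)
    return dsc, iou, ppv

# Tiny toy masks: tp=2, fp=1, fn=1
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
target = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
dsc, iou, ppv = segmentation_metrics(pred, target)
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why papers often report both from the same overlap counts.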
Affiliation(s)
- Hongyu Wang
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China.
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China.
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China.
- Tonghui Wang
- Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 7101127, China
- Yanfang Hao
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Songtao Ding
- School of Computer Science and Technology, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Shaanxi Key Laboratory of Network Data Analysis and Intelligent Processing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Xi'an Key Laboratory of Big Data and Intelligent Computing, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi, 710121, China
- Jun Feng
- Department of Information Science and Technology, Northwest University, Xi'an, Shaanxi, 7101127, China.
3.
Cao W, Guo J, You X, Liu Y, Li L, Cui W, Cao Y, Chen X, Zheng J. NeighborNet: Learning Intra- and Inter-Image Pixel Neighbor Representation for Breast Lesion Segmentation. IEEE J Biomed Health Inform 2024;28:4761-4771. [PMID: 38743530; DOI: 10.1109/jbhi.2024.3400802]
Abstract
Breast lesion segmentation from ultrasound images is essential in computer-aided breast cancer diagnosis. To alleviate the problems of blurry lesion boundaries and irregular morphologies, common practice combines CNNs and attention to integrate global and local information. However, previous methods use two independent modules to extract global and local features separately; such feature-wise inflexible integration ignores the semantic gap between them, resulting in representation redundancy or insufficiency and undesirable restrictions in clinical practice. Moreover, medical images are highly similar to each other due to the imaging methods and human tissues, but the global information captured by transformer-based methods in the medical domain is limited to individual images; the semantic relations and common knowledge across images are largely ignored. To alleviate these problems, this paper develops a pixel neighbor representation learning method (NeighborNet) that flexibly integrates global and local context within and across images for lesion morphology and boundary modeling. Concretely, we design two neighbor layers to investigate two properties of neighbors: their number and their distribution. The neighbor number for each pixel is not fixed but determined by the pixel itself, and the neighbor distribution is extended from one image to all images in the dataset. With these two properties, at each feature level the proposed NeighborNet can evolve into a transformer or degenerate into a CNN for adaptive context representation learning, coping with irregular lesion morphologies and blurry boundaries. State-of-the-art performance on three ultrasound datasets demonstrates the effectiveness of the proposed NeighborNet.
4.
Zhang W, Tao Y, Huang Z, Li Y, Chen Y, Song T, Ma X, Zhang Y. Multi-phase features interaction transformer network for liver tumor segmentation and microvascular invasion assessment in contrast-enhanced CT. Math Biosci Eng 2024;21:5735-5761. [PMID: 38872556; DOI: 10.3934/mbe.2024253]
Abstract
Precise segmentation of liver tumors from computed tomography (CT) scans is a prerequisite for various clinical applications. Multi-phase CT imaging enhances tumor characterization, thereby assisting radiologists in accurate identification. However, existing automatic liver tumor segmentation models do not fully exploit multi-phase information and lack the capability to capture global information. In this study, we developed a pioneering multi-phase feature interaction Transformer network (MI-TransSeg) for accurate liver tumor segmentation and subsequent microvascular invasion (MVI) assessment in contrast-enhanced CT images. In the proposed network, an efficient multi-phase feature interaction module enables bi-directional feature interaction among multiple phases, maximally exploiting the available multi-phase information. To enhance the model's capability to extract global information, a hierarchical transformer-based encoder and decoder architecture was designed. Importantly, we devised a multi-resolution scale feature aggregation strategy (MSFA) to optimize the parameters and performance of the proposed model. After segmentation, the liver tumor masks generated by MI-TransSeg were used to extract radiomic features for clinical MVI assessment. With Institutional Review Board (IRB) approval, a clinical multi-phase contrast-enhanced abdominal CT dataset of 164 patients with liver tumors was collected. The experimental results demonstrated that MI-TransSeg was superior to various state-of-the-art methods, and the tumor masks predicted by our method showed promising potential for MVI assessment. In conclusion, MI-TransSeg presents an innovative paradigm for the segmentation of complex liver tumors, underscoring the significance of multi-phase CT data exploitation, and has the potential to assist radiologists in diagnosing liver tumors and assessing microvascular invasion.
Affiliation(s)
- Wencong Zhang
- Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
- Department of Biomedical Engineering, College of Design and Engineering, National University of Singapore, Singapore
- Yuxi Tao
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Zhanyao Huang
- Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
- Yue Li
- School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China
- Yingjia Chen
- Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
- Tengfei Song
- Department of Radiology, The Fifth Affiliated Hospital of Sun Yat-sen University, Zhuhai, China
- Xiangyuan Ma
- Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
- Yaqin Zhang
- Department of Biomedical Engineering, College of Engineering, Shantou University, Shantou, China
5.
Zhao Z, Du S, Xu Z, Yin Z, Huang X, Huang X, Wong C, Liang Y, Shen J, Wu J, Qu J, Zhang L, Cui Y, Wang Y, Wee L, Dekker A, Han C, Liu Z, Shi Z, Liang C. SwinHR: Hemodynamic-powered hierarchical vision transformer for breast tumor segmentation. Comput Biol Med 2024;169:107939. [PMID: 38194781; DOI: 10.1016/j.compbiomed.2024.107939]
Abstract
Accurate and automated segmentation of breast tumors in dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays a critical role in computer-aided diagnosis and treatment of breast cancer. However, this task is challenging due to random variation in tumor sizes, shapes, and appearances, and to blurred tumor boundaries caused by the inherent heterogeneity of breast cancer. Moreover, the presence of ill-posed artifacts in DCE-MRI further complicates tumor region annotation. To address these challenges, we propose a scheme (named SwinHR) integrating prior DCE-MRI knowledge and temporal-spatial information of breast tumors. The prior knowledge refers to hemodynamic information extracted from multiple DCE-MRI phases, which provides pharmacokinetic information describing metabolic changes of tumor cells over the scanning time. The Swin Transformer with hierarchical re-parameterization large-kernel architecture (H-RLK) can capture long-range dependencies within DCE-MRI while maintaining computational efficiency through a shifted-window self-attention mechanism. H-RLK extracts high-level features with a wider receptive field, enabling the model to capture contextual information at different levels of abstraction. Extensive experiments on large-scale datasets validate the effectiveness of the proposed SwinHR scheme, demonstrating its superiority over recent state-of-the-art segmentation methods, and a subgroup analysis split by MRI scanner, field strength, and tumor size verifies its generalization. The source code is released at https://github.com/GDPHMediaLab/SwinHR.
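The hemodynamic information this abstract refers to is typically derived from how per-voxel signal intensity changes across DCE-MRI phases. A heavily simplified sketch of two common kinetic features (wash-in and wash-out of relative enhancement), under illustrative assumptions and not the SwinHR implementation:

```python
import numpy as np

def kinetic_features(phases):
    """Per-voxel wash-in / wash-out from a stack of DCE-MRI phases.

    phases: array of shape (n_phases, H, W); phase 0 is the
    pre-contrast baseline. Returns two (H, W) maps.
    """
    phases = phases.astype(float)
    pre = phases[0]
    # Relative enhancement of each phase vs. the pre-contrast baseline
    enh = (phases - pre) / (pre + 1e-6)
    wash_in = enh[1]             # early-phase enhancement rate
    wash_out = enh[-1] - enh[1]  # negative values indicate wash-out
    return wash_in, wash_out

# One voxel: baseline 100, early peak 200, late phase 150 (wash-out)
phases = np.array([[[100.0]], [[200.0]], [[150.0]]])
wash_in, wash_out = kinetic_features(phases)
```

Maps like these can be stacked with the raw phases as extra input channels, which is one simple way to hand a segmentation network pharmacokinetic context.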
Affiliation(s)
- Zhihe Zhao
- School of Medicine, South China University of Technology, Guangzhou, 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Siyao Du
- Department of Radiology, The First Hospital of China Medical University, Shenyang, Liaoning Province, 110001, China
- Zeyan Xu
- Department of Radiology, The Third Affiliated Hospital of Kunming Medical University, Yunnan Cancer Hospital, Yunnan Cancer Center, Kunming, 650118, China
- Zhi Yin
- Department of Radiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, 030013, China
- Xiaomei Huang
- Department of Medical Imaging, Nanfang Hospital, Southern Medical University, Guangzhou, 510515, China
- Xin Huang
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Shantou University Medical College, Shantou, 515041, China
- Chinting Wong
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Yanting Liang
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China
- Jing Shen
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China
- Jianlin Wu
- Department of Radiology, Affiliated Zhongshan Hospital of Dalian University, Dalian, 116001, China
- Jinrong Qu
- Department of Radiology, The Affiliated Cancer Hospital of Zhengzhou University & Henan Cancer Hospital, Zhengzhou, 450008, China
- Lina Zhang
- Department of Radiology, The First Hospital of China Medical University, Shenyang, Liaoning Province, 110001, China
- Yanfen Cui
- Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Department of Radiology, Shanxi Province Cancer Hospital/Shanxi Hospital Affiliated to Cancer Hospital, Chinese Academy of Medical Sciences/Cancer Hospital Affiliated to Shanxi Medical University, Taiyuan, 030013, China
- Ying Wang
- Department of Medical Ultrasonics, The First Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510120, China
- Leonard Wee
- Clinical Data Science, Faculty of Health Medicine Life Sciences, Maastricht University, Maastricht, 6229 ET, The Netherlands; Department of Radiation Oncology (Maastro), GROW School of Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Andre Dekker
- Department of Radiation Oncology (Maastro), GROW School of Oncology and Reproduction, Maastricht University Medical Centre+, Maastricht, 6229 ET, The Netherlands
- Chu Han
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China.
- Zaiyi Liu
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
- Zhenwei Shi
- Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China; Medical Research Institute, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China.
- Changhong Liang
- School of Medicine, South China University of Technology, Guangzhou, 510006, China; Department of Radiology, Guangdong Provincial People's Hospital (Guangdong Academy of Medical Sciences), Southern Medical University, Guangzhou, 510080, China; Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application, Guangzhou, 510080, China.
6.
Ahamed MF, Hossain MM, Nahiduzzaman M, Islam MR, Islam MR, Ahsan M, Haider J. A review on brain tumor segmentation based on deep learning methods with federated learning techniques. Comput Med Imaging Graph 2023;110:102313. [PMID: 38011781; DOI: 10.1016/j.compmedimag.2023.102313]
Abstract
Brain tumors have become a severe medical complication in recent years due to their high fatality rate. Radiologists segment tumors manually, which is time-consuming, error-prone, and expensive. In recent years, automated segmentation based on deep learning has demonstrated promising results on computer vision problems such as image classification and segmentation. Brain tumor segmentation has recently become a prevalent task in medical imaging, using automated methods to determine tumor location, size, and shape. Many researchers have explored various machine and deep learning approaches to determine the most effective convolutional solution. In this review paper, we discuss the most effective segmentation techniques based on widely used, publicly available datasets. We also survey federated learning methodologies for enhancing global segmentation performance while preserving privacy. The review draws on more than 100 papers to generalize the most recent techniques in segmentation and multi-modality information. Finally, we concentrate on unsolved problems in brain tumor segmentation and on a client-based federated model training strategy. Based on this review, future researchers will understand the optimal path to solving these issues.
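The client-based federated training this review discusses typically follows the FedAvg pattern: each client trains locally on private data and a server aggregates the resulting weights, weighted by local dataset size. A minimal aggregation sketch (illustrative, not from the paper; `federated_average` is a hypothetical helper name):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg: dataset-size-weighted average of client model weights.

    client_weights: one list of np.ndarray layers per client.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        acc = np.zeros_like(client_weights[0][layer], dtype=float)
        for w, n in zip(client_weights, client_sizes):
            acc += (n / total) * w[layer]  # weight by local data share
        averaged.append(acc)
    return averaged

# Two clients with a one-layer "model"; sizes 100 and 300,
# so client 2 contributes 75% of the average.
w = federated_average([[np.array([1.0])], [np.array([5.0])]], [100, 300])
```

Only weights (or gradients) leave each site, never the images themselves, which is the privacy property the review highlights.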
Affiliation(s)
- Md Faysal Ahamed
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Munawar Hossain
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Nahiduzzaman
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Rabiul Islam
- Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Md Robiul Islam
- Department of Electrical & Computer Engineering, Rajshahi University of Engineering & Technology, Rajshahi 6204, Bangladesh
- Mominul Ahsan
- Department of Computer Science, University of York, Deramore Lane, Heslington, York YO10 5GH, UK
- Julfikar Haider
- Department of Engineering, Manchester Metropolitan University, Chester St, Manchester M1 5GD, UK.
7.
ME-CCNN: Multi-encoded images and a cascade convolutional neural network for breast tumor segmentation and recognition. Artif Intell Rev 2023. [DOI: 10.1007/s10462-023-10426-2]
8.
Iqbal A, Sharif M. BTS-ST: Swin transformer network for segmentation and classification of multimodality breast cancer images. Knowl Based Syst 2023. [DOI: 10.1016/j.knosys.2023.110393]
9.
Yang H, Chen Q, Fu K, Zhu L, Jin L, Qiu B, Ren Q, Du H, Lu Y. Boosting medical image segmentation via conditional-synergistic convolution and lesion decoupling. Comput Med Imaging Graph 2022;101:102110. [PMID: 36057184; DOI: 10.1016/j.compmedimag.2022.102110]
Abstract
Medical image segmentation is a critical step in pathology assessment and monitoring. Most existing methods utilize a deep convolutional neural network for various medical segmentation tasks, such as polyp segmentation, skin lesion segmentation, etc. However, due to the inherent difficulty of medical images and tremendous data variation, they usually perform poorly in some intractable cases. In this paper, we propose an input-specific network called the conditional-synergistic convolution and lesion decoupling network (CCLDNet) to solve these issues. First, in contrast to existing CNN-based methods with stationary convolutions, we propose conditional synergistic convolution (CSConv), which generates a specialist convolution kernel for each lesion. CSConv has dynamic modeling ability and can be leveraged as a basic block to construct other networks for a broad range of vision tasks. Second, we devise a lesion decoupling strategy (LDS) that decouples the original lesion segmentation map into two soft labels, a lesion center label and a lesion boundary label, to reduce segmentation difficulty. Besides, we use a transformer network as the backbone, further removing the fixed structure of the standard CNN and empowering the dynamic modeling capability of the whole framework. Our CCLDNet outperforms state-of-the-art approaches by a large margin on a variety of benchmarks, including polyp segmentation (89.22% Dice score on EndoScene) and skin lesion segmentation (91.15% Dice score on ISIC2018). Our code is available at https://github.com/QianChen98/CCLD-Net.
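The input-specific kernel idea behind CSConv can be illustrated, in a heavily simplified and hypothetical form that is not the authors' code, as conditional convolution: a routing function scores a bank of expert kernels from a summary of the input, and the softmax-mixed kernel is then applied to that input:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def conditional_conv1d(x, experts, routing_w):
    """Mix expert kernels with input-dependent weights, then convolve.

    x: 1-D input signal; experts: (n_experts, k) kernel bank;
    routing_w: (n_experts,) scores each expert from a global
    summary of x (here simply its mean) -- a toy routing function.
    """
    scores = routing_w * x.mean()                    # input-dependent expert scores
    alpha = softmax(scores)                          # routing weights, sum to 1
    kernel = (alpha[:, None] * experts).sum(axis=0)  # specialized per-input kernel
    return np.convolve(x, kernel, mode="same")

x = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
experts = np.array([[1.0, 0.0, 0.0],      # pass-through-like kernel
                    [1/3, 1/3, 1/3]])     # smoothing kernel
out = conditional_conv1d(x, experts, routing_w=np.array([1.0, -1.0]))
```

Because the routing weights depend on the input, different lesions effectively see different convolution kernels, which is the "specialist kernel per lesion" property the abstract describes; the real CSConv learns the routing and experts end to end in 2-D.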
Affiliation(s)
- Huakun Yang
- College of Information Science and Technology, University of Science and Technology of China, Hefei 230041, China
- Qian Chen
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Keren Fu
- College of Computer Science, National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University, Chengdu 610065, China
- Lei Zhu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Lujia Jin
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Bensheng Qiu
- College of Information Science and Technology, University of Science and Technology of China, Hefei 230041, China
- Qiushi Ren
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China; Department of Biomedical Engineering, College of Future Technology, Peking University, Beijing 100871, China
- Hongwei Du
- College of Information Science and Technology, University of Science and Technology of China, Hefei 230041, China.
- Yanye Lu
- Institute of Medical Technology, Peking University Health Science Center, Peking University, Beijing 100191, China.
10.
Liu Q, Wang J, Zuo M, Cao W, Zheng J, Zhao H, Xie J. NCRNet: Neighborhood Context Refinement Network for skin lesion segmentation. Comput Biol Med 2022;146:105545. [PMID: 35477048; DOI: 10.1016/j.compbiomed.2022.105545]
Abstract
Accurate skin lesion segmentation plays a fundamental role in computer-aided melanoma analysis. Recently, FCN-based methods have been proposed and have achieved promising results in lesion segmentation tasks. However, due to the variable shapes, differing scales, noise interference, and ambiguous boundaries of skin lesions, their capabilities for lesion localization and boundary delineation remain insufficient. To overcome these challenges, in this paper we propose a novel Neighborhood Context Refinement Network (NCRNet) that uses a coarse-to-fine strategy to achieve accurate skin lesion segmentation. The proposed NCRNet contains a shared encoder and two different but closely related decoders for locating skin lesions and refining lesion boundaries. Specifically, we first design the Parallel Attention Decoder (PAD), which effectively extracts and fuses local detail and global semantic information at multiple levels to locate skin lesions of different sizes and shapes. Then, based on the initial lesion location, we design the Neighborhood Context Refinement Decoder (NCRD), which leverages fine-grained multi-stage neighborhood context cues to continuously refine the lesion boundaries. Furthermore, the neighborhood-based deep supervision used in the NCRD makes the shared encoder pay more attention to lesion boundary areas and promotes convergence of the segmentation network. The public skin lesion segmentation dataset ISIC2017 is adopted to validate the effectiveness of the proposed NCRNet. Comprehensive experiments show that NCRNet achieves state-of-the-art performance against nine competitive methods, reaching 78.62%, 86.55%, and 94.01% in Jaccard, Dice, and Accuracy, respectively.
Affiliation(s)
- Qi Liu
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Jingkun Wang
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China
- Mengying Zuo
- Department of Cardiology, Children's Hospital of Soochow University, Suzhou, 215003, China
- Weiwei Cao
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China.
- Jian Zheng
- School of Biomedical Engineering (Suzhou), Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, 230026, China; Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Suzhou, 215163, China; Jinan Guoke Medical Technology Development Co., Ltd, Jinan, 250101, China
- Hui Zhao
- The Wenzhou Third Clinical Institute Affiliated to Wenzhou Medical University (Wenzhou People's Hospital), Wenzhou, 325000, China
- Jing Xie
- The Wenzhou Third Clinical Institute Affiliated to Wenzhou Medical University (Wenzhou People's Hospital), Wenzhou, 325000, China.
11.
Dewangan KK, Dewangan DK, Sahu SP, Janghel R. Breast cancer diagnosis in an early stage using novel deep learning with hybrid optimization technique. Multimed Tools Appl 2022;81:13935-13960. [PMID: 35233181; PMCID: PMC8874754; DOI: 10.1007/s11042-022-12385-2]
Abstract
Breast cancer is one of the primary causes of death among women worldwide, so the recognition and categorization of early-stage breast cancer are necessary to help patients receive suitable treatment. However, mammography provides very low sensitivity and efficiency for detecting breast cancer, whereas Magnetic Resonance Imaging (MRI) offers higher sensitivity than mammography for predicting breast cancer. In this research, a novel Back Propagation Boosting Recurrent Wienmed model (BPBRW) with a Hybrid Krill Herd African Buffalo Optimization (HKH-ABO) mechanism is developed for detecting breast cancer at an earlier stage using breast MRI images. Initially, the MRI breast images are fed to the system, and an innovative Wienmed filter is applied to preprocess the noisy MRI image content. The proposed BPBRW with the HKH-ABO mechanism then categorizes the breast tumor as benign or malignant. The model is simulated in Python, and its performance is evaluated against prevailing works. The comparative results show that the proposed model produces an improved accuracy of 99.6% with a 0.12% lower error rate.
Affiliation(s)
- Kranti Kumar Dewangan
- Department of Information Technology, National Institute of Technology, Raipur, Chhatisgarh 492010 India
- Deepak Kumar Dewangan
- Department of Information Technology, National Institute of Technology, Raipur, Chhatisgarh 492010 India
- Satya Prakash Sahu
- Department of Information Technology, National Institute of Technology, Raipur, Chhatisgarh 492010 India
- Rekhram Janghel
- Department of Information Technology, National Institute of Technology, Raipur, Chhatisgarh 492010 India