1
Yang Y, Wang J. Research on breast cancer pathological image classification method based on wavelet transform and YOLOv8. J Xray Sci Technol 2024; 32:677-687. [PMID: 38189740] [DOI: 10.3233/xst-230296]
Abstract
Breast cancer has among the highest morbidity and mortality of all cancers worldwide and poses a serious threat to women's health. With the development of deep learning, computer-aided diagnosis has gained increasing recognition, and traditional hand-crafted feature extraction has gradually been replaced by feature extraction based on convolutional neural networks, enabling automatic recognition and classification of pathological images. This paper proposes a novel method based on deep learning and the wavelet transform to classify breast cancer pathological images. First, image flipping is used to expand the dataset, and a two-level wavelet decomposition and reconstruction technique is applied to sharpen and enhance the pathological images. Second, the processed dataset is split into training and test sets at ratios of 8:2 and 7:3, and the YOLOv8 network is used for the eight-class classification of breast cancer pathological images. Finally, the classification accuracy of the proposed method is compared with that of YOLOv8 on the original BreaKHis dataset; the algorithm improves classification accuracy at all image magnifications, demonstrating the effectiveness of combining two-level wavelet decomposition and reconstruction with the YOLOv8 network.
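The two-level wavelet decomposition-and-reconstruction sharpening step described in this abstract can be sketched as follows. This is a minimal illustration using a Haar wavelet in plain NumPy, not the authors' implementation; the function names and the detail-band `gain` value are assumptions (a gain of 1.0 reproduces the input exactly, a gain above 1.0 boosts edges and texture).

```python
import numpy as np

def haar_fwd(x):
    # One Haar analysis level: average sub-band (LL) plus three detail sub-bands.
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, (h, v, d)

def haar_inv(a, bands):
    # Exact inverse of haar_fwd.
    h, v, d = bands
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a - h + v - d
    x[1::2, 0::2] = a + h - v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def wavelet_sharpen(img, gain=1.5, levels=2):
    # Decompose `levels` times, amplify the detail sub-bands, reconstruct.
    a, bands = haar_fwd(img)
    if levels > 1:
        a = wavelet_sharpen(a, gain, levels - 1)
    return haar_inv(a, tuple(gain * b for b in bands))

img = np.random.default_rng(0).uniform(0, 255, (64, 64))
sharp = wavelet_sharpen(img, gain=1.5)
```

With `gain=1.0` the forward/inverse pair reconstructs the image perfectly, which is a useful sanity check before boosting the detail bands.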
Affiliation(s)
- Yunfeng Yang
- Department of Mathematics and Statistics, Northeast Petroleum University, Daqing, China
- Jiaqi Wang
- Department of Mathematics and Statistics, Northeast Petroleum University, Daqing, China
2
Mirikharaji Z, Abhishek K, Bissoto A, Barata C, Avila S, Valle E, Celebi ME, Hamarneh G. A survey on deep learning for skin lesion segmentation. Med Image Anal 2023; 88:102863. [PMID: 37343323] [DOI: 10.1016/j.media.2023.102863]
Abstract
Skin cancer is a major public health problem that could benefit from computer-aided diagnosis to reduce the burden of this common disease. Skin lesion segmentation from images is an important step toward achieving this goal. However, the presence of natural and artificial artifacts (e.g., hair and air bubbles), intrinsic factors (e.g., lesion shape and contrast), and variations in image acquisition conditions make skin lesion segmentation a challenging task. Recently, various researchers have explored the applicability of deep learning models to skin lesion segmentation. In this survey, we cross-examine 177 research papers that deal with deep learning-based segmentation of skin lesions. We analyze these works along several dimensions, including input data (datasets, preprocessing, and synthetic data generation), model design (architecture, modules, and losses), and evaluation aspects (data annotation requirements and segmentation performance). We discuss these dimensions both from the viewpoint of select seminal works, and from a systematic viewpoint, examining how those choices have influenced current trends, and how their limitations should be addressed. To facilitate comparisons, we summarize all examined works in a comprehensive table as well as an interactive table available online.
Affiliation(s)
- Zahra Mirikharaji
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Kumar Abhishek
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
- Alceu Bissoto
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
- Catarina Barata
- Institute for Systems and Robotics, Instituto Superior Técnico, Avenida Rovisco Pais, Lisbon 1049-001, Portugal
- Sandra Avila
- RECOD.ai Lab, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, Campinas 13083-852, Brazil
- Eduardo Valle
- RECOD.ai Lab, School of Electrical and Computing Engineering, University of Campinas, Av. Albert Einstein 400, Campinas 13083-952, Brazil
- M Emre Celebi
- Department of Computer Science and Engineering, University of Central Arkansas, 201 Donaghey Ave., Conway, AR 72035, USA
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, Burnaby V5A 1S6, Canada
3
Jin S, Yu S, Peng J, Wang H, Zhao Y. A novel medical image segmentation approach by using multi-branch segmentation network based on local and global information synchronous learning. Sci Rep 2023; 13:6762. [PMID: 37185374] [PMCID: PMC10127969] [DOI: 10.1038/s41598-023-33357-y]
Abstract
In recent years, several approaches to medical image segmentation have emerged, such as U-shaped architectures, transformer-based networks, and multi-scale feature learning. However, these networks often neglect parameter count and real-time performance, and they segment boundary regions poorly. The main reasons are deep encoders, large channel counts, and excessive attention to local rather than global information, which is crucial for segmentation accuracy. We therefore propose MBSNet, a novel multi-branch medical image segmentation network. We first design two branches, using a parallel residual mixer (PRM) module and a dilated convolution block, to capture the local and global information of the image. At the same time, a SE-Block and a new spatial attention module enhance the output features. Because the two branches produce different output features, we adopt a cross-fusion method to effectively combine and complement the features across layers. MBSNet was tested on five datasets: ISIC2018, Kvasir, BUSI, COVID-19, and LGG. The combined results show that MBSNet is lighter, faster, and more accurate. Specifically, for a [Formula: see text] input, MBSNet requires 10.68G FLOPs and achieves an F1-Score of [Formula: see text] on the Kvasir test set, well above the [Formula: see text] of UNet++ at 216.55G FLOPs. We also use TOPSIS, a multi-criteria decision-making method based on F1-Score, IoU, and Geometric Mean (G-mean), for an overall analysis; the proposed MBSNet outperforms the other competing methods. Code is available at https://github.com/YuLionel/MBSNet .
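The TOPSIS ranking mentioned in this abstract can be illustrated with a short sketch. This is the generic textbook TOPSIS procedure applied to benefit criteria (here F1-Score, IoU, G-mean), not the authors' code, and the example scores are invented for illustration.

```python
import numpy as np

def topsis(scores, weights=None):
    """Rank alternatives by relative closeness to the ideal solution.

    scores: (n_alternatives, n_criteria), all benefit criteria (higher is
    better). Returns closeness values in [0, 1]; higher ranks better.
    """
    X = np.asarray(scores, dtype=float)
    w = np.ones(X.shape[1]) / X.shape[1] if weights is None else np.asarray(weights, dtype=float)
    V = X / np.linalg.norm(X, axis=0) * w          # weighted, vector-normalised matrix
    best, worst = V.max(axis=0), V.min(axis=0)     # ideal and anti-ideal points
    d_best = np.linalg.norm(V - best, axis=1)      # distance to ideal
    d_worst = np.linalg.norm(V - worst, axis=1)    # distance to anti-ideal
    return d_worst / (d_best + d_worst)

# Hypothetical F1 / IoU / G-mean scores for three segmentation models:
scores = [[0.92, 0.85, 0.90],   # model A (best on every criterion)
          [0.80, 0.70, 0.75],   # model B (worst on every criterion)
          [0.85, 0.78, 0.82]]   # model C
closeness = topsis(scores)
```

A model that dominates on every criterion coincides with the ideal point and gets closeness 1; one that is worst everywhere gets 0, which makes the ranking easy to sanity-check.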
Affiliation(s)
- Shangzhu Jin
- Information Office, Chongqing University of Science and Technology, Chongqing, 401331, China
- Sheng Yu
- College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China
- Jun Peng
- College of Mathematics, Physics and Data Science, Chongqing University of Science and Technology, Chongqing, 401331, China
- Hongyi Wang
- College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China
- Yan Zhao
- College of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing, 401331, China
4
Han Q, Wang H, Hou M, Weng T, Pei Y, Li Z, Chen G, Tian Y, Qiu Z. HWA-SegNet: Multi-channel skin lesion image segmentation network with hierarchical analysis and weight adjustment. Comput Biol Med 2023; 152:106343. [PMID: 36481758] [DOI: 10.1016/j.compbiomed.2022.106343]
Abstract
Convolutional neural networks (CNNs) show excellent performance in accurate medical image segmentation. However, small sample sizes with insufficient feature expression, irregularly shaped segmentation targets, and inaccurate judgment of edge texture remain persistent problems in skin lesion image segmentation. To address these problems, this paper introduces the discrete Fourier transform (DFT) to enrich the input data and proposes a CNN architecture, HWA-SegNet. First, an improved DFT is used to analyze the features of skin lesion images and extend each image to multi-channel data. Second, a hierarchical dilated analysis module is constructed to understand the semantic features across channels. Finally, the preliminary predictions are fine-tuned by a weight adjustment structure with fully connected layers to obtain more accurate predictions. A total of 520 skin lesion images from the ISIC 2018 dataset were used for testing. Extensive experimental results show that HWA-SegNet improves the average Dice Similarity Coefficient from 88.30% to 91.88%, Sensitivity from 89.29% to 92.99%, and Jaccard similarity index from 81.15% to 85.90% compared with U-Net. Compared with the state-of-the-art method, the Jaccard similarity index and Specificity are close, but the Dice Similarity Coefficient is higher. These results show that the data augmentation strategy based on the improved DFT and HWA-SegNet is effective for skin lesion image segmentation.
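The DFT-based multi-channel extension described above can be sketched as follows. This is one plausible frequency-band split in plain NumPy, assuming a circular low-pass mask in the shifted spectrum; it is not the authors' "improved DFT", and the `radius` parameter is an assumption. By linearity of the Fourier transform, the low- and high-frequency reconstructions sum back to the original image.

```python
import numpy as np

def dft_channels(img, radius=8):
    # Split the 2-D spectrum into low- and high-frequency bands and stack the
    # band-limited reconstructions with the original image as extra channels.
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    low_mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(F * low_mask)).real    # smooth structure
    high = np.fft.ifft2(np.fft.ifftshift(F * ~low_mask)).real  # edges / texture
    return np.stack([img, low, high], axis=-1)

img = np.random.default_rng(0).uniform(0.0, 1.0, (32, 32))
channels = dft_channels(img)
```

The stacked array can then be fed to a multi-channel CNN in place of the single grayscale input.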
Affiliation(s)
- Qi Han
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Hongyi Wang
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Mingyang Hou
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Tengfei Weng
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Yangjun Pei
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Zhong Li
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Guorong Chen
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Yuan Tian
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
- Zicheng Qiu
- School of Intelligent Technology and Engineering, Chongqing University of Science and Technology, Chongqing 401331, China
5
Peng T, Wu Y, Qin J, Wu QJ, Cai J. H-ProSeg: Hybrid ultrasound prostate segmentation based on explainability-guided mathematical model. Comput Methods Programs Biomed 2022; 219:106752. [PMID: 35338887] [DOI: 10.1016/j.cmpb.2022.106752]
Abstract
BACKGROUND AND OBJECTIVE Accurate and robust prostate segmentation in transrectal ultrasound (TRUS) images is of great interest for image-guided prostate interventions and prostate cancer diagnosis. However, it remains a challenging task for various reasons, including a missing or ambiguous boundary between the prostate and surrounding tissues, the presence of shadow artifacts, intra-prostate intensity heterogeneity, and anatomical variations. METHODS Here, we present a hybrid method for prostate segmentation (H-ProSeg) in TRUS images, using a small number of radiologist-defined seed points as priors. The method consists of three subnetworks. The first uses an improved principal-curve-based model to obtain data sequences consisting of the seed points and their corresponding projection indices. The second uses an improved differential-evolution-based artificial neural network for training to decrease the model error. The third uses the parameters of the artificial neural network to derive a smooth mathematical description of the prostate contour. The performance of H-ProSeg was assessed in 55 brachytherapy patients using the Dice similarity coefficient (DSC), Jaccard similarity coefficient (Ω), and accuracy (ACC). RESULTS H-ProSeg achieved excellent segmentation accuracy, with DSC, Ω, and ACC values of 95.8%, 94.3%, and 95.4%, respectively. Under Gaussian noise (standard deviation σ = 50), the DSC, Ω, and ACC values remained as high as 93.3%, 91.9%, and 93.0%, respectively; as σ increased from 10 to 50, they fluctuated by at most approximately 2.5%, demonstrating the excellent robustness of the method. CONCLUSIONS We present a hybrid method for accurate and robust prostate ultrasound image segmentation that outperforms current state-of-the-art techniques. Knowledge of the precise prostate boundary is crucial for preserving at-risk structures. The proposed models have the potential to improve prostate cancer diagnosis and therapeutic outcomes.
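The differential-evolution training step named in the METHODS section can be illustrated with a generic DE/rand/1/bin optimizer sketch. In H-ProSeg the objective would be the neural network's model error; here a simple sphere function stands in, and the population size, mutation factor F, and crossover rate CR are assumed values, not the authors' settings.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, gens=100, F=0.6, CR=0.9, seed=0):
    # Classic DE/rand/1/bin: mutate with a scaled difference of two random
    # members, binomially cross with the current member, keep the better one.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    X = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(gens):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = X[rng.choice(others, 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True        # guarantee one mutant gene
            trial = np.where(cross, mutant, X[i])
            f_trial = f(trial)
            if f_trial < fit[i]:                   # greedy selection
                X[i], fit[i] = trial, f_trial
    best = int(fit.argmin())
    return X[best], float(fit[best])

# Stand-in objective: in H-ProSeg this would be the ANN's training error.
x_best, f_best = differential_evolution(lambda x: float(np.sum(x ** 2)),
                                        bounds=[(-5, 5)] * 3)
```

The greedy selection makes the population's best fitness monotonically non-increasing, which is why DE is a popular derivative-free choice for tuning small network models.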
Affiliation(s)
- Tao Peng
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
- Yiyun Wu
- Department of Medical Technology, Jiangsu Province Hospital, Nanjing, Jiangsu, China
- Jing Qin
- Department of Nursing, The Hong Kong Polytechnic University, Hong Kong, China
- Qingrong Jackie Wu
- Department of Radiation Oncology, Duke University Medical Center, Durham, NC, USA
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China