1. Zhang J, Zhang L, Wang J, Wei X, Li J, Jiang X, Du D. SA-RPN: A Spatial Aware Region Proposal Network for Acne Detection. IEEE J Biomed Health Inform 2023;27:5439-5448. [PMID: 37578919] [DOI: 10.1109/jbhi.2023.3304727]
Abstract
Automated detection of skin lesions offers excellent potential for interpretative diagnosis and precise treatment of acne vulgaris. However, the blurry boundaries and small size of lesions make it challenging to detect acne lesions with traditional object detection methods. To better understand the acne detection task, we construct a new benchmark dataset named AcneSCU, consisting of 276 facial images with 31777 instance-level annotations from clinical dermatology. To the best of our knowledge, AcneSCU is the first acne dataset with high-resolution images, precise annotations, and fine-grained lesion categories, which enables comprehensive study of acne detection. More importantly, we propose a novel method called Spatial Aware Region Proposal Network (SA-RPN) to improve the proposal quality of two-stage detection methods. Specifically, representation learning for the classification and localization tasks is disentangled with a double-head component to improve proposals for hard samples. Then, the Normalized Wasserstein Distance of each proposal is predicted to improve the correlation between the classification scores and the proposals' intersection-over-unions (IoUs). SA-RPN can serve as a plug-and-play module to enhance standard two-stage detectors. Extensive experiments are conducted on both AcneSCU and the public dataset ACNE04, and the results show that the proposed method consistently outperforms state-of-the-art methods.
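The Normalized Wasserstein Distance mentioned in this abstract is commonly computed by modeling each bounding box as a 2D Gaussian, as in the tiny-object-detection literature. The sketch below shows that generic formulation; the normalizing constant c and the (cx, cy, w, h) box format are assumptions, not necessarily the exact variant used inside SA-RPN.

```python
import math

def normalized_wasserstein_distance(box_a, box_b, c=12.8):
    """NWD between two boxes (cx, cy, w, h), each modeled as a
    2D Gaussian N([cx, cy], diag(w^2/4, h^2/4)).

    c is a dataset-dependent normalizing constant (assumed here);
    the result lies in (0, 1], with 1 meaning identical boxes.
    """
    cxa, cya, wa, ha = box_a
    cxb, cyb, wb, hb = box_b
    # Squared 2-Wasserstein distance between the two Gaussians:
    # center offset plus the mismatch of the box half-sizes.
    w2_sq = ((cxa - cxb) ** 2 + (cya - cyb) ** 2
             + ((wa - wb) ** 2 + (ha - hb) ** 2) / 4.0)
    return math.exp(-math.sqrt(w2_sq) / c)

# Example: a proposal slightly offset from a small ground-truth lesion.
print(normalized_wasserstein_distance((50, 50, 12, 12), (53, 52, 10, 14)))
```

Unlike IoU, this score degrades smoothly with center offset even when two small boxes no longer overlap, which is why it correlates better with proposal quality for tiny lesions.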
2. Jakkaladiki SP, Maly F. An efficient transfer learning based cross model classification (TLBCM) technique for the prediction of breast cancer. PeerJ Comput Sci 2023;9:e1281. [PMID: 37346575] [PMCID: PMC10280457] [DOI: 10.7717/peerj-cs.1281]
Abstract
Breast cancer has been the most life-threatening disease among women in the last few decades. Its high mortality rate stems from limited awareness and too few medical facilities to detect the disease in its early stages. In the recent era, the situation has changed with the help of many technological advancements and medical equipment for observing breast cancer development. Machine learning techniques such as support vector machines (SVM), logistic regression, and random forests have been used to analyze images of cancer cells on different data sets. Although these techniques perform well on smaller data sets, their accuracy falls short on most of the data and is not reliable enough for real-time medical environments. The proposed research applies state-of-the-art deep learning techniques, a transfer learning based cross model classification (TLBCM) built on a convolutional neural network (CNN) with transfer learning, a residual network (ResNet), and a DenseNet, for efficient prediction of breast cancer with a minimized error rate. The convolutional neural network and transfer learning are the most prominent techniques for extracting the main features of the data set. Sensitive data is protected using a cyber-physical system (CPS) while the images are used virtually over the network. The CPS acts as a virtual connection between humans and networks and monitors the data while it is transferred over the network. ResNet propagates the data through many layers without compromising the minimal error rate. DenseNet mitigates the vanishing-gradient problem. The experiments are carried out on the Breast Cancer Wisconsin (Diagnostic) and Breast Cancer Histopathological (BreakHis) data sets. The convolutional neural network with transfer learning achieved a validation accuracy of 98.3%. The results of the proposed methods show the highest classification rate between benign and malignant data. The proposed method improves the efficiency and speed of classification, making it more convenient for discovering breast cancer in earlier stages than previously proposed methodologies.
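As a rough illustration of the transfer-learning ingredient described here, the sketch below fine-tunes an ImageNet-pretrained ResNet for binary benign/malignant classification with torchvision. The frozen backbone, layer choice, and hyperparameters are illustrative assumptions, not the authors' exact TLBCM configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative transfer-learning setup (assumed, not the authors' model):
# reuse ImageNet features, retrain only the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():
    param.requires_grad = False               # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)  # benign vs. malignant head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One fine-tuning step on a batch of breast-image tensors."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```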
3. Jiang J, Peng J, Hu C, Jian W, Wang X, Liu W. Breast cancer detection and classification in mammogram using a three-stage deep learning framework based on PAA algorithm. Artif Intell Med 2022;134:102419. [PMID: 36462904] [DOI: 10.1016/j.artmed.2022.102419]
Abstract
In recent years, deep learning has been used to develop automatic breast cancer detection and classification tools to assist doctors. In this paper, we proposed a three-stage deep learning framework based on an anchor-free object detection algorithm, the Probabilistic Anchor Assignment (PAA), to improve diagnostic performance by automatically detecting breast lesions (i.e., mass and calcification) and further classifying mammograms as benign or malignant. First, a single-stage PAA-based detector thoroughly searches for suspicious breast lesions in the mammogram. Second, we designed a two-branch ROI detector that further classifies and regresses these lesions to reduce the number of false positives. In this stage, we also introduced a threshold-adaptive post-processing algorithm that uses dense breast information. Finally, benign or malignant lesions are classified by an ROI classifier that combines local-ROI features and global-image features. In addition, considering the strong correlation between the task of the PAA detection head and the task of whole-mammogram classification, we added an image classifier that uses the same global-image features to perform image classification. The image classifier and the ROI classifier jointly guide training to enhance the feature extraction ability and further improve classification performance. We integrated three public mammogram datasets (CBIS-DDSM, INbreast, MIAS) to train and test our model and compared our framework with recent state-of-the-art methods. The results show that our proposed method can improve the diagnostic efficiency of radiologists by automatically detecting and classifying breast lesions and classifying benign and malignant mammograms.
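The fusion of local-ROI features with global-image features in the final stage can be pictured as a concatenation followed by a small classification head. This is only a hedged sketch with assumed feature dimensions and layer sizes, not the paper's actual classifier.

```python
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    """Sketch of a classifier that concatenates pooled local-ROI
    features with global-image features before a benign/malignant
    head. The feature sizes (256, 512) are illustrative assumptions."""

    def __init__(self, roi_dim=256, img_dim=512, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(roi_dim + img_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, n_classes),
        )

    def forward(self, roi_feat, img_feat):
        # roi_feat: (B, roi_dim) pooled local-ROI features
        # img_feat: (B, img_dim) global-image features
        return self.head(torch.cat([roi_feat, img_feat], dim=1))

logits = FusionClassifier()(torch.randn(4, 256), torch.randn(4, 512))
```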
Affiliation(s)
- Jiale Jiang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China
- Junchuan Peng
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China
- Chuting Hu
- Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen 518035, Guangdong, China
- Wenjing Jian
- Department of Breast and Thyroid Surgery, The Second People's Hospital of Shenzhen, Shenzhen 518035, Guangdong, China
- Xianming Wang
- Department of Breast and Thyroid Surgery, South China Hospital Affiliated to Shenzhen University, Shenzhen 518111, Guangdong, China.
- Weixiang Liu
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, Shenzhen University, Shenzhen 518060, Guangdong, China; National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen 518060, Guangdong, China; College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen 518060, Guangdong, China.
4. Liu W, Shu X, Zhang L, Li D, Lv Q. Deep Multiscale Multi-Instance Networks With Regional Scoring for Mammogram Classification. IEEE Trans Artif Intell 2022. [DOI: 10.1109/tai.2021.3136146]
Affiliation(s)
- Wenjie Liu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Xin Shu
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Lei Zhang
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Dong Li
- Machine Intelligence Laboratory, College of Computer Science, Sichuan University, Chengdu, China
- Qing Lv
- Department of Galactophore Surgery, West China Hospital, Sichuan University, Chengdu, China
5. Huang W, Shu X, Wang Z, Zhang L, Chen C, Xu J, Yi Z. Feature Pyramid Network With Level-Aware Attention for Meningioma Segmentation. IEEE Trans Emerg Top Comput Intell 2022. [DOI: 10.1109/tetci.2022.3146965]
6. Liu W, Zhang L, Dai G, Zhang X, Li G, Yi Z. Deep Neural Network with Structural Similarity Difference and Orientation-based Loss for Position Error Classification in the Radiotherapy of Graves' Ophthalmopathy Patients. IEEE J Biomed Health Inform 2021;26:2606-2614. [PMID: 34941537] [DOI: 10.1109/jbhi.2021.3137451]
Abstract
Identifying position errors for Graves' ophthalmopathy (GO) patients using electronic portal imaging device (EPID) transmission fluence maps is helpful in monitoring treatment. However, most existing models extract features only from dose difference maps computed from EPID images, which do not fully characterize the positional errors. In addition, position errors have a three-dimensional spatial nature that has never been explored in previous work. To address these problems, a deep neural network (DNN) model with a structural similarity difference and orientation-based loss is proposed in this paper, consisting of a feature extraction network and a feature enhancement network. To capture more information, three types of Structural SIMilarity (SSIM) sub-index maps are computed to enhance the luminance, contrast, and structural features of EPID images, respectively. These maps and the dose difference maps are fed into different networks to extract radiomic features. To acquire the spatial features of position errors, an orientation-based loss function is proposed for optimal training. It makes the data distribution more consistent with realistic 3D space by integrating the error deviations of the predicted values in the left-right, superior-inferior, and anterior-posterior directions. Experimental results on a constructed dataset demonstrate the effectiveness of the proposed model compared with related models and existing state-of-the-art methods.
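The three SSIM sub-indices named in this abstract have standard closed forms: luminance l = (2*mu_x*mu_y + c1)/(mu_x^2 + mu_y^2 + c1), contrast c = (2*sd_x*sd_y + c2)/(sd_x^2 + sd_y^2 + c2), and structure s = (cov_xy + c3)/(sd_x*sd_y + c3). A minimal NumPy sketch computing per-pixel sub-index maps with a uniform local window follows; the window size and stability constants are conventional choices, not values taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_subindex_maps(x, y, win=7, c1=0.01**2, c2=0.03**2):
    """Per-pixel luminance, contrast, and structure maps of SSIM
    for images x, y scaled to [0, 1]. The window size win and the
    constants c1, c2 (with c3 = c2 / 2) are conventional assumptions."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    # Local means, variances, and covariance over a win x win window.
    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    var_x = uniform_filter(x * x, win) - mu_x ** 2
    var_y = uniform_filter(y * y, win) - mu_y ** 2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y
    sd_x = np.sqrt(np.maximum(var_x, 0))
    sd_y = np.sqrt(np.maximum(var_y, 0))
    c3 = c2 / 2
    luminance = (2 * mu_x * mu_y + c1) / (mu_x ** 2 + mu_y ** 2 + c1)
    contrast = (2 * sd_x * sd_y + c2) / (sd_x ** 2 + sd_y ** 2 + c2)
    structure = (cov_xy + c3) / (sd_x * sd_y + c3)
    return luminance, contrast, structure
```

Feeding each map into its own branch, as the abstract describes, lets the network weight luminance, contrast, and structural discrepancies between EPID images independently rather than through a single fused SSIM score.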