AlGhamdi M, Abdel-Mottaleb M. DV-DCNN: Dual-view deep convolutional neural network for matching detected masses in mammograms.
COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2021;207:106152. [PMID: 34058629] [DOI: 10.1016/j.cmpb.2021.106152]
[Received: 08/20/2020] [Accepted: 04/30/2021]
Abstract
BACKGROUND AND OBJECTIVE
Mammography is an X-ray imaging technique used for breast cancer screening. Each breast is usually screened at two different angles, generating two views known as mediolateral oblique (MLO) and craniocaudal (CC), which radiologists use to detect suspicious masses and diagnose breast cancer. Previous studies applied deep learning models to process each view separately and concatenated the features from the two views to detect and classify masses. However, direct concatenation is not enough to uncover the relationship between the masses that appear in the two views, because a mass can vary substantially in shape, size, and texture across views. The relationship between the two views should instead be established by matching correspondence between their extracted masses. This paper presents a dual-view deep convolutional neural network (DV-DCNN) model for matching masses detected in the two views by establishing correspondence between their extracted patches, which leads to more robust mass detection.
METHODS
Given a pair of patches as input, the presented model determines whether these patches represent the same mass or not. The network consists of two parts: a feature extraction part using tied dense blocks, and a neighborhood patch matching part with three consecutive layers, i.e., a cross-input neighborhood differences layer that relates the two patches, a patch summary features layer that summarizes the neighborhood differences, and an across-patch features layer that learns a higher-level representation across neighborhood differences.
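As an illustration only (not the authors' implementation), the following PyTorch sketch shows one way to compute cross-input neighborhood differences between feature maps of the two patches; the 5x5 neighborhood size, tensor shapes, and the flattened output layout are assumptions.

import torch
import torch.nn.functional as F

def cross_input_neighborhood_differences(f, g, k=5):
    """Hypothetical sketch: for each spatial position, subtract the k x k
    neighborhood of `g` from the replicated value of `f`, giving a tensor
    of shape (N, C * k * k, H, W)."""
    n, c, h, w = f.shape
    # Extract k x k neighborhoods of g around every position (zero padding).
    g_patches = F.unfold(g, kernel_size=k, padding=k // 2)   # (N, C*k*k, H*W)
    g_patches = g_patches.view(n, c, k * k, h, w)
    # Replicate each f value over the k x k grid and take the difference.
    diff = f.unsqueeze(2) - g_patches                         # (N, C, k*k, H, W)
    return diff.reshape(n, c * k * k, h, w)

# Usage with hypothetical feature maps extracted from the CC and MLO patches:
f = torch.randn(1, 25, 12, 12)
g = torch.randn(1, 25, 12, 12)
k_fg = cross_input_neighborhood_differences(f, g)  # f vs. neighborhoods of g
k_gf = cross_input_neighborhood_differences(g, f)  # g vs. neighborhoods of f
print(k_fg.shape)  # torch.Size([1, 625, 12, 12])

In the abstract's description, the patch summary and across-patch layers would then operate on such difference maps; their exact form is not specified here.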
RESULTS
To evaluate the model's performance in diverse cases, several experimental scenarios were followed for training and testing using two public datasets, CBIS-DDSM and INbreast. We also evaluated the contribution of our mass-matching model within a mass detection framework. Experiments show that DV-DCNN outperforms other related deep learning models and that detection results improve when our model is used.
CONCLUSIONS
Matching potential masses between the two views of the same breast leads to more robust mass detection. Experimental results demonstrate the efficacy of a dual-view deep learning model in matching masses, which helps increase the accuracy of mass detection and decrease the false positive rate.