1. Abbassi M, Besbes B, Elkadri N, Hachicha S, Boudiche S, Daly F, Ben Halima M, Jebberi Z, Ouali S, Mghaieth F. Characterization of epicardial adipose tissue thickness and structure by ultrasound radiomics in acute and chronic coronary patients. Int J Cardiovasc Imaging 2025; 41:477-488. [PMID: 39915372] [DOI: 10.1007/s10554-025-03329-6]
Abstract
We hypothesized that epicardial adipose tissue (EAT) structure differs between patients with coronary disease and healthy individuals, and that EAT may undergo changes during an acute coronary syndrome (ACS). This study aimed to investigate EAT thickness (EATt) and structure using ultrasound radiomics in patients with ACS, patients with chronic coronary syndrome (CCS), and controls, and to compare the findings between the three groups. This prospective monocentric comparative cohort study included three groups: ACS patients, CCS patients, and asymptomatic controls. EATt was assessed using transthoracic echocardiography. Geometrical features (such as mean gray value and raw integrated density) and texture features (such as angular second moment, contrast, and correlation) were computed from grayscale Tagged Image File Format biplane images using ImageJ software. EATt did not differ significantly between the ACS group (8.14 ± 3.17 mm) and the control group (6.92 ± 2.50 mm), whereas CCS patients (9.96 ± 3.19 mm) had significantly thicker EAT than both the ACS group (p = 0.025) and the control group (p < 0.001). Radiomics analysis revealed differences in geometrical parameters with discriminatory capability both between the ACS group and controls and between the CCS group and controls. A multivariate analysis comparing ACS and CCS patients revealed that differences in EAT characteristics were significant only in patients with a body mass index below 26.25 kg/m². In this subgroup, patients older than 68 years exhibited a higher modal gray value (p = 0.016), whereas those younger than 68 years had a lower minimum gray value (p = 0.05). Radiomic analysis thus shows potential for developing imaging biomarkers for early diagnosis and for monitoring coronary artery disease progression.
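The geometric and texture measures named in this abstract can be reproduced with standard open-source tools. Below is a minimal sketch, assuming a grayscale EAT region of interest exported as an 8-bit image, that computes mean gray value, raw integrated density, and the GLCM-based angular second moment, contrast, and correlation with scikit-image (the study itself used ImageJ, so exact values may differ).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def eat_features(roi: np.ndarray) -> dict:
    """Geometric and GLCM texture features for an 8-bit grayscale ROI."""
    # Geometric (first-order) features
    mean_gray = roi.mean()
    raw_integrated_density = roi.sum()  # sum of all pixel values

    # Texture features from a grey-level co-occurrence matrix
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    return {
        "mean_gray": mean_gray,
        "raw_integrated_density": raw_integrated_density,
        "angular_second_moment": graycoprops(glcm, "ASM").mean(),
        "contrast": graycoprops(glcm, "contrast").mean(),
        "correlation": graycoprops(glcm, "correlation").mean(),
    }

roi = (np.random.rand(64, 64) * 255).astype(np.uint8)  # placeholder ROI
print(eat_features(roi))
```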
Affiliation(s)
- Manel Abbassi
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- University of Medicine, Tunis, Tunisia
- Bouthaina Besbes
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- Salmen Hachicha
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- Selim Boudiche
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- Foued Daly
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- University of Medicine, Tunis, Tunisia
- Manel Ben Halima
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- University of Medicine, Tunis, Tunisia
- Zeynab Jebberi
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- University of Medicine, Tunis, Tunisia
- Sana Ouali
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- University of Medicine, Tunis, Tunisia
- Fathia Mghaieth
- Department of Cardiology, The Rabta Teaching Hospital, University of Medicine, Tunis, Tunisia
- University of Medicine, Tunis, Tunisia
2. de Izaguirre F, Del Castillo M, Ferreira ED, Suárez H. Gait patterns in unstable older patients related with vestibular hypofunction. Preliminary results in assessment with time-frequency analysis. Acta Otolaryngol 2025:1-6. [PMID: 39840938] [DOI: 10.1080/00016489.2025.2450221]
Abstract
BACKGROUND Gait instability and falls significantly impact quality of life, morbidity, and mortality in elderly populations. Early diagnosis of gait disorders is one of the most effective approaches to minimizing severe injuries. OBJECTIVE To find a gait instability pattern in older adults through an image representation of data collected by a single sensor. METHODS A sample of 13 older adults (71-85 years old) with instability due to vestibular hypofunction was compared to a sample of 19 adults (21-75 years old) without instability and with normal vestibular function. Image representations of the gait signals acquired on a specific walk path were generated using a continuous wavelet transform and analyzed as textures, using grey-level co-occurrence matrix metrics as features. A support vector machine (SVM) algorithm was used to discriminate subjects. RESULTS Initial results show good classification performance. According to the analysis of extracted features, most of the information relevant to instability is concentrated in the medio-lateral acceleration (X-axis accelerometer) and the frontal-plane angular rotation (Z-axis gyroscope). Performing ten-fold cross-validation on the first ten seconds of the sample dataset, the algorithm achieves an F1 score of 92.3%, corresponding to 12 true positives, 1 false positive, and 1 false negative. DISCUSSION This preliminary report suggests that the method has potential use for assessing gait disorders in controlled and non-controlled environments, and that deep learning methods could be explored given a larger population and more data samples.
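As a rough illustration of the pipeline this abstract describes, the sketch below (with an invented accelerometer signal standing in for the real gait recordings) turns a 1-D signal into a scalogram image with PyWavelets, extracts GLCM texture metrics from it, and feeds them to an SVM; the wavelet, scales, and quantization level are assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def scalogram_texture(signal, fs=100, n_levels=32):
    """CWT scalogram of a 1-D gait signal -> GLCM texture feature vector."""
    coeffs, _ = pywt.cwt(signal, np.arange(1, 64), "morl",
                         sampling_period=1 / fs)
    img = np.abs(coeffs)
    # Quantize the scalogram to n_levels grey tones for the GLCM
    img = np.digitize(img, np.linspace(img.min(), img.max(), n_levels)) - 1
    glcm = graycomatrix(img.astype(np.uint8), [1], [0, np.pi / 2],
                        levels=n_levels, symmetric=True, normed=True)
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Toy stand-in: 10 s of medio-lateral acceleration per subject at 100 Hz
rng = np.random.default_rng(0)
X = np.array([scalogram_texture(rng.standard_normal(1000)) for _ in range(32)])
y = np.tile([0, 1], 16)  # 0 = stable, 1 = unstable (placeholder labels)
print(cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean())
```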
Affiliation(s)
- Francisco de Izaguirre
- Electrical Engineering Department, School of Engineering, Universidad de la República, Montevideo, Uruguay
- Mariana Del Castillo
- Electrical Engineering Department, School of Engineering, Universidad de la República, Montevideo, Uruguay
- Enrique D Ferreira
- Engineering Department, Universidad Católica del Uruguay, Montevideo, Uruguay
- Hamlet Suárez
- Laboratory of Otoneurology, British Hospital, Montevideo, Uruguay
3. Mansour IR, Miksys N, Beaulieu L, Vigneault É, Thomson RM. Haralick texture feature analysis for Monte Carlo dose distributions of permanent implant prostate brachytherapy. Brachytherapy 2025; 24:122-133. [PMID: 39532616] [DOI: 10.1016/j.brachy.2024.08.256]
Abstract
PURPOSE To demonstrate quantitative characterization of 3D patient-specific absorbed dose distributions using Haralick texture analysis, and to interpret the measures in terms of underlying physics and radiation dosimetry. METHODS Retrospective analysis is performed for 137 patients who underwent permanent implant prostate brachytherapy using two simulation conditions: "TG186" (realistic tissues including 0-3.8% intraprostatic calcifications; interseed attenuation) and "TG43" (water model; no interseed attenuation). Five Haralick features (homogeneity, contrast, correlation, local homogeneity, entropy) are calculated using the original Haralick formalism and a modified approach designed to reduce grey-level quantization sensitivity. Trends in textural features are compared to clinical dosimetric measures (D90, the minimum absorbed dose to the hottest 90% of a volume) and to the percentage of intraprostatic calcifications by volume (%IC) in the patient target volume. RESULTS Both original and modified measures quantify the spatial differences in absorbed dose distributions. Strong correlations with %IC are observed for the differences between textural measures calculated under TG43 and TG186 conditions, for all measures. For example, differences in contrast and correlation respectively increase and decrease for patients with higher %IC, reflecting the large differences across adjacent voxels (higher absorbed dose in voxels with calcification) when calculated under TG186 conditions. Conversely, the D90 metric is relatively weakly correlated with textural measures, as it generally does not characterize the spatial distribution of absorbed dose. CONCLUSION Patient-specific 3D dose distributions may be quantified using Haralick analysis, and trends may be interpreted in terms of fundamental physics. Promising future directions include investigations of novel treatment modalities and clinical outcomes.
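A minimal sketch of this kind of analysis, assuming a 3D dose array quantized to integer grey levels and using the mahotas implementation of Haralick features (the paper uses its own original and modified implementations, so values will differ):

```python
import numpy as np
import mahotas

def haralick_3d(dose: np.ndarray, n_levels: int = 32) -> dict:
    """Mean Haralick features over the 13 directions of a 3D dose volume."""
    # Quantize absorbed dose into integer grey levels
    bins = np.linspace(dose.min(), dose.max(), n_levels)
    vol = (np.digitize(dose, bins) - 1).astype(np.uint8)
    feats = mahotas.features.haralick(vol).mean(axis=0)
    # Haralick's original ordering: feature 1 (ASM) is his "homogeneity"
    return {"homogeneity_ASM": feats[0], "contrast": feats[1],
            "correlation": feats[2], "local_homogeneity": feats[4],
            "entropy": feats[8]}

dose = np.random.gamma(2.0, 50.0, size=(32, 32, 32))  # toy dose volume (cGy)
print(haralick_3d(dose))
```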
Affiliation(s)
- Iymad R Mansour
- Carleton Laboratory for Radiotherapy Physics, Physics Department, Carleton University, Ottawa, ON, Canada
- Luc Beaulieu
- Service de Physique Médicale et de Radioprotection, Centre Intégré de Cancérologie, CHU de Québec-Université Laval et Centre de recherche du CHU de Québec, Québec, QC, Canada; Département de Physique, de Génie Physique et d'Optique et Centre de Recherche sur le Cancer, Université Laval, Québec, QC, Canada
- Éric Vigneault
- Centre de recherche sur le cancer, Département de Radio-Oncologie et Centre de recherche du CHU de Québec, Université Laval, Québec, QC, Canada
- Rowan M Thomson
- Carleton Laboratory for Radiotherapy Physics, Physics Department, Carleton University, Ottawa, ON, Canada
4. Zhao X, Yan Y, Xie W, Qin Z, Zhao L, Liu C, Zhang S, Liu J, Ma L. Radiomics for differential diagnosis of Bosniak II-IV renal masses via CT imaging. BMC Cancer 2024; 24:1508. [PMID: 39643905] [PMCID: PMC11622457] [DOI: 10.1186/s12885-024-13283-6]
Abstract
RATIONALE AND OBJECTIVES The management of complex renal cysts is guided by the Bosniak classification system, which may be inadequate for stratifying patients' risk to determine the appropriate intervention. Radiomics models based on CT imaging may provide additional useful information. MATERIALS AND METHODS A total of 322 patients with Bosniak II-IV cysts were included in the study from January 2010 to December 2019. Contrast-enhanced CT scans were performed on all patients. ITK-SNAP was used for segmentation, and the PyRadiomics 3.0.1 package was used for feature extraction. Radiomics features were screened via the least absolute shrinkage and selection operator (LASSO) regression method. After feature selection, a logistic regression (LR) model, a support vector machine (SVM) model, and a random forest (RF) model were constructed. RESULTS In the present study, 217 benign renal cysts (67.4%) and 105 cystic renal cell carcinomas (32.6%) were identified. According to the Bosniak classification, the sample included 179 (55.6%) Bosniak II cysts, 38 (11.8%) Bosniak IIF cysts, 44 (13.7%) Bosniak III cysts, and 61 (18.9%) Bosniak IV cysts. A total of 1334 radiomics features were extracted from the unenhanced and cortical-phase CT scans. After LASSO regression, all the models (LR, SVM, and RF) showed satisfactory discrimination and reliability on both unenhanced and cortical-phase scans (AUC > 0.950). In the Bosniak IIF-III subgroup analysis, the diagnostic accuracy of the LR model was very low for both the unenhanced and cortical-phase scans. In contrast, the SVM and RF models showed excellent and stable performance in classifying Bosniak IIF-III cysts, with AUCs all > 0.85 and a maximum of 0.941. The sensitivity, specificity, accuracy, and AUC of the RF model were 0.889, 0.913, 0.902, and 0.941, respectively. CONCLUSION Our data indicate that radiomics models can effectively distinguish cystic renal cell carcinoma (cRCC) from complex renal cysts (Bosniak II-IV). Radiomics models may retain high diagnostic accuracy even for Bosniak IIF-III cysts that are clinically difficult to distinguish. However, external validation of these findings is still needed.
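A compact scikit-learn sketch of the same selection-then-classification pattern: LASSO picks a sparse subset of radiomics features, after which logistic regression, SVM, and random forest classifiers are fit and compared by AUC. The feature matrix and labels are synthetic placeholders, and the hyperparameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Placeholder for 1334 radiomics features from 322 cysts (~67% benign)
X, y = make_classification(n_samples=322, n_features=1334, n_informative=20,
                           weights=[0.674], random_state=0)

lasso_select = SelectFromModel(LassoCV(cv=5, random_state=0))
for name, clf in [("LR", LogisticRegression(max_iter=5000)),
                  ("SVM", SVC(probability=True)),
                  ("RF", RandomForestClassifier(n_estimators=500, random_state=0))]:
    pipe = make_pipeline(StandardScaler(), lasso_select, clf)
    auc = cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```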
Affiliation(s)
- Xun Zhao
- Department of Urology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, P.R. China
- Ye Yan
- Department of Urology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, P.R. China
- Wanfang Xie
- School of Engineering Medicine, Beihang University, Beijing, 100191, P.R. China
- Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology of the People's Republic of China, Beijing, P.R. China
- Zijian Qin
- Department of Urology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, P.R. China
- Litao Zhao
- School of Engineering Medicine, Beihang University, Beijing, 100191, P.R. China
- Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology of the People's Republic of China, Beijing, P.R. China
- Cheng Liu
- Department of Urology, Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Hongkou District, Shanghai, P.R. China
- Shudong Zhang
- Department of Urology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, P.R. China
- Jiangang Liu
- School of Engineering Medicine, Beihang University, Beijing, 100191, P.R. China
- Key Laboratory of Big Data-Based Precision Medicine, Beihang University, Ministry of Industry and Information Technology of the People's Republic of China, Beijing, P.R. China
- Beijing Engineering Research Center of Cardiovascular Wisdom Diagnosis and Treatment, Beijing, P.R. China
- Lulin Ma
- Department of Urology, Peking University Third Hospital, 49 Huayuan North Road, Haidian District, Beijing, 100191, P.R. China
5. Karwat P, Piotrzkowska-Wroblewska H, Klimonda Z, Dobruch-Sobczak KS, Litniewski J. Monitoring Breast Cancer Response to Neoadjuvant Chemotherapy Using Probability Maps Derived from Quantitative Ultrasound Parametric Images. IEEE Trans Biomed Eng 2024; 71:2620-2629. [PMID: 38557626] [DOI: 10.1109/tbme.2024.3383920]
Abstract
OBJECTIVE Neoadjuvant chemotherapy (NAC) is widely used in the treatment of breast cancer. However, to date, there are no fully reliable, non-invasive methods for monitoring NAC. In this article, we propose a new method for classifying NAC-responsive and unresponsive tumors using quantitative ultrasound. METHODS The study used ultrasound data collected from breast tumors treated with NAC. The proposed method is based on the hypothesis that areas that characterize the effect of therapy particularly well can be found. For this purpose, parametric images of texture features calculated from tumor images were converted into NAC response probability maps, and areas with a probability above 0.5 were used for classification. RESULTS The results obtained after the third cycle of NAC show that tumor classification using the traditional method (area under the ROC curve, AUC = 0.81-0.88) can be significantly improved with the proposed approach (AUC = 0.84-0.94). This improvement is achieved over a wide range of cutoff values (0.2-0.7), and the probability maps obtained from different quantitative parameters correlate well. CONCLUSION The results suggest that there are tumor areas that are particularly well suited to assessing response to NAC. SIGNIFICANCE The proposed approach to monitoring the effects of NAC not only leads to a better classification of responses, but may also contribute to a better understanding of the microstructure of neoplastic tumors observed in ultrasound examinations.
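One way to read the probability-map idea, sketched below under heavy assumptions (not the authors' exact construction): a logistic model trained on texture values from responders and non-responders converts each pixel of a parametric image into a response probability, and only pixels above the 0.5 cutoff contribute to the tumor-level statistic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training data: texture-parameter values labelled by NAC response (synthetic)
x_train = np.r_[rng.normal(0.8, 0.3, 500), rng.normal(1.4, 0.3, 500)][:, None]
y_train = np.r_[np.zeros(500), np.ones(500)]  # 0 = unresponsive, 1 = responsive
model = LogisticRegression().fit(x_train, y_train)

# Convert a parametric image of a tumor into a response-probability map
param_img = rng.normal(1.1, 0.4, size=(64, 64))
prob_map = model.predict_proba(param_img.reshape(-1, 1))[:, 1].reshape(64, 64)

# Use only areas with probability above the 0.5 cutoff for classification
mask = prob_map > 0.5
tumor_score = prob_map[mask].mean() if mask.any() else 0.0
print(f"{mask.mean():.0%} of pixels selected, mean probability {tumor_score:.2f}")
```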
6. Gómez-Flores W, Gregorio-Calas MJ, Coelho de Albuquerque Pereira W. BUS-BRA: A breast ultrasound dataset for assessing computer-aided diagnosis systems. Med Phys 2024; 51:3110-3123. [PMID: 37937827] [DOI: 10.1002/mp.16812]
Abstract
PURPOSE Computer-aided diagnosis (CAD) systems for breast ultrasound (BUS) aim to increase the efficiency and effectiveness of breast screening, helping specialists to detect and classify breast lesions. CAD system development requires a set of annotated images, including lesion segmentations, biopsy results to specify benign and malignant cases, and BI-RADS categories to indicate the likelihood of malignancy. In addition, standardized partitions into training, validation, and test sets promote reproducibility and fair comparisons between different approaches. We therefore present a publicly available BUS dataset whose novelty is a substantial increase in the number of cases with the above-mentioned annotations and the inclusion of standardized partitions to objectively assess and compare CAD systems. ACQUISITION AND VALIDATION METHODS The BUS dataset comprises 1875 anonymized images from 1064 female patients acquired with four ultrasound scanners during systematic studies at the National Institute of Cancer (Rio de Janeiro, Brazil). The dataset includes biopsy-proven tumors divided into 722 benign and 342 malignant cases. A senior ultrasonographer performed a BI-RADS assessment in categories 2 to 5 and manually outlined the breast lesions to obtain ground-truth segmentations. Furthermore, 5- and 10-fold cross-validation partitions are provided to standardize the training and test sets for evaluating and reproducing CAD systems. Finally, to validate the utility of the BUS dataset, an evaluation framework is implemented to assess the performance of deep neural networks for segmenting and classifying breast lesions. DATA FORMAT AND USAGE NOTES The BUS dataset is publicly available for academic and research purposes through an open-access repository under the name BUS-BRA: A Breast Ultrasound Dataset for Assessing CAD Systems. BUS images and reference segmentations are saved in Portable Network Graphics (PNG) files, and the dataset information is stored in separate Comma-Separated Values (CSV) files. POTENTIAL APPLICATIONS The BUS-BRA dataset can be used to develop and assess artificial intelligence-based lesion detection and segmentation methods, as well as the classification of BUS images into pathological classes and BI-RADS categories. Other potential applications include developing image processing methods such as despeckle filtering and contrast enhancement to improve image quality, and feature engineering for image description.
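A hypothetical loading sketch for a dataset organized this way (PNG images and masks plus CSV metadata). The folder layout, file names, and CSV column names below are assumptions for illustration; the released BUS-BRA repository documents the actual layout.

```python
# Hypothetical loader for BUS-BRA-style data; the actual file names and
# CSV columns in the released dataset may differ.
from pathlib import Path
import csv
from PIL import Image

root = Path("BUS-BRA")  # assumed dataset root
with open(root / "bus_data.csv", newline="") as f:  # assumed metadata file
    for row in csv.DictReader(f):
        img = Image.open(root / "Images" / f"{row['ID']}.png")
        mask = Image.open(root / "Masks" / f"{row['ID']}.png")  # ground truth
        label = row["Pathology"]   # benign / malignant (biopsy-proven)
        birads = row["BIRADS"]     # BI-RADS category 2-5
        # ... feed (img, mask, label, birads) into a CAD training loop
```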
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Tamaulipas, Mexico
7. Tasnim J, Hasan MK. CAM-QUS guided self-tuning modular CNNs with multi-loss functions for fully automated breast lesion classification in ultrasound images. Phys Med Biol 2023; 69:015018. [PMID: 38056017] [DOI: 10.1088/1361-6560/ad1319]
Abstract
Objective. Breast cancer is the major cause of cancer death among women worldwide. Deep learning-based computer-aided diagnosis (CAD) systems for classifying lesions in breast ultrasound (BUS) images can help materialise the early detection of breast cancer and enhance survival chances. Approach. This paper presents a completely automated BUS diagnosis system with modular convolutional neural networks tuned with novel loss functions. The proposed network comprises a dynamic channel input enhancement network, an attention-guided InceptionV3-based feature extraction network, a classification network, and a parallel feature transformation network that maps deep features into quantitative ultrasound (QUS) feature space. These networks function together to improve classification accuracy by increasing the separation of benign and malignant class-specific features while enriching them simultaneously. Unlike traditional approaches based on categorical cross-entropy (CCE) loss alone, our method uses two additional novel losses, a class activation mapping (CAM)-based loss and a QUS feature-based loss, to enable the overall network to learn clinically valued lesion shape- and texture-related properties focused primarily on the lesion area, supporting explainable AI (XAI). Main results. Experiments on four public datasets, one private dataset, and a combined breast ultrasound dataset are used to validate our strategy. The suggested technique obtains an accuracy of 97.28%, sensitivity of 93.87%, and F1-score of 95.42% on dataset 1 (BUSI), and an accuracy of 91.50%, sensitivity of 89.38%, and F1-score of 89.31% on the combined dataset, consisting of 1494 images collected from hospitals in five demographic locations using four ultrasound systems from different manufacturers. These results outperform techniques reported in the literature by a considerable margin. Significance. The proposed CAD system provides a diagnosis from the auto-focused lesion area of B-mode BUS images, avoiding any explicit requirement for segmentation or region-of-interest extraction, and can thus be a handy tool for making accurate and reliable diagnoses even in unspecialized healthcare centers.
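The multi-loss idea can be made concrete in a few lines of PyTorch; the weighting coefficients and the specific forms of the CAM and QUS losses below are illustrative assumptions, not the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def total_loss(logits, labels, cam, lesion_mask, qus_pred, qus_feats,
               w_cam=0.5, w_qus=0.5):
    """CCE plus CAM- and QUS-based auxiliary terms (illustrative weights)."""
    # 1) Standard categorical cross-entropy on benign/malignant logits
    l_cce = F.cross_entropy(logits, labels)
    # 2) CAM loss: push class activation maps to concentrate on the lesion
    l_cam = F.mse_loss(cam, lesion_mask)
    # 3) QUS loss: deep features mapped into quantitative-US feature space
    l_qus = F.mse_loss(qus_pred, qus_feats)
    return l_cce + w_cam * l_cam + w_qus * l_qus

# Toy tensors with plausible shapes
logits = torch.randn(8, 2)                 # batch of 8, two classes
labels = torch.randint(0, 2, (8,))
cam = torch.rand(8, 1, 32, 32)             # class activation maps
lesion_mask = (torch.rand(8, 1, 32, 32) > 0.5).float()
qus_pred, qus_feats = torch.randn(8, 10), torch.randn(8, 10)
print(total_loss(logits, labels, cam, lesion_mask, qus_pred, qus_feats))
```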
Affiliation(s)
- Jarin Tasnim
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
- Md Kamrul Hasan
- Department of Electrical and Electronic Engineering, Bangladesh University of Engineering and Technology (BUET), Dhaka 1205, Bangladesh
8. Misra S, Yoon C, Kim K, Managuli R, Barr RG, Baek J, Kim C. Deep learning-based multimodal fusion network for segmentation and classification of breast cancers using B-mode and elastography ultrasound images. Bioeng Transl Med 2023; 8:e10480. [PMID: 38023698] [PMCID: PMC10658476] [DOI: 10.1002/btm2.10480]
Abstract
Ultrasonography is one of the key medical imaging modalities for evaluating breast lesions. For differentiating benign from malignant lesions, computer-aided diagnosis (CAD) systems have greatly assisted radiologists by automatically segmenting lesions and identifying their features. Here, we present deep learning (DL)-based methods to segment lesions and then classify them as benign or malignant, utilizing both B-mode and strain elastography (SE-mode) images. We propose a weighted multimodal U-Net (W-MM-U-Net) model for segmenting lesions, in which an optimum weight is assigned to each imaging modality through a weighted-skip-connection method to emphasize its importance. We design a multimodal fusion framework (MFF) operating on cropped B-mode and SE-mode ultrasound (US) lesion images to classify benign and malignant lesions. The MFF consists of an integrated feature network (IFN) and a decision network (DN). Unlike other recent fusion methods, the proposed MFF method can simultaneously learn complementary information from convolutional neural networks (CNNs) trained on B-mode and SE-mode US images. The features from the CNNs are ensembled using the multimodal EmbraceNet model, and the DN classifies the images using those features. The experimental results (sensitivity of 100 ± 0.00% and specificity of 94.28 ± 7.00%) on real-world clinical data showed that the proposed method outperforms existing single- and multimodal methods. The proposed method predicted seven benign patients as benign in three out of five trials and six malignant patients as malignant in five out of five trials. The proposed method could potentially enhance radiologists' classification accuracy for breast cancer detection in US images.
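One way to picture the weighted skip connection, as a hedged PyTorch sketch (the real W-MM-U-Net's layout and learned weights are more involved): encoder features from each modality are blended with learnable weights before being passed on to the decoder.

```python
import torch
import torch.nn as nn

class WeightedSkipFusion(nn.Module):
    """Blend B-mode and SE-mode encoder features with learnable weights."""
    def __init__(self):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(2))  # one weight per modality

    def forward(self, feat_bmode, feat_semode):
        w = torch.softmax(self.logits, dim=0)  # weights sum to 1
        return w[0] * feat_bmode + w[1] * feat_semode

fusion = WeightedSkipFusion()
f_b, f_se = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(fusion(f_b, f_se).shape)  # torch.Size([1, 64, 32, 32])
```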
Affiliation(s)
- Sampa Misra
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea
- Chiho Yoon
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea
- Kwang-Ju Kim
- Daegu-Gyeongbuk Research Center, Electronics and Telecommunications Research Institute (ETRI), Daegu, South Korea
- Ravi Managuli
- Department of Bioengineering, University of Washington, Seattle, Washington, USA
- Richard G. Barr
- Department of Radiology, Northeastern Ohio Medical University, Youngstown, Ohio, USA
- Jongduk Baek
- School of Integrated Technology, Yonsei University, Seoul, South Korea
- Chulhong Kim
- Department of Electrical Engineering, Convergence IT Engineering, Mechanical Engineering, Medical Device Innovation Center, and Graduate School of Artificial Intelligence, Pohang University of Science and Technology, Pohang, South Korea
9. Mansour IR, Thomson RM. Haralick texture analysis for microdosimetry: characterization of Monte Carlo generated 3D specific energy distributions. Phys Med Biol 2023; 68:185003. [PMID: 37591252] [DOI: 10.1088/1361-6560/acf183]
Abstract
Objective. To explore the application of Haralick textural analysis to 3D distributions of specific energy (energy imparted per unit mass) scored in cell-scale targets, considering varying mean specific energy (absorbed dose), target volume, and incident spectrum. Approach. Monte Carlo simulations are used to generate specific energy distributions in cell-scale water voxels ((1 μm)³ to (15 μm)³) irradiated by photon sources (mean energies: 0.02-2 MeV) to varying mean specific energies (10-400 mGy). Five Haralick features (homogeneity, contrast, entropy, correlation, local homogeneity) are calculated using an implementation of Haralick analysis designed to reduce sensitivity to grey-level quantization, and are interpreted using fundamental radiation physics. Main results. Haralick measures quantify differences in 3D specific energy distributions observed with varying voxel volume, absorbed dose magnitude, and source spectrum. For example, specific energy distributions in small (1-3 μm) voxels at low absorbed dose (10 mGy) have relatively high measures of homogeneity and local homogeneity and relatively low measures of contrast and entropy (all relative to measures for larger voxels), reflecting the many voxels with zero specific energy in an otherwise sporadic distribution. With increasing target size, energy is shared across more target voxels, and trends in Haralick measures, such as decreasing homogeneity and increasing contrast and entropy, reflect characteristics of each 3D specific energy distribution. Specific energy distributions for sources of differing mean energy are also characterized by Haralick measures: for example, contrast generally decreases with increasing source energy, while correlation and homogeneity are often (though not always) higher for higher-energy sources. Significance. Haralick texture analysis successfully quantifies spatial trends in 3D specific energy distributions characteristic of radiation source, target size, and absorbed dose magnitude, thus offering new avenues for quantifying microdosimetric data beyond first-order histogram features. Promising future directions include investigations of multiscale tissue models, targeted radiation therapy techniques, and biological response to radiation.
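To make the dose-magnitude trend concrete, here is a toy Monte Carlo-flavored sketch (not the authors' simulation): energy-deposition events per voxel are drawn from a Poisson distribution whose mean scales with absorbed dose, and Haralick measures of the resulting 3D grid are compared at low and high dose. At low dose most voxels receive zero energy, which drives the high homogeneity the abstract describes.

```python
import numpy as np
import mahotas

def specific_energy_grid(dose_mgy, events_per_mgy=0.05, shape=(24, 24, 24)):
    """Toy model: voxel specific energy proportional to Poisson hit counts."""
    rng = np.random.default_rng(42)
    return rng.poisson(dose_mgy * events_per_mgy, size=shape)

for dose in (10, 400):  # mGy, spanning the study's range
    grid = np.clip(specific_energy_grid(dose), 0, 31).astype(np.uint8)
    feats = mahotas.features.haralick(grid).mean(axis=0)
    print(f"{dose:3d} mGy: zero-energy voxels {(grid == 0).mean():.0%}, "
          f"ASM {feats[0]:.3f}, contrast {feats[1]:.2f}, entropy {feats[8]:.2f}")
```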
Affiliation(s)
- Iymad R Mansour
- Carleton Laboratory for Radiotherapy Physics, Physics Department, Carleton University, 1125 Colonel By Dr, Ottawa, K1S 5B6, Ontario, Canada
- Rowan M Thomson
- Carleton Laboratory for Radiotherapy Physics, Physics Department, Carleton University, 1125 Colonel By Dr, Ottawa, K1S 5B6, Ontario, Canada
10. Kondo S, Satoh M, Nishida M, Sakano R, Takagi K. Ceusia-Breast: computer-aided diagnosis with contrast enhanced ultrasound image analysis for breast lesions. BMC Med Imaging 2023; 23:114. [PMID: 37644398] [PMCID: PMC10466705] [DOI: 10.1186/s12880-023-01072-9]
Abstract
BACKGROUND In recent years, contrast-enhanced ultrasonography (CEUS) has been used for various applications in breast diagnosis. The superiority of CEUS over conventional B-mode imaging in the ultrasound diagnosis of breast lesions in clinical practice has been widely confirmed. On the other hand, while there have been many proposals for computer-aided diagnosis of breast lesions in B-mode ultrasound images, there are few for CEUS. We propose a semi-automatic, machine learning-based classification method for CEUS of breast lesions. METHODS The proposed method extracts spatial and temporal features from CEUS videos and classifies breast tumors as benign or malignant using linear support vector machines (SVM) with a combination of selected optimal features. Tumor regions are extracted using guidance information specified by the examiners; morphological and texture features of the tumor regions are then obtained from the B-mode and CEUS images, and time-intensity curve (TIC) features are extracted from the CEUS video. SVM classifiers then classify the tumors as benign or malignant. During SVM training, many features are prepared and the useful ones are selected. We name our proposed method "Ceusia-Breast" (Contrast Enhanced UltraSound Image Analysis for BREAST lesions). RESULTS The experimental results on 119 subjects show that the area under the receiver operating curve, accuracy, precision, and recall are 0.893, 0.816, 0.841, and 0.920, respectively. The classification performance of our method is better than that of conventional methods using only B-mode images. In addition, we confirm that the selected features are consistent with the CEUS guidelines for breast tumor diagnosis. Furthermore, an experiment on the operator dependency of the guidance input found intra-operator and inter-operator kappa coefficients of 1.0 and 0.798, respectively. CONCLUSION The experimental results show a significant improvement in classification performance compared to conventional classification methods using only B-mode images. We also confirm that the selected features are related to findings considered important in clinical practice. Furthermore, we verified the intra- and inter-examiner correlation of the guidance input for region extraction and confirmed that both are in strong agreement.
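Of the feature families mentioned, the time-intensity curve is the easiest to make concrete. A hedged sketch, assuming the tumor ROI's mean intensity has already been extracted per frame, deriving several common TIC descriptors (peak enhancement, time to peak, wash-in rate, area under the curve); the exact descriptor set used in the paper may differ.

```python
import numpy as np

def tic_features(intensity: np.ndarray, fps: float = 10.0) -> dict:
    """Common time-intensity-curve descriptors for a CEUS tumor ROI signal."""
    t = np.arange(intensity.size) / fps
    baseline = intensity[: int(fps)].mean()   # first second as baseline
    i_peak = int(np.argmax(intensity))
    enh = intensity - baseline
    return {
        "peak_enhancement": float(enh[i_peak]),
        "time_to_peak_s": float(t[i_peak]),
        "wash_in_rate": float(enh[i_peak] / t[i_peak]) if i_peak else 0.0,
        "auc": float(((enh[:-1] + enh[1:]) / 2).sum() / fps),  # trapezoid rule
    }

# Synthetic enhancement curve: rapid wash-in, slow wash-out, mild noise
t = np.arange(0, 60, 0.1)
curve = 100 * (1 - np.exp(-t / 5)) * np.exp(-t / 40) + np.random.rand(t.size)
print(tic_features(curve, fps=10.0))
```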
11. Deb SD, Jha RK. Breast UltraSound Image classification using fuzzy-rank-based ensemble network. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104871]
12. Luo Y, Lu Z, Liu L, Huang Q. Deep fusion of human-machine knowledge with attention mechanism for breast cancer diagnosis. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104784]
13. Jo S, Lee H, Kim HJ, Suh CH, Kim SJ, Lee Y, Roh JH, Lee JH. Do radiomics or diffusion-tensor images provide additional information to predict brain amyloid-beta positivity? Sci Rep 2023; 13:9755. [PMID: 37328578] [PMCID: PMC10275931] [DOI: 10.1038/s41598-023-36639-7]
Abstract
The aim of the present study was to predict amyloid-beta positivity using a conventional T1-weighted image, radiomics, and a diffusion-tensor image obtained by magnetic resonance imaging (MRI). We included 186 patients with mild cognitive impairment (MCI) who underwent Florbetaben positron emission tomography (PET), MRI (three-dimensional T1-weighted and diffusion-tensor images), and neuropsychological tests at the Asan Medical Center. We developed a stepwise machine learning algorithm using demographics, T1 MRI features (volume, cortical thickness and radiomics), and diffusion-tensor image to distinguish amyloid-beta positivity on Florbetaben PET. We compared the performance of each algorithm based on the MRI features used. The study population included 72 patients with MCI in the amyloid-beta-negative group and 114 patients with MCI in the amyloid-beta-positive group. The machine learning algorithm using T1 volume performed better than that using only clinical information (mean area under the curve [AUC]: 0.73 vs. 0.69, p < 0.001). The machine learning algorithm using T1 volume showed better performance than that using cortical thickness (mean AUC: 0.73 vs. 0.68, p < 0.001) or texture (mean AUC: 0.73 vs. 0.71, p = 0.002). The performance of the machine learning algorithm using fractional anisotropy in addition to T1 volume was not better than that using T1 volume alone (mean AUC: 0.73 vs. 0.73, p = 0.60). Among MRI features, T1 volume was the best predictor of amyloid PET positivity. Radiomics or diffusion-tensor images did not provide additional benefits.
Affiliation(s)
- Sungyang Jo
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Hyunna Lee
- Bigdata Research Center, Asan Institute for Life Science, Asan Medical Center, Seoul, Republic of Korea
- Hyung-Ji Kim
- Department of Neurology, Uijeongbu Eulji Medical Center, Eulji University School of Medicine, Uijeongbu, Republic of Korea
- Chong Hyun Suh
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Sang Joon Kim
- Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea
- Yoojin Lee
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
- Jee Hoon Roh
- Department of Physiology, Korea University College of Medicine, Seoul, Republic of Korea
- Jae-Hong Lee
- Department of Neurology, Asan Medical Center, University of Ulsan College of Medicine, 88, Olympic-ro 43-gil, Songpa-gu, Seoul, 05505, Republic of Korea
14. Chang HH, Yeh SJ, Chiang MC, Hsieh ST. RU-Net: skull stripping in rat brain MR images after ischemic stroke with rat U-Net. BMC Med Imaging 2023; 23:44. [PMID: 36973775] [PMCID: PMC10045128] [DOI: 10.1186/s12880-023-00994-8]
Abstract
BACKGROUND Experimental ischemic stroke models play a fundamental role in interpreting the mechanisms of cerebral ischemia and appraising the development of pathological extent. An accurate and automatic skull stripping tool for rat brain MRI volumes is crucial in experimental stroke analysis. Due to the lack of reliable rat brain segmentation methods, and motivated by the demands of preclinical studies, this paper develops a new skull stripping algorithm to extract the rat brain region in MR images after stroke, named Rat U-Net (RU-Net). METHODS Based on a U-shaped deep learning architecture, the proposed framework integrates batch normalization with a residual network to achieve efficient end-to-end segmentation. A pooling-index transmission mechanism between the encoder and decoder is exploited to reinforce spatial correlation. Two imaging modalities, diffusion-weighted imaging (DWI) and T2-weighted MRI (T2WI), corresponding to two in-house datasets of 55 subjects each, were employed to evaluate the performance of the proposed RU-Net. RESULTS Extensive experiments indicated high segmentation accuracy across diverse rat brain MR images. Our rat skull stripping network outperformed several state-of-the-art methods, achieving the highest average Dice scores of 98.04% (p < 0.001) and 97.67% (p < 0.001) on the DWI and T2WI datasets, respectively. CONCLUSION The proposed RU-Net is believed to hold potential for advancing preclinical stroke investigation, providing an efficient tool for pathological rat brain image extraction, in which accurate segmentation of the rat brain region is fundamental.
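Segmentation quality in studies like this one is scored with the Dice similarity coefficient (the 98.04% and 97.67% figures above are Dice scores); a minimal implementation for binary masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty -> perfect

# Toy example: predicted brain mask vs. manual ground truth
truth = np.zeros((128, 128), bool); truth[30:100, 40:110] = True
pred = np.zeros((128, 128), bool); pred[32:102, 38:108] = True
print(f"Dice = {dice(pred, truth):.4f}")
```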
Affiliation(s)
- Herng-Hua Chang
- Computational Biomedical Engineering Laboratory (CBEL), Department of Engineering Science and Ocean Engineering, National Taiwan University, No. 1 Sec. 4 Roosevelt Road, Daan, Taipei, 10617, Taiwan
- Shin-Joe Yeh
- Department of Neurology and Stroke Center, National Taiwan University Hospital, Taipei, 10002, Taiwan
- Ming-Chang Chiang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, 11221, Taiwan
- Sung-Tsang Hsieh
- Department of Neurology and Stroke Center, National Taiwan University Hospital, Taipei, 10002, Taiwan
- Graduate Institute of Anatomy and Cell Biology, College of Medicine, National Taiwan University, Taipei, 10051, Taiwan
- Graduate Institute of Clinical Medicine, College of Medicine, National Taiwan University, Taipei, 10051, Taiwan
- Graduate Institute of Brain and Mind Sciences, College of Medicine, National Taiwan University, Taipei, 10051, Taiwan
- Center of Precision Medicine, College of Medicine, National Taiwan University, Taipei, 10051, Taiwan
15. Nam K, Torkzaban M, Halegoua-DeMarzio D, Wessner CE, Lyshchik A. Improving diagnostic accuracy of ultrasound texture features in detecting and quantifying hepatic steatosis using various beamforming sound speeds. Phys Med Biol 2023; 68. [PMID: 36696691] [PMCID: PMC10009771] [DOI: 10.1088/1361-6560/acb635]
Abstract
Objective. While ultrasound image texture has been utilized to detect and quantify hepatic steatosis, texture features extracted using a single (conventionally 1540 m/s) beamforming speed of sound (SoS) have failed to achieve reliable diagnostic performance. This study aimed to investigate whether texture features extracted using various beamforming SoSs can improve the accuracy of hepatic steatosis detection and quantification. Approach. Patients with suspected non-alcoholic fatty liver disease who underwent liver biopsy or MRI proton density fat fraction (PDFF) assessment as part of standard of care were prospectively enrolled. Radio-frequency data from the subjects' right and left liver lobes were collected using six beamforming SoSs (1300, 1350, 1400, 1450, 1500, and 1540 m/s) and analyzed offline. The texture features Contrast, Correlation, Energy, and Homogeneity were obtained from the gray-level co-occurrence matrix of the normalized envelope within a region of interest in the liver parenchyma. Main results. Forty-three subjects (67.2%) were diagnosed with steatosis while 21 had no steatosis. Homogeneity showed an area under the curve (AUC) of 0.75-0.82 and 0.58-0.81 for the left and right lobes, respectively, depending on the beamforming SoS. The Homogeneity values combined over 1300-1540 m/s showed AUCs of 0.90 and 0.81 for the left and right lobes, respectively, and combining both lobes over 1300-1540 m/s improved the AUC to 0.94. The correlation between texture features and steatosis severity was also improved by using images from various beamforming SoSs: the combined Contrast values over 1300-1540 m/s from both lobes demonstrated the highest correlation with MRI PDFF (r = 0.90), while the combined Homogeneity values over 1300-1540 m/s from both lobes showed the highest correlation with the biopsy grades (r = -0.81). Significance. The diagnostic accuracy of ultrasound texture features in detecting and quantifying hepatic steatosis was improved by combining values extracted using various beamforming SoSs.
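A sketch of the combination step, assuming per-lobe Homogeneity values have already been computed at each of the six beamforming sound speeds. Concatenating them into one feature vector and scoring with a logistic model is one plausible reading of "combined", not necessarily the authors' exact method; the data here are synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

SOS = [1300, 1350, 1400, 1450, 1500, 1540]  # beamforming sound speeds (m/s)
rng = np.random.default_rng(3)

n = 64  # subjects (synthetic stand-in for the 64 enrolled)
y = (rng.random(n) < 0.67).astype(int)  # ~67% steatosis prevalence

# Homogeneity per lobe per SoS; steatotic livers get a small offset
homog = rng.normal(0.5, 0.05, size=(n, 2, len(SOS))) + 0.03 * y[:, None, None]

auc_single = cross_val_score(LogisticRegression(), homog[:, 0, -1:], y,
                             cv=5, scoring="roc_auc").mean()
auc_combined = cross_val_score(LogisticRegression(),
                               homog.reshape(n, -1), y,
                               cv=5, scoring="roc_auc").mean()
print(f"single SoS (left lobe, 1540 m/s): AUC {auc_single:.2f}")
print(f"combined lobes x all SoS:         AUC {auc_combined:.2f}")
```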
Affiliation(s)
- Kibo Nam
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Mehnoosh Torkzaban
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Dina Halegoua-DeMarzio
- Department of Medicine, Division of Gastroenterology & Hepatology, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Corinne E. Wessner
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA 19107, USA
- Andrej Lyshchik
- Department of Radiology, Thomas Jefferson University, Philadelphia, PA 19107, USA
16. Kuo CFJ, Chen HY, Barman J, Ko KH, Hsu HH. Complete, Fully Automatic Detection and Classification of Benign and Malignant Breast Tumors Based on CT Images Using Artificial Intelligent and Image Processing. J Clin Med 2023; 12:1582. [PMID: 36836118] [PMCID: PMC9960342] [DOI: 10.3390/jcm12041582]
Abstract
Breast cancer is the most common type of cancer in women, and early detection is important to significantly reduce its mortality rate. This study introduces a detection and diagnosis system that automatically detects and classifies breast tumors in CT scan images. First, the contours of the chest wall are extracted from computed chest tomography images; two-dimensional and three-dimensional image features, together with the active contours without edges and geodesic active contours methods, are then used to detect, locate, and circle the tumor. The computer-assisted diagnostic system then extracts features and quantifies and classifies benign and malignant breast tumors using a greedy algorithm and a support vector machine. The study used 174 breast tumors for experiments and training and performed 10-fold cross-validation to evaluate the system's performance. The accuracy, sensitivity, specificity, and positive and negative predictive values of the system were 99.43%, 98.82%, 100%, 100%, and 98.89%, respectively. This system supports the rapid extraction and classification of breast tumors as either benign or malignant, helping physicians to improve clinical diagnosis.
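The greedy feature selection step has a direct scikit-learn analogue in SequentialFeatureSelector. A hedged sketch with synthetic stand-ins for the quantified tumor features (the paper's exact greedy criterion and feature count may differ):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Stand-in for features quantified from 174 segmented breast tumors
X, y = make_classification(n_samples=174, n_features=30, n_informative=8,
                           random_state=0)

svm = SVC(kernel="rbf")
greedy = SequentialFeatureSelector(svm, n_features_to_select=10,
                                   direction="forward", cv=5)
pipe = make_pipeline(greedy, svm)
print(cross_val_score(pipe, X, y, cv=10).mean())  # 10-fold CV accuracy
```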
Affiliation(s)
- Chung-Feng Jeffrey Kuo
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Hsuan-Yu Chen
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Jagadish Barman
- Department of Materials Science and Engineering, National Taiwan University of Science and Technology, Taipei 106, Taiwan
- Kai-Hsiung Ko
- Department of Radiology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan
- Hsian-He Hsu
- Department of Radiology, Tri-Service General Hospital, National Defense Medical Center, Taipei 114, Taiwan
17. Xu Q, Cai J, Ma L, Tan B, Li Z, Sun L. Custom-Developed Reflection-Transmission Integrated Vision System for Rapid Detection of Huanglongbing Based on the Features of Blotchy Mottled Texture and Starch Accumulation in Leaves. Plants (Basel) 2023; 12:616. [PMID: 36771700] [PMCID: PMC9921774] [DOI: 10.3390/plants12030616]
Abstract
Huanglongbing (HLB) is a highly contagious and devastating citrus disease that causes huge economic losses to the citrus industry. Because it cannot be cured, timely detection of the HLB infection status of plants and removal of diseased trees are effective ways to reduce losses. However, complex HLB presentations, such as single HLB-symptomatic or zinc deficiency + HLB-positive, cannot currently be identified by a single reflection imaging method. In this study, a vision system with an integrated reflection-transmission image acquisition module, a human-computer interaction module, and a power supply module was developed for rapid HLB detection in the field. In reflection imaging mode, 660 nm polarized light was used as the illumination source to enhance the contrast of HLB symptoms in the images, exploiting differences in the absorption of narrow-band light by components within the leaves. In transmission imaging mode, polarization images were obtained in four directions, and polarization angle images were calculated using the Stokes vector to detect the optical activity of starch. A step-by-step classification model with four steps was used to identify six classes of samples (healthy, HLB-symptomatic, zinc deficiency, zinc deficiency + HLB-positive, magnesium deficiency, and boron deficiency). The results showed that the model had an accuracy of 96.92% across all sample categories and 98.08% for identifying the multiple types of HLB (HLB-symptomatic and zinc deficiency + HLB-positive). In addition, the classification model had good recognition of zinc deficiency and zinc deficiency + HLB-positive samples, at 92.86%.
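The transmission-mode step relies on a standard Stokes-vector computation. A sketch, assuming four registered intensity images taken through a linear polarizer at 0°, 45°, 90°, and 135°: the angle of linear polarization follows directly from the S1 and S2 Stokes components.

```python
import numpy as np

def polarization_angle(i0, i45, i90, i135):
    """Angle-of-linear-polarization image from four polarizer orientations."""
    s1 = i0.astype(float) - i90    # Stokes S1
    s2 = i45.astype(float) - i135  # Stokes S2
    return 0.5 * np.arctan2(s2, s1)  # radians, in (-pi/2, pi/2]

# Toy 4-orientation stack standing in for registered leaf images
rng = np.random.default_rng(0)
i0, i45, i90, i135 = (rng.random((128, 128)) * 255 for _ in range(4))
aolp = polarization_angle(i0, i45, i90, i135)
print(aolp.min(), aolp.max())
```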
Affiliation(s)
- Li Sun
- Correspondence: (J.C.); (L.S.)
18. Efficient Deep Feature Based Semantic Image Retrieval. Neural Process Lett 2023. [DOI: 10.1007/s11063-022-11079-y]
19.
Abstract
Early prediction of delayed healing for venous leg ulcers could improve management outcomes by enabling earlier initiation of adjuvant therapies. In this paper, we propose a framework for computerised prediction of healing for venous leg ulcers assessed in home settings, using thermal images taken at week 0. Wound data from 56 older participants followed over 12 weeks were used for the study. Thermal images of the wounds were collected in the participants' homes and labelled as healed or unhealed at the 12-week follow-up. Textural information was extracted from the week-0 thermal images; thermal images of unhealed wounds had a higher variation in the distribution of grey tones. We demonstrated that the first three principal components of the textural features from this single timepoint can be used as input to a Bayesian neural network to discriminate between healed and unhealed wounds. Using the optimal Bayesian neural network, the classification results showed 78.57% sensitivity and 60.00% specificity. This non-contact method, incorporating machine learning, can provide a computerised prediction of delayed healing at the first assessment (week 0) in participants' homes, whereas the current method, which requires contact digital planimetry, can only do so at week 3.
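A hedged sketch of the feature-reduction step: texture features from the week-0 thermal images are reduced to their first three principal components, which then feed a classifier. A logistic model stands in here for the paper's Bayesian neural network, and all data are synthetic placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(56, 12))           # 56 wounds x 12 texture features (toy)
y = (rng.random(56) > 0.4).astype(int)  # 1 = healed by week 12 (placeholder)

clf = make_pipeline(StandardScaler(), PCA(n_components=3),
                    LogisticRegression())  # BNN replaced by a simple classifier
print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
```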
20. On the Quantification of Visual Texture Complexity. J Imaging 2022; 8:248. [PMID: 36135413] [PMCID: PMC9505268] [DOI: 10.3390/jimaging8090248]
Abstract
Complexity is one of the major attributes of the visual perception of texture. However, very little is known about how humans visually interpret texture complexity. A psychophysical experiment was conducted to visually quantify seven texture attributes of a series of textile fabrics: complexity, color variation, randomness, strongness, regularity, repetitiveness, and homogeneity. It was found that observers could discriminate between textures with low and high complexity using high-level visual cues such as randomness, color variation, and strongness. The results of principal component analysis (PCA) on the visual scores of these attributes suggest that complexity and homogeneity could essentially be the underlying attributes of the same visual texture dimension, with complexity at the negative extreme and homogeneity at the positive extreme; we chose to call this dimension visual texture complexity. Several texture measures, including first-order image statistics, the co-occurrence matrix, the local binary pattern, and Gabor features, were computed for images of the textiles in sRGB and in four luminance-chrominance color spaces (HSV, YCbCr, Ohta's I1I2I3, and CIELAB). The relationships between the visually quantified texture complexity of the textiles and the corresponding texture measures of the images were investigated. The analysis showed that the simple standard deviation of the image luminance channel had a strong correlation with the corresponding visual ratings of texture complexity in all five color spaces. The standard deviation of the energy of the image after convolution with an appropriate Gabor filter, and the entropy of the co-occurrence matrix, both computed for the image luminance channel, also showed high correlations with the visual data. In this comparison, sRGB, YCbCr, and HSV always outperformed the I1I2I3 and CIELAB color spaces. The highest correlations between the visual data and the corresponding image texture features in the luminance-chrominance color spaces were always obtained for the luminance channel, and one of the two chrominance channels always performed better than the other. This result indicates that the arrangement of texture elements that shapes the observer's perception of visual texture complexity cannot be represented properly by the chrominance channels, which must be carefully considered when choosing an image channel to quantify visual texture complexity. Additionally, the good performance of the luminance channel in the five studied color spaces shows that variation in the luminance of the texture, or what one could call luminance contrast, plays a crucial role in creating visual texture complexity.
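The best-performing measures reported above are easy to reproduce. A sketch, assuming an sRGB input image, that computes the standard deviation of the luminance channel and the co-occurrence-matrix entropy as complexity estimates; the BT.601 luma weights used here are one common RGB-to-luminance convention, not necessarily the paper's.

```python
import numpy as np
from skimage.feature import graycomatrix

def complexity_measures(rgb: np.ndarray) -> dict:
    """Luminance std and GLCM entropy as visual-complexity proxies."""
    # Luminance via BT.601 weights (one common RGB-to-luma convention)
    luma = rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114
    glcm = graycomatrix(luma.astype(np.uint8), [1], [0],
                        levels=256, symmetric=True, normed=True)
    p = glcm[..., 0, 0]
    p = p[p > 0]  # drop empty bins before taking the log
    return {"luminance_std": float(luma.std()),
            "glcm_entropy": float(-(p * np.log2(p)).sum())}

texture = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # toy fabric
print(complexity_measures(texture))
```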
21. Integrating patient symptoms, clinical readings, and radiologist feedback with computer-aided diagnosis system for detection of infectious pulmonary disease: a feasibility study. Med Biol Eng Comput 2022; 60:2549-2565. [DOI: 10.1007/s11517-022-02611-2]
22. Khomkham B, Lipikorn R. Pulmonary Lesion Classification Framework Using the Weighted Ensemble Classification with Random Forest and CNN Models for EBUS Images. Diagnostics (Basel) 2022; 12:1552. [PMID: 35885458] [PMCID: PMC9319293] [DOI: 10.3390/diagnostics12071552]
Abstract
Lung cancer is a deadly disease with a high mortality rate. Endobronchial ultrasonography (EBUS) is one method for detecting pulmonary lesions. Computer-aided diagnosis of pulmonary lesions from images can help radiologists classify lesions; however, most existing methods need a large volume of data to give good results. This paper therefore proposes a novel pulmonary lesion classification framework for EBUS images that works well with small datasets. The proposed framework integrates the statistical results of three classification models using weighted ensemble classification. The three models are a radiomics-feature and patient-data-based model, a single-image-based model, and a multi-patch-based model. The radiomics features are combined with the patient data as input to the random forest, whereas the EBUS images are used as input to the two CNN models. The performance of the proposed framework was evaluated on a set of 200 EBUS images consisting of 124 malignant and 76 benign lesions. The experimental results show that the accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve are 95.00%, 100%, 86.67%, 92.59%, 100%, and 93.33%, respectively. This framework can significantly improve pulmonary lesion classification.
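The ensemble step reduces to a weighted soft vote over the three models' probability outputs. A minimal sketch (the weights and probabilities here are illustrative, not the tuned values from the paper):

```python
import numpy as np

def weighted_ensemble(probs: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Weighted soft vote over per-model malignancy probabilities."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()  # normalize so the vote stays a probability
    return sum(wi * p for wi, p in zip(w, probs))

# Probabilities for 5 lesions from the three models (toy numbers)
p_rf = np.array([0.9, 0.2, 0.7, 0.4, 0.8])       # RF: radiomics + patient data
p_img = np.array([0.8, 0.3, 0.6, 0.5, 0.9])      # CNN on the single EBUS image
p_patch = np.array([0.85, 0.25, 0.8, 0.3, 0.7])  # CNN on multiple patches
p_final = weighted_ensemble([p_rf, p_img, p_patch], weights=[1.0, 1.5, 1.5])
print((p_final >= 0.5).astype(int))  # 1 = malignant
```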
Affiliation(s)
- Rajalida Lipikorn
- Machine Intelligence and Multimedia Information Technology Laboratory (MIMIT), Department of Mathematics and Computer Science, Faculty of Science, Chulalongkorn University, Bangkok 10330, Thailand
23. Breast Tumor Classification Using Intratumoral Quantitative Ultrasound Descriptors. Comput Math Methods Med 2022; 2022:1633858. [PMID: 35295204] [PMCID: PMC8920646] [DOI: 10.1155/2022/1633858]
Abstract
Breast cancer is a global epidemic, responsible for one of the highest mortality rates among women. Ultrasound imaging is becoming a popular tool for breast cancer screening, and quantitative ultrasound (QUS) techniques are being increasingly applied by researchers in an attempt to characterize breast tissue. Several different quantitative descriptors for breast cancer have been explored by researchers. This study proposes a breast tumor classification system using the three major types of intratumoral QUS descriptors which can be extracted from ultrasound radiofrequency (RF) data: spectral features, envelope statistics features, and texture features. A total of 16 features were extracted from ultrasound RF data across two different datasets, of which one is balanced and the other is severely imbalanced. The balanced dataset contains RF data of 100 patients with breast tumors, of which 48 are benign and 52 are malignant. The imbalanced dataset contains RF data of 130 patients with breast tumors, of which 104 are benign and 26 are malignant. Holdout validation was used to split the balanced dataset into 60% training and 40% testing sets. Feature selection was applied on the training set to identify the most relevant subset for the classification of benign and malignant breast tumors, and the performance of the features was evaluated on the test set. A maximum classification accuracy of 95% and an area under the receiver operating characteristic curve (AUC) of 0.968 was obtained on the test set. The performance of the identified relevant features was further validated on the imbalanced dataset, where a hybrid resampling strategy was firstly utilized to create an optimal balance between benign and malignant samples. A maximum classification accuracy of 93.01%, sensitivity of 94.62%, specificity of 91.4%, and AUC of 0.966 were obtained. The results indicate that the identified features are able to distinguish between benign and malignant breast lesions very effectively, and the combination of the features identified in this research has the potential to be a significant tool in the noninvasive rapid and accurate diagnosis of breast cancer.
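For the imbalanced dataset, a "hybrid resampling strategy" typically means combining oversampling of the minority class with undersampling or cleaning of the majority class. A sketch using imbalanced-learn's SMOTEENN, one such hybrid (the paper does not specify this exact combination), on synthetic stand-ins for the 16 QUS features:

```python
import numpy as np
from imblearn.combine import SMOTEENN
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
# 130 tumors x 16 QUS features: 104 benign (0), 26 malignant (1), synthetic
X = np.vstack([rng.normal(0, 1, (104, 16)), rng.normal(0.8, 1, (26, 16))])
y = np.r_[np.zeros(104), np.ones(26)]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
# Hybrid resampling: SMOTE oversampling + edited-nearest-neighbours cleaning
X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X_tr, y_tr)

clf = SVC(probability=True).fit(X_bal, y_bal)
print(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```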
|
24
|
Wang B, Gao Y, Yuan X, Xiong S. Local R-Symmetry Co-Occurrence: Characterising Leaf Image Patterns for Identifying Cultivars. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2022; 19:1018-1031. [PMID: 33055018 DOI: 10.1109/tcbb.2020.3031280] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Leaf image recognition techniques have been actively researched for plant species identification. However, it remains unclear whether analysing leaf patterns can provide sufficient information for further differentiating cultivars. This paper reports our attempt at cultivar recognition from leaves as a general, very fine-grained pattern recognition problem, which is not only a challenging research problem but also important for cultivar evaluation, selection, and production in agriculture. We propose a novel local R-symmetry co-occurrence method for characterising discriminative local symmetry patterns to distinguish subtle differences among cultivars. Through scalable and moving R-relation radius pairs, we generate a set of radius symmetry co-occurrence matrices (RsCoM) and their measures for describing the local symmetry properties of interior regions. By varying the size of the radius pair, the RsCoM measures local R-symmetry co-occurrence from global/coarse to fine scales. A new two-phase strategy of analysing the distribution of local RsCoM measures is designed to match the multi-scale appearance symmetry pattern distributions of similar cultivar leaf images. We constructed three leaf image databases, SoyCultivar, CottCultivar, and PeanCultivar, for an extensive experimental evaluation of recognition across soybean, cotton, and peanut cultivars. Encouraging experimental results of the proposed method in comparison with state-of-the-art leaf species recognition methods demonstrate the effectiveness of the proposed method for cultivar identification, which may advance research in leaf recognition from species to cultivar.
|
25
|
A method for eliminating the disturbance of pseudo-textural-direction in ultrasound image feature extraction. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103176] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
|
26
|
Muhtadi S, Chowdhury A, Razzaque RR, Shafiullah A. Analyzing the Texture of Nakagami Parametric Images for Classification of Breast Cancer. 2021 IEEE NATIONAL BIOMEDICAL ENGINEERING CONFERENCE (NBEC) 2021:100-105. [DOI: 10.1109/nbec53282.2021.9618762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/04/2025]
|
27
|
Shamsil A, Naish MD, Patel RV. Texture-based Intraoperative Image Guidance for Tumor Localization in Minimally Invasive Surgery. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:3526-3530. [PMID: 34892000 DOI: 10.1109/embc46164.2021.9629758] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Intraoperative tumor localization in a deflated lung in minimally invasive surgery (MIS) is challenging, as the lung cannot be manually palpated through small incisions. To palpate remotely, an articulated multisensory imaging device combining tactile and ultrasound sensors was developed. It visualizes the surface tactile map and the depth of the tissue. However, with little maneuverability in MIS, localizing tumors using instrumented palpation is both tedious and inefficient. In this paper, a texture-based image guidance system that classifies tactile-guided ultrasound texture regions and provides beliefs on their types is proposed. The resulting interactive feedback allows directed palpation in MIS. A k-means classifier is first used to cluster gray-level co-occurrence matrix (GLCM)-based texture features of the ultrasound regions, followed by hidden Markov model-based belief propagation to establish confidence about the clustered features as repeated patterns are observed. When the beliefs converge, the system autonomously detects tumor and nontumor textures. The approach was tested on 20 ex vivo soft tissue specimens in a staged MIS. The results showed that, with guidance, tumors in MIS could be localized with 98% accuracy, 99% sensitivity, and 97% specificity. Clinical Relevance: Texture-based image guidance adds efficiency and control to instrumented palpation in MIS. It renders fluidity and accuracy in image acquisition using a hand-held device, where fatigue from prolonged handling affects imaging quality.
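A minimal sketch of the first stage, clustering GLCM texture features of image regions with k-means, is given below using scikit-image and scikit-learn; the patches are synthetic and the belief-propagation stage is omitted.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19
from sklearn.cluster import KMeans

def glcm_features(patch, levels=32):
    """Contrast/energy/homogeneity/correlation of a quantized 2-D patch."""
    q = (patch.astype(float) / patch.max() * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "energy", "homogeneity", "correlation")])

# Cluster texture features of hypothetical ultrasound regions into two groups
rng = np.random.default_rng(1)
patches = [rng.integers(0, 256, size=(32, 32)) for _ in range(40)]
feats = np.stack([glcm_features(p) for p in patches])
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
```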
|
28
|
Koyuncu H, Barstuğan M. COVID-19 discrimination framework for X-ray images by considering radiomics, selective information, feature ranking, and a novel hybrid classifier. SIGNAL PROCESSING. IMAGE COMMUNICATION 2021; 97:116359. [PMID: 34219966 PMCID: PMC8241421 DOI: 10.1016/j.image.2021.116359] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/05/2020] [Revised: 06/12/2021] [Accepted: 06/13/2021] [Indexed: 05/17/2023]
Abstract
In medical imaging procedures for the detection of coronavirus, imaging-based confirmation of the diagnosis has special significance alongside medical tests. Imaging procedures are also useful for detecting the damage caused by COVID-19. Chest X-ray imaging is frequently used to diagnose COVID-19 and different pneumonias. This paper presents a task-specific framework to detect coronavirus in X-ray images. Binary classification of three different labels (healthy, bacterial pneumonia, and COVID-19) was performed on two differentiated data sets in which COVID-19 is the positive class. First-order statistics, the gray level co-occurrence matrix, the gray level run length matrix, and the gray level size zone matrix were analyzed to form fifteen sub-data sets and to ascertain the necessary radiomics. Two normalization methods were compared to make the data meaningful. Furthermore, five feature ranking approaches (Bhattacharyya, entropy, ROC, t-test, and Wilcoxon) were employed to provide the necessary information to a state-of-the-art classifier based on Gauss-map-based chaotic particle swarm optimization and neural networks. The proposed framework was designed according to the analyses of radiomics, normalization approaches, and filter-based feature ranking methods. In the experiments, seven metrics were evaluated to objectively determine the results: accuracy, area under the receiver operating characteristic (ROC) curve, sensitivity, specificity, g-mean, precision, and f-measure. The proposed framework showed promising scores on two X-ray-based data sets, with the accuracy and area under the ROC curve exceeding 99% for the classification of coronavirus vs. others.
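Filter-based feature ranking of the kind described (a t-test or Wilcoxon rank-sum test per feature) can be sketched in a few lines; the feature matrix and labels below are placeholders.

```python
import numpy as np
from scipy.stats import ttest_ind, ranksums

def rank_features(X, y, test="ttest"):
    """Rank features by class separation (smaller p-value = higher rank)."""
    stat = ttest_ind if test == "ttest" else ranksums   # Wilcoxon rank-sum
    pvals = np.array([stat(X[y == 0, j], X[y == 1, j]).pvalue
                      for j in range(X.shape[1])])
    return np.argsort(pvals), pvals                     # indices, best first

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 30))      # placeholder radiomic features
y = rng.integers(0, 2, size=120)    # COVID-19 vs. others (placeholder labels)
order, pvals = rank_features(X, y)
top10 = order[:10]                  # feed the best-ranked features to a classifier
```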
Affiliation(s)
- Hasan Koyuncu
- Konya Technical University, Faculty of Engineering and Natural Sciences, Electrical & Electronics Engineering Department, Konya, Turkey
| | - Mücahid Barstuğan
- Konya Technical University, Faculty of Engineering and Natural Sciences, Electrical & Electronics Engineering Department, Konya, Turkey
| |
|
29
|
Wang Q, Liu W, Chen X, Wang X, Chen G, Zhu X. Quantification of scar collagen texture and prediction of scar development via second harmonic generation images and a generative adversarial network. BIOMEDICAL OPTICS EXPRESS 2021; 12:5305-5319. [PMID: 34513258 PMCID: PMC8407811 DOI: 10.1364/boe.431096] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2021] [Revised: 07/12/2021] [Accepted: 07/16/2021] [Indexed: 05/29/2023]
Abstract
The texture of human scar tissue, widely used for medical analysis, is irregular and varied. The quantitative detection and analysis of scar texture enabled by image analysis technology is of great significance to clinical practice. However, existing methods remain disadvantaged by various shortcomings, such as the inability to fully extract texture features. Hence, the integration of second harmonic generation (SHG) imaging and a deep learning algorithm is proposed in this study. Through combination with Tamura texture features, a regression model of scar texture can be constructed to develop a novel method of computer-aided diagnosis that can assist clinical diagnosis. Based on the wavelet packet transform (WPT) and a generative adversarial network (GAN), the model is trained with scar texture images of different ages. Generalized Boosted Regression Trees (GBRT) are also adopted to perform regression analysis. The extracted features are then used to predict the age of the scar. The experimental results obtained by our proposed model are better than those of previously published methods. The work thus contributes to a better understanding of the mechanism behind scar development and possibly the further development of SHG for skin analysis and clinical practice.
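As a rough stand-in for the regression stage, the sketch below fits gradient-boosted regression trees to texture features and predicts a continuous scar-age target; the features and target are synthetic, and the SHG/WPT/GAN pipeline is not reproduced.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))    # e.g., Tamura-style texture features (synthetic)
age = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=200)  # synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(X, age, random_state=0)
gbrt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
gbrt.fit(X_tr, y_tr)
mae = mean_absolute_error(y_te, gbrt.predict(X_te))  # error in predicted scar age
```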
Affiliation(s)
- Qing Wang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
| | - Weiping Liu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
| | - Xinghong Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
| | - Xiumei Wang
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
| | - Guannan Chen
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
| | - Xiaoqin Zhu
- Key Laboratory of OptoElectronic Science and Technology for Medicine of Ministry of Education, Fujian Provincial Key Laboratory of Photonics Technology, Fujian Normal University, Fuzhou 350007, China
- Fujian Provincial Engineering Technology Research Center of Photoelectric Sensing Application, Fujian Normal University, Fuzhou 350007, China
| |
|
30
|
Ledgerwood M, Zifan A, Lin W, de Alva J, Chen H, Mittal RK. Novel gel bolus to improve impedance-based measurements of esophageal cross-sectional area during primary peristalsis. Neurogastroenterol Motil 2021; 33:e14071. [PMID: 33373474 DOI: 10.1111/nmo.14071] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Revised: 12/05/2020] [Accepted: 12/14/2020] [Indexed: 02/08/2023]
Abstract
INTRODUCTION Intraluminal esophageal impedance (ILEE) has the potential to measure esophageal luminal distension during swallow-induced peristalsis in the esophagus. A potential cause of inaccuracy in the ILEE measurement is swallow-induced air in the bolus. AIM To compare a novel gel bolus to the current alternatives for the measurement of impedance-based luminal distension (cross-sectional area, CSA) during primary peristalsis. METHODS Twelve healthy subjects were studied using high-resolution impedance manometry (HRMZ) and concurrently performed intraluminal ultrasound (US) imaging of the esophagus. Three test bolus materials were used: 1) novel gel, 2) 0.5 N saline, and 3) commercially available Diversatek EFTV viscous. Testing was performed in the supine and Trendelenburg (-15°) positions. US imaging assessed air in the bolus and luminal CSA. The nadir impedance values were correlated to the US-measured CSA. Custom MATLAB software was used to assess the bolus travel times and impedance-based luminal CSA. RESULTS The novel gel bolus had the least amount of air in the bolus during its passage through the esophagus, as assessed by US image analysis. The novel gel bolus in the supine and Trendelenburg positions had the best linear fit between the US-measured CSA and nadir impedance value (R² = 0.88 and R² = 0.90). The impedance-based calculation of the CSA correlated best with the US-measured CSA with the use of the novel gel bolus. CONCLUSION We suggest the use of the novel gel to assess distension along with contraction during routine clinical HRM testing.
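The core quantitative step, a linear fit between nadir impedance and US-measured CSA scored by R², can be reproduced generically; the numbers below are illustrative, not study data.

```python
import numpy as np

def linear_fit_r2(x, y):
    """Least-squares line y = a*x + b and its coefficient of determination."""
    a, b = np.polyfit(x, y, deg=1)
    resid = y - (a * x + b)
    r2 = 1.0 - resid.var() / y.var()
    return a, b, r2

# Placeholder data: nadir impedance (ohms) vs. US-measured CSA (mm^2)
impedance = np.array([80, 95, 110, 130, 150, 170.0])
csa = np.array([310, 270, 240, 200, 170, 140.0])
a, b, r2 = linear_fit_r2(impedance, csa)
```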
Affiliation(s)
- Melissa Ledgerwood
- Division of Gastroenterology, Department of Medicine, University of California San Diego, La Jolla, CA, USA.,Department of Material Science & Engineering, Jacobs School of Engineering, University of California, La Jolla, CA, USA
| | - Ali Zifan
- Division of Gastroenterology, Department of Medicine, University of California San Diego, La Jolla, CA, USA
| | - William Lin
- Division of Biology, University of California San Diego, La Jolla, CA, USA
| | - Jesse de Alva
- Department of Electrical & Computer Engineering, Jacobs School of Engineering, University of California San Diego, La Jolla, CA, USA
| | - Haojin Chen
- Department of Mechanical and Aerospace Engineering, Jacobs School of Engineering, University of California, La Jolla, CA, USA
| | - Ravinder K Mittal
- Division of Gastroenterology, Department of Medicine, University of California San Diego, La Jolla, CA, USA
| |
|
31
|
Huang R, Lin Z, Dou H, Wang J, Miao J, Zhou G, Jia X, Xu W, Mei Z, Dong Y, Yang X, Zhou J, Ni D. AW3M: An auto-weighting and recovery framework for breast cancer diagnosis using multi-modal ultrasound. Med Image Anal 2021; 72:102137. [PMID: 34216958 DOI: 10.1016/j.media.2021.102137] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 04/23/2021] [Accepted: 06/14/2021] [Indexed: 10/21/2022]
Abstract
Recently, more clinicians have realized the diagnostic value of multi-modal ultrasound in breast cancer identification and have begun to incorporate Doppler imaging and elastography in the routine examination. However, accurately recognizing patterns of malignancy in different types of sonography requires expertise. Furthermore, an accurate and robust diagnosis requires proper weighting of multi-modal information as well as the ability to handle missing data in practice. These two aspects are often overlooked by existing computer-aided diagnosis (CAD) approaches. To overcome these challenges, we propose a novel framework (called AW3M) that utilizes four types of sonography (i.e., B-mode, Doppler, shear-wave elastography, and strain elastography) jointly to assist breast cancer diagnosis. It can extract both modality-specific and modality-invariant features using a multi-stream CNN model equipped with a self-supervised consistency loss. Instead of assigning the weights of the different streams empirically, AW3M automatically learns the optimal weights using reinforcement learning techniques. Furthermore, we design a lightweight recovery block that can be inserted into a trained model to handle different modality-missing scenarios. Experimental results on a large multi-modal dataset demonstrate that our method achieves promising performance compared with state-of-the-art methods. The AW3M framework is also tested on another independent B-mode dataset to prove its efficacy in general settings. Results show that the proposed recovery block can learn from the joint distribution of multi-modal features to further boost the classification accuracy given single-modality input during the test.
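A toy version of learned stream weighting is sketched below in PyTorch: per-modality feature vectors are fused by a softmax-weighted sum before classification. AW3M itself learns its weights with reinforcement learning and adds a recovery block, neither of which is shown here.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fuse per-modality feature vectors with learnable softmax weights."""
    def __init__(self, n_modalities, feat_dim, n_classes=2):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_modalities))  # one weight per stream
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, feats):                  # feats: (batch, n_modalities, feat_dim)
        w = torch.softmax(self.logits, dim=0)  # convex combination of streams
        fused = (w[None, :, None] * feats).sum(dim=1)
        return self.head(fused)

model = WeightedFusion(n_modalities=4, feat_dim=128)
out = model(torch.randn(8, 4, 128))            # class logits for 8 samples
```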
Affiliation(s)
- Ruobing Huang
- Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Zehui Lin
- Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Haoran Dou
- Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Jian Wang
- Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Juzheng Miao
- School of Biological Sciences and Medical Engineering, Southeast University, China
| | - Guangquan Zhou
- School of Biological Sciences and Medical Engineering, Southeast University, China
| | - Xiaohong Jia
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, China
| | - Wenwen Xu
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, China
| | - Zihan Mei
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, China
| | - Yijie Dong
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, China
| | - Xin Yang
- Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Jianqiao Zhou
- Department of Ultrasound Medicine, Ruijin Hospital, School of Medicine, Shanghai Jiaotong University, China.
| | - Dong Ni
- Medical UltraSound Image Computing (MUSIC) Lab, School of Biomedical Engineering, Health Science Center, Shenzhen University, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China.
| |
|
32
|
Saha P, Mukherjee D, Singh PK, Ahmadian A, Ferrara M, Sarkar R. GraphCovidNet: A graph neural network based model for detecting COVID-19 from CT scans and X-rays of chest. Sci Rep 2021; 11:8304. [PMID: 33859222 PMCID: PMC8050058 DOI: 10.1038/s41598-021-87523-1] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Accepted: 03/29/2021] [Indexed: 02/08/2023] Open
Abstract
COVID-19, a viral infection that originated in Wuhan, China, has spread across the world and has currently affected over 115 million people. Although the vaccination process has already started, reaching sufficient availability will take time. Considering the impact of this widespread disease, many research attempts have been made by computer scientists to screen for COVID-19 from chest X-rays (CXRs) or computed tomography (CT) scans. To this end, we have proposed GraphCovidNet, a Graph Isomorphism Network (GIN)-based model used to detect COVID-19 from CT scans and CXRs of affected patients. Our proposed model accepts input data only in the form of graphs, as we follow a GIN-based architecture. Initially, pre-processing is performed to convert image data into an undirected graph so that only the edges are considered instead of the whole image. Our proposed GraphCovidNet model is evaluated on four standard datasets: the SARS-COV-2 Ct-Scan dataset, the COVID-CT dataset, a combination of the covid-chestxray-dataset and Chest X-Ray Images (Pneumonia) dataset, and the CMSC-678-ML-Project dataset. The model shows an impressive accuracy of 99% for all the datasets, and its prediction capability becomes 100% accurate for the binary classification problem of detecting COVID-19 scans. Source code of this work can be found at GitHub-link.
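The graph-construction preprocessing can be approximated as below: gradient-strong (edge-like) pixels become nodes, and 4-neighbouring edge pixels are connected in an undirected graph. This is an assumed reading of the pipeline, written with networkx, not the authors' code.

```python
import numpy as np
import networkx as nx

def image_to_graph(img, edge_threshold=30):
    """Undirected graph whose nodes are strong-gradient pixels and whose
    edges connect 4-neighbouring such pixels."""
    gy, gx = np.gradient(img.astype(float))
    mask = np.hypot(gx, gy) > edge_threshold          # keep edge-like pixels only
    g = nx.Graph()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            if not mask[i, j]:
                continue
            g.add_node((i, j), intensity=float(img[i, j]))
            for di, dj in ((1, 0), (0, 1)):           # undirected 4-neighbourhood
                ni, nj = i + di, j + dj
                if ni < h and nj < w and mask[ni, nj]:
                    g.add_edge((i, j), (ni, nj))
    return g

graph = image_to_graph(np.random.randint(0, 256, size=(64, 64)))
```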
Affiliation(s)
- Pritam Saha
- Department of Electrical Engineering, Jadavpur University, Kolkata, 700032, India
| | - Debadyuti Mukherjee
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
| | - Pawan Kumar Singh
- Department of Information Technology, Jadavpur University, Kolkata, 700106, India
| | - Ali Ahmadian
- Institute of IR 4.0, The National University of Malaysia, Bangi, 43600 UKM, Selangor, Malaysia.
- School of Mathematical Sciences, College of Science and Technology, Wenzhou-Kean University, Wenzhou, China.
| | - Massimiliano Ferrara
- ICRIOS-The Invernizzi Centre for Research in Innovation, Organization, Strategy and Entrepreneurship, Department of Management and Technology, Bocconi University, Via Sarfatti, 25, 20136, Milan (MI), Italy
| | - Ram Sarkar
- Department of Computer Science and Engineering, Jadavpur University, Kolkata, 700032, India
| |
|
33
|
Chen W, Li W. Definition and Usage of Texture Feature for Biological Sequence. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2021; 18:773-776. [PMID: 32070991 DOI: 10.1109/tcbb.2020.2973084] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
In recent years, sequencing technology has developed rapidly, producing a large volume of biological sequence data. Because of their importance, there have been many studies on biological sequences. However, there is still a lack of an effective quantitative method for defining and calculating texture features of biological sequences. Texture is an important visual feature, generally used to describe the spatial arrangement of intensities in images. Here, combining the digital coding of biological sequences with the calculation methods of image texture features, we defined the texture features of biological sequences and designed a method to compute them. We applied this method to the quantification and analysis of DNA sequence features. Using these quantified features, we can compute the similarity distance matrix of DNA sequences and construct phylogenetic relationships based on clustering of the quantified features. The method can be applied to analyze any biological sequence: all biological sequences can be digitally coded, and their texture features can be calculated in this way. This is a novel study of biological sequence texture features that may open the way to quantitative, mathematical calculation of such features.
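One plausible realization of the idea, coding nucleotides as integers, forming a co-occurrence matrix at a given lag, and computing classic texture summaries, is sketched below; the coding scheme is an assumption made for illustration.

```python
import numpy as np

CODE = {"A": 0, "C": 1, "G": 2, "T": 3}   # hypothetical nucleotide coding

def sequence_glcm(seq, lag=1):
    """Normalized co-occurrence matrix of nucleotide codes at a given lag."""
    codes = np.array([CODE[b] for b in seq if b in CODE])
    m = np.zeros((4, 4))
    for a, b in zip(codes[:-lag], codes[lag:]):
        m[a, b] += 1
    return m / m.sum()

def texture_features(m):
    """Classic GLCM summaries: energy (ASM), contrast, homogeneity."""
    i, j = np.indices(m.shape)
    return {"energy": (m ** 2).sum(),
            "contrast": ((i - j) ** 2 * m).sum(),
            "homogeneity": (m / (1.0 + np.abs(i - j))).sum()}

feats = texture_features(sequence_glcm("ACGTTGCAACGTAGCTAGGATCC"))
```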
|
34
|
Chandra TB, Verma K, Singh BK, Jain D, Netam SS. Coronavirus disease (COVID-19) detection in Chest X-Ray images using majority voting based classifier ensemble. EXPERT SYSTEMS WITH APPLICATIONS 2021; 165:113909. [PMID: 32868966 PMCID: PMC7448820 DOI: 10.1016/j.eswa.2020.113909] [Citation(s) in RCA: 122] [Impact Index Per Article: 30.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/06/2020] [Revised: 08/09/2020] [Accepted: 08/19/2020] [Indexed: 05/02/2023]
Abstract
Novel coronavirus disease (nCOVID-19) is among the most challenging problems facing the world. The disease is caused by severe acute respiratory syndrome coronavirus-2 (SARS-COV-2), leading to high morbidity and mortality worldwide. Studies reveal that infected patients exhibit distinct radiographic visual characteristics along with fever, dry cough, fatigue, dyspnea, etc. Chest X-ray (CXR) is one of the important, non-invasive clinical adjuncts that play an essential role in the detection of such visual responses associated with SARS-COV-2 infection. However, the limited availability of expert radiologists to interpret CXR images and the subtle appearance of the disease's radiographic responses remain the biggest bottlenecks in manual diagnosis. In this study, we present an automatic COVID screening (ACoS) system that uses radiomic texture descriptors extracted from CXR images to identify normal, suspected, and nCOVID-19-infected patients. The proposed system uses a two-phase classification approach (normal vs. abnormal and nCOVID-19 vs. pneumonia) with a majority-vote-based classifier ensemble of five benchmark supervised classification algorithms. The training-testing and validation of the ACoS system were performed using 2088 (696 normal, 696 pneumonia, and 696 nCOVID-19) and 258 (86 images of each category) CXR images, respectively. The obtained validation results for phase I (accuracy (ACC) = 98.062%, area under curve (AUC) = 0.956) and phase II (ACC = 91.329%, AUC = 0.831) show the promising performance of the proposed system. Further, Friedman post-hoc multiple comparisons and z-test statistics reveal that the results of the ACoS system are statistically significant. Finally, the obtained performance is compared with existing state-of-the-art methods.
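A majority-vote ensemble of five supervised classifiers is straightforward with scikit-learn's VotingClassifier; the five base learners below are common benchmarks chosen for illustration and may differ from the study's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Placeholder stand-in for radiomic texture descriptors from CXR images
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(random_state=0)),
                ("nb", GaussianNB()),
                ("knn", KNeighborsClassifier()),
                ("dt", DecisionTreeClassifier(random_state=0))],
    voting="hard",               # majority vote over the five base learners
)
ensemble.fit(X, y)
pred = ensemble.predict(X[:5])
```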
Affiliation(s)
- Tej Bahadur Chandra
- Department of Computer Applications, National Institute of Technology Raipur, Chhattisgarh, India
| | - Kesari Verma
- Department of Computer Applications, National Institute of Technology Raipur, Chhattisgarh, India
| | - Bikesh Kumar Singh
- Department of Biomedical Engineering, National Institute of Technology Raipur, Chhattisgarh, India
| | - Deepak Jain
- Department of Radiodiagnosis, Pt. Jawahar Lal Nehru Memorial Medical College, Raipur, Chhattisgarh, India
| | - Satyabhuwan Singh Netam
- Department of Radiodiagnosis, Pt. Jawahar Lal Nehru Memorial Medical College, Raipur, Chhattisgarh, India
| |
|
35
|
Heidari M, Lakshmivarahan S, Mirniaharikandehei S, Danala G, Maryada SKR, Liu H, Zheng B. Applying a Random Projection Algorithm to Optimize Machine Learning Model for Breast Lesion Classification. IEEE Trans Biomed Eng 2021; 68:2764-2775. [PMID: 33493108 DOI: 10.1109/tbme.2021.3054248] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
Abstract
OBJECTIVE Computer-aided diagnosis (CAD) schemes for medical images usually compute a large number of image features, which creates the challenge of identifying a small, optimal feature vector for building robust machine learning models. The objective of this study is to investigate the feasibility of applying a random projection algorithm (RPA) to build an optimal feature vector from the initial CAD-generated feature pool and thereby improve the performance of the machine learning model. METHODS We assembled a retrospective dataset involving 1,487 mammogram cases, of which 644 have confirmed malignant mass lesions and 843 have benign lesions. A CAD scheme is first applied to segment mass regions and initially compute 181 features. Then, support vector machine (SVM) models embedded with several feature dimensionality reduction methods are built to predict the likelihood of lesions being malignant. All SVM models are trained and tested using a leave-one-case-out cross-validation method. The SVM generates a likelihood score for each segmented mass region depicted on a one-view mammogram. By fusing the two scores of the same mass depicted on two-view mammograms, a case-based likelihood score is also evaluated. RESULTS Compared with principal component analysis, nonnegative matrix factorization, and Chi-squared methods, the SVM embedded with RPA yielded a significantly higher case-based lesion classification performance, with an area under the ROC curve of 0.84 ± 0.01 (p < 0.02). CONCLUSION The study demonstrates that RPA is a promising method for generating optimal feature vectors and improving SVM performance. SIGNIFICANCE This study presents a new method for developing CAD schemes with significantly higher and more robust performance.
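Random projection as a dimensionality-reduction front end for an SVM can be sketched directly with scikit-learn; the synthetic 181-feature pool mirrors the abstract's setup, and 10-fold CV stands in for the study's leave-one-case-out protocol.

```python
from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.random_projection import GaussianRandomProjection
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder for the 181-feature CAD pool described in the abstract
X, y = make_classification(n_samples=500, n_features=181, n_informative=30,
                           random_state=0)

model = make_pipeline(GaussianRandomProjection(n_components=30, random_state=0),
                      SVC())
scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
```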
|
36
|
Osapoetra LO, Chan W, Tran W, Kolios MC, Czarnota GJ. Comparison of methods for texture analysis of QUS parametric images in the characterization of breast lesions. PLoS One 2020; 15:e0244965. [PMID: 33382837 PMCID: PMC7775053 DOI: 10.1371/journal.pone.0244965] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2020] [Accepted: 12/18/2020] [Indexed: 01/06/2023] Open
Abstract
PURPOSE Accurate and timely diagnosis of breast carcinoma is crucial because of its high incidence and high morbidity. Screening can improve overall prognosis by detecting the disease early. Biopsy remains the gold standard for pathological confirmation of malignancy and tumour grading. The development of diagnostic imaging techniques as an alternative for the rapid and accurate characterization of breast masses is therefore needed. Quantitative ultrasound (QUS) spectroscopy is a modality well suited for this purpose. This study was carried out to evaluate different texture analysis methods applied to QUS spectral parametric images for the characterization of breast lesions. METHODS Parametric images of mid-band fit (MBF), spectral slope (SS), spectral intercept (SI), average scatterer diameter (ASD), and average acoustic concentration (AAC) were determined using QUS spectroscopy in 193 patients with breast lesions. Texture methods were used to quantify heterogeneities of the parametric images. Three statistical texture analysis approaches, the Gray Level Co-occurrence Matrix (GLCM), Gray Level Run-Length Matrix (GRLM), and Gray Level Size Zone Matrix (GLSZM) methods, were evaluated. QUS and texture parameters were determined from both the tumour core and a 5-mm tumour margin and were compared to histopathological analysis in order to classify breast lesions as either benign or malignant. We developed a diagnostic model using different classification algorithms, including linear discriminant analysis (LDA), k-nearest neighbours (KNN), a support vector machine with radial basis function kernel (SVM-RBF), and an artificial neural network (ANN). Model performance was evaluated using leave-one-out cross-validation (LOOCV) and hold-out validation. RESULTS Classifier performance ranged from 73% to 91% in terms of accuracy, depending on tumour margin inclusion and classifier methodology. Utilizing information from the tumour core alone, the ANN achieved the best classification performance of 93% sensitivity, 88% specificity, 91% accuracy, and 0.95 AUC using QUS parameters and their GLSZM texture features. CONCLUSIONS A QUS-based framework and texture analysis methods enabled classification of breast lesions with >90% accuracy. The results suggest that optimizing the method for extracting discriminative textural features from QUS spectral parametric images can improve classification performance. Evaluation of the proposed technique on a larger cohort of patients with a proper validation technique demonstrated the robustness and generalization of the approach.
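Leave-one-out cross-validation of a texture-feature classifier takes only a few lines with scikit-learn; the features and the KNN choice here are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder for QUS/texture features of 193 lesions (benign vs. malignant)
X, y = make_classification(n_samples=193, n_features=25, random_state=0)

# Each lesion is held out once; the mean over folds is the LOOCV accuracy
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), X, y,
                      cv=LeaveOneOut()).mean()
```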
Affiliation(s)
- Laurentius O. Osapoetra
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
| | - William Chan
- University of Waterloo, Toronto, Ontario, Canada
| | - William Tran
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Evaluative Clinical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
| | | | - Gregory J. Czarnota
- Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada
- Department of Radiation Oncology, University of Toronto, Toronto, Ontario, Canada
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Department of Physics, Ryerson University, Toronto, Ontario, Canada
| |
|
37
|
Breast Tumor Classification in Ultrasound Images Using Combined Deep and Handcrafted Features. SENSORS 2020; 20:s20236838. [PMID: 33265900 PMCID: PMC7730057 DOI: 10.3390/s20236838] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/23/2020] [Revised: 11/19/2020] [Accepted: 11/22/2020] [Indexed: 12/24/2022]
Abstract
This study aims to enable effective breast ultrasound image classification by combining deep features with conventional handcrafted features to classify tumors. In particular, the deep features are extracted from a pre-trained convolutional neural network model, namely the VGG19 model, at six different extraction levels. The deep features extracted at each level are analyzed using a feature selection algorithm to identify the deep feature combination that achieves the highest classification performance. Furthermore, the extracted deep features are combined with handcrafted texture and morphological features and processed using feature selection to investigate the possibility of improving the classification performance. The cross-validation analysis, which is performed using 380 breast ultrasound images, shows that the best combination of deep features is obtained using a feature set, denoted CONV features, that includes convolution features extracted from all convolution blocks of the VGG19 model. In particular, the CONV features achieved mean accuracy, sensitivity, and specificity values of 94.2%, 93.3%, and 94.9%, respectively. The analysis also shows that the performance of the CONV features degrades substantially when the feature selection algorithm is not applied. The classification performance of the CONV features is improved by combining these features with handcrafted morphological features to achieve mean accuracy, sensitivity, and specificity values of 96.1%, 95.7%, and 96.3%, respectively. Furthermore, the cross-validation analysis demonstrates that the CONV features and the combined CONV and morphological features outperform the handcrafted texture and morphological features as well as the fine-tuned VGG19 model. The generalization performance of the CONV features and the combined CONV and morphological features is demonstrated by training on the 380 breast ultrasound images and testing on another dataset that includes 163 images. The results suggest that the combined CONV and morphological features can achieve effective breast ultrasound image classification that increases the capability of detecting malignant tumors and reduces the potential of misclassifying benign tumors.
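Tapping deep features at several levels of a pre-trained VGG19 can be sketched with Keras as below; weights are left uninitialized to keep the sketch self-contained, and global average pooling is one assumed way to vectorize each block's feature maps.

```python
import numpy as np
from tensorflow.keras.applications import VGG19
from tensorflow.keras.models import Model

# weights="imagenet" in practice; None here keeps the sketch offline
base = VGG19(weights=None, include_top=False, input_shape=(224, 224, 3))
taps = ["block1_pool", "block2_pool", "block3_pool",
        "block4_pool", "block5_pool"]            # one tap per convolution block
extractor = Model(base.input, [base.get_layer(n).output for n in taps])

img = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder image
maps = extractor(img)
# Global-average-pool each block's maps into one deep-feature vector per level
features = [np.asarray(m).mean(axis=(1, 2)).ravel() for m in maps]
```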
|
38
|
Osapoetra LO, Sannachi L, DiCenzo D, Quiaoit K, Fatima K, Czarnota GJ. Breast lesion characterization using Quantitative Ultrasound (QUS) and derivative texture methods. Transl Oncol 2020; 13:100827. [PMID: 32663657 PMCID: PMC7358267 DOI: 10.1016/j.tranon.2020.100827] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Accepted: 06/12/2020] [Indexed: 12/19/2022] Open
Abstract
PURPOSE Accurate and timely diagnosis of breast cancer is extremely important because of its high incidence and high morbidity. Early diagnosis of breast cancer through screening can improve overall prognosis. Currently, biopsy remains the gold standard for tumor pathological confirmation. The development of diagnostic imaging techniques for rapid and accurate characterization of breast lesions is required. We aim to evaluate the usefulness of texture-derivate features of QUS spectral parametric images for non-invasive characterization of breast lesions. METHODS QUS spectroscopy was used to determine parametric images of mid-band fit (MBF), spectral slope (SS), spectral intercept (SI), average scatterer diameter (ASD), and average acoustic concentration (AAC) in 204 patients with suspicious breast lesions. Subsequently, texture analysis techniques were used to generate texture maps from the parametric images to quantify their heterogeneities. Further, a second-pass texture analysis was applied to obtain texture-derivate features. QUS parameters, texture parameters, and texture-derivate parameters were determined from both the tumor core and a 5-mm tumor margin and were compared to histopathological analysis in order to develop a diagnostic model for classifying breast lesions as either benign or malignant. Both leave-one-out and hold-out cross-validation were used to evaluate the performance of the diagnostic model. Three standard classification algorithms, linear discriminant analysis (LDA), k-nearest neighbours (KNN), and a support vector machine with radial basis function kernel (SVM-RBF), were evaluated. RESULTS Core and margin information with the SVM-RBF attained the best classification performance of 90% sensitivity, 92% specificity, 91% accuracy, and 0.93 AUC utilizing QUS parameters and their texture derivatives, evaluated using leave-one-out cross-validation. Implementation of hold-out cross-validation using a combination of core and margin information and the SVM-RBF achieved an average accuracy and AUC of 88% and 0.92, respectively. CONCLUSIONS The QUS-based framework and derivative texture methods enable accurate classification of breast lesions. Evaluation of the proposed technique on a large cohort using hold-out cross-validation demonstrates its robustness and generalization.
Affiliation(s)
- Laurentius O Osapoetra
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada; Departments of Medical Biophysics, University of Toronto, Toronto, ON, Canada
| | - Lakshmanan Sannachi
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada; Departments of Medical Biophysics, University of Toronto, Toronto, ON, Canada
| | - Daniel DiCenzo
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada
| | - Karina Quiaoit
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada
| | - Kashuf Fatima
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada
| | - Gregory J Czarnota
- Physical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada; Departments of Medical Biophysics, University of Toronto, Toronto, ON, Canada; Department of Radiation Oncology, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Faculty of Medicine, University of Toronto, Toronto, ON, Canada.
| |
|
39
|
Thirunavukkarasu U, Umapathy S, Janardhanan K, Thirunavukkarasu R. A computer aided diagnostic method for the evaluation of type II diabetes mellitus in facial thermograms. Phys Eng Sci Med 2020; 43:871-888. [PMID: 32524377 DOI: 10.1007/s13246-020-00886-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2019] [Accepted: 06/02/2020] [Indexed: 12/11/2022]
Abstract
Almost 50% of individuals around the globe are unaware of diabetes and its complications, so early screening for diabetes is very important. To overcome difficulties such as the pain and discomfort caused to subjects by biochemical diagnostic procedures, infrared thermography offers a diagnostic technique that measures skin surface temperature noninvasively. The aim of our proposed study was thus to evaluate type II diabetes in facial thermograms and to develop a computer-aided diagnosis (CAD) system to classify normal and diabetic subjects. The facial thermograms (n = 160), including males (n = 79) and females (n = 81), were captured using a FLIR A305sc infrared thermal camera. Haralick textural features were extracted from the facial thermograms based on the gray level co-occurrence matrix algorithm. The statistical temperature parameters TROI, TMAX, and TTOT exhibited a significant negative correlation with HbA1c (r = -0.421, -0.411, -0.242, p < 0.01 (TROI); r = -0.259, p < 0.01 (TMAX); and r = -0.173, p < 0.05 (TTOT)). An optimal regression equation was constructed using the significant facial variables and standard HbA1c values. The model achieved sensitivity, specificity, and accuracy rates of 91.42%, 88.57%, and 90%, respectively. The anthropometric variables, extracted textural features, and temperature parameters were fed into the classifiers and their performances were compared. The Support Vector Machine outperformed the Linear Discriminant Analysis (84.37%) and k-Nearest Neighbor (81.25%) classifiers with a maximum accuracy rate of 89.37%. The developed CAD system achieved an accuracy rate of 89.37% for the classification of diabetes. Thus, facial thermography could be used as a basic non-invasive prognostic tool for the evaluation of type II diabetes mellitus.
Affiliation(s)
- Usharani Thirunavukkarasu
- Department of Biomedical Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, 603203, India
| | - Snekhalatha Umapathy
- Department of Biomedical Engineering, SRM Institute of Science and Technology, Kattankulathur, Tamil Nadu, 603203, India.
| | - Kumar Janardhanan
- Department of General Medicine, SRM Hospital & Research Centre, Tamil Nadu, Kattankulathur, 603203, India
| | | |
|
40
|
DNA sequence similarity analysis using image texture analysis based on first-order statistics. J Mol Graph Model 2020; 99:107603. [PMID: 32442904 DOI: 10.1016/j.jmgm.2020.107603] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/17/2019] [Revised: 03/13/2020] [Accepted: 03/23/2020] [Indexed: 01/25/2023]
Abstract
Similarity is one of the key processes of DNA sequence analysis in computational biology and bioinformatics. In nearly all research that explores evolutionary relationships, gene function analysis, protein structure prediction, or sequence retrieval, it is necessary to perform similarity calculations. One major task in alignment-free DNA sequence similarity calculation is to develop novel mathematical descriptors for DNA sequences. In this paper, we present a novel approach to DNA sequence similarity analysis using similarity calculations between texture images. Texture analysis methods, a subset of digital image processing methods, are used here on the assumption that these calculations can be adapted to alignment-free DNA sequence similarity analysis. Gray-level textures were created from the values assigned to the nucleotides in the DNA sequences. Similarity calculations were made between these textures using histogram-based texture analyses based on first-order statistics. We obtained texture features for three DNA data sets of different lengths and calculated the similarity matrices. The phylogenetic relationships revealed by our method show that our trees are similar to the results of the MEGA software, which is based on sequence alignment. Our findings show that texture analysis metrics can be used to characterize DNA sequences.
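A minimal sketch of the idea follows: nucleotides are mapped to hypothetical gray levels, first-order (histogram) statistics are computed, and sequences are compared by the distance between their feature vectors; the coding and the Euclidean distance are illustrative assumptions.

```python
import numpy as np
from scipy.stats import skew, kurtosis

CODE = {"A": 64, "C": 128, "G": 192, "T": 255}   # hypothetical gray-level coding

def first_order_features(seq):
    """Histogram-based first-order statistics of a nucleotide 'texture'."""
    v = np.array([CODE[b] for b in seq if b in CODE], dtype=float)
    return np.array([v.mean(), v.std(), skew(v), kurtosis(v)])

def similarity(f1, f2):
    """Euclidean distance between feature vectors (smaller = more similar)."""
    return float(np.linalg.norm(f1 - f2))

d = similarity(first_order_features("ACGTACGGTCA"),
               first_order_features("ACGTTCGGTAA"))
```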
|
41
|
Classification of Breast Ultrasound Tomography by Using Textural Analysis. IRANIAN JOURNAL OF RADIOLOGY 2020. [DOI: 10.5812/iranjradiol.91749] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Background: Ultrasound imaging has become one of the most widely utilized adjunct tools in breast cancer screening due to its advantages. Computer-aided detection for breast ultrasound is developing rapidly on the basis of significant features extracted from images. Objectives: The main aim was to identify features of breast ultrasound images that can facilitate reasonable classification of ultrasound images between malignant and benign lesions. Patients and Methods: This was a retrospective study in which 85 cases (35 malignant [positive group] and 50 benign [negative group], with diagnostic reports) with ultrasound images were collected. Regions of interest (ROI) were manually selected on the B-mode ultrasound images for feature estimation. Then, a fractal dimension (FD) image was generated from the original ROI using the box-counting method. Features including the mean, standard deviation, skewness, and kurtosis were extracted from both the FD and ROI images. The extracted features were tested for significance by t-test, receiver operating characteristic (ROC) analysis, and the Kappa coefficient. Results: The statistical analysis revealed that the mean texture of the images performed best in differentiating benign from malignant tumors. As determined by the ROC analysis, the appropriate qualitative values for the mean and the LR model were 0.85 and 0.5, respectively. The sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), and Kappa for the mean were 0.77, 0.84, 0.81, 0.77, 0.84, and 0.61, respectively. Conclusion: The presented method was efficient in classifying malignant and benign tumors using image textures. Future studies on breast ultrasound texture analysis could focus on investigations of edge detection, texture estimation, classification models, and image features.
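The box-counting estimate of fractal dimension can be sketched generically as below: count the boxes of side s that contain foreground, then fit log N(s) against log(1/s); the binary mask is synthetic.

```python
import numpy as np

def box_counting_fd(mask):
    """Fractal dimension of a binary image via box counting."""
    n = 2 ** int(np.floor(np.log2(min(mask.shape))))
    img = mask[:n, :n].astype(bool)           # crop to a power-of-two square
    sizes, counts = [], []
    s = n
    while s >= 1:
        view = img.reshape(n // s, s, n // s, s)
        counts.append(view.any(axis=(1, 3)).sum())  # boxes with foreground
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

fd = box_counting_fd(np.random.rand(128, 128) > 0.7)
```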
|
42
|
Abstract
This article presents a real-time approach for classification of burn depth based on B-mode ultrasound imaging. A grey-level co-occurrence matrix (GLCM) computed from ultrasound images of the tissue is employed to construct the textural feature set, and the classification is performed using a nonlinear support vector machine and kernel Fisher discriminant analysis. Leave-one-out cross-validation is used for the independent assessment of the classifiers. The model is tested for pairwise binary classification of four burn conditions in ex vivo porcine skin tissue: (i) 200 °F for 10 s, (ii) 200 °F for 30 s, (iii) 450 °F for 10 s, and (iv) 450 °F for 30 s. The average classification accuracy for pairwise separation is 99% with just over 30 samples in each burn group, and the average multiclass classification accuracy is 93%. The results highlight that the ultrasound imaging-based burn classification approach, in conjunction with the GLCM texture features, provides an accurate assessment of altered tissue characteristics with relatively moderate sample sizes, which is often the case with experimental and clinical datasets. The proposed method is shown to have the potential to assist with the real-time clinical assessment of burn degrees, particularly for discriminating between superficial and deep second-degree burns, which is challenging in clinical practice.
|
43
|
A Novel Computer-Aided-Diagnosis System for Breast Ultrasound Images Based on BI-RADS Categories. APPLIED SCIENCES-BASEL 2020. [DOI: 10.3390/app10051830] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Breast ultrasound is not only one of the major modalities for breast tissue imaging but also an important method in breast tumor screening. It is non-radiative, non-invasive, harmless, simple, and low in cost. The American College of Radiology (ACR) proposed the Breast Imaging Reporting and Data System (BI-RADS) to evaluate breast lesion severities far beyond traditional diagnoses, according to five criterion categories of mass composition: shape, orientation, margin, echo pattern, and posterior features. However, problems such as intensity differences and differing resolutions in image acquisition among types of ultrasound imaging modalities mean that clinicians cannot always accurately identify the BI-RADS categories or disease severities. To this end, this article adopted three different brands of ultrasound scanners to acquire the breast images used as our experimental samples. The breast lesion was detected on the original image using preprocessing, image segmentation, etc. The severity of the breast tumor was then evaluated from the features of the breast lesion via our proposed classifiers according to the BI-RADS standard, rather than through a traditional severity assessment that merely labels lesions benign or malignant. In this work, we mainly focused on BI-RADS categories 2-5 after the segmentation stage, in line with clinical practice. Moreover, several features related to lesion severities based on the selected BI-RADS categories were fed into three machine learning classifiers, namely a Support Vector Machine (SVM), Random Forest (RF), and Convolutional Neural Network (CNN), combined with feature selection, to develop a multi-class assessment of breast tumor severity based on BI-RADS. Experimental results show that the proposed CAD system based on BI-RADS can obtain identification accuracies with SVM, RF, and CNN of 80.00%, 77.78%, and 85.42%, respectively. We also validated the performance and adaptability of the classification using different ultrasound scanners. Results also indicate that F-score evaluations based on the CNN exceed 75% (i.e., prominent adaptability) when samples were tested on various BI-RADS categories.
|
44
|
Qian X, Zhang B, Liu S, Wang Y, Chen X, Liu J, Yang Y, Chen X, Wei Y, Xiao Q, Ma J, Shung KK, Zhou Q, Liu L, Chen Z. A combined ultrasonic B-mode and color Doppler system for the classification of breast masses using neural network. Eur Radiol 2020; 30:3023-3033. [PMID: 32006174 DOI: 10.1007/s00330-019-06610-0] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Revised: 11/19/2019] [Accepted: 12/06/2019] [Indexed: 12/11/2022]
Abstract
OBJECTIVES To develop a dual-modal neural network model to characterize ultrasound (US) images of breast masses. MATERIALS AND METHODS A combined US B-mode and color Doppler neural network model was developed to classify US images of the breast. Three datasets with breast masses were originally detected and interpreted by 20 experienced radiologists according to the Breast Imaging-Reporting and Data System (BI-RADS) lexicon ((1) training set, 103,212 masses from 45,433 + 12,519 patients; (2) held-out validation set, 2,748 masses from 1,197 + 395 patients; (3) test set, 605 masses from 337 + 78 patients). The neural network was first trained on the training set. Then, the trained model was tested on the held-out validation set to evaluate agreement on BI-RADS category between the model and the radiologists. In addition, the model and a reader study of 10 radiologists were applied to the test set with biopsy-proven results. To evaluate the performance of the model in benign or malignant classification, receiver operating characteristic curves, sensitivities, and specificities were compared. RESULTS The trained dual-modal model showed favorable agreement with the assessment performed by the radiologists (κ = 0.73; 95% confidence interval, 0.71-0.75) in classifying breast masses into four BI-RADS categories in the validation set. For the binary categorization of benign or malignant breast masses in the test set, the dual-modal model achieved an area under the ROC curve (AUC) of 0.982, while the readers scored an AUC of 0.948 in terms of the ROC convex hull. CONCLUSION The dual-modal model can be used to assess breast masses at a level comparable to that of an experienced radiologist. KEY POINTS • A neural network model based on ultrasonic imaging can classify breast masses into different Breast Imaging-Reporting and Data System categories according to the probability of malignancy. • A combined ultrasonic B-mode and color Doppler neural network model achieved a high level of agreement with the readings of an experienced radiologist and has the potential to automate the routine characterization of breast masses.
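Agreement between model and reader BI-RADS assignments is measured with Cohen's kappa, which can be computed as below; the category vectors are hypothetical.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical BI-RADS categories (2-5) assigned by the model and a radiologist
model_cat  = [3, 4, 4, 5, 2, 3, 4, 5, 3, 2]
reader_cat = [3, 4, 5, 5, 2, 3, 4, 4, 3, 2]
kappa = cohen_kappa_score(model_cat, reader_cat)  # chance-corrected agreement
```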
Affiliation(s)
- Xuejun Qian
- Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA.,Department of Biomedical Engineering and NIH Resource Center for Medical Ultrasonic Transducer Technology, University of Southern California, Los Angeles, CA, 90089, USA
| | - Bo Zhang
- Ultrasound Imaging Department, Xiangya Hospital of Central South University, Changsha, 410083, Hunan, China
| | - Shaoqiang Liu
- School of Information Science and Engineering, Central South University, Changsha, 410083, Hunan, China
| | - Yueai Wang
- Ultrasound Imaging Department, First Affiliated Hospital of Hunan University of Traditional Chinese Medicine, Changsha, 410007, Hunan, China
| | - Xiaoqiong Chen
- Ultrasound Imaging Department, First Affiliated Hospital of Hunan University of Traditional Chinese Medicine, Changsha, 410007, Hunan, China
| | - Jingyuan Liu
- Blood Testing Center, First Affiliated Hospital of Hunan University of Traditional Chinese Medicine, Changsha, 410007, Hunan, China
| | - Yuzheng Yang
- The Middle School Attached to Human Normal University, Changsha, 410006, Hunan, China
| | - Xiang Chen
- Xiangya Hospital of Central South University, Aluminium Science & Technology Building, Changsha, 410083, Hunan, China
| | - Yi Wei
- Arvato Systems Co, Ltd, Xuhui, Shanghai, 20072, China
| | - Qisen Xiao
- School of Telecommunications Engineering, Xidian University, Xi'an, 710126, Shaanxi, China
| | - Jie Ma
- Department of Materials Science, University of Southern California, Los Angeles, CA, 90089, USA
| | - K Kirk Shung
- Department of Biomedical Engineering and NIH Resource Center for Medical Ultrasonic Transducer Technology, University of Southern California, Los Angeles, CA, 90089, USA
| | - Qifa Zhou
- Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA.,Department of Biomedical Engineering and NIH Resource Center for Medical Ultrasonic Transducer Technology, University of Southern California, Los Angeles, CA, 90089, USA
| | - Lifang Liu
- Department of Breast Surgery, First Affiliated Hospital of Hunan University of Traditional Chinese Medicine, Changsha, 410007, Hunan, China
| | - Zeyu Chen
- Keck School of Medicine, University of Southern California, Los Angeles, CA, 90033, USA. .,Xiangya Hospital of Central South University, Aluminium Science & Technology Building, Changsha, 410083, Hunan, China.
| |
|
45
|
Fan M, Yuan W, Zhao W, Xu M, Wang S, Gao X, Li L. Joint Prediction of Breast Cancer Histological Grade and Ki-67 Expression Level Based on DCE-MRI and DWI Radiomics. IEEE J Biomed Health Inform 2019; 24:1632-1642. [PMID: 31794406 DOI: 10.1109/jbhi.2019.2956351] [Citation(s) in RCA: 70] [Impact Index Per Article: 11.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
OBJECTIVE Histologic grade and Ki-67 proliferation status are important clinical indicators for breast cancer prognosis and treatment. The purpose of this study is to improve the prediction accuracy of these clinical indicators based on tumor radiomic analysis. METHODS We jointly predicted Ki-67 and tumor grade with a multitask learning framework by separately utilizing radiomics from the tumor MRI series. Additionally, we showed how multitask learning models (MTLs) can be extended to combine radiomics from the MRI series for better prediction, based on the assumption that features from different sources of images share common patterns while providing complementary information. Tumor radiomic analysis was performed with morphological, statistical, and textural features extracted from the DWI and dynamic contrast-enhanced MRI (DCE-MRI) series on the precontrast and subtraction images, respectively. RESULTS Joint prediction of Ki-67 status and tumor grade on MR images using the MTL achieved performance improvements over single-task-based predictive models. Similarly, for the prediction tasks of Ki-67 and tumor grade, the MTL for combined precontrast and apparent diffusion coefficient (ADC) images achieved AUCs of 0.811 and 0.816, which were significantly better than those of the single-task-based model, with p values of 0.005 and 0.017, respectively. CONCLUSION Mapping MRI radiomics to two related clinical indicators improves prediction performance for both Ki-67 expression level and tumor grade. SIGNIFICANCE Joint prediction of indicators by multitask learning that combines correlations of MRI radiomics is important for optimal tumor therapy and treatment, because clinical decisions are made by integrating multiple clinical indicators.
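The joint-prediction idea can be illustrated with a minimal two-head network in PyTorch: a shared trunk feeds separate Ki-67 and grade heads, and the two cross-entropy losses are summed. This is a generic multitask stand-in, not the study's architecture, and the radiomic inputs are random placeholders.

```python
import torch
import torch.nn as nn

class TwoTaskHead(nn.Module):
    """Shared trunk with two classification heads (Ki-67 status, tumor grade)."""
    def __init__(self, in_dim=64, hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.ki67_head = nn.Linear(hidden, 2)
        self.grade_head = nn.Linear(hidden, 2)

    def forward(self, x):
        z = self.trunk(x)
        return self.ki67_head(z), self.grade_head(z)

model = TwoTaskHead()
x = torch.randn(16, 64)                 # radiomic feature vectors (placeholder)
y_ki67 = torch.randint(0, 2, (16,))
y_grade = torch.randint(0, 2, (16,))
logits_k, logits_g = model(x)
loss = nn.functional.cross_entropy(logits_k, y_ki67) \
     + nn.functional.cross_entropy(logits_g, y_grade)  # joint multitask loss
loss.backward()
```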
Collapse
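A minimal sketch of the joint-prediction idea this abstract describes, assuming a PyTorch environment: a shared trunk encodes a radiomic feature vector, and two task-specific heads output Ki-67 and grade logits so the tasks can exploit common patterns. The network size, feature count, equal loss weighting, and stand-in data are illustrative assumptions, not the authors' published architecture.

```python
# Illustrative multitask radiomics classifier: shared trunk, two heads.
import torch
import torch.nn as nn

class MultiTaskRadiomicsNet(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        # Shared trunk: representation common to both prediction tasks.
        self.trunk = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Task heads (binary logits: Ki-67 high/low, grade high/low).
        self.ki67_head = nn.Linear(hidden, 1)
        self.grade_head = nn.Linear(hidden, 1)

    def forward(self, x):
        z = self.trunk(x)
        return self.ki67_head(z), self.grade_head(z)

model = MultiTaskRadiomicsNet(n_features=100)   # 100 features is assumed
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data.
x = torch.randn(32, 100)                        # 32 lesions x 100 features
y_ki67 = torch.randint(0, 2, (32, 1)).float()
y_grade = torch.randint(0, 2, (32, 1)).float()

optimizer.zero_grad()
logit_ki67, logit_grade = model(x)
# Joint objective: sum of the two task losses (equal weights assumed).
loss = criterion(logit_ki67, y_ki67) + criterion(logit_grade, y_grade)
loss.backward()
optimizer.step()
```

Because the trunk gradients accumulate from both heads, each task regularizes the other, which is the mechanism behind the reported gain over single-task models.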
|
46
|
Naredo E, Pascau J, Damjanov N, Lepri G, Gordaliza PM, Janta I, Ovalles-Bonilla JG, López-Longo FJ, Matucci-Cerinic M. Performance of ultra-high-frequency ultrasound in the evaluation of skin involvement in systemic sclerosis: a preliminary report. Rheumatology (Oxford) 2019; 59:1671-1678. [DOI: 10.1093/rheumatology/kez439] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Revised: 07/30/2019] [Indexed: 01/18/2023] Open
Abstract
Objective
High-frequency ultrasound allows visualization of the epidermis, dermis and hypodermis, precise measurement of skin thickness, and assessment of skin oedema, fibrosis and atrophy. The aim of this pilot cross-sectional observational study was to assess the performance and multiobserver variability of ultra-high-frequency (UHF) (50 MHz) ultrasound (US) in measuring skin thickness, as well as the capacity of UHF-derived skin features to differentiate SSc patients from healthy controls.
Methods
Twenty-one SSc patients (16 limited and five diffuse SSc) and six healthy controls were enrolled. All subjects underwent US evaluation by three experts at three anatomical sites (forearm, hand and finger). Dermal thickness was measured and two rectangular regions of interest, one in dermis and one in hypodermis, were established for texture feature analysis.
Results
UHF-US allowed precise identification and measurement of the thickness of the dermis. The dermal thickness in the finger was significantly higher in patients than in controls (P < 0.05), while in the forearm it was significantly lower in patients than in controls (P < 0.001). Interobserver variability for dermal thickness was good to excellent [forearm intraclass correlation coefficient (ICC) = 0.754; finger ICC = 0.699; hand ICC = 0.602]. Computed texture analysis of the dermis and hypodermis was able to discriminate between SSc and healthy subjects (area under the curve >0.7).
Conclusion
These preliminary data show that skin UHF-US allows a very detailed imaging of skin layers, a reliable measurement of dermal thickness, and a discriminative capacity between dermis and hypodermis texture features in SSc and healthy subjects.
Collapse
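For readers wanting to reproduce this kind of interobserver analysis, here is a minimal sketch of an intraclass-correlation computation using pingouin, a standard Python statistics package; the long-format layout, column names, and toy measurements are assumptions, and the paper does not state which software it used.

```python
# Illustrative ICC computation: three readers measure dermal thickness
# on the same subjects at one anatomical site.
import pandas as pd
import pingouin as pg

# Long format: one row per (subject, reader) measurement; values are toy data.
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "reader":  ["A", "B", "C"] * 3,
    "thickness_mm": [1.10, 1.05, 1.18, 0.92, 0.98, 0.95, 1.30, 1.24, 1.35],
})

icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="reader", ratings="thickness_mm")
# The table reports several ICC variants; ICC2 (two-way random, absolute
# agreement) is a common choice for interobserver studies.
print(icc[["Type", "ICC", "CI95%"]])
```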
Affiliation(s)
- Esperanza Naredo
- Department of Rheumatology, Joint and Bone Research Unit, Hospital Universitario Fundación Jiménez Díaz
| | - Javier Pascau
- Bioengineering and Aerospace Engineering Department, Universidad Carlos III de Madrid, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
| | - Nemanja Damjanov
- Institute of Rheumatology, University of Belgrade Medical School, Belgrade, Serbia
| | - Gemma Lepri
- Department of Experimental and Clinical Medicine, University of Florence and Department of Geriatric Medicine, Division of Rheumatology AOUC, Florence, Italy
| | - Pedro M Gordaliza
- Bioengineering and Aerospace Engineering Department, Universidad Carlos III de Madrid, Instituto de Investigación Sanitaria Gregorio Marañón, Madrid, Spain
| | - Iustina Janta
- Department of Rheumatology, Hospital General, Universitario Gregorio Marañon, Madrid, Spain
| | | | | | - Marco Matucci-Cerinic
- Department of Experimental and Clinical Medicine, University of Florence and Department of Geriatric Medicine, Division of Rheumatology AOUC, Florence, Italy
| |
Collapse
|
47
|
Michahial S, Thomas BA. Applying cuckoo search based algorithm and hybrid based neural classifier for breast cancer detection using ultrasound images. EVOLUTIONARY INTELLIGENCE 2019. [DOI: 10.1007/s12065-019-00268-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/19/2023]
|
48
|
Gómez-Flores W, Rodríguez-Cristerna A, de Albuquerque Pereira WC. Texture Analysis Based on Auto-Mutual Information for Classifying Breast Lesions with Ultrasound. ULTRASOUND IN MEDICINE & BIOLOGY 2019; 45:2213-2225. [PMID: 31097332 DOI: 10.1016/j.ultrasmedbio.2019.03.018] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Revised: 03/22/2019] [Accepted: 03/26/2019] [Indexed: 06/09/2023]
Abstract
Described here is a novel texture extraction method based on auto-mutual information (AMI) for classifying breast lesions. The objective is to extract discriminating information found in the non-linear relationship of textures in breast ultrasound (BUS) images. The AMI method performs three basic tasks: (i) it transforms the input image using the ranklet transform to handle intensity variations of BUS images acquired with distinct ultrasound scanners; (ii) it extracts the AMI-based texture features in the horizontal and vertical directions from each ranklet image; and (iii) it classifies the breast lesions into benign and malignant classes, in which a support-vector machine is used as the underlying classifier. The image data set is composed of 2050 BUS images consisting of 1347 benign and 703 malignant tumors. Additionally, nine commonly used texture extraction methods proposed in the literature for BUS analysis are compared with the AMI method. The bootstrap method, which considers 1000 bootstrap samples, is used to evaluate classification performance. The experimental results indicate that the proposed approach outperforms its counterparts in terms of area under the receiver operating characteristic curve, sensitivity, specificity and Matthews correlation coefficient, with values of 0.82, 0.80, 0.85 and 0.63, respectively. These results suggest that the AMI method is suitable for breast lesion classification systems.
Collapse
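A minimal sketch of the auto-mutual-information idea, assuming intensities are first quantized into discrete levels: mutual information is estimated between each pixel and its horizontally lagged neighbour, yielding a lag-indexed texture profile. The lag range, bin count, and use of scikit-learn's mutual_info_score are assumptions, and the ranklet transform the paper applies beforehand is omitted here.

```python
# Illustrative auto-mutual information (AMI) texture profile.
import numpy as np
from sklearn.metrics import mutual_info_score

def ami_profile(img: np.ndarray, max_lag: int = 8, bins: int = 32):
    """MI between pixel intensities and their horizontally lagged neighbours."""
    # Quantize intensities so MI is estimated over discrete symbols.
    q = np.digitize(img, np.linspace(img.min(), img.max(), bins))
    profile = []
    for lag in range(1, max_lag + 1):
        a = q[:, :-lag].ravel()          # reference pixels
        b = q[:, lag:].ravel()           # neighbours `lag` columns away
        profile.append(mutual_info_score(a, b))
    return np.array(profile)

rng = np.random.default_rng(0)
roi = rng.normal(size=(64, 64))          # stand-in for a lesion ROI
print(ami_profile(roi))                  # MI typically decays with lag
```

The decay of the profile with lag captures how quickly texture decorrelates, including non-linear dependencies that second-order statistics such as the GLCM miss.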
Affiliation(s)
- Wilfrido Gómez-Flores
- Center for Research and Advanced Studies of the National Polytechnic Institute, 87138 Ciudad Victoria, Tamaulipas, Mexico.
| | - Arturo Rodríguez-Cristerna
- Center for Research and Advanced Studies of the National Polytechnic Institute, 87138 Ciudad Victoria, Tamaulipas, Mexico
| | | |
Collapse
|
49
|
Forgács A, Béresová M, Garai I, Lassen ML, Beyer T, DiFranco MD, Berényi E, Balkay L. Impact of intensity discretization on textural indices of [18F]FDG-PET tumour heterogeneity in lung cancer patients. Phys Med Biol 2019; 64:125016. [PMID: 31108468 DOI: 10.1088/1361-6560/ab2328] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022]
Abstract
Quantifying tumour heterogeneity from [18F]FDG-PET images promises benefits for treatment selection in cancer patients. Here, the calculation of texture parameters mandates an initial discretization step (binning) to reduce the number of intensity levels. Typically, three types of discretization methods are used: lesion relative resampling (LRR) with a fixed bin number, and lesion absolute resampling (LAR) and absolute resampling (AR) with fixed bin widths. We investigated the effects of varying bin widths or bin numbers using 27 commonly cited local and regional texture indices (TIs) applied to lung tumour volumes. The data set was extracted from 58 lung cancer patients, with three different and robust tumour segmentation methods. In our cohort, the variations of the mean value as a function of the bin width were similar for TIs calculated with LAR and AR quantification. The TI histograms calculated by the LRR method showed distinct behaviour, and their numerical values were substantially affected by the selected bin number. The correlations of the AR- and LAR-based TIs demonstrated no principal differences between these methods. However, no correlation was found between the TIs calculated by the LRR and the LAR (or AR) discretization methods. Visual classification of the texture was also performed for each lesion. This classification analysis revealed that the parameters show a statistically significant correlation with the visual score if the LAR or AR discretization method is considered, in contrast to LRR. Moreover, all the resulting tendencies were similar regardless of the segmentation method and the type of textural features involved in this work.
Collapse
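The three discretization families the abstract contrasts reduce to a simple choice between a fixed bin number over the lesion's own range (LRR) and a fixed bin width on the absolute intensity axis (AR; LAR is the same idea anchored at the lesion minimum). A minimal numpy sketch, with illustrative bin settings and stand-in SUV samples:

```python
# Illustrative contrast of LRR, LAR and AR discretization schemes.
import numpy as np

def discretize_lrr(suv, n_bins=64):
    """LRR: fixed bin number over the lesion's own intensity range."""
    lo, hi = suv.min(), suv.max()
    return np.clip(((suv - lo) / (hi - lo) * n_bins).astype(int), 0, n_bins - 1)

def discretize_lar(suv, bin_width=0.5):
    """LAR: fixed bin width, anchored at the lesion minimum."""
    return ((suv - suv.min()) / bin_width).astype(int)

def discretize_ar(suv, bin_width=0.5):
    """AR: fixed bin width on the absolute SUV axis, comparable across lesions."""
    return (suv / bin_width).astype(int)

rng = np.random.default_rng(1)
lesion = rng.gamma(shape=2.0, scale=2.0, size=1000)    # stand-in SUV samples
for f in (discretize_lrr, discretize_lar, discretize_ar):
    print(f.__name__, len(np.unique(f(lesion))))       # resulting grey levels
```

LRR always yields the same number of grey levels per lesion, whereas the fixed-width schemes yield more levels for lesions with wider SUV ranges, which is why texture indices from the two families behave so differently.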
Affiliation(s)
- Attila Forgács
- Scanomed Nuclear Medicine Center, Debrecen, Hungary. Division of Nuclear Medicine and Translational Imaging, Department of Medical Imaging, Faculty of Medicine, University of Debrecen, Debrecen, Hungary.
| | | | | | | | | | | | | | | |
Collapse
|
50
|
Texture-Based Metallurgical Phase Identification in Structural Steels: A Supervised Machine Learning Approach. METALS 2019. [DOI: 10.3390/met9050546] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Automatic identification of metallurgical phases based on thresholding methods in microstructural images may not be possible when the pixel intensities associated with the metallurgical phases overlap and, hence, are indistinguishable. To circumvent this problem, this study considers additional visual information about the metallurgical phases, referred to as textural features. Mathematically, textural features are second-order statistics of an image domain and can be distinct for each metallurgical phase. Here, textural features are evaluated from the gray-level co-occurrence matrix (GLCM) of each metallurgical phase (ferrite, pearlite, and martensite) present in heat-treated ASTM A36 steels. The dataset of textural features and pixel intensities generated for the metallurgical phases is used to train supervised machine learning classifiers, which are subsequently employed to predict the metallurgical phases in the microstructure. Naïve Bayes (NB), k-nearest neighbor (k-NN), linear discriminant analysis (LDA), and decision tree (DT) classifiers are the four classifiers employed. The performances of all four classifiers were assessed prior to their deployment, and the classification accuracy was found to be >97%. The proposed technique has two unique advantages: (1) unlike pixel intensity-based methods, it does not misclassify grain boundaries as a metallurgical phase, and (2) it does not require the end-user to input the number of phases present in the microstructure.
Collapse
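A minimal sketch of the pipeline this abstract outlines, assuming 8-bit micrograph patches: Haralick-style features from skimage's gray-level co-occurrence matrix feed a k-NN classifier, one of the four classifiers tried in the paper. The patch size, GLCM offsets, and the synthetic smooth-versus-noisy textures standing in for two phases are illustrative assumptions.

```python
# Illustrative GLCM-feature extraction plus k-NN phase classification.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Contrast, homogeneity, energy and correlation from one 8-bit patch."""
    glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # Each property gives one value per (distance, angle) pair.
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(2)
# Stand-in patches: narrow-intensity (smooth) vs full-range (noisy) texture,
# mimicking two phases whose mean intensities alone would not separate them.
patches = [rng.integers(100, 120, (32, 32), dtype=np.uint8) for _ in range(20)]
patches += [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
labels = [0] * 20 + [1] * 20

X = np.array([glcm_features(p) for p in patches])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
print(clf.score(X, labels))   # training accuracy on the toy data
```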
|