1
Ozdemir F, Acmaz B, Latifoglu F, Muhtaroglu S, Okcu NT, Acmaz G, Muderris II. Ultrasonographic examination of the maturational effect of maternal vitamin D use on fetal clavicle bone development. BMC Med Imaging 2025;25:20. PMID: 39825247; PMCID: PMC11740525; DOI: 10.1186/s12880-025-01558-8.
Abstract
AIM: This study aimed to evaluate the effect of maternal vitamin D use during intrauterine life on fetal bone development using ultrasonographic image processing techniques. MATERIALS AND METHODS: We evaluated 52 pregnant women receiving vitamin D supplementation and 50 who declined it. Ultrasonographic imaging of the fetal clavicle was performed at 37-40 weeks of gestation, and texture features extracted from the fetal clavicle images were compared with those from adult male clavicle images. RESULTS: No difference was observed in bone formation and destruction markers between the two groups. However, texture analysis of the ultrasonographic images revealed that the characteristics of fetal clavicles in pregnant women receiving vitamin D supplementation resembled those of adult male clavicles. CONCLUSIONS: Vitamin D supplementation in pregnancy has significant positive effects on fetal bone maturation in addition to contributing to maternal bone health. Texture feature analyses of ultrasonographic images successfully demonstrated fetal bone maturation.
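As an illustration of the kind of texture comparison described above, the following is a minimal sketch (not the authors' pipeline) of extracting grey-level co-occurrence matrix (GLCM) features from a clavicle region of interest with scikit-image and comparing a fetal ROI against an adult reference by cosine similarity; the random ROIs, GLCM distances/angles, and chosen properties are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(roi_uint8, distances=(1, 2), angles=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Haralick-style texture descriptors from an 8-bit grayscale ROI."""
    glcm = graycomatrix(roi_uint8, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation", "dissimilarity")
    # Average each property over all distance/angle combinations.
    return np.array([graycoprops(glcm, p).mean() for p in props])

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical ROIs standing in for crops of fetal and adult clavicle ultrasound images.
fetal_roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
adult_roi = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

similarity = cosine_similarity(glcm_features(fetal_roi), glcm_features(adult_roi))
print(f"texture similarity (cosine): {similarity:.3f}")
```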
Affiliation(s)
- Fatma Ozdemir: Faculty of Medicine, Department of Obstetrics and Gynecology, Erciyes University, Yenidogan Neighborhood, Turhan Baytop Street No:1, Kayseri, 38280, Turkey
- Banu Acmaz: Department of Internal Medicine, Kayseri City Hospital, Kayseri, Turkey
- Fatma Latifoglu: Faculty of Engineering, Department of Biomedical Engineering, Erciyes University, Kayseri, Turkey
- Sabahattin Muhtaroglu: Faculty of Medicine, Department of Biochemistry, Erciyes University, Kayseri, Turkey
- Gokhan Acmaz: Faculty of Medicine, Department of Obstetrics and Gynecology, Erciyes University, Yenidogan Neighborhood, Turhan Baytop Street No:1, Kayseri, 38280, Turkey
- Iptisam Ipek Muderris: Faculty of Medicine, Department of Obstetrics and Gynecology, Erciyes University, Yenidogan Neighborhood, Turhan Baytop Street No:1, Kayseri, 38280, Turkey
2
Jacobson MJ, Masry ME, Arrubla DC, Tricas MR, Gnyawali SC, Zhang X, Gordillo G, Xue Y, Sen CK, Wachs J. Autonomous Multi-modality Burn Wound Characterization using Artificial Intelligence. Mil Med 2023;188:674-681. PMID: 37948279; DOI: 10.1093/milmed/usad301.
Abstract
INTRODUCTION: Between 5% and 20% of all combat-related casualties are attributed to burn wounds. Early treatment can decrease burn mortality by about 36%, but this is contingent upon accurate characterization of the burn. Precise burn injury classification is recognized as a crucial aspect of the medical artificial intelligence (AI) field. An autonomous AI system designed to analyze multiple characteristics of burns using modalities including ultrasound and RGB images is described. MATERIALS AND METHODS: A two-part dataset was created for training and validation of the AI: in vivo B-mode ultrasound scans collected from porcine subjects (10,085 frames) and RGB images manually collected from web sources (338 images). The framework leverages an explanation system to corroborate and integrate burn experts' knowledge, suggesting new features and ensuring the validity of the model. Through this framework, it was found that B-mode ultrasound classifiers can be enhanced by supplying textural features; specifically, statistical texture features extracted from ultrasound frames were confirmed to increase the accuracy of the burn depth classifier. RESULTS: The system, with all included features selected using explainable AI, classifies burn depth with accuracy and average F1 above 80%. Additionally, the segmentation module segments with a mean global accuracy greater than 84% and a mean intersection-over-union score over 0.74. CONCLUSIONS: This work demonstrates the feasibility of accurate and automated burn characterization with AI and indicates that such systems can be improved with additional features when a human expert is combined with explainable AI. This is demonstrated on real data (human for segmentation and porcine for depth classification) and establishes the groundwork for further deep-learning work in burn analysis.
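A hedged sketch of how first-order statistical texture features could be computed from a B-mode frame and appended to a deep feature vector, in the spirit of the abstract above; the specific statistics, the random frame, and the stand-in CNN features are assumptions, not the authors' feature set.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def statistical_texture_features(frame):
    """First-order intensity statistics from a 2-D B-mode ultrasound frame."""
    x = frame.astype(np.float64).ravel()
    counts, _ = np.histogram(x, bins=256, range=(0, 256))
    p = counts[counts > 0] / counts.sum()
    entropy = -np.sum(p * np.log2(p))           # Shannon entropy of the intensity histogram
    return np.array([x.mean(), x.std(), skew(x), kurtosis(x), entropy])

# Hypothetical frame; in practice this would be a cropped burn region from a B-mode scan.
frame = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
cnn_features = np.random.rand(64)               # stand-in for learned (deep) features
combined = np.concatenate([cnn_features, statistical_texture_features(frame)])
print(combined.shape)
```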
Affiliation(s)
- Maxwell J Jacobson: Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
- Mohamed El Masry: School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Maria Romeo Tricas: Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
- Surya C Gnyawali: School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Xinwei Zhang: Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
- Gayle Gordillo: School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Yexiang Xue: Department of Computer Science, Purdue University, West Lafayette, IN 47907, USA
- Chandan K Sen: School of Medicine, Indiana University, Indianapolis, IN 46202, USA
- Juan Wachs: School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA
3
Koh RGL, Dilek B, Ye G, Selver A, Kumbhare D. Myofascial Trigger Point Identification in B-Mode Ultrasound: Texture Analysis Versus a Convolutional Neural Network Approach. Ultrasound Med Biol 2023;49:2273-2282. PMID: 37495496; DOI: 10.1016/j.ultrasmedbio.2023.06.019.
Abstract
OBJECTIVE: Myofascial pain syndrome (MPS) is one of the most common causes of chronic pain and affects a large portion of patients seen in specialty pain centers as well as primary care clinics. Diagnosis of MPS relies heavily on a clinician's ability to identify the presence of a myofascial trigger point (MTrP). Ultrasound can assist, but requires an experienced operator. Thus, this study investigates the use of texture features and deep learning strategies for the automatic identification of muscle with MTrPs (i.e., active and latent MTrPs) versus normal muscle (i.e., no MTrP). METHODS: Participants (n = 201) were recruited from the Toronto Rehabilitation Institute, and ultrasound videos of their trapezius muscles were acquired. This new dataset consists of 1344 images (248 active, 120 latent, 976 normal) collected from these videos. For texture analysis, several features were investigated with varying parameters (i.e., region-of-interest size, feature type and pixel-pair relationships). Convolutional neural networks (CNNs) were also applied to observe the performance of deep learning approaches. Performance was evaluated based on classification accuracy, micro F1-score, sensitivity, specificity, positive predictive value and negative predictive value. RESULTS: The best CNN approach differentiated between muscles with and without MTrPs better than the best texture feature approach, with F1-scores of 0.7299 and 0.7135, respectively. CONCLUSION: The results of this study reveal the challenges associated with MTrP identification and detail the potential and shortcomings of CNN and radiomics approaches.
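The metrics listed above (accuracy, micro F1, sensitivity, specificity, PPV, NPV) can all be derived from a confusion matrix; a minimal sketch with toy three-class predictions (not study data) is shown below.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score

classes = ["active", "latent", "normal"]
y_true = np.array([0, 0, 1, 2, 2, 2, 1, 0, 2, 2])   # toy labels, not study data
y_pred = np.array([0, 1, 1, 2, 2, 0, 1, 0, 2, 2])

cm = confusion_matrix(y_true, y_pred)
print("accuracy:", accuracy_score(y_true, y_pred))
print("micro F1:", f1_score(y_true, y_pred, average="micro"))

# Per-class (one-vs-rest) sensitivity, specificity, PPV and NPV from the confusion matrix.
for i, name in enumerate(classes):
    tp = cm[i, i]
    fn = cm[i, :].sum() - tp
    fp = cm[:, i].sum() - tp
    tn = cm.sum() - tp - fn - fp
    print(f"{name}: sensitivity={tp/(tp+fn):.2f} specificity={tn/(tn+fp):.2f} "
          f"PPV={tp/(tp+fp):.2f} NPV={tn/(tn+fn):.2f}")
```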
Affiliation(s)
- Ryan G L Koh: KITE Research Institute, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Banu Dilek: KITE Research Institute, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada; Department of Physical Medicine and Rehabilitation, Dokuz Eylul University, Izmir, Turkey
- Gongkai Ye: KITE Research Institute, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Alper Selver: Department of Electrical and Electronics Engineering, Dokuz Eylul University, Izmir, Turkey
- Dinesh Kumbhare: KITE Research Institute, Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
4
Khan RA, Fu M, Burbridge B, Luo Y, Wu FX. A multi-modal deep neural network for multi-class liver cancer diagnosis. Neural Netw 2023;165:553-561. PMID: 37354807; DOI: 10.1016/j.neunet.2023.06.013.
Abstract
Liver disease is a potentially asymptomatic clinical entity that may progress to patient death. This study proposes a multi-modal deep neural network for multi-class malignant liver diagnosis. In parallel with the portal venous computed tomography (CT) scans, pathology data are utilized to prognosticate primary liver cancer variants and metastasis. The processed CT scans are fed to a deep dilated convolutional neural network to explore salient features, with residual connections added to address vanishing gradients. Correspondingly, five pathological features are learned using a wide-and-deep network that combines the benefits of memorization and generalization. The down-scaled hierarchical features from the CT scans and pathology data are concatenated and passed through fully connected layers for classification of liver cancer variants. In addition, transfer learning of pre-trained deep dilated convolution layers helps handle insufficient and imbalanced data. The fine-tuned network predicts three liver cancer variants with an average accuracy of 96.06% and an area under the curve (AUC) of 0.832. To the best of our knowledge, this is the first study to classify liver cancer variants by integrating pathology and image data, hence following the medical perspective of malignant liver diagnosis. A comparative analysis on the benchmark dataset shows that the proposed multi-modal neural network outperforms most liver diagnostic studies and is comparable to the others.
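A condensed PyTorch sketch of the general multi-modal idea: a dilated convolutional branch for a CT slice, a small dense branch for five pathological features, and concatenation before fully connected classification layers. Layer widths and names are illustrative assumptions, and residual connections and transfer learning are omitted for brevity; this is not the published architecture.

```python
import torch
import torch.nn as nn

class MultiModalLiverNet(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        # Dilated convolutional branch for a single-channel CT slice.
        self.ct_branch = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Dense branch for the five pathological features.
        self.path_branch = nn.Sequential(nn.Linear(5, 16), nn.ReLU(),
                                         nn.Linear(16, 16), nn.ReLU())
        # Fused classifier over the concatenated representations.
        self.classifier = nn.Sequential(nn.Linear(32 + 16, 64), nn.ReLU(),
                                        nn.Linear(64, n_classes))

    def forward(self, ct, pathology):
        fused = torch.cat([self.ct_branch(ct), self.path_branch(pathology)], dim=1)
        return self.classifier(fused)

model = MultiModalLiverNet()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 5))  # toy batch
print(logits.shape)  # (4, 3)
```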
Affiliation(s)
- Rayyan Azam Khan: Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Minghan Fu: Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Brent Burbridge: College of Medicine and Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Yigang Luo: College of Medicine and Department of Surgery, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
- Fang-Xiang Wu: Division of Biomedical Engineering, Department of Computer Science and Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
5
Alhussan AA, Eid MM, Towfek SK, Khafaga DS. Breast Cancer Classification Depends on the Dynamic Dipper Throated Optimization Algorithm. Biomimetics (Basel) 2023;8:163. PMID: 37092415; PMCID: PMC10123690; DOI: 10.3390/biomimetics8020163.
Abstract
According to the American Cancer Society, breast cancer is the second largest cause of mortality among women after lung cancer. Women's death rates can be decreased if breast cancer is diagnosed and treated early. Because manual breast cancer diagnosis is time consuming, an automated approach is necessary for early cancer identification. This research proposes a novel framework integrating metaheuristic optimization with deep learning and feature selection for robustly classifying breast cancer from ultrasound images. The proposed methodology comprises data augmentation to improve the learning of convolutional neural network (CNN) models, transfer learning using the GoogleNet deep network for feature extraction, selection of the best set of features using a novel optimization algorithm based on a hybrid of the dipper throated and particle swarm optimization algorithms, and classification of the selected features using a CNN optimized with the proposed algorithm. To prove the effectiveness of the proposed approach, a set of experiments was conducted on a breast cancer dataset, freely available on Kaggle, to evaluate the performance of the proposed feature selection method and of the optimized CNN. In addition, statistical tests were performed to study the stability of the proposed approach and its difference from state-of-the-art approaches. The achieved results confirmed the superiority of the proposed approach, with a classification accuracy of 98.1%, better than the other approaches considered in the experiments.
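A generic binary swarm-style wrapper selection sketch, standing in for the hybrid dipper throated/particle swarm algorithm (whose exact update rules are not reproduced here): candidate feature masks over stand-in deep features are scored by cross-validated accuracy of a light classifier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # stand-in for GoogleNet-derived features
y = rng.integers(0, 2, size=200)          # stand-in benign/malignant labels

def fitness(mask):
    """Cross-validated accuracy of a light classifier on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

n_agents, n_iter = 8, 10
positions = rng.random((n_agents, X.shape[1]))         # continuous positions in [0, 1]
best_mask, best_fit = None, -1.0

for _ in range(n_iter):
    for i in range(n_agents):
        mask = positions[i] > 0.5                       # binarize position into a feature mask
        f = fitness(mask)
        if f > best_fit:
            best_fit, best_mask = f, mask.copy()
    # Move every agent slightly toward the best mask found so far (simplified update rule).
    positions += 0.3 * (best_mask.astype(float) - positions) + 0.1 * rng.normal(size=positions.shape)
    positions = np.clip(positions, 0.0, 1.0)

print(f"selected {best_mask.sum()} features, CV accuracy {best_fit:.3f}")
```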
Affiliation(s)
- Amel Ali Alhussan: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
- Marwa M. Eid: Faculty of Artificial Intelligence, Delta University for Science and Technology, Mansoura 35712, Egypt
- S. K. Towfek: Delta Higher Institute for Engineering and Technology, Mansoura 35111, Egypt; Computer Science and Intelligent Systems Research Center, Blacksburg, VA 24060, USA
- Doaa Sami Khafaga: Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
6
Crack Texture Feature Identification of Fiber Reinforced Concrete Based on Deep Learning. Materials (Basel) 2022;15:3940. PMID: 35683238; PMCID: PMC9182088; DOI: 10.3390/ma15113940.
Abstract
Structural cracks in concrete have a significant influence on structural safety, so it is necessary to detect and monitor concrete cracks. Deep learning is a powerful tool for detecting cracks in concrete structures, but it requires a large quantity of training samples and is costly in terms of computational time. To address these difficulties, a deep learning target detection framework combining texture features with concrete crack data is proposed. Texture features and pre-processed concrete data are merged to increase the number of feature channels, reducing the model's demand for training samples and improving training speed. With this framework, concrete crack detection can be realized even with a limited number of samples. To this end, a self-made steel fiber reinforced concrete crack dataset is used to compare the framework against variants without texture feature merging or data pre-processing. The experimental results show that the number of parameters to be fitted during training and the training time are correspondingly reduced, while detection accuracy is improved.
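A small sketch of the channel-merging idea under illustrative assumptions: a local-entropy texture map is computed from a (random stand-in) crack image and stacked with the original color channels, so a detector would receive an extra texture channel alongside the image.

```python
import numpy as np
from skimage import img_as_ubyte
from skimage.color import rgb2gray
from skimage.filters.rank import entropy
from skimage.morphology import disk

# Hypothetical RGB crack image (H, W, 3) with values in [0, 1].
rgb = np.random.rand(256, 256, 3)

gray = img_as_ubyte(rgb2gray(rgb))                  # rank filters expect integer images
texture_map = entropy(gray, disk(5)).astype(np.float32)
texture_map /= texture_map.max() + 1e-8             # normalize the texture channel to [0, 1]

# Merge: 3 color channels + 1 texture channel -> 4-channel detector input.
merged = np.dstack([rgb.astype(np.float32), texture_map])
print(merged.shape)  # (256, 256, 4)
```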
7
Deep Convolutional Neural Network Based Analysis of Liver Tissues Using Computed Tomography Images. Symmetry (Basel) 2022. DOI: 10.3390/sym14020383.
Abstract
Liver disease is one of the most prominent causes of the rising death rate worldwide, and these death rates can be reduced by early liver diagnosis. Computed tomography (CT) is a standard method for analyzing liver images in clinical practice. When analyzing large numbers of liver images, radiologists face problems that sometimes lead to wrong classification of liver diseases, eventually resulting in severe conditions such as liver cancer. Thus, a machine-learning-based method is needed to classify such problems based on their texture features. This paper suggests two different algorithms to address this challenging task of liver disease classification. The first, based on conventional machine learning, combines automated texture analysis with supervised classification. For this purpose, 3000 clinically verified CT image samples were obtained from 71 patients. Image classes belonging to the same disease were used to train supervised learning methods that confirm abnormalities in liver tissues. The proposed method correctly quantified asymmetric patterns in CT images, and the effectiveness of the feature vector was evaluated with K Nearest Neighbor (KNN), Naive Bayes (NB), Support Vector Machine (SVM), and Random Forest (RF) classifiers. The second algorithm proposes a semantic segmentation model for liver disease identification, based on semantic image segmentation (SIS) using a convolutional neural network (CNN); the model encodes high-density maps through a specific guided attention method and classifies CT images into five disease categories. The compelling results obtained confirm the effectiveness of the proposed model. The study concludes that abnormalities in the human liver can be discriminated and diagnosed by texture analysis techniques, which may also assist radiologists and medical physicists in predicting the severity and proliferation of abnormalities in liver diseases.
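A brief sketch of the conventional machine-learning arm: a texture feature matrix (placeholder values standing in for features extracted from the CT images) evaluated with the four classifiers named above via cross-validation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 20))          # placeholder texture feature vectors
y = rng.integers(0, 5, size=300)        # placeholder labels for five liver-tissue classes

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=7),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf", C=1.0),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
}

for name, clf in classifiers.items():
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name:13s} mean accuracy = {scores.mean():.3f}")
```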
8
Jabeen K, Khan MA, Alhaisoni M, Tariq U, Zhang YD, Hamza A, Mickus A, Damaševičius R. Breast Cancer Classification from Ultrasound Images Using Probability-Based Optimal Deep Learning Feature Fusion. Sensors (Basel) 2022;22:807. PMID: 35161552; PMCID: PMC8840464; DOI: 10.3390/s22030807.
Abstract
After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.
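A minimal torchvision sketch of the kind of augmentation used in step (i); the transform choices and the synthetic stand-in image are assumptions for illustration, not the exact augmentation policy applied to the BUSI dataset.

```python
import numpy as np
import torch
from PIL import Image
from torchvision import transforms

# Illustrative augmentation policy for breast-ultrasound training images.
augment = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomVerticalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Stand-in for a grayscale ultrasound image loaded from the BUSI dataset.
image = Image.fromarray(np.random.randint(0, 256, (400, 400), dtype=np.uint8)).convert("RGB")
augmented_views = torch.stack([augment(image) for _ in range(4)])  # 4 augmented copies
print(augmented_views.shape)  # (4, 3, 256, 256)
```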
Affiliation(s)
- Kiran Jabeen: Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Muhammad Attique Khan: Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Majed Alhaisoni: College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq: College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharaj 11942, Saudi Arabia
- Yu-Dong Zhang: Department of Informatics, University of Leicester, Leicester LE1 7RH, UK
- Ameer Hamza: Department of Computer Science, HITEC University Taxila, Taxila 47080, Pakistan
- Artūras Mickus: Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
- Robertas Damaševičius: Department of Applied Informatics, Vytautas Magnus University, LT-44404 Kaunas, Lithuania
11
Wan Y, Zheng Z, Liu R, Zhu Z, Zhou H, Zhang X, Boumaraf S. A Multi-Scale and Multi-Level Fusion Approach for Deep Learning-Based Liver Lesion Diagnosis in Magnetic Resonance Images with Visual Explanation. Life (Basel) 2021;11:582. PMID: 34207262; PMCID: PMC8234101; DOI: 10.3390/life11060582.
Abstract
Many computer-aided diagnosis methods for liver cancer based on medical images, especially ones with deep learning strategies, have been proposed. However, most such methods analyze the images at only one scale, and the deep learning models are unexplainable. In this paper, we propose a deep learning-based multi-scale and multi-level fusion approach of CNNs for liver lesion diagnosis on magnetic resonance images, termed MMF-CNN. We introduce a multi-scale representation strategy to encode both the local and semi-local complementary information of the images. To take advantage of the complementary information of the multi-scale representations, we propose a multi-level fusion method that hierarchically combines information at both the feature level and the decision level and generates a robust deep learning-based diagnostic classifier. We further explore the explanation of the network's diagnostic decisions by visualizing its areas of interest, and a new scoring method is designed to evaluate whether the attention maps highlight the relevant radiological features. The explanation and visualization make the decision-making process of the deep neural network transparent for clinicians. We apply the proposed approach to various state-of-the-art deep learning architectures, and the experimental results demonstrate its effectiveness.
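A compact sketch of the multi-scale, decision-level side of the idea: the same lesion is cropped at two scales, each crop is scored by a CNN, and the class probabilities are averaged. The untrained ResNet-18 backbone, crop sizes, and equal averaging weights are assumptions, not the published MMF-CNN.

```python
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF
from torchvision import models

# Toy MR "image" and lesion center; replace with real data in practice.
image = torch.rand(3, 512, 512)
cy, cx = 256, 256

def crop_at_scale(img, cy, cx, half):
    """Crop a square patch of side 2*half around (cy, cx) and resize to 224x224."""
    patch = img[:, cy - half:cy + half, cx - half:cx + half]
    return TF.resize(patch, [224, 224])

model = models.resnet18()                                 # untrained stand-in backbone
model.fc = torch.nn.Linear(model.fc.in_features, 3)       # e.g. three lesion types
model.eval()

with torch.no_grad():
    probs = []
    for half in (32, 96):                                  # local and semi-local scales
        x = crop_at_scale(image, cy, cx, half).unsqueeze(0)
        probs.append(F.softmax(model(x), dim=1))
    fused = torch.stack(probs).mean(dim=0)                 # decision-level fusion by averaging

print(fused)
```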
Affiliation(s)
- Yuchai Wan (corresponding author): Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
- Zhongshu Zheng: Beijing Lab of Intelligent Information Technology, School of Computer Science, Beijing Institute of Technology, Beijing 100081, China
- Ran Liu: China South-to-North Water Diversion Corporation Limited, Beijing 100038, China
- Zheng Zhu (corresponding author): Department of Diagnostic Radiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, 17, Panjiayuan NanLi, Chaoyang District, Beijing 100021, China
- Hongen Zhou: Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
- Xun Zhang: Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
- Said Boumaraf: Centre d’Exploitation des Systèmes de Télécommunications Spatiales (CESTS), Agence Spatiale Algérienne, Algiers, Algeria
12
A Multiscale Topographical Analysis Based on Morphological Information: The HEVC Multiscale Decomposition. Materials (Basel) 2020;13:5582. PMID: 33297533; PMCID: PMC7729792; DOI: 10.3390/ma13235582.
Abstract
In this paper, we evaluate the effect of scale analysis and of the filtering process on the performance of an original compressed-domain classifier for material surface topographies. Each surface profile is analyzed at multiple scales using a Gaussian filtering method and decomposed into three filtered image types: low-pass (LP), band-pass (BP), and high-pass (HP) versions. The complete set of filtered image data constitutes the collected database. First, the images are losslessly compressed using the state-of-the-art High Efficiency Video Coding (HEVC) standard. Then, the Intra-Prediction Modes Histogram (IPHM) feature descriptor is computed directly in the compressed domain from each HEVC-compressed image. Finally, the IPHM feature descriptors are used as input to a Support Vector Machine (SVM) classifier, introduced here to strengthen the performance of the proposed classification system thanks to the powerful properties of machine learning tools. We evaluate the proposed solution, called "HEVC Multiscale Decomposition" (HEVC-MD), on a database of nearly 42,000 multiscale topographic images. A simple preliminary version of the algorithm reaches an accuracy of 52%; this increases to 70% when using the multiscale analysis of the high-frequency (HP) filtered image data sets. Finally, we verify that considering only the highest-scale analysis of the low-frequency (LP) range is more appropriate for classifying our six surface topographies, with an accuracy of up to 81%. To compare these new topographical descriptors with those conventionally used, SVM is applied to a set of 34 roughness parameters defined in the International Standard GPS ISO 25178 (Geometrical Product Specification), yielding accuracies of 38%, 52%, 65%, and 57% for Sa, multiscale Sa, the 34 roughness parameters, and their multiscale versions, respectively. Compared to conventional roughness descriptors, the HEVC-MD descriptors increase surface discrimination from 65% to 81%.
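A short sketch of the Gaussian-filter multiscale decomposition step under illustrative assumptions (synthetic surface, arbitrary cut-off scales): a topographic image is split into low-pass, band-pass, and high-pass components. HEVC encoding and IPHM extraction are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic surface topography (height map); real data would come from profilometry.
rng = np.random.default_rng(2)
surface = gaussian_filter(rng.normal(size=(512, 512)), sigma=3)

sigma_coarse, sigma_fine = 16.0, 4.0                 # illustrative cut-off scales
low_pass = gaussian_filter(surface, sigma_coarse)
band_pass = gaussian_filter(surface, sigma_fine) - low_pass
high_pass = surface - gaussian_filter(surface, sigma_fine)

# The three components sum back to the original surface by construction.
print(np.allclose(low_pass + band_pass + high_pass, surface))
```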
13
Gao X, Wang Q, Cheng C, Lin S, Lin T, Liu C, Han X. The Application of Prussian Blue Nanoparticles in Tumor Diagnosis and Treatment. Sensors (Basel) 2020;20:E6905. PMID: 33287186; PMCID: PMC7730465; DOI: 10.3390/s20236905.
Abstract
Prussian blue nanoparticles (PBNPs) have attracted increasing research interest in immunosensors, bioimaging, drug delivery, and application as therapeutic agents due to their large internal pore volume, tunable size, easy synthesis and surface modification, good thermal stability, and favorable biocompatibility. This review first outlines the detection of tumor markers using PBNP-based immunosensors with sandwich-type and competitive-type architectures. Metal-ion-doped PBNPs used as T1-weighted magnetic resonance and photoacoustic imaging agents to improve image quality, and surface-modified PBNPs used as drug carriers to decrease side effects via passive or active targeting to tumor sites, are also summarized. Moreover, PBNPs with high photothermal efficiency and excellent catalase-like activity are promising for photothermal therapy and O2 self-supplied photodynamic therapy of tumors. Hence, PBNP-based multimodal imaging-guided combinational tumor therapies (such as chemo-, photothermal, and photodynamic therapies) are finally reviewed. This review aims to inspire broad interest in the rational design and application of PBNPs for detecting and treating tumors in clinical research.
Affiliation(s)
- Cui Cheng: College of Biological Science and Engineering, Fuzhou University, Fuzhou 350108, China; (X.G.); (Q.W.); (S.L.); (T.L.); (C.L.); (X.H.)
14
Li H, Zhang B, Zhang Y, Liu W, Mao Y, Huang J, Wei L. A semi-automated annotation algorithm based on weakly supervised learning for medical images. Biocybern Biomed Eng 2020. DOI: 10.1016/j.bbe.2020.03.005.
15
Chambara N, Ying M. The Diagnostic Efficiency of Ultrasound Computer-Aided Diagnosis in Differentiating Thyroid Nodules: A Systematic Review and Narrative Synthesis. Cancers (Basel) 2019;11:E1759. PMID: 31717365; PMCID: PMC6896127; DOI: 10.3390/cancers11111759.
Abstract
Computer-aided diagnosis (CAD) techniques have emerged to complement qualitative assessment in the diagnosis of benign and malignant thyroid nodules. The aim of this review was to summarize the current evidence on the diagnostic performance of various ultrasound CAD systems in characterizing thyroid nodules. PUBMED, EMBASE and Cochrane databases were searched for studies published until August 2019. The Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool was used to assess the methodological quality of the studies, and reported diagnostic performance data were analyzed and discussed. Fourteen studies with 2232 patients and 2675 thyroid nodules met the inclusion criteria. Study quality based on the QUADAS-2 assessment was moderate. At best performance, grey-scale CAD had a sensitivity of 96.7%, while Doppler CAD reached 90%. Combining qualitative grey-scale features with Doppler CAD assessment resulted in overall increased sensitivity (92%) and optimal specificity (85.1%). The experience of the CAD user, nodule size and the thyroid malignancy risk stratification system used for interpretation were the main potential factors affecting diagnostic performance. The diagnostic performance of thyroid ultrasound CAD is comparable to that of qualitative visual assessment; however, combined techniques have the potential for better optimized diagnostic accuracy.
Affiliation(s)
- Michael Ying: Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR, China
16
Gutiérrez-Martínez J, Pineda C, Sandoval H, Bernal-González A. Computer-aided diagnosis in rheumatic diseases using ultrasound: an overview. Clin Rheumatol 2019;39:993-1005. DOI: 10.1007/s10067-019-04791-z.
17
Zhang L, Huang J, Liu L. Improved Deep Learning Network Based in combination with Cost-sensitive Learning for Early Detection of Ovarian Cancer in Color Ultrasound Detecting System. J Med Syst 2019;43:251. PMID: 31254110; DOI: 10.1007/s10916-019-1356-8.
Abstract
With the development of theories and technologies in medical imaging, most tumors can be detected at an early stage. However, the nature of ovarian cysts is difficult to judge accurately, so many patients with benign nodules still undergo Fine Needle Aspiration (FNA) biopsies or surgeries, increasing patients' physical pain and mental stress as well as unnecessary health care costs. Therefore, we present an image diagnosis system for classifying ovarian cysts in color ultrasound images, which novelly fuses high-level features from a deep learning network with low-level features from a texture descriptor. Firstly, the ultrasound images are enhanced to improve the quality of the training dataset, and rotation-invariant uniform local binary pattern (ULBP) features are extracted from each image as the low-level texture features. Then the high-level deep features extracted by a fine-tuned GoogLeNet neural network and the low-level ULBP features are normalized and cascaded into one fusion feature that represents both the semantic context and the texture patterns distributed in the image. Finally, the fusion features are input to a cost-sensitive Random Forest classifier to classify the images as "malignant" or "benign". The high-level features extracted by the deep neural network from the medical ultrasound image reflect the visual features of the lesion region, while the low-level texture features describe the edges, direction and distribution of intensities. Experimental results indicate that the combination of the two types of features can describe the differences between lesion regions and other regions, and between the lesion regions of malignant and benign ovarian cysts.
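A hedged sketch of the low-level half of this pipeline and the fusion step: rotation-invariant uniform LBP histograms are cascaded with stand-in deep features and passed to a class-weighted random forest (a simple form of cost-sensitive learning). The random images, the placeholder "GoogLeNet" features, and the labels are assumptions, not the authors' data or trained model.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.ensemble import RandomForestClassifier

def ulbp_histogram(gray, radius=1, n_points=8):
    """Rotation-invariant uniform LBP histogram ('uniform' method in scikit-image)."""
    lbp = local_binary_pattern(gray, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2, range=(0, n_points + 2), density=True)
    return hist

rng = np.random.default_rng(3)
n_images = 60
deep_features = rng.normal(size=(n_images, 128))         # stand-in for fine-tuned GoogLeNet features
texture_features = np.stack([
    ulbp_histogram(rng.integers(0, 256, (96, 96)).astype(np.uint8)) for _ in range(n_images)
])
X = np.hstack([deep_features, texture_features])          # cascade high- and low-level features
y = rng.integers(0, 2, size=n_images)                      # 0 = benign, 1 = malignant (placeholder)

# class_weight="balanced" penalizes errors on the rarer class, a simple cost-sensitive setting.
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```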
Affiliation(s)
- Lei Zhang: The Ultrasound Centre, Tianjin central hospital of gynecology obstetrics, Tianjin, 300052, China
- Jian Huang: The Ultrasound Centre, Tianjin central hospital of gynecology obstetrics, Tianjin, 300052, China
- Li Liu: The Ultrasound Centre, Tianjin central hospital of gynecology obstetrics, Tianjin, 300052, China
18
Kriti, Virmani J, Agarwal R. Effect of despeckle filtering on classification of breast tumors using ultrasound images. Biocybern Biomed Eng 2019. DOI: 10.1016/j.bbe.2019.02.004.