1. Wan P, Xue H, Zhang S, Kong W, Shao W, Wen B, Zhang D. Image by co-reasoning: A collaborative reasoning-based implicit data augmentation method for dual-view CEUS classification. Med Image Anal 2025; 102:103557. [PMID: 40174326] [DOI: 10.1016/j.media.2025.103557]
Abstract
Dual-view contrast-enhanced ultrasound (CEUS) data are often insufficient to train reliable machine learning models in typical clinical scenarios. A key issue is that limited clinical CEUS data fail to cover the underlying texture variations of specific diseases. Implicit data augmentation offers a flexible way to enrich sample diversity; however, inter-view semantic consistency has not been considered in previous studies. To address this issue, we propose a novel implicit data augmentation method for dual-view CEUS classification, which performs sample-adaptive data augmentation with collaborative semantic reasoning across views. Specifically, the method constructs a feature augmentation distribution for each ultrasound view of an individual sample, accounting for intra-class variance. To maintain semantic consistency between the augmented views, plausible semantic changes in one view are transferred from similar instances in the other view. In this retrospective study, we validate the proposed method on dual-view CEUS datasets of breast cancer and liver cancer, obtaining superior mean diagnostic accuracies of 89.25% and 95.57%, respectively. Experimental results demonstrate its effectiveness in improving model performance with limited clinical CEUS data. Code: https://github.com/wanpeng16/CRIDA.
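The per-view augmentation distribution described above can be sketched in a few lines of NumPy. This is an illustrative single-view simplification: the cross-view transfer of semantic changes is omitted, and `lam`, the covariance regularizer, and all names are hypothetical, not taken from the released CRIDA code.

```python
import numpy as np

def implicit_augment(features, labels, lam=0.5, n_aug=1, seed=0):
    """Draw augmented features from N(f, lam * Sigma_c), where Sigma_c is
    the intra-class covariance of class c, so perturbations follow directions
    of plausible semantic variation rather than isotropic noise."""
    rng = np.random.default_rng(seed)
    aug_x, aug_y = [], []
    for c in np.unique(labels):
        fc = features[labels == c]
        # Regularize the covariance so sampling stays well-defined.
        sigma = np.cov(fc, rowvar=False) + 1e-6 * np.eye(fc.shape[1])
        for f in fc:
            aug_x.append(rng.multivariate_normal(f, lam * sigma, size=n_aug))
            aug_y.extend([c] * n_aug)
    return np.vstack(aug_x), np.array(aug_y)
```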
Affiliation(s)
- Peng Wan
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Haiyan Xue
- Department of Ultrasound, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing 210008, Jiangsu, China
- Shukang Zhang
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Wentao Kong
- Department of Ultrasound, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing 210008, Jiangsu, China
- Wei Shao
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China
- Baojie Wen
- Department of Ultrasound, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing 210008, Jiangsu, China; Medical Imaging Center, Affiliated Drum Tower Hospital, Medical School of Nanjing University, Nanjing 210008, Jiangsu, China; Nanjing University Institute of Medical Imaging and Artificial Intelligence, Nanjing 210093, Jiangsu, China
- Daoqiang Zhang
- College of Artificial Intelligence, Nanjing University of Aeronautics and Astronautics, Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing 211106, China

2. Li J, Zhu L, Shen G, Zhao B, Hu Y, Zhang H, Wang W, Wang Q. Liver lesion segmentation in ultrasound: A benchmark and a baseline network. Comput Med Imaging Graph 2025; 123:102523. [PMID: 40112652] [DOI: 10.1016/j.compmedimag.2025.102523]
Abstract
Accurate liver lesion segmentation in ultrasound is a challenging task due to high speckle noise, ambiguous lesion boundaries, and inhomogeneous intensity distributions inside lesion regions. This work first collected and annotated a dataset for liver lesion segmentation in ultrasound. We then propose a novel convolutional neural network that learns dual self-attentive transformer features to boost liver lesion segmentation by leveraging the complementary information among non-local features encoded at different layers of the transformer architecture. To do so, we devise a dual self-attention refinement (DSR) module that synergistically utilizes self-attention and reverse self-attention mechanisms to extract complementary lesion characteristics between cascaded multi-layer feature maps, helping the model produce more accurate segmentation results. Moreover, we propose a False-Positive-Negative loss that enables the network to suppress non-lesion noise at shallow transformer layers and to enhance target liver lesion details in CNN features at deep transformer layers. Experimental results show that our network outperforms state-of-the-art methods quantitatively and qualitatively.
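The reverse self-attention idea behind the DSR module can be illustrated with a toy NumPy sketch. The actual module operates on learned transformer attention over cascaded feature maps, so the sigmoid gating and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def dual_self_attention_refine(feat, coarse_logits):
    """Split a feature map into an attended (lesion) stream and a
    reverse-attended (complement) stream from a coarse prediction map.

    feat:          (H, W, C) feature map
    coarse_logits: (H, W)    coarse lesion scores from an earlier layer
    """
    p = 1.0 / (1.0 + np.exp(-coarse_logits))   # sigmoid lesion probability
    attended = feat * p[..., None]             # emphasizes lesion evidence
    reverse = feat * (1.0 - p)[..., None]      # mines boundary/background cues
    return attended, reverse
```

The two streams partition the features exactly (`attended + reverse == feat`), which is what lets the complement carry the information the forward attention suppressed.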
Affiliation(s)
- Jialu Li
- The Hong Kong University of Science and Technology (Guangzhou), Guangdong, China
- Lei Zhu
- The Hong Kong University of Science and Technology (Guangzhou), Guangdong, China; Henan Key Laboratory of Imaging and Intelligent Processing, China
- Guibao Shen
- The Hong Kong University of Science and Technology (Guangzhou), Guangdong, China
- Baoliang Zhao
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Ying Hu
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China
- Hai Zhang
- The Second Clinical College of Jinan University, China; The First Affiliated Hospital of Southern University of Science and Technology, China
- Weiming Wang
- Hong Kong Metropolitan University, Hong Kong Special Administrative Region of China
- Qiong Wang
- Guangdong Provincial Key Laboratory of Computer Vision and Virtual Reality Technology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, China

3. Chi J, Chen JH, Wu B, Zhao J, Wang K, Yu X, Zhang W, Huang Y. A Dual-Branch Cross-Modality-Attention Network for Thyroid Nodule Diagnosis Based on Ultrasound Images and Contrast-Enhanced Ultrasound Videos. IEEE J Biomed Health Inform 2025; 29:1269-1282. [PMID: 39356606] [DOI: 10.1109/jbhi.2024.3472609]
Abstract
Contrast-enhanced ultrasound (CEUS) has been extensively employed as an imaging modality in thyroid nodule diagnosis due to its capacity to visualise the distribution and circulation of micro-vessels in organs and lesions in a non-invasive manner. However, current CEUS-based thyroid nodule diagnosis methods suffer from: 1) blurred spatial boundaries between nodules and other anatomies in CEUS videos, and 2) insufficient representation of the local structural information of nodule tissues when features are extracted only from CEUS videos. In this paper, we propose a novel dual-branch network with a cross-modality-attention mechanism for thyroid nodule diagnosis that integrates information from two related modalities, i.e., CEUS videos and ultrasound (US) images. The mechanism has two parts: the US-attention-from-CEUS transformer (UAC-T) and the CEUS-attention-from-US transformer (CAU-T). As such, the network imitates the manner of human radiologists by decomposing the diagnosis into two correlated tasks: 1) the spatio-temporal features extracted from CEUS are hierarchically embedded into the spatial features extracted from US with UAC-T for nodule segmentation; 2) the US spatial features are used to guide the extraction of the CEUS spatio-temporal features with CAU-T for nodule classification. The two tasks are intertwined in the dual-branch end-to-end network and optimized with a multi-task learning (MTL) strategy. The proposed method is evaluated on our collected thyroid US-CEUS dataset. Experimental results show that our method achieves a classification accuracy of 86.92%, specificity of 66.41%, and sensitivity of 97.01%, outperforming state-of-the-art methods. As a general contribution to the field of multi-modality disease diagnosis, the proposed method provides an effective way to combine static information with related dynamic information, improving the quality of deep learning based diagnosis with the additional benefit of explainability.
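A minimal NumPy sketch of the cross-modality-attention idea, with one modality's tokens querying the other's. The shapes, the shared key/value projection, and the function names are assumptions for illustration, not the paper's UAC-T/CAU-T implementation.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_modality_attention(queries, keys_values):
    """Scaled dot-product cross-attention between two feature sets.

    queries:     (Nq, d) e.g. US spatial tokens
    keys_values: (Nk, d) e.g. CEUS spatio-temporal tokens
                 (keys and values share one representation here for brevity)
    """
    d = queries.shape[-1]
    weights = softmax(queries @ keys_values.T / np.sqrt(d))
    return weights @ keys_values, weights
```

Running it in both directions (US queries CEUS, then CEUS queries US) mirrors the paper's two complementary attention branches.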

4. Brooks JA, Kallenbach M, Radu IP, Berzigotti A, Dietrich CF, Kather JN, Luedde T, Seraphin TP. Artificial Intelligence for Contrast-Enhanced Ultrasound of the Liver: A Systematic Review. Digestion 2024:1-18. [PMID: 39312896] [DOI: 10.1159/000541540]
Abstract
INTRODUCTION The research field of artificial intelligence (AI) in medicine, and especially in gastroenterology, is rapidly progressing, with the first AI tools entering routine clinical practice, for example, in colorectal cancer screening. Contrast-enhanced ultrasound (CEUS) is a highly reliable, low-risk, and low-cost diagnostic modality for examination of the liver. However, doctors need many years of training and experience to master this technique and, despite all efforts to standardize CEUS, its interpretation is often believed to show significant interrater variability. As has been shown for endoscopy, AI holds promise to support examiners at all training levels in their decision-making and efficiency. METHODS In this systematic review, we analyzed and compared original research studies applying AI methods to CEUS examinations of the liver published between January 2010 and February 2024. We performed a structured literature search on PubMed, Web of Science, and IEEE. Two independent reviewers screened the articles and extracted relevant methodological features, e.g., cohort size, validation process, machine learning algorithm used, and indicative performance measures. RESULTS We included 41 studies, most applying AI methods to classification tasks related to focal liver lesions. These included distinguishing benign versus malignant lesions or classifying the entity itself, while a few studies tried to classify tumor grading, microvascular invasion status, or response to transcatheter arterial chemoembolization directly from CEUS. Some articles tried to segment or detect focal liver lesions, while others aimed to predict survival and recurrence after ablation. The majority (25/41) of studies used hand-picked and/or annotated images as input to their models. We observed mostly good to high reported model performance, with accuracies ranging between 58.6% and 98.9%, while noticing a general lack of external validation.
CONCLUSION Even though multiple proof-of-concept studies for the application of AI methods to CEUS examinations of the liver exist and report high performance, more prospective, externally validated, and multicenter research is needed to bring such algorithms from desk to bedside.
Affiliation(s)
- James A Brooks
- Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Dusseldorf, Medical Faculty at Heinrich-Heine-University, Dusseldorf, Germany
- Michael Kallenbach
- Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Dusseldorf, Medical Faculty at Heinrich-Heine-University, Dusseldorf, Germany
- Iuliana-Pompilia Radu
- Department for Visceral Surgery and Medicine, Inselspital, University of Bern, Bern, Switzerland
- Annalisa Berzigotti
- Department for Visceral Surgery and Medicine, Inselspital, University of Bern, Bern, Switzerland
- Christoph F Dietrich
- Department Allgemeine Innere Medizin (DAIM), Kliniken Hirslanden Beau Site, Salem and Permanence, Bern, Switzerland
- Jakob N Kather
- Else Kroener Fresenius Center for Digital Health, Medical Faculty Carl Gustav Carus, Technical University Dresden, Dresden, Germany
- Tom Luedde
- Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Dusseldorf, Medical Faculty at Heinrich-Heine-University, Dusseldorf, Germany
- Tobias P Seraphin
- Department of Gastroenterology, Hepatology and Infectious Diseases, University Hospital Dusseldorf, Medical Faculty at Heinrich-Heine-University, Dusseldorf, Germany

5. Zhao L, Liu S, An Y, Cai W, Li B, Wang SH, Liang P, Yu J, Zhao J. A unified end-to-end classification model for focal liver lesions. Biomed Signal Process Control 2023; 86:105260. [DOI: 10.1016/j.bspc.2023.105260]

6. Vetter M, Waldner MJ, Zundler S, Klett D, Bocklitz T, Neurath MF, Adler W, Jesper D. Artificial intelligence for the classification of focal liver lesions in ultrasound - a systematic review. Ultraschall Med 2023; 44:395-407. [PMID: 37001563] [DOI: 10.1055/a-2066-9372]
Abstract
Focal liver lesions are detected in about 15% of abdominal ultrasound examinations. The diagnosis of frequent benign lesions can be determined reliably based on the characteristic B-mode appearance of cysts, hemangiomas, or typical focal fatty changes. In the case of focal liver lesions which remain unclear on B-mode ultrasound, contrast-enhanced ultrasound (CEUS) increases diagnostic accuracy for the distinction between benign and malignant liver lesions. Artificial intelligence describes applications that try to emulate human intelligence, at least in subfields such as the classification of images. Since ultrasound is considered to be a particularly examiner-dependent technique, the application of artificial intelligence could be an interesting approach for an objective and accurate diagnosis. In this systematic review we analyzed how artificial intelligence can be used to classify the benign or malignant nature and entity of focal liver lesions on the basis of B-mode or CEUS data. In a structured search on Scopus, Web of Science, PubMed, and IEEE, we found 52 studies that met the inclusion criteria. Studies showed good diagnostic performance for both the classification as benign or malignant and the differentiation of individual tumor entities. The results could be improved by inclusion of clinical parameters and were comparable to those of experienced investigators in terms of diagnostic accuracy. However, due to the limited spectrum of lesions included in the studies and a lack of independent validation cohorts, the transfer of the results into clinical practice is limited.
Affiliation(s)
- Marcel Vetter
- Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), University Hospital Erlangen, Erlangen, Germany
- Maximilian J Waldner
- Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), University Hospital Erlangen, Erlangen, Germany
- Sebastian Zundler
- Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), University Hospital Erlangen, Erlangen, Germany
- Daniel Klett
- Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), University Hospital Erlangen, Erlangen, Germany
- Thomas Bocklitz
- Institute of Physical Chemistry and Abbe Center of Photonics, Friedrich-Schiller-Universität Jena, Jena, Germany
- Leibniz Institute of Photonic Technology, Friedrich Schiller University Jena, Jena, Germany
- Markus F Neurath
- Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), University Hospital Erlangen, Erlangen, Germany
- Werner Adler
- Department of Medical Informatics, Biometry and Epidemiology, Friedrich-Alexander University Erlangen-Nuremberg, Erlangen, Germany
- Daniel Jesper
- Department of Internal Medicine 1 (Gastroenterology, Endocrinology and Pneumology), University Hospital Erlangen, Erlangen, Germany

7. Singh S, Hoque S, Zekry A, Sowmya A. Radiological Diagnosis of Chronic Liver Disease and Hepatocellular Carcinoma: A Review. J Med Syst 2023; 47:73. [PMID: 37432493] [PMCID: PMC10335966] [DOI: 10.1007/s10916-023-01968-7]
Abstract
Medical image analysis plays a pivotal role in the evaluation of diseases, including screening, surveillance, diagnosis, and prognosis. The liver is one of the major organs, responsible for key functions of metabolism, protein and hormone synthesis, detoxification, and waste excretion. Patients with advanced liver disease and hepatocellular carcinoma (HCC) are often asymptomatic in the early stages; however, delays in diagnosis and treatment can lead to increased rates of decompensated liver disease, late-stage HCC, morbidity, and mortality. Ultrasound (US) is a commonly used imaging modality for the diagnosis of chronic liver diseases, including fibrosis, cirrhosis, and portal hypertension. In this paper, we first provide an overview of diagnostic methods for the stages of liver disease and discuss the role of computer-aided diagnosis (CAD) systems in diagnosing liver diseases. Second, we review the utility of machine learning and deep learning approaches as diagnostic tools. Finally, we present the limitations of existing studies and outline future directions to further improve diagnostic accuracy, reduce cost and subjectivity, and improve clinical workflow.
Affiliation(s)
- Sonit Singh
- School of CSE, UNSW Sydney, High St, Kensington, 2052, NSW, Australia
- Shakira Hoque
- Gastroenterology and Hepatology Department, St George Hospital, Hogben St, Kogarah, 2217, NSW, Australia
- Amany Zekry
- St George and Sutherland Clinical Campus, School of Clinical Medicine, UNSW, High St, Kensington, 2052, NSW, Australia
- Gastroenterology and Hepatology Department, St George Hospital, Hogben St, Kogarah, 2217, NSW, Australia
- Arcot Sowmya
- School of CSE, UNSW Sydney, High St, Kensington, 2052, NSW, Australia

8. Zhang H, Guo L, Wang J, Ying S, Shi J. Multi-View Feature Transformation Based SVM+ for Computer-Aided Diagnosis of Liver Cancers With Ultrasound Images. IEEE J Biomed Health Inform 2023; 27:1512-1523. [PMID: 37018255] [DOI: 10.1109/jbhi.2022.3233717]
Abstract
It is feasible to improve the performance of B-mode ultrasound (BUS) based computer-aided diagnosis (CAD) for liver cancers by transferring knowledge from contrast-enhanced ultrasound (CEUS) images. In this work, we propose a novel feature transformation based support vector machine plus (SVM+) algorithm for this transfer learning task by introducing feature transformation into the SVM+ framework (named FSVM+). Specifically, the transformation matrix in FSVM+ is learned to minimize the radius of the enclosing ball of all samples, while the SVM+ is used to maximize the margin between two classes. Moreover, to capture more transferable information from multiple CEUS phase images, a multi-view FSVM+ (MFSVM+) is further developed, which transfers knowledge from three CEUS images from three phases, i.e., the arterial phase, portal venous phase, and delayed phase, to the BUS-based CAD model. MFSVM+ innovatively assigns an appropriate weight to each CEUS image by calculating the maximum mean discrepancy between a pair of BUS and CEUS images, which captures the relationship between source and target domains. The experimental results on a bi-modal ultrasound liver cancer dataset demonstrate that MFSVM+ achieves the best classification accuracy of 88.24 ± 1.28%, sensitivity of 88.32 ± 2.88%, and specificity of 88.17 ± 2.91%, suggesting its effectiveness in promoting the diagnostic accuracy of BUS-based CAD.
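The maximum-mean-discrepancy (MMD) view weighting can be sketched as follows. The inverse-MMD normalization is an assumed weighting rule for illustration (phases whose feature distribution is closer to BUS contribute more); the paper's exact weighting scheme and all names are not taken from its implementation.

```python
import numpy as np

def mmd2_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two feature sets (RBF kernel)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

def phase_weights(bus_feats, ceus_phase_feats, gamma=1.0):
    """Weight each CEUS phase inversely to its MMD from the BUS features."""
    mmds = np.array([mmd2_rbf(bus_feats, p, gamma) for p in ceus_phase_feats])
    inv = 1.0 / (mmds + 1e-12)
    return inv / inv.sum()
```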

9. Turco S, Tiyarattanachai T, Ebrahimkheil K, Eisenbrey J, Kamaya A, Mischi M, Lyshchik A, Kaffas AE. Interpretable Machine Learning for Characterization of Focal Liver Lesions by Contrast-Enhanced Ultrasound. IEEE Trans Ultrason Ferroelectr Freq Control 2022; 69:1670-1681. [PMID: 35320099] [PMCID: PMC9188683] [DOI: 10.1109/tuffc.2022.3161719]
Abstract
This work proposes an interpretable radiomics approach to differentiate between malignant and benign focal liver lesions (FLLs) on contrast-enhanced ultrasound (CEUS). Although CEUS has shown promise for differential FLL diagnosis, current clinical assessment is performed only by qualitative analysis of the contrast-enhancement patterns. Quantitative analysis is often hampered by the unavoidable presence of motion artifacts and by the complex, spatiotemporal nature of liver contrast enhancement, which consists of multiple overlapping vascular phases. To fully exploit the wealth of information in CEUS while coping with these challenges, we propose combining temporal and spatiotemporal features extracted from the arterial-phase enhancement with spatial features extracted by texture analysis at different time points. Using the extracted features as input, several machine learning classifiers are optimized to achieve semiautomatic FLL characterization, which requires no motion compensation and whose only manual input is the location of a suspicious lesion. Clinical validation on 87 FLLs from 72 patients at risk for hepatocellular carcinoma (HCC) showed promising performance, achieving a balanced accuracy of 0.84 in the distinction between benign and malignant lesions. Analysis of feature relevance demonstrates that a combination of spatiotemporal and texture features is needed to achieve the best performance. Interpretation of the most relevant features suggests that aspects related to microvascular perfusion and architecture, together with the spatial enhancement characteristics at wash-in and peak enhancement, are important for accurate characterization of FLLs.

10. Zhang H, Guo L, Wang D, Wang J, Bao L, Ying S, Xu H, Shi J. Multi-Source Transfer Learning Via Multi-Kernel Support Vector Machine Plus for B-Mode Ultrasound-Based Computer-Aided Diagnosis of Liver Cancers. IEEE J Biomed Health Inform 2021; 25:3874-3885. [PMID: 33861717] [DOI: 10.1109/jbhi.2021.3073812]
Abstract
B-mode ultrasound (BUS) imaging is a routine tool for the diagnosis of liver cancers, while contrast-enhanced ultrasound (CEUS) adds information on local tissue vascularization and perfusion that promotes diagnostic accuracy. In this work, we propose to improve BUS-based computer-aided diagnosis of liver cancers by transferring knowledge from multi-view CEUS images, comprising the arterial, portal venous, and delayed phases. To make full use of the shared labels of paired BUS and CEUS images to guide knowledge transfer, support vector machine plus (SVM+), a transfer learning (TL) classifier specifically designed for paired data with shared labels, is adopted for this supervised TL. A nonparallel-hyperplane SVM+ (NHSVM+) is first proposed to improve TL performance by transferring per-class knowledge from the source domain to the corresponding target domain. Moreover, to handle multi-source TL, a multi-kernel-learning-based NHSVM+ (MKL-NHSVM+) algorithm is further developed to effectively transfer multi-source knowledge from the multi-view CEUS images. The experimental results indicate that the proposed MKL-NHSVM+ outperforms all compared algorithms for the diagnosis of liver cancers, with mean classification accuracy, sensitivity, and specificity of 88.18 ± 3.16%, 86.98 ± 4.77%, and 89.42 ± 3.77%, respectively.

11. Wan P, Chen F, Liu C, Kong W, Zhang D. Hierarchical Temporal Attention Network for Thyroid Nodule Recognition Using Dynamic CEUS Imaging. IEEE Trans Med Imaging 2021; 40:1646-1660. [PMID: 33651687] [DOI: 10.1109/tmi.2021.3063421]
Abstract
Contrast-enhanced ultrasound (CEUS) has emerged as a popular imaging modality in thyroid nodule diagnosis due to its ability to visualize vascular distribution in real time. Recently, a number of learning-based methods have been dedicated to mining pathology-related enhancement dynamics and making a prediction in one step, ignoring a native diagnostic dependency: in the clinic, the differentiation of benign from malignant nodules always precedes the recognition of pathological types. In this paper, we propose a novel hierarchical temporal attention network (HiTAN) for thyroid nodule diagnosis using dynamic CEUS imaging, which unifies dynamic enhancement feature learning and hierarchical nodule classification in a deep framework. Specifically, the method decomposes the diagnosis of nodules into an ordered two-stage classification task, where the diagnostic dependency is modeled by gated recurrent units (GRUs). Besides, we design a local-to-global temporal aggregation (LGTA) operator to perform comprehensive temporal fusion along the hierarchical prediction path. Particularly, local temporal information is defined as typical enhancement patterns identified with the guidance of perfusion representations learned at the differentiation level. Then, we leverage an attention mechanism to embed global enhancement dynamics into each identified salient pattern. In this study, we evaluate the proposed HiTAN method on the collected CEUS dataset of thyroid nodules. Extensive experimental results validate the efficacy of the dynamic pattern learning, fusion, and hierarchical diagnosis mechanism.
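The attention-based temporal aggregation can be illustrated with a minimal NumPy sketch: per-frame CEUS features are pooled into one clip descriptor with softmax attention weights. The fixed scoring vector stands in for a learned attention head, and all names are illustrative, not the LGTA operator itself.

```python
import numpy as np

def temporal_attention_pool(frame_feats, score_vec):
    """Aggregate per-frame features (T, C) into one clip descriptor (C,).

    score_vec: (C,) scoring vector, a stand-in for a learned attention head.
    Returns the pooled descriptor and the attention weights over frames.
    """
    scores = frame_feats @ score_vec
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w = w / w.sum()
    return w @ frame_feats, w
```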

12. Zeng X, Wen L, Liu B, Qi X. Deep learning for ultrasound image caption generation based on object detection. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2018.11.114]

13. Huang Q, Pan F, Li W, Yuan F, Hu H, Huang J, Yu J, Wang W. Differential Diagnosis of Atypical Hepatocellular Carcinoma in Contrast-Enhanced Ultrasound Using Spatio-Temporal Diagnostic Semantics. IEEE J Biomed Health Inform 2020; 24:2860-2869. [PMID: 32149699] [DOI: 10.1109/jbhi.2020.2977937]
Abstract
Atypical hepatocellular carcinoma (HCC) is very hard to distinguish from focal nodular hyperplasia (FNH) in routine imaging; however, little attention has been paid to this problem. This paper proposes a novel liver tumor computer-aided diagnostic (CAD) approach that extracts spatio-temporal semantics for atypical HCC. For the diagnostically useful semantics, the model automatically calculates three types of semantic features from equally down-sampled frames of contrast-enhanced ultrasound (CEUS). Thereafter, a support vector machine (SVM) classifier is trained to make the final diagnosis. Compared with traditional methods for diagnosing HCC, the proposed model has the advantage of lower computational complexity and the ability to handle atypical HCC cases. The experimental results show that our method achieved considerable performance and outperformed two traditional methods: the average accuracy reaches 94.40%, recall 94.76%, F1-score 94.62%, specificity 93.62%, and sensitivity 94.76%, indicating good merit for automatically diagnosing atypical HCC cases.
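Spatio-temporal semantics of this kind can be computed from a lesion time-intensity curve sampled at down-sampled frames. The three features below (wash-in slope, time to peak, wash-out slope) are illustrative stand-ins, since the abstract does not enumerate the paper's exact semantic features.

```python
import numpy as np

def spatiotemporal_semantics(tic, times):
    """Extract simple enhancement semantics from a time-intensity curve (TIC).

    tic:   (T,) mean lesion intensity per down-sampled frame
    times: (T,) frame timestamps (seconds)
    Returns [wash-in slope, time to peak, wash-out slope].
    """
    peak = int(np.argmax(tic))
    wash_in = (tic[peak] - tic[0]) / max(times[peak] - times[0], 1e-9)
    time_to_peak = times[peak] - times[0]
    wash_out = (tic[-1] - tic[peak]) / max(times[-1] - times[peak], 1e-9)
    return np.array([wash_in, time_to_peak, wash_out])
```

Vectors like this, one per case, are the kind of low-dimensional input on which an SVM classifier can then be trained.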

14. Mishra D, Chaudhury S, Sarkar M, Soin AS. Ultrasound Image Segmentation: A Deeply Supervised Network With Attention to Boundaries. IEEE Trans Biomed Eng 2018; 66:1637-1648. [PMID: 30346279] [DOI: 10.1109/tbme.2018.2877577]
Abstract
OBJECTIVE Segmentation of anatomical structures in ultrasound images requires vast radiological knowledge and experience. Moreover, manual segmentation often introduces subjective variation; an automatic segmentation method is therefore desirable. We aim to develop a fully convolutional neural network (FCNN) with attentional deep supervision for automatic and accurate segmentation of ultrasound images. METHOD FCNNs/CNNs are used to infer high-level context from low-level image features. In this paper, a sub-problem-specific deep supervision of the FCNN is performed. The attention of fine-resolution layers is steered toward learning object boundary definitions through auxiliary losses, whereas coarse-resolution layers are trained to discriminate object regions from the background. Furthermore, a customized scheme for downweighting the auxiliary losses and a trainable fusion layer are introduced. This produces accurate segmentations and helps deal with the broken boundaries commonly found in ultrasound images. RESULTS The proposed network is first tested for blood vessel segmentation in liver images, yielding an F1 score, mean intersection over union, and Dice index of 0.83, 0.83, and 0.79, respectively. The best values among existing approaches are produced by U-Net: 0.74, 0.81, and 0.75, respectively. The proposed network also achieves a Dice index of 0.91 in the lumen segmentation experiments on the MICCAI 2011 IVUS challenge dataset, close to the provided reference value of 0.93. Improvements similar to those in the vessel segmentation experiments are also observed when segmenting lesions. CONCLUSION Deep supervision of the network based on the input-output characteristics of the layers improves overall segmentation accuracy. SIGNIFICANCE Sub-problem-specific deep supervision for ultrasound image segmentation is the main contribution of this paper. Currently, the network is trained and tested on fixed-size inputs; this requires image resizing and limits performance on small images.
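The downweighted auxiliary-loss scheme described in this abstract can be sketched as follows. This is a minimal illustration only: the geometric decay and the `base_weight`/`decay` parameters are assumptions, not the paper's actual customized schedule.

```python
def deeply_supervised_loss(main_loss, aux_losses, base_weight=0.5, decay=0.5):
    """Combine a main segmentation loss with auxiliary losses attached to
    intermediate layers, downweighting each successive auxiliary term.

    main_loss: scalar loss at the final fused output.
    aux_losses: scalar losses from auxiliary (deeply supervised) outputs,
    ordered from the layer nearest the output to the deepest one.
    """
    total = main_loss
    for i, aux in enumerate(aux_losses):
        # Hypothetical geometric downweighting; the paper defines its own scheme.
        total += base_weight * (decay ** i) * aux
    return total
```

In a training loop, `main_loss` and each entry of `aux_losses` would come from per-output criteria (e.g., boundary losses at fine-resolution layers, region losses at coarse ones), and the returned scalar would be backpropagated.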
Collapse
|
15
|
Liu Y, Chen Y, Han B, Zhang Y, Zhang X, Su Y. Fully automatic Breast ultrasound image segmentation based on fuzzy cellular automata framework. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2017.09.014] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
16
|
Ta CN, Kono Y, Eghtedari M, Oh YT, Robbin ML, Barr RG, Kummel AC, Mattrey RF. Focal Liver Lesions: Computer-aided Diagnosis by Using Contrast-enhanced US Cine Recordings. Radiology 2017; 286:1062-1071. [PMID: 29072980 DOI: 10.1148/radiol.2017170365] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
Purpose To assess the performance of computer-aided diagnosis (CAD) systems and to determine the dominant ultrasonographic (US) features when classifying benign versus malignant focal liver lesions (FLLs) by using contrast material-enhanced US cine clips. Materials and Methods One hundred six US data sets in all subjects enrolled by three centers from a multicenter trial that included 54 malignant, 51 benign, and one indeterminate FLL were retrospectively analyzed. The 105 benign or malignant lesions were confirmed at histologic examination, contrast-enhanced computed tomography (CT), dynamic contrast-enhanced magnetic resonance (MR) imaging, and/or 6 or more months of clinical follow-up. Data sets included 3-minute cine clips that were automatically corrected for in-plane motion and automatically filtered out frames acquired off plane. B-mode and contrast-specific features were automatically extracted on a pixel-by-pixel basis and analyzed by using an artificial neural network (ANN) and a support vector machine (SVM). Areas under the receiver operating characteristic curve (AUCs) for CAD were compared with those for one experienced and one inexperienced blinded reader. A third observer graded cine quality to assess its effects on CAD performance. Results CAD, the inexperienced observer, and the experienced observer were able to analyze 95, 100, and 102 cine clips, respectively. The AUCs for the SVM, ANN, and experienced and inexperienced observers were 0.883 (95% confidence interval [CI]: 0.793, 0.940), 0.829 (95% CI: 0.724, 0.901), 0.843 (95% CI: 0.756, 0.903), and 0.702 (95% CI: 0.586, 0.782), respectively; only the difference between SVM and the inexperienced observer was statistically significant. 
Accuracy improved from 71.3% (67 of 94; 95% CI: 60.6%, 79.8%) to 87.7% (57 of 65; 95% CI: 78.5%, 93.8%) when CAD agreed with the inexperienced reader, and from 80.9% (76 of 94; 95% CI: 72.3%, 88.3%) to 90.3% (65 of 72; 95% CI: 80.6%, 95.8%) when it agreed with the experienced reader. B-mode heterogeneity and contrast material washout were the most discriminating features selected by CAD across all iterations. CAD selected time-based time-intensity curve (TIC) features 99.0% (207 of 209) of the time to classify FLLs, versus 1.0% (two of 209) for intensity-based features. None of the 15 video-quality criteria had a statistically significant effect on CAD accuracy; all P values were greater than the Holm-Sidak α-level correction for multiple comparisons. Conclusion CAD systems classified benign and malignant FLLs with an accuracy similar to that of an expert reader, and CAD improved the accuracy of both readers. Time-based TIC features were more discriminating than intensity-based features. © RSNA, 2017. Online supplemental material is available for this article.
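As a rough illustration of the time-based TIC features this abstract describes, the sketch below computes two simple ones (time to peak and post-peak wash-out slope) from a sampled curve. The feature definitions and function name are assumptions for illustration, not the study's pixel-by-pixel implementation.

```python
def tic_features(times, intensities):
    """Extract two simple time-based features from a contrast
    time-intensity curve (TIC) sampled at the given time points.

    Returns (time_to_peak, washout_slope), where washout_slope is the
    mean intensity change per unit time after the peak (0.0 if the
    peak falls on the last sample).
    """
    # Index of the maximum enhancement.
    peak_idx = max(range(len(intensities)), key=intensities.__getitem__)
    time_to_peak = times[peak_idx] - times[0]
    if peak_idx < len(times) - 1:
        washout_slope = (intensities[-1] - intensities[peak_idx]) / (
            times[-1] - times[peak_idx]
        )
    else:
        washout_slope = 0.0
    return time_to_peak, washout_slope
```

A CAD pipeline like the one described would compute features of this kind per pixel over motion-corrected cine frames before feeding them to an SVM or ANN.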
Collapse
Affiliation(s)
- Casey N Ta
- From the Department of Electrical and Computer Engineering (C.N.T.), Departments of Medicine and Radiology (Y.K.), Department of Radiology (M.E.), and Department of Chemistry and Biochemistry (A.C.K.), University of California, San Diego, La Jolla, Calif; Department of Radiology and Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Korea (Y.T.O.); Department of Radiology, University of Alabama at Birmingham, Birmingham, Ala (M.L.R.); Southwoods Imaging, Youngstown, Ohio and Northeastern Ohio Medical University, Rootstown, Ohio (R.G.B.); and Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Blvd, Room D1.204, Dallas, TX 75390-8514 (R.F.M.)
- Yuko Kono
- Mohammad Eghtedari
- Young Taik Oh
- Michelle L Robbin
- Richard G Barr
- Andrew C Kummel
- Robert F Mattrey
Collapse
|
17
|
Sun XL, Yao H, Men Q, Hou KZ, Chen Z, Xu CQ, Liang LW. Combination of acoustic radiation force impulse imaging, serological indexes and contrast-enhanced ultrasound for diagnosis of liver lesions. World J Gastroenterol 2017; 23:5602-5609. [PMID: 28852319 PMCID: PMC5558123 DOI: 10.3748/wjg.v23.i30.5602] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/27/2017] [Revised: 04/24/2017] [Accepted: 05/09/2017] [Indexed: 02/06/2023] Open
Abstract
AIM To assess the value of combined acoustic radiation force impulse (ARFI) imaging, serological indexes and contrast-enhanced ultrasound (CEUS) in distinguishing between benign and malignant liver lesions.
METHODS Patients with liver lesions treated at our hospital were included in this study. The lesions were classified as malignant or benign according to pathological or radiological findings. ARFI quantitative detection, serological testing and CEUS quantitative detection were performed, and the measured indexes were compared between the two groups. Receiver operating characteristic (ROC) curves were constructed to compare the diagnostic accuracy of ARFI imaging, serological indexes and CEUS, alone or in different combinations, in distinguishing benign from malignant liver lesions.
RESULTS A total of 112 liver lesions in 43 patients were included, of which 78 were malignant and 34 were benign. Shear wave velocity (SWV), serum alpha-fetoprotein (AFP) content and enhancement rate were significantly higher in the malignant tumor group than in the benign tumor group (2.39 ± 1.20 m/s vs 1.50 ± 0.49 m/s, 18.02 ± 5.01 ng/mL vs 15.96 ± 4.33 ng/mL, 2.14 ± 0.21 dB/s vs 2.01 ± 0.31 dB/s; P < 0.05). ROC curve analysis revealed that the areas under the curves (AUCs) for SWV alone, AFP alone, enhancement rate alone, SWV + AFP, SWV + enhancement rate, AFP + enhancement rate, and SWV + AFP + enhancement rate were 85.1%, 72.1%, 74.5%, 88.3%, 90.4%, 82.0% and 92.3%, respectively. The AUC of SWV + AFP + enhancement rate was higher than those of SWV + AFP and SWV + enhancement rate, and significantly higher than those of any single parameter or any two-parameter combination.
CONCLUSION The combination of SWV, AFP and enhancement rate distinguished benign from malignant liver lesions better than any single parameter or any two-parameter combination, and may provide a tool for the differential diagnosis of benign and malignant liver lesions.
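AUC comparisons like those in this abstract can be reproduced for any single index or combined score with a rank-based (Mann-Whitney) estimate. This is a generic sketch, not the study's statistical software, and the toy scores in the usage note are hypothetical.

```python
def auc_from_scores(pos_scores, neg_scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen malignant case scores
    higher than a randomly chosen benign one (ties count half).
    """
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

A combined score (e.g., a weighted sum of SWV, AFP and enhancement rate fitted by logistic regression) would be passed through the same function; the combination rule itself is not specified in the abstract.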
Collapse
|
18
|
Chen Y, Yue X, Fujita H, Fu S. Three-way decision support for diagnosis on focal liver lesions. Knowl Based Syst 2017. [DOI: 10.1016/j.knosys.2017.04.008] [Citation(s) in RCA: 66] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022]
|