1
Peng T, Wu Y, Gu Y, Xu D, Wang C, Li Q, Cai J. Intelligent contour extraction approach for accurate segmentation of medical ultrasound images. Front Physiol 2023; 14:1177351. PMID: 37675280; PMCID: PMC10479019; DOI: 10.3389/fphys.2023.1177351.
Abstract
Introduction: Accurate contour extraction in ultrasound images is of great interest for image-guided organ interventions and disease diagnosis. Nevertheless, it remains a challenging problem owing to missing or ambiguous outlines between organs (e.g., the prostate and kidney) and surrounding tissues, shadow artifacts, and the large variability in organ shape. Methods: To address these issues, we devised a method comprising four stages. In the first stage, the data sequence is acquired using an improved adaptive selection principal curve method, in which a limited number of radiologist-defined data points are adopted as the prior. The second stage uses an enhanced quantum evolution network to obtain the optimal neural network. The third stage increases the precision of the results by training the neural network with the data sequence as input. In the final stage, the contour is smoothed using an interpretable mathematical formula expressed through the model parameters of the neural network. Results: Our experiments showed that our approach outperformed other current methods, including hybrid and Transformer-based deep-learning methods, achieving an average Dice similarity coefficient, Jaccard similarity coefficient, and accuracy of 95.7 ± 2.4%, 94.6 ± 2.6%, and 95.3 ± 2.6%, respectively. Discussion: This work develops an intelligent contour extraction approach for ultrasound images. Our approach obtained more satisfactory outcomes than recent state-of-the-art approaches. Precise knowledge of organ boundaries is important for preserving risk structures (organs at risk), so our approach has the potential to enhance disease diagnosis and therapeutic outcomes.
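Editor's note: the overlap metrics reported in the Results (Dice, Jaccard, accuracy) can be reproduced for any binary segmentation mask with a few lines of NumPy. The sketch below is illustrative only and is not the authors' code; the toy masks are assumptions for demonstration.

```python
# Minimal sketch of Dice, Jaccard, and pixel accuracy for boolean segmentation masks.
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Compute Dice, Jaccard, and accuracy for two equally shaped binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    jaccard = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    accuracy = (tp + tn) / pred.size
    return {"dice": dice, "jaccard": jaccard, "accuracy": accuracy}

# Toy 4x4 masks standing in for a predicted and a ground-truth contour fill.
pred = np.array([[0, 1, 1, 0]] * 4)
gt = np.array([[0, 1, 0, 0]] * 4)
print(segmentation_metrics(pred, gt))
```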
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou, China
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, United States
- Yiyun Wu
- Department of Ultrasound, Jiangsu Province Hospital of Chinese Medicine, Nanjing, Jiangsu, China
- Yidong Gu
- Department of Medical Ultrasound, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou Municipal Hospital, Suzhou, Jiangsu, China
- Daqiang Xu
- Department of Radiology, The Affiliated Suzhou Hospital of Nanjing Medical University, Suzhou Municipal Hospital, Suzhou, Jiangsu, China
- Caishan Wang
- Department of Ultrasound, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Quan Li
- Center of Stomatology, The Second Affiliated Hospital of Soochow University, Suzhou, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
2
Peng T, Wu Y, Zhao J, Wang C, Wang J, Cai J. Ultrasound Prostate Segmentation Using Adaptive Selection Principal Curve and Smooth Mathematical Model. J Digit Imaging 2023; 36:947-963. PMID: 36729258; PMCID: PMC10287615; DOI: 10.1007/s10278-023-00783-3.
Abstract
Accurate prostate segmentation in ultrasound images is crucial for the clinical diagnosis of prostate cancer and for image-guided prostate surgery. However, it is challenging to accurately segment the prostate in ultrasound images because of their low signal-to-noise ratio, the low contrast between the prostate and neighboring tissues, and the diffuse or invisible boundaries of the prostate. In this paper, we develop a novel hybrid method for segmentation of the prostate in ultrasound images that generates accurate contours from a range of datasets. Our method involves three key steps: (1) application of a principal curve-based method to obtain a data sequence comprising data coordinates and their corresponding projection index; (2) use of the projection index as training input for a fractional-order-based neural network that increases the accuracy of the results; and (3) generation of a smooth mathematical map (expressed via the parameters of the neural network) that yields a smooth prostate boundary, which represents the output of the neural network (i.e., optimized vertices) and matches the ground-truth contour. Experimental evaluation of our method and several state-of-the-art segmentation methods on prostate ultrasound datasets from multiple institutions demonstrated that our method performed best. Furthermore, our method is robust, segmenting prostate ultrasound images from multiple institutions consistently well across various evaluation metrics.
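Editor's note: step (3) above describes a boundary expressed as a smooth map from a 1-D projection index to vertex coordinates via the parameters of a trained network. The PyTorch sketch below illustrates that general idea only; the tiny fully connected architecture, the circular toy contour, and the training settings are assumptions and are not the paper's fractional-order network.

```python
# Hedged sketch: a small MLP maps a projection index t in [0, 1] to boundary
# vertices (x, y); the trained weights then define a smooth parametric contour.
import torch
import torch.nn as nn

class BoundaryMap(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),            # (x, y) vertex coordinates
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        return self.net(t)

# Fit the map to a toy circular "ground truth" contour.
t = torch.linspace(0, 1, 128).unsqueeze(1)
target = torch.cat([torch.cos(2 * torch.pi * t), torch.sin(2 * torch.pi * t)], dim=1)
model = BoundaryMap()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(t), target)
    loss.backward()
    opt.step()
smooth_contour = model(t).detach()           # densely sampled smooth boundary
```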
Affiliation(s)
- Tao Peng
- School of Future Science and Engineering, Soochow University, Suzhou, China.
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China.
- Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, USA.
- Yiyun Wu
- Department of Ultrasound, Jiangsu Province Hospital of Chinese Medicine, Nanjing, Jiangsu, China
- Jing Zhao
- Department of Ultrasound, Tsinghua University Affiliated Beijing Tsinghua Changgung Hospital, Beijing, China
- Caishan Wang
- Department of Ultrasound, The Second Affiliated Hospital of Soochow University, Suzhou, Jiangsu, China
- Jin Wang
- School of Future Science and Engineering, Soochow University, Suzhou, China
- Jing Cai
- Department of Health Technology and Informatics, The Hong Kong Polytechnic University, Hong Kong, China
3
Miao R. Emotion Analysis and Opinion Monitoring of Social Network Users Under Deep Convolutional Neural Network. Journal of Global Information Management 2023. DOI: 10.4018/jgim.319309.
Abstract
With the development of the internet, the user behavior and emotional characteristics behind social networks have attracted scholars' attention. Meanwhile, identifying user emotion can promote the development of mobile communication technology and the industrialization of network intelligence. On this basis, this work explores the emotions of social network users and analyzes public comments on their posts. After 100 training iterations, the F1 score of the BiLSTM algorithm reaches 97.32% and its loss decreases to 1.33%, which reduces the impact of the loss on emotion recognition. This exploration is of great significance for analyzing the emotional behavior of social network users and provides a reference for the intelligent and systematic development of internet-based social models and information management.
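Editor's note: a BiLSTM text classifier of the kind evaluated above can be sketched in a few lines of PyTorch. The vocabulary size, embedding dimension, binary output, and mean pooling below are assumptions for illustration, not the paper's configuration.

```python
# Illustrative BiLSTM sentiment/emotion classifier over integer token ids.
import torch
import torch.nn as nn

class BiLSTMSentiment(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, token_ids):              # (batch, seq_len)
        x = self.embed(token_ids)
        out, _ = self.lstm(x)                  # (batch, seq_len, 2*hidden)
        return self.fc(out.mean(dim=1))        # mean-pool over time, then classify

logits = BiLSTMSentiment()(torch.randint(1, 10000, (4, 20)))
print(logits.shape)  # torch.Size([4, 2])
```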
Affiliation(s)
- Ruomu Miao
- School of Media and Communication, Shanghai Jiao Tong University, China
4
Tyagi A, Xie C, Mueller K. NAS-Navigator: Visual Steering for Explainable One-Shot Deep Neural Network Synthesis. IEEE Transactions on Visualization and Computer Graphics 2023; 29:299-309. PMID: 36166525; DOI: 10.1109/tvcg.2022.3209361.
Abstract
The success of deep learning (DL) can be attributed to hours of parameter and architecture tuning by human experts. Neural Architecture Search (NAS) techniques aim to solve this problem by automating the search for DNN architectures, making it possible for non-experts to work with DNNs. One-shot NAS techniques in particular have recently gained popularity because they reduce the search time. One-shot NAS trains a large template network through parameter sharing that includes all candidate networks, then ranks its components by evaluating randomly chosen candidate architectures. However, as these search models become increasingly powerful and diverse, they become harder to understand. Consequently, even though the search results work well, it is hard to identify search biases and control the search progression; hence the need for explainable and human-in-the-loop (HIL) one-shot NAS. To alleviate these problems, we present NAS-Navigator, a visual analytics (VA) system that addresses three issues with one-shot NAS: explainability, HIL design, and performance improvements over existing state-of-the-art (SOTA) techniques. NAS-Navigator puts full control of NAS back in the hands of users while keeping the benefits of automated search, thus assisting non-expert users. Analysts can use their domain knowledge, aided by cues from the interface, to guide the search. Evaluation results confirm that the performance of our improved one-shot NAS algorithm is comparable to other SOTA techniques, and adding visual analytics through NAS-Navigator yields further improvements in search time and performance. We designed our interface in collaboration with several deep learning researchers and evaluated NAS-Navigator through a controlled experiment and expert interviews.
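Editor's note: the one-shot idea described above, a weight-sharing "supernet" whose layers each hold several candidate operations, with randomly sampled paths evaluated as candidate architectures, can be sketched as follows. The operations, widths, and sampling scheme are illustrative assumptions and do not reflect NAS-Navigator's actual search space.

```python
# Hedged sketch of weight-sharing one-shot NAS: sample a random path per step.
import random
import torch
import torch.nn as nn

class MixedLayer(nn.Module):
    """One supernet layer holding several candidate ops with a shared width."""
    def __init__(self, width: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Linear(width, width),
            nn.Sequential(nn.Linear(width, width), nn.ReLU()),
            nn.Identity(),
        ])

    def forward(self, x, choice: int):
        return self.ops[choice](x)

class SuperNet(nn.Module):
    def __init__(self, width=16, depth=3):
        super().__init__()
        self.layers = nn.ModuleList([MixedLayer(width) for _ in range(depth)])
        self.head = nn.Linear(width, 1)

    def forward(self, x, path):
        for layer, choice in zip(self.layers, path):
            x = layer(x, choice)
        return self.head(x)

net = SuperNet()
x = torch.randn(8, 16)
path = [random.randrange(3) for _ in net.layers]   # one randomly sampled candidate
print(net(x, path).shape)                          # torch.Size([8, 1])
```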
5
Yuan J, Liu M, Tian F, Liu S. Visual Analysis of Neural Architecture Spaces for Summarizing Design Principles. IEEE Transactions on Visualization and Computer Graphics 2023; 29:288-298. PMID: 36191103; DOI: 10.1109/tvcg.2022.3209404.
Abstract
Recent advances in artificial intelligence largely benefit from better neural network architectures. These architectures are a product of a costly process of trial-and-error. To ease this process, we develop ArchExplorer, a visual analysis method for understanding a neural architecture space and summarizing design principles. The key idea behind our method is to make the architecture space explainable by exploiting structural distances between architectures. We formulate the pairwise distance calculation as solving an all-pairs shortest path problem. To improve efficiency, we decompose this problem into a set of single-source shortest path problems. The time complexity is reduced from O(kn²N) to O(knN). Architectures are hierarchically clustered according to the distances between them. A circle-packing-based architecture visualization has been developed to convey both the global relationships between clusters and local neighborhoods of the architectures in each cluster. Two case studies and a post-analysis are presented to demonstrate the effectiveness of ArchExplorer in summarizing design principles and selecting better-performing architectures.
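Editor's note: the efficiency trick described above, replacing one all-pairs shortest-path solve with a single-source shortest-path run from each architecture of interest, is the same pattern as running Dijkstra once per source node. The sketch below shows that pattern on a toy weighted graph; the graph itself is an assumption for illustration.

```python
# Pairwise distances via repeated single-source Dijkstra runs.
import heapq

def dijkstra(graph, source):
    """graph: {node: [(neighbor, weight), ...]}. Returns shortest distances from source."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("A", 1.0), ("C", 2.0)],
    "C": [("A", 4.0), ("B", 2.0)],
}
# One single-source run per node of interest gives all pairwise distances.
pairwise = {s: dijkstra(graph, s) for s in ("A", "B", "C")}
print(pairwise["A"]["C"])  # 3.0 (A -> B -> C)
```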
6
Ruddle RA, Bernard J, Lucke-Tieke H, May T, Kohlhammer J. The Effect of Alignment on People's Ability to Judge Event Sequence Similarity. IEEE Transactions on Visualization and Computer Graphics 2022; 28:3070-3081. PMID: 33434130; DOI: 10.1109/tvcg.2021.3050497.
Abstract
Event sequences are central to the analysis of data in domains that range from biology and health, to logfile analysis and people's everyday behavior. Many visualization tools have been created for such data, but people are error-prone when asked to judge the similarity of event sequences with basic presentation methods. This article describes an experiment that investigates whether local and global alignment techniques improve people's performance when judging sequence similarity. Participants were divided into three groups (basic versus local versus global alignment), and each participant judged the similarity of 180 sets of pseudo-randomly generated sequences. Each set comprised a target, a correct choice and a wrong choice. After training, the global alignment group was more accurate than the local alignment group (98 versus 93 percent correct), with the basic group getting 95 percent correct. Participants' response times were primarily affected by the number of event types, the similarity of sequences (measured by the Levenshtein distance) and the edit types (nine combinations of deletion, insertion and substitution). In summary, global alignment is superior and people's performance could be further improved by choosing alignment parameters that explicitly penalize sequence mismatches.
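Editor's note: the sequence similarity measure cited above, the Levenshtein (edit) distance, counts the minimum number of insertions, deletions, and substitutions needed to turn one sequence into another. A minimal dynamic-programming implementation is sketched below; the example sequences are assumptions for illustration.

```python
# Levenshtein (edit) distance between two event sequences via dynamic programming.
def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        curr = [i]
        for j, y in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (x != y),    # substitution (0 if symbols match)
            ))
        prev = curr
    return prev[-1]

print(levenshtein("ABCAD", "ABDAD"))  # 1 (one substitution)
```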
7
Lu X. Deep Learning Based Emotion Recognition and Visualization of Figural Representation. Front Psychol 2022; 12:818833. PMID: 35069400; PMCID: PMC8770983; DOI: 10.3389/fpsyg.2021.818833.
Abstract
This work studies speech emotion recognition and the graphic visualization of learners' expressions in an intelligent, internet-based learning environment. After comparing the performance of several deep learning neural network algorithms, an improved Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BiLSTM) algorithm is proposed, and a simulation experiment is conducted to verify its performance. The experimental results indicate that the accuracy of the CNN-BiLSTM algorithm reaches 98.75%, at least 3.15% higher than that of the other algorithms; its recall is at least 7.13% higher than that of the other algorithms; and its recognition rate is no less than 90%. The improved CNN-BiLSTM algorithm thus achieves good recognition results and provides a useful experimental reference for research on learners' emotion recognition and the graphic visualization of expressions in intelligent learning environments.
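Editor's note: a CNN-BiLSTM model of the general kind named above combines a convolutional feature extractor with a bidirectional LSTM over the resulting feature sequence. The sketch below shows one plausible arrangement over embedded tokens; all layer sizes, the six-class output, and the pooling are assumptions, not the paper's architecture.

```python
# Illustrative CNN-BiLSTM classifier: 1-D convolution for local features,
# BiLSTM for temporal context, mean pooling, then a linear classifier.
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    def __init__(self, vocab_size=8000, embed_dim=128, conv_ch=64, hidden=64, classes=6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.conv = nn.Conv1d(embed_dim, conv_ch, kernel_size=3, padding=1)
        self.lstm = nn.LSTM(conv_ch, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, classes)

    def forward(self, token_ids):                       # (batch, seq_len)
        x = self.embed(token_ids).transpose(1, 2)       # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x)).transpose(1, 2)    # (batch, seq_len, conv_ch)
        out, _ = self.lstm(x)
        return self.fc(out.mean(dim=1))

print(CNNBiLSTM()(torch.randint(1, 8000, (2, 30))).shape)  # torch.Size([2, 6])
```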
Affiliation(s)
- Xiaofeng Lu
- Department of Fine Arts, Shandong University of Arts, Jinan, China
8
Millar S, McLaughlin N, Martinez del Rincon J, Miller P. Multi-view deep learning for zero-day Android malware detection. Journal of Information Security and Applications 2021. DOI: 10.1016/j.jisa.2020.102718.
9
Ma Y, Fan A, He J, Nelakurthi AR, Maciejewski R. A Visual Analytics Framework for Explaining and Diagnosing Transfer Learning Processes. IEEE Transactions on Visualization and Computer Graphics 2021; 27:1385-1395. PMID: 33035164; DOI: 10.1109/tvcg.2020.3028888.
Abstract
Many statistical learning models assume that the training data and future unlabeled data are drawn from the same distribution. However, this assumption is difficult to fulfill in real-world scenarios and creates barriers to reusing existing labels from similar application domains. Transfer learning is intended to relax this assumption by modeling relationships between domains, and is often applied in deep learning to reduce the demand for labeled data and training time. Despite recent advances in exploring deep learning models with visual analytics tools, little work has examined how to explain and diagnose the knowledge transfer process between deep learning models. In this paper, we present a visual analytics framework for the multi-level exploration of transfer learning processes when training deep neural networks. Our framework uses a multi-aspect design to explain how the knowledge learned by an existing model is transferred into a new learning task. Based on a comprehensive requirement and task analysis, we employ descriptive visualization with performance measures and detailed inspection of model behaviors at the statistical, instance, feature, and model-structure levels. We demonstrate the framework through two case studies on image classification by fine-tuning AlexNets, illustrating how analysts can use it in practice.
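Editor's note: the fine-tuning setup referenced in the case studies above, transferring a pretrained AlexNet to a new classification task, typically looks like the PyTorch/torchvision sketch below: freeze the convolutional features and retrain a new classifier head. The 10-class target and learning rate are assumptions for illustration, not the paper's experimental settings.

```python
# Hedged sketch of transfer learning by fine-tuning a pretrained AlexNet.
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.features.parameters():        # keep the transferred conv features fixed
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, 10)    # new head for the (assumed) 10-class target task

# Only the unfrozen parameters (the classifier) are updated during fine-tuning.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-4)
```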