1
Balavenkatasubramanian J, Kumar S, Sanjayan RD. Artificial intelligence in regional anaesthesia. Indian J Anaesth 2024;68:100-104. PMID: 38406349; PMCID: PMC10893813; DOI: 10.4103/ija.ija_1274_23.
Abstract
Ultrasound-guided regional anaesthesia is used to facilitate real-time performance of the regional block, increase block success and reduce the complication rate. Artificial intelligence (AI) has been studied in many medical disciplines with high success rates, especially in radiology. The purpose of this article was to review the evolution of AI in regional anaesthesia. The role of AI is to identify and optimise the sonography image, display the target, guide the practitioner in advancing the needle tip to the intended target, and inject the local anaesthetic. AI supports non-experts in training and clinical practice, and experts in teaching ultrasound-guided regional anaesthesia.
Affiliation(s)
- J Balavenkatasubramanian, Senior Consultant and Academic Director, Ganga Medical Centre and Hospital Pvt Ltd, Coimbatore, Tamil Nadu, India
- Senthil Kumar, Consultant Anaesthesiologist, Ganga Medical Centre and Hospital Pvt Ltd, Coimbatore, Tamil Nadu, India
- R. D. Sanjayan, Department of Anaesthesia, Ganga Medical Centre and Hospital Pvt Ltd, Coimbatore, Tamil Nadu, India
2
Li D, Peng Y, Sun J, Guo Y. A task-unified network with transformer and spatial-temporal convolution for left ventricular quantification. Sci Rep 2023;13:13529. PMID: 37598235; PMCID: PMC10439898; DOI: 10.1038/s41598-023-40841-y.
Abstract
Quantification of cardiac function is vital for diagnosing and treating cardiovascular diseases. Left ventricular function is the measure most commonly used to evaluate cardiac function in clinical practice, and improving the accuracy of left ventricular quantitative assessment has long been a subject of medical research. Although considerable effort has been devoted to measuring the left ventricle (LV) automatically with deep learning methods, accurate quantification remains challenging because the anatomical structure of the heart changes over the systolic-diastolic cycle. Moreover, most methods rely on direct regression, which lacks visually grounded analysis. In this work, a deep learning segmentation and regression task-unified network with transformer and spatial-temporal convolution is proposed to segment and quantify the LV simultaneously. The segmentation module leverages a U-Net-like 3D Transformer model to predict the contours of three anatomical structures, while the regression module learns spatial-temporal representations from the original images and the reconstructed feature maps from the segmentation path to estimate the desired quantification metrics. Furthermore, a joint task loss function is employed to train the two modules. The framework is evaluated on the MICCAI 2017 Left Ventricle Full Quantification Challenge dataset. The experimental results demonstrate the effectiveness of the framework, which achieves competitive cardiac quantification metrics while producing visualized segmentation results that support later analysis.
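A joint task loss of the kind described above — one term scoring the segmentation, one term scoring the regressed LV indices — can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the soft-Dice segmentation term, the mean-absolute-error regression term and the weight `lam` are all assumptions.

```python
import numpy as np

def dice_loss(pred_mask, gt_mask, eps=1e-6):
    # Soft Dice loss: 0 for a perfect overlap, approaching 1 for none.
    inter = np.sum(pred_mask * gt_mask)
    return 1.0 - (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)

def mae(pred_vals, gt_vals):
    # Mean absolute error on the quantification indices (areas, dimensions, ...).
    return float(np.mean(np.abs(np.asarray(pred_vals) - np.asarray(gt_vals))))

def joint_task_loss(pred_mask, gt_mask, pred_indices, gt_indices, lam=1.0):
    # Weighted sum of the segmentation term and the regression term,
    # so one backward pass trains both modules together.
    return dice_loss(pred_mask, gt_mask) + lam * mae(pred_indices, gt_indices)
```

In a real training loop the two terms would be computed on network outputs and backpropagated jointly; `lam` trades segmentation quality against quantification accuracy.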
Affiliation(s)
- Dapeng Li, Shandong University of Science and Technology, Qingdao, China
- Yanjun Peng, Shandong University of Science and Technology, Qingdao, China; Shandong Province Key Laboratory of Wisdom Mining Information Technology, Qingdao, China
- Jindong Sun, Shandong University of Science and Technology, Qingdao, China
- Yanfei Guo, Shandong University of Science and Technology, Qingdao, China
3
Viderman D, Dossov M, Seitenov S, Lee MH. Artificial intelligence in ultrasound-guided regional anesthesia: A scoping review. Front Med (Lausanne) 2022;9:994805. PMID: 36388935; PMCID: PMC9640918; DOI: 10.3389/fmed.2022.994805.
Abstract
Background: Regional anesthesia is increasingly used in acute postoperative pain management. Ultrasound has been used to facilitate the performance of the regional block, increase the percentage of successfully performed procedures and reduce the complication rate. Artificial intelligence (AI) has been studied in many medical disciplines with high success, especially in radiology. The purpose of this review was to summarize the evidence on the application of AI for optimization and interpretation of the sonographic image, and visualization of needle advancement and injection of local anesthetic.
Methods: To conduct this scoping review, we followed the PRISMA-S guidelines. Studies were included if they met the following criteria: (1) application of AI in ultrasound-guided regional anesthesia; (2) any human subject (of any age), object (manikin), or animal; (3) study design: prospective, retrospective, or RCT; (4) any method of regional anesthesia (epidural, spinal anesthesia, peripheral nerves); (5) any anatomical localization of regional anesthesia (any nerve or plexus); (6) any method of artificial intelligence; (7) settings: any healthcare setting (medical centers, hospitals, clinics, laboratories).
Results: The systematic searches identified 78 citations. After removal of duplicates, 19 full-text articles were assessed, and 15 studies were eligible for inclusion in the review.
Conclusions: AI solutions might be useful in anatomical landmark identification, reducing or even avoiding possible complications. AI-guided solutions can improve the optimization and interpretation of the sonographic image, visualization of needle advancement, and injection of local anesthetic. AI-guided solutions might also improve the training process in ultrasound-guided regional anesthesia (UGRA). Although significant progress has been made in the application of AI to UGRA, randomized controlled trials are still missing.
Affiliation(s)
- Dmitriy Viderman, Department of Biomedical Sciences, Nazarbayev University School of Medicine, Nur-Sultan, Kazakhstan
- Mukhit Dossov, Department of Anesthesiology and Critical Care, Presidential Hospital, Nur-Sultan, Kazakhstan
- Serik Seitenov, Department of Anesthesiology and Critical Care, Presidential Hospital, Nur-Sultan, Kazakhstan
- Min-Ho Lee, Department of Computer Sciences, Nazarbayev University School of Engineering and Digital Sciences, Nur-Sultan, Kazakhstan
4
Wang Y, Chen W, Tang T, Xie W, Jiang Y, Zhang H, Zhou X, Yuan K. Cardiac Segmentation Method Based on Domain Knowledge. Ultrason Imaging 2022;44:105-117. PMID: 35574925; DOI: 10.1177/01617346221099435.
Abstract
Echocardiography plays an important role in the clinical diagnosis of cardiovascular diseases, and cardiac function assessment by echocardiography is a crucial process in daily cardiology. However, cardiac segmentation in echocardiography is a challenging task due to shadows and speckle noise, and traditional manual segmentation is time-consuming and limited by inter-observer variability. In this paper, we present a fast and accurate framework for automatic echocardiographic segmentation based on convolutional neural networks (CNNs). We propose FAUet, a segmentation method that serially integrates U-Net with a coordinate attention mechanism and a domain feature loss derived from a VGG19 network pre-trained on the ImageNet dataset. The coordinate attention mechanism captures long-range dependencies along one spatial direction while preserving precise positional information along the other, and the domain feature loss attends to the topology of cardiac structures by exploiting their higher-level features. In this research, we use two-dimensional echocardiograms (2DE) of 88 patients from two devices, Philips Epiq 7C and Mindray Resona 7T, to segment the left ventricle (LV), interventricular septum (IVS), and posterior left ventricular wall (PLVW). We also use gradient-weighted class activation mapping (Grad-CAM) to improve the interpretability of the segmentation results. Compared with the traditional U-Net, the proposed method shows better performance: the mean Dice score coefficient (Dice) for LV, IVS, and PLVW reaches 0.932, 0.848, and 0.868, respectively, and the average Dice over the three structures reaches 0.883. Statistical analysis showed no significant difference between the segmentation results of the two devices. The proposed method enables fast and accurate segmentation of 2DE at low time cost, and combining the coordinate attention module and feature loss with the original U-Net framework significantly increases the performance of the algorithm.
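The Dice score reported above is simple to compute from label maps. A minimal sketch (assuming integer label ids for LV, IVS and PLVW; the smoothing constant `eps` is an implementation detail, not from the paper):

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    # Dice similarity coefficient between two binary masks:
    # 2*|A∩B| / (|A| + |B|), in [0, 1].
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def mean_dice(pred_labels, gt_labels, classes):
    # Average per-structure Dice, e.g. over the LV, IVS and PLVW label ids.
    return float(np.mean([dice(pred_labels == c, gt_labels == c) for c in classes]))
```

For multi-structure evaluation the per-class scores are usually reported alongside the mean, as in the abstract above.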
Affiliation(s)
- Yingni Wang, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
- Wenbin Chen, Department of Echocardiography, Fuwai Hospital Chinese Academy of Medical Sciences, Shenzhen, China
- Tianhong Tang, Department of Echocardiography, Fuwai Hospital Chinese Academy of Medical Sciences, Shenzhen, China
- Wenquan Xie, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
- Yong Jiang, Department of Echocardiography, Fuwai Hospital Chinese Academy of Medical Sciences, Shenzhen, China
- Huabin Zhang, Beijing Tsinghua Changgung Hospital, Tsinghua University, Beijing, China
- Xiaobo Zhou, School of Biomedical Informatics, University of Texas Health Sciences Center at Houston, Houston, TX, USA
- Kehong Yuan, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
5
Nascimento JC, Carneiro G. One Shot Segmentation: Unifying Rigid Detection and Non-Rigid Segmentation Using Elastic Regularization. IEEE Trans Pattern Anal Mach Intell 2020;42:3054-3070. PMID: 31217094; DOI: 10.1109/tpami.2019.2922959.
Abstract
This paper proposes a novel approach for the non-rigid segmentation of deformable objects in image sequences, based on one-shot segmentation that unifies rigid detection and non-rigid segmentation using elastic regularization. The domain of application is the segmentation of a visual object that temporally undergoes a rigid transformation (e.g., affine transformation) and a non-rigid transformation (i.e., contour deformation). The majority of segmentation approaches to this problem run two steps in sequence: a rigid detection, followed by a non-rigid segmentation. In this paper, we propose a new approach in which both the rigid and non-rigid segmentation are performed in a single shot using a sparse low-dimensional manifold that represents the visual object deformations. Given the multi-modality of these deformations, the manifold partitions the training data into several patches, where each patch provides a segmentation proposal during the inference process. These multiple segmentation proposals are merged using the classification results produced by deep belief networks (DBN) that compute the confidence in each segmentation proposal; an ensemble of DBN classifiers thus estimates the final segmentation. Compared to current methods in the field, the proposed approach is advantageous in four aspects: (i) it is a unified framework producing rigid and non-rigid segmentations; (ii) it uses an ensemble classification process, which improves segmentation robustness; (iii) it significantly reduces the dimensionality of the rigid and non-rigid segmentation search spaces compared to current approaches that divide the two problems; and (iv) this lower-dimensional search space also reduces the need for large annotated training sets for estimating the DBN models.
Experiments on the problem of left ventricle endocardial segmentation from ultrasound images, and lip segmentation from frontal facial images using the extended Cohn-Kanade (CK+) database, demonstrate the potential of the methodology through qualitative and quantitative evaluations, and the ability to reduce the search and training complexities without a significant impact on the segmentation accuracy.
6
Arafati A, Morisawa D, Avendi MR, Amini MR, Assadi RA, Jafarkhani H, Kheradvar A. Generalizable fully automated multi-label segmentation of four-chamber view echocardiograms based on deep convolutional adversarial networks. J R Soc Interface 2020;17:20200267. PMID: 32811299; PMCID: PMC7482559; DOI: 10.1098/rsif.2020.0267.
Abstract
A major issue in translating artificial intelligence platforms for automatic segmentation of echocardiograms to the clinic is their generalizability. The present study introduces and verifies a novel, generalizable, and efficient fully automatic multi-label segmentation method for four-chamber view echocardiograms based on deep fully convolutional networks (FCNs) and adversarial training. For the first time, we used generative adversarial networks for pixel-classification training, a method not previously used in cardiac imaging, to overcome the generalization problem. The method's performance was validated against manual segmentations as the ground truth. Furthermore, to verify our method's generalizability in comparison with other existing techniques, we compared its performance with a state-of-the-art method on our dataset as well as on an independent dataset of 450 patients from the CAMUS (cardiac acquisitions for multi-structure ultrasound segmentation) challenge. On our test dataset, automatic segmentation of all four chambers achieved Dice metrics of 92.1%, 86.3%, 89.6% and 91.4% for LV, RV, LA and RA, respectively. Correlations between automatic and manual LV volumes were 0.94 for end-diastolic volume and 0.93 for end-systolic volume. Excellent agreement with the chambers' reference contours and significant improvement over previous FCN-based methods suggest that generative adversarial networks for pixel-classification training can effectively yield generalizable fully automatic FCN-based networks for four-chamber segmentation of echocardiograms, even with a limited amount of training data.
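The core idea of adversarial segmentation training — a discriminator scores predicted masks as "real" (ground-truth-like) or "fake", and that score feeds back into the segmentation objective — can be illustrated with a toy linear discriminator. This is a didactic sketch, not the paper's FCN/GAN architecture: the parameters `w`, `b` and the weight `lam` are placeholders.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dice_loss(pred_mask, gt_mask, eps=1e-6):
    inter = np.sum(pred_mask * gt_mask)
    return 1.0 - (2.0 * inter + eps) / (pred_mask.sum() + gt_mask.sum() + eps)

def adversarial_seg_loss(pred_mask, gt_mask, w, b, lam=0.1):
    # Generator objective: segmentation term plus an adversarial term that
    # is small when the (toy, linear) discriminator scores the predicted
    # mask as "real", i.e. ground-truth-like.
    d_score = sigmoid(float(np.dot(w, pred_mask.ravel())) + b)
    adv = -np.log(d_score + 1e-12)
    return dice_loss(pred_mask, gt_mask) + lam * adv
```

In the full method the discriminator is itself a trained network, updated alternately with the segmenter; here it is frozen purely to show how its score enters the loss.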
Affiliation(s)
- Arghavan Arafati, The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA
- Daisuke Morisawa, The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA
- Michael R. Avendi, The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA; Center for Pervasive Communications and Computing, University of California, 4217 Engineering Hall, Irvine, CA 92697-2700, USA
- M. Reza Amini, Loma Linda University Medical Center, Loma Linda, CA 92354, USA
- Ramin A. Assadi, Division of Cardiology, David Geffen School of Medicine at UCLA, Los Angeles, CA 90095, USA
- Hamid Jafarkhani, Center for Pervasive Communications and Computing, University of California, 4217 Engineering Hall, Irvine, CA 92697-2700, USA
- Arash Kheradvar, The Edwards Lifesciences Center for Advanced Cardiovascular Technology, University of California, 2410 Engineering Hall, Irvine, CA 92697-2730, USA
7
Ge R, Yang G, Chen Y, Luo L, Feng C, Ma H, Ren J, Li S. K-Net: Integrate Left Ventricle Segmentation and Direct Quantification of Paired Echo Sequence. IEEE Trans Med Imaging 2020;39:1690-1702. PMID: 31765307; DOI: 10.1109/tmi.2019.2955436.
Abstract
The integration of segmentation and direct quantification of the left ventricle (LV) from the paired apical views (i.e., apical 4-chamber and 2-chamber together) of an echo sequence clinically achieves comprehensive cardiac assessment: multiview segmentation for anatomical morphology, and multidimensional quantification for contractile function. Direct quantification of the LV, i.e., automatically quantifying multiple LV indices directly from the image via task-aware feature representation and regression, avoids accumulating error from intermediate targets. This integration gives a stereoscopic reflection of cardiac activity jointly from the paired orthogonal cross-view sequences, overcoming the limited observation of a single plane. We propose a K-shaped Unified Network (K-Net), the first end-to-end framework to simultaneously segment the LV from apical 4-chamber and 2-chamber views and directly quantify the LV in major- and minor-axis dimensions (1D), area (2D), and volume (3D). It works via four components:
1) the K-Net architecture with the Attention Junction enables heterogeneous-task learning of pixel-wise classification (segmentation) and image-wise regression (direct quantification), by introducing information from segmentation to promote a spatial attention map that guides quantification toward the LV-related region, and transferring quantification feedback as a global constraint on segmentation;
2) the Bi-ResLSTMs distributed layer-by-layer in K-Net hierarchically extract spatial-temporal information from the echo sequence, with bidirectional recurrence and short-cut connections to model spatial-temporal information among all frames;
3) the Information Valve following the Bi-ResLSTMs selectively exchanges information among multiple views, stimulating complementary information and suppressing redundant information for an efficient cross-flow between views;
4) the Evolution Loss comprehensively guides sequential data learning, with a static constraint on frame values and a dynamic constraint on inter-frame changes.
The experiments show that K-Net achieves high performance, with a Dice coefficient up to 91.44% and a mean absolute error of the major-axis dimension down to 2.74 mm, revealing its clinical potential.
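The Attention Junction's core idea in component 1) — segmentation evidence gating where the regression path looks — can be sketched as attention-weighted pooling. A simplified NumPy sketch (the real module is a learned network layer; the sigmoid gating and the pooling used here are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gated_pooling(seg_logits, quant_feat):
    # Spatial attention map derived from the segmentation path (H, W);
    # quantification features (C, H, W) are gated by it before pooling,
    # so the regression head sees mostly LV-related evidence.
    attn = sigmoid(seg_logits)                       # values in (0, 1)
    gated = quant_feat * attn[None, :, :]            # element-wise gating
    return gated.sum(axis=(1, 2)) / (attn.sum() + 1e-6)  # weighted average per channel
```

The pooled per-channel vector would then feed a small regression head that predicts the 1D/2D/3D LV indices.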
8
Ge R, Yang G, Chen Y, Luo L, Feng C, Zhang H, Li S. PV-LVNet: Direct left ventricle multitype indices estimation from 2D echocardiograms of paired apical views with deep neural networks. Med Image Anal 2019;58:101554. DOI: 10.1016/j.media.2019.101554.
9
Medley DO, Santiago C, Nascimento JC. Deep Active Shape Model for Robust Object Fitting. IEEE Trans Image Process 2019;29:2380-2394. PMID: 31670673; DOI: 10.1109/tip.2019.2948728.
Abstract
Object recognition and localization remains a very challenging problem, despite recent advances in deep learning (DL), especially for objects with varying shapes and appearances. Statistical models, such as the Active Shape Model (ASM), rely on a parametric model of the object, allowing prior knowledge about shape and appearance to be incorporated in a principled way. To take advantage of these benefits, this paper proposes a new ASM framework that addresses two tasks: (i) comparing the performance of several image features used to extract observations from an input image; and (ii) improving the model fitting through a probabilistic framework that allows the use of multiple observations and is robust to the presence of outliers. The goal in (i) is to maximize the quality of the observations by exploring a wide set of handcrafted features (HOG, SIFT, and texture templates) along with more recent DL-based features. Regarding (ii), we use the Generalized Expectation-Maximization algorithm to deal with outliers and to extend the fitting process to multiple observations. The proposed framework is evaluated on facial landmark fitting and on segmentation of the endocardium of the left ventricle in cardiac magnetic resonance volumes. We observe experimentally that the approach is robust not only to outliers, but also to adverse initialization conditions and to large search regions (from which the observations are extracted). Furthermore, the results of combining the ASM with DL-based features are competitive with more recent DL approaches (e.g. FCN [1], U-Net [2] and CNN Cascade [3]), showing that it is possible to combine the benefits of statistical models and DL into a new deep ASM probabilistic framework.
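The Generalized EM treatment of outliers can be illustrated on a toy robust line fit: the E-step computes the posterior probability that each observation is an inlier (Gaussian inlier model versus a uniform outlier model), and the M-step refits using those posteriors as weights. This is a sketch under assumed noise parameters, not the paper's ASM fitting code:

```python
import numpy as np

def gem_line_fit(x, y, sigma=2.0, outlier_density=0.01, iters=20):
    # Robust fit of y ≈ a*x + b via a Generalized EM scheme.
    A = np.vstack([x, np.ones_like(x)]).T
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]          # plain LS initialisation
    for _ in range(iters):
        r = y - (a * x + b)
        # E-step: inlier responsibility under Gaussian vs. uniform clutter.
        inlier = np.exp(-0.5 * (r / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        w = inlier / (inlier + outlier_density)
        # M-step: weighted least squares with the responsibilities.
        W = np.sqrt(w)
        a, b = np.linalg.lstsq(A * W[:, None], y * W, rcond=None)[0]
    return a, b
```

Gross outliers receive near-zero responsibility after a few iterations, so the final fit is driven by the inliers alone — the same mechanism that keeps spurious edge observations from dragging the ASM contour.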
10
Alkhatib M, Hafiane A, Vieyres P, Delbos A. Deep visual nerve tracking in ultrasound images. Comput Med Imaging Graph 2019;76:101639. DOI: 10.1016/j.compmedimag.2019.05.007.
11
Liu Z, Chan SC, Zhang S, Zhang Z, Chen X. Automatic Muscle Fiber Orientation Tracking in Ultrasound Images Using a New Adaptive Fading Bayesian Kalman Smoother. IEEE Trans Image Process 2019;28:3714-3727. PMID: 30794172; DOI: 10.1109/tip.2019.2899941.
Abstract
This paper proposes a new algorithm for automatic estimation of muscle fiber orientation (MFO) in musculoskeletal ultrasound images, which are commonly used for both diagnosis and rehabilitation assessment. The algorithm is based on a novel adaptive fading Bayesian Kalman filter (AF-BKF) and an automatic region of interest (ROI) extraction method. The ROI is first enhanced by a Gabor filter (GF) and extracted automatically using the revoting constrained Radon transform (RCRT) approach. The dominant MFO in the ROI is then detected by the Radon transform (RT) and tracked by the proposed AF-BKF, which employs simplified Gaussian mixtures to approximate the non-Gaussian state densities and a new adaptive fading method to update the mixture parameters. An AF-BK smoother (AF-BKS) is also proposed, extending the AF-BKF with the Rauch-Tung-Striebel smoother to further smooth the fascicle orientations. The experimental results and comparisons show that: 1) the maximum segmentation error of the proposed RCRT is below nine pixels, which is sufficiently small for MFO tracking; 2) the accuracy of MFO gauged by the RT in the GF-enhanced ROI is comparable to that of a multiscale vessel-enhancement-filter-based method and better than those of local RT and revoting Hough transform approaches; and 3) the proposed AF-BKS algorithm outperforms the other tested approaches and achieves performance close to that of experienced operators (the overall covariance obtained by the AF-BKS is 3.19, rather close to the operators' 2.86). It thus serves as a valuable tool for automatic estimation of fascicle orientations and possibly for other applications in musculoskeletal ultrasound imaging.
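For intuition, the classical linear-Gaussian baseline that the AF-BKS extends is a Kalman filter followed by a Rauch-Tung-Striebel backward pass. A scalar random-walk version (the noise variances `q` and `r` are illustrative, and the Gaussian-mixture and adaptive-fading machinery of the paper is omitted):

```python
import numpy as np

def kalman_rts_smooth(z, q=0.01, r=0.09, x0=0.0, p0=1.0):
    # Scalar random-walk Kalman filter (F = 1) plus RTS smoothing pass.
    n = len(z)
    xf = np.zeros(n); pf = np.zeros(n)      # filtered means / variances
    xp = np.zeros(n); pp = np.zeros(n)      # one-step predictions
    x, p = x0, p0
    for k in range(n):
        xp[k], pp[k] = x, p + q             # predict
        K = pp[k] / (pp[k] + r)             # Kalman gain
        x = xp[k] + K * (z[k] - xp[k])      # update with measurement z[k]
        p = (1.0 - K) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf.copy()
    for k in range(n - 2, -1, -1):          # RTS backward pass
        C = pf[k] / pp[k + 1]
        xs[k] = xf[k] + C * (xs[k + 1] - xp[k + 1])
    return xs
```

Smoothing uses future as well as past measurements, which is why the backward pass gives visibly steadier orientation tracks than the filter alone.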
12
Alkhatib M, Hafiane A, Tahri O, Vieyres P, Delbos A. Adaptive median binary patterns for fully automatic nerves tracking in ultrasound images. Comput Methods Programs Biomed 2018;160:129-140. PMID: 29728240; DOI: 10.1016/j.cmpb.2018.03.013.
Abstract
BACKGROUND AND OBJECTIVE: In the last decade, Ultrasound-Guided Regional Anesthesia (UGRA) gained importance in surgical procedures and pain management, due to its ability to perform targeted delivery of local anesthetics under direct sonographic visualization. However, practicing UGRA can be challenging, since it requires a highly skilled and experienced operator. Among the difficult tasks the operator faces is tracking the nerve structure in ultrasound images, which is very challenging due to noise and other artifacts. METHODS: In this paper, we introduce a new and robust tracking technique that uses the Adaptive Median Binary Pattern (AMBP) as a texture feature for tracking algorithms (particle filter, mean-shift, and Kanade-Lucas-Tomasi (KLT)). Moreover, we propose to incorporate a Kalman filter as prediction and correction steps for the tracking algorithms, in order to improve accuracy, reduce computational cost, and handle target disappearance. RESULTS: The proposed method has been applied to real data and evaluated in different situations. The obtained results show that tracking with AMBP features outperforms other descriptors and achieves the best performance, with 95% accuracy. CONCLUSIONS: This paper presents the first fully automatic nerve tracking method in ultrasound images. AMBP features outperform other descriptors in all tested situations, including noisy and filtered images.
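The texture descriptor at the heart of the method thresholds each neighbourhood against a local median rather than the centre pixel, as in classic LBP. A simplified, non-adaptive median binary pattern sketch (the paper's AMBP adds an adaptive thresholding scheme, omitted here):

```python
import numpy as np

def median_binary_pattern(img):
    # For each 3x3 neighbourhood, threshold all 9 pixels against the
    # neighbourhood median, then pack the 8 non-centre bits into a code.
    H, W = img.shape
    out = np.zeros((H - 2, W - 2), dtype=np.uint8)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            med = np.median(patch)
            bits = (patch.ravel() >= med).astype(np.uint8)
            bits = np.delete(bits, 4)            # drop the centre pixel
            out[i - 1, j - 1] = np.packbits(bits)[0]
    return out
```

Histograms of these codes over a candidate window form the appearance model that the particle filter, mean-shift or KLT tracker then matches frame to frame.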
Affiliation(s)
- Mohammad Alkhatib, INSA Centre Val de Loire, Laboratoire PRISME EA 4229, Bourges F-18000, France; Université d'Orléans, Laboratoire PRISME EA 4229, Bourges F-18000, France
- Adel Hafiane, INSA Centre Val de Loire, Laboratoire PRISME EA 4229, Bourges F-18000, France
- Omar Tahri, INSA Centre Val de Loire, Laboratoire PRISME EA 4229, Bourges F-18000, France
- Pierre Vieyres, Université d'Orléans, Laboratoire PRISME EA 4229, Bourges F-18000, France
- Alain Delbos, Clinique Medipôle Garonne, Toulouse F-31036, France
13
Nascimento JC, Carneiro G. Deep Learning on Sparse Manifolds for Faster Object Segmentation. IEEE Trans Image Process 2017;26:4978-4990. PMID: 28708556; DOI: 10.1109/tip.2017.2725582.
Abstract
We propose a new combination of deep belief networks and sparse manifold learning strategies for the 2D segmentation of non-rigid visual objects. With this novel combination, we aim to reduce the training and inference complexities while maintaining the accuracy of machine learning-based non-rigid segmentation methodologies. Typical non-rigid object segmentation methodologies divide the problem into a rigid detection followed by a non-rigid segmentation, where the low dimensionality of the rigid detection allows for a robust training (i.e., a training that does not require a vast amount of annotated images to estimate robust appearance and shape models) and a fast search process during inference. Therefore, it is desirable that the dimensionality of this rigid transformation space is as small as possible in order to enhance the advantages brought by the aforementioned division of the problem. In this paper, we propose the use of sparse manifolds to reduce the dimensionality of the rigid detection space. Furthermore, we propose the use of deep belief networks to allow for a training process that can produce robust appearance models without the need of large annotated training sets. We test our approach in the segmentation of the left ventricle of the heart from ultrasound images and lips from frontal face images. Our experiments show that the use of sparse manifolds and deep belief networks for the rigid detection stage leads to segmentation results that are as accurate as the current state of the art, but with lower search complexity and training processes that require a small amount of annotated training data.
14
Bridge CP, Ioannou C, Noble JA. Automated annotation and quantitative description of ultrasound videos of the fetal heart. Med Image Anal 2016;36:147-161. PMID: 27907850; DOI: 10.1016/j.media.2016.11.006.
Abstract
Interpretation of ultrasound videos of the fetal heart is crucial for the antenatal diagnosis of congenital heart disease (CHD). We believe that automated image analysis techniques could make an important contribution towards improving CHD detection rates. However, to our knowledge, no previous work has been done in this area. With this goal in mind, this paper presents a framework for tracking the key variables that describe the content of each frame of freehand 2D ultrasound scanning videos of the healthy fetal heart. This represents an important first step towards developing tools that can assist with CHD detection in abnormal cases. We argue that it is natural to approach this as a sequential Bayesian filtering problem, due to the strong prior model we have of the underlying anatomy, and the ambiguity of the appearance of structures in ultrasound images. We train classification and regression forests to predict the visibility, location and orientation of the fetal heart in the image, and the viewing plane label from each frame. We also develop a novel adaptation of regression forests for circular variables to deal with the prediction of cardiac phase. Using a particle-filtering-based method to combine predictions from multiple video frames, we demonstrate how to filter this information to give a temporally consistent output at real-time speeds. We present results on a challenging dataset gathered in a real-world clinical setting and compare to expert annotations, achieving similar levels of accuracy to the levels of inter- and intra-observer variation.
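Treating cardiac phase as a circular variable, as the adapted regression forests above do, matters because angles wrap: naively averaging 350° and 10° gives 180°, while the correct circular mean is 0°. The standard fix is to average on the unit circle — a small sketch of that building block (not the paper's forest code):

```python
import math

def circular_mean(angles):
    # Average angles (in radians) via their unit-circle embedding,
    # avoiding the wrap-around bias of a plain arithmetic mean.
    s = sum(math.sin(a) for a in angles)
    c = sum(math.cos(a) for a in angles)
    return math.atan2(s, c)
```

The same sine/cosine averaging underlies combining per-frame phase predictions into one temporally consistent estimate.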
Affiliation(s)
- Christopher P Bridge, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
- Christos Ioannou, Fetal Medicine Unit, John Radcliffe Hospital, Oxford, United Kingdom
- J Alison Noble, Institute of Biomedical Engineering, University of Oxford, Oxford, United Kingdom
Collapse
|
15
|
Santiago C, Nascimento JC, Marques JS. A new ASM framework for left ventricle segmentation exploring slice variability in cardiac MRI volumes. Neural Comput Appl 2016. [DOI: 10.1007/s00521-016-2337-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
|
16
|
Santiago C, Nascimento JC, Marques JS. Segmentation of the left ventricle in cardiac MRI using a probabilistic data association active shape model. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2015:7304-7. [PMID: 26737978 DOI: 10.1109/embc.2015.7320078] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The 3D segmentation of endocardium of the left ventricle (LV) in cardiac MRI volumes is a challenging problem due to the intrinsic properties of this image modality. Typically, the object shape and position are estimated to fit the observed features collected from the images. The difficulty inherent to the LV segmentation in MRI is that the images contain outliers (i.e., observations not belonging to the LV border) due to the presence of other structures. This paper proposes a robust approach based on the Active Shape Model (ASM) that is able to circumvent the above problem. More specifically, the ASM will be guided by probabilistic data association filtering (PDAF) of strokes (i.e. line segments) computed in the neighborhood of the shape model. Thus, the proposed approach, termed herein as ASM-PDAF, will perform the following main steps: 1) edge detection (low-level features) in the vicinity of the shape model; 2) edge grouping (mid-level features) to obtain potential LV strokes; and 3) filtering using a PDAF framework (high-level features) to update the ASM. Experimental results on a public cardiac MRI database show that the proposed approach outperforms previous literature research.
Collapse
|
17
|
De Luca V, Székely G, Tanner C. Estimation of large-scale organ motion in B-mode ultrasound image sequences: a survey. Ultrasound Med Biol 2015; 41:3044-3062. [PMID: 26360977 DOI: 10.1016/j.ultrasmedbio.2015.07.022] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/11/2014] [Revised: 06/13/2015] [Accepted: 07/16/2015] [Indexed: 06/05/2023]
Abstract
Reviewed here are methods developed for following (i.e., tracking) structures in medical B-mode ultrasound time sequences during large-scale motion. The resulting motion estimation problem and its key components are defined. The main tracking approaches are described, and their strengths and weaknesses are discussed. Existing motion estimation methods, tested on multiple in vivo sequences, are categorized with respect to their clinical applications, namely, cardiac, respiratory and muscular motion. A large number of works in this field had to be discarded as thorough validation of the results was missing. The remaining relevant works identified indicate the possibility of reaching an average tracking accuracy up to 1-2 mm. Real-time performance can be achieved using several methods. Yet only very few of these have progressed to clinical practice. The latest trends include incorporation of complementary and prior information. Advances are expected from common evaluation databases and 4-D ultrasound scanning technologies.
Collapse
Affiliation(s)
- Valeria De Luca
- Computer Vision Laboratory, ETH Zurich, Zurich, Switzerland.
| | - Gábor Székely
- Computer Vision Laboratory, ETH Zurich, Zurich, Switzerland
| |
Collapse
|
18
|
Santiago C, Nascimento JC, Marques JS. Automatic 3-D segmentation of endocardial border of the left ventricle from ultrasound images. IEEE J Biomed Health Inform 2015; 19:339-48. [PMID: 25561455 DOI: 10.1109/jbhi.2014.2308424] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
The segmentation of the left ventricle (LV) is an important task to assess the cardiac function in ultrasound images of the heart. This paper presents a novel methodology for the segmentation of the LV in three-dimensional (3-D) echocardiographic images based on the probabilistic data association filter (PDAF). The proposed methodology begins by initializing a 3-D deformable model either semiautomatically, with user input, or automatically, and it comprises the following feature hierarchical approach: 1) edge detection in the vicinity of the surface (low-level features); 2) edge grouping to obtain potential LV surface patches (mid-level features); and 3) patch filtering using a shape-PDAF framework (high-level features). This method provides good performance accuracy in 20 echocardiographic volumes, and compares favorably with the state-of-the-art segmentation methodologies proposed in the recent literature.
Collapse
|
19
|
Gao W, Tan KK, Liang W, Gan CW, Lim HY. Intelligent vision guide for automatic ventilation grommet insertion into the tympanic membrane. Int J Med Robot 2015; 12:18-31. [PMID: 25622548 DOI: 10.1002/rcs.1639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/25/2014] [Revised: 12/09/2014] [Accepted: 12/11/2014] [Indexed: 11/11/2022]
Abstract
BACKGROUND Otitis media with effusion is a worldwide ear disease. The current treatment is to surgically insert a ventilation grommet into the tympanic membrane. A robotic device allowing automatic grommet insertion has been designed in a previous study; however, the part of the membrane where the malleus bone is attached to the inner surface is to be avoided during the insertion process. METHODS This paper proposes a synergy of optical flow technique and a gradient vector flow active contours algorithm to achieve an online tracking of the malleus under endoscopic vision, to guide the working channel to move efficiently during the surgery. RESULTS The proposed method shows a more stable and accurate tracking performance than the current tracking methods in preclinical tests. CONCLUSION With satisfactory tracking results, vision guidance of a suitable insertion spot can be provided to the device to perform the surgery in an automatic way.
Collapse
Affiliation(s)
- Wenchao Gao
- Department of Electrical and Computer Engineering, NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore
| | - Kok Kiong Tan
- Department of Electrical and Computer Engineering, NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore
| | - Wenyu Liang
- Department of Electrical and Computer Engineering, NUS Graduate School for Integrative Sciences and Engineering, National University of Singapore
| | - Chee Wee Gan
- Department of Otolaryngology, National University of Singapore
| | - Hsueh Yee Lim
- Department of Otolaryngology, National University of Singapore
| |
Collapse
|
20
|
McCann MT, Mixon DG, Fickus MC, Castro CA, Ozolek JA, Kovacevic J. Images as occlusions of textures: a framework for segmentation. IEEE Trans Image Process 2014; 23:2033-2046. [PMID: 24710403 DOI: 10.1109/tip.2014.2307475] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
We propose a new mathematical and algorithmic framework for unsupervised image segmentation, which is a critical step in a wide variety of image processing applications. We have found that most existing segmentation methods are not successful on histopathology images, which prompted us to investigate segmentation of a broader class of images, namely those without clear edges between the regions to be segmented. We model these images as occlusions of random images, which we call textures, and show that local histograms are a useful tool for segmenting them. Based on our theoretical results, we describe a flexible segmentation framework that draws on existing work on nonnegative matrix factorization and image deconvolution. Results on synthetic texture mosaics and real histology images show the promise of the method.
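The local-histogram feature this framework builds on can be sketched in a few lines of numpy; the function name, window size, and the two-texture toy image are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def local_histograms(img, radius=2, bins=8):
    """Per-pixel histogram of gray levels over a (2*radius+1)^2 window --
    the local-histogram feature used to separate textures that lack
    clear edges between them."""
    H, W = img.shape
    q = np.minimum((img * bins).astype(int), bins - 1)  # quantize to bins
    out = np.zeros((H, W, bins))
    p = np.pad(q, radius, mode='edge')
    for dy in range(2 * radius + 1):        # accumulate counts over the window
        for dx in range(2 * radius + 1):
            window = p[dy:dy + H, dx:dx + W]
            for b in range(bins):
                out[:, :, b] += (window == b)
    return out / (2 * radius + 1) ** 2      # normalize to a distribution

# Two flat "textures": the local histogram is constant inside each region
# and only mixes near the boundary, which is what makes them separable.
img = np.zeros((10, 10)); img[:, 5:] = 0.9
h = local_histograms(img)
assert np.isclose(h[0, 0, 0], 1.0)   # left region: all mass in bin 0
assert np.allclose(h.sum(axis=2), 1.0)
```

Segmentation then amounts to factoring these per-pixel distributions into a few texture histograms plus occlusion masks, which the paper casts as nonnegative matrix factorization.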
Collapse
|
21
|
Zhong L, Zhang JM, Zhao X, Tan RS, Wan M. Automatic localization of the left ventricle from cardiac cine magnetic resonance imaging: a new spectrum-based computer-aided tool. PLoS One 2014; 9:e92382. [PMID: 24722328 PMCID: PMC3982962 DOI: 10.1371/journal.pone.0092382] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2013] [Accepted: 02/21/2014] [Indexed: 11/19/2022] Open
Abstract
Traditionally, cardiac image analysis is done manually. Automatic image processing can help with the repetitive tasks, and also deal with huge amounts of data, a task which would be humanly tedious. This study aims to develop a spectrum-based computer-aided tool to locate the left ventricle using images obtained via cardiac magnetic resonance imaging. Discrete Fourier Transform was conducted pixelwise on the image sequence. Harmonic images of all frequencies were analyzed visually and quantitatively to determine different patterns of the left and right ventricles on spectrum. The first and fifth harmonic images were selected to perform an anisotropic weighted circle Hough detection. This tool was then tested in ten volunteers. Our tool was able to locate the left ventricle in all cases and had a significantly higher cropping ratio of 0.165 than did earlier studies. In conclusion, a new spectrum-based computer aided tool has been proposed and developed for automatic left ventricle localization. The development of this technique, which will enable the automatic location and further segmentation of the left ventricle, will have a significant impact in research and in diagnostic settings. We envisage that this automated method could be used by radiographers and cardiologists to diagnose and assess ventricular function in patients with diverse heart diseases.
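The core of the spectrum-based tool, a pixelwise temporal DFT producing harmonic images, can be sketched as follows; `harmonic_images` and the toy cine sequence are illustrative assumptions, not the authors' code, and the subsequent Hough-based localization step is not shown:

```python
import numpy as np

def harmonic_images(frames):
    """Pixelwise DFT over a cine sequence.

    frames: array of shape (T, H, W), one grayscale image per time frame.
    Returns magnitude images of shape (T, H, W); index k is the k-th
    harmonic (index 0 is the temporal mean, i.e. the DC component).
    """
    spectrum = np.fft.fft(frames, axis=0)  # DFT along the time axis, per pixel
    return np.abs(spectrum)

# Toy sequence: a pixel oscillating at one cycle per sequence stands out in
# the first harmonic image, while a static pixel only appears in the DC term.
T, H, W = 16, 4, 4
t = np.arange(T)
frames = np.ones((T, H, W))                    # static background
frames[:, 1, 1] += np.sin(2 * np.pi * t / T)   # "beating" pixel, 1 cycle

mags = harmonic_images(frames)
assert mags[1, 1, 1] > mags[1, 0, 0]  # first harmonic fires at the moving pixel
```

Periodically moving structures such as the ventricles light up in the low harmonics, which is why the first and fifth harmonic images are useful inputs to a circle Hough detector.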
Collapse
Affiliation(s)
- Liang Zhong
- Bioengineering Department, National Heart Centre Singapore, Singapore, Singapore
- Cardiovascular & Metabolic Disorders Program, Duke-NUS Graduate Medical School Singapore, Singapore, Singapore
| | - Jun-Mei Zhang
- Bioengineering Department, National Heart Centre Singapore, Singapore, Singapore
| | - Xiaodan Zhao
- Bioengineering Department, National Heart Centre Singapore, Singapore, Singapore
| | - Ru San Tan
- Bioengineering Department, National Heart Centre Singapore, Singapore, Singapore
- Cardiovascular & Metabolic Disorders Program, Duke-NUS Graduate Medical School Singapore, Singapore, Singapore
| | - Min Wan
- Bioengineering Department, National Heart Centre Singapore, Singapore, Singapore
- Geometrical Modelling, Institute of High Performance Computing, A*STAR, Singapore, Singapore
| |
Collapse
|
22
|
Nascimento JC, Silva JG, Marques JS, Lemos JM. Manifold learning for object tracking with multiple nonlinear models. IEEE Trans Image Process 2014; 23:1593-1605. [PMID: 24577194 DOI: 10.1109/tip.2014.2303652] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
This paper presents a novel manifold learning algorithm for high-dimensional data sets. The scope of the application focuses on the problem of motion tracking in video sequences. The framework presented is twofold. First, it is assumed that the samples are time ordered, providing valuable information that is not exploited by current methodologies. Second, the manifold topology comprises multiple charts, in contrast with most current methods, which assume a single chart and are therefore overly restrictive. The proposed algorithm, Gaussian process multiple local models (GP-MLM), can deal with arbitrary manifold topology by decomposing the manifold into multiple local models that are probabilistically combined using Gaussian process regression. In addition, the paper presents a multiple filter architecture in which standard filtering techniques are integrated within the GP-MLM. The proposed approach exhibits performance comparable to that of state-of-the-art trackers, namely multiple model data association and deep belief networks, and compares favorably with Gaussian process latent variable models. Extensive experiments are presented using real video data, including a publicly available database of lip sequences and left ventricle ultrasound images, in which the GP-MLM achieves state-of-the-art results.
Collapse
|
23
|
Dietenbeck T, Barbosa D, Alessandrini M, Jasaityte R, Robesyn V, D'hooge J, Friboulet D, Bernard O. Whole myocardium tracking in 2D-echocardiography in multiple orientations using a motion constrained level-set. Med Image Anal 2014; 18:500-14. [PMID: 24561989 DOI: 10.1016/j.media.2014.01.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2013] [Revised: 01/08/2014] [Accepted: 01/24/2014] [Indexed: 11/18/2022]
Abstract
The segmentation and tracking of the myocardium in echocardiographic sequences is an important task for the diagnosis of heart disease. This task is difficult due to the inherent problems of echographic images (i.e. low contrast, speckle noise, signal dropout, presence of shadows). In this article, we extend a level-set method recently proposed in Dietenbeck et al. (2012) in order to track the whole myocardium in echocardiographic sequences. To this end, we enforce temporal coherence by adding a new motion prior energy to the existing framework. This motion prior term is expressed as new constraint that enforces the conservation of the levels of the implicit function along the image sequence. Moreover, the robustness of the proposed method is improved by adjusting the associated hyperparameters in a spatially adaptive way, using the available strong a priori about the echocardiographic regions to be segmented. The accuracy and robustness of the proposed method is evaluated by comparing the obtained segmentation with experts references and to another state-of-the-art method on a dataset of 15 sequences (≃ 900 images) acquired in three echocardiographic views. We show that the algorithm provides results that are consistent with the inter-observer variability and outperforms the state-of-the-art method. We also carry out a complete study on the influence of the parameters settings. The obtained results demonstrate the stability of our method according to those values.
Collapse
Affiliation(s)
- T Dietenbeck
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France.
| | - D Barbosa
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France; Cardiovascular Imaging and Dynamics, KU Leuven, Leuven, Belgium
| | - M Alessandrini
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France
| | - R Jasaityte
- Cardiovascular Imaging and Dynamics, KU Leuven, Leuven, Belgium
| | - V Robesyn
- Cardiovascular Imaging and Dynamics, KU Leuven, Leuven, Belgium
| | - J D'hooge
- Cardiovascular Imaging and Dynamics, KU Leuven, Leuven, Belgium
| | - D Friboulet
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France
| | - O Bernard
- Université de Lyon, CREATIS, CNRS UMR5220, INSERM U1044, Université Lyon 1, INSA-LYON, France
| |
Collapse
|
24
|
Carneiro G, Nascimento JC. Combining multiple dynamic models and deep learning architectures for tracking the left ventricle endocardium in ultrasound data. IEEE Trans Pattern Anal Mach Intell 2013; 35:2592-2607. [PMID: 24051722 DOI: 10.1109/tpami.2013.96] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
We present a new statistical pattern recognition approach for the problem of left ventricle endocardium tracking in ultrasound data. The problem is formulated as a sequential importance resampling algorithm such that the expected segmentation of the current time step is estimated based on the appearance, shape, and motion models that take into account all previous and current images and previous segmentation contours produced by the method. The new appearance and shape models decouple the affine and nonrigid segmentations of the left ventricle to reduce the running time complexity. The proposed motion model combines the systole and diastole motion patterns and an observation distribution built by a deep neural network. The functionality of our approach is evaluated using a dataset of diseased cases containing 16 sequences and another dataset of normal cases comprised of four sequences, where both sets present long axis views of the left ventricle. Using a training set comprised of diseased and healthy cases, we show that our approach produces more accurate results than current state-of-the-art endocardium tracking methods in two test sequences from healthy subjects. Using three test sequences containing different types of cardiopathies, we show that our method correlates well with interuser statistics produced by four cardiologists.
Collapse
|
25
|
Mahdavi SS, Moradi M, Morris WJ, Goldenberg SL, Salcudean SE. Fusion of ultrasound B-mode and vibro-elastography images for automatic 3D segmentation of the prostate. IEEE Trans Med Imaging 2012; 31:2073-2082. [PMID: 22829391 DOI: 10.1109/tmi.2012.2209204] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
Prostate segmentation in B-mode images is a challenging task even when done manually by experts. In this paper we propose a 3D automatic prostate segmentation algorithm which makes use of information from both ultrasound B-mode and vibro-elastography data. We exploit the high contrast to noise ratio of vibro-elastography images of the prostate, in addition to the commonly used B-mode images, to implement a 2D Active Shape Model (ASM)-based segmentation algorithm on the midgland image. The prostate model is deformed by a combination of two measures: the gray level similarity and the continuity of the prostate edge in both image types. The automatically obtained mid-gland contour is then used to initialize a 3D segmentation algorithm which models the prostate as a tapered and warped ellipsoid. Vibro-elastography images are used in addition to ultrasound images to improve boundary detection. We report a Dice similarity coefficient of 0.87±0.07 and 0.87±0.08 comparing the 2D automatic contours with manual contours of two observers on 61 images. For 11 cases, a whole gland volume error of 10.2±2.2% and 13.5±4.1% and whole gland volume difference of -7.2±9.1% and -13.3±12.6% between 3D automatic and manual surfaces of two observers is obtained. This is the first validated work showing the fusion of B-mode and vibro-elastography data for automatic 3D segmentation of the prostate.
Collapse
|
26
|
Chen C, Wang Y, Yu J, Zhou Z, Shen L, Chen Y. Tracking pylorus in ultrasonic image sequences with edge-based optical flow. IEEE Trans Med Imaging 2012; 31:843-855. [PMID: 22262680 DOI: 10.1109/tmi.2012.2183884] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
Tracking pylorus in ultrasonic image sequences is an important step in the analysis of duodenogastric reflux (DGR). We propose a joint prediction and segmentation method (JPS) which combines optical flow with active contour to track pylorus. The goal of the proposed method is to improve the pyloric tracking accuracy by taking account of not only the connection information among edge points but also the spatio-temporal information among consecutive frames. The proposed method is compared with other four tracking methods by using both synthetic and real ultrasonic image sequences. Several numerical indexes: Hausdorff distance (HD), average distance (AD), mean edge distance (MED), and edge curvature (EC) have been calculated to evaluate the performance of each method. JPS achieves the minimum distance metrics (HD, AD, and MED) and a smaller EC. The experimental results indicate that JPS gives a better tracking performance than others by the best agreement with the gold curves while keeping the smoothness of the result.
Collapse
Affiliation(s)
- Chaojie Chen
- Department of Electronic Engineering, Fudan University, Shanghai 200433, China
| |
Collapse
|
27
|
Carneiro G, Nascimento JC, Freitas A. The segmentation of the left ventricle of the heart from ultrasound data using deep learning architectures and derivative-based search methods. IEEE Trans Image Process 2012; 21:968-982. [PMID: 21947526 DOI: 10.1109/tip.2011.2169273] [Citation(s) in RCA: 93] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
We present a new supervised learning model designed for the automatic segmentation of the left ventricle (LV) of the heart in ultrasound images. We address the following problems inherent to supervised learning models: 1) the need for a large set of training images; 2) robustness to imaging conditions not present in the training data; and 3) a complex search process. The innovations of our approach reside in a formulation that decouples the rigid and nonrigid detections, deep learning methods that model the appearance of the LV, and efficient derivative-based search algorithms. The functionality of our approach is evaluated using a data set of diseased cases containing 400 annotated images (from 12 sequences) and another data set of normal cases comprising 80 annotated images (from two sequences), where both sets present long axis views of the LV. Using several error measures to compute the degree of similarity between the manual and automatic segmentations, we show that our method not only has high sensitivity and specificity but also presents variations with respect to a gold standard (computed from the manual annotations of two experts) within interuser variability on a subset of the diseased cases. We also compare the segmentations produced by our approach and by two state-of-the-art LV segmentation models on the data set of normal cases; the results show that our approach produces segmentations comparable to these two approaches using only 20 training images, and that increasing the training set to 400 images makes our approach generally more accurate. Finally, we show that efficient search methods reduce the complexity of the method up to tenfold while still producing competitive segmentations. In the future, we plan to include a dynamical model to improve the performance of the algorithm, to use semisupervised learning methods to further reduce the dependence on rich and large training sets, and to design a shape model less dependent on the training set.
Collapse
Affiliation(s)
- Gustavo Carneiro
- Australian Centre for Visual Technologies, University of Adelaide, Adelaide, SA 5005, Australia.
| |
Collapse
|
28
|
Mahdavi SS, Chng N, Spadinger I, Morris WJ, Salcudean SE. Semi-automatic segmentation for prostate interventions. Med Image Anal 2010; 15:226-37. [PMID: 21084216 DOI: 10.1016/j.media.2010.10.002] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2009] [Revised: 09/05/2010] [Accepted: 10/19/2010] [Indexed: 11/24/2022]
Abstract
In this paper we report and characterize a semi-automatic prostate segmentation method for prostate brachytherapy. Based on anatomical evidence and requirements of the treatment procedure, a warped and tapered ellipsoid was found suitable as the a priori 3D shape of the prostate. By transforming the acquired endorectal transverse images of the prostate into ellipses, the shape fitting problem was cast into a convex problem which can be solved efficiently. The average whole gland error between non-overlapping volumes created from manual and semi-automatic contours from 21 patients was 6.63 ± 0.9%. For use in brachytherapy treatment planning, the resulting contours were modified, if deemed necessary, by radiation oncologists prior to treatment. The average whole gland volume error between the volumes computed from semi-automatic contours and those computed from modified contours, from 40 patients, was 5.82 ± 4.15%. The amount of bias in the physicians' delineations when given an initial semi-automatic contour was measured by comparing the volume error between 10 prostate volumes computed from manual contours with those of modified contours. This error was found to be 7.25 ± 0.39% for the whole gland. Automatic contouring reduced subjectivity, as evidenced by a decrease in segmentation inter- and intra-observer variability from 4.65% and 5.95% for manual segmentation to 3.04% and 3.48% for semi-automatic segmentation, respectively. We characterized the performance of the method relative to the reference obtained from manual segmentation by using a novel approach that divides the prostate region into nine sectors. We analyzed each sector independently, as the requirements for segmentation accuracy depend on which region of the prostate is considered. The measured segmentation time is 14 ± 1 s, with an additional 32 ± 14 s for initialization. Assuming 1-3 min for modification of the contours, if necessary, a total segmentation time of less than 4 min is required, with no additional time needed prior to treatment planning. This compares favorably to the 5-15 min manual segmentation time required for experienced individuals. The method is currently used at the British Columbia Cancer Agency (BCCA) Vancouver Cancer Centre as part of the standard treatment routine in low dose rate prostate brachytherapy and has proved to be a fast, consistent and accurate tool for the delineation of the prostate gland in ultrasound images.
Collapse
Affiliation(s)
- S Sara Mahdavi
- Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, BC, Canada.
| |
Collapse
|
29
|
Abstract
Prostate segmentation from trans-rectal transverse B-mode ultrasound images is required for radiation treatment of prostate cancer. Manual segmentation is a time-consuming task, the results of which are dependent on image quality and physicians' experience. This paper introduces a semi-automatic 3D method based on super-ellipsoidal shapes. It produces a 3D segmentation in less than 15 seconds using a warped, tapered ellipsoid fit to the prostate. A study of patient images shows good performance and repeatability. This method is currently in clinical use at the Vancouver Cancer Center where it has become the standard segmentation procedure for low dose-rate brachytherapy treatment.
Collapse
|
30
|
Nascimento JC, Marques JS. Improved Gradient Vector Flow for robust shape estimation in medical imaging. Annu Int Conf IEEE Eng Med Biol Soc 2010; 2010:4809-4812. [PMID: 21097295 DOI: 10.1109/iembs.2010.5628031] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
We propose an improved Gradient Vector Flow (iGVF) for active contour detection. The proposed algorithm overcomes the problems of GVF that arise in noisy images with cluttered background. We experimentally show that this modified version of the GVF algorithm performs better in noisy images. The main difference concerns the use of more robust and informative features (edge segments), which significantly reduce the influence of noise. Experiments with real data from several image modalities illustrate the performance of the proposed approach.
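For reference, the classic GVF field of Xu and Prince that this paper improves on can be sketched as an explicit diffusion of the edge-map gradient; the parameters, border handling, and test image below are illustrative assumptions, and the paper's contribution (replacing the raw edge map with robust edge-segment features) is not shown:

```python
import numpy as np

def gradient_vector_flow(edge_map, mu=0.2, iters=100):
    """Classic GVF: diffuse the edge-map gradient (fx, fy) into homogeneous
    regions by iterating
        u <- u + mu * laplacian(u) - (fx^2 + fy^2) * (u - fx)
    and likewise for v (Xu & Prince's explicit update scheme)."""
    fy, fx = np.gradient(edge_map.astype(float))
    u, v = fx.copy(), fy.copy()
    mag2 = fx ** 2 + fy ** 2

    def lap(a):  # 5-point Laplacian with replicated borders
        p = np.pad(a, 1, mode='edge')
        return p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] - 4 * a

    for _ in range(iters):
        u = u + mu * lap(u) - mag2 * (u - fx)
        v = v + mu * lap(v) - mag2 * (v - fy)
    return u, v

# A single bright edge pixel: after diffusion the flow field is nonzero well
# away from the edge, which is what lets a snake "feel" distant edges.
em = np.zeros((21, 21)); em[10, 10] = 1.0
u, v = gradient_vector_flow(em)
assert np.abs(u[10, 5]) > 1e-6  # field has spread away from the edge
```

The raw gradient vanishes a pixel or two from the edge, so the diffused field's long capture range is what makes GVF-driven contours converge from far-off initializations; the iGVF idea keeps this machinery but feeds it less noise-sensitive features.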
Collapse
|