1. Bowness JS, Metcalfe D, El-Boghdadly K, Thurley N, Morecroft M, Hartley T, Krawczyk J, Noble JA, Higham H. Artificial intelligence for ultrasound scanning in regional anaesthesia: a scoping review of the evidence from multiple disciplines. Br J Anaesth 2024; 132:1049-1062. PMID: 38448269; PMCID: PMC11103083; DOI: 10.1016/j.bja.2024.01.036.
Abstract
BACKGROUND Artificial intelligence (AI) for ultrasound scanning in regional anaesthesia is a rapidly developing interdisciplinary field. There is a risk that work could be undertaken in parallel by different elements of the community but with a lack of knowledge transfer between disciplines, leading to repetition and diverging methodologies. This scoping review aimed to identify and map the available literature on the accuracy and utility of AI systems for ultrasound scanning in regional anaesthesia. METHODS A literature search was conducted using Medline, Embase, CINAHL, IEEE Xplore, and ACM Digital Library. Clinical trial registries, a registry of doctoral theses, regulatory authority databases, and websites of learned societies in the field were searched. Online commercial sources were also reviewed. RESULTS In total, 13,014 sources were identified; 116 were included for full-text review. A marked change in AI techniques was noted in 2016-17, from which point on the predominant technique used was deep learning. Methods of evaluating accuracy are variable, meaning it is impossible to compare the performance of one model with another. Evaluations of utility are more comparable, but predominantly gained from the simulation setting with limited clinical data on efficacy or safety. Study methodology and reporting lack standardisation. CONCLUSIONS There is a lack of structure to the evaluation of accuracy and utility of AI for ultrasound scanning in regional anaesthesia, which hinders rigorous appraisal and clinical uptake. A framework for consistent evaluation is needed to inform model evaluation, allow comparison between approaches/models, and facilitate appropriate clinical adoption.
Affiliation(s)
- James S Bowness
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK.
- David Metcalfe
- Nuffield Department of Orthopaedics, Rheumatology & Musculoskeletal Sciences, University of Oxford, Oxford, UK; Emergency Medicine Research in Oxford (EMROx), Oxford University Hospitals NHS Foundation Trust, Oxford, UK. https://twitter.com/@TraumaDataDoc
- Kariem El-Boghdadly
- Department of Anaesthesia and Peri-operative Medicine, Guy's & St Thomas's NHS Foundation Trust, London, UK; Centre for Human and Applied Physiological Sciences, King's College London, London, UK. https://twitter.com/@elboghdadly
- Neal Thurley
- Bodleian Health Care Libraries, University of Oxford, Oxford, UK
- Megan Morecroft
- Faculty of Medicine, Health & Life Sciences, University of Swansea, Swansea, UK
- Thomas Hartley
- Intelligent Ultrasound, Cardiff, UK. https://twitter.com/@tomhartley84
- Joanna Krawczyk
- Department of Anaesthesia, Aneurin Bevan University Health Board, Newport, UK
- J Alison Noble
- Institute of Biomedical Engineering, University of Oxford, Oxford, UK. https://twitter.com/@AlisonNoble_OU
- Helen Higham
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, UK; Nuffield Department of Anaesthesia, Oxford University Hospitals NHS Foundation Trust, Oxford, UK. https://twitter.com/@HelenEHigham
2. Wang JC, Shu YC, Lin CY, Wu WT, Chen LR, Lo YC, Chiu HC, Özçakar L, Chang KV. Application of deep learning algorithms in automatic sonographic localization and segmentation of the median nerve: A systematic review and meta-analysis. Artif Intell Med 2023; 137:102496. PMID: 36868687; DOI: 10.1016/j.artmed.2023.102496.
Abstract
OBJECTIVE High-resolution ultrasound is an emerging tool for diagnosing carpal tunnel syndrome caused by the compression of the median nerve at the wrist. This systematic review and meta-analysis aimed to explore and summarize the performance of deep learning algorithms in the automatic sonographic assessment of the median nerve at the carpal tunnel level. METHODS PubMed, Medline, Embase, and Web of Science were searched from the earliest records to May 2022 for studies investigating the utility of deep neural networks in the evaluation of the median nerve in carpal tunnel syndrome. The quality of the included studies was evaluated using the Quality Assessment Tool for Diagnostic Accuracy Studies. The outcome variables included precision, recall, accuracy, F-score, and Dice coefficient. RESULTS In total, seven articles were included, comprising 373 participants. The deep learning and related algorithms comprised U-Net, phase-based probabilistic active contour, MaskTrack, ConvLSTM, DeepNerve, DeepSL, ResNet, Feature Pyramid Network, DeepLab, Mask R-CNN, region proposal network, and ROI Align. The pooled values of precision and recall were 0.917 (95 % confidence interval [CI], 0.873-0.961) and 0.940 (95 % CI, 0.892-0.988), respectively. The pooled accuracy and Dice coefficient were 0.924 (95 % CI, 0.840-1.008) and 0.898 (95 % CI, 0.872-0.923), respectively, whereas the summarized F-score was 0.904 (95 % CI, 0.871-0.937). CONCLUSION The deep learning algorithm enables automated localization and segmentation of the median nerve at the carpal tunnel level in ultrasound imaging with acceptable accuracy and precision. Future research is expected to validate the performance of deep learning algorithms in detecting and segmenting the median nerve along its entire length as well as across datasets obtained from various ultrasound manufacturers.
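The outcome variables pooled above (precision, recall, F-score, and the Dice coefficient) all derive from the same pixel-level confusion counts. The sketch below shows the standard definitions applied to binary masks; it is illustrative only, is not code from any of the reviewed studies, and the toy masks are invented for the example.

```python
import numpy as np

def segmentation_metrics(pred, truth, eps=1e-8):
    """Pixel-wise precision, recall, F-score, and Dice for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f_score = 2 * precision * recall / (precision + recall + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)  # equals the F-score for binary masks
    return precision, recall, f_score, dice

# Toy example: a hypothetical 64x64 nerve mask and a slightly shifted prediction.
truth = np.zeros((64, 64), dtype=np.uint8)
truth[20:40, 22:44] = 1
pred = np.zeros_like(truth)
pred[22:42, 24:44] = 1
print([round(float(m), 3) for m in segmentation_metrics(pred, truth)])
```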
Affiliation(s)
- Jia-Chi Wang
- Department of Physical Medicine and Rehabilitation, Taipei Veterans General Hospital, Taipei, Taiwan; School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Yi-Chung Shu
- Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Che-Yu Lin
- Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Wei-Ting Wu
- Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan; Department of Physical Medicine and Rehabilitation, National Taiwan University College of Medicine, Taipei, Taiwan
- Lan-Rong Chen
- Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan
- Yu-Cheng Lo
- Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Hsiao-Chi Chiu
- Institute of Applied Mechanics, College of Engineering, National Taiwan University, Taipei, Taiwan
- Levent Özçakar
- Department of Physical and Rehabilitation Medicine, Hacettepe University Medical School, Ankara, Turkey
- Ke-Vin Chang
- Department of Physical Medicine and Rehabilitation and Community and Geriatric Research Center, National Taiwan University Hospital, Bei-Hu Branch, Taipei, Taiwan; Department of Physical Medicine and Rehabilitation, National Taiwan University College of Medicine, Taipei, Taiwan; Center for Regional Anesthesia and Pain Medicine, Wang-Fang Hospital, Taipei Medical University, Taipei, Taiwan.
3. Di Cosmo M, Fiorentino MC, Villani FP, Frontoni E, Smerilli G, Filippucci E, Moccia S. A deep learning approach to median nerve evaluation in ultrasound images of carpal tunnel inlet. Med Biol Eng Comput 2022; 60:3255-3264. PMID: 36152237; PMCID: PMC9537213; DOI: 10.1007/s11517-022-02662-5.
Abstract
Ultrasound (US) imaging is recognized as a useful support for Carpal Tunnel Syndrome (CTS) assessment through the evaluation of median nerve morphology. However, US is still far from being systematically adopted to evaluate this common entrapment neuropathy, owing to intrinsic challenges such as operator dependency and the lack of standard protocols. To support sonographers, the present study proposes a fully automatic deep learning approach to median nerve segmentation from US images. We collected and annotated a dataset of 246 images acquired in clinical practice from 103 rheumatic patients, regardless of anatomical variants (bifid nerve, closed vessels). We developed a Mask R-CNN with two additional transposed layers at the segmentation head to accurately segment the median nerve directly on transverse US images, and calculated the cross-sectional area (CSA) of the predicted median nerve. The proposed model achieved good performance in both median nerve detection and segmentation: Precision (Prec), Recall (Rec), Mean Average Precision (mAP), and Dice Similarity Coefficient (DSC) values were 0.916 ± 0.245, 0.938 ± 0.233, 0.936 ± 0.235, and 0.868 ± 0.201, respectively. The CSA values measured on true positive predictions were comparable with the sonographer's manual measurements, with a mean absolute error (MAE) of 0.918 mm². Experimental results show the potential of the proposed model, which identified and segmented the median nerve in normal-anatomy images while still struggling with infrequent anatomical variants. Future research will expand the dataset to include a wider spectrum of normal anatomy and pathology to support sonographers in daily practice.
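The cross-sectional area reported above is, in principle, the number of segmented pixels scaled by the physical pixel area. Below is a minimal sketch of that calculation plus the mean-absolute-error comparison against manual measurements; the pixel spacing, masks, and measurement values are assumptions for illustration, not data from the paper.

```python
import numpy as np

def cross_sectional_area_mm2(mask, pixel_spacing_mm):
    """Area of a binary segmentation in mm^2: pixel count times physical pixel area."""
    dy, dx = pixel_spacing_mm  # row and column spacing in mm (assumed values below)
    return float(mask.astype(bool).sum()) * dy * dx

# Hypothetical predicted mask and spacing; all numbers are illustrative only.
mask = np.zeros((300, 400), dtype=np.uint8)
mask[140:170, 180:260] = 1  # stand-in for a predicted median nerve section
csa_pred = cross_sectional_area_mm2(mask, pixel_spacing_mm=(0.06, 0.06))

# Mean absolute error against manual CSA measurements, the comparison used above.
manual = np.array([8.9, 10.2, 7.5])  # illustrative sonographer measurements (mm^2)
auto = np.array([9.4, 9.8, 8.1])     # illustrative model-derived CSAs (mm^2)
mae = float(np.mean(np.abs(auto - manual)))
print(round(csa_pred, 2), round(mae, 3))
```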
Affiliation(s)
- Mariachiara Di Cosmo
- Department of Information Engineering, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131, Ancona, AN, Italy.
- Maria Chiara Fiorentino
- Department of Information Engineering, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131, Ancona, AN, Italy
- Emanuele Frontoni
- Department of Political Sciences, Communication and International Relations, Università di Macerata, Macerata, Italy
- Gianluca Smerilli
- Rheumatology Unit, Department of Clinical and Molecular Sciences, Università Politecnica delle Marche, "Carlo Urbani" Hospital, Ancona, Italy
- Emilio Filippucci
- Rheumatology Unit, Department of Clinical and Molecular Sciences, Università Politecnica delle Marche, "Carlo Urbani" Hospital, Ancona, Italy
- Sara Moccia
- The BioRobotics Institute, Department of Excellence in Robotics and AI, Scuola Superiore Sant'Anna, Pisa, Italy
4. Artificial intelligence using deep neural network learning for automatic location of the interscalene brachial plexus in ultrasound images. Eur J Anaesthesiol 2022; 39:758-765. PMID: 35919026; DOI: 10.1097/eja.0000000000001720.
Abstract
BACKGROUND Identifying the interscalene brachial plexus can be challenging during ultrasound-guided interscalene block. OBJECTIVE We hypothesised that an algorithm based on deep learning could locate the interscalene brachial plexus in ultrasound images better than a nonexpert anaesthesiologist, thus possessing the potential to aid anaesthesiologists. DESIGN Observational study. SETTING A tertiary hospital in Shanghai, China. PATIENTS Patients undergoing elective surgery. INTERVENTIONS Ultrasound images at the interscalene level were collected from patients. Two independent image datasets were prepared to train and evaluate the deep learning model. Three senior anaesthesiologists who were experts in regional anaesthesia annotated the images. A deep convolutional neural network was developed, trained and optimised to locate the interscalene brachial plexus in the ultrasound images. Expert annotations on the datasets were regarded as an accurate baseline (ground truth). The test dataset was also annotated by five nonexpert anaesthesiologists. MAIN OUTCOME MEASURES The primary outcome was the distance between the lateral midpoints of the nerve sheath contours of the model predictions and ground truth. RESULTS The dataset was obtained from 1126 patients. The training dataset comprised 11 392 images from 1076 patients. The test dataset comprised 100 images from 50 patients. In the test dataset, the median [IQR] distance between the lateral midpoints of the nerve sheath contours of the model predictions and ground truth was 0.8 [0.4 to 2.9] mm: this was significantly shorter than that between nonexpert predictions and ground truth (3.4 [2.1 to 4.5] mm; P < 0.001). CONCLUSION The proposed model was able to locate the interscalene brachial plexus in ultrasound images more accurately than nonexperts. TRIAL REGISTRATION ClinicalTrials.gov (https://clinicaltrials.gov) identifier: NCT04183972.
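The primary outcome above is a physical distance between corresponding contour landmarks. The sketch below shows one plausible reading of that computation: pick the lateral-most points of each nerve sheath contour, average them to a "lateral midpoint", and convert the pixel distance to millimetres. The landmark definition, contours, and pixel spacing are all assumptions; the study's exact procedure is not reproduced here.

```python
import numpy as np

def lateral_midpoint(contour_px):
    """One plausible reading of 'lateral midpoint': the mean of the contour
    points lying within one pixel of its lateral-most x coordinate."""
    lateral_x = contour_px[:, 0].max()
    edge_pts = contour_px[np.abs(contour_px[:, 0] - lateral_x) <= 1.0]
    return edge_pts.mean(axis=0)

def midpoint_distance_mm(pred_contour, truth_contour, mm_per_px):
    d_px = np.linalg.norm(lateral_midpoint(pred_contour) - lateral_midpoint(truth_contour))
    return d_px * mm_per_px

# Illustrative (x, y) pixel coordinates of two nerve sheath outlines.
truth = np.array([[100, 50], [140, 60], [150, 80], [135, 100], [100, 95]], dtype=float)
pred = np.array([[102, 52], [141, 63], [152, 84], [134, 101], [101, 97]], dtype=float)
print(round(midpoint_distance_mm(pred, truth, mm_per_px=0.1), 2))  # assumed 0.1 mm/px
```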
5. Lloyd J, Morse R, Taylor A, Phillips D, Higham H, Burckett-St Laurent D, Bowness J. Artificial Intelligence: Innovation to Assist in the Identification of Sono-anatomy for Ultrasound-Guided Regional Anaesthesia. Adv Exp Med Biol 2022; 1356:117-140. PMID: 35146620; DOI: 10.1007/978-3-030-87779-8_6.
Abstract
Ultrasound-guided regional anaesthesia (UGRA) involves the targeted deposition of local anaesthesia to inhibit the function of peripheral nerves. Ultrasound allows the visualisation of nerves and the surrounding structures, to guide needle insertion to a perineural or fascial plane end point for injection. However, it is challenging to develop the necessary skills to acquire and interpret optimal ultrasound images. Sound anatomical knowledge is required and human image analysis is fallible, limited by heuristic behaviours and fatigue, while its subjectivity leads to varied interpretation even amongst experts. Therefore, to maximise the potential benefit of ultrasound guidance, innovation in sono-anatomical identification is required.

Artificial intelligence (AI) is rapidly infiltrating many aspects of everyday life. Advances related to medicine have been slower, in part because of the regulatory approval process needing to thoroughly evaluate the risk-benefit ratio of new devices. One area of AI to show significant promise is computer vision (a branch of AI dealing with how computers interpret the visual world), which is particularly relevant to medical image interpretation. AI includes the subfields of machine learning and deep learning, techniques used to interpret or label images. Deep learning systems may hold potential to support ultrasound image interpretation in UGRA but must be trained and validated on data prior to clinical use.

Review of the current UGRA literature compares the success and generalisability of deep learning and non-deep learning approaches to image segmentation and explains how computers are able to track structures such as nerves through image frames. We conclude this review with a case study from industry (ScanNav Anatomy Peripheral Nerve Block; Intelligent Ultrasound Limited). This includes a more detailed discussion of the AI approach involved in this system and reviews current evidence of the system performance.

The authors discuss how this technology may be best used to assist anaesthetists and what effects this may have on the future of learning and practice of UGRA. Finally, we discuss possible avenues for AI within UGRA and the associated implications.
Affiliation(s)
- James Lloyd
- Department of Anaesthesia, Royal Gwent Hospital, Aneurin Bevan University Health Board, Newport, UK
- Robert Morse
- Machine Learning Software Engineer, Intelligent Ultrasound Limited, Cardiff, UK
- David Phillips
- Department of Anaesthesia, Royal Gwent Hospital, Aneurin Bevan University Health Board, Newport, UK
- Helen Higham
- Nuffield Division of Anaesthesia, Oxford University Hospitals NHS Foundation Trust, Oxford, UK
- OxSTaR Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Oxford, UK
- James Bowness
- Department of Anaesthesia, Royal Gwent Hospital, Aneurin Bevan University Health Board, Newport, UK.
- OxSTaR Centre, Nuffield Department of Clinical Neurosciences, University of Oxford, John Radcliffe Hospital, Oxford, UK.
6. Fang L, Zhang L, Yao Y. Integrating a learned probabilistic model with energy functional for ultrasound image segmentation. Med Biol Eng Comput 2021; 59:1917-1931. PMID: 34383220; DOI: 10.1007/s11517-021-02411-0.
Abstract
The segmentation of ultrasound (US) images is steadily growing in popularity, owing to the need for computer-aided diagnosis (CAD) systems and the advantages of the modality, such as safety and efficiency. The objective of this work is to separate the lesion from its background in US images. However, most US images are of poor quality, affected by noise, ambiguous boundaries, and heterogeneity. Moreover, the lesion region may not be salient amid the surrounding normal tissue, which makes its segmentation a challenging problem. In this paper, a US image segmentation algorithm that combines a learned probabilistic model with energy functionals is proposed. First, a learned probabilistic model based on the generalized linear model (GLM) reduces false positives and increases the likelihood energy term of the lesion region. It yields a new probability projection that attracts the energy functional toward the desired region of interest. Then, a boundary indicator and a probability statistical-based energy functional are used to provide a reliable boundary for the lesion. Integrating probabilistic information into the energy functional framework can effectively overcome the impact of poor image quality and further improve segmentation accuracy. To verify the performance of the proposed algorithm, 40 images were randomly selected from three databases for evaluation. The values of the Dice coefficient, the Jaccard distance, root-mean-square error, and mean absolute error were 0.96, 0.91, 0.059, and 0.042, respectively. In addition, the initialization of the segmentation algorithm and the influence of noise were also analyzed. The experiments show a significant improvement in performance.
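The "learned probabilistic model based on the generalized linear model" can be pictured as a per-pixel classifier whose output probability map feeds the likelihood term of the energy functional. The sketch below uses logistic regression (one GLM) on two hand-picked pixel features; the features, synthetic image, and parameters are assumptions, and the paper's actual GLM formulation and energy functional are not reproduced.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

def pixel_features(img):
    """Per-pixel features: raw intensity and a 7x7 local mean (illustrative choice)."""
    local_mean = uniform_filter(img.astype(float), size=7)
    return np.stack([img.ravel(), local_mean.ravel()], axis=1)

# Synthetic training image with a brighter "lesion" region and its label mask.
rng = np.random.default_rng(0)
img = rng.normal(0.3, 0.1, (64, 64))
img[20:44, 20:44] += 0.4
labels = np.zeros((64, 64), dtype=int)
labels[20:44, 20:44] = 1

# Logistic regression is one GLM; the paper's exact link function and features may differ.
glm = LogisticRegression(max_iter=1000).fit(pixel_features(img), labels.ravel())

# Probability projection that could serve as the likelihood term of an energy functional.
prob_map = glm.predict_proba(pixel_features(img))[:, 1].reshape(img.shape)
print(prob_map.shape, round(float(prob_map[32, 32]), 2), round(float(prob_map[5, 5]), 2))
```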
Affiliation(s)
- Lingling Fang
- Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China.
- Nanchang Institute of Technology, Nanchang, Jiangxi Province, China.
- Lirong Zhang
- Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
- Yibo Yao
- Department of Computing and Information Technology, Liaoning Normal University, Dalian City, Liaoning Province, China
7. Horng MH, Yang CW, Sun YN, Yang TH. DeepNerve: A New Convolutional Neural Network for the Localization and Segmentation of the Median Nerve in Ultrasound Image Sequences. Ultrasound Med Biol 2020; 46:2439-2452. PMID: 32527593; DOI: 10.1016/j.ultrasmedbio.2020.03.017.
Abstract
Carpal tunnel syndrome commonly occurs in individuals working in occupations that involve use of vibrating manual tools or tasks with highly repetitive and forceful manual exertion. In recent years, carpal tunnel syndrome has been evaluated by ultrasound imaging that monitors median nerve movement. Conventional image analysis methods, such as the active contour model, are typically used to expedite automatic segmentation of the median nerve, but these usually require arduous manual intervention. We propose a new convolutional neural network framework for localization and segmentation of the median nerve, called DeepNerve, based on the U-Net model. DeepNerve integrates the characteristics of MaskTrack and convolutional long short-term memory to effectively locate and segment the median nerve. In experiments, the proposed model achieved high performance, with average Dice, precision, recall, and F-score values of 0.8975, 0.8912, 0.9119 and 0.9015, respectively. The segmentation results of DeepNerve were significantly improved in comparison with those of conventional active contour models. Additionally, the results of Student's t-test revealed significant differences in four deformation measurements of the median nerve: area, perimeter, aspect ratio and circularity. We conclude that the proposed DeepNerve not only generates satisfactory results for localization and segmentation of the median nerve, but also creates more promising measurements for applications in clinical carpal tunnel syndrome diagnosis.
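The four deformation measurements compared above (area, perimeter, aspect ratio, circularity) are standard shape descriptors of the segmented nerve mask. A minimal sketch, assuming a binary mask in pixel units and using scikit-image region properties (the paper's own measurement code is not available here):

```python
import numpy as np
from skimage import measure

def nerve_shape_descriptors(mask):
    """Area, perimeter, aspect ratio, and circularity of the largest region in a
    binary mask -- the four deformation measures compared in the study."""
    regions = measure.regionprops(measure.label(mask.astype(np.uint8)))
    props = max(regions, key=lambda p: p.area)
    area = props.area                     # in pixels; scale by pixel area for mm^2
    perimeter = props.perimeter
    aspect_ratio = props.major_axis_length / max(props.minor_axis_length, 1e-8)
    circularity = 4.0 * np.pi * area / max(perimeter ** 2, 1e-8)  # 1.0 for a circle
    return {"area": area, "perimeter": perimeter,
            "aspect_ratio": aspect_ratio, "circularity": circularity}

# Illustrative elliptical "median nerve" mask.
yy, xx = np.mgrid[0:128, 0:128]
mask = (((xx - 64) / 30.0) ** 2 + ((yy - 64) / 18.0) ** 2) <= 1.0
print({k: round(float(v), 3) for k, v in nerve_shape_descriptors(mask).items()})
```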
Affiliation(s)
- Ming-Huwi Horng
- Department of Computer Science and Information Engineering, National Pingtung University, Pingtung, Taiwan
- Cheng-Wei Yang
- Department of Computer Science and Information Engineering, National Pingtung University, Pingtung, Taiwan
- Yung-Nien Sun
- Department of Computer Science and Information Engineering, National Pingtung University, Pingtung, Taiwan.
- Tai-Hua Yang
- Department of Biomedical Engineering, National Cheng Kung University Hospital, Tainan, Taiwan
8. Zhou Z, Wang Y, Guo Y, Jiang X, Qi Y. Ultrafast Plane Wave Imaging With Line-Scan-Quality Using an Ultrasound-Transfer Generative Adversarial Network. IEEE J Biomed Health Inform 2020; 24:943-956. DOI: 10.1109/jbhi.2019.2950334.
9. Smistad E, Johansen KF, Iversen DH, Reinertsen I. Highlighting nerves and blood vessels for ultrasound-guided axillary nerve block procedures using neural networks. J Med Imaging (Bellingham) 2018; 5:044004. PMID: 30840734; PMCID: PMC6228309; DOI: 10.1117/1.jmi.5.4.044004.
Abstract
Ultrasound images acquired during axillary nerve block procedures can be difficult to interpret. Highlighting the important structures, such as nerves and blood vessels, may be useful for the training of inexperienced users. A deep convolutional neural network is used to identify the musculocutaneous, median, ulnar, and radial nerves, as well as the blood vessels in ultrasound images. A dataset of 49 subjects is collected and used for training and evaluation of the neural network. Several image augmentations, such as rotation, elastic deformation, shadows, and horizontal flipping, are tested. The neural network is evaluated using cross validation. The results showed that the blood vessels were the easiest to detect, with a precision and recall above 0.8. Among the nerves, the median and ulnar nerves were the easiest to detect, with F-scores of 0.73 and 0.62, respectively. The radial nerve was the hardest to detect, with an F-score of 0.39. Image augmentations proved effective, increasing the F-score by as much as 0.13. A Wilcoxon signed-rank test showed that the improvements from rotation, shadow, and elastic deformation augmentations were significant, and the combination of all augmentations gave the best result. The results are promising; however, there is more work to be done, as the precision and recall are still too low. A larger dataset is most likely needed to improve accuracy, in combination with anatomical and temporal models.
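Three of the augmentations tested above can be sketched in a few lines; the rotation range, shadow model, and flip probability below are assumptions for illustration, and elastic deformation is omitted, so this is not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import rotate

rng = np.random.default_rng(42)

def augment(image, mask):
    """Simplified versions of three augmentations from the paper: random
    rotation, horizontal flip, and a synthetic shadow. Parameter ranges are
    assumptions; elastic deformation is not implemented here."""
    angle = rng.uniform(-15, 15)
    img = rotate(image, angle, reshape=False, order=1, mode="nearest")
    msk = rotate(mask.astype(float), angle, reshape=False, order=0, mode="nearest")
    if rng.random() < 0.5:  # horizontal flip, applied to image and mask together
        img, msk = np.fliplr(img), np.fliplr(msk)
    # Shadow: attenuate intensities in a random vertical band of the image.
    start = int(rng.integers(0, img.shape[1] // 2))
    width = int(rng.integers(img.shape[1] // 8, img.shape[1] // 3))
    img = img.copy()
    img[:, start:start + width] *= rng.uniform(0.4, 0.8)
    return img, msk.round().astype(mask.dtype)

image = rng.random((128, 128))
mask = np.zeros((128, 128), dtype=np.uint8)
mask[40:80, 50:90] = 1
aug_img, aug_mask = augment(image, mask)
print(aug_img.shape, int(aug_mask.sum()) > 0)
```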
Affiliation(s)
- Erik Smistad
- SINTEF Medical Technology, Trondheim, Norway
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
- Daniel Høyer Iversen
- SINTEF Medical Technology, Trondheim, Norway
- Norwegian University of Science and Technology, Department of Circulation and Medical Imaging, Trondheim, Norway
10. Meiburger KM, Acharya UR, Molinari F. Automated localization and segmentation techniques for B-mode ultrasound images: A review. Comput Biol Med 2017; 92:210-235. PMID: 29247890; DOI: 10.1016/j.compbiomed.2017.11.018.
Abstract
B-mode ultrasound imaging is used extensively in medicine. Hence, there is a need for efficient segmentation tools to aid in computer-aided diagnosis, image-guided interventions, and therapy. This paper presents a comprehensive review of automated localization and segmentation techniques for B-mode ultrasound images. The paper first describes the general characteristics of B-mode ultrasound images. Then, insight into the localization and segmentation of tissues is provided, both in the case in which the organ/tissue localization provides the final segmentation and in the case in which a two-step segmentation process is needed, because the desired boundaries are too fine to locate from within the entire ultrasound frame. Subsequently, examples of the main techniques found in the literature are shown, including but not limited to shape priors, superpixel and classification, local pixel statistics, active contours, edge-tracking, dynamic programming, and data mining. Ten selected applications (abdomen/kidney, breast, cardiology, thyroid, liver, vascular, musculoskeletal, obstetrics, gynecology, prostate) are then investigated in depth, and the performances of a few specific applications are compared. In conclusion, future perspectives for B-mode based segmentation, such as the integration of RF information, the employment of higher frequency probes when possible, the focus on completely automatic algorithms, and the increase in available data are discussed.
Affiliation(s)
- Kristen M Meiburger
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy
- U Rajendra Acharya
- Department of Electronic & Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Imaging, Faculty of Medicine, University of Malaya, Kuala Lumpur, Malaysia
- Filippo Molinari
- Biolab, Department of Electronics and Telecommunications, Politecnico di Torino, Torino, Italy.
11. Wang Y, Zheng C, Peng H, Chen X. Short-lag spatial coherence combined with eigenspace-based minimum variance beamformer for synthetic aperture ultrasound imaging. Comput Biol Med 2017; 91:267-276. DOI: 10.1016/j.compbiomed.2017.10.016.
12. Smistad E, Iversen DH, Leidig L, Lervik Bakeng JB, Johansen KF, Lindseth F. Automatic Segmentation and Probe Guidance for Real-Time Assistance of Ultrasound-Guided Femoral Nerve Blocks. Ultrasound Med Biol 2017; 43:218-226. PMID: 27727021; DOI: 10.1016/j.ultrasmedbio.2016.08.036.
Abstract
Ultrasound-guided regional anesthesia can be challenging, especially for inexperienced physicians. The goal of the proposed methods is to create a system that can assist a user in performing ultrasound-guided femoral nerve blocks. The system indicates in which direction the user should move the ultrasound probe to investigate the region of interest and to reach the target site for needle insertion. Additionally, the system provides automatic real-time segmentation of the femoral artery, the femoral nerve and the two layers fascia lata and fascia iliaca. This aids in interpretation of the 2-D ultrasound images and the surrounding anatomy in 3-D. The system was evaluated on 24 ultrasound acquisitions of both legs from six subjects. The estimated target site for needle insertion and the segmentations were compared with those of an expert anesthesiologist. Average target distance was 8.5 mm with a standard deviation of 2.5 mm. The mean absolute differences of the femoral nerve and the fascia segmentations were about 1-3 mm.
Affiliation(s)
- Erik Smistad
- SINTEF Medical Technology, Trondheim, Norway; Norwegian University of Science and Technology, Trondheim, Norway.
- Daniel Høyer Iversen
- SINTEF Medical Technology, Trondheim, Norway; Norwegian University of Science and Technology, Trondheim, Norway
- Linda Leidig
- Norwegian University of Science and Technology, Trondheim, Norway
- Frank Lindseth
- SINTEF Medical Technology, Trondheim, Norway; Norwegian University of Science and Technology, Trondheim, Norway
13. Hadjerci O, Hafiane A, Conte D, Makris P, Vieyres P, Delbos A. Computer-aided detection system for nerve identification using ultrasound images: A comparative study. Informatics in Medicine Unlocked 2016. DOI: 10.1016/j.imu.2016.06.003.
14. Segmentation of uterine fibroid ultrasound images using a dynamic statistical shape model in HIFU therapy. Comput Med Imaging Graph 2015; 46 Pt 3:302-314. PMID: 26459767; DOI: 10.1016/j.compmedimag.2015.07.004.
Abstract
Segmenting the lesion areas from ultrasound (US) images is an important step in the intra-operative planning of high-intensity focused ultrasound (HIFU). However, accurate segmentation remains a challenge due to intensity inhomogeneity, blurry boundaries in HIFU US images, and the deformation of uterine fibroids caused by the patient's breathing or external forces. This paper presents a novel dynamic statistical shape model (SSM)-based segmentation method to accurately and efficiently segment the target region in HIFU US images of uterine fibroids. To accurately learn the prior shape information of lesion boundary fluctuations in the training set, the dynamic properties of the stochastic differential equation and the Fokker-Planck equation are incorporated into the SSM (referred to as SF-SSM). Then, a new observation model of lesion areas (named RPFM) in HIFU US images is developed to describe the features of the lesion areas and provide a likelihood probability for the prior shape given by SF-SSM. SF-SSM and RPFM are integrated into an active contour model to improve the accuracy and robustness of segmentation in HIFU US images. We compare the proposed method with four well-known US segmentation methods to demonstrate its superiority. The experimental results in clinical HIFU US images validate the high accuracy and robustness of our approach, even when the quality of the images is unsatisfactory, indicating its potential for practical application in HIFU therapy.
15. Awan R, Rajpoot K. Spatial and spatio-temporal feature extraction from 4D echocardiography images. Comput Biol Med 2015; 64:138-147. PMID: 26164034; DOI: 10.1016/j.compbiomed.2015.06.017.
Abstract
BACKGROUND Ultrasound images are difficult to segment because of their noisy, low-contrast nature, which makes it challenging to extract the important features. Typical intensity-gradient based approaches are not suitable for these low-contrast images, whereas local phase based techniques have been shown to provide better results than intensity based methods for ultrasound images. Spatial feature extraction methods ignore the continuity of the heart cycle and may also capture spurious features. It is believed that spurious features (noise) that are not consistent across frames can be excluded by considering temporal information. METHODS In this paper, we present a local phase based 4D (3D+time) feature asymmetry (FA) measure using the monogenic signal. We investigated spatio-temporal feature extraction to explore the effect of adding time information to the feature extraction process. RESULTS To evaluate the impact of the time dimension, the results of 4D feature extraction are compared with those of 3D feature extraction, showing that 4D feature extraction is favourable when the temporal resolution is good. The paper compares band-pass filters (difference of Gaussian, Cauchy, and Gaussian derivative) in terms of their feature extraction performance. Moreover, the feature extraction is further evaluated quantitatively by left ventricle segmentation using the extracted features. CONCLUSIONS The results demonstrate that spatio-temporal feature extraction is promising in frames with good temporal resolution.
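For orientation, the feature asymmetry measure is built from the even (band-pass) and odd (Riesz) components of the monogenic signal. The sketch below is a 2-D spatial simplification with a difference-of-Gaussian band-pass filter; the paper's 4-D (3-D + time) formulation, its Cauchy and Gaussian-derivative filters, and its parameter choices are not reproduced, and the filter widths and threshold here are assumptions.

```python
import numpy as np

def feature_asymmetry_2d(img, sigma_low=2.0, sigma_high=6.0, threshold=0.05):
    """2-D feature asymmetry from the monogenic signal, using a difference-of-
    Gaussian band-pass filter built in the frequency domain. Filter widths and
    the noise threshold are illustrative, not the paper's settings."""
    rows, cols = img.shape
    u = np.fft.fftfreq(cols)[None, :]           # horizontal frequency (cycles/pixel)
    v = np.fft.fftfreq(rows)[:, None]           # vertical frequency
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1e-8                         # avoid division by zero at DC

    # Difference of Gaussians: broad low-pass minus narrow low-pass = band-pass.
    band_pass = (np.exp(-2 * (np.pi * sigma_low * radius) ** 2)
                 - np.exp(-2 * (np.pi * sigma_high * radius) ** 2))

    spectrum = np.fft.fft2(img) * band_pass
    even = np.real(np.fft.ifft2(spectrum))                      # band-passed image
    riesz_x = np.real(np.fft.ifft2(spectrum * (-1j * u / radius)))
    riesz_y = np.real(np.fft.ifft2(spectrum * (-1j * v / radius)))
    odd = np.sqrt(riesz_x ** 2 + riesz_y ** 2)                  # odd (Riesz) magnitude

    numerator = np.maximum(np.abs(odd) - np.abs(even) - threshold, 0.0)
    return numerator / (np.sqrt(even ** 2 + odd ** 2) + 1e-8)

# Synthetic image with a vertical step edge; FA typically responds near edges.
img = np.zeros((128, 128))
img[:, 64:] = 1.0
fa = feature_asymmetry_2d(img)
print(fa.shape, round(float(fa[:, 56:72].max()), 3), round(float(fa[:, 24:40].max()), 3))
```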
Affiliation(s)
- Ruqayya Awan
- School of Electrical Engineering and Computer Science (SEECS), National University of Sciences & Technology (NUST), Islamabad, Pakistan.
- Kashif Rajpoot
- School of Electrical Engineering and Computer Science (SEECS), National University of Sciences & Technology (NUST), Islamabad, Pakistan; College of Computer Science & Information Technology (CCSIT), King Faisal University (KFU), Al-Hofuf, Saudi Arabia.