301. Ali H, Sharif M, Yasmin M, Rehmani MH, Riaz F. A survey of feature extraction and fusion of deep learning for detection of abnormalities in video endoscopy of gastrointestinal-tract. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09743-2]
302. Cerrolaza JJ, Picazo ML, Humbert L, Sato Y, Rueckert D, Ballester MÁG, Linguraru MG. Computational anatomy for multi-organ analysis in medical imaging: A review. Med Image Anal 2019; 56:44-67. [DOI: 10.1016/j.media.2019.04.002]
303. Hesamian MH, Jia W, He X, Kennedy P. Deep Learning Techniques for Medical Image Segmentation: Achievements and Challenges. J Digit Imaging 2019; 32:582-596. [PMID: 31144149] [PMCID: PMC6646484] [DOI: 10.1007/s10278-019-00227-x]
Abstract
Deep learning is by now firmly established as a robust tool for image segmentation. It has been widely used to separate homogeneous areas as the first and critical component of diagnosis and treatment pipelines. In this article, we present a critical appraisal of popular methods that have employed deep-learning techniques for medical image segmentation. Moreover, we summarize the most common challenges incurred and suggest possible solutions.
304. Man Y, Huang Y, Feng J, Li X, Wu F. Deep Q Learning Driven CT Pancreas Segmentation With Geometry-Aware U-Net. IEEE Trans Med Imaging 2019; 38:1971-1980. [PMID: 30998461] [DOI: 10.1109/tmi.2019.2911588]
Abstract
The segmentation of the pancreas is important for medical image analysis, yet it faces great challenges of class imbalance, background distractions, and non-rigid geometrical features. To address these difficulties, we introduce a deep Q network (DQN) driven approach with a deformable U-Net to accurately segment the pancreas by explicitly interacting with contextual information and extracting anisotropic features from the pancreas. The DQN-based model learns a context-adaptive localization policy to produce a visually tightened and precise localization bounding box of the pancreas. Furthermore, the deformable U-Net captures geometry-aware information of the pancreas by learning geometrically deformable filters for feature extraction. Experiments on the NIH dataset validate the effectiveness of the proposed framework in pancreas segmentation.
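The localization policy described above iteratively adjusts a bounding box around the pancreas. As a hypothetical, much-simplified illustration of that idea (not the authors' trained DQN), the sketch below greedily applies edge-adjustment actions that maximize overlap with a target region; in the actual method a learned Q-network, rather than the ground-truth IoU, would score the candidate actions:

```python
# Toy greedy bounding-box localization over a discrete action space,
# loosely mimicking a DQN-driven localization step (illustrative only).

def iou(a, b):
    # Boxes as (x0, y0, x1, y1); intersection-over-union of two boxes.
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def actions(box, step=4):
    # Move each of the four edges in or out by a fixed step.
    x0, y0, x1, y1 = box
    return [
        (x0 + step, y0, x1, y1), (x0 - step, y0, x1, y1),  # left edge
        (x0, y0 + step, x1, y1), (x0, y0 - step, x1, y1),  # top edge
        (x0, y0, x1 + step, y1), (x0, y0, x1 - step, y1),  # right edge
        (x0, y0, x1, y1 + step), (x0, y0, x1, y1 - step),  # bottom edge
    ]

def localize(box, target, max_steps=100):
    # Greedy policy: take the action with the best score each step.
    # A DQN would replace iou(..., target) with learned Q-values.
    for _ in range(max_steps):
        best = max(actions(box), key=lambda b: iou(b, target))
        if iou(best, target) <= iou(box, target):
            break  # stop when no action improves the box
        box = best
    return box

print(localize((0, 0, 100, 100), target=(20, 20, 60, 60)))  # → (20, 20, 60, 60)
```

With coordinates on the step grid, the greedy policy tightens the initial box exactly onto the target; the paper's contribution is learning such a policy from image context rather than from the (unavailable) ground truth.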
305
|
Park S, Chu LC, Fishman EK, Yuille AL, Vogelstein B, Kinzler KW, Horton KM, Hruban RH, Zinreich ES, Fouladi DF, Shayesteh S, Graves J, Kawamoto S. Annotated normal CT data of the abdomen for deep learning: Challenges and strategies for implementation. Diagn Interv Imaging 2019; 101:35-44. [PMID: 31358460 DOI: 10.1016/j.diii.2019.05.008] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2019] [Revised: 05/23/2019] [Accepted: 05/28/2019] [Indexed: 02/08/2023]
Abstract
PURPOSE The purpose of this study was to report procedures developed to annotate abdominal computed tomography (CT) images from subjects without pancreatic disease that will be used as the input for deep convolutional neural networks (DNN) for development of deep learning algorithms for automatic recognition of a normal pancreas. MATERIALS AND METHODS Dual-phase contrast-enhanced volumetric CT acquired from 2005 to 2009 from potential kidney donors were retrospectively assessed. Four trained human annotators manually and sequentially annotated 22 structures in each datasets, then expert radiologists confirmed the annotation. For efficient annotation and data management, a commercial software package that supports three-dimensional segmentation was used. RESULTS A total of 1150 dual-phase CT datasets from 575 subjects were annotated. There were 229 men and 346 women (mean age: 45±12years; range: 18-79years). The mean intra-observer intra-subject dual-phase CT volume difference of all annotated structures was 4.27mL (7.65%). The deep network prediction for multi-organ segmentation showed high fidelity with 89.4% and 1.29mm in terms of mean Dice similarity coefficients and mean surface distances, respectively. CONCLUSIONS A reliable data collection/annotation process for abdominal structures was developed. This process can be used to generate large datasets appropriate for deep learning.
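The evaluation above reports the Dice similarity coefficient, the standard overlap metric for segmentation. A minimal sketch of that computation on binary masks (illustrative only, not the authors' pipeline; the 2D example generalizes voxel-wise to 3D):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient of two boolean masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    # Two empty masks are conventionally treated as a perfect match.
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Two overlapping 2D masks: 8 pixels each, 4 shared.
a = np.zeros((4, 4), bool); a[:2, :] = True
b = np.zeros((4, 4), bool); b[1:3, :] = True
print(dice(a, b))  # → 0.5
```

Dice = 1.0 means perfect overlap, 0.0 means no overlap; the 89.4% reported above is the mean of this score over organs and subjects.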
306. Liu L, Wu FX, Wang J. Efficient multi-kernel DCNN with pixel dropout for stroke MRI segmentation. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.03.049]
307. Sanders JW, Fletcher JR, Frank SJ, Liu HL, Johnson JM, Zhou Z, Chen HSM, Venkatesan AM, Kudchadker RJ, Pagel MD, Ma J. Deep learning application engine (DLAE): Development and integration of deep learning algorithms in medical imaging. SoftwareX 2019; 10:100347. [PMID: 34113706] [PMCID: PMC8188855] [DOI: 10.1016/j.softx.2019.100347]
Abstract
Herein we introduce a deep learning (DL) application engine (DLAE) system concept, present potential uses of it, and describe pathways for its integration into clinical workflows. An open-source software application was developed to provide a code-free approach to DL for medical imaging applications. DLAE supports several DL techniques used in medical imaging, including convolutional neural networks, fully convolutional networks, generative adversarial networks, and bounding box detectors. Several example applications using clinical images were developed and tested to demonstrate the capabilities of DLAE. Additionally, a model deployment example was demonstrated in which DLAE was used to integrate two trained models into a commercial clinical software package.
308. Da Silva K, Kumar P, Choonara YE, du Toit LC, Pillay V. Preprocessing of Medical Image Data for Three-Dimensional Bioprinted Customized-Neural-Scaffolds. Tissue Eng Part C Methods 2019; 25:401-410. [PMID: 31144597] [DOI: 10.1089/ten.tec.2019.0052]
Abstract
IMPACT STATEMENT: Nerve damage, which can be devastating, triggers several biological cascades that leave the human nervous system unable to achieve complete nerve repair and regain of function. Since no therapeutic strategy exists to provide immediate attention and intervention to patients with newly acquired nerve damage, we propose a strategy in which accelerated medical image processing through graphics processing unit implementation and three-dimensional printing are combined to produce a time-efficient, patient-specific (custom-neural-scaffold) solution to nerve damage. This work aims to beneficially shorten the time required for medical decision-making so that improved patient outcomes are achieved.
309.
Abstract
Manual image segmentation is a time-consuming task routinely performed in radiotherapy to identify each patient's targets and anatomical structures. The efficacy and safety of the radiotherapy plan require accurate segmentations, as these regions of interest are generally used to optimize and assess the quality of the plan. However, reports have shown that this process can be subject to significant inter- and intraobserver variability. Furthermore, the quality of the radiotherapy treatment, and of subsequent analyses (i.e., radiomic, dosimetric), depends on the accuracy of these manual segmentations. Automatic segmentation (or auto-segmentation) of targets and normal tissues is, therefore, preferable as it would address these challenges. Previously, auto-segmentation techniques have been clustered into 3 generations of algorithms, with multiatlas-based and hybrid techniques (third generation) considered the state of the art. More recently, however, the field of medical image segmentation has seen accelerated growth driven by advances in computer vision, particularly through the application of deep learning algorithms, suggesting we have entered the fourth generation of auto-segmentation algorithm development. In this paper, the authors review traditional (non-deep-learning) algorithms particularly relevant for applications in radiotherapy. Concepts from deep learning are introduced, focusing on convolutional neural networks and fully convolutional networks, which are generally used for segmentation tasks. Furthermore, the authors provide a summary of deep learning auto-segmentation radiotherapy applications reported in the literature. Lastly, considerations for clinical deployment (commissioning and QA) of auto-segmentation software are provided.
310. Heinrich MP, Oktay O, Bouteldja N. OBELISK-Net: Fewer layers to solve 3D multi-organ segmentation with sparse deformable convolutions. Med Image Anal 2019; 54:1-9. [DOI: 10.1016/j.media.2019.02.006]
311. Soltanian-Zadeh S, Sahingur K, Blau S, Gong Y, Farsiu S. Fast and robust active neuron segmentation in two-photon calcium imaging using spatiotemporal deep learning. Proc Natl Acad Sci U S A 2019; 116:8554-8563. [PMID: 30975747] [PMCID: PMC6486774] [DOI: 10.1073/pnas.1812995116]
Abstract
Calcium imaging records large-scale neuronal activity with cellular resolution in vivo. Automated, fast, and reliable active neuron segmentation is a critical step in the analysis workflow of utilizing neuronal signals in real-time behavioral studies for discovery of neuronal coding properties. Here, to exploit the full spatiotemporal information in two-photon calcium imaging movies, we propose a 3D convolutional neural network to identify and segment active neurons. By utilizing a variety of two-photon microscopy datasets, we show that our method outperforms state-of-the-art techniques and is on a par with manual segmentation. Furthermore, we demonstrate that the network trained on data recorded at a specific cortical layer can be used to accurately segment active neurons from another layer with different neuron density. Finally, our work documents significant tabulation flaws in one of the most cited and active online scientific challenges in neuron segmentation. As our computationally fast method is an invaluable tool for a large spectrum of real-time optogenetic experiments, we have made our open-source software and carefully annotated dataset freely available online.
312. Koizumi M, Motegi K, Umeda T. A novel biomarker, active whole skeletal total lesion glycolysis (WS-TLG), as a quantitative method to measure bone metastatic activity in breast cancer patients. Ann Nucl Med 2019; 33:502-511. [PMID: 30982124] [PMCID: PMC6609583] [DOI: 10.1007/s12149-019-01359-4]
Abstract
OBJECTIVE: There is no good response evaluation method for skeletal metastasis. We aimed to develop a novel quantitative method to evaluate the response of skeletal metastases, especially lytic lesions, to treatment.
METHODS: A method to measure active bone metastatic burden quantitatively using F-18 fluorodeoxyglucose positron emission tomography with computed tomography (FDG-PET/CT) in breast cancer patients, whole skeletal total lesion glycolysis (WS-TLG), the summation of each skeletal lesion's TLG, was developed. To identify active bone lesions, a tentative cutoff value was decided using FDG-PET/CT in 85 breast cancer patients without skeletal metastasis and 35 with skeletal metastasis by varying the cutoff value. The WS-TLG method was then evaluated against the PET Response Criteria in Solid Tumors (PERCIST) and European Organization for Research and Treatment of Cancer (EORTC) criteria, restricted to bone, in 15 treated breast cancer patients with skeletal metastasis.
RESULTS: A cutoff value of standardized uptake value (SUV) = 4.0 gave 91% (77/85) specificity and 97% (34/35) sensitivity, so SUV = 4.0 was adopted as the tentative cutoff. Skeletal metastases of lytic and mixed types showed higher WS-TLG values than those of blastic or intertrabecular types, although statistical significance was not tested. All 15 patients showed agreement with PERCIST or EORTC in therapeutic bone response.
CONCLUSION: This quantitative WS-TLG method appears to be a good biomarker for evaluating skeletal metastasis in breast cancer patients, especially lytic or mixed types. Further clinical studies are warranted to assess the clinical value of this new method.
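WS-TLG as defined above is a summation: for each skeletal lesion whose uptake exceeds the cutoff (SUV = 4.0 here), add its total lesion glycolysis, i.e. mean SUV × lesion volume. A hedged sketch of that bookkeeping, using a hypothetical lesion record schema (`suv_max`, `suv_mean`, `volume_ml`) rather than the authors' implementation:

```python
# WS-TLG = sum of TLG (mean SUV x volume in mL) over active lesions,
# where a lesion counts as active if its maximum SUV exceeds the cutoff.
SUV_CUTOFF = 4.0  # tentative cutoff chosen in the study

def ws_tlg(lesions, cutoff=SUV_CUTOFF):
    # Each lesion is a dict with suv_max, suv_mean, volume_ml (assumed schema).
    return sum(l["suv_mean"] * l["volume_ml"]
               for l in lesions if l["suv_max"] > cutoff)

lesions = [
    {"suv_max": 7.2, "suv_mean": 4.8, "volume_ml": 3.0},  # active: TLG 14.4
    {"suv_max": 3.1, "suv_mean": 2.2, "volume_ml": 5.0},  # below cutoff, excluded
    {"suv_max": 9.5, "suv_mean": 6.0, "volume_ml": 1.5},  # active: TLG 9.0
]
print(round(ws_tlg(lesions), 1))  # → 23.4
```

In practice the per-lesion SUV statistics and volumes would come from the segmented FDG-PET/CT volume; the point here is only that the biomarker is a single scalar summed over the whole skeleton.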
313. What the radiologist should know about artificial intelligence - an ESR white paper. Insights Imaging 2019; 10:44. [PMID: 30949865] [PMCID: PMC6449411] [DOI: 10.1186/s13244-019-0738-2]
Abstract
This paper aims to provide a review of the basis for the application of AI in radiology, to discuss its immediate ethical and professional impact in radiology, and to consider possible future evolution. Even if AI does add significant value to image interpretation, there are implications outside the traditional radiology activities of lesion detection and characterisation. In radiomics, AI can foster the analysis of features and help in the correlation with other omics data. Imaging biobanks would become a necessary infrastructure to organise and share the image data from which AI models can be trained. AI can be used as an optimising tool to assist the technologist and radiologist in choosing a personalised patient protocol, tracking the patient's dose parameters, and providing an estimate of the radiation risks. AI can also aid the reporting workflow and help link words, images, and quantitative data. Finally, AI coupled with clinical decision support (CDS) can improve the decision process and thereby optimise clinical and radiological workflows.
314. Precision Medicine in Pancreatic Disease - Knowledge Gaps and Research Opportunities: Summary of a National Institute of Diabetes and Digestive and Kidney Diseases Workshop. Pancreas 2019; 48:1250-1258. [PMID: 31688587] [PMCID: PMC7282491] [DOI: 10.1097/mpa.0000000000001412]
Abstract
A workshop on research gaps and opportunities for Precision Medicine in Pancreatic Disease was sponsored by the National Institute of Diabetes and Digestive and Kidney Diseases on July 24, 2019, in Pittsburgh. The workshop included an overview lecture on precision medicine in cancer and 4 sessions: (1) general considerations for the application of bioinformatics and artificial intelligence; (2) omics, the combination of risk factors and biomarkers; (3) precision imaging; and (4) gaps, barriers, and needs to move from precision to personalized medicine for pancreatic disease. Current precision medicine approaches and tools were reviewed, and participants identified knowledge gaps and research needs that hinder bringing precision medicine to pancreatic diseases. Most critical were (a) multicenter efforts to collect large-scale patient data sets from multiple data streams in the context of environmental and social factors; (b) new information systems that can collect, annotate, and quantify data to inform disease mechanisms; (c) novel prospective clinical trial designs to test and improve therapies; and (d) a framework for measuring and assessing the value of proposed approaches to the health care system. With these advances, precision medicine can identify patients early in the course of their pancreatic disease and prevent progression to chronic or fatal illness.
315. Augmented visualization with depth perception cues to improve the surgeon's performance in minimally invasive surgery. Med Biol Eng Comput 2018; 57:995-1013. [PMID: 30511205] [DOI: 10.1007/s11517-018-1929-6]
Abstract
Minimally invasive techniques, such as laparoscopy and radiofrequency ablation of tumors, bring important advantages in surgery: by minimizing incisions on the patient's body, they can reduce the hospitalization period and the risk of postoperative complications. Unfortunately, they come with drawbacks for surgeons, who have a restricted view of the operation area through indirect access and the 2D images provided by a camera inserted in the body. Augmented reality provides an "X-ray vision" of the patient's anatomy through visualization of the internal organs, freeing surgeons from the task of mentally mapping content from CT images onto the operative scene. We present a navigation system that supports surgeons in the preoperative and intraoperative phases, and an augmented reality system that superimposes virtual organs on the patient's body together with depth and distance information. We implemented a combination of visual and audio cues allowing the surgeon to improve intervention precision and avoid the risk of damaging anatomical structures. Test scenarios proved the good efficacy and accuracy of the system, and tests in the operating room suggested some modifications to the tracking system to make it more robust with respect to occlusions.
Graphical abstract: Augmented visualization in minimally invasive surgery.
316. Imran AAZ, Hatamizadeh A, Ananth SP, Ding X, Terzopoulos D, Tajbakhsh N. Automatic Segmentation of Pulmonary Lobes Using a Progressive Dense V-Network. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support 2018. [DOI: 10.1007/978-3-030-00889-5_32]
317. Gibson E, Hu Y, Ghavami N, Ahmed HU, Moore C, Emberton M, Huisman HJ, Barratt DC. Inter-site Variability in Prostate Segmentation Accuracy Using Deep Learning. Medical Image Computing and Computer Assisted Intervention - MICCAI 2018. [DOI: 10.1007/978-3-030-00937-3_58]