1. Zhong L, Xiao R, Shu H, Zheng K, Li X, Wu Y, Ma J, Feng Q, Yang W. NCCT-to-CECT synthesis with contrast-enhanced knowledge and anatomical perception for multi-organ segmentation in non-contrast CT images. Med Image Anal 2025; 100:103397. [PMID: 39612807 DOI: 10.1016/j.media.2024.103397]
Abstract
Contrast-enhanced computed tomography (CECT) is routinely used for delineating organs-at-risk (OARs) in radiation therapy planning, and the delineated OARs must then be transferred from CECT to non-contrast CT (NCCT) for dose calculation. However, the iodinated contrast agents (CA) used in CECT carry a risk of adverse side effects, and spatial misalignment between NCCT and CECT images introduces dose calculation errors. A promising solution is to synthesize CECT images from NCCT scans, which can improve the visibility of organs and abnormalities and thereby enable more effective multi-organ segmentation in NCCT images. However, existing methods neglect the differences between tissues induced by CA and cannot synthesize the details of organ edges and blood vessels. To address these issues, we propose a contrast-enhanced knowledge and anatomical perception network (CKAP-Net) for NCCT-to-CECT synthesis. CKAP-Net leverages a contrast-enhanced knowledge learning network to capture both similarities and dissimilarities in domain characteristics attributable to CA. Specifically, a CA-based perceptual loss function is introduced to enhance the synthesis of CA details. Furthermore, we design a multi-scale anatomical perception transformer that exploits multi-scale anatomical information from NCCT images, enabling the precise synthesis of tissue details. CKAP-Net is evaluated on a multi-center abdominal NCCT-CECT dataset, a head and neck NCCT-CECT dataset, and an NCMRI-CEMRI dataset. It achieves an MAE of 25.96 ± 2.64, an SSIM of 0.855 ± 0.017, and a PSNR of 32.60 ± 0.02 for CECT synthesis, and a DSC of 81.21 ± 4.44 for segmentation on the internal dataset. Extensive experiments demonstrate that CKAP-Net outperforms state-of-the-art CA synthesis methods and generalizes better across datasets.
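The CA-based perceptual loss is described only at a high level in the abstract. As a rough, hedged illustration of the general idea, the sketch below computes a standard feature-space (perceptual) loss between a synthesized and a real CECT slice using pretrained VGG16 features; the layer choice, weighting, and any contrast-agent-specific masking are assumptions, not the authors' implementation.

```python
# Minimal sketch of a feature-space (perceptual) loss, assuming a generic
# VGG16 feature extractor; the paper's CA-based variant presumably adds
# contrast-agent-specific weighting that is not reproduced here.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self, layer_idx=16):
        super().__init__()
        # Freeze a truncated VGG16 as the fixed feature extractor.
        features = vgg16(weights="IMAGENET1K_V1").features[:layer_idx]
        for p in features.parameters():
            p.requires_grad = False
        self.features = features.eval()
        self.l1 = nn.L1Loss()

    def forward(self, synthetic, real):
        # CT slices are single-channel; tile to 3 channels for VGG.
        synthetic = synthetic.repeat(1, 3, 1, 1)
        real = real.repeat(1, 3, 1, 1)
        return self.l1(self.features(synthetic), self.features(real))

# Usage: loss = PerceptualLoss()(fake_cect, real_cect) on (N, 1, H, W) tensors.
```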
2. Iftikhar M, Saqib M, Zareen M, Mumtaz H. Artificial intelligence: revolutionizing robotic surgery: review. Ann Med Surg (Lond) 2024; 86:5401-5409. [PMID: 39238994 PMCID: PMC11374272 DOI: 10.1097/ms9.0000000000002426]
Abstract
Robotic surgery, known for its minimally invasive techniques and computer-controlled robotic arms, has revolutionized modern medicine by providing improved dexterity, visualization, and tremor reduction compared to traditional methods. The integration of artificial intelligence (AI) into robotic surgery has further advanced surgical precision, efficiency, and accessibility. This paper examines the current landscape of AI-driven robotic surgical systems, detailing their benefits, limitations, and future prospects. Initially, AI applications in robotic surgery focused on automating tasks like suturing and tissue dissection to enhance consistency and reduce surgeon workload. Present AI-driven systems incorporate functionalities such as image recognition, motion control, and haptic feedback, allowing real-time analysis of surgical field images and optimizing instrument movements for surgeons. The advantages of AI integration include enhanced precision, reduced surgeon fatigue, and improved safety. However, challenges such as high development costs, reliance on data quality, and ethical concerns about autonomy and liability hinder widespread adoption. Regulatory hurdles and workflow integration also present obstacles. Future directions for AI integration in robotic surgery include enhancing autonomy, personalizing surgical approaches, and refining surgical training through AI-powered simulations and virtual reality. Overall, AI integration holds promise for advancing surgical care, with potential benefits including improved patient outcomes and increased access to specialized expertise. Addressing challenges and promoting responsible adoption are essential for realizing the full potential of AI-driven robotic surgery.
3. Bellos T, Manolitsis I, Katsimperis S, Juliebø-Jones P, Feretzakis G, Mitsogiannis I, Varkarakis I, Somani BK, Tzelves L. Artificial Intelligence in Urologic Robotic Oncologic Surgery: A Narrative Review. Cancers (Basel) 2024; 16:1775. [PMID: 38730727 PMCID: PMC11083167 DOI: 10.3390/cancers16091775]
Abstract
With the rapid increase in computing power over the past two decades, machine learning techniques have been applied in many sectors of daily life, and machine learning in therapeutic settings is also gaining popularity. We analysed current studies on machine learning in robotic urologic surgery, searching PubMed/Medline and Google Scholar up to December 2023 with the terms "urologic surgery", "artificial intelligence", "machine learning", "neural network", "automation", and "robotic surgery". Automatic preoperative imaging, intraoperative anatomy matching, and bleeding prediction have been major areas of focus. Early artificial intelligence (AI) therapeutic outcomes are promising. Robot-assisted surgery provides precise telemetry data and a cutting-edge viewing console with which to analyse and improve AI integration in surgery. Machine learning enhances surgical skill feedback, procedure effectiveness, surgical guidance, and postoperative prediction. Tension sensors on robotic arms and augmented reality can further improve surgery by providing real-time organ motion monitoring, improving precision and accuracy. As datasets grow and electronic health records are used more widely, these technologies will become more effective and useful. AI in robotic surgery is intended to improve surgical training and experience; both aim at the precision needed to improve surgical care. AI in "master-slave" robotic surgery offers a detailed, step-by-step examination of autonomous robotic treatments.
4. Cellina M, Cè M, Rossini N, Cacioppa LM, Ascenti V, Carrafiello G, Floridi C. Computed Tomography Urography: State of the Art and Beyond. Tomography 2023; 9:909-930. [PMID: 37218935 PMCID: PMC10204399 DOI: 10.3390/tomography9030075]
Abstract
Computed Tomography Urography (CTU) is a multiphase CT examination optimized for imaging the kidneys, ureters, and bladder, complemented by post-contrast excretory phase imaging. Different protocols are available for contrast administration and for image acquisition and timing, each with strengths and limits mainly related to kidney enhancement, ureteral distension and opacification, and radiation exposure. New reconstruction algorithms, such as iterative and deep-learning-based reconstruction, have dramatically improved image quality while reducing radiation exposure. Dual-Energy Computed Tomography also has an important role in this type of examination, enabling renal stone characterization, synthetic unenhanced phases to reduce radiation dose, and iodine maps for better interpretation of renal masses. We also describe new artificial intelligence applications for CTU, focusing on radiomics to predict tumor grading and patient outcome for a personalized therapeutic approach. In this narrative review, we provide a comprehensive overview of CTU, from traditional to the newest acquisition techniques and reconstruction algorithms, together with the possibilities of advanced image interpretation, to provide an up-to-date guide for radiologists who want to better understand this technique.
5. Alzu'bi D, Abdullah M, Hmeidi I, AlAzab R, Gharaibeh M, El-Heis M, Almotairi KH, Forestiero A, Hussein AM, Abualigah L. Kidney Tumor Detection and Classification Based on Deep Learning Approaches: A New Dataset in CT Scans. J Healthc Eng 2022; 2022:3861161. [PMID: 37323471 PMCID: PMC10266909 DOI: 10.1155/2022/3861161]
Abstract
Kidney tumor (KT) is one of the diseases affecting our society and is the seventh most common tumor in both men and women worldwide. Early detection of KT has significant benefits in reducing death rates, enabling preventive measures that mitigate its effects, and overcoming the tumor. Compared to the tedious and time-consuming traditional diagnosis, automatic detection algorithms based on deep learning (DL) can save diagnosis time, improve test accuracy, reduce costs, and reduce the radiologist's workload. In this paper, we present detection models for diagnosing the presence of KTs in computed tomography (CT) scans. For detecting and classifying KT, we propose 2D-CNN models: three models for KT detection, namely a 2D convolutional neural network with six layers (CNN-6), a ResNet50 with 50 layers, and a VGG16 with 16 layers, and a fourth model for KT classification, a 2D convolutional neural network with four layers (CNN-4). In addition, a novel dataset was collected from the King Abdullah University Hospital (KAUH), consisting of 8,400 images from 120 adult patients who underwent CT scans for suspected kidney masses. The dataset was divided into 80% for training and 20% for testing. The accuracy of the three detection models (CNN-6, ResNet50, and VGG16) reached 97%, 96%, and 60%, respectively, while the accuracy of the CNN-4 classification model reached 92%. Our models achieved promising results; they enhance the diagnosis of patient conditions with high accuracy, reduce the radiologist's workload, and provide a tool that can automatically assess the condition of the kidneys, reducing the risk of misdiagnosis. Furthermore, improved quality of healthcare service and early detection can change the disease's course and preserve the patient's life.
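The "CNN-6" detection model is described above only by its depth. The sketch below is a plausible, hedged reconstruction of a small six-layer 2D CNN slice classifier in PyTorch; the layer widths, kernel sizes, and input resolution are assumptions, not the authors' architecture.

```python
# Hypothetical six-layer 2D CNN for slice-level kidney-tumor detection
# (binary: tumor vs. no tumor); widths and input size are illustrative only.
import torch
import torch.nn as nn

class CNN6(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, num_classes),  # 4 conv + 2 linear = 6 learned layers
        )

    def forward(self, x):  # x: (N, 1, H, W) CT slices
        return self.head(self.body(x))

# Usage: logits = CNN6()(torch.randn(4, 1, 224, 224))
```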
6. Radiology Imaging Scans for Early Diagnosis of Kidney Tumors: A Review of Data Analytics-Based Machine Learning and Deep Learning Approaches. Big Data Cogn Comput 2022. [DOI: 10.3390/bdcc6010029]
Abstract
Many disease types exist in world communities that can be explained by people's lifestyles or by the economic, social, genetic, and other factors of the country of residence. Recently, most research has focused on studying common diseases in the population in order to reduce death risks, choose the best treatment procedure, and enhance the level of community healthcare. Kidney disease is one of the common diseases affecting our societies; in particular, kidney tumors (KT) are the 10th most prevalent tumor for men and women worldwide. Overall, the lifetime likelihood of developing a kidney tumor is about 1 in 466 (2.02 percent) for males and around 1 in 80 (1.03 percent) for females. Still, more research is needed on new, early, and innovative diagnostic methods and on finding appropriate treatments for KT. Compared to the tedious and time-consuming traditional diagnosis, automatic detection algorithms based on machine learning can save diagnosis time, improve test accuracy, and reduce costs. Previous studies have shown that deep learning can handle complex tasks in the diagnosis, segmentation, and classification of kidney tumors, one of the most malignant tumors. The goals of this review of deep learning in radiology imaging are to summarize what has already been accomplished, determine the techniques used by researchers in previous years to diagnose kidney tumors through medical imaging, and identify promising future avenues, whether in applications or in technological developments, as well as to identify common problems, describe ways to expand the data set, summarize the knowledge and best practices, and determine remaining challenges and future directions.
7. Doyle PW, Kavoussi NL. Machine learning applications to enhance patient specific care for urologic surgery. World J Urol 2021; 40:679-686. [PMID: 34047826 DOI: 10.1007/s00345-021-03738-x]
Abstract
PURPOSE As computational power has improved over the past 20 years, the application of machine learning methods has become more prevalent in daily life, and there is increasing interest in their clinical application. We sought to review the current literature on machine learning applications for patient-specific urologic surgical care. METHODS We performed a broad search of the current literature via the PubMed-Medline and Google Scholar databases up to December 2020, using the search terms "urologic surgery", "artificial intelligence", "machine learning", "neural network", and "automation". RESULTS The focus of machine learning applications for patient counseling is disease-specific. For stone disease, multiple studies focused on predicting the stone-free rate from preoperative clinical and imaging data. For kidney cancer, many studies focused on advanced imaging analysis to predict renal mass pathology preoperatively. Machine learning applications in prostate cancer can support treatment counseling as well as prediction of disease-specific outcomes. For bladder cancer, the reviewed studies focus on staging via imaging to better counsel patients towards neoadjuvant chemotherapy. Additionally, there have been many efforts to automatically segment preoperative imaging and match it with intraoperative anatomy. CONCLUSION Machine learning techniques can be implemented to assist patient-centered surgical care and increase patient engagement in decision-making. As data sets improve and expand, especially with the transition to large-scale EHR usage, these tools will improve in efficacy and be utilized more frequently.
8. Machine learning based quantitative texture analysis of CT images for diagnosis of renal lesions. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102311]
9. Automated detection of kidney abnormalities using multi-feature fusion convolutional neural networks. Knowl Based Syst 2020. [DOI: 10.1016/j.knosys.2020.105873]
10. Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3]
11. Borkmann S, Geterud K, Lundstam S, Hellström M. Frequency and radiological characteristics of previously overlooked renal cell carcinoma. Acta Radiol 2019; 60:1348-1359. [PMID: 30700094 DOI: 10.1177/0284185118823362]
12. Farzaneh N, Reza Soroushmehr SM, Patel H, Wood A, Gryak J, Fessell D, Najarian K. Automated Kidney Segmentation for Traumatic Injured Patients through Ensemble Learning and Active Contour Modeling. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:3418-3421. [PMID: 30441122 DOI: 10.1109/embc.2018.8512967]
Abstract
Traumatic abdominal injury can lead to multiple complications, including laceration of major organs such as the kidneys. Contrast-enhanced computed tomography (CT) is the primary imaging modality for evaluating kidney injury. However, traditional visual examination of CT scans is time-consuming, non-quantitative, prone to human error, and costly. In this work, we propose a kidney segmentation method that combines machine learning and active contour modeling: we first detect an initialization mask inside the kidney and then evolve its boundary. The model is developed and evaluated specifically on trauma cases. Our experimental results show an average recall of 92.6% and an average Dice similarity of 88.9%.
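The method couples a learned initialization mask with active contour evolution. As a hedged stand-in for the contour-evolution step only, the sketch below uses scikit-image's morphological Chan-Vese implementation (a generic active contour formulation, not necessarily the one used in the paper) seeded by a precomputed initialization mask; the ensemble-learning step that produces the seed is not reproduced.

```python
# Sketch: refine an initial kidney mask by active contour evolution.
# morphological_chan_vese is a generic stand-in; the paper's exact energy
# formulation and its ML-based initialization are not reproduced here.
import numpy as np
from skimage.segmentation import morphological_chan_vese

def refine_kidney_mask(ct_slice: np.ndarray, init_mask: np.ndarray) -> np.ndarray:
    """ct_slice: 2D array of HU values; init_mask: boolean seed from a classifier."""
    # Normalize a soft-tissue HU window to [0, 1] before contour evolution.
    windowed = np.clip(ct_slice, -100, 300)
    windowed = (windowed + 100) / 400.0
    return morphological_chan_vese(
        windowed,
        num_iter=100,              # number of evolution steps
        init_level_set=init_mask,  # classifier-derived seed region
        smoothing=2,               # curvature regularization
    )
```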
13. Yap FY, Hwang DH, Cen SY, Varghese BA, Desai B, Quinn BD, Gupta MN, Rajarubendra N, Desai MM, Aron M, Liang G, Aron M, Gill IS, Duddalwar VA. Quantitative Contour Analysis as an Image-based Discriminator Between Benign and Malignant Renal Tumors. Urology 2018; 114:121-127. [DOI: 10.1016/j.urology.2017.12.018]
14. Blau N, Klang E, Kiryati N, Amitai M, Portnoy O, Mayer A. Fully automatic detection of renal cysts in abdominal CT scans. Int J Comput Assist Radiol Surg 2018; 13:957-966. [DOI: 10.1007/s11548-018-1726-6]
15. Gui L, Yang X. Automatic renal lesion segmentation in ultrasound images based on saliency features, improved LBP, and an edge indicator under level set framework. Med Phys 2017; 45:223-235. [PMID: 29131363 DOI: 10.1002/mp.12661]
Abstract
PURPOSE Segmentation of lesions in ultrasound images is widely used for preliminary diagnosis. In this paper, we develop an automatic segmentation algorithm for multiple types of lesions in ultrasound images. The proposed method detects and segments lesions automatically and generates accurate segmentation results for lesion regions. METHODS In the detection step, two saliency detection frameworks that adopt global image information are designed to capture the differences between normal and abnormal organs, as well as those between lesions and the surrounding normal tissues. In the segmentation step, three types of local information, i.e., image intensity, improved local binary pattern (LBP) features, and an edge indicator, are embedded into a modified level set framework to carry out the segmentation task. RESULTS Cyst and carcinoma regions in ultrasound images of human kidneys can be automatically detected and segmented with the proposed method. Its efficiency and accuracy are validated by quantitative evaluations and comparisons with three well-recognized segmentation methods. Specifically, the average precision and Dice coefficient of the proposed method are 95.33% and 90.16% for renal cysts and 94.22% and 91.13% for renal carcinomas, respectively, both higher than those of the three compared methods. CONCLUSIONS The proposed method can efficiently detect and segment renal lesions in ultrasound images. Moreover, because it exploits the differences between normal and abnormal organs, as well as those between lesions and the surrounding normal tissues, it could be extended to lesions in other organs on ultrasound and to lesions in medical images of other modalities.
16. Liu J, Jung H, Dubra A, Tam J. Automated Photoreceptor Cell Identification on Nonconfocal Adaptive Optics Images Using Multiscale Circular Voting. Invest Ophthalmol Vis Sci 2017; 58:4477-4489. [PMID: 28873173 PMCID: PMC5586244 DOI: 10.1167/iovs.16-21003]
Abstract
Purpose Adaptive optics scanning light ophthalmoscopy (AOSLO) has enabled quantification of the photoreceptor mosaic in the living human eye using metrics such as cell density and average spacing. These rely on the identification of individual cells. Here, we demonstrate a novel approach for computer-aided identification of cone photoreceptors on nonconfocal split detection AOSLO images. Methods Algorithms for identification of cone photoreceptors were developed, based on multiscale circular voting (MSCV) in combination with a priori knowledge that split detection images resemble Nomarski differential interference contrast images, in which dark and bright regions are present on the two sides of each cell. The proposed algorithm locates dark and bright region pairs, iteratively refining the identification across multiple scales. Identification accuracy was assessed in data from 10 subjects by comparing automated identifications with manual labeling, followed by computation of density and spacing metrics for comparison to histology and published data. Results There was good agreement between manual and automated cone identifications with overall recall, precision, and F1 score of 92.9%, 90.8%, and 91.8%, respectively. On average, computed density and spacing values using automated identification were within 10.7% and 11.2% of the expected histology values across eccentricities ranging from 0.5 to 6.2 mm. There was no statistically significant difference between MSCV-based and histology-based density measurements (P = 0.96, Kolmogorov-Smirnov 2-sample test). Conclusions MSCV can accurately detect cone photoreceptors on split detection images across a range of eccentricities, enabling quick, objective estimation of photoreceptor mosaic metrics, which will be important for future clinical trials utilizing adaptive optics.
17. Quantitative computer-aided diagnostic algorithm for automated detection of peak lesion attenuation in differentiating clear cell from papillary and chromophobe renal cell carcinoma, oncocytoma, and fat-poor angiomyolipoma on multiphasic multidetector computed tomography. Abdom Radiol (NY) 2017; 42:1919-1928. [PMID: 28280876 DOI: 10.1007/s00261-017-1095-6]
Abstract
OBJECTIVE To evaluate the performance of a novel, quantitative computer-aided diagnostic (CAD) algorithm on four-phase multidetector computed tomography (MDCT) that detects peak lesion attenuation to enable differentiation of clear cell renal cell carcinoma (ccRCC) from chromophobe RCC (chRCC), papillary RCC (pRCC), oncocytoma, and fat-poor angiomyolipoma (fp-AML). MATERIALS AND METHODS We queried our clinical databases to obtain a cohort of histologically proven renal masses with preoperative MDCT in four phases [unenhanced (U), corticomedullary (CM), nephrographic (NP), and excretory (E)]. A whole-lesion 3D contour was obtained in all four phases. The CAD algorithm determined a region of interest (ROI) of peak lesion attenuation within the 3D lesion contour. For comparison, a manual ROI was separately placed in the most enhancing portion of the lesion by visual inspection as a reference standard, and in uninvolved renal cortex. Relative lesion attenuation for both the CAD and manual methods was obtained by normalizing the peak lesion attenuation ROI (and the reference-standard manually placed ROI) to uninvolved renal cortex with the formula [(peak lesion attenuation ROI - cortex ROI)/cortex ROI] × 100%. ROC analysis and area under the curve (AUC) were used to assess diagnostic performance, and Bland-Altman analysis was used to compare peak ROIs between the CAD and manual methods. RESULTS The study cohort comprised 200 patients with 200 unique renal masses: 106 (53%) ccRCCs, 32 (16%) oncocytomas, 18 (9%) chRCCs, 34 (17%) pRCCs, and 10 (5%) fp-AMLs. In the CM phase, the CAD-derived ROI enabled discrimination of ccRCC from chRCC, pRCC, oncocytoma, and fp-AML with AUCs of 0.850 (95% CI 0.732-0.968), 0.959 (95% CI 0.930-0.989), 0.792 (95% CI 0.716-0.869), and 0.825 (95% CI 0.703-0.948), respectively. On Bland-Altman analysis, there was excellent agreement between the CAD and manual methods, with mean differences between 14 and 26 HU in each phase. CONCLUSION A novel, quantitative CAD algorithm enabled robust peak HU lesion detection and discrimination of ccRCC from other renal lesions, with performance similar to the manual method.
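The normalization reported in the abstract can be made concrete in a few lines. The sketch below simply applies the stated formula [(peak lesion attenuation ROI - cortex ROI)/cortex ROI] × 100%; the example HU values are illustrative only and not taken from the study.

```python
# Relative lesion attenuation as defined in the abstract:
# [(peak lesion attenuation ROI - cortex ROI) / cortex ROI] * 100%
def relative_lesion_attenuation(peak_lesion_hu: float, cortex_hu: float) -> float:
    return (peak_lesion_hu - cortex_hu) / cortex_hu * 100.0

# Illustrative values only (not study data): a lesion peaking at 180 HU against
# 220 HU cortex in the corticomedullary phase gives about -18.2%.
print(relative_lesion_attenuation(180.0, 220.0))  # -> -18.18...
```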
18. Takahashi R, Kajikawa Y. Computer-aided diagnosis: A survey with bibliometric analysis. Int J Med Inform 2017; 101:58-67. [DOI: 10.1016/j.ijmedinf.2017.02.004]
19. 3D Kidney Segmentation from Abdominal Images Using Spatial-Appearance Models. Comput Math Methods Med 2017; 2017:9818506. [PMID: 28280519 PMCID: PMC5322574 DOI: 10.1155/2017/9818506]
Abstract
Kidney segmentation is an essential step in developing any noninvasive computer-assisted diagnostic system for renal function assessment. This paper introduces an automated framework for 3D kidney segmentation from dynamic computed tomography (CT) images that integrates discriminative features from the current and prior CT appearances into a random forest classification approach. To account for inhomogeneities in CT images, we employ discriminative features extracted from a higher-order spatial model and an adaptive shape model, in addition to the first-order CT appearance. To model the interactions between CT voxels, the higher-order spatial model adds triple and quad clique families to the traditional pairwise clique family. The kidney shape prior model is built from a set of training CT data and is updated during segmentation using not only region labels but also voxel appearances at neighboring spatial locations. The framework's performance has been evaluated on in vivo dynamic CT data collected from 20 subjects, comprising multiple 3D scans acquired before and after contrast medium administration. Quantitative comparison between manually and automatically segmented kidney contours using Dice similarity, percentage volume differences, and 95th-percentile bidirectional Hausdorff distances confirms the high accuracy of our approach.
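The framework classifies voxels with a random forest over appearance, spatial, and shape features. The hedged sketch below shows only the generic voxel-wise random forest step with scikit-learn, with a stand-in feature matrix; the higher-order spatial and adaptive shape features described above are assumed to be precomputed and are not reproduced.

```python
# Generic voxel-wise random forest classification, as a stand-in for the
# paper's feature set; `features` is assumed to be (n_voxels, n_features)
# with appearance/spatial/shape descriptors already computed per voxel.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_voxel_classifier(features: np.ndarray, kidney_labels: np.ndarray):
    """kidney_labels: 1 for kidney voxels, 0 for background."""
    clf = RandomForestClassifier(n_estimators=100, max_depth=20, n_jobs=-1)
    clf.fit(features, kidney_labels)
    return clf

def segment_volume(clf, features: np.ndarray, volume_shape) -> np.ndarray:
    # Probability of the kidney class per voxel, reshaped back to the CT grid.
    prob = clf.predict_proba(features)[:, 1]
    return prob.reshape(volume_shape) > 0.5
```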
20.
Abstract
OBJECTIVE Automated analysis of abdominal CT has advanced markedly over just the last few years. Fully automated assessment of organs, lymph nodes, adipose tissue, muscle, bowel, spine, and tumors are some examples where tremendous progress has been made. Computer-aided detection of lesions has also improved dramatically. CONCLUSION This article reviews the progress and provides insights into what is in store in the near future for automated analysis for abdominal CT, ultimately leading to fully automated interpretation.