1. Manogaran N, Panabakam N, Selvaraj D, Seerangan K, Khan F, Selvarajan S. An efficient patient's response predicting system using multi-scale dilated ensemble network framework with optimization strategy. Sci Rep 2025;15:15713. [PMID: 40325044] [DOI: 10.1038/s41598-025-00401-y]
Abstract
Forecasting a patient's response to radiotherapy, and the likelihood of harmful long-term health effects, would considerably enhance individualized treatment planning. Continued exposure to radiation can lead to cardiovascular disease and pulmonary fibrosis. Convolutional Neural Networks (CNNs) are widely used to forecast the response of patients to chemotherapy. Radiotherapy is widely used to treat cancer, but some patients suffer from side effects, so the toxicity of radiotherapy and chemotherapy should be estimated. A patient response prediction system is essential for validating a patient's improvement during treatment. In this paper, a Deep Learning (DL)-based patient response prediction system is developed to predict the response of patients, estimate prognosis and inform treatment plans at an early stage. The necessary data for response prediction are collected manually and then passed through a feature selection stage. The Repeated Exploration and Exploitation-based Coati Optimization Algorithm (REE-COA) is employed to select the features. The selected weighted features are input into the prediction process. Here, the prediction is performed by a Multi-scale Dilated Ensemble Network (MDEN), which integrates Long Short-Term Memory (LSTM), Recurrent Neural Network (RNN) and One-dimensional Convolutional Neural Network (1DCNN) branches. The final prediction scores are averaged to obtain an effective MDEN-based model for predicting the patient's response. The proposed MDEN-based patient response prediction scheme is 0.79%, 2.98%, 2.21% and 1.40% better than RAN, RNN, LSTM and 1DCNN, respectively. Hence, the proposed system minimizes error rates and enhances accuracy using a weight optimization technique.
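A minimal sketch of the score-averaging step described in this abstract, assuming three already-trained branch models whose per-patient response probabilities are fused with equal weights (the array inputs and threshold are illustrative, not the authors' MDEN implementation):

```python
import numpy as np

def ensemble_average(scores_lstm, scores_rnn, scores_cnn, threshold=0.5):
    """Average per-branch response scores and threshold into a binary prediction.

    Each argument is a 1-D array of predicted response probabilities for the
    same patients; the equal-weight average mirrors the score-averaging step
    described in the abstract.
    """
    scores = np.stack([scores_lstm, scores_rnn, scores_cnn], axis=0)
    fused = scores.mean(axis=0)              # equal-weight ensemble score
    return fused, (fused >= threshold).astype(int)

# Toy usage with made-up branch outputs for three patients.
fused, labels = ensemble_average(
    np.array([0.82, 0.40, 0.65]),
    np.array([0.78, 0.35, 0.60]),
    np.array([0.90, 0.30, 0.55]),
)
print(fused, labels)
```

In a fuller reproduction, a weighted average with weights tuned by the REE-COA optimizer would replace the equal weights used here.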
Affiliation(s)
- Nalini Manogaran
- Department of CSE, S.A. Engineering College (Autonomous), Chennai, 600077, Tamil Nadu, India
- Nirupama Panabakam
- Department of CSE, VEMU Institute of Technology, Chitoor, 517112, Andhra Pradesh, India
- Durai Selvaraj
- Department of CSE, School of Computing, Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, Chennai, 600062, Tamil Nadu, India
- Koteeswaran Seerangan
- Department of CSE (AI and ML), S.A. Engineering College (Autonomous), Chennai, 600077, Tamil Nadu, India
- Firoz Khan
- Centre for Information and Communication Sciences, Ball State University, Muncie, USA
- Shitharth Selvarajan
- Department of Computer Science, Kebri Dehar University, 250, Kebri Dehar, Ethiopia.
- Department of Computer Science and Engineering, Chennai Institute of Technology, Chennai, India.
- Centre for Research Impact and Outcome, Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, 140401, Punjab, India.
2. Wu W, Duan S, Sun Y, Yu Y, Liu D, Peng D. Deep fuzzy physics-informed neural networks for forward and inverse PDE problems. Neural Netw 2025;181:106750. [PMID: 39427411] [DOI: 10.1016/j.neunet.2024.106750]
Abstract
As a grid-independent approach for solving partial differential equations (PDEs), Physics-Informed Neural Networks (PINNs) have garnered significant attention due to their unique capability to learn simultaneously from both data and the governing physical equations. Existing PINN methods assume that the data are stable and reliable, but data obtained from commercial simulation software are often ambiguous and inaccurate, which negatively affects the use of PINNs for solving forward and inverse PDE problems. To overcome these problems, this paper proposes Deep Fuzzy Physics-Informed Neural Networks (FPINNs) that explore the uncertainty in the data. Specifically, to capture the uncertainty behind the data, FPINNs learn a fuzzy representation through a fuzzy membership function layer and a fuzzy rule layer. A deep neural network then learns a neural representation, and the fuzzy representation is integrated with the neural representation. Finally, the residual of the physical equation and the data error are treated as the two components of the loss function, guiding the network to optimize towards adherence to the physical laws for accurate prediction of the physical field. Extensive experimental results show that FPINNs outperform the comparative methods in solving forward and inverse PDE problems on four widely used datasets. The demo code will be released at https://github.com/siyuancncd/FPINNs.
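Setting the fuzzy layers aside, the backbone of any PINN is a loss that sums a data-misfit term and a PDE-residual term. The following PyTorch sketch illustrates that composite loss for a toy diffusion equation u_t = ν·u_xx; the network size, the choice of PDE and the tensor shapes are assumptions for illustration, not the FPINNs code.

```python
import torch

# Simple fully connected network u_theta(x, t) approximating the PDE solution.
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pinn_loss(xt_data, u_data, xt_coll, nu=0.1):
    """Composite PINN loss: data misfit + residual of u_t = nu * u_xx."""
    # Data term: fit (possibly noisy) observations.
    loss_data = torch.mean((net(xt_data) - u_data) ** 2)

    # Physics term: PDE residual at collocation points.
    xt = xt_coll.clone().requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, 0:1], grads[:, 1:2]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, 0:1]
    loss_phys = torch.mean((u_t - nu * u_xx) ** 2)

    return loss_data + loss_phys

# Toy usage with random tensors standing in for observations and collocation points.
xt_data, u_data = torch.rand(16, 2), torch.rand(16, 1)
xt_coll = torch.rand(64, 2)
loss = pinn_loss(xt_data, u_data, xt_coll)
loss.backward()
```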
Affiliation(s)
- Wenyuan Wu
- College of Computer Science, Sichuan University, Chengdu, 610065, China.
- Siyuan Duan
- College of Computer Science, Sichuan University, Chengdu, 610065, China.
- Yuan Sun
- College of Computer Science, Sichuan University, Chengdu, 610065, China.
- Yang Yu
- Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu, 610213, China; National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha, 410073, China.
- Dong Liu
- Science and Technology on Reactor System Design Technology Laboratory, Nuclear Power Institute of China, Chengdu, 610213, China.
- Dezhong Peng
- College of Computer Science, Sichuan University, Chengdu, 610065, China.
3. Salari E, Wang J, Wynne JF, Chang C, Wu Y, Yang X. Artificial intelligence-based motion tracking in cancer radiotherapy: A review. J Appl Clin Med Phys 2024;25:e14500. [PMID: 39194360] [PMCID: PMC11540048] [DOI: 10.1002/acm2.14500]
Abstract
Radiotherapy aims to deliver a prescribed dose to the tumor while sparing neighboring organs at risk (OARs). Increasingly complex treatment techniques such as volumetric modulated arc therapy (VMAT), stereotactic radiosurgery (SRS), stereotactic body radiotherapy (SBRT), and proton therapy have been developed to deliver doses more precisely to the target. While such technologies have improved dose delivery, the implementation of intra-fraction motion management to verify tumor position at the time of treatment has become increasingly relevant. Artificial intelligence (AI) has recently demonstrated great potential for real-time tracking of tumors during treatment. However, AI-based motion management faces several challenges, including bias in training data, poor transparency, difficult data collection, complex workflows and quality assurance, and limited sample sizes. This review presents the AI algorithms used for chest, abdomen, and pelvic tumor motion management/tracking for radiotherapy and provides a literature summary on the topic. We will also discuss the limitations of these AI-based studies and propose potential improvements.
Affiliation(s)
- Elahheh Salari
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Jing Wang
- Radiation Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA
- Chih-Wei Chang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
- Yizhou Wu
- School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia, USA
- Xiaofeng Yang
- Department of Radiation Oncology, Emory University, Atlanta, Georgia, USA
4. Shao HC, Li Y, Wang J, Jiang S, Zhang Y. Real-time liver motion estimation via deep learning-based angle-agnostic X-ray imaging. Med Phys 2023;50:6649-6662. [PMID: 37922461] [PMCID: PMC10629841] [DOI: 10.1002/mp.16691]
Abstract
BACKGROUND Real-time liver imaging is challenged by the short imaging time (within hundreds of milliseconds) needed to meet the temporal constraint posed by rapid patient breathing, resulting in extreme under-sampling for the desired 3D imaging. Deep learning (DL)-based real-time imaging/motion estimation techniques are emerging as promising solutions, which can use a single X-ray projection to estimate 3D moving liver volumes via solved deformable motion. However, such techniques were mostly developed for a specific, fixed X-ray projection angle, making them impractical for verifying and guiding arc-based radiotherapy with continuous gantry rotation. PURPOSE To enable deformable motion estimation and 3D liver imaging from individual X-ray projections acquired at arbitrary X-ray scan angles, and to further improve the accuracy of single X-ray-driven motion estimation. METHODS We developed a DL-based method, X360, to estimate the deformable motion of the liver boundary using an X-ray projection acquired at an arbitrary gantry angle (angle-agnostic). X360 incorporated patient-specific prior information from planning 4D-CTs to address the under-sampling issue, and adopted a deformation-driven approach to deform a prior liver surface mesh to new meshes that reflect real-time motion. The liver mesh motion is solved via motion-related image features encoded in the arbitrary-angle X-ray projection, through a sequential combination of rigid and deformable registration modules. To achieve angle agnosticism, a geometry-informed X-ray feature pooling layer was developed to allow X360 to extract angle-dependent image features for motion estimation. As a liver boundary motion solver, X360 was also combined with previously developed, DL-based optical surface imaging and biomechanical modeling techniques for intra-liver motion estimation and tumor localization. RESULTS With geometry-aware feature pooling, X360 can solve the liver boundary motion from an arbitrary-angle X-ray projection. Evaluated on a set of 10 liver patient cases, the mean (±s.d.) 95-percentile Hausdorff distance between the solved liver boundary and the "ground truth" decreased from 10.9 (±4.5) mm (before motion estimation) to 5.5 (±1.9) mm (X360). When X360 was further integrated with surface imaging and biomechanical modeling for liver tumor localization, the mean (±s.d.) center-of-mass localization error of the liver tumors decreased from 9.4 (±5.1) mm to 2.2 (±1.7) mm. CONCLUSION X360 can achieve fast and robust liver boundary motion estimation from arbitrary-angle X-ray projections for real-time imaging guidance. Serving as a surface motion solver, X360 can be integrated into a combined framework to achieve accurate, real-time, and marker-less liver tumor localization.
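The 95-percentile Hausdorff distance used as the evaluation metric above can be computed from two sampled surface point sets; the snippet below is a generic NumPy/SciPy sketch of one common convention (pooled directed distances), not the authors' evaluation pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets.

    points_a, points_b: (N, 3) and (M, 3) arrays of surface points in mm.
    For each point, the distance to the nearest point of the other set is
    computed; the 95th percentile of the pooled distances is returned.
    """
    d_ab, _ = cKDTree(points_b).query(points_a)   # a -> nearest b
    d_ba, _ = cKDTree(points_a).query(points_b)   # b -> nearest a
    return np.percentile(np.concatenate([d_ab, d_ba]), 95)

# Toy usage: two jittered samplings of the same synthetic surface.
rng = np.random.default_rng(0)
surf = rng.uniform(0, 100, size=(500, 3))
print(hd95(surf, surf + rng.normal(0, 2, size=surf.shape)))
```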
Affiliation(s)
- Hua-Chieh Shao
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Yunxiang Li
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Jing Wang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- Steve Jiang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
- You Zhang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, Dallas, Texas, USA
- The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Dallas, Texas, USA
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, Texas, USA
5. Kierner S, Kucharski J, Kierner Z. Taxonomy of hybrid architectures involving rule-based reasoning and machine learning in clinical decision systems: A scoping review. J Biomed Inform 2023;144:104428. [PMID: 37355025] [DOI: 10.1016/j.jbi.2023.104428]
Abstract
BACKGROUND As the application of Artificial Intelligence (AI) technologies increases in the healthcare sector, the industry faces a need to combine medical knowledge, often expressed as clinical rules, with advances in machine learning (ML), which offer high prediction accuracy at the expense of transparency of decision making. PURPOSE This paper reviews the present literature, identifies hybrid architecture patterns that incorporate rules and machine learning, and evaluates the rationale behind their selection to inform future development and research on the design of transparent and precise clinical decision systems. METHODS PubMed, IEEE Xplore, and Google Scholar were queried for papers from 1992 to 2022, with the keywords: "clinical decision system", "hybrid clinical architecture", "machine learning and clinical rules". Excluded articles did not use both ML and rules or did not provide any explanation of the employed architecture. A proposed taxonomy was used to organize the results, analyze them, and depict them in graphical and tabular form. Two researchers, one with expertise in rule-based systems and another in ML, reviewed the identified papers and discussed the work to minimize bias, and a third re-reviewed the work to ensure consistency of reporting. RESULTS The authors screened 957 papers and reviewed 71 that met their criteria. Five distinct architecture archetypes were identified: Rules Embedded in ML architecture (REML, most used), ML pre-processes input data for Rule-Based inference (MLRB), Rule-Based method pre-processes input data for ML prediction (RBML), Rules influence ML training (RMLT), and Parallel Ensemble of Rules and ML (PERML), which was rarely observed in clinical contexts. CONCLUSIONS Most architectures in the reviewed literature prioritize prediction accuracy over explainability and trustworthiness, which has led to more complex embedded approaches. Alternatively, parallel (PERML) architectures may be employed, allowing for a more transparent system that is easier to explain to patients and clinicians. The potential of this approach warrants further research. OTHER A limitation of the study may be that it reviews scientific literature, while algorithms implemented in clinical practice may present different distributions of motivations and implementations of hybrid architectures.
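Of the five archetypes, the parallel ensemble (PERML) is the simplest to illustrate: a rule-based score and an ML score are produced independently and combined transparently. The toy Python sketch below uses entirely hypothetical rules, features and weights; it is not drawn from any reviewed system.

```python
def rule_based_risk(patient):
    """Hypothetical clinical rule: flag high risk if either criterion is met."""
    return 1.0 if patient["age"] > 65 or patient["systolic_bp"] > 180 else 0.0

def ml_risk(patient):
    """Stand-in for a trained ML model returning a risk probability."""
    # A real system would call model.predict_proba(features) here.
    return 0.05 * (patient["age"] - 40) / 10 + 0.3

def perml_decision(patient, w_rule=0.5, w_ml=0.5, threshold=0.5):
    """Parallel Ensemble of Rules and ML: combine both scores transparently."""
    rule_score, ml_score = rule_based_risk(patient), ml_risk(patient)
    combined = w_rule * rule_score + w_ml * ml_score
    return {"rule": rule_score, "ml": ml_score, "combined": combined,
            "high_risk": combined >= threshold}

print(perml_decision({"age": 72, "systolic_bp": 150}))
```

Because both component scores are returned alongside the combined decision, a clinician can see which part of the ensemble drove the outcome, which is the transparency argument made for PERML above.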
Affiliation(s)
- Slawomir Kierner
- Lodz University of Technology, Faculty of Electrical, Electronic, Computer and Control Engineering, 27 Isabella Street, 02116 Boston, MA, USA.
- Jacek Kucharski
- Lodz University of Technology, Faculty of Electrical, Electronic, Computer and Control Engineering, 18/22 Stefanowskiego St., 90-924 Łodź, Poland.
- Zofia Kierner
- University of California, Berkeley College of Letters & Science, Berkeley, CA 94720-1786, USA.
6. Shao HC, Li Y, Wang J, Jiang S, Zhang Y. Real-time liver tumor localization via combined surface imaging and a single x-ray projection. Phys Med Biol 2023;68:065002. [PMID: 36731143] [PMCID: PMC10394117] [DOI: 10.1088/1361-6560/acb889]
Abstract
Objective. Real-time imaging, a building block of real-time adaptive radiotherapy, provides instantaneous knowledge of anatomical motion to drive delivery adaptation to improve patient safety and treatment efficacy. The temporal constraint of real-time imaging (<500 milliseconds) significantly limits the imaging signals that can be acquired, rendering volumetric imaging and 3D tumor localization extremely challenging. Real-time liver imaging is particularly difficult, compounded by the low soft tissue contrast within the liver. We proposed a deep learning (DL)-based framework (Surf-X-Bio) to track 3D liver tumor motion in real-time from a combined optical surface image and a single on-board x-ray projection. Approach. Surf-X-Bio performs mesh-based deformable registration to track/localize liver tumors volumetrically via three steps. First, a DL model was built to estimate liver boundary motion from an optical surface image, using learnt motion correlations between the respiratory-induced external body surface and liver boundary. Second, the residual liver boundary motion estimation error was further corrected by a graph neural network-based DL model, using information extracted from a single x-ray projection. Finally, a biomechanical modeling-driven DL model was applied to solve the intra-liver motion for tumor localization, using the liver boundary motion derived via the prior steps. Main results. Surf-X-Bio demonstrated higher accuracy and better robustness in tumor localization, as compared to surface-image-only and x-ray-only models. With Surf-X-Bio, the mean (±s.d.) 95-percentile Hausdorff distance of the liver boundary from the 'ground truth' decreased from 9.8 (±4.5) mm (before motion estimation) to 2.4 (±1.6) mm. The mean (±s.d.) center-of-mass localization error of the liver tumors decreased from 8.3 (±4.8) to 1.9 (±1.6) mm. Significance. Surf-X-Bio can accurately track liver tumors from combined surface imaging and x-ray imaging. The fast computational speed (<250 milliseconds per inference) allows it to be applied clinically for real-time motion management and adaptive radiotherapy.
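The center-of-mass localization error reported above is straightforward to compute from two binary tumor masks; the following NumPy sketch assumes simple voxel masks and a user-supplied voxel size and is not the Surf-X-Bio evaluation code.

```python
import numpy as np

def com_localization_error(pred_mask, gt_mask, voxel_size=(1.0, 1.0, 1.0)):
    """Center-of-mass localization error (mm) between two 3-D tumor masks."""
    def com(mask):
        idx = np.argwhere(mask)                       # voxel coordinates of the tumor
        return idx.mean(axis=0) * np.asarray(voxel_size)
    return float(np.linalg.norm(com(pred_mask) - com(gt_mask)))

# Toy usage: two small cubes offset by 2 voxels along the third axis.
a = np.zeros((32, 32, 32)); a[10:14, 10:14, 10:14] = 1
b = np.zeros((32, 32, 32)); b[10:14, 10:14, 12:16] = 1
print(com_localization_error(a, b))  # ~2.0 mm for unit voxels
```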
Affiliation(s)
- Hua-Chieh Shao
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Yunxiang Li
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Jing Wang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- Steve Jiang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
- You Zhang
- The Advanced Imaging and Informatics for Radiation Therapy (AIRT) Laboratory, The Medical Artificial Intelligence and Automation (MAIA) Laboratory, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75390, United States of America
7. Lin B, Tan Z, Mo Y, Yang X, Liu Y, Xu B. Intelligent oncology: The convergence of artificial intelligence and oncology. J Natl Cancer Cent 2023;3:83-91. [PMID: 39036310] [PMCID: PMC11256531] [DOI: 10.1016/j.jncc.2022.11.004]
Abstract
With increasingly explored ideologies and technologies for potential applications of artificial intelligence (AI) in oncology, we here describe a holistic and structured concept termed intelligent oncology. Intelligent oncology is defined as a cross-disciplinary specialty which integrates oncology, radiology, pathology, molecular biology, multi-omics and computer sciences, aiming to promote cancer prevention, screening, early diagnosis and precision treatment. The development of intelligent oncology has been facilitated by fast-developing AI technologies such as natural language processing, machine/deep learning, computer vision, and robotic process automation. While the concept and applications of intelligent oncology are still in their infancy, and there are still many hurdles and challenges, we are optimistic that it will play a pivotal role in the future of basic, translational and clinical oncology.
Affiliation(s)
- Bo Lin
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and Chongqing University School of Medicine, Institute of Intelligent Oncology, Chongqing University, China
- Zhibo Tan
- Department of Radiation Oncology, Peking University Shenzhen Hospital, Shenzhen, China
- Yaqi Mo
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and Chongqing University School of Medicine, Institute of Intelligent Oncology, Chongqing University, China
- Xue Yang
- Department of Biochemistry and Molecular Biology, Key Laboratory of Breast Cancer Prevention and Therapy, Ministry of Education, National Cancer Research Center, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Yajie Liu
- Department of Radiation Oncology, Peking University Shenzhen Hospital, Shenzhen, China
- Bo Xu
- Chongqing Key Laboratory of Intelligent Oncology for Breast Cancer, Chongqing University Cancer Hospital and Chongqing University School of Medicine, Institute of Intelligent Oncology, Chongqing University, China
- Department of Biochemistry and Molecular Biology, Key Laboratory of Breast Cancer Prevention and Therapy, Ministry of Education, National Cancer Research Center, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
8. Deep Neuro-Fuzzy System application trends, challenges, and future perspectives: a systematic survey. Artif Intell Rev 2023;56:865-913. [PMID: 35431395] [PMCID: PMC9005344] [DOI: 10.1007/s10462-022-10188-3]
Abstract
Deep neural networks (DNN) have remarkably progressed in applications involving large and complex datasets but have been criticized as a black-box. This downside has recently become a motivation for the research community to pursue the ideas of hybrid approaches, resulting in novel hybrid systems classified as deep neuro-fuzzy systems (DNFS). Studies regarding the implementation of DNFS have rapidly increased in the domains of computing, healthcare, transportation, and finance with high interpretability and reasonable accuracy. However, relatively few survey studies have been found in the literature to provide a comprehensive insight into this domain. Therefore, this study aims to perform a systematic review to evaluate the current progress, trends, arising issues, research gaps, challenges, and future scope related to DNFS studies. A study mapping process was prepared to guide a systematic search for publications related to DNFS published between 2015 and 2020 using five established scientific directories. As a result, a total of 105 studies were identified and critically analyzed to address research questions with the objectives: (i) to understand the concept of DNFS; (ii) to find out DNFS optimization methods; (iii) to visualize the intensity of work carried out in DNFS domain; and (iv) to highlight DNFS application subjects and domains. We believe that this study provides up-to-date guidance for future research in the DNFS domain, allowing for more effective advancement in techniques and processes. The analysis made in this review proves that DNFS-based research is actively growing with a substantial implementation and application scope in the future.
9. Zhang W, Kumar M, Ding W, Li X, Yu J. Variational learning of deep fuzzy theoretic nonparametric model. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.07.029]
10. Shetty MV, D J, Tunga S. Optimized Deformable Model-based Segmentation and Deep Learning for Lung Cancer Classification. J Med Invest 2022;69:244-255. [PMID: 36244776] [DOI: 10.2152/jmi.69.244]
Abstract
Lung cancer is one of the most life-threatening diseases and causes many deaths worldwide. Early detection and treatment are necessary to save lives. It is very difficult for doctors to interpret and identify diseases using imaging modalities alone, so computer-aided diagnosis can help doctors detect cancer early and accurately. In the proposed work, optimized deformable models and deep learning techniques are applied for the detection and classification of lung cancer. The method involves pre-processing, lung lobe segmentation, lung cancer segmentation, data augmentation and lung cancer classification. Median filtering is used for pre-processing, and Bayesian fuzzy clustering is applied to segment the lung lobes. Lung cancer segmentation is carried out using a Water Cycle Sea Lion Optimization (WSLnO)-based deformable model. Data augmentation is used to increase the amount of segmented data in order to improve classification. Lung cancer classification is performed effectively using a Shepard Convolutional Neural Network (ShCNN), which is trained with the WSLnO algorithm. The proposed WSLnO algorithm is designed by incorporating the Water Cycle Algorithm (WCA) and the Sea Lion Optimization (SLnO) algorithm. The performance of the proposed technique was analyzed with various performance metrics, attaining accuracy, sensitivity, specificity and average segmentation accuracy of 0.9303, 0.9123, 0.9133 and 0.9091, respectively. J. Med. Invest. 69:244-255, August, 2022.
Affiliation(s)
- Mamtha V Shetty
- Department of Electronics & Instrumentation Engineering, JSS Academy of Technical Education, Bengaluru, VTU, India
- Jayadevappa D
- Department of Electronics & Instrumentation Engineering, JSS Academy of Technical Education, Bengaluru, VTU, India
- Satish Tunga
- Dept. of Electronics & Telecommunication Engineering, M S Ramaiah Institute of Technology, Bengaluru, VTU, India
11. Talwar V, Chufal KS, Joga S. Artificial Intelligence: A New Tool in Oncologist's Armamentarium. Indian J Med Paediatr Oncol 2021. [DOI: 10.1055/s-0041-1735577]
Abstract
Artificial intelligence (AI) has become an essential tool in human life because of its pivotal role in communications, transportation, media, and social networking. Inspired by the complex neuronal network and its functions in human beings, AI, using computer-based algorithms and training, has been explored since the 1950s. To tackle the enormous amount of patients' clinical, imaging and histopathological data, the increasing pace of research on new treatments and clinical trials, and ever-changing treatment guidelines arising from novel drugs and evidence, AI is the need of the hour. There are numerous publications and active work on AI's role in the field of oncology. In this review, we discuss the fundamental terminology of AI, its applications in oncology as a whole, and its limitations. There is an inter-relationship between AI, machine learning and deep learning. The virtual branch of AI deals with machine learning, while the physical branch deals with the delivery of different forms of treatment: surgery, targeted drug delivery, and elderly care. The applications of AI in oncology include cancer screening, diagnosis (clinical, imaging, and histopathological), radiation therapy (image acquisition, tumor and organ-at-risk segmentation, image registration, planning, and delivery), prediction of treatment outcomes and toxicities, prediction of cancer cell sensitivity to therapeutics, and clinical decision-making. A specific area of interest is the development of effective drug combinations tailored to every patient and tumor with the help of AI. Radiomics, the new kid on the block, deals with the planning and administration of radiotherapy. As with any new invention, AI has its shortcomings. The limitations include lack of external validation and proof of generalizability, difficulty in data access for rare diseases, ethical and legal issues, no precise logic behind the predictions, and, last but not least, a lack of education and expertise among medical professionals. Collaboration between the departments of clinical oncology, bioinformatics, and data sciences can help overcome these problems in the near future.
Affiliation(s)
- Vineet Talwar
- Department of Medical Oncology, Rajiv Gandhi Cancer Institute & Research Centre, New Delhi, India
- Kundan Singh Chufal
- Department of Radiation Oncology, Rajiv Gandhi Cancer Institute & Research Centre, New Delhi, India
- Srujana Joga
- Department of Medical Oncology, Rajiv Gandhi Cancer Institute & Research Centre, New Delhi, India
12. An Improved Correlation Model for Respiration Tracking in Robotic Radiosurgery Using Essential Skin Surface Motion. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2021.3097250]
13. Mylonas A, Booth J, Nguyen DT. A review of artificial intelligence applications for motion tracking in radiotherapy. J Med Imaging Radiat Oncol 2021;65:596-611. [PMID: 34288501] [DOI: 10.1111/1754-9485.13285]
Abstract
During radiotherapy, the organs and tumour move as a result of the dynamic nature of the body; this is known as intrafraction motion. Intrafraction motion can result in tumour underdose and healthy tissue overdose, thereby reducing the effectiveness of the treatment while increasing toxicity to the patients. There is a growing appreciation of intrafraction target motion management by the radiation oncology community. Real-time image-guided radiation therapy (IGRT) can track the target and account for the motion, improving the radiation dose to the tumour and reducing the dose to healthy tissue. Recently, artificial intelligence (AI)-based approaches have been applied to motion management and have shown great potential. In this review, four main categories of motion management using AI are summarised: marker-based tracking, markerless tracking, full anatomy monitoring and motion prediction. Marker-based and markerless tracking approaches focus on tracking the individual target throughout the treatment. Full anatomy algorithms monitor for intrafraction changes in the full anatomy within the field of view. Motion prediction algorithms can be used to account for the latencies due to the time for the system to localise, process and act.
Affiliation(s)
- Adam Mylonas
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia
- Jeremy Booth
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia
- Institute of Medical Physics, School of Physics, The University of Sydney, Sydney, New South Wales, Australia
- Doan Trang Nguyen
- ACRF Image X Institute, Faculty of Medicine and Health, The University of Sydney, Sydney, New South Wales, Australia
- School of Biomedical Engineering, University of Technology Sydney, Sydney, New South Wales, Australia
- Northern Sydney Cancer Centre, Royal North Shore Hospital, St Leonards, New South Wales, Australia
14. Remy C, Ahumada D, Labine A, Côté JC, Lachaine M, Bouchard H. Potential of a probabilistic framework for target prediction from surrogate respiratory motion during lung radiotherapy. Phys Med Biol 2021;66. [PMID: 33761479] [DOI: 10.1088/1361-6560/abf1b8]
Abstract
Purpose. Respiration-induced motion introduces significant positioning uncertainties in radiotherapy treatments for thoracic sites. Accounting for this motion is a non-trivial task commonly addressed with surrogate-based strategies and latency-compensating techniques. This study investigates the potential of a new unified probabilistic framework to predict, in real time from a surrogate signal, both future target motion and the associated uncertainty. Method. A Bayesian approach is developed, based on Kalman filter theory adapted specifically for surrogate measurements. Breathing motions were collected simultaneously from a lung target, two external surrogates (abdominal and thoracic markers) and an internal surrogate (liver structure) for 9 volunteers during 4 min, in which severe breathing changes occur to assess the robustness of the method. A comparison with an artificial non-linear neural network (NN) is performed, although the NN provides no confidence interval for its predictions. A static worst-case scenario and a simple static design are investigated. Results. Although the NN can reduce the prediction errors from the thoracic surrogate in some cases, the Bayesian framework outperforms the NN in most cases when using the other surrogates: the bias on predictions is reduced by 38% and 16% on average when using the liver and the abdomen, respectively, for the simple scenario, and by 40% and 31%, respectively, for the worst-case scenario. The standard deviation of the residuals is reduced on average by up to 42%. The Bayesian method is also found to be more robust to increasing latencies. The thoracic marker appears to be less reliable for predicting the target position, while the liver is shown to be a better surrogate. A statistical test confirms the significance of both observations. Conclusion. The proposed framework predicts both the future target position and the associated uncertainty, which can be valuably used to further assist motion management decisions. Further investigation is required to improve the predictions by using an adaptive version of the proposed framework.
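At the heart of such a Bayesian framework is a Kalman-style predict/update cycle driven by the surrogate signal. The sketch below is a deliberately simplified 1-D constant-velocity filter; the motion model, noise levels and the scalar surrogate-to-target mapping are illustrative assumptions, not the published model.

```python
import numpy as np

def kalman_predict_update(x, P, z, dt=0.2, q=0.05, r=1.0, h=1.0):
    """One predict/update step of a 1-D constant-velocity Kalman filter.

    x: state [target position, velocity]; P: state covariance;
    z: surrogate measurement mapped to target space by the scalar gain h.
    Returns the updated state and covariance.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity motion model
    Q = q * np.eye(2)                            # process noise
    H = np.array([[h, 0.0]])                     # surrogate observes position only
    R = np.array([[r]])                          # surrogate measurement noise

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the surrogate measurement
    y = np.array([z]) - H @ x                    # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: track a sinusoidal "breathing" trace from noisy surrogate samples.
t = np.arange(0, 10, 0.2)
truth = 10 * np.sin(2 * np.pi * t / 4)
x, P = np.array([0.0, 0.0]), np.eye(2)
for z in truth + np.random.normal(0, 1.0, t.size):
    x, P = kalman_predict_update(x, P, z)
print("final estimate (mm):", round(x[0], 2), "truth:", round(truth[-1], 2))
```

The diagonal of the updated covariance P is what provides the predicted uncertainty that the abstract highlights as the advantage over a plain neural-network predictor.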
Affiliation(s)
- Charlotte Remy
- Département de physique, Université de Montréal, Complexe des sciences, 1375 Avenue Thérèse-Lavoie-Roux, Montréal, Québec H2V 0B3, Canada
- Centre de recherche du Centre hospitalier de l'Université de Montréal, 900 Rue Saint-Denis, Montréal, Québec H2X 0A9, Canada
- Daniel Ahumada
- Département de physique, Université de Montréal, Complexe des sciences, 1375 Avenue Thérèse-Lavoie-Roux, Montréal, Québec H2V 0B3, Canada
- Centre de recherche du Centre hospitalier de l'Université de Montréal, 900 Rue Saint-Denis, Montréal, Québec H2X 0A9, Canada
- Alexandre Labine
- Département de physique, Université de Montréal, Complexe des sciences, 1375 Avenue Thérèse-Lavoie-Roux, Montréal, Québec H2V 0B3, Canada
- Centre de recherche du Centre hospitalier de l'Université de Montréal, 900 Rue Saint-Denis, Montréal, Québec H2X 0A9, Canada
- Jean-Charles Côté
- Département de radio-oncologie, Centre hospitalier de l'Université de Montréal (CHUM), 1560 rue Sherbrooke est, Montréal, Québec H2L 4M1, Canada
- Martin Lachaine
- Elekta Ltd., 2050 de Bleury, Suite 200, Montréal, Québec H3A 2J5, Canada
- Hugo Bouchard
- Département de physique, Université de Montréal, Complexe des sciences, 1375 Avenue Thérèse-Lavoie-Roux, Montréal, Québec H2V 0B3, Canada
- Centre de recherche du Centre hospitalier de l'Université de Montréal, 900 Rue Saint-Denis, Montréal, Québec H2X 0A9, Canada
- Département de radio-oncologie, Centre hospitalier de l'Université de Montréal (CHUM), 1560 rue Sherbrooke est, Montréal, Québec H2L 4M1, Canada
15. Kumar M, Singh S, Freudenthaler B. Gaussian fuzzy theoretic analysis for variational learning of nested compositions. Int J Approx Reason 2021. [DOI: 10.1016/j.ijar.2020.12.021]
16. Liu Z, Liu F, Chen W, Liu X, Hou X, Shen J, Guan H, Zhen H, Wang S, Chen Q, Chen Y, Zhang F. Automatic Segmentation of Clinical Target Volumes for Post-Modified Radical Mastectomy Radiotherapy Using Convolutional Neural Networks. Front Oncol 2021;10:581347. [PMID: 33665160] [PMCID: PMC7921705] [DOI: 10.3389/fonc.2020.581347]
Abstract
Background This study aims to construct and validate a model based on convolutional neural networks (CNNs) that can automatically segment the clinical target volumes (CTVs) of breast cancer for radiotherapy. Methods In this work, computed tomography (CT) scans of 110 patients who underwent modified radical mastectomies were collected. The CTV contours were confirmed by two experienced oncologists. A novel CNN was constructed to automatically delineate the CTV. Quantitative evaluation metrics were calculated, and a clinical evaluation was conducted to evaluate the performance of our model. Results The mean Dice similarity coefficient (DSC) of the proposed model was 0.90, and the 95th percentile Hausdorff distance (95HD) was 5.65 mm. The evaluation results of the two clinicians showed that 99.3% of the chest wall CTV slices could be accepted by clinician A, and this number was 98.9% for clinician B. In addition, 9/10 of patients had all slices accepted by clinician A, while 7/10 were accepted by clinician B. The score differences between the AI (artificial intelligence) group and the GT (ground truth) group showed no statistically significant difference for either clinician. However, the score differences in the AI group were significantly different between the two clinicians. The Kappa consistency index was 0.259. It took 3.45 s to delineate the chest wall CTV using the model. Conclusion Our model can automatically generate CTVs for breast cancer. The AI-generated structures of the proposed model were comparable to, or even better than, human-generated structures. Additional multicentre evaluations should be performed for adequate validation before the model can be fully applied in clinical practice.
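The Dice similarity coefficient reported above can be computed directly from binary masks; this is a generic NumPy sketch rather than the study's evaluation pipeline.

```python
import numpy as np

def dice_coefficient(pred_mask, gt_mask, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks.

    DSC = 2 * |A intersect B| / (|A| + |B|); 1.0 means perfect overlap.
    """
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + eps)

# Toy usage on two overlapping squares in a 2-D slice.
pred = np.zeros((64, 64)); pred[10:40, 10:40] = 1
gt = np.zeros((64, 64));   gt[15:45, 15:45] = 1
print(round(dice_coefficient(pred, gt), 3))
```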
Affiliation(s)
- Zhikai Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Fangjie Liu
- Department of Radiation Oncology, Sun Yat-sen University Cancer Center, Guangzhou, China
- Wanqi Chen
- Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Xia Liu
- Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Xiaorong Hou
- Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Jing Shen
- Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Hui Guan
- Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Hongnan Zhen
- Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
- Qi Chen
- MedMind Technology Co., Ltd., Beijing, China
- Yu Chen
- MedMind Technology Co., Ltd., Beijing, China
- Fuquan Zhang
- Department of Radiation Oncology, Peking Union Medical College Hospital (CAMS), Beijing, China
17.
18. Shan H, Jia X, Yan P, Li Y, Paganetti H, Wang G. Synergizing medical imaging and radiotherapy with deep learning. Mach Learn Sci Technol 2020. [DOI: 10.1088/2632-2153/ab869f]
19. Nabavi S, Abdoos M, Moghaddam ME, Mohammadi M. Respiratory Motion Prediction Using Deep Convolutional Long Short-Term Memory Network. J Med Signals Sens 2020;10:69-75. [PMID: 32676442] [PMCID: PMC7359959] [DOI: 10.4103/jmss.jmss_38_19]
Abstract
Background Pulmonary movements during radiation therapy can cause damage to healthy tissues. It is necessary to adapt treatment planning to tumor motion to avoid damaging healthy tissue. A range of approaches has been proposed to address this issue, and treatment planning based on four-dimensional computed tomography (4D CT) images is one of the most practical options. Although several methods have been proposed to predict pulmonary movements based on mathematical algorithms, the use of deep artificial neural networks has recently been considered. Methods In the current study, convolutional long short-term memory networks are applied to predict and generate images throughout the breathing cycle. A total of 3295 CT images of six patients in three different views was considered as reference images. The proposed method was evaluated in six experiments based on a leave-one-patient-out method similar to cross-validation. Results The weighted average results of the experiments in terms of the root-mean-squared error and structural similarity index measure are 9 × 10^-3 and 0.943, respectively. Conclusion Because of its generative nature, the proposed method produces CT images throughout the breathing cycle and thereby improves radiotherapy treatment planning when 4D CT images are not available.
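The two image-similarity metrics used above, root-mean-squared error and SSIM, can be reproduced with NumPy and scikit-image; the sketch assumes 2-D slices normalized to [0, 1] and is not the authors' evaluation code.

```python
import numpy as np
from skimage.metrics import structural_similarity

def evaluate_prediction(pred_slice, ref_slice):
    """Return (RMSE, SSIM) between a predicted and a reference CT slice.

    Both inputs are 2-D float arrays scaled to [0, 1].
    """
    rmse = np.sqrt(np.mean((pred_slice - ref_slice) ** 2))
    ssim = structural_similarity(pred_slice, ref_slice, data_range=1.0)
    return rmse, ssim

# Toy usage: compare a reference slice with a noisy copy.
rng = np.random.default_rng(0)
ref = rng.random((128, 128))
pred = np.clip(ref + rng.normal(0, 0.01, ref.shape), 0, 1)
print(evaluate_prediction(pred, ref))
```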
Affiliation(s)
- Shahabedin Nabavi
- Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran
- Monireh Abdoos
- Faculty of Computer Science and Engineering, Shahid Beheshti University, Tehran, Iran
- Mohammad Mohammadi
- Department of Medical Physics, Royal Adelaide Hospital, Adelaide, Australia
- Department of Medical Physics, School of Physical Sciences, The University of Adelaide, Adelaide, Australia
20. Deep fuzzy model for non-linear effective connectivity estimation in the intuition of consciousness correlates. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2019.101732]
21. Masood A, Yang P, Sheng B, Li H, Li P, Qin J, Lanfranchi V, Kim J, Feng DD. Cloud-Based Automated Clinical Decision Support System for Detection and Diagnosis of Lung Cancer in Chest CT. IEEE J Transl Eng Health Med 2019;8:4300113. [PMID: 31929952] [PMCID: PMC6946021] [DOI: 10.1109/jtehm.2019.2955458]
Abstract
Lung cancer is a major cause of cancer-related deaths. Detecting pulmonary cancer in the early stages can greatly increase the survival rate. Manual delineation of lung nodules by radiologists is a tedious task. We developed a novel computer-aided decision support system for lung nodule detection based on a 3D Deep Convolutional Neural Network (3DDCNN) to assist radiologists. Our decision support system provides a second opinion to radiologists in lung cancer diagnostic decision making. In order to leverage 3-dimensional information from Computed Tomography (CT) scans, we applied median intensity projection and a multi-Region Proposal Network (mRPN) for automatic selection of potential regions of interest. Our Computer Aided Diagnosis (CAD) system has been trained and validated using the LUNA16, ANODE09, and LIDC-IDRI datasets; the experiments demonstrate the superior performance of our system, attaining sensitivity, specificity, AUROC and accuracy of 98.4%, 92%, 96% and 98.51%, respectively, with 2.1 FPs per scan. We integrated cloud computing, and trained and validated our cloud-based 3DDCNN on the datasets provided by Shanghai Sixth People's Hospital, as well as LUNA16, ANODE09, and LIDC-IDRI. Our system outperformed state-of-the-art systems and obtained an impressive 98.7% sensitivity at 1.97 FPs per scan. This shows the potential of deep learning, in combination with cloud computing, for accurate and efficient lung nodule detection via CT imaging, which could help doctors and radiologists in treating lung cancer patients.
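The reported operating points (sensitivity, specificity, false positives per scan) follow from simple detection counts; the sketch below shows the arithmetic on hypothetical counts chosen to roughly reproduce the figures quoted above, not the CAD system's actual outputs.

```python
def detection_metrics(tp, fn, tn, fp, n_scans):
    """Basic nodule-detection metrics from confusion-matrix counts.

    tp/fn are counted per nodule, tn/fp per candidate region, and
    false positives are also normalized per CT scan.
    """
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "fps_per_scan": fp / n_scans}

# Toy usage with hypothetical counts over a 100-scan test set.
print(detection_metrics(tp=246, fn=4, tn=2300, fp=210, n_scans=100))
```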
Affiliation(s)
- Anum Masood
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Po Yang
- Department of Computer Science, University of Sheffield, Sheffield, S1 4DP, U.K.
- Bin Sheng
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, 200240, China
- Huating Li
- Shanghai Jiao Tong University Affiliated Sixth People's Hospital, Shanghai, 200233, China
- Ping Li
- Department of Computing, The Hong Kong Polytechnic University, Hong Kong
- Jing Qin
- Centre for Smart Health, School of Nursing, The Hong Kong Polytechnic University, Hong Kong
- Jinman Kim
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, The University of Sydney, Sydney, NSW 2006, Australia
- David Dagan Feng
- Biomedical and Multimedia Information Technology Research Group, School of Information Technologies, The University of Sydney, Sydney, NSW 2006, Australia
22. Boldrini L, Bibault JE, Masciocchi C, Shen Y, Bittner MI. Deep Learning: A Review for the Radiation Oncologist. Front Oncol 2019;9:977. [PMID: 31632910] [PMCID: PMC6779810] [DOI: 10.3389/fonc.2019.00977]
Abstract
Introduction: Deep Learning (DL) is a machine learning technique that uses deep neural networks to create a model. The application areas of deep learning in radiation oncology include image segmentation and detection, image phenotyping and radiomic signature discovery, clinical outcome prediction, image dose quantification, dose-response modeling, radiation adaptation, and image generation. In this review, we explain the methods used in DL and perform a literature review using the Medline database to identify studies using deep learning in radiation oncology. Methods: A literature review was performed using PubMed/Medline in order to identify important recent publications to be synthesized into a review of the current status of Deep Learning in radiation oncology, directed at a clinically-oriented reader. The search strategy included the search terms "radiotherapy" and "deep learning." In addition, reference lists of selected articles were hand searched for further potential hits of relevance to this review. The search was conducted in April 2018 and identified studies published between 1997 and 2018, strongly skewed toward 2015 and later. Results: Studies using DL for image segmentation were identified in brain (n = 2), head and neck (n = 3), lung (n = 6), abdominal (n = 2), and pelvic (n = 6) cancers. Use of Deep Learning has also been reported for outcome prediction, such as toxicity modeling (n = 3), treatment response and survival (n = 2), or treatment planning (n = 5). Conclusion: Over the past few years, there has been a significant number of studies assessing the performance of DL techniques in radiation oncology. They demonstrate how DL-based systems can aid clinicians in their daily work, be it by reducing the time required for, or the variability in, segmentation, or by helping to predict treatment outcomes and toxicities. It still remains to be seen when these techniques will be employed in routine clinical practice.
Affiliation(s)
- Luca Boldrini
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Università Cattolica del Sacro Cuore, Rome, Italy
- Jean-Emmanuel Bibault
- Radiation Oncology Department, Georges Pompidou European Hospital, Assistance Publique—Hôpitaux de Paris, Paris Descartes University, Paris Sorbonne Cité, Paris, France
- Carlotta Masciocchi
- Dipartimento di Diagnostica per Immagini, Radioterapia Oncologica ed Ematologia, Università Cattolica del Sacro Cuore, Rome, Italy
- Yanting Shen
- Department of Engineering Science, University of Oxford, Oxford, United Kingdom
- Martin-Immanuel Bittner
- CRUK/MRC Oxford Institute for Radiation Oncology, University of Oxford, Oxford, United Kingdom
23. Ali Mirzapour S, Mazur T, Sharp G, Salari E. Intra-fraction motion prediction in MRI-guided radiation therapy using Markov processes. Phys Med Biol 2019;64:195006. [DOI: 10.1088/1361-6560/ab37a9]
24. Rattan R, Kataria T, Banerjee S, Goyal S, Gupta D, Pandita A, Bisht S, Narang K, Mishra SR. Artificial intelligence in oncology, its scope and future prospects with specific reference to radiation oncology. BJR Open 2019;1:20180031. [PMID: 33178922] [PMCID: PMC7592433] [DOI: 10.1259/bjro.20180031]
Abstract
OBJECTIVE Artificial intelligence (AI) seems to be bridging the gap between the acquisition of data and its meaningful interpretation. These approaches have shown outstanding capabilities, outperforming most classification and regression methods to date, and the ability to automatically learn the most suitable data representation for the task at hand and present it for better correlation. This article tries to sensitize practising radiation oncologists to where the potential role of AI lies and what more can be achieved with it. METHODS AND MATERIALS Contemporary literature was searched, the available literature was sorted, and an attempt was made at writing a comprehensive non-systematic review. RESULTS The article addresses various areas in oncology, especially in the field of radiation oncology, where work based on AI has been done. Whether in screening modalities, diagnosis or prognostic assays, AI has delivered more accurate definitions of results and of patient survival. Various steps and protocols in radiation oncology now use AI-based methods, for example in planning, segmentation and delivery of radiation. The benefits of AI across all platforms of the health sector may lead to more refined and personalized medicine in the near future. CONCLUSION AI, with the use of machine learning and artificial neural networks, has come up with faster and more accurate solutions for the problems faced by oncologists. The uses of AI are likely to increase exponentially. However, given concerns regarding demographic discrepancies in relation to patients, disease and their natural history, and reports of manipulation of AI, the ultimate responsibility will rest with the treating physicians.
Affiliation(s)
- Rajit Rattan
- Division of Radiation Oncology, Medanta- The Medicity, Gurgaon, Haryana, India
- Tejinder Kataria
- Division of Radiation Oncology, Medanta- The Medicity, Gurgaon, Haryana, India
- Susovan Banerjee
- Division of Radiation Oncology, Medanta- The Medicity, Gurgaon, Haryana, India
- Shikha Goyal
- Division of Radiation Oncology, Medanta- The Medicity, Gurgaon, Haryana, India
- Deepak Gupta
- Division of Radiation Oncology, Medanta- The Medicity, Gurgaon, Haryana, India
- Akshi Pandita
- Department of Dermatology, P. N. Behl Skin Institute, New Delhi, India
- Shyam Bisht
- Division of Radiation Oncology, Medanta- The Medicity, Gurgaon, Haryana, India
- Department of Dermatology, P. N. Behl Skin Institute, New Delhi, India
- Kushal Narang
- Division of Radiation Oncology, Medanta- The Medicity, Gurgaon, Haryana, India
25. Ren C, Liu SR, Wu WB, Yu XL, Cheng ZG, Liu FY, Liang P. Experimental and Preliminary Clinical Study of Real-Time Registration in Liver Tumors During Respiratory Motion Based on a Multimodality Image Navigation System. Technol Cancer Res Treat 2019;18:1533033819857767. [PMID: 31390948] [PMCID: PMC6686313] [DOI: 10.1177/1533033819857767]
Abstract
Purpose: To develop a fusion imaging system that combines ultrasound and computed tomography for real-time tumor tracking and to validate the accuracy of performing registration via this approach during a specific breathing phase. Materials and Methods: The initial part of the experimental study was performed using iodized oil injection in pig livers and was focused on determining the accuracy of registration. Eight points (A1-4 and B1-4) at different positions and with different target sizes were selected as target points. During respiratory motion, we used our self-designed system to perform the procedure either with (experimental group, E) or without (control group, C) the respiratory monitoring module. The registration errors were then compared between the 2 groups and within group E. The second part of this study was designed as a preliminary clinical study and was performed in 18 patients. Screening was performed to determine the combination of points on the body surface that provided the highest sensitivity to respiratory motion. Registration was performed either with (group E) or without (group C) the respiratory monitoring module. Registration errors were compared between the 2 groups. Results: In part 1 of this study, there were fewer registration errors at each point in group E than at the corresponding points in group C (P < .01). In group E, there were more registration errors at points A1 and B1 than at the other points (P < .05). There was no significant difference in registration errors among the remaining points. During part 2 of the study, there was a significant difference in the registration errors between the 2 groups (P < .01). Conclusions: Real-time fusion registration is feasible and can be accurately performed during respiratory motions when using this system.
Collapse
Affiliation(s)
- Chao Ren
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
| | - Shi-Rong Liu
- Peking University Third Hospital, Beijing, China
| | - Wen-Bo Wu
- Baihui Weikang Medical Robot Technology Co, Ltd, Beijing, China
| | - Xiao-Ling Yu
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
| | - Zhi-Gang Cheng
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
| | - Fang-Yi Liu
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
| | - Ping Liang
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, China
| |
Collapse
|
26
|
Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med 2018; 98:126-146. [PMID: 29787940 DOI: 10.1016/j.compbiomed.2018.05.018] [Citation(s) in RCA: 168] [Impact Index Per Article: 24.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2018] [Revised: 05/15/2018] [Accepted: 05/15/2018] [Indexed: 12/17/2022]
Abstract
More than 50% of cancer patients are treated with radiotherapy, either exclusively or in combination with other methods. The planning and delivery of radiotherapy treatment is a complex process, but one that can now be greatly facilitated by artificial intelligence technology. Deep learning is the fastest-growing field in artificial intelligence and has been used successfully in recent years in many domains, including medicine. In this article, we first explain the concept of deep learning, addressing it in the broader context of machine learning. The most common network architectures are presented, with a more specific focus on convolutional neural networks. We then present a review of the published work on deep learning methods that can be applied to radiotherapy, classified into seven categories related to the patient workflow, which can provide some insight into potential future applications. We have attempted to make this paper accessible to both the radiotherapy and deep learning communities, and hope that it will inspire new collaborations between them to develop dedicated radiotherapy applications.
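To make the convolutional-network concept concrete, here is a minimal PyTorch sketch of a small 2D CNN of the kind surveyed for radiotherapy imaging tasks, together with one training step on a random batch. The patch size, channel counts and labels are illustrative assumptions, not an architecture from the cited survey.

```python
# Minimal sketch of a 2D convolutional network for classifying or scoring an image
# patch (e.g., a CT patch). Shapes and labels are synthetic placeholders; this
# illustrates the architecture pattern, not any published model.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1-channel patch in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# One training step on a random batch, just to show the usual loop.
model = PatchCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 64, 64)          # batch of 8 synthetic 64x64 patches
y = torch.randint(0, 2, (8,))          # synthetic binary labels
loss = loss_fn(model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
print(f"training loss on synthetic batch: {loss.item():.3f}")
```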
Collapse
Affiliation(s)
- Philippe Meyer
- Department of Medical Physics, Paul Strauss Center, Strasbourg, France.
| | | | | | | |
Collapse
|
27
|
Mortality prediction in intensive care units (ICUs) using a deep rule-based fuzzy classifier. J Biomed Inform 2018; 79:48-59. [DOI: 10.1016/j.jbi.2018.02.008] [Citation(s) in RCA: 48] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/03/2017] [Revised: 02/13/2018] [Accepted: 02/16/2018] [Indexed: 11/23/2022]
|
28
|
Kalantari A, Kamsin A, Shamshirband S, Gani A, Alinejad-Rokny H, Chronopoulos AT. Computational intelligence approaches for classification of medical data: State-of-the-art, future challenges and research directions. Neurocomputing 2018. [DOI: 10.1016/j.neucom.2017.01.126] [Citation(s) in RCA: 70] [Impact Index Per Article: 10.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/13/2023]
|
29
|
Respiratory Motion Modelling Using cGANs. MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION – MICCAI 2018 2018. [DOI: 10.1007/978-3-030-00937-3_10] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
|
30
|
Ekins S. The Next Era: Deep Learning in Pharmaceutical Research. Pharm Res 2016; 33:2594-603. [PMID: 27599991 DOI: 10.1007/s11095-016-2029-7] [Citation(s) in RCA: 127] [Impact Index Per Article: 14.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/10/2016] [Accepted: 08/23/2016] [Indexed: 01/22/2023]
Abstract
Over the past decade we have witnessed the increasing sophistication of machine learning algorithms applied in daily use, from internet searches, voice recognition and social network software to machine vision software in cameras, phones, robots and self-driving cars. Pharmaceutical research has also seen its fair share of machine learning developments. For example, applying such methods to mine the growing datasets created in drug discovery not only enables us to learn from the past but also to predict a molecule's properties and behavior in the future. The latest machine learning algorithm garnering significant attention is deep learning, an artificial neural network with multiple hidden layers. Publications over the last 3 years suggest that this algorithm may have advantages over previous machine learning methods and offer a slight but discernible edge in predictive performance. The time has come for a balanced review of this technique, but also to apply machine learning methods such as deep learning across a wider array of endpoints relevant to pharmaceutical research for which the datasets are growing, such as physicochemical property prediction, formulation prediction, absorption, distribution, metabolism, excretion and toxicity (ADME/Tox), target prediction and skin permeation. We also show that there are many potential applications of deep learning beyond cheminformatics. It will be important to perform prospective testing (which has rarely been carried out to date) in order to convince skeptics that there will be benefits from investing in this technique.
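As a hedged illustration of the property-prediction use case described above, the short Python sketch below fits a multi-layer perceptron to synthetic fixed-length descriptor vectors and a noisy continuous endpoint. The descriptors, target and network size are placeholders; no dataset or model from the cited paper is implied.

```python
# Hypothetical sketch: a multi-layer perceptron regressing a continuous molecular
# property (e.g., a solubility-like endpoint) from fixed-length descriptor vectors.
# Descriptors and the target are synthetic stand-ins for real cheminformatics features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 128))                    # 128-d synthetic molecular descriptors
true_w = rng.normal(size=128)
y = X @ true_w + rng.normal(scale=0.5, size=400)   # synthetic property with noise

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=1)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=1)
model.fit(X_train, y_train)
print(f"R^2 on held-out synthetic data: {r2_score(y_test, model.predict(X_test)):.3f}")
```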
Collapse
Affiliation(s)
- Sean Ekins
- Collaborations Pharmaceuticals, Inc, 5616 Hilltop Needmore Road, Fuquay-Varina, North Carolina, 27526, USA.
- Collaborative Drug Discovery, 1633 Bayshore Highway, Suite 342, Burlingame, California, 94010, USA.
| |
Collapse
|