1
Linguraru MG, Bakas S, Aboian M, Chang PD, Flanders AE, Kalpathy-Cramer J, Kitamura FC, Lungren MP, Mongan J, Prevedello LM, Summers RM, Wu CC, Adewole M, Kahn CE. Clinical, Cultural, Computational, and Regulatory Considerations to Deploy AI in Radiology: Perspectives of RSNA and MICCAI Experts. Radiol Artif Intell 2024;6:e240225. PMID: 38984986; PMCID: PMC11294958; DOI: 10.1148/ryai.240225. [Received: 04/13/2024] [Revised: 04/13/2024] [Accepted: 04/25/2024] [Indexed: 07/11/2024]
Abstract
The Radiological Society of North America (RSNA) and the Medical Image Computing and Computer Assisted Intervention (MICCAI) Society have led a series of joint panels and seminars focused on the present impact and future directions of artificial intelligence (AI) in radiology. These conversations have collected viewpoints from multidisciplinary experts in radiology, medical imaging, and machine learning on the current clinical penetration of AI technology in radiology and how it is affected by trust, reproducibility, explainability, and accountability. The collective points, both practical and philosophical, define the cultural changes for radiologists and AI scientists working together and describe the challenges ahead for AI technologies to meet broad approval. This article presents the perspectives of experts from MICCAI and RSNA on the clinical, cultural, computational, and regulatory considerations, coupled with recommended reading materials, essential to adopt AI technology successfully in radiology and, more generally, in clinical practice. The report emphasizes the importance of collaboration to improve clinical deployment, highlights the need to integrate clinical and medical imaging data, and introduces strategies to ensure smooth and incentivized integration. Keywords: Adults and Pediatrics, Computer Applications-General (Informatics), Diagnosis, Prognosis © RSNA, 2024.
Affiliation(s)
- Marius George Linguraru
- Spyridon Bakas
- Mariam Aboian
- Peter D. Chang
- Adam E. Flanders
- Jayashree Kalpathy-Cramer
- Felipe C. Kitamura
- Matthew P. Lungren
- John Mongan
- Luciano M. Prevedello
- Ronald M. Summers
- Carol C. Wu
- Maruf Adewole
- Charles E. Kahn
- From the Sheikh Zayed Institute for Pediatric Surgical Innovation, Children’s National Hospital, Washington, DC (M.G.L.); Divisions of Radiology and Pediatrics, George Washington University School of Medicine and Health Sciences, Washington, DC (M.G.L.); Division of Computational Pathology, Department of Pathology & Laboratory Medicine, School of Medicine, Indiana University, Indianapolis, Ind (S.B.); Department of Radiology, Children’s Hospital of Philadelphia, Philadelphia, Pa (M.A.); Department of Radiological Sciences, University of California Irvine, Irvine, Calif (P.D.C.); Department of Radiology, Thomas Jefferson University, Philadelphia, Pa (A.E.F.); Department of Ophthalmology, University of Colorado Anschutz Medical Campus, Aurora, Colo (J.K.C.); Department of Applied Innovation and AI, Diagnósticos da América SA (DasaInova), São Paulo, Brazil (F.C.K.); Department of Diagnostic Imaging, Universidade Federal de São Paulo, São Paulo, Brazil (F.C.K.); Microsoft, Nuance, Burlington, Mass (M.P.L.); Department of Radiology and Biomedical Imaging and Center for Intelligent Imaging, University of California San Francisco, San Francisco, Calif (J.M.); Department of Radiology, The Ohio State University Wexner Medical Center, Columbus, Ohio (L.M.P.); Department of Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, Md (R.M.S.); Division of Diagnostic Imaging, University of Texas MD Anderson Cancer Center, Houston, Tex (C.C.W.); Medical Artificial Intelligence Laboratory, University of Lagos College of Medicine, Lagos, Nigeria (M.A.); and Department of Radiology, University of Pennsylvania, 3400 Spruce St, 1 Silverstein, Philadelphia, PA 19104-6243 (C.E.K.)
2
Al-Kadi OS, Al-Emaryeen R, Al-Nahhas S, Almallahi I, Braik R, Mahafza W. Empowering brain cancer diagnosis: harnessing artificial intelligence for advanced imaging insights. Rev Neurosci 2024;35:399-419. PMID: 38291768; DOI: 10.1515/revneuro-2023-0115. [Received: 09/19/2023] [Accepted: 12/10/2023] [Indexed: 02/01/2024]
Abstract
Artificial intelligence (AI) is increasingly being used in the medical field, specifically for brain cancer imaging. In this review, we explore how AI-powered medical imaging can impact the diagnosis, prognosis, and treatment of brain cancer. We discuss various AI techniques, including deep learning and causality learning, and their relevance. Additionally, we examine current applications that provide practical solutions for detecting, classifying, segmenting, and registering brain tumors. Although challenges such as data quality, availability, interpretability, transparency, and ethics persist, we emphasise the enormous potential of intelligent applications in standardising procedures and enhancing personalised treatment, leading to improved patient outcomes. Innovative AI solutions have the power to revolutionise neuro-oncology by enhancing the quality of routine clinical practice.
Affiliation(s)
- Omar S Al-Kadi
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Roa'a Al-Emaryeen
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Sara Al-Nahhas
- King Abdullah II School for Information Technology, University of Jordan, Amman, 11942, Jordan
- Isra'a Almallahi
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Ruba Braik
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
- Waleed Mahafza
- Department of Diagnostic Radiology, Jordan University Hospital, Amman, 11942, Jordan
3
Ramos JS, Cazzolato MT, Linares OC, Maciel JG, Menezes-Reis R, Azevedo-Marques PM, Nogueira-Barbosa MH, Traina Júnior C, Traina AJM. Fast and accurate 3-D spine MRI segmentation using FastCleverSeg. Magn Reson Imaging 2024;109:134-146. PMID: 38508290; DOI: 10.1016/j.mri.2024.03.021. [Received: 02/03/2024] [Revised: 03/13/2024] [Accepted: 03/16/2024] [Indexed: 03/22/2024]
Abstract
Accurate and efficient segmentation of vertebral bodies, muscles, and discs is crucial for analyzing various spinal diseases. However, traditional methods are either laborious and time-consuming (manual segmentation) or require extensive training data (fully automatic segmentation). FastCleverSeg, our proposed semi-automatic segmentation approach, addresses those limitations by significantly reducing user interaction while maintaining high accuracy. First, we reduce user interaction by requiring the manual annotation of only two or three slices. Next, we automatically Estimate the Annotation on Intermediary Slices (EANIS) using traditional computer vision/graphics concepts. Finally, our proposed method leverages improved voxel weight balancing to achieve fast and precise volumetric segmentation. Experimental evaluations on our assembled diverse MRI databases, comprising 179 patients (60 male, 119 female), demonstrate a remarkable 25 ms (30 ms standard deviation) processing time and a significant reduction in user interaction compared with existing approaches. Importantly, FastCleverSeg maintains or surpasses the segmentation quality of competing methods, achieving a Dice score of 94%. This invaluable tool empowers physicians to efficiently generate reliable ground truths, expediting the segmentation process and paving the way for future integration with deep learning approaches. In turn, this opens exciting possibilities for fully automated spine segmentation.
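The 94% Dice score reported above is the standard overlap measure between a predicted and a reference segmentation mask. A minimal sketch of the metric (the function name and toy masks below are illustrative, not taken from the paper):

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treated as perfect agreement by convention
    return 2.0 * intersection / denom

# Toy 2D "slices": 2 overlapping voxels, mask sizes 3 and 2 -> 2*2/(3+2) = 0.8
a = np.array([[0, 1, 1], [0, 1, 0]])
b = np.array([[0, 1, 0], [0, 1, 0]])
print(round(dice_score(a, b), 3))  # 0.8
```

The same formula extends unchanged to 3-D volumes, since it only sums voxel counts.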
Affiliation(s)
- Jonathan S Ramos: Computer Science Department, Federal University of Rondônia (DACC/UNIR), 364 BR, 76801-059, Rondônia, Brazil; Institute of Mathematics and Computer Sciences, University of Sao Paulo (ICMC/USP), 400 Trabalhador Saocarlense Avenue, 13566-590 São Carlos, São Paulo, Brazil
- Mirela T Cazzolato: Institute of Mathematics and Computer Sciences, University of Sao Paulo (ICMC/USP), 400 Trabalhador Saocarlense Avenue, 13566-590 São Carlos, São Paulo, Brazil
- Oscar C Linares: Institute of Mathematics and Computer Sciences, University of Sao Paulo (ICMC/USP), 400 Trabalhador Saocarlense Avenue, 13566-590 São Carlos, São Paulo, Brazil
- Jamilly G Maciel: Ribeirao Preto Medical School, University of Sao Paulo (FMRP/USP), 3900 Bandeirantes Avenue, 695014 Ribeirão Preto, São Paulo, Brazil
- Rafael Menezes-Reis: Ribeirao Preto Medical School, University of Sao Paulo (FMRP/USP), 3900 Bandeirantes Avenue, 695014 Ribeirão Preto, São Paulo, Brazil
- Paulo M Azevedo-Marques: Ribeirao Preto Medical School, University of Sao Paulo (FMRP/USP), 3900 Bandeirantes Avenue, 695014 Ribeirão Preto, São Paulo, Brazil
- Marcello H Nogueira-Barbosa: Ribeirao Preto Medical School, University of Sao Paulo (FMRP/USP), 3900 Bandeirantes Avenue, 695014 Ribeirão Preto, São Paulo, Brazil
- Caetano Traina Júnior: Institute of Mathematics and Computer Sciences, University of Sao Paulo (ICMC/USP), 400 Trabalhador Saocarlense Avenue, 13566-590 São Carlos, São Paulo, Brazil
- Agma J M Traina: Institute of Mathematics and Computer Sciences, University of Sao Paulo (ICMC/USP), 400 Trabalhador Saocarlense Avenue, 13566-590 São Carlos, São Paulo, Brazil
4.
Prasad V, Jeba Jingle ID, Sriramakrishnan GV. DTDO: Driving Training Development Optimization enabled deep learning approach for brain tumour classification using MRI. NETWORK (BRISTOL, ENGLAND) 2024:1-42. [PMID: 38801074 DOI: 10.1080/0954898x.2024.2351159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2023] [Accepted: 04/29/2024] [Indexed: 05/29/2024]
Abstract
A brain tumour is an abnormal mass of tissue. Brain tumours vary in size, from tiny to large, and display variations in location and shape, which add complexity to their detection. The accurate delineation of tumour regions is challenging due to their irregular boundaries. In this research, these issues are addressed by introducing the DTDO-ZFNet for the detection of brain tumours. The input magnetic resonance imaging (MRI) image is fed to the pre-processing stage. Tumour areas are segmented using SegNet, whose factors are biased using DTDO. Image augmentation is carried out using established techniques, such as geometric transformation and colour space transformation. Features such as the GIST descriptor, PCA-NGIST, statistical and Haralick features, the SLBT feature, and CNN features are then extracted. Finally, tumour categorization is accomplished by ZFNet, which is trained using DTDO. The devised DTDO is a consolidation of DTBO and CDDO. Comparison of the proposed DTDO-ZFNet with existing methods shows that it achieves the highest accuracy of 0.944, a positive predictive value (PPV) of 0.936, a true positive rate (TPR) of 0.939, a negative predictive value (NPV) of 0.937, and a minimal false-negative rate (FNR) of 0.061.
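The accuracy, PPV, TPR, NPV, and FNR figures quoted in abstracts like the one above all derive from the four confusion-matrix counts. A small sketch (the counts here are hypothetical, chosen only so the rates land near the reported values; note that FNR is the complement of TPR):

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard classification metrics from raw confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),  # positive predictive value (precision)
        "tpr": tp / (tp + fn),  # true positive rate (sensitivity/recall)
        "npv": tn / (tn + fn),  # negative predictive value
        "fnr": fn / (fn + tp),  # false negative rate = 1 - TPR
    }

# Hypothetical counts roughly matching the rates reported above
m = classification_metrics(tp=939, fp=64, tn=937, fn=61)
print(m["tpr"], m["fnr"])  # 0.939 0.061
```

Checking that TPR + FNR = 1 is a quick sanity test when reading reported metrics, since the two are computed from the same denominator.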
Affiliation(s)
- Vadamodula Prasad: Department of Computer Science & Engineering, Lendi Institute of Engineering & Technology, Jonnada, India
- Issac Diana Jeba Jingle: Department of Computer Science & Engineering, Christ (Deemed to be University), Bangalore, India
5.
Selvi T K, Sumaiya Begum A, Poonkuzhali P, Aarthi R. Brain tumor classification for MRI images using dual-discriminator conditional generative adversarial network. Electromagn Biol Med 2024:1-14. [PMID: 38461438 DOI: 10.1080/15368378.2024.2321352] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2023] [Accepted: 02/15/2024] [Indexed: 03/12/2024]
Abstract
This research focuses on improving the detection and classification of brain tumors using a Dual-Discriminator Conditional Generative Adversarial Network (DDCGAN) for MRI images. The proposed system is implemented in MATLAB. Brain images are taken from the BraTS MRI dataset and preprocessed using structural interval gradient filtering to remove noise and improve image quality. The preprocessing outcomes are passed to feature extraction, where features are extracted by the empirical wavelet transform (EWT). The extracted features are given to the DDCGAN, which classifies the brain images into glioma, meningioma, pituitary, and normal classes. The weight parameters of the DDCGAN are then optimized using Border Collie Optimization (BCO), a metaheuristic approach for real-world optimization problems, which maximizes detection accuracy and reduces computational time. Experimental results demonstrate that the proposed system achieves a high sensitivity of 99.58%. The BCO-DDCGAN-MRI-BTC method outperforms existing techniques in precision and sensitivity when compared with methods such as Kernel Basis SVM (KSVM-HHO-BTC), Joint Training of Two-Channel Deep Neural Network (JT-TCDNN-BTC), and YOLOv2 with a Convolutional Neural Network (YOLOv2-CNN-BTC). The findings indicate that the proposed method enhances the accuracy of brain tumor classification while reducing computational time and errors.
Affiliation(s)
- Kalai Selvi T: Department of Artificial Intelligence and Data Science, Easwari Engineering College, Chennai, Tamil Nadu, India
- A Sumaiya Begum: Department of Electronics and Communication Engineering, R.M.D Engineering College, Chennai, Tamil Nadu, India
- P Poonkuzhali: Department of Electronics and Communication Engineering, R.M.D Engineering College, Chennai, Tamil Nadu, India
- R Aarthi: Department of Electronics and Communication Engineering, R.M.D Engineering College, Chennai, Tamil Nadu, India
6.
Liu C, Liu X, Wei Z, Chang Z, Bai Y, Zeng P, Cao Q, Tie C, Lei Z, Sun P, Liang H, Sun Q, Zhang X. Amorphous Albumin Gadolinium-Based Nanoparticles for Ultrahigh-Resolution Magnetic Resonance Angiography. ACS APPLIED MATERIALS & INTERFACES 2024; 16:9702-9712. [PMID: 38363797 PMCID: PMC10911108 DOI: 10.1021/acsami.3c16391] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Revised: 01/24/2024] [Accepted: 01/31/2024] [Indexed: 02/18/2024]
Abstract
Magnetic resonance angiography (MRA) contrast agents are extensively utilized in clinical practice due to their capability to improve image resolution and sensitivity. However, clinically approved MRA contrast agents suffer from a limited acquisition time window and require high-dose administration for effective imaging. Herein, albumin-coated gadolinium-based nanoparticles (BSA-Gd) were meticulously developed for in vivo ultrahigh-resolution MRA. Compared with Gd-DTPA, BSA-Gd exhibits a significantly higher longitudinal relaxivity (r1 = 76.7 mM-1 s-1, nearly 16-fold greater than that of Gd-DTPA) and an extended blood circulation time (t1/2 = 40 min), enabling dramatically enhanced high-resolution imaging of microvessels (sub-200 μm) at a low dose (about 1/16 that of Gd-DTPA). Furthermore, clinically significant fine vessels were successfully mapped in large mammals using dynamic contrast-enhanced MRA with BSA-Gd, including the circle of Willis, kidney and liver vascular branches, and tumor vessels, and arteries were differentiated from veins. BSA-Gd thus offers superior imaging capability and biocompatibility, and its clinical applications hold substantial promise.
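The benefit of the higher longitudinal relaxivity can be read off the standard relaxivity relation 1/T1_obs = 1/T1_0 + r1·[Gd]. A quick sketch (the baseline blood T1 and the agent concentration are illustrative assumptions; only the 76.7 mM-1 s-1 value echoes the abstract, and 4.7 mM-1 s-1 is an approximate literature figure for a Gd-DTPA-like agent):

```python
def t1_observed_ms(t1_base_ms: float, r1_per_mM_per_s: float, conc_mM: float) -> float:
    """Observed T1 after contrast: 1/T1_obs = 1/T1_0 + r1 * [Gd], rates in s^-1."""
    rate = 1000.0 / t1_base_ms + r1_per_mM_per_s * conc_mM
    return 1000.0 / rate

# Assumed blood T1 of ~1600 ms and 0.1 mM agent concentration:
t1_low_r1 = t1_observed_ms(1600.0, 4.7, 0.1)    # Gd-DTPA-like relaxivity (assumed)
t1_high_r1 = t1_observed_ms(1600.0, 76.7, 0.1)  # reported BSA-Gd relaxivity
print(t1_low_r1, t1_high_r1)  # the high-r1 agent shortens T1 far more
```

A shorter T1 at the same dose is what permits the low-dose imaging the abstract describes: the same signal enhancement can be reached with roughly 1/16 the concentration.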
Affiliation(s)
- Chenchen Liu: Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Institute of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology & Center for Biomedical Optics and Molecular Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen 518055, China
- Xiaoming Liu: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Zhihao Wei: Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Institute of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Zong Chang: Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology & Center for Biomedical Optics and Molecular Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen 518055, China
- Yaowei Bai: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Pei Zeng: Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Institute of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Qi Cao: Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Institute of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Changjun Tie: Paul C. Lauterbur Research Center for Biomedical Imaging, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 518055, China
- Ziqiao Lei: Department of Radiology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Hubei Province Key Laboratory of Molecular Imaging, Wuhan 430022, China
- Peng Sun: Clinical & Technical Support, Philips Healthcare, Beijing 100600, China
- Huageng Liang: Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Institute of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
- Qinchao Sun: Guangdong Provincial Key Laboratory of Biomedical Optical Imaging Technology & Center for Biomedical Optics and Molecular Imaging, Shenzhen Institute of Advanced Technology, Chinese Academy of Science, Shenzhen 518055, China
- Xiaoping Zhang: Department of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China; Institute of Urology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430022, China
7.
S SP, A S, T K, S D. Self-attention-based generative adversarial network optimized with color harmony algorithm for brain tumor classification. Electromagn Biol Med 2024:1-15. [PMID: 38369844 DOI: 10.1080/15368378.2024.2312363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Accepted: 01/25/2024] [Indexed: 02/20/2024]
Abstract
This paper proposes a novel approach, BTC-SAGAN-CHA-MRI, for the classification of brain tumors using a self-attention-based generative adversarial network (SAGAN) optimized with a Color Harmony Algorithm. Brain cancer, with its high fatality rate worldwide, necessitates more accurate and efficient classification methods. Existing deep learning approaches for brain tumor classification often lack precision and require substantial computational time. The proposed method begins by gathering input brain MR images from the BRATS dataset, followed by a pre-processing step using a mean curvature flow-based approach to eliminate noise. The pre-processed images then undergo the improved non-subsampled shearlet transform (INSST) to extract radiomic features. These features are fed into the SAGAN, which is optimized with a Color Harmony Algorithm to categorize the brain images into different tumor types, including glioma, meningioma, and pituitary tumors. This approach shows promise in enhancing the precision and efficiency of brain tumor classification, with potential for improved diagnostic outcomes in medical imaging. The accuracy achieved for brain tumor identification by the proposed method is 99.29%. The proposed BTC-SAGAN-CHA-MRI technique achieves 18.29%, 14.09%, and 7.34% higher accuracy and 67.92%, 54.04%, and 59.08% less computation time when compared with existing models: brain tumor diagnosis utilizing a deep learning convolutional neural network with a transfer learning approach (BTC-KNN-SVM-MRI); M3BTCNet, multi-model brain tumor categorization under metaheuristic deep neural network feature optimization (BTC-CNN-DEMFOA-MRI); and an efficient method based on a hierarchical deep learning neural network classifier for brain tumour categorization (BTC-Hie DNN-MRI), respectively.
Affiliation(s)
- Senthil Pandi S: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Ramapuram, Chennai, Tamil Nadu, India
- Senthilselvi A: Department of Computer Science and Engineering, SRM Institute of Science and Technology, Chennai, Tamil Nadu, India
- Kumaragurubaran T: Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai, Tamil Nadu, India
- Dhanasekaran S: Department of Information Technology, Kalasalingam Academy of Research and Education (Deemed to be University), Srivilliputtur, Tamil Nadu, India
8.
Aluri S, Imambi SS. Brain tumour classification using MRI images based on lenet with golden teacher learning optimization. NETWORK (BRISTOL, ENGLAND) 2024; 35:27-54. [PMID: 37947040 DOI: 10.1080/0954898x.2023.2275720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/17/2023] [Accepted: 10/22/2023] [Indexed: 11/12/2023]
Abstract
A brain tumour (BT) is a dangerous neurological disorder produced by abnormal cell growth within the skull or brain, and the death rate among people with BT is growing steadily. Finding tumours at an early stage is crucial for treating patients and improves their survival rate. Hence, BT classification (BTC) is performed in this research using magnetic resonance imaging (MRI) images. The input MRI image is pre-processed using a non-local means (NLM) filter that denoises the image. To attain an effective classification result, the tumour area is segmented from the MRI image by the SegNet model. The BTC is then accomplished by the LeNet model, whose weights are optimized by the Golden Teacher Learning Optimization Algorithm (GTLO), such that the classified outputs produced by the LeNet model are gliomas, meningiomas, and pituitary tumours. The experimental outcome shows that GTLO-LeNet achieved an accuracy of 0.896, a negative predictive value (NPV) of 0.907, a positive predictive value (PPV) of 0.821, a true negative rate (TNR) of 0.880, and a true positive rate (TPR) of 0.888.
Affiliation(s)
- Srilakshmi Aluri: Research Scholar, Computer Science & Engineering, KL Educational Foundation (Deemed to be University), Vaddeswaram, India
- Sagar S Imambi: Professor, Computer Science and Engineering, KL Educational Foundation (Deemed to be University), Vaddeswaram, India
9.
Khosravi P, Mohammadi S, Zahiri F, Khodarahmi M, Zahiri J. AI-Enhanced Detection of Clinically Relevant Structural and Functional Anomalies in MRI: Traversing the Landscape of Conventional to Explainable Approaches. J Magn Reson Imaging 2024. [PMID: 38243677 DOI: 10.1002/jmri.29247] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2023] [Revised: 01/05/2024] [Accepted: 01/08/2024] [Indexed: 01/21/2024] Open
Abstract
Anomaly detection in medical imaging, particularly within the realm of magnetic resonance imaging (MRI), stands as a vital area of research with far-reaching implications across various medical fields. This review meticulously examines the integration of artificial intelligence (AI) in anomaly detection for MR images, spotlighting its transformative impact on medical diagnostics. We delve into the forefront of AI applications in MRI, exploring advanced machine learning (ML) and deep learning (DL) methodologies that are pivotal in enhancing the precision of diagnostic processes. The review provides a detailed analysis of preprocessing, feature extraction, classification, and segmentation techniques, alongside a comprehensive evaluation of commonly used metrics. Further, this paper explores the latest developments in ensemble methods and explainable AI, offering insights into future directions and potential breakthroughs. This review synthesizes current insights, offering a valuable guide for researchers, clinicians, and medical imaging experts. It highlights AI's crucial role in improving the precision and speed of detecting key structural and functional irregularities in MRI. Our exploration of innovative techniques and trends furthers MRI technology development, aiming to refine diagnostics, tailor treatments, and elevate patient care outcomes. LEVEL OF EVIDENCE: 5 TECHNICAL EFFICACY: Stage 1.
Affiliation(s)
- Pegah Khosravi: Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA; The CUNY Graduate Center, City University of New York, New York City, New York, USA
- Saber Mohammadi: Department of Biological Sciences, New York City College of Technology, CUNY, New York City, New York, USA; Department of Biophysics, Tarbiat Modares University, Tehran, Iran
- Fatemeh Zahiri: Department of Cell and Molecular Sciences, Kharazmi University, Tehran, Iran
- Javad Zahiri: Department of Neuroscience, University of California San Diego, San Diego, California, USA
10.
Lakshmi A, Alagarsamy M, Anbarasa Pandian A, Paramathi Mani D. Evolutionary gravitational neocognitron neural network optimized with marine predators optimization algorithm for MRI brain tumor classification. Electromagn Biol Med 2024:1-18. [PMID: 38217513 DOI: 10.1080/15368378.2024.2301952] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2022] [Accepted: 12/13/2023] [Indexed: 01/15/2024]
Abstract
Magnetic resonance imaging (MRI) is a powerful tool for tumor diagnosis in the human brain. Here, MRI images are used to detect brain tumors and classify the regions as meningioma, glioma, pituitary, and normal types. Numerous existing methods for brain tumor detection have been suggested, but none categorizes brain tumors accurately, and they consume considerable computation time. To address these problems, an Evolutionary Gravitational Neocognitron Neural Network optimized with the Marine Predators Algorithm is proposed in this article for MRI brain tumor classification (EGNNN-VGG16-MPA-MRI-BTC). Initially, the brain MRI images are collected from the BraTS MRI dataset and pre-processed using the Savitzky-Golay denoising approach. Features, such as grey-level and Haralick texture features, are extracted utilizing the visual geometry group network (VGG16). These extracted features are given to the EGNNN classifier, which categorizes the brain tumor as glioma, meningioma, pituitary gland, or normal. The batch normalization (BN) layer of the EGNNN is eliminated and replaced with a VGG16 layer, and the Marine Predators Algorithm (MPA) optimizes the weight parameters of the EGNNN. The simulation is implemented in MATLAB. Finally, the EGNNN-VGG16-MPA-MRI-BTC method attains 38.98%, 46.74%, and 23.27% higher accuracy, 24.24%, 37.82%, and 13.92% higher precision, and 26.94%, 47.04%, and 38.94% higher sensitivity compared with the existing AlexNet-SVM-MRI-BTC, RESNET-SGD-MRI-BTC, and MobileNet-V2-MRI-BTC models, respectively.
Affiliation(s)
- A Lakshmi: Department of Electronics and Communication Engineering, Ramco Institute of Technology, Rajapalayam, Tamil Nadu, India
- Manjunathan Alagarsamy: Department of Electronics and Communication Engineering, K. Ramakrishnan College of Technology, Trichy, Tamil Nadu, India
- A Anbarasa Pandian: Department of Computer Science & Business Systems, Panimalar Engineering College, Poonmallae, Chennai, Tamil Nadu, India
- Dinesh Paramathi Mani: Department of Electronics and Communication Engineering, Sona College of Technology, Salem, Tamil Nadu, India
11.
Sharif M, Tanvir U, Munir EU, Khan MA, Yasmin M. Brain tumor segmentation and classification by improved binomial thresholding and multi-features selection. JOURNAL OF AMBIENT INTELLIGENCE AND HUMANIZED COMPUTING 2024; 15:1063-1082. [DOI: 10.1007/s12652-018-1075-x] [Citation(s) in RCA: 21] [Impact Index Per Article: 21.0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/10/2018] [Accepted: 09/27/2018] [Indexed: 08/25/2024]
12.
Zhou D, Xu L, Wang T, Wei S, Gao F, Lai X, Cao J. M-DDC: MRI based demyelinative diseases classification with U-Net segmentation and convolutional network. Neural Netw 2024; 169:108-119. [PMID: 37890361 DOI: 10.1016/j.neunet.2023.10.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2022] [Revised: 09/03/2023] [Accepted: 10/09/2023] [Indexed: 10/29/2023]
Abstract
Childhood demyelinative disease classification (DDC) with brain magnetic resonance imaging (MRI) is crucial to clinical diagnosis, but little attention has been paid to DDC in the past. Accurately differentiating pediatric-onset neuromyelitis optica spectrum disorder (NMOSD) from acute disseminated encephalomyelitis (ADEM) based on MRI is a central challenge in DDC. In this paper, a novel architecture, M-DDC, based on a joint U-Net segmentation network and a deep convolutional network, is developed. The U-Net segmentation provides pixel-level structure information, which helps locate lesion areas and estimate their size. The classification branch of M-DDC detects the regions of interest inside MRIs, including the white matter regions where lesions appear. The performance of the proposed method is evaluated on MRIs of 201 subjects recorded at the Children's Hospital of Zhejiang University School of Medicine. The comparisons show that the proposed method achieves the highest accuracy of 99.19% for ADEM and NMOSD classification and a Dice score of 71.1% for segmentation.
Affiliation(s)
- Deyang Zhou: Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China; HDU-ITMO Joint Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Lu Xu: Department of Neurology, Children's Hospital, Zhejiang University School of Medicine, 310018, China
- Tianlei Wang: Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Shaonong Wei: Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China; HDU-ITMO Joint Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Feng Gao: Department of Neurology, Children's Hospital, Zhejiang University School of Medicine, 310018, China
- Xiaoping Lai: Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Jiuwen Cao: Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
13.
Weninger L, Ecke J, Jütten K, Clusmann H, Wiesmann M, Merhof D, Na CH. Diffusion MRI anomaly detection in glioma patients. Sci Rep 2023; 13:20366. [PMID: 37990121 PMCID: PMC10663596 DOI: 10.1038/s41598-023-47563-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2023] [Accepted: 11/15/2023] [Indexed: 11/23/2023] Open
Abstract
Diffusion MRI (dMRI) measures molecular diffusion, which allows characterization of microstructural properties of the human brain, and gliomas strongly alter these properties. Delineation of brain tumors currently relies mainly on conventional MRI techniques, which are known to underestimate tumor volumes in diffusely infiltrating glioma. We hypothesized that dMRI is well suited for tumor delineation and developed two deep-learning approaches. The first diffusion-anomaly detection architecture is a denoising autoencoder; the second consists of a reconstruction network and a discrimination network. Each model was trained exclusively on non-annotated dMRI of healthy subjects and then applied to glioma patients' data. To validate these models, a state-of-the-art supervised tumor segmentation network was modified to generate ground-truth tumor volumes based on structural MRI. Compared with the ground-truth segmentations, a Dice score of 0.67 ± 0.2 was obtained. Further inspection of mismatches between diffusion-anomalous regions and ground-truth segmentations revealed that these colocalized with lesions delineated only later in structural MRI follow-up data, which were not visible at the initial time of recording. Anomaly-detection methods are thus suitable for tumor delineation in dMRI acquisitions and may further enhance brain-imaging analysis by detecting occult tumor infiltration in glioma patients, which could improve prognostication of disease evolution and tumor treatment strategies.
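The core idea of this study, training only on healthy data and flagging inputs the model reconstructs poorly, generalizes beyond the authors' specific networks. A deliberately simplified sketch of that principle, using PCA reconstruction as a stand-in for the denoising autoencoder (all data below is synthetic, not diffusion MRI):

```python
import numpy as np

rng = np.random.default_rng(0)

# "Healthy" training data: samples near a low-dimensional subspace (a stand-in
# for normal-appearing diffusion profiles); the model never sees anomalies.
basis = rng.normal(size=(2, 10))
healthy = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 10))

# Fit a linear "autoencoder" (PCA): encode to top-2 components, decode back.
mean = healthy.mean(axis=0)
_, _, vt = np.linalg.svd(healthy - mean, full_matrices=False)
components = vt[:2]

def reconstruction_error(x):
    code = (x - mean) @ components.T           # encode
    recon = code @ components + mean           # decode
    return np.linalg.norm(x - recon, axis=-1)  # anomaly score

# Threshold from healthy data only; an off-subspace sample ("lesion") scores high.
threshold = np.percentile(reconstruction_error(healthy), 99)
anomaly = 3.0 * rng.normal(size=10)
print(reconstruction_error(anomaly) > threshold)
```

A real autoencoder replaces the linear encode/decode with learned nonlinear maps, but the detection rule is the same: score each voxel or patch by reconstruction error and threshold against the healthy distribution.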
Affiliation(s)
- Leon Weninger: Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University, Aachen, Germany; Department of Electrical Engineering, RWTH Aachen University, Aachen, Germany
- Jarek Ecke: Department of Electrical Engineering, RWTH Aachen University, Aachen, Germany
- Kerstin Jütten: Department of Neurosurgery, RWTH Aachen University, Aachen, Germany
- Hans Clusmann: Department of Neurosurgery, RWTH Aachen University, Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf (CIO ABCD), Aachen, Germany
- Martin Wiesmann: Department of Neuroradiology, RWTH Aachen University, Aachen, Germany
- Dorit Merhof: Faculty of Informatics and Computer Science, University of Regensburg, Regensburg, Germany; Fraunhofer Institute for Digital Medicine MEVIS, Bremen, Germany
- Chuh-Hyoun Na: Department of Neurosurgery, RWTH Aachen University, Aachen, Germany; Center for Integrated Oncology Aachen Bonn Cologne Duesseldorf (CIO ABCD), Aachen, Germany
14.
Krishnamoorthy S, Paulraj S, Selvaraj NP, Ragupathy B, Arumugam S. A novel approach for neural networks based diagnosis and grading of stroke in tumor-affected brain MRIs. NETWORK (BRISTOL, ENGLAND) 2023; 34:190-220. [PMID: 37352128 DOI: 10.1080/0954898x.2023.2225601] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 04/28/2023] [Accepted: 06/11/2023] [Indexed: 06/25/2023]
Abstract
Recognition and diagnosis of stroke from magnetic resonance images (MRIs) are significant for medical procedures under therapeutic standards. The primary goal of this scheme is the discovery of stroke in the tumour locale of affected brain tissue. The probability of stroke in brain-tumour-affected images is categorized as mild, moderate, or serious; the mild and moderate phases of stroke are recognized as "early" findings, and serious cases are distinguished as "advanced" determinations. The proposed glioblastoma brain tumour recognition strategy is evaluated on the open-access Multimodal Brain Tumor Image Segmentation benchmark dataset. The brain images are classified as normal or abnormal utilizing a deep neural network classification algorithm, and the tumour region is segmented from the identified abnormal images using the normalized graph cut algorithm. The stroke likelihood is then identified using deep neural networks by analysing the proximity of the tumour section in brain matter. The proposed stroke analysis framework accurately groups 10 images as "early" stroke-probability images, accomplishing a 90% classification rate, and likewise effectively characterizes "advanced" stroke-probability images with a 90% classification rate.
Affiliation(s)
- Sivakumar Paulraj: School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, Tamil Nadu, India
- Nagendra Prabhu Selvaraj: Department of Computational Intelligence, SRM Institute of Science and Technology, Chennai, Tamil Nadu, India
- Balakumaresan Ragupathy: Department of Electronics and Communication Engineering, PSNA College of Engineering and Technology, Dindigul, Tamil Nadu, India
- Selvapandian Arumugam: Department of Electronics and Communication Engineering, PSNA College of Engineering and Technology, Dindigul, Tamil Nadu, India
15.
Cao Y, Zhou W, Zang M, An D, Feng Y, Yu B. MBANet: A 3D convolutional neural network with multi-branch attention for brain tumor segmentation from MRI images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
16.
Reddy KR, Dhuli R. A Novel Lightweight CNN Architecture for the Diagnosis of Brain Tumors Using MR Images. Diagnostics (Basel) 2023; 13:diagnostics13020312. [PMID: 36673122 PMCID: PMC9858139 DOI: 10.3390/diagnostics13020312] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/08/2022] [Revised: 12/22/2022] [Accepted: 01/11/2023] [Indexed: 01/18/2023] Open
Abstract
Over the last few years, brain tumor-related clinical cases have increased substantially, particularly in adults, due to environmental and genetic factors. If tumors are unidentified in the early stages, there is a risk of severe medical complications, including death, so early diagnosis plays a vital role in treatment planning and improving a patient's condition. Brain tumors take different forms, with different properties and treatments, and manual identification and classification are complex, time-consuming, and error-prone. Based on these observations, we developed an automated methodology for detecting and classifying brain tumors using the magnetic resonance (MR) imaging modality. The proposed work includes three phases: pre-processing, classification, and segmentation. In pre-processing, we start with skull stripping through morphological and thresholding operations to eliminate non-brain matter such as skin, muscle, fat, and eyeballs, and then employ image data augmentation to improve model accuracy by minimizing overfitting. In the classification phase, we developed a novel lightweight convolutional neural network (lightweight CNN) model to extract features from skull-free augmented brain MR images and classify them as normal or abnormal. Finally, we obtain infected tumor regions from the brain MR images in the segmentation phase using a fast-linking modified spiking cortical model (FL-MSCM). Based on this sequence of operations, our framework achieved 99.58% classification accuracy and a 95.7% dice similarity coefficient (DSC). The experimental results illustrate the efficiency of the proposed framework and its appreciable performance compared with existing techniques.
|
17
|
Bao XX, Zhao C, Bao SS, Rao JS, Yang ZY, Li XG. Recognition of necrotic regions in MRI images of chronic spinal cord injury based on superpixel. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2023; 228:107252. [PMID: 36434959 DOI: 10.1016/j.cmpb.2022.107252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/07/2020] [Revised: 08/15/2022] [Accepted: 11/17/2022] [Indexed: 06/16/2023]
Abstract
BACKGROUND AND OBJECTIVE The cystic cavity and the dense glial scar surrounding it in chronic spinal cord injury (SCI) hinder the regeneration of nerve axons. Accurate location of the necrotic regions formed by the scar and the cavity helps eliminate these obstacles to regrowth and promotes SCI treatment. This work aims to achieve accurate, automatic location of necrotic regions in magnetic resonance imaging (MRI) of chronic SCI. METHODS In this study, a superpixel-based method is proposed to identify necrotic regions of the spinal cord in chronic SCI MRI. Superpixels were obtained by a simple linear iterative clustering (SLIC) algorithm, and feature sets were constructed from intensity statistics, gray-level co-occurrence matrix features, Gabor texture features, local binary pattern features, and superpixel areas. Subsequently, support vector machine (SVM) and random forest (RF) classification models were compared on recognition of necrotic regions in terms of accuracy (ACC), positive predictive value (PPV), sensitivity (SE), specificity (SP), Dice coefficient, and algorithm running time. RESULTS The method was evaluated on T1- and T2-weighted MRI spinal cord images of 24 adult female Wistar rats, and an automatic recognition method for spinal cord necrotic regions was finally established based on the SVM classification model. The recognition results were 1.00±0.00 (ACC), 0.89±0.09 (PPV), 0.88±0.12 (SE), 1.00±0.00 (SP), and 0.88±0.07 (Dice). CONCLUSIONS The proposed method can accurately and noninvasively identify necrotic regions in MRI, which is helpful for pre-intervention assessment and post-intervention evaluation in chronic SCI research and treatment and promotes the clinical translation of chronic SCI research.
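As an illustration of one of the texture descriptors listed above, a gray-level co-occurrence matrix (GLCM) and a few derived features can be computed in plain NumPy. This is a toy sketch, not the authors' pipeline; the 3-level image and the single-offset GLCM are invented for illustration:

```python
import numpy as np

def glcm(img, levels, dy=0, dx=1):
    """Normalised gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    return m / m.sum()

def glcm_features(m):
    """Contrast, energy and homogeneity of a normalised GLCM."""
    i, j = np.indices(m.shape)
    return {
        "contrast": float(((i - j) ** 2 * m).sum()),
        "energy": float((m ** 2).sum()),
        "homogeneity": float((m / (1.0 + np.abs(i - j))).sum()),
    }

img = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 2]])
feats = glcm_features(glcm(img, levels=3))
```

Each feature summarises the joint distribution of neighbouring gray levels; such scalars, computed per superpixel, form the kind of feature vector an SVM or random forest then classifies.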
Affiliation(s)
- Xing-Xing Bao
- Beijing Key Laboratory for Biomaterials and Neural Regeneration, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Can Zhao
- Institute of Rehabilitation Engineering, China Rehabilitation Science Institute, Beijing 100068, China
- Shu-Sheng Bao
- Beijing Key Laboratory for Biomaterials and Neural Regeneration, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Jia-Sheng Rao
- Beijing Key Laboratory for Biomaterials and Neural Regeneration, Beijing Advanced Innovation Center for Biomedical Engineering, School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Zhao-Yang Yang
- Department of Neurobiology, School of Basic Medical Sciences, Capital Medical University, Beijing 100069, China
- Xiao-Guang Li
- Department of Neurobiology, School of Basic Medical Sciences, Capital Medical University, Beijing 100069, China
|
18
|
Reddy PG, Ramashri T, Krishna KL. Brain Tumour Region Extraction Using Novel Self-Organising Map-Based KFCM Algorithm. PERTANIKA JOURNAL OF SCIENCE AND TECHNOLOGY 2022. [DOI: 10.47836/pjst.31.1.33] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
Abstract
Medical professionals need assistance finding tumours in brain images because tumour location, contrast, intensity, size, and shape vary between images owing to different acquisition methods, modalities, and patient ages. A medical examiner has difficulty manually separating a tumour from other parts of a Magnetic Resonance Imaging (MRI) image. Many semi- and fully automated brain tumour detection systems have been reported in the literature and keep improving, and the segmentation literature has seen several transformations over the years. An in-depth examination of these methods is the focus of this investigation. We survey the most recent soft computing technologies used in MRI brain analysis through several review papers, and study Self-Organising Maps (SOM) with K-means and the kernel Fuzzy c-means (KFCM) method for segmentation. The suggested SOM networks were first compared with K-means analysis in an experiment based on datasets with well-known cluster solutions. The SOM was then combined with KFCM, reducing time complexity and producing more accurate results than other methods. Experiments show that performance on skewed data improves as more SOM units are used. Finally, performance measures on real-time datasets are analysed using machine learning approaches. The results show that the proposed algorithm has good sensitivity and better accuracy than K-means and other state-of-the-art methods.
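To make the SOM component concrete, here is a minimal one-dimensional self-organising map in NumPy trained on two synthetic clusters. The unit count, decay schedules, and data are invented for illustration and are unrelated to the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, n_units=4, epochs=50, lr0=0.5, sigma0=1.0):
    """1-D SOM: pull the best-matching unit and its grid neighbours
    toward each sample, with linearly decaying rate and radius."""
    w = rng.normal(size=(n_units, data.shape[1]))
    grid = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 1e-3
        for x in data:
            winner = int(np.argmin(((w - x) ** 2).sum(axis=1)))
            h = np.exp(-((grid - winner) ** 2) / (2 * sigma ** 2))
            w += lr * h[:, None] * (x - w)
    return w

# Two tight, well-separated synthetic clusters.
data = np.vstack([rng.normal(0.0, 0.1, (20, 2)),
                  rng.normal(5.0, 0.1, (20, 2))])
som = train_som(data)
```

After training, at least one unit should settle near each cluster centre; in a segmentation setting, the trained units act as prototype intensities/features that a subsequent clustering stage such as KFCM can refine.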
|
19
|
Gtifa W, Hamdaoui F, Sakly A. Automated brain tumour segmentation from multi-modality magnetic resonance imaging data based on new particle swarm optimisation segmentation method. Int J Med Robot 2022; 19:e2487. [PMID: 36478373 DOI: 10.1002/rcs.2487] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2022] [Revised: 11/27/2022] [Accepted: 11/29/2022] [Indexed: 12/13/2022]
Abstract
BACKGROUND Segmentation of brain tumours is a complex problem in medical image processing and analysis; performed manually, it is a time-consuming and error-prone task. Computer-aided detection systems therefore need to be developed to decrease physicians' workload and improve segmentation accuracy. METHODS This paper proposes a level set method constrained by an intuitive artificial intelligence-based approach to perform brain tumour segmentation. By studying 3D brain tumour images, a new segmentation technique based on the Modified Particle Swarm Optimisation (MPSO), Darwinian Particle Swarm Optimisation (DPSO), and Fractional-Order Darwinian Particle Swarm Optimisation (FODPSO) algorithms was developed. RESULTS The introduced technique was verified on the MICCAI BraTS 2013 database for high-grade glioma patients. The three algorithms were evaluated using different performance measures (accuracy, sensitivity, specificity, and Dice similarity coefficient) to prove the performance and robustness of our 3D segmentation technique. CONCLUSION The MPSO algorithm consistently outperforms DPSO and FODPSO.
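The particle swarm idea behind these methods can be illustrated by a minimal PSO searching for an intensity threshold that maximises Otsu's between-class variance. This is a plain-Python toy sketch, not the paper's MPSO/DPSO/FODPSO code; swarm size, coefficients, and the synthetic pixel data are invented:

```python
import random

random.seed(0)

def between_class_variance(pixels, t):
    """Otsu's criterion for a candidate threshold t."""
    lo = [p for p in pixels if p <= t]
    hi = [p for p in pixels if p > t]
    if not lo or not hi:
        return 0.0
    w0, w1 = len(lo) / len(pixels), len(hi) / len(pixels)
    m0, m1 = sum(lo) / len(lo), sum(hi) / len(hi)
    return w0 * w1 * (m0 - m1) ** 2

def pso_threshold(pixels, n=10, iters=40, w=0.7, c1=1.5, c2=1.5):
    """Basic PSO over the 1-D threshold search space [0, 255]."""
    pos = [random.uniform(0, 255) for _ in range(n)]
    vel = [0.0] * n
    pbest = pos[:]
    pbest_f = [between_class_variance(pixels, p) for p in pos]
    g = pbest[pbest_f.index(max(pbest_f))]
    for _ in range(iters):
        for i in range(n):
            vel[i] = (w * vel[i]
                      + c1 * random.random() * (pbest[i] - pos[i])
                      + c2 * random.random() * (g - pos[i]))
            pos[i] = min(255.0, max(0.0, pos[i] + vel[i]))
            f = between_class_variance(pixels, pos[i])
            if f > pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i], f
        g = pbest[pbest_f.index(max(pbest_f))]
    return g

# Bimodal toy "image": dark tissue around 40, bright lesion around 200.
pixels = [40 + (i % 7) for i in range(60)] + [200 + (i % 7) for i in range(20)]
t = pso_threshold(pixels)
```

The swarm should settle on a threshold between the two intensity modes; segmentation-oriented PSO variants differ mainly in how they manage swarms and velocities, not in this basic update rule.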
Affiliation(s)
- Wafa Gtifa
- Laboratory of Automation and Electrical Systems and Environment, Monastir National School of Engineers (ENIM), University of Monastir, Monastir, Tunisia
- Fayçal Hamdaoui
- Laboratory of EμE, Monastir Faculty of Sciences (FSM), University of Monastir, Monastir, Tunisia
- Anis Sakly
- Laboratory of Automation and Electrical Systems and Environment, Monastir National School of Engineers (ENIM), University of Monastir, Monastir, Tunisia
|
20
|
Latif G. DeepTumor: Framework for Brain MR Image Classification, Segmentation and Tumor Detection. Diagnostics (Basel) 2022; 12:diagnostics12112888. [PMID: 36428948 PMCID: PMC9689339 DOI: 10.3390/diagnostics12112888] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2022] [Revised: 11/15/2022] [Accepted: 11/15/2022] [Indexed: 11/23/2022] Open
Abstract
Owing to the sensitivity of the human brain, proper segmentation of a tumor from an image is important for both patients and medical personnel: surgical intervention requires doctors to be extremely cautious and precise in targeting the required portion of the brain. The segmentation process is also important for multi-class tumor classification. This work contributes to three main areas of brain MR image processing for classification and segmentation: brain MR image classification, tumor region segmentation, and tumor classification. A framework named DeepTumor is presented for multistage, multiclass glioma tumor classification into four classes: Edema, Necrosis, Enhancing, and Non-enhancing. For binary brain MR image classification (tumorous vs. non-tumorous), two deep convolutional neural network (CNN) models are proposed: a 9-layer model with a total of 217,954 trainable parameters and an improved 10-layer model with a total of 80,243 trainable parameters. In the second stage, an enhanced Fuzzy C-means (FCM)-based technique is proposed for tumor segmentation in brain MR images. In the final stage, a third, enhanced CNN model with 11 hidden layers and a total of 241,624 trainable parameters is proposed for classifying the segmented tumor region into the four glioma tumor classes. The experiments are performed using the BraTS MRI dataset, and the results of the proposed CNN models for binary and multiclass tumor classification are compared with existing CNN models such as LeNet, AlexNet, and GoogleNet, as well as with the latest literature.
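A plain (un-enhanced) Fuzzy C-means, the starting point for the segmentation stage described above, alternates membership and centroid updates. A NumPy sketch on invented 2-D points, not the paper's enhanced variant:

```python
import numpy as np

def fcm(data, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy C-means: alternate fuzzy-membership and centroid updates."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(data), c))
    u /= u.sum(axis=1, keepdims=True)       # rows sum to 1
    for _ in range(iters):
        um = u ** m
        # weighted centroids
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        # distances of every point to every centroid
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # standard membership update: u_ik proportional to d_ik^(-2/(m-1))
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centers

data = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
                 [4.0, 4.0], [4.1, 4.0], [4.0, 4.1]])
u, centers = fcm(data)
labels = u.argmax(axis=1)   # hard assignment from fuzzy memberships
```

On image data, each pixel intensity (or feature vector) plays the role of a data point, and the fuzzy memberships give soft tumor/background assignments before a final hard labeling.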
Affiliation(s)
- Ghazanfar Latif
- Computer Science Department, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
- Department of Computer Sciences and Mathematics, Université du Québec à Chicoutimi, 555 boulevard de l’Université, Chicoutimi, QC G7H 2B1, Canada
|
21
|
Mgbejime GT, Hossin MA, Nneji GU, Monday HN, Ekong F. Parallelistic Convolution Neural Network Approach for Brain Tumor Diagnosis. Diagnostics (Basel) 2022; 12:diagnostics12102484. [PMID: 36292173 PMCID: PMC9600759 DOI: 10.3390/diagnostics12102484] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2022] [Revised: 09/27/2022] [Accepted: 10/03/2022] [Indexed: 11/17/2022] Open
Abstract
Today, Magnetic Resonance Imaging (MRI) is a prominent technique in medicine that produces a significant and varied range of tissue contrasts in each imaging modality and is frequently employed by medical professionals to identify brain malignancies. Because brain tumors are deadly, early detection increases the likelihood that the patient will receive appropriate medical care, leading either to full elimination of the tumor or to prolongation of the patient's life. However, manually examining the enormous volume of MRI images to identify a brain tumor is extremely time-consuming and requires a trained medical expert to detect and diagnose brain cancer across multiple MR images with various modalities. There is therefore a growing need to automate the detection and diagnosis process without human intervention. Another major concern most research articles do not consider is the low quality of MRI images, which can be attributed to noise and artifacts. This article applies a Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm to handle low-quality MRI images precisely, suppressing noisy elements and enhancing the visible trainable features of the image. The enhanced image is then fed to the proposed Parallelistic Convolutional Neural Network (PCNN) to learn the features and classify the tumor using a sigmoid classifier. To properly train the model, a publicly available dataset is collected and utilized, and different optimizers and different values of dropout and learning rate are explored in the course of this study. The proposed PCNN with CLAHE achieved an accuracy of 98.7%, sensitivity of 99.7%, and specificity of 97.4%. In comparison with other state-of-the-art brain tumor methods and pre-trained deep transfer learning models, the proposed PCNN model obtained satisfactory performance.
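The contrast-limited idea behind CLAHE can be shown on a single tile: clip the histogram, redistribute the excess uniformly, then map intensities through the cumulative distribution (full CLAHE additionally tiles the image and bilinearly interpolates the per-tile mappings). An illustrative NumPy sketch with an invented low-contrast image, not the article's implementation:

```python
import numpy as np

def clipped_hist_equalize(img, clip_frac=0.02, levels=256):
    """Contrast-limited histogram equalisation on a single tile."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    clip = max(1.0, clip_frac * img.size)
    excess = np.maximum(hist - clip, 0).sum()
    # clip the peaks and spread the clipped mass over all bins
    hist = np.minimum(hist, clip) + excess / levels
    cdf = hist.cumsum()
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    lut = np.round(cdf * (levels - 1)).astype(img.dtype)
    return lut[img]

# Low-contrast toy image: only two nearby gray levels.
img = np.full((10, 10), 100, dtype=np.uint8)
img[::2] = 110
out = clipped_hist_equalize(img)
```

Clipping keeps the mapping from over-amplifying a few dominant bins, so contrast is stretched but boundedly, which is the property that makes CLAHE noise-robust relative to plain histogram equalisation.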
Affiliation(s)
- Goodness Temofe Mgbejime
- School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
- Md Altab Hossin
- School of Innovation and Entrepreneurship, Chengdu University, Chengdu 610106, China
- Grace Ugochi Nneji
- Department of Computing, Oxford Brookes College of Chengdu University of Technology, Chengdu 610059, China
- Deep Learning and Intelligent Computing Lab, HACE SOFTTECH, Lagos 102241, Nigeria
- Correspondence: (G.U.N.); (H.N.M.)
- Happy Nkanta Monday
- Department of Computing, Oxford Brookes College of Chengdu University of Technology, Chengdu 610059, China
- Deep Learning and Intelligent Computing Lab, HACE SOFTTECH, Lagos 102241, Nigeria
- Correspondence: (G.U.N.); (H.N.M.)
- Favour Ekong
- School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
|
22
|
Safri AA, Nassir CMNCM, Iman IN, Mohd Taib NH, Achuthan A, Mustapha M. Diffusion tensor imaging pipeline measures of cerebral white matter integrity: An overview of recent advances and prospects. World J Clin Cases 2022; 10:8450-8462. [PMID: 36157806 PMCID: PMC9453345 DOI: 10.12998/wjcc.v10.i24.8450] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Revised: 06/20/2022] [Accepted: 07/17/2022] [Indexed: 02/05/2023] Open
Abstract
Cerebral small vessel disease (CSVD) is a leading cause of age-related microvascular cognitive decline, resulting in significant morbidity and decreased quality of life. Despite progress on its key pathophysiological bases and general acceptance of key terms for neuroimaging findings observed on magnetic resonance imaging (MRI), key questions on CSVD remain elusive. Reliable lesion studies, such as white matter tractography using diffusion-based MRI (dMRI), are necessary to improve the assessment of white matter architecture and connectivity in CSVD. Diffusion tensor imaging (DTI) and tractography is an application of dMRI that provides data to non-invasively appraise brain white matter connections via fiber tracking, enabling visualization of patient-specific white matter fiber tracts that reflect the extent of CSVD-associated white matter damage. However, owing to a lack of standardization across the software and image-processing pipelines used in this technique, which are driven mostly by research settings, interpretation of the findings remains contentious, especially for informing improved diagnosis and/or prognosis of CSVD in routine clinical use. In this minireview, we highlight advances in DTI pipeline processing and the prospect of DTI metrics as potential imaging biomarkers for CSVD, including subclinical CSVD in at-risk individuals.
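Among the DTI metrics such pipelines report, fractional anisotropy (FA) is computed directly from the eigenvalues of the fitted diffusion tensor. A small NumPy sketch of the standard formula, with invented eigenvalues for illustration:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor:
    FA = sqrt(3/2) * ||lambda - MD|| / ||lambda||."""
    l1, l2, l3 = evals
    md = (l1 + l2 + l3) / 3.0                       # mean diffusivity
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    return float(np.sqrt(1.5 * num / den))
```

FA ranges from 0 for isotropic diffusion (all eigenvalues equal, as in CSF) to 1 for diffusion along a single axis, which is why white matter damage in CSVD shows up as reduced FA along affected tracts.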
Affiliation(s)
- Amanina Ahmad Safri
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Che Mohd Nasril Che Mohd Nassir
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Ismail Nurul Iman
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Nur Hartini Mohd Taib
- Department of Radiology, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Anusha Achuthan
- School of Computer Sciences, Universiti Sains Malaysia, 11800 USM, Penang, Malaysia
- Muzaimi Mustapha
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Health Campus, Kubang Kerian 16150, Kelantan, Malaysia
- Department of Neurosciences, Hospital Universiti Sains Malaysia, Kubang Kerian 16150, Kelantan, Malaysia
|
23
|
Burrows L, Chen K, Guo W, Hossack M, McWilliams RG, Torella F. Evaluation of a hybrid pipeline for automated segmentation of solid lesions based on mathematical algorithms and deep learning. Sci Rep 2022; 12:14216. [PMID: 35987824 PMCID: PMC9392778 DOI: 10.1038/s41598-022-18173-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Accepted: 08/05/2022] [Indexed: 01/10/2023] Open
Abstract
We evaluate the accuracy of an original hybrid segmentation pipeline, combining variational and deep learning methods, in the segmentation of CT scans of stented aortic aneurysms, abdominal organs, and brain lesions. The hybrid pipeline is trained on 50 aortic CT scans and tested on 10. Additionally, we trained and tested the hybrid pipeline on publicly available datasets of CT scans of abdominal organs and MR scans of brain tumours. We tested the accuracy of the hybrid pipeline against a gold standard (manual segmentation) and compared its performance to that of a standard automated segmentation method using commonly adopted metrics, including the Dice, Jaccard, and volumetric similarity (VS) coefficients and the Hausdorff distance (HD). The hybrid pipeline produced very accurate segmentations of the aorta, with mean Dice, Jaccard, and VS coefficients of 0.909, 0.837, and 0.972 in thrombus segmentation and 0.937, 0.884, and 0.970 in stent and lumen segmentation, consistently outperforming the standard automated method. Similar results were observed when the hybrid pipeline was trained and tested on publicly available datasets, with mean Dice scores of 0.832 for brain tumour segmentation and 0.894/0.841/0.853/0.847/0.941 for left kidney/right kidney/spleen/aorta/liver segmentation.
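The overlap metrics used in this evaluation are straightforward to compute from binary masks; a NumPy sketch with invented toy masks (the Hausdorff distance is omitted for brevity):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Dice, Jaccard and volumetric similarity of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    a, b = int(pred.sum()), int(gt.sum())
    dice = 2.0 * inter / (a + b)
    jaccard = inter / union
    vs = 1.0 - abs(a - b) / (a + b)   # volumetric similarity
    return float(dice), float(jaccard), float(vs)

pred = np.zeros((4, 4), dtype=bool); pred[0:2, 0:2] = True  # 4 voxels
gt = np.zeros((4, 4), dtype=bool);   gt[0:2, 0:3] = True    # 6 voxels
dice, jaccard, vs = overlap_metrics(pred, gt)
```

Dice and Jaccard penalise misplacement of the segmented region, while VS only compares volumes, which is why pipelines are usually reported on several of these metrics together.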
Affiliation(s)
- Liam Burrows
- Centre for Mathematical Imaging Techniques and Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 7ZL, UK
- Ke Chen
- Centre for Mathematical Imaging Techniques and Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 7ZL, UK
- Weihong Guo
- Department of Mathematics, Applied Mathematics and Statistics, Case Western Reserve University, Cleveland, OH, 44106, USA
- Martin Hossack
- Liverpool Vascular and Endovascular Service, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
- Francesco Torella
- Liverpool Vascular and Endovascular Service, Liverpool University Hospitals NHS Foundation Trust, Liverpool, UK
|
24
|
Multimodal Magnetic Resonance Imaging to Diagnose Knee Osteoarthritis under Artificial Intelligence. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:6488889. [PMID: 35785062 PMCID: PMC9246643 DOI: 10.1155/2022/6488889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/02/2022] [Revised: 04/25/2022] [Accepted: 05/10/2022] [Indexed: 11/17/2022]
Abstract
This work aimed to investigate the application value of a multimodal magnetic resonance imaging (MRI) algorithm based on improved low-rank decomposition denoising (I-LRDD) in the diagnosis of knee osteoarthritis (KOA), so as to offer a better examination method for the clinic. Seventy-eight patients with KOA were selected as the research objects, and all underwent T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), fat-suppression T2WI (SE-T2WI), and fat-saturation T2WI (FS-T2WI). All obtained images were processed using the I-LRDD algorithm. According to the degree of articular cartilage lesions under arthroscopy, the patients were divided into groups I, II, III, and IV. The sensitivity, specificity, accuracy, and consistency of KOA diagnosis with T1WI, T2WI, SE-T2WI, and FS-T2WI were analyzed against the results of arthroscopy. The results showed that the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) of the I-LRDD algorithm were higher than those of image block prior denoising (IBPD) and plain LRDD, with lower time consumption (p < 0.05). The sensitivity, specificity, accuracy, and consistency (Kappa value) of multimodal MRI in the diagnosis of KOA were 88.61%, 85.3%, 87.37%, and 0.73, respectively, higher than those of T1WI, T2WI, SE-T2WI, and FS-T2WI alone. The sensitivity, specificity, accuracy, and consistency of multimodal MRI in diagnosing group IV lesions were 95%, 96.10%, 95.88%, and 0.70, respectively, much higher than those in groups I, II, and III (p < 0.05). In conclusion, the I-LRDD algorithm shows good image-processing efficacy, and multimodal MRI shows a good diagnostic effect on KOA, which is worthy of clinical promotion.
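The PSNR and SSIM scores used above are standard image-quality measures. A minimal NumPy sketch of PSNR and a global (single-window) SSIM on invented arrays; the paper's implementation is not shown here, and windowed SSIM would average local statistics instead:

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))

def ssim_global(x, y, peak=255.0):
    """SSIM computed over the whole image as a single window."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    x, y = x.astype(float), y.astype(float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2)
    return float(num / den)

ref = np.arange(100, dtype=float).reshape(10, 10)
noisy = ref + 10.0   # constant offset, so mse = 100
```

Higher PSNR/SSIM after denoising indicates that the restored image is closer to the reference, which is how the competing denoisers are ranked.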
|
25
|
Abdollahi H, Chin E, Clark H, Hyde DE, Thomas S, Wu J, Uribe CF, Rahmim A. Radiomics-guided radiation therapy: opportunities and challenges. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac6fab] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2021] [Accepted: 05/13/2022] [Indexed: 11/11/2022]
Abstract
Radiomics is an advanced image-processing framework, which extracts image features and considers them as biomarkers towards personalized medicine. Applications include disease detection, diagnosis, prognosis, and therapy response assessment/prediction. As radiation therapy aims for further individualized treatments, radiomics could play a critical role in various steps before, during and after treatment. Elucidation of the concept of radiomics-guided radiation therapy (RGRT) is the aim of this review, attempting to highlight opportunities and challenges underlying the use of radiomics to guide clinicians and physicists towards more effective radiation treatments. This work identifies the value of RGRT in various steps of radiotherapy from patient selection to follow-up, and subsequently provides recommendations to improve future radiotherapy using quantitative imaging features.
|
26
|
Kouli O, Hassane A, Badran D, Kouli T, Hossain-Ibrahim K, Steele JD. Automated brain tumour identification using magnetic resonance imaging: a systematic review and meta-analysis. Neurooncol Adv 2022; 4:vdac081. [PMID: 35769411 PMCID: PMC9234754 DOI: 10.1093/noajnl/vdac081] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022] Open
Abstract
Background Automated brain tumor identification facilitates diagnosis and treatment planning. We evaluate the performance of traditional machine learning (TML) and deep learning (DL) in brain tumor detection and segmentation, using MRI. Methods A systematic literature search from January 2000 to May 8, 2021 was conducted. Study quality was assessed using the Checklist for Artificial Intelligence in Medical Imaging (CLAIM). Detection meta-analysis was performed using a unified hierarchical model. Segmentation studies were evaluated using a random effects model. Sensitivity analysis was performed for externally validated studies. Results Of 224 studies included in the systematic review, 46 segmentation and 38 detection studies were eligible for meta-analysis. In detection, DL achieved a lower false positive rate compared to TML; 0.018 (95% CI, 0.011 to 0.028) and 0.048 (0.032 to 0.072) (P < .001), respectively. In segmentation, DL had a higher dice similarity coefficient (DSC), particularly for tumor core (TC); 0.80 (0.77 to 0.83) and 0.63 (0.56 to 0.71) (P < .001), persisting on sensitivity analysis. Both manual and automated whole tumor (WT) segmentation had “good” (DSC ≥ 0.70) performance. Manual TC segmentation was superior to automated; 0.78 (0.69 to 0.86) and 0.64 (0.53 to 0.74) (P = .014), respectively. Only 30% of studies reported external validation. Conclusions The comparable performance of automated to manual WT segmentation supports its integration into clinical practice. However, manual outperformance for sub-compartmental segmentation highlights the need for further development of automated methods in this area. Compared to TML, DL provided superior performance for detection and sub-compartmental segmentation. Improvements in the quality and design of studies, including external validation, are required for the interpretability and generalizability of automated models.
Affiliation(s)
- Omar Kouli
- School of Medicine, University of Dundee, Dundee, UK
- NHS Greater Glasgow and Clyde, Dundee, UK
- Tasnim Kouli
- School of Medicine, University of Dundee, Dundee, UK
- J Douglas Steele
- Division of Imaging Science and Technology, School of Medicine, University of Dundee, UK
|
27
|
Zhang J, Jiang Z, Liu D, Sun Q, Hou Y, Liu B. 3D asymmetric expectation-maximization attention network for brain tumor segmentation. NMR IN BIOMEDICINE 2022; 35:e4657. [PMID: 34859922 DOI: 10.1002/nbm.4657] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2021] [Revised: 10/23/2021] [Accepted: 11/02/2021] [Indexed: 06/13/2023]
Abstract
Automatic brain tumor segmentation on MRI is a prerequisite to provide quantitative and intuitive assistance for clinical diagnosis and treatment. Meanwhile, 3D deep-neural-network-based brain tumor segmentation models have demonstrated considerable accuracy improvements over corresponding 2D methodologies; however, they generally suffer from high computation cost. Motivated by the recently proposed 3D dilated multi-fiber network (DMF-Net) architecture, which focuses on reducing computation cost, we present in this work a novel encoder-decoder neural network, i.e., a 3D asymmetric expectation-maximization attention network (AEMA-Net), to automatically segment brain tumors. We modify DMF-Net by introducing an asymmetric convolution block into the multi-fiber and dilated multi-fiber units to capture more powerful deep features for brain tumor segmentation. In addition, AEMA-Net incorporates an expectation-maximization attention (EMA) module into DMF-Net by embedding the EMA block in the third stage of the skip connection, which captures long-range context dependence. We extensively evaluate AEMA-Net on three MRI brain tumor segmentation benchmarks: the BraTS 2018, 2019, and 2020 datasets. Experimental results demonstrate that AEMA-Net outperforms both 3D U-Net and DMF-Net and achieves competitive performance compared with state-of-the-art brain tumor segmentation methods.
Affiliation(s)
- Jianxin Zhang
- School of Computer Science and Engineering, Dalian Minzu University, Dalian, China
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, China
- Zongkang Jiang
- Key Lab of Advanced Design and Intelligent Computing (Ministry of Education), Dalian University, Dalian, China
- Dongwei Liu
- School of Computer Science and Engineering, Dalian Minzu University, Dalian, China
- Qiule Sun
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Yaqing Hou
- Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, China
- Bin Liu
- International School of Information Science and Engineering (DUT-RUISE), Dalian University of Technology, Dalian, China
|
28
|
Latif G, Ben Brahim G, Iskandar DNFA, Bashar A, Alghazo J. Glioma Tumors' Classification Using Deep-Neural-Network-Based Features with SVM Classifier. Diagnostics (Basel) 2022; 12:diagnostics12041018. [PMID: 35454066 PMCID: PMC9032951 DOI: 10.3390/diagnostics12041018] [Citation(s) in RCA: 15] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Accepted: 04/08/2022] [Indexed: 11/16/2022] Open
Abstract
The complexity of brain tissue requires skillful technicians and expert medical doctors to manually analyze and diagnose glioma brain tumors using multiple Magnetic Resonance (MR) images with multiple modalities. Unfortunately, manual diagnosis suffers from a lengthy process as well as elevated cost. With this type of cancerous disease, early detection increases the chances of suitable medical procedures leading either to a full recovery or to prolongation of the patient's life. This has increased efforts to automate the detection and diagnosis process without human intervention, allowing the detection of multiple types of tumors from MR images. This research paper proposes a multi-class glioma tumor classification technique using deep-learning-based features with a Support Vector Machine (SVM) classifier. A deep convolutional neural network is used to extract features from the MR images, which are then fed to an SVM classifier. With the proposed technique, 96.19% accuracy was achieved for the HGG glioma type using the FLAIR modality and 95.46% for the LGG glioma tumor type using the T2 modality, for the classification of four glioma classes (Edema, Necrosis, Enhancing, and Non-enhancing). The accuracies achieved using the proposed method were higher than those reported by similar methods in the extant literature using the same BraTS dataset. In addition, the accuracy results obtained in this work are better than those achieved by the GoogleNet and LeNet pre-trained models on the same dataset.
Affiliation(s)
- Ghazanfar Latif
- Faculty of Computer Science and Information Technology, Université du Québec à Chicoutimi, 555 Boulevard de l’Université, Chicoutimi, QC G7H2B1, Canada
- Department of Computer Science, Prince Mohammad bin Fahd University, Khobar 31952, Saudi Arabia
- Ghassen Ben Brahim
- Department of Computer Science, Prince Mohammad bin Fahd University, Khobar 31952, Saudi Arabia
- D. N. F. Awang Iskandar
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Kota Samarahan 94300, Malaysia
- Abul Bashar
- Department of Computer Engineering, Prince Mohammad bin Fahd University, Khobar 31952, Saudi Arabia
- Jaafar Alghazo
- Department of Electrical and Computer Engineering, Virginia Military Institute, Lexington, VA 24450, USA
|
29
|
Diagnosis of Early Cervical Cancer with a Multimodal Magnetic Resonance Image under the Artificial Intelligence Algorithm. CONTRAST MEDIA & MOLECULAR IMAGING 2022; 2022:6495309. [PMID: 35386728 PMCID: PMC8967556 DOI: 10.1155/2022/6495309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Revised: 02/18/2022] [Accepted: 02/21/2022] [Indexed: 12/03/2022]
Abstract
This research was conducted to explore the value of multimodal magnetic resonance imaging (MRI) based on the alternating direction algorithm in the diagnosis of early cervical cancer. Sixty-four patients with clinicopathologically diagnosed early cervical cancer were included and, according to the examination method, divided into group A (conventional multimodal MRI) and group B (multimodal MRI under the alternating direction algorithm). The diagnostic results of the two types of multimodal MRI for early cervical cancer staging were compared with the results of clinicopathological examination to judge their value in the early diagnosis of cervical cancer. In 6 randomly selected samples from early cervical cancer patients, the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) of multimodal MRI images under the alternating direction algorithm were significantly higher than those of conventional multimodal MRI images, and image reconstruction was clearer under this algorithm. Comparing MRI multimodal staging, the staging accuracy of group B was 75%, while that of group A was only 59.38%. Consistency with postoperative examinations was better in group B than in group A, with a statistically significant difference (P < 0.05). The area under the receiver operating characteristic (ROC) curve (AUC) of group B was larger than that of group A; thus, sensitivity was improved and misdiagnosis was reduced significantly. Multimodal MRI under the alternating direction algorithm was superior to conventional multimodal MRI in the diagnosis of early cervical cancer, as lesions were displayed more clearly, which improved the detection rate of small lesions and the staging accuracy. Therefore, it could serve as an ideal MRI method to assist the staging diagnosis of cervical cancer.
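The reconstruction comparison above relies on PSNR and SSIM. As a reference for the simpler of the two, this is a plain-Python sketch of PSNR over flat pixel lists; the 8-bit peak value is an assumption, and SSIM, which also models local structure, is omitted here.

```python
import math

def psnr(reference, reconstruction, max_value=255.0):
    """Peak signal-to-noise ratio between two equally sized images.

    Images are given as flat lists of pixel intensities; max_value is the
    largest possible intensity (255 for 8-bit images).
    """
    if len(reference) != len(reconstruction):
        raise ValueError("images must have the same number of pixels")
    mse = sum((r - x) ** 2 for r, x in zip(reference, reconstruction)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_value ** 2 / mse)
```

A reconstruction whose mean squared error is 1 against an 8-bit reference scores 10·log10(255²) ≈ 48.13 dB; higher values mean a cleaner reconstruction.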
|
30
|
Intelligent Ultra-Light Deep Learning Model for Multi-Class Brain Tumor Detection. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12083715] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/12/2022]
Abstract
Diagnosis and surgical resection of brain tumors using Magnetic Resonance (MR) images is a challenging task: minimizing neurological deficits after surgery is difficult owing to the non-linear variation in tumor size, shape, and texture. Radiologists, clinical experts, and brain surgeons examine brain MRI scans using available methods that are tedious, error-prone, and time-consuming, and that still exhibit positional errors of up to 2–3 mm, which is very large at the scale of brain tissue. In this context, we propose an automated Ultra-Light Brain Tumor Detection (UL-BTD) system based on a novel Ultra-Light Deep Learning Architecture (UL-DLA) for deep features, integrated with highly distinctive textural features extracted by the Gray Level Co-occurrence Matrix (GLCM). Together these form a Hybrid Feature Space (HFS), which is used for tumor detection with a Support Vector Machine (SVM), culminating in high prediction accuracy and few false negatives with a network small enough to fit within the GPU resources of an average modern PC. The objective of this study is to categorize multi-class, publicly available MRI brain tumor datasets in minimal time, so that real-time tumor detection can be carried out without compromising accuracy. Our proposed framework includes a sensitivity analysis of image size and of One-versus-All and One-versus-One coding schemes, with stringent efforts to assess the complexity and reliability of the proposed system using K-fold cross-validation as part of the evaluation protocol. The best generalization achieved using SVM has an average detection rate of 99.23% (99.18%, 98.86%, and 99.67%) and F-measures of 0.99 (0.99, 0.98, and 0.99) for glioma, meningioma, and pituitary tumors, respectively. Our results improve on the state of the art (97.30%) by 2%, indicating that the system is a candidate for translation to real-time surgical applications in modern hospitals. The method needs 11.69 ms per test image at 99.23% accuracy, compared with 15 ms for the earlier state of the art, without any dedicated hardware, providing a route to a desktop application for brain surgery.
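The textural half of the Hybrid Feature Space above comes from a Gray Level Co-occurrence Matrix. A minimal sketch of the co-occurrence counting, plus the standard contrast feature, follows; the offset, number of gray levels, and choice of feature are illustrative, not the UL-BTD configuration.

```python
def glcm(image, levels, offset=(0, 1)):
    """Gray Level Co-occurrence Matrix for a 2D image given as nested lists.

    Counts how often a pixel of gray level i co-occurs with a pixel of
    gray level j at the given (row, col) offset, then normalises the
    counts to probabilities.
    """
    dr, dc = offset
    rows, cols = len(image), len(image[0])
    counts = [[0] * levels for _ in range(levels)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r][c]][image[r2][c2]] += 1
                total += 1
    return [[v / total for v in row] for row in counts]

def contrast(p):
    """GLCM contrast feature: sum over i, j of (i - j)^2 * p(i, j)."""
    return sum((i - j) ** 2 * p[i][j]
               for i in range(len(p)) for j in range(len(p)))
```

A uniform image yields zero contrast, while alternating gray levels along the offset direction yield high contrast, which is what makes such features discriminative for texture.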
|
31
|
Das S, Nayak GK, Saba L, Kalra M, Suri JS, Saxena S. An artificial intelligence framework and its bias for brain tumor segmentation: A narrative review. Comput Biol Med 2022; 143:105273. [PMID: 35228172 DOI: 10.1016/j.compbiomed.2022.105273] [Citation(s) in RCA: 32] [Impact Index Per Article: 16.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2021] [Revised: 01/15/2022] [Accepted: 01/24/2022] [Indexed: 02/06/2023]
Abstract
BACKGROUND Artificial intelligence (AI) has become a prominent technique for medical diagnosis and plays an essential role in detecting brain tumors. Although AI-based models are widely used in brain lesion segmentation (BLS), understanding their effectiveness is challenging due to their complexity and diversity. Several reviews of brain tumor segmentation are available, but none describes a link between the threats due to risk-of-bias (RoB) in AI and its architectures. In our review, we focus on linking RoB to the different AI-based architectural clusters in popular DL frameworks. Further, due to the variance in these designs and in input data types in medical imaging, it is necessary to present a narrative review considering all facets of BLS. APPROACH The proposed study uses a PRISMA strategy based on 75 relevant studies found by searching PubMed, Scopus, and Google Scholar. Based on architectural evolution, the DL studies were categorized into four classes: convolutional neural network (CNN)-based, encoder-decoder (ED)-based, transfer learning (TL)-based, and hybrid DL (HDL)-based architectures. These studies were then analyzed considering 32 AI attributes, with clusters covering AI architecture, imaging modalities, hyperparameters, performance evaluation metrics, and clinical evaluation. After the studies were scored on all attributes, a composite score was computed, normalized, and ranked. Thereafter, a bias cutoff (AP(ai)Bias 1.0, AtheroPoint, Roseville, CA, USA) was established to detect low-, moderate-, and high-bias studies. CONCLUSION The four classes of architectures, from best- to worst-performing, are TL > ED > CNN > HDL. ED-based models had the lowest AI bias for BLS. This study presents a set of three primary and six secondary recommendations for lowering the RoB.
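The scoring step described in the approach (composite score, normalization, bias cutoff) can be sketched as follows; the 0.33/0.66 thresholds are placeholders, since the published AP(ai)Bias 1.0 cutoffs are not given in the abstract.

```python
def min_max_normalize(scores):
    """Rescale raw composite attribute scores to the [0, 1] range."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def bias_band(normalized_score, low_cut=0.66, moderate_cut=0.33):
    """Map a normalized composite score to a risk-of-bias band.

    A higher composite score means more quality attributes are satisfied,
    hence a lower risk of bias. The 0.33/0.66 thresholds are illustrative
    placeholders, not the actual AP(ai)Bias 1.0 cutoffs.
    """
    if normalized_score >= low_cut:
        return "low"
    if normalized_score >= moderate_cut:
        return "moderate"
    return "high"
```

Ranking the normalized scores then orders studies from lowest to highest risk of bias before the cutoff is applied.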
Affiliation(s)
- Suchismita Das
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India; CSE Department, KIIT Deemed to be University, Bhubaneswar, Odisha, India
- G K Nayak
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
- Luca Saba
- Department of Radiology, AOU, University of Cagliari, Cagliari, Italy
- Mannudeep Kalra
- Department of Radiology, Massachusetts General Hospital, 55 Fruit Street, Boston, MA, USA
- Jasjit S Suri
- Stroke Diagnostic and Monitoring Division, AtheroPoint™ LLC, Roseville, CA, USA
- Sanjay Saxena
- CSE Department, International Institute of Information Technology, Bhubaneswar, Odisha, India
|
32
|
Konar D, Bhattacharyya S, Dey S, Panigrahi BK. Optimized activation for quantum-inspired self-supervised neural network based fully automated brain lesion segmentation. APPL INTELL 2022. [DOI: 10.1007/s10489-021-03108-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
|
33
|
Ahuja S, Panigrahi BK, Gandhi TK. Enhanced performance of Dark-Nets for brain tumor classification and segmentation using colormap-based superpixel techniques. MACHINE LEARNING WITH APPLICATIONS 2022. [DOI: 10.1016/j.mlwa.2021.100212] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022] Open
|
34
|
Fan W, Yang L, Li J, Dong B. Ultrasound Image-Guided Nerve Block Combined with General Anesthesia under an Artificial Intelligence Algorithm on Patients Undergoing Radical Gastrectomy for Gastric Cancer during and after Operation. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2022; 2022:6914157. [PMID: 35096134 PMCID: PMC8791740 DOI: 10.1155/2022/6914157] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/15/2021] [Revised: 12/13/2021] [Accepted: 12/21/2021] [Indexed: 01/22/2023]
Abstract
This study aimed to investigate the localization of gastric cancer in gastroscopic images using an artificial intelligence algorithm, and the effect of ultrasound-guided nerve block combined with general anesthesia on patients undergoing gastric cancer surgery. A total of 160 patients undergoing gastric cancer surgery from March 2019 to March 2021 were enrolled, and a convolutional neural network (CNN) algorithm was used to segment the gastroscopic images. The patients were randomly divided into a general anesthesia group (80 cases) and a group receiving transversus abdominis plane block combined with rectus abdominis sheath block and general anesthesia (80 cases). Systolic blood pressure (SBP), diastolic blood pressure (DBP), and heart rate (HR) were compared at four time points (T0, T1, T2, and T3). The number of analgesic doses within 48 hours after the operation and postoperative adverse reactions were recorded, as were visual analog scale (VAS) scores at 4 h, 12 h, 24 h, and 48 h. The results show good image quality after segmentation: the accuracy of tumor localization was 75.67%, similar to that of professional endoscopists. Compared with the general anesthesia group, the combined nerve block group required fewer anesthetics, and the difference was statistically significant (P < 0.05). SBP, DBP, and HR were significantly lower at T1, T2, and T3 in the combined nerve block group (P < 0.05). VAS scores in the combined nerve block group were lower at 4 h, 12 h, and 24 h after surgery (P < 0.05), and the number of analgesic doses within 48 hours after the operation was significantly smaller than in the general anesthesia group (P < 0.05). The average incidence of adverse reactions in the combined nerve block group was 2.5%, lower than the 3.75% in the general anesthesia group. In summary, the CNN algorithm can accurately segment lesions in gastroscopic images of gastric cancer, making it easier for doctors to judge lesions accurately and providing a basis for preoperative examination before radical gastrectomy. Ultrasound-guided nerve block combined with general anesthesia effectively improved the analgesic effect of radical gastrectomy, reduced intraoperative and postoperative adverse reactions and analgesic drug dosage, and benefited postoperative recovery. The combined application of these two methods can further improve the precision of treatment for gastric cancer patients and accelerate postoperative recovery.
Affiliation(s)
- Wanqiu Fan
- Department of Anesthesiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, 637000 Sichuan, China
- Liuyingzi Yang
- Department of Anesthesiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, 637000 Sichuan, China
- Maternal and Child Health Hospital of Shifang, Deyang, 618400 Sichuan, China
- Jing Li
- Department of Anesthesiology, People's Hospital of Yilong County, Nanchong, 636000 Sichuan, China
- Biqian Dong
- Department of Anesthesiology, Affiliated Hospital of North Sichuan Medical College, Nanchong, 637000 Sichuan, China
|
35
|
Latif G, Yousif Al Anezi F, Iskandar DNFA, Bashar A, Alghazo J. Recent Advances in Classification of Brain Tumor from MR Images - State of the Art Review from 2017 to 2021. Curr Med Imaging 2022; 18:903-918. [PMID: 35040408 DOI: 10.2174/1573405618666220117151726] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 09/14/2021] [Accepted: 10/28/2021] [Indexed: 11/22/2022]
Abstract
BACKGROUND Identifying a tumor in the brain is a complex problem that requires sophisticated skills and inference mechanisms to accurately locate the tumor region. The complex nature of brain tissue makes locating, segmenting, and ultimately classifying Magnetic Resonance (MR) images difficult. The aim of this review is to consolidate the most relevant and recent approaches proposed for the binary and multi-class classification of brain tumors from brain MR images. OBJECTIVE This review presents a detailed summary of the latest techniques for brain MR image feature extraction and classification. Many research papers have recently proposed techniques for the correct recognition and diagnosis of brain MR images. The review allows researchers in the field to familiarize themselves with the latest developments and to propose novel techniques not yet explored in this research domain. In addition, it will help researchers who are new to machine learning algorithms for brain tumor recognition to understand the basics of the field and pave the way for them to contribute to this vital area of medical research. RESULTS This paper reviews all recently proposed methods for both feature extraction and classification. It also identifies which combinations of feature extraction and classification methods would be most efficient for the recognition and diagnosis of brain tumors from MR images. In addition, the paper presents performance metrics, particularly recognition accuracy, of selected research published between 2017 and 2021.
Affiliation(s)
- Ghazanfar Latif
- College of Computer Engineering and Sciences, Prince Mohammad bin Fahd University, Khobar, Saudi Arabia
- Université du Québec à Chicoutimi, 555 boulevard de l'Université, Chicoutimi, QC, G7H2B1, Canada
- Faisal Yousif Al Anezi
- Management Information Department, Prince Mohammad bin Fahd University, Khobar, Saudi Arabia
- D N F Awang Iskandar
- Faculty of Computer Science and Information Technology, Universiti Malaysia Sarawak, Malaysia
- Abul Bashar
- College of Computer Engineering and Sciences, Prince Mohammad bin Fahd University, Khobar, Saudi Arabia
- Jaafar Alghazo
- Department of Electrical and Computer Engineering, Virginia Military Institute, Lexington, VA, USA
- Corresponding author: Ghazanfar Latif, Department of Computer Science, Prince Mohammad bin Fahd University, Al-Khobar 31952, Saudi Arabia
|
36
|
Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: A systematic review. Digit Health 2022; 8:20552076221074122. [PMID: 35340900 PMCID: PMC8943308 DOI: 10.1177/20552076221074122] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2021] [Revised: 11/20/2021] [Accepted: 12/27/2021] [Indexed: 01/10/2023] Open
Abstract
Background Image segmentation is an essential step in the analysis and subsequent characterisation of brain tumours through magnetic resonance imaging. In the literature, segmentation methods are empowered by open-access magnetic resonance imaging datasets, such as the brain tumour segmentation dataset. Moreover, with the increased use of artificial intelligence methods in medical imaging, access to larger data repositories has become vital to method development. Purpose To determine which automated brain tumour segmentation techniques medical imaging specialists and clinicians can use to identify tumour components, compared with manual segmentation. Methods We conducted a systematic review of 572 brain tumour segmentation studies from 2015-2020. We reviewed segmentation techniques using T1-weighted, T2-weighted, gadolinium-enhanced T1-weighted, fluid-attenuated inversion recovery, diffusion-weighted and perfusion-weighted magnetic resonance imaging sequences. Moreover, we assessed physics- or mathematics-based methods, deep learning methods, and software-based or semi-automatic methods, as applied to magnetic resonance imaging techniques. In particular, we synthesised each method according to the magnetic resonance imaging sequences used, study population, technical approach (such as deep learning) and performance measures (such as the Dice score). Statistical tests We compared the median Dice score for segmenting the whole tumour, tumour core and enhancing tumour. Results We found that T1-weighted, gadolinium-enhanced T1-weighted, T2-weighted and fluid-attenuated inversion recovery magnetic resonance imaging are used the most in segmentation algorithms, whereas perfusion-weighted and diffusion-weighted imaging see limited use. Moreover, we found that the U-Net deep learning architecture is cited the most and has high accuracy (Dice score 0.9) for magnetic resonance imaging-based brain tumour segmentation. Conclusion U-Net is a promising deep learning technology for magnetic resonance imaging-based brain tumour segmentation. The community should be encouraged to contribute open-access datasets so that training, testing and validation of deep learning algorithms can be improved, particularly for diffusion- and perfusion-weighted magnetic resonance imaging, where limited datasets are available.
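The performance measure compared throughout the review above is the Dice score; for reference, a minimal implementation over flattened binary masks:

```python
def dice_score(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    Masks are flat sequences of 0/1 labels (e.g. a flattened tumour
    segmentation); returns 2|A ∩ B| / (|A| + |B|), so 1.0 is a perfect
    overlap and 0.0 is no overlap.
    """
    intersection = sum(a * b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    if size == 0:
        return 1.0  # both masks empty: conventionally a perfect match
    return 2.0 * intersection / size
```

The whole-tumour, tumour-core and enhancing-tumour scores reported in such studies are simply this coefficient computed over the voxels of each sub-region.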
Affiliation(s)
- Jayendra M Bhalodiya
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Sarah N Lim Choi Keung
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
- Theodoros N Arvanitis
- Institute of Digital Healthcare, Warwick Manufacturing Group, The University of Warwick, UK
|
37
|
van Kempen EJ, Post M, Mannil M, Witkam RL, Ter Laan M, Patel A, Meijer FJA, Henssen D. Performance of machine learning algorithms for glioma segmentation of brain MRI: a systematic literature review and meta-analysis. Eur Radiol 2021; 31:9638-9653. [PMID: 34019128 PMCID: PMC8589805 DOI: 10.1007/s00330-021-08035-0] [Citation(s) in RCA: 33] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2021] [Revised: 04/04/2021] [Accepted: 05/03/2021] [Indexed: 02/05/2023]
Abstract
OBJECTIVES Different machine learning algorithms (MLAs) for automated segmentation of gliomas have been reported in the literature. Automated segmentation of different tumor characteristics can add value to the diagnostic work-up and treatment planning. The purpose of this study was to provide an overview and meta-analysis of different MLA methods. METHODS A systematic literature review and meta-analysis was performed on eligible studies describing the segmentation of gliomas. Meta-analysis of performance was conducted on the reported Dice similarity coefficient (DSC) scores, both for the aggregated results and for two subgroups (high-grade and low-grade gliomas). This study was registered in PROSPERO prior to initiation (CRD42020191033). RESULTS After the literature search (n = 734), 42 studies were included in the systematic literature review. Ten studies were eligible for inclusion in the meta-analysis. Overall, the MLAs from the included studies showed an overall DSC score of 0.84 (95% CI: 0.82-0.86). DSC scores of 0.83 (95% CI: 0.80-0.87) and 0.82 (95% CI: 0.78-0.87) were observed for automated segmentation of high-grade and low-grade gliomas, respectively. However, heterogeneity between the included studies was considerably high, and publication bias was observed. CONCLUSION MLAs facilitating automated segmentation of gliomas show good accuracy, which is promising for future implementation in neuroradiology. However, before actual implementation, a few hurdles are yet to be overcome. It is crucial that quality guidelines are followed when reporting on MLAs, including validation on an external test set. KEY POINTS • MLAs from the included studies showed an overall DSC score of 0.84 (95% CI: 0.82-0.86), indicating good performance. • MLA performance was comparable between segmentation of high-grade and low-grade gliomas. • For future studies using MLAs, it is crucial that quality guidelines are followed when reporting, including validation on an external test set.
Affiliation(s)
- Evi J van Kempen
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
- Max Post
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
- Manoj Mannil
- Clinic of Radiology, University Hospital Münster, Münster, Germany
- Richard L Witkam
- Department of Anaesthesiology, Pain and Palliative Medicine, Radboud University Medical Center, Nijmegen, The Netherlands
- Department of Neurosurgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Mark Ter Laan
- Department of Neurosurgery, Radboud University Medical Center, Nijmegen, The Netherlands
- Ajay Patel
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
- Frederick J A Meijer
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
- Dylan Henssen
- Department of Medical Imaging, Radboud University Medical Center, Geert Grooteplein Zuid 10, 6525 EZ, Nijmegen, The Netherlands
|
38
|
See KB, Arpin DJ, Vaillancourt DE, Fang R, Coombes SA. Unraveling somatotopic organization in the human brain using machine learning and adaptive supervoxel-based parcellations. Neuroimage 2021; 245:118710. [PMID: 34780917 PMCID: PMC9008369 DOI: 10.1016/j.neuroimage.2021.118710] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2021] [Revised: 10/29/2021] [Accepted: 11/03/2021] [Indexed: 12/03/2022] Open
Abstract
In addition to the well-established somatotopy in the pre- and post-central gyrus, there is now strong evidence that somatotopic organization is evident across other regions in the sensorimotor network. This raises several experimental questions: To what extent is activity in the sensorimotor network effector-dependent and effector-independent? How important is the sensorimotor cortex when predicting the motor effector? Is there redundancy in the distributed somatotopically organized network such that removing one region has little impact on classification accuracy? To answer these questions, we developed a novel experimental approach. fMRI data were collected while human subjects performed a precisely controlled force generation task separately with their hand, foot, and mouth. We used a simple linear iterative clustering (SLIC) algorithm to segment whole-brain beta coefficient maps to build an adaptive brain parcellation and then classified effectors using extreme gradient boosting (XGBoost) based on parcellations at various spatial resolutions. This allowed us to understand how data-driven adaptive brain parcellation granularity altered classification accuracy. Results revealed effector-dependent activity in regions of the post-central gyrus, precentral gyrus, and paracentral lobule. SMA, regions of the inferior and superior parietal lobule, and cerebellum each contained effector-dependent and effector-independent representations. Machine learning analyses showed that increasing the spatial resolution of the data-driven model increased classification accuracy, which reached 94% with 1755 supervoxels. Our SLIC-based supervoxel parcellation outperformed classification analyses using established brain templates and random simulations. Occlusion experiments further demonstrated redundancy across the sensorimotor network when classifying effectors. 
Our observations extend our understanding of effector-dependent and effector-independent organization within the human brain and provide new insight into the functional neuroanatomy required to predict the motor effector used in a motor control task.
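The parcellation-then-classify idea above can be illustrated with a much-simplified stand-in: a fixed square grid replaces the SLIC supervoxels, and each parcel contributes one averaged beta coefficient to the feature vector, so a smaller parcel size corresponds to the finer spatial resolutions that improved classification accuracy in the study. Everything below is an illustrative assumption, not the authors' pipeline.

```python
def grid_parcel_means(beta_map, parcel_size):
    """Average a 2D beta-coefficient map over non-overlapping square parcels.

    A stand-in for the SLIC supervoxel step: smaller parcel_size gives a
    finer parcellation and therefore a longer feature vector for the
    downstream classifier.
    """
    rows, cols = len(beta_map), len(beta_map[0])
    features = []
    for r0 in range(0, rows, parcel_size):
        for c0 in range(0, cols, parcel_size):
            block = [beta_map[r][c]
                     for r in range(r0, min(r0 + parcel_size, rows))
                     for c in range(c0, min(c0 + parcel_size, cols))]
            features.append(sum(block) / len(block))
    return features
```

A 4×4 map with parcel size 2 yields 4 features, while parcel size 1 yields 16, mirroring how increasing the parcellation's spatial resolution gave the classifier more spatial detail to work with.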
Affiliation(s)
- Kyle B See
- J. Crayton Pruitt Family Department of Biomedical Engineering, Smart Medical Informatics Learning and Evaluation Lab, College of Engineering, University of Florida, PO Box 116131, Gainesville, FL, United States
- David J Arpin
- Laboratory for Rehabilitation Neuroscience, Department of Applied Physiology and Kinesiology, University of Florida, PO Box 118206, Gainesville, FL, United States
- David E Vaillancourt
- J. Crayton Pruitt Family Department of Biomedical Engineering, Smart Medical Informatics Learning and Evaluation Lab, College of Engineering, University of Florida, PO Box 116131, Gainesville, FL, United States; Laboratory for Rehabilitation Neuroscience, Department of Applied Physiology and Kinesiology, University of Florida, PO Box 118206, Gainesville, FL, United States
- Ruogu Fang
- J. Crayton Pruitt Family Department of Biomedical Engineering, Smart Medical Informatics Learning and Evaluation Lab, College of Engineering, University of Florida, PO Box 116131, Gainesville, FL, United States; Center for Cognitive Aging and Memory, McKnight Brain Institute, University of Florida, Gainesville, United States; Department of Electrical and Computer Engineering, University of Florida, Gainesville, FL, United States
- Stephen A Coombes
- J. Crayton Pruitt Family Department of Biomedical Engineering, Smart Medical Informatics Learning and Evaluation Lab, College of Engineering, University of Florida, PO Box 116131, Gainesville, FL, United States; Laboratory for Rehabilitation Neuroscience, Department of Applied Physiology and Kinesiology, University of Florida, PO Box 118206, Gainesville, FL, United States
|
39
|
Abstract
Brain tumors arise from uncontrolled and rapid cell growth and, if not treated at an initial phase, may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain challenging. A major challenge for brain tumor detection arises from variations in tumor location, shape, and size. The objective of this survey is to deliver a comprehensive review of the literature on brain tumor detection through magnetic resonance imaging to help researchers. The survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning, and quantum machine learning for brain tumor analysis. Finally, it collects the important literature on brain tumor detection, with advantages, limitations, developments, and future trends.
|
40
|
Rosas-Gonzalez S, Birgui-Sekou T, Hidane M, Zemmoura I, Tauber C. Asymmetric Ensemble of Asymmetric U-Net Models for Brain Tumor Segmentation With Uncertainty Estimation. Front Neurol 2021; 12:609646. [PMID: 34659077 PMCID: PMC8515181 DOI: 10.3389/fneur.2021.609646] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2020] [Accepted: 07/22/2021] [Indexed: 11/29/2022] Open
Abstract
Accurate brain tumor segmentation is crucial for clinical assessment, follow-up, and subsequent treatment of gliomas. While convolutional neural networks (CNNs) have become state of the art in this task, most proposed models either use 2D architectures, ignoring 3D contextual information, or 3D models that require large memory capacity and extensive training databases. In this study, an ensemble of two kinds of U-Net-like models, based on 3D and 2.5D convolutions respectively, is proposed to segment multimodal magnetic resonance images (MRI). The 3D model uses concatenated data in a modified U-Net architecture. In contrast, the 2.5D model is based on a multi-input strategy to extract low-level features from each modality independently, and on a new 2.5D Multi-View Inception block that merges features from different views of a 3D image, aggregating multi-scale features. The Asymmetric Ensemble of Asymmetric U-Net (AE AU-Net) built from both is designed to balance increased multi-scale and 3D contextual information extraction against low memory consumption. Experiments on the BraTS 2019 dataset show that our model improves enhancing tumor sub-region segmentation. Overall performance is comparable with state-of-the-art results, although with less training data or memory required. In addition, we provide voxel-wise and structure-wise uncertainties for the segmentation results, and we establish qualitative and quantitative relationships between uncertainty and prediction errors. Dice similarity coefficients for the whole tumor, tumor core, and enhancing tumor regions on the BraTS 2019 validation dataset were 0.902, 0.815, and 0.773. We also applied our method to BraTS 2018, with corresponding Dice scores of 0.908, 0.838, and 0.800.
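The voxel-wise uncertainties reported above come from the ensemble; one common recipe, assumed here for illustration rather than taken from the paper, averages the member probabilities and uses their spread as the uncertainty map.

```python
def ensemble_mean_and_uncertainty(member_probs):
    """Fuse per-voxel tumour probabilities from an ensemble of models.

    member_probs: list of equally long probability lists, one per model.
    Returns (mean probability, variance) per voxel; high variance marks
    voxels where the ensemble members disagree, i.e. uncertain regions.
    """
    n = len(member_probs)
    means, variances = [], []
    for voxel in zip(*member_probs):
        m = sum(voxel) / n
        means.append(m)
        variances.append(sum((p - m) ** 2 for p in voxel) / n)
    return means, variances
```

Thresholding the mean gives the segmentation, while the variance map highlights voxels likely to correlate with prediction errors, which is the relationship the paper quantifies.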
Affiliation(s)
- Moncef Hidane
- LIFAT EA 6300, INSA Centre Val de Loire, Université de Tours, Tours, France
- Ilyess Zemmoura
- UMR Inserm U1253, iBrain, Université de Tours, Inserm, Tours, France
- Clovis Tauber
- UMR Inserm U1253, iBrain, Université de Tours, Inserm, Tours, France
|
41
|
Brain tumor detection in MR image using superpixels, principal component analysis and template based K-means clustering algorithm. MACHINE LEARNING WITH APPLICATIONS 2021. [DOI: 10.1016/j.mlwa.2021.100044] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
42
Wang Y, Jia Z, Lyu Y, Dong Q, Li S, Hu W. Multimodal magnetic resonance imaging analysis in the characteristics of Wilson's disease: A case report and literature review. Open Life Sci 2021; 16:793-799. [PMID: 34458581 PMCID: PMC8374231 DOI: 10.1515/biol-2021-0071]
Abstract
Wilson’s disease (WD) is an inherited disorder of copper metabolism. Multimodal magnetic resonance imaging (MRI) has been reported to provide evidence of the extent and severity of brain lesions. However, few studies have addressed the diagnosis of WD with multimodal MRI. Here, we report a WD patient who underwent Sanger sequencing, conventional MRI, and multimodal MRI examinations, including susceptibility-weighted imaging (SWI) and arterial spin labeling (ASL). Sanger sequencing demonstrated two pathogenic mutations in exon 8 of the ATP7B gene. Slit-lamp examination revealed Kayser–Fleischer rings in both eyes, and the patient had low serum ceruloplasmin and high 24-h urinary copper excretion on admission. Although the substantia nigra, red nucleus, and lenticular nucleus appeared normal on T1-weighted and T2-weighted imaging, SWI and ASL showed hypointensities in these regions. In addition, decreased cerebral blood flow was found in the lenticular nucleus and the head of the caudate nucleus. The patient recovered well after 1 year and 9 months of follow-up, with a Unified Wilson Disease Rating Scale score of only 1 for neurological symptoms. Brain multimodal MRI provided thorough insight into the WD and may compensate for the deficiencies of conventional MRI.
Affiliation(s)
- Yun Wang
- Department of Neurology, Beijing Chao-Yang Hospital, Capital Medical University, No. 8 Gongtinan Road, Chaoyang District, Beijing 100020, China
- Zejin Jia
- Department of Neurology, Beijing Chao-Yang Hospital, Capital Medical University, No. 8 Gongtinan Road, Chaoyang District, Beijing 100020, China
- Yuelei Lyu
- Department of Imaging, Beijing Chao-Yang Hospital, Capital Medical University, No. 8 Gongtinan Road, Chaoyang District, Beijing 100020, China
- Qian Dong
- Department of Neurology, Beijing Chao-Yang Hospital, Capital Medical University, No. 8 Gongtinan Road, Chaoyang District, Beijing 100020, China
- Shujuan Li
- Department of Neurology, Beijing Chao-Yang Hospital, Capital Medical University, No. 8 Gongtinan Road, Chaoyang District, Beijing 100020, China
- Wenli Hu
- Department of Neurology, Beijing Chao-Yang Hospital, Capital Medical University, No. 8 Gongtinan Road, Chaoyang District, Beijing 100020, China
43
Fawzi A, Achuthan A, Belaton B. Brain Image Segmentation in Recent Years: A Narrative Review. Brain Sci 2021; 11:1055. [PMID: 34439674 PMCID: PMC8392552 DOI: 10.3390/brainsci11081055]
Abstract
Brain image segmentation is one of the most time-consuming and challenging procedures in a clinical environment. Recently, a drastic increase in the number of brain disorders has been noted. This has indirectly led to an increased demand for automated brain segmentation solutions to assist medical experts in early diagnosis and treatment interventions. This paper presents a critical review of recent trends in segmentation and classification methods for brain magnetic resonance images. Various segmentation methods, ranging from simple intensity-based approaches to high-level techniques such as machine learning, metaheuristics, deep learning, and hybridization, are included in the review. Common issues, advantages, and disadvantages of brain image segmentation methods are also discussed to provide a better understanding of the strengths and limitations of existing methods. From this review, it is found that deep learning-based and hybrid metaheuristic approaches are more efficient for the reliable segmentation of brain tumors. However, these methods fall behind in terms of computational and memory complexity.
Affiliation(s)
- Anusha Achuthan
- School of Computer Sciences, Universiti Sains Malaysia, Gelugor 11800, Malaysia; (A.F.); (B.B.)
44
Ansari SU, Javed K, Qaisar SM, Jillani R, Haider U. Multiple Sclerosis Lesion Segmentation in Brain MRI Using Inception Modules Embedded in a Convolutional Neural Network. J Healthc Eng 2021; 2021:4138137. [PMID: 34484652 PMCID: PMC8410443 DOI: 10.1155/2021/4138137]
Abstract
Multiple sclerosis (MS) is a chronic autoimmune disease that forms lesions in the central nervous system. Quantitative analysis of these lesions has proved very useful in clinical trials of therapies and in assessing disease prognosis. However, the efficacy of such quantitative analyses depends greatly on how accurately the MS lesions have been identified and segmented in brain MRI. This is usually carried out by radiologists, who label 3D MR images slice by slice using commonly available segmentation tools; such manual practices are time-consuming and error-prone. To circumvent this problem, several automatic segmentation techniques have been investigated in recent years. In this paper, we propose a new framework for automatic brain lesion segmentation that employs a novel convolutional neural network (CNN) architecture. To segment lesions of different sizes, one must choose a specific filter size, such as 3 × 3 or 5 × 5, and it is often hard to decide which filter will yield the best results. GoogLeNet solved this problem by introducing the inception module, which applies 3 × 3, 5 × 5, and 1 × 1 convolutions and max pooling in parallel. Our results show that incorporating inception modules in a CNN improves the network's performance in segmenting MS lesions. We compared the results of the proposed CNN architecture for two loss functions, binary cross-entropy (BCE) and the structural similarity index measure (SSIM), using the publicly available ISBI-2015 challenge dataset. With the BCE loss function, a score of 93.81 is achieved, which is higher than the human rater.
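The key idea of an inception module is to run filters of several sizes in parallel and concatenate their outputs, so the network need not commit to one receptive field. A toy 1D analogue with fixed (unlearned) kernels illustrates the branch-and-concatenate structure; the real module uses learned 2D/3D convolutions, and all function names here are illustrative:

```python
def conv1d_same(signal, kernel):
    """1D convolution with zero padding so output length equals input length."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(signal) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(signal))]

def maxpool1d_same(signal, size=3):
    """Max pooling with stride 1 and 'same' padding."""
    pad = size // 2
    lo = min(signal)
    padded = [lo] * pad + list(signal) + [lo] * pad
    return [max(padded[i:i + size]) for i in range(len(signal))]

def inception_block(signal):
    """Apply 1-, 3-, and 5-tap filters plus max pooling in parallel and
    return the branch outputs as a list of equal-length feature channels."""
    return [
        conv1d_same(signal, [1.0]),            # 1x1-style branch
        conv1d_same(signal, [1/3, 1/3, 1/3]),  # 3-tap smoothing branch
        conv1d_same(signal, [0.2] * 5),        # 5-tap smoothing branch
        maxpool1d_same(signal, 3),             # pooling branch
    ]

x = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0]
channels = inception_block(x)
assert all(len(c) == len(x) for c in channels)  # branches align for concatenation
```

Because every branch preserves spatial size, the outputs can be stacked along the channel axis, which is exactly what lets the segmentation network see lesions at multiple scales simultaneously.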
Affiliation(s)
- Shahab U. Ansari
- Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi, Pakistan
- Kamran Javed
- Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi, Pakistan
- National Centre of Artificial Intelligence (NCAI), Saudi Data and Artificial Intelligence Authority (SDAIA), Riyadh, Saudi Arabia
- Saeed Mian Qaisar
- Electrical and Computer Engineering Department, Effat University, Jeddah 22332, Saudi Arabia
- Communication and Signal Processing Lab, Energy and Technology Research Center, Effat University, Jeddah 22332, Saudi Arabia
- Rashad Jillani
- Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi, Pakistan
- Usman Haider
- Faculty of Computer Science and Engineering, Ghulam Ishaq Khan Institute of Engineering Sciences and Technology, Topi, Pakistan
45
Sethy PK, Behera SK. A data constrained approach for brain tumour detection using fused deep features and SVM. Multimed Tools Appl 2021; 80:28745-28760. [DOI: 10.1007/s11042-021-11098-2]
46
Naseer A, Yasir T, Azhar A, Shakeel T, Zafar K. Computer-Aided Brain Tumor Diagnosis: Performance Evaluation of Deep Learner CNN Using Augmented Brain MRI. Int J Biomed Imaging 2021; 2021:5513500. [PMID: 34234822 PMCID: PMC8216815 DOI: 10.1155/2021/5513500]
Abstract
A brain tumor is a deadly neurological disease caused by abnormal and uncontrollable growth of cells inside the brain or skull, and the mortality rate among affected patients is rising steadily. Analysing magnetic resonance images (MRIs) manually is inadequate for efficient and accurate brain tumor diagnosis, whereas early diagnosis enables timely treatment and thus improves patient survival. Modern brain imaging methodologies have increased the detection rate of brain tumors. In the past few years, much research has been carried out on computer-aided diagnosis of human brain tumors with the goal of 100% diagnostic accuracy. This research focuses on early diagnosis of brain tumors via a convolutional neural network (CNN) to improve on state-of-the-art diagnostic accuracy. The proposed CNN is trained on a benchmark dataset, BR35H, containing brain tumor MRIs. The performance and sustainability of the model are evaluated on six different datasets: BMI-I, BTI, BMI-II, BTS, BMI-III, and BD-BT. To improve the performance of the model and make it sustainable on entirely unseen data, different geometric data augmentation techniques, along with statistical standardization, are employed. The proposed CNN-based CAD system for brain tumor diagnosis performs better than other systems, achieving an average accuracy of around 98.8% and a specificity of around 0.99. It also achieves 100% correct diagnosis on two brain MRI datasets, BTS and BD-BT. Comparison with existing systems reveals that the proposed system outperforms all of them.
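The geometric augmentations the abstract refers to are typically flips and rotations applied to each training image. A minimal sketch on 2D images stored as lists of rows (the specific transform set is an assumption, since the paper only says "geometric data augmentation techniques"):

```python
def hflip(img):
    """Mirror a 2D image (list of rows) left-to-right."""
    return [row[::-1] for row in img]

def vflip(img):
    """Mirror a 2D image top-to-bottom."""
    return img[::-1]

def rot90(img):
    """Rotate a 2D image 90 degrees counter-clockwise."""
    return [list(col) for col in zip(*img)][::-1]

def augment(img):
    """One illustrative augmentation set: identity, both flips, one rotation."""
    return [img, hflip(img), vflip(img), rot90(img)]

img = [[1, 2],
       [3, 4]]
assert hflip(img) == [[2, 1], [4, 3]]
assert rot90(img) == [[2, 4], [1, 3]]
```

Each transform produces a label-preserving variant of the scan, multiplying the effective training set size without collecting new MRIs.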
Affiliation(s)
- Asma Naseer
- University of Management and Technology, Lahore, Pakistan
- Tahreem Yasir
- University of Management and Technology, Lahore, Pakistan
- Arifah Azhar
- University of Management and Technology, Lahore, Pakistan
- Kashif Zafar
- National University of Computer and Emerging Sciences, Lahore, Pakistan
47
Huang H, Yang G, Zhang W, Xu X, Yang W, Jiang W, Lai X. A Deep Multi-Task Learning Framework for Brain Tumor Segmentation. Front Oncol 2021; 11:690244. [PMID: 34150660 PMCID: PMC8212784 DOI: 10.3389/fonc.2021.690244]
Abstract
Glioma is the most common primary central nervous system tumor, accounting for about half of all intracranial primary tumors. As a non-invasive examination method, MRI plays an extremely important guiding role in the clinical management of tumors. However, manually segmenting brain tumors from MRI requires a great deal of time and effort from doctors, which delays subsequent diagnosis and treatment planning. With the development of deep learning, medical image segmentation is gradually being automated. However, brain tumors are easily confused with strokes, and severe class imbalance makes brain tumor segmentation one of the most difficult tasks in MRI segmentation. To address these problems, we propose a deep multi-task learning framework and integrate a multi-depth fusion module into it to segment brain tumors accurately. In this framework, we add a distance-transform decoder to the V-Net, which makes the segmentation contour generated by the mask decoder more accurate and reduces rough boundaries. To combine the two decoders' tasks, we form a weighted sum of their corresponding loss functions, in which the distance-map prediction regularizes the mask prediction. Meanwhile, the multi-depth fusion module in the encoder enhances the network's ability to extract features. The accuracy of the model was evaluated online using the multimodal MRI records of the BraTS 2018, BraTS 2019, and BraTS 2020 datasets. The method obtains high-quality segmentation results, with an average Dice score as high as 78%. The experimental results show that this model has great potential for segmenting brain tumors automatically and accurately.
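The weighted combination of the two decoder losses can be sketched as below. The specific loss forms (binary cross-entropy for the mask head, mean squared error for the distance-map head) and the weights are assumptions for illustration; the paper only states that the losses are weighted and summed:

```python
import math

def bce(preds, targets, eps=1e-7):
    """Mean binary cross-entropy for the mask decoder (probabilities vs 0/1)."""
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for p, t in zip(preds, targets)) / len(preds)

def mse(preds, targets):
    """Mean squared error for the distance-transform decoder."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def multitask_loss(mask_pred, mask_true, dist_pred, dist_true,
                   w_mask=1.0, w_dist=0.5):
    """Weighted sum of the two decoder losses; the distance-map term acts
    as a regularizer on the mask prediction."""
    return w_mask * bce(mask_pred, mask_true) + w_dist * mse(dist_pred, dist_true)

loss = multitask_loss([0.9, 0.2], [1, 0], [1.1, 0.4], [1.0, 0.5])
assert loss > 0.0
```

Because both terms are differentiable, gradients from the distance-map task flow back through the shared encoder, which is what sharpens the mask decoder's boundaries.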
Affiliation(s)
- He Huang
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou, China
- Guang Yang
- Cardiovascular Research Centre, Royal Brompton Hospital, London, United Kingdom
- National Heart and Lung Institute, Imperial College London, London, United Kingdom
- Wenbo Zhang
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou, China
- Xiaomei Xu
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou, China
- Weiji Yang
- College of Life Science, Zhejiang Chinese Medical University, Hangzhou, China
- Weiwei Jiang
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou, China
- Xiaobo Lai
- College of Medical Technology, Zhejiang Chinese Medical University, Hangzhou, China
48
Handcrafted and Deep Learning-Based Radiomic Models Can Distinguish GBM from Brain Metastasis. J Oncol 2021; 2021:5518717. [PMID: 34188680 PMCID: PMC8195660 DOI: 10.1155/2021/5518717]
Abstract
Objective The purpose of this study was to investigate the feasibility of applying handcrafted radiomics (HCR) and deep learning-based radiomics (DLR) for the accurate preoperative classification of glioblastoma (GBM) and solitary brain metastasis (BM). Methods A retrospective analysis of the magnetic resonance imaging (MRI) data of 140 patients (110 in the training dataset and 30 in the test dataset) with GBM and 128 patients (98 in the training dataset and 30 in the test dataset) with BM confirmed by surgical pathology was performed. The regions of interest (ROIs) on T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and contrast-enhanced T1WI (T1CE) were drawn manually, and then, HCR and DLR analyses were performed. On this basis, different machine learning algorithms were implemented and compared to find the optimal modeling method. The final classifiers were identified and validated for different MRI modalities using HCR features and HCR + DLR features. By analyzing the receiver operating characteristic (ROC) curve, the area under the curve (AUC), accuracy, sensitivity, and specificity were calculated to evaluate the predictive efficacy of different methods. Results In multiclassifier modeling, random forest modeling showed the best distinguishing performance among all MRI modalities. HCR models already showed good results for distinguishing between the two types of brain tumors in the test dataset (T1WI, AUC = 0.86; T2WI, AUC = 0.76; T1CE, AUC = 0.93). By adding DLR features, all AUCs showed significant improvement (T1WI, AUC = 0.87; T2WI, AUC = 0.80; T1CE, AUC = 0.97; p < 0.05). The T1CE-based radiomic model showed the best classification performance (AUC = 0.99 in the training dataset and AUC = 0.97 in the test dataset), surpassing the other MRI modalities (p < 0.05). The multimodality radiomic model also showed robust performance (AUC = 1 in the training dataset and AUC = 0.84 in the test dataset). 
Conclusion Machine learning models using MRI radiomic features can help distinguish GBM from BM effectively, especially the combination of HCR and DLR features.
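The AUC values this study compares can be computed without plotting the ROC curve at all, via the rank (Mann-Whitney) formulation: the probability that a randomly chosen positive case scores above a randomly chosen negative case. A minimal sketch, with the scores and labels purely illustrative:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the fraction of positive/negative pairs where the positive case
    scores higher, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: GBM = 1, metastasis = 0; higher score = more GBM-like
scores = [0.9, 0.8, 0.3, 0.6, 0.1]
labels = [1,   1,   0,   1,   0]
print(roc_auc(scores, labels))  # 1.0: every GBM case outscores every metastasis
```

An AUC of 0.5 corresponds to chance-level discrimination, which makes the reported jump from 0.93 to 0.97 on T1CE a meaningful improvement.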
49
Das P, Pal C, Acharyya A, Chakrabarti A, Basu S. Deep neural network for automated simultaneous intervertebral disc (IVDs) identification and segmentation of multi-modal MR images. Comput Methods Programs Biomed 2021; 205:106074. [PMID: 33906011 DOI: 10.1016/j.cmpb.2021.106074]
Abstract
BACKGROUND AND OBJECTIVE Lower back pain has become a major health risk. Classical approaches use non-invasive imaging to assess spinal intervertebral disc (IVD) abnormalities, with identification and segmentation of the discs performed separately, making the process time-consuming. This motivates a robust, automated method for simultaneous IVD identification and segmentation in multi-modality MRI images. METHODS We introduce a novel deep neural network architecture, coined 'RIMNet', a Region-to-Image Matching Network capable of automated, simultaneous IVD identification and segmentation in MRI images. The multi-modal input data are fed to the network with a dropout strategy that randomly disables modalities in mini-batches. Performance on the testing dataset was evaluated by computing IVD identification accuracy, the Dice coefficient, MDOC, average symmetric surface distance (ASD), the Jaccard coefficient, the Hausdorff distance, and the F1 score. RESULTS The proposed model attained 94% identification accuracy, a Dice coefficient of 91.7±1% in segmentation, and an MDOC of 90.2±1%. It also achieved a Jaccard coefficient of 0.87±0.02, an ASD of 0.54±0.04, and a Hausdorff distance of 0.62±0.02 mm. The results were validated and compared with other methodologies on the MICCAI IVD 2018 challenge dataset. CONCLUSIONS The proposed deep learning methodology performs simultaneous identification and segmentation of IVDs in MRI images of the human spine with high accuracy.
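The Jaccard coefficient reported alongside the Dice score measures intersection over union of the two masks, and the two metrics are deterministically related (D = 2J / (1 + J)). A minimal sketch with an illustrative example, not the paper's data:

```python
def jaccard(pred, truth):
    """Jaccard coefficient (intersection over union) of two binary masks,
    given as equal-length sequences of 0/1 labels."""
    inter = sum(p * t for p, t in zip(pred, truth))
    union = sum(max(p, t) for p, t in zip(pred, truth))
    return inter / union if union else 1.0

def dice_from_jaccard(j):
    """Dice and Jaccard are monotonically related: D = 2J / (1 + J)."""
    return 2 * j / (1 + j)

pred  = [1, 1, 1, 0]
truth = [1, 1, 0, 1]
j = jaccard(pred, truth)   # intersection 2, union 4 -> 0.5
print(dice_from_jaccard(j))  # 2*0.5 / 1.5 ≈ 0.667
```

Because the mapping is monotone, ranking methods by Dice or by Jaccard gives the same ordering; reporting both, as this paper does, mainly aids comparison across studies.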
Affiliation(s)
- Pabitra Das
- A.K.Choudhury School of Information Technology, University of Calcutta, Kolkata 700106, India.
- Chandrajit Pal
- Advanced Embedded System and IC Design Laboratory, Department of Electrical Engineering, Indian Institute of Technology Hyderabad, India
- Amit Acharyya
- Advanced Embedded System and IC Design Laboratory, Department of Electrical Engineering, Indian Institute of Technology Hyderabad, India
- Amlan Chakrabarti
- A.K.Choudhury School of Information Technology, University of Calcutta, Kolkata 700106, India
- Saumyajit Basu
- Kothari Medical Centre, 8/3, Alipore Rd, Alipore, Kolkata 700027, India
50
Sun F. Psychological analysis of classroom learning based on face recognition and neural network. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-189956]
Abstract
With the rapid development of deep learning and parallel computing, deep neural networks trained on big data have been applied to the field of facial recognition, an innovative application that has attracted extensive attention from scholars. The application of neural networks is made possible by deep learning, which reduces error and adjusts weights through backpropagation and error optimization, thereby extracting more key points and features. Even so, data collection and key-point extraction remain complex problems. This paper addresses these problems, studies deep learning and information extraction methods and their internal structure, and optimizes their application to classroom learning, providing effective support for distance education.
Affiliation(s)
- Feifei Sun
- School of Mechanical and Electrical Engineering, Zao Zhuang University, Zao Zhuang, Shandong Province, China