1
Tsai HY, Kao YW, Wang JC, Tsai TY, Chung WS, Hsu JS, Hou MF, Weng SF. Multitask deep learning on mammography to predict extensive intraductal component in invasive breast cancer. Eur Radiol 2024; 34:2593-2604. [PMID: 37812297] [DOI: 10.1007/s00330-023-10254-6]
Abstract
OBJECTIVES To develop a multitask deep learning (DL) algorithm to automatically classify mammography imaging findings and predict the existence of extensive intraductal component (EIC) in invasive breast cancer. METHODS Mammograms of invasive breast cancers from 2010 to 2019 were retrieved, and two radiologists performed image segmentation and imaging finding annotation. Images were randomly split into training, validation, and test datasets. A multitask approach was implemented on the EfficientNet-B0 neural network, mainly to predict EIC and classify imaging findings. Three more models were trained for comparison, including a single-task model (predicting EIC), a two-task model (predicting EIC and cell receptor status), and a three-task model (combining the abovementioned tasks). Additionally, these models were trained on a subgroup of invasive ductal carcinomas. The DeLong test was used to examine the difference in model performance. RESULTS This study enrolled 1459 breast cancers on 3076 images. The EIC-positive rate was 29.0%. The three-task model was the best DL model, with an area under the curve (AUC) for EIC prediction of 0.758 and 0.775 at the image and breast (patient) levels, respectively. Mass was the most accurately classified imaging finding (AUC = 0.915), followed by calcifications and mass with calcifications (AUC = 0.878 and 0.824, respectively). Cell receptor status prediction was less accurate (AUC = 0.625-0.653). The multitask approach improved model training compared with the single-task model, but the effect was not statistically significant. CONCLUSIONS A mammography-based multitask DL model can perform simultaneous imaging finding classification and EIC prediction. CLINICAL RELEVANCE STATEMENT The study results demonstrated the potential of deep learning to extract more information from mammography for clinical decision-making. KEY POINTS • Extensive intraductal component (EIC) is an independent risk factor for local tumor recurrence after breast-conserving surgery. • A mammography-based deep learning model was trained to predict extensive intraductal component, with performance close to radiologists' reading. • The developed multitask deep learning model could perform simultaneous imaging finding classification and extensive intraductal component prediction.
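Editor's note: the abstract does not give implementation details; a minimal PyTorch sketch of how a three-task classifier could be built on an EfficientNet-B0 backbone follows. Head sizes and loss weights are illustrative assumptions, not the authors' settings.

```python
# Sketch of a three-task classifier on an EfficientNet-B0 backbone (PyTorch).
# Head dimensions and loss weights are illustrative assumptions, not the authors' settings.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class MultiTaskMammoNet(nn.Module):
    def __init__(self, n_findings=4, n_receptors=3):
        super().__init__()
        backbone = efficientnet_b0(weights=None)
        feat_dim = backbone.classifier[1].in_features   # 1280 for B0
        backbone.classifier = nn.Identity()             # keep only the feature extractor
        self.backbone = backbone
        self.eic_head = nn.Linear(feat_dim, 1)                 # EIC present / absent
        self.finding_head = nn.Linear(feat_dim, n_findings)    # mass, calcifications, ...
        self.receptor_head = nn.Linear(feat_dim, n_receptors)  # receptor status labels

    def forward(self, x):
        f = self.backbone(x)
        return self.eic_head(f), self.finding_head(f), self.receptor_head(f)

def multitask_loss(outputs, targets, weights=(1.0, 0.5, 0.5)):
    # targets: (eic_y float 0/1, finding_y class index, receptor_y float multi-label)
    eic_logit, finding_logit, receptor_logit = outputs
    eic_y, finding_y, receptor_y = targets
    loss = weights[0] * nn.functional.binary_cross_entropy_with_logits(eic_logit.squeeze(1), eic_y)
    loss += weights[1] * nn.functional.cross_entropy(finding_logit, finding_y)
    loss += weights[2] * nn.functional.binary_cross_entropy_with_logits(receptor_logit, receptor_y)
    return loss
```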
Affiliation(s)
- Huei-Yi Tsai
- Graduate Institute of Clinical Medicine, College of Medicine, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Medical Imaging, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- Center for Big Data Research, Kaohsiung Medical University, Kaohsiung, Taiwan
- Yu-Wei Kao
- Department of Healthcare Administration and Medical Informatics, College of Health Science, Kaohsiung Medical University, Kaohsiung, Taiwan
- Jo-Ching Wang
- Department of Medical Imaging, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- Tsung-Yu Tsai
- Department of Medical Imaging, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- Wei-Shiuan Chung
- Department of Medical Imaging, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- Department of Medical Imaging, Kaohsiung Municipal Siaogang Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- Jui-Sheng Hsu
- Department of Medical Imaging, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan
- Ming-Feng Hou
- Department of Biomedical Science and Environmental Biology, College of Life Science, Kaohsiung Medical University, Kaohsiung, Taiwan
- Shih-Feng Weng
- Center for Big Data Research, Kaohsiung Medical University, Kaohsiung, Taiwan.
- Department of Healthcare Administration and Medical Informatics, College of Health Science, Kaohsiung Medical University, Kaohsiung, Taiwan.
- Department of Medical Research, Kaohsiung Medical University Hospital, Kaohsiung Medical University, Kaohsiung, Taiwan.
- Center for Medical Informatics and Statistics, Office of R&D, Kaohsiung Medical University, Kaohsiung, Taiwan.
2
Bülow RD, Droste P, Boor P. [Advances in computational quantitative nephropathology]. Pathologie (Heidelb) 2024; 45:140-145. [PMID: 38308066] [DOI: 10.1007/s00292-024-01300-1]
Abstract
BACKGROUND Semiquantitative histological scoring systems are frequently used in nephropathology. In computational nephropathology, the focus is on generating quantitative data from histology (so-called pathomics). Several recent studies have collected such data using next-generation morphometry (NGM) based on segmentations by artificial neural networks and investigated their usability for various clinical or diagnostic purposes. AIM To present an overview of the current state of studies regarding renal pathomics and to identify current challenges and potential solutions. MATERIALS AND METHODS Because the reference list was limited to a maximum of 30 citations, studies were selected from a database search, giving preference to those that processed the largest amounts of data, used innovative methodologies, and/or had a multicentric design. RESULTS AND DISCUSSION Pathomics studies in the kidney have impressively demonstrated that morphometric data are useful clinically (for example, for prognosis assessment) and translationally. Further development of NGM requires overcoming some challenges, including better standardization and generation of prospective evidence.
Affiliation(s)
- Roman D Bülow
- Institut für Pathologie, Sektion Nephropathologie, Universitätsklinikum RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Deutschland
- Patrick Droste
- Institut für Pathologie, Sektion Nephropathologie, Universitätsklinikum RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Deutschland
- Medizinische Klinik II, Universitätsklinikum RWTH Aachen, Aachen, Deutschland
- Peter Boor
- Institut für Pathologie, Sektion Nephropathologie, Universitätsklinikum RWTH Aachen, Pauwelsstraße 30, 52074, Aachen, Deutschland.
- Medizinische Klinik II, Universitätsklinikum RWTH Aachen, Aachen, Deutschland.
3
Mayer RS, Kinzler MN, Stoll AK, Gretser S, Ziegler PK, Saborowski A, Reis H, Vogel A, Wild PJ, Flinner N. [The model transferability of AI in digital pathology: Potential and reality]. Pathologie (Heidelb) 2024; 45:124-132. [PMID: 38372762] [PMCID: PMC10901943] [DOI: 10.1007/s00292-024-01299-5]
Abstract
OBJECTIVE Artificial intelligence (AI) holds the potential to make significant advancements in pathology. However, its actual implementation and certification for practical use are currently limited, often due to challenges related to model transferability. In this context, we investigate the factors influencing transferability and present methods aimed at enhancing the utilization of AI algorithms in pathology. MATERIALS AND METHODS Various convolutional neural networks (CNNs) and vision transformers (ViTs) were trained using datasets from two institutions, along with the publicly available TCGA-MIBC dataset. These networks performed predictions on urothelial tissue and intrahepatic cholangiocarcinoma (iCCA). The objective was to illustrate the impact of stain normalization, the influence of various artifacts during both training and testing, as well as the effects of the NoisyEnsemble method. RESULTS We were able to demonstrate that stain normalization of slides from different institutions has a significant positive effect on the inter-institutional transferability of CNNs and ViTs (+13% and +10%, respectively). In addition, ViTs usually achieved a higher accuracy in the external test (here +1.5%). Similarly, we showcased how artifacts in test data can negatively affect CNN predictions and how incorporating these artifacts during training leads to improvements. Lastly, NoisyEnsembles of CNNs (better than ViTs) were shown to enhance transferability across different tissues and research questions (+7% bladder, +15% iCCA). DISCUSSION It is crucial to be aware of the transferability challenge: achieving good performance during development does not necessarily translate to good performance in real-world applications. The inclusion of existing methods to enhance transferability, such as stain normalization and NoisyEnsemble, and their ongoing refinement, is of importance.
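Editor's note: the abstract does not state which stain-normalization algorithm was used; one common option is a Reinhard-style normalization that matches LAB-space statistics to a reference tile, sketched below (scikit-image assumed).

```python
# Reinhard-style stain normalization: match each slide's LAB statistics to a reference tile.
# Generic sketch only; the study may have used a different normalization method.
import numpy as np
from skimage import color

def reinhard_normalize(img_rgb, ref_rgb):
    """img_rgb, ref_rgb: uint8 RGB arrays (H, W, 3); returns a normalized uint8 image."""
    img_lab = color.rgb2lab(img_rgb)
    ref_lab = color.rgb2lab(ref_rgb)
    img_mean, img_std = img_lab.mean(axis=(0, 1)), img_lab.std(axis=(0, 1)) + 1e-8
    ref_mean, ref_std = ref_lab.mean(axis=(0, 1)), ref_lab.std(axis=(0, 1))
    norm_lab = (img_lab - img_mean) / img_std * ref_std + ref_mean
    norm_rgb = color.lab2rgb(norm_lab)                  # float image in [0, 1]
    return (np.clip(norm_rgb, 0, 1) * 255).astype(np.uint8)

# Normalizing external-cohort tiles to the training institution's color profile
# before inference is one way to realize the reported transferability gains.
```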
Affiliation(s)
- Robin S Mayer
- Universitätsklinikum, Dr. Senckenbergisches Institut für Pathologie, Goethe-Universität Frankfurt, Theodor-Stern-Kai 7, 60596, Frankfurt am Main, Deutschland
- Maximilian N Kinzler
- Universitätsklinikum, Dr. Senckenbergisches Institut für Pathologie, Goethe-Universität Frankfurt, Theodor-Stern-Kai 7, 60596, Frankfurt am Main, Deutschland
- Universitätsklinikum, Medizinische Klinik 1, Goethe-Universität Frankfurt, Frankfurt am Main, Deutschland
- Alexandra K Stoll
- Universitätsklinikum, Dr. Senckenbergisches Institut für Pathologie, Goethe-Universität Frankfurt, Theodor-Stern-Kai 7, 60596, Frankfurt am Main, Deutschland
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Deutschland
- Steffen Gretser
- Universitätsklinikum, Dr. Senckenbergisches Institut für Pathologie, Goethe-Universität Frankfurt, Theodor-Stern-Kai 7, 60596, Frankfurt am Main, Deutschland
- Paul K Ziegler
- Universitätsklinikum, Dr. Senckenbergisches Institut für Pathologie, Goethe-Universität Frankfurt, Theodor-Stern-Kai 7, 60596, Frankfurt am Main, Deutschland
- Anna Saborowski
- Klinik für Gastroenterologie, Hepatologie, Infektiologie und Endokrinologie, Medizinische Hochschule Hannover, Hannover, Deutschland
- Henning Reis
- Universitätsklinikum, Dr. Senckenbergisches Institut für Pathologie, Goethe-Universität Frankfurt, Theodor-Stern-Kai 7, 60596, Frankfurt am Main, Deutschland
- Arndt Vogel
- Klinik für Gastroenterologie, Hepatologie, Infektiologie und Endokrinologie, Medizinische Hochschule Hannover, Hannover, Deutschland
- Peter J Wild
- Universitätsklinikum, Dr. Senckenbergisches Institut für Pathologie, Goethe-Universität Frankfurt, Theodor-Stern-Kai 7, 60596, Frankfurt am Main, Deutschland
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Deutschland
- Wildlab, University Hospital Frankfurt MVZ GmbH, Frankfurt am Main, Deutschland
- Frankfurt Cancer Institute (FCI), Frankfurt am Main, Deutschland
- University Cancer Center (UCT) Frankfurt-Marburg, Frankfurt am Main, Deutschland
- Nadine Flinner
- Universitätsklinikum, Dr. Senckenbergisches Institut für Pathologie, Goethe-Universität Frankfurt, Theodor-Stern-Kai 7, 60596, Frankfurt am Main, Deutschland.
- Frankfurt Institute for Advanced Studies (FIAS), Frankfurt am Main, Deutschland.
- Frankfurt Cancer Institute (FCI), Frankfurt am Main, Deutschland.
- University Cancer Center (UCT) Frankfurt-Marburg, Frankfurt am Main, Deutschland.
4
Ghayda RA, Cannarella R, Calogero AE, Shah R, Rambhatla A, Zohdy W, Kavoussi P, Avidor-Reiss T, Boitrelle F, Mostafa T, Saleh R, Toprak T, Birowo P, Salvio G, Calik G, Kuroda S, Kaiyal RS, Ziouziou I, Crafa A, Phuoc NHV, Russo GI, Durairajanayagam D, Al-Hashimi M, Hamoda TAAAM, Pinggera GM, Adriansjah R, Maldonado Rosas I, Arafa M, Chung E, Atmoko W, Rocco L, Lin H, Huyghe E, Kothari P, Solorzano Vazquez JF, Dimitriadis F, Garrido N, Homa S, Falcone M, Sabbaghian M, Kandil H, Ko E, Martinez M, Nguyen Q, Harraz AM, Serefoglu EC, Karthikeyan VS, Tien DMB, Jindal S, Micic S, Bellavia M, Alali H, Gherabi N, Lewis S, Park HJ, Simopoulou M, Sallam H, Ramirez L, Colpi G, Agarwal A. Artificial Intelligence in Andrology: From Semen Analysis to Image Diagnostics. World J Mens Health 2024; 42:39-61. [PMID: 37382282] [PMCID: PMC10782130] [DOI: 10.5534/wjmh.230050]
Abstract
Artificial intelligence (AI) in medicine has gained a lot of momentum in the last decades and has been applied to various fields of medicine. Advances in computer science, medical informatics, robotics, and the need for personalized medicine have facilitated the role of AI in modern healthcare. Similarly, as in other fields, AI applications, such as machine learning, artificial neural networks, and deep learning, have shown great potential in andrology and reproductive medicine. AI-based tools are poised to become valuable assets with abilities to support and aid in diagnosing and treating male infertility, and in improving the accuracy of patient care. These automated, AI-based predictions may offer consistency and efficiency in terms of time and cost in infertility research and clinical management. In andrology and reproductive medicine, AI has been used for objective sperm, oocyte, and embryo selection, prediction of surgical outcomes, cost-effective assessment, development of robotic surgery, and clinical decision-making systems. In the future, better integration and implementation of AI into medicine will undoubtedly lead to pioneering evidence-based breakthroughs and the reshaping of andrology and reproductive medicine.
Affiliation(s)
- Ramy Abou Ghayda
- Urology Institute, University Hospitals, Case Western Reserve University, Cleveland, OH, USA
- Rossella Cannarella
- Department of Clinical and Experimental Medicine, University of Catania, Catania, Italy
- Glickman Urological & Kidney Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Aldo E. Calogero
- Department of Clinical and Experimental Medicine, University of Catania, Catania, Italy
- Rupin Shah
- Department of Urology, Lilavati Hospital and Research Centre, Mumbai, India
- Amarnath Rambhatla
- Department of Urology, Henry Ford Health System, Vattikuti Urology Institute, Detroit, MI, USA
- Wael Zohdy
- Andrology and STDs, Cairo University, Cairo, Egypt
- Parviz Kavoussi
- Department of Urology, University of Texas Health Science Center at San Antonio, San Antonio, TX, USA
- Tomer Avidor-Reiss
- Department of Biological Sciences, University of Toledo, Toledo, OH, USA
- Department of Urology, College of Medicine and Life Sciences, University of Toledo, Toledo, OH, USA
- Florence Boitrelle
- Reproductive Biology, Fertility Preservation, Andrology, CECOS, Poissy Hospital, Poissy, France
- Department of Biology, Reproduction, Epigenetics, Environment, and Development, Paris Saclay University, UVSQ, INRAE, BREED, Paris, France
- Taymour Mostafa
- Andrology, Sexology & STIs Department, Faculty of Medicine, Cairo University, Cairo, Egypt
- Ramadan Saleh
- Department of Dermatology, Venereology and Andrology, Faculty of Medicine, Sohag University, Sohag, Egypt
- Tuncay Toprak
- Department of Urology, Fatih Sultan Mehmet Training and Research Hospital, University of Health Sciences, Istanbul, Turkey
- Ponco Birowo
- Department of Urology, Dr. Cipto Mangunkusumo Hospital, Faculty of Medicine, Universitas Indonesia, Jakarta, Indonesia
- Gianmaria Salvio
- Department of Endocrinology, Polytechnic University of Marche, Ancona, Italy
- Gokhan Calik
- Department of Urology, Istanbul Medipol University, Istanbul, Turkey
- Shinnosuke Kuroda
- Glickman Urological & Kidney Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Department of Urology, Reproduction Center, Yokohama City University Medical Center, Yokohama, Japan
- Raneen Sawaid Kaiyal
- Glickman Urological & Kidney Institute, Cleveland Clinic Foundation, Cleveland, OH, USA
- Imad Ziouziou
- Department of Urology, College of Medicine and Pharmacy, Ibn Zohr University, Agadir, Morocco
- Andrea Crafa
- Department of Clinical and Experimental Medicine, University of Catania, Catania, Italy
- Nguyen Ho Vinh Phuoc
- Department of Andrology, Binh Dan Hospital, Ho Chi Minh City, Vietnam
- Department of Urology and Andrology, Pham Ngoc Thach University of Medicine, Ho Chi Minh City, Vietnam
- Damayanthi Durairajanayagam
- Department of Physiology, Faculty of Medicine, Universiti Teknologi MARA, Sungai Buloh Campus, Selangor, Malaysia
- Manaf Al-Hashimi
- Department of Urology, Burjeel Hospital, Abu Dhabi, United Arab Emirates (UAE)
- Khalifa University, College of Medicine and Health Science, Abu Dhabi, United Arab Emirates (UAE)
- Taha Abo-Almagd Abdel-Meguid Hamoda
- Department of Urology, King Abdulaziz University, Jeddah, Saudi Arabia
- Department of Urology, Faculty of Medicine, Minia University, El-Minia, Egypt
- Ricky Adriansjah
- Department of Urology, Hasan Sadikin General Hospital, Universitas Padjadjaran, Banding, Indonesia
- Mohamed Arafa
- Department of Urology, Hamad Medical Corporation, Doha, Qatar
- Department of Urology, Weill Cornell Medical-Qatar, Doha, Qatar
- Eric Chung
- Department of Urology, Princess Alexandra Hospital, University of Queensland, Brisbane QLD, Australia
- Widi Atmoko
- Department of Urology, Dr. Cipto Mangunkusumo Hospital, Faculty of Medicine, Universitas Indonesia, Jakarta, Indonesia
- Lucia Rocco
- Department of Environmental, Biological and Pharmaceutical Sciences and Technologies, University of Campania “Luigi Vanvitelli”, Caserta, Italy
- Haocheng Lin
- Department of Urology, Peking University Third Hospital, Peking University, Beijing, China
- Eric Huyghe
- Department of Urology and Andrology, University Hospital of Toulouse, Toulouse, France
- Priyank Kothari
- Department of Urology, B.Y.L. Nair Charitable Hospital, Topiwala National Medical College, Mumbai, India
- Fotios Dimitriadis
- Department of Urology, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nicolas Garrido
- IVIRMA Global Research Alliance, IVI Foundation, Instituto de Investigación Sanitaria La Fe (IIS La Fe), Valencia, Spain
- Sheryl Homa
- Department of Biosciences, University of Kent, Canterbury, United Kingdom
- Marco Falcone
- Department of Urology, Molinette Hospital, A.O.U. Città della Salute e della Scienza, University of Turin, Torino, Italy
- Marjan Sabbaghian
- Department of Andrology, Reproductive Biomedicine Research Center, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran
- Edmund Ko
- Department of Urology, Loma Linda University Health, Loma Linda, CA, USA
- Marlon Martinez
- Section of Urology, Department of Surgery, University of Santo Tomas Hospital, Manila, Philippines
- Quang Nguyen
- Section of Urology, Department of Surgery, University of Santo Tomas Hospital, Manila, Philippines
- Center for Andrology and Sexual Medicine, Viet Duc University Hospital, Hanoi, Vietnam
- Department of Urology, Andrology and Sexual Medicine, University of Medicine and Pharmacy, Vietnam National University, Hanoi, Vietnam
- Ahmed M. Harraz
- Urology and Nephrology Center, Mansoura University, Mansoura, Egypt
- Department of Surgery, Urology Unit, Farwaniya Hospital, Farwaniya, Kuwait
- Department of Urology, Sabah Al Ahmad Urology Center, Kuwait City, Kuwait
- Ege Can Serefoglu
- Department of Urology, Biruni University School of Medicine, Istanbul, Turkey
- Dung Mai Ba Tien
- Department of Andrology, Binh Dan Hospital, Ho Chi Minh City, Vietnam
- Sunil Jindal
- Department of Andrology and Reproductive Medicine, Jindal Hospital, Meerut, India
- Sava Micic
- Department of Andrology, Uromedica Polyclinic, Belgrade, Serbia
- Marina Bellavia
- Andrology and IVF Center, Next Fertility Procrea, Lugano, Switzerland
- Hamed Alali
- King Fahad Specialist Hospital, Dammam, Saudi Arabia
- Nazim Gherabi
- Andrology Committee of the Algerian Association of Urology, Algiers, Algeria
- Sheena Lewis
- Examen Lab Ltd., Northern Ireland, United Kingdom
- Hyun Jun Park
- Department of Urology, Pusan National University School of Medicine, Busan, Korea
- Medical Research Institute of Pusan National University Hospital, Busan, Korea
- Mara Simopoulou
- Department of Experimental Physiology, School of Health Sciences, Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Hassan Sallam
- Alexandria University Faculty of Medicine, Alexandria, Egypt
- Liliana Ramirez
- IVF Laboratory, CITMER Reproductive Medicine, Mexico City, Mexico
- Giovanni Colpi
- Andrology and IVF Center, Next Fertility Procrea, Lugano, Switzerland
- Ashok Agarwal
- Global Andrology Forum, Moreland Hills, OH, USA
- Cleveland Clinic, Cleveland, OH, USA
5
Chau RCW, Li GH, Tew IM, Thu KM, McGrath C, Lo WL, Ling WK, Hsung RTC, Lam WYH. Accuracy of Artificial Intelligence-Based Photographic Detection of Gingivitis. Int Dent J 2023; 73:724-730. [PMID: 37117096] [PMCID: PMC10509417] [DOI: 10.1016/j.identj.2023.03.007]
Abstract
OBJECTIVES Gingivitis is one of the most prevalent plaque-initiated dental diseases globally. It is challenging to maintain satisfactory plaque control without continuous professional advice. Artificial intelligence may be used to provide automated visual plaque control advice based on intraoral photographs. METHODS Frontal-view intraoral photographs fulfilling the selection criteria were collected. Along the gingival margin, the gingival conditions of individual sites were labelled as healthy, diseased, or questionable. Photographs were randomly assigned to training or validation datasets. The training dataset was input into a novel artificial intelligence system, and its accuracy in detecting gingivitis, including sensitivity, specificity, and mean intersection-over-union, was analysed using the validation dataset. The accuracy was reported according to the STARD-2015 statement. RESULTS A total of 567 intraoral photographs were collected and labelled, of which 80% were used for training and 20% for validation. The training dataset comprised a total of 113,745,208 pixels, of which 9,270,413; 5,711,027; and 4,596,612 pixels were labelled as healthy, diseased, and questionable, respectively. The validation dataset comprised 28,319,607 pixels, of which 1,732,031; 1,866,104; and 1,116,493 pixels were labelled as healthy, diseased, and questionable, respectively. The AI correctly predicted 1,114,623 healthy and 1,183,718 diseased pixels, corresponding to a sensitivity of 0.92 and a specificity of 0.94. The mean intersection-over-union of the system was 0.60, above the commonly accepted threshold of 0.50. CONCLUSIONS Artificial intelligence could identify specific sites with and without gingival inflammation, with high sensitivity and high specificity on par with visual examination by a human dentist. This system may be used to monitor the effectiveness of patients' plaque control.
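Editor's note: as a worked illustration of how the reported pixel-level metrics relate to a confusion matrix (the counts below are placeholders, since the full confusion matrix is not reported in the abstract):

```python
# Pixel-level metrics from a binary confusion matrix (illustrative counts only).
def pixel_metrics(tp, fp, tn, fn):
    sensitivity = tp / (tp + fn)      # recall for the "diseased" class
    specificity = tn / (tn + fp)
    iou = tp / (tp + fp + fn)         # intersection-over-union for one class
    return sensitivity, specificity, iou

# Example with made-up counts: 1.2 M true-positive diseased pixels,
# 0.1 M false positives, 1.7 M true negatives, 0.1 M false negatives.
sens, spec, iou = pixel_metrics(tp=1_200_000, fp=100_000, tn=1_700_000, fn=100_000)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, IoU={iou:.2f}")
# The mean IoU reported in the abstract (0.60) is the average of such per-class
# IoU values over the healthy, diseased, and questionable classes.
```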
Affiliation(s)
- Reinhard Chun Wang Chau
- Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Guan-Hua Li
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- In Meei Tew
- Faculty of Dentistry, The National University of Malaysia, Kuala Lumpur, Malaysia
- Khaing Myat Thu
- Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Colman McGrath
- Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Wai-Lun Lo
- Department of Computer Science, Hong Kong Chu Hai College, Hong Kong Special Administrative Region, China
- Wing-Kuen Ling
- School of Information Engineering, Guangdong University of Technology, Guangzhou, China
- Richard Tai-Chiu Hsung
- Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China; School of Information Engineering, Guangdong University of Technology, Guangzhou, China; Department of Computer Science, Hong Kong Chu Hai College, Hong Kong Special Administrative Region, China.
- Walter Yu Hang Lam
- Faculty of Dentistry, The University of Hong Kong, Hong Kong Special Administrative Region, China; Musketeers Foundation Institute of Data Science, The University of Hong Kong, Hong Kong Special Administrative Region, China.
6
Wary P, Hossu G, Ambarki K, Nickel D, Arberet S, Oster J, Orry X, Laurent V. Deep learning HASTE sequence compared with T2-weighted BLADE sequence for liver MRI at 3 Tesla: a qualitative and quantitative prospective study. Eur Radiol 2023; 33:6817-6827. [PMID: 37188883] [DOI: 10.1007/s00330-023-09693-y]
Abstract
OBJECTIVES To qualitatively and quantitatively compare a single breath-hold fast half-Fourier single-shot turbo spin echo sequence with deep learning reconstruction (DL HASTE) with T2-weighted BLADE sequence for liver MRI at 3 T. METHODS From December 2020 to January 2021, patients with liver MRI were prospectively included. For qualitative analysis, sequence quality, presence of artifacts, conspicuity, and presumed nature of the smallest lesion were assessed using the chi-squared and McNemar tests. For quantitative analysis, number of liver lesions, size of the smallest lesion, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) in both sequences were assessed using the paired Wilcoxon signed-rank test. Intraclass correlation coefficients (ICCs) and kappa coefficients were used to assess agreement between the two readers. RESULTS One hundred and twelve patients were evaluated. Overall image quality (p = .006), artifacts (p < .001), and conspicuity of the smallest lesion (p = .001) were significantly better for the DL HASTE sequence than for the T2-weighted BLADE sequence. Significantly more liver lesions were detected with the DL HASTE sequence (356 lesions) than with the T2-weighted BLADE sequence (320 lesions; p < .001). CNR was significantly higher for the DL HASTE sequence (p < .001). SNR was higher for the T2-weighted BLADE sequence (p < .001). Interreader agreement was moderate to excellent depending on the sequence. Of the 41 supernumerary lesions visible only on the DL HASTE sequence, 38 (93%) were true-positives. CONCLUSION The DL HASTE sequence can be used to improve image quality and contrast and reduces artifacts, allowing the detection of more liver lesions than with the T2-weighted BLADE sequence. CLINICAL RELEVANCE STATEMENT The DL HASTE sequence is superior to the T2-weighted BLADE sequence for the detection of focal liver lesions and can be used in daily practice as a standard sequence. KEY POINTS • The half-Fourier acquisition single-shot turbo spin echo sequence with deep learning reconstruction (DL HASTE sequence) has better overall image quality, reduced artifacts (particularly motion artifacts), and improved contrast, allowing the detection of more liver lesions than with the T2-weighted BLADE sequence. • The acquisition time of the DL HASTE sequence is at least eight times faster (21 s) than that of the T2-weighted BLADE sequence (3-5 min). • The DL HASTE sequence could replace the conventional T2-weighted BLADE sequence to meet the growing indication for hepatic MRI in clinical practice, given its diagnostic and time-saving performance.
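Editor's note: SNR and CNR definitions vary between liver MRI studies; one common region-of-interest formulation, which may differ from the exact definition used here, is sketched below.

```python
# Common ROI-based SNR / CNR estimates for MR images (definitions vary between studies).
import numpy as np

def snr_cnr(liver_roi, lesion_roi, background_roi):
    """Each argument is a 1-D array of pixel intensities from the corresponding ROI."""
    noise_sd = np.std(background_roi)                        # background noise estimate
    snr = np.mean(liver_roi) / noise_sd
    cnr = abs(np.mean(lesion_roi) - np.mean(liver_roi)) / noise_sd
    return snr, cnr

rng = np.random.default_rng(0)                               # synthetic ROI values for illustration
snr, cnr = snr_cnr(rng.normal(300, 20, 500), rng.normal(450, 30, 200), rng.normal(0, 10, 500))
print(f"SNR={snr:.1f}, CNR={cnr:.1f}")
```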
Affiliation(s)
- Pierre Wary
- Department of Adult Radiology, CHRU de Nancy, 5 Rue du Morvan, 54500, Vandoeuvre-lès-Nancy, France.
- Gabriela Hossu
- Clinical Investigation Center Technological Innovation of Nancy, Inserm, CHRU de Nancy, Vandoeuvre-lès-Nancy, France
- Adaptive Diagnostic and Interventional Imaging, Inserm, CHRU de Nancy, Vandoeuvre-lès-Nancy, France
- Khalid Ambarki
- Siemens Healthcare, Siemens Healthcare SAS, Saint Denis, France
- Dominik Nickel
- Siemens Healthcare GmbH, MR Application Predevelopment, Erlangen, Germany
- Simon Arberet
- Siemens Healthineers, Digital Technology & Innovation, Princeton, NJ, USA
- Julien Oster
- Clinical Investigation Center Technological Innovation of Nancy, Inserm, CHRU de Nancy, Vandoeuvre-lès-Nancy, France
- Adaptive Diagnostic and Interventional Imaging, Inserm, CHRU de Nancy, Vandoeuvre-lès-Nancy, France
- Xavier Orry
- Department of Adult Radiology, CHRU de Nancy, 5 Rue du Morvan, 54500, Vandoeuvre-lès-Nancy, France
- Valérie Laurent
- Department of Adult Radiology, CHRU de Nancy, 5 Rue du Morvan, 54500, Vandoeuvre-lès-Nancy, France
- Adaptive Diagnostic and Interventional Imaging, Inserm, CHRU de Nancy, Vandoeuvre-lès-Nancy, France
7
Said D, Carbonell G, Stocker D, Hectors S, Vietti-Violi N, Bane O, Chin X, Schwartz M, Tabrizian P, Lewis S, Greenspan H, Jégou S, Schiratti JB, Jehanno P, Taouli B. Semiautomated segmentation of hepatocellular carcinoma tumors with MRI using convolutional neural networks. Eur Radiol 2023; 33:6020-6032. [PMID: 37071167] [DOI: 10.1007/s00330-023-09613-0]
Abstract
OBJECTIVE To assess the performance of convolutional neural networks (CNNs) for semiautomated segmentation of hepatocellular carcinoma (HCC) tumors on MRI. METHODS This retrospective single-center study included 292 patients (237 M/55F, mean age 61 years) with pathologically confirmed HCC between 08/2015 and 06/2019 and who underwent MRI before surgery. The dataset was randomly divided into training (n = 195), validation (n = 66), and test sets (n = 31). Volumes of interest (VOIs) were manually placed on index lesions by 3 independent radiologists on different sequences (T2-weighted imaging [WI], T1WI pre-and post-contrast on arterial [AP], portal venous [PVP], delayed [DP, 3 min post-contrast] and hepatobiliary phases [HBP, when using gadoxetate], and diffusion-weighted imaging [DWI]). Manual segmentation was used as ground truth to train and validate a CNN-based pipeline. For semiautomated segmentation of tumors, we selected a random pixel inside the VOI, and the CNN provided two outputs: single slice and volumetric outputs. Segmentation performance and inter-observer agreement were analyzed using the 3D Dice similarity coefficient (DSC). RESULTS A total of 261 HCCs were segmented on the training/validation sets, and 31 on the test set. The median lesion size was 3.0 cm (IQR 2.0-5.2 cm). Mean DSC (test set) varied depending on the MRI sequence with a range between 0.442 (ADC) and 0.778 (high b-value DWI) for single-slice segmentation; and between 0.305 (ADC) and 0.667 (T1WI pre) for volumetric-segmentation. Comparison between the two models showed better performance in single-slice segmentation, with statistical significance on T2WI, T1WI-PVP, DWI, and ADC. Inter-observer reproducibility of segmentation analysis showed a mean DSC of 0.71 in lesions between 1 and 2 cm, 0.85 in lesions between 2 and 5 cm, and 0.82 in lesions > 5 cm. CONCLUSION CNN models have fair to good performance for semiautomated HCC segmentation, depending on the sequence and tumor size, with better performance for the single-slice approach. Refinement of volumetric approaches is needed in future studies. KEY POINTS • Semiautomated single-slice and volumetric segmentation using convolutional neural networks (CNNs) models provided fair to good performance for hepatocellular carcinoma segmentation on MRI. • CNN models' performance for HCC segmentation accuracy depends on the MRI sequence and tumor size, with the best results on diffusion-weighted imaging and T1-weighted imaging pre-contrast, and for larger lesions.
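Editor's note: the 3D Dice similarity coefficient used as the segmentation metric can be computed from two binary masks; a minimal NumPy illustration (not the authors' code) follows.

```python
# 3D Dice similarity coefficient between a predicted and a reference binary mask.
import numpy as np

def dice_3d(pred, ref):
    """pred, ref: boolean 3-D arrays of identical shape."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0   # both empty -> perfect agreement

# Example: compare a CNN output with a manually drawn VOI.
pred = np.zeros((64, 64, 32), bool); pred[20:40, 20:40, 10:20] = True
ref = np.zeros_like(pred);           ref[22:42, 20:40, 10:20] = True
print(f"DSC = {dice_3d(pred, ref):.3f}")
```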
Affiliation(s)
- Daniela Said
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Radiology, Clínica Universidad de los Andes, Santiago, Chile
- Guillermo Carbonell
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Radiology, University Hospital Virgen de La Arrixaca, Murcia, Spain
- Daniel Stocker
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Institute of Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland
- Stefanie Hectors
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Naik Vietti-Violi
- Department of Radiology, Lausanne University Hospital, Lausanne, Switzerland
- Octavia Bane
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Xing Chin
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Myron Schwartz
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Parissa Tabrizian
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Sara Lewis
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, 1470 Madison Ave, New York, NY, 10029, USA
- Hayit Greenspan
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Bachir Taouli
- BioMedical Engineering and Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, USA.
- Department of Diagnostic, Molecular and Interventional Radiology, Icahn School of Medicine at Mount Sinai, 1470 Madison Ave, New York, NY, 10029, USA.
8
Shen DD, Bao SL, Wang Y, Chen YC, Zhang YC, Li XC, Ding YC, Jia ZZ. An automatic and accurate deep learning-based neuroimaging pipeline for the neonatal brain. Pediatr Radiol 2023; 53:1685-1697. [PMID: 36884052] [DOI: 10.1007/s00247-023-05620-x]
Abstract
BACKGROUND Accurate segmentation of neonatal brain tissues and structures is crucial for studying normal development and diagnosing early neurodevelopmental disorders. However, there is a lack of an end-to-end pipeline for automated segmentation and imaging analysis of the normal and abnormal neonatal brain. OBJECTIVE To develop and validate a deep learning-based pipeline for neonatal brain segmentation and analysis of structural magnetic resonance images (MRI). MATERIALS AND METHODS Two cohorts were enrolled in the study, including cohort 1 (582 neonates from the developing Human Connectome Project) and cohort 2 (37 neonates imaged using a 3.0-tesla MRI scanner in our hospital). We developed a deep learning-based architecture capable of brain segmentation into 9 tissues and 87 structures. Then, extensive validations were performed for accuracy, effectiveness, robustness and generality of the pipeline. Furthermore, regional volume and cortical surface estimation were measured through an in-house bash script implemented in FSL (Oxford Centre for Functional MRI of the Brain Software Library) to ensure reliability of the pipeline. The Dice similarity coefficient (DSC), the 95th percentile Hausdorff distance (H95) and the intraclass correlation coefficient (ICC) were calculated to assess the quality of our pipeline. Finally, we fine-tuned and validated our pipeline on 2-dimensional thick-slice MRI in cohorts 1 and 2. RESULTS The deep learning-based model showed excellent performance for neonatal brain tissue and structural segmentation, with the best DSC and H95 of 0.96 and 0.99 mm, respectively. In terms of regional volume and cortical surface analysis, our model showed good agreement with ground truth. The ICC values for the regional volume were all above 0.80. Considering the thick-slice image pipeline, the same trend was observed for brain segmentation and analysis. The best DSC and H95 were 0.92 and 3.00 mm, respectively. The regional volumes and surface curvature had ICC values just below 0.80. CONCLUSIONS We propose an automatic, accurate, stable and reliable pipeline for neonatal brain segmentation and analysis from thin- and thick-slice structural MRI. The external validation showed very good reproducibility of the pipeline.
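Editor's note: the 95th percentile Hausdorff distance (H95) reported above can be computed from the voxels of two masks; the SciPy sketch below simplifies by using all foreground voxels and assuming isotropic 1 mm spacing.

```python
# 95th percentile Hausdorff distance between two binary segmentations (simplified:
# all foreground voxels are used and isotropic 1 mm spacing is assumed).
import numpy as np
from scipy.spatial import cKDTree

def hd95(mask_a, mask_b, spacing=(1.0, 1.0, 1.0)):
    pts_a = np.argwhere(mask_a) * np.asarray(spacing)
    pts_b = np.argwhere(mask_b) * np.asarray(spacing)
    d_ab, _ = cKDTree(pts_b).query(pts_a)   # distance from each A voxel to nearest B voxel
    d_ba, _ = cKDTree(pts_a).query(pts_b)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

a = np.zeros((40, 40, 40), bool); a[10:30, 10:30, 10:30] = True
b = np.zeros_like(a);             b[12:32, 10:30, 10:30] = True
print(f"HD95 = {hd95(a, b):.2f} mm")
```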
Affiliation(s)
- Dan Dan Shen
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Shan Lei Bao
- Department of Nuclear Medicine, Affiliated Hospital and Medical School of Nantong University, Jiangsu, People's Republic of China
- Yan Wang
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Ying Chi Chen
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Yu Cheng Zhang
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Xing Can Li
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Yu Chen Ding
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China
- Zhong Zheng Jia
- Department of Medical Imaging, Affiliated Hospital and Medical School of Nantong University, NO.20 Xisi Road, Nantong, Jiangsu, 226001, People's Republic of China.
9
Wicaksono KP, Fujimoto K, Fushimi Y, Sakata A, Okuchi S, Hinoda T, Nakajima S, Yamao Y, Yoshida K, Miyake KK, Numamoto H, Saga T, Nakamoto Y. Super-resolution application of generative adversarial network on brain time-of-flight MR angiography: image quality and diagnostic utility evaluation. Eur Radiol 2023; 33:936-946. [PMID: 36006430] [DOI: 10.1007/s00330-022-09103-9]
Abstract
OBJECTIVES To develop a generative adversarial network (GAN) model to improve image resolution of brain time-of-flight MR angiography (TOF-MRA) and to evaluate the image quality and diagnostic utility of the reconstructed images. METHODS We included 180 patients who underwent 1-min low-resolution (LR) and 4-min high-resolution (routine) brain TOF-MRA scans. We used 50 patients' datasets for training, 12 for quantitative image quality evaluation, and the rest for diagnostic validation. We modified a pix2pix GAN to suit TOF-MRA datasets and fine-tuned GAN-related parameters, including loss functions. Maximum intensity projection images were generated and compared using multi-scale structural similarity (MS-SSIM) and information theoretic-based statistic similarity measure (ISSM) index. Two radiologists scored vessels' visibilities using a 5-point Likert scale. Finally, we evaluated sensitivities and specificities of GAN-MRA in depicting aneurysms, stenoses, and occlusions. RESULTS The optimal model was achieved with a lambda of 1e5 and L1 + MS-SSIM loss. Image quality metrics for GAN-MRA were higher than those for LR-MRA (MS-SSIM, 0.87 vs. 0.73; ISSM, 0.60 vs. 0.35; p.adjusted < 0.001). Vessels' visibility of GAN-MRA was superior to LR-MRA (rater A, 4.18 vs. 2.53; rater B, 4.61 vs. 2.65; p.adjusted < 0.001). In depicting vascular abnormalities, GAN-MRA showed comparable sensitivities and specificities, with greater sensitivity for aneurysm detection by one rater (93% vs. 84%, p < 0.05). CONCLUSIONS An optimized GAN could significantly improve the image quality and vessel visibility of low-resolution brain TOF-MRA with equivalent sensitivity and specificity in detecting aneurysms, stenoses, and occlusions. KEY POINTS • GAN could significantly improve the image quality and vessel visualization of low-resolution brain MR angiography (MRA). • With optimally adjusted training parameters, the GAN model did not degrade diagnostic performance by generating substantial false positives or false negatives. • GAN could be a promising approach for obtaining higher resolution TOF-MRA from images scanned in a fraction of time.
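Editor's note: the abstract states that the tuned generator loss was L1 + MS-SSIM with a lambda of 1e5; one plausible PyTorch formulation of such a combined loss is sketched below, using the third-party pytorch-msssim package, and the paper's exact weighting scheme may differ.

```python
# Combined L1 + MS-SSIM generator loss for a pix2pix-style GAN (PyTorch sketch).
# Requires the third-party "pytorch-msssim" package; the paper's exact formulation may differ.
import torch
import torch.nn.functional as F
from pytorch_msssim import ms_ssim

def generator_loss(fake_hr, real_hr, disc_logits_on_fake, lam=1e5):
    # Adversarial term: the generator wants the discriminator to call its output real.
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_on_fake, torch.ones_like(disc_logits_on_fake))
    # Reconstruction terms: pixel-wise L1 plus multi-scale structural similarity.
    l1 = F.l1_loss(fake_hr, real_hr)
    msssim = 1.0 - ms_ssim(fake_hr, real_hr, data_range=1.0)
    return adv + lam * (l1 + msssim)
```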
Affiliation(s)
- Krishna Pandu Wicaksono
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Koji Fujimoto
- Department of Real World Data Research and Development, Graduate School of Medicine, Kyoto University, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto, 606-8507, Japan.
- Yasutaka Fushimi
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Akihiko Sakata
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Sachi Okuchi
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Takuya Hinoda
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Satoshi Nakajima
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Yukihiro Yamao
- Department of Neurosurgery, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Kazumichi Yoshida
- Department of Neurosurgery, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Kanae Kawai Miyake
- Department of Advanced Medical Imaging Research, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Hitomi Numamoto
- Department of Advanced Medical Imaging Research, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Tsuneo Saga
- Department of Advanced Medical Imaging Research, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
- Yuji Nakamoto
- Department of Diagnostic Imaging and Nuclear Medicine, Graduate School of Medicine, Kyoto University, Kyoto, 606-8507, Japan
10
Wang H, Liu X, Kong L, Huang Y, Chen H, Ma X, Duan Y, Shao Y, Feng A, Shen Z, Gu H, Kong Q, Xu Z, Zhou Y. Improving CBCT image quality to the CT level using RegGAN in esophageal cancer adaptive radiotherapy. Strahlenther Onkol 2023; 199:485-497. [PMID: 36688953] [PMCID: PMC10133081] [DOI: 10.1007/s00066-022-02039-5]
Abstract
OBJECTIVE This study aimed to improve the image quality and CT Hounsfield unit accuracy of daily cone-beam computed tomography (CBCT) using registration generative adversarial networks (RegGAN) and apply synthetic CT (sCT) images to dose calculations in radiotherapy. METHODS The CBCT/planning CT images of 150 esophageal cancer patients undergoing radiotherapy were used for training (120 patients) and testing (30 patients). An unsupervised deep-learning method, the 2.5D RegGAN model with an adaptively trained registration network, was proposed, through which sCT images were generated. The quality of deep-learning-generated sCT images was quantitatively compared to the reference deformed CT (dCT) image using the mean absolute error (MAE), root mean square error (RMSE) of Hounsfield units (HU), and peak signal-to-noise ratio (PSNR). The dose calculation accuracy was further evaluated for esophageal cancer radiotherapy plans, and the same plans were calculated on dCT, CBCT, and sCT images. RESULTS The quality of sCT images produced by RegGAN was significantly improved compared to the original CBCT images. RegGAN achieved the following image quality in the test patients (sCT vs. CBCT): MAE 43.7 ± 4.8 vs. 80.1 ± 9.1; RMSE 67.2 ± 12.4 vs. 124.2 ± 21.8; and PSNR 27.9 ± 5.6 vs. 21.3 ± 4.2. The sCT images generated by the RegGAN model showed superior accuracy in dose calculation, with higher gamma passing rates (93.3 ± 4.4, 90.4 ± 5.2, and 84.3 ± 6.6) compared to the original CBCT images (89.6 ± 5.7, 85.7 ± 6.9, and 72.5 ± 12.5) under the criteria of 3 mm/3%, 2 mm/2%, and 1 mm/1%, respectively. CONCLUSION The proposed deep-learning RegGAN model seems promising for generation of high-quality sCT images from stand-alone thoracic CBCT images in an efficient way and thus has the potential to support CBCT-based esophageal cancer adaptive radiotherapy.
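Editor's note: the reported image-quality metrics can be reproduced on HU arrays as follows; the dynamic range used for PSNR is an assumption, since the abstract does not state it.

```python
# MAE, RMSE, and PSNR between a synthetic CT and the deformed reference CT (HU arrays).
import numpy as np

def sct_quality(sct_hu, dct_hu, data_range=2000.0):
    """data_range is the assumed HU dynamic range used for PSNR."""
    diff = sct_hu.astype(np.float64) - dct_hu.astype(np.float64)
    mae = np.mean(np.abs(diff))
    rmse = np.sqrt(np.mean(diff ** 2))
    psnr = 20.0 * np.log10(data_range / rmse)
    return mae, rmse, psnr

rng = np.random.default_rng(1)
dct = rng.normal(0, 300, (64, 64, 64))               # stand-in reference HU volume
sct = dct + rng.normal(0, 50, dct.shape)             # simulated residual error
print("MAE %.1f, RMSE %.1f, PSNR %.1f dB" % sct_quality(sct, dct))
```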
Affiliation(s)
- Hao Wang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Institute of Modern Physics, Fudan University, Shanghai, China
- Xiao Liu
- Department of Radiotherapy, Shandong Cancer Hospital and Institute, Shandong First Medical University and Shandong Academy of Medical Sciences, Jinan, China
- Ying Huang
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Hua Chen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Xiurui Ma
- Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China
- Yanhua Duan
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Yan Shao
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Aihui Feng
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Zhenjiong Shen
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Hengle Gu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Qing Kong
- Institute of Modern Physics, Fudan University, Shanghai, China
- Zhiyong Xu
- Department of Radiation Oncology, Shanghai Chest Hospital, Shanghai Jiaotong University, Shanghai, China
- Yongkang Zhou
- Department of Radiation Oncology, Zhongshan Hospital, Fudan University, Shanghai, China.
11
Rahimpour M, Saint Martin MJ, Frouin F, Akl P, Orlhac F, Koole M, Malhaire C. Visual ensemble selection of deep convolutional neural networks for 3D segmentation of breast tumors on dynamic contrast enhanced MRI. Eur Radiol 2023; 33:959-969. [PMID: 36074262] [DOI: 10.1007/s00330-022-09113-7]
Abstract
OBJECTIVES To develop a visual ensemble selection of deep convolutional neural networks (CNN) for 3D segmentation of breast tumors using T1-weighted dynamic contrast-enhanced (T1-DCE) MRI. METHODS Multi-center 3D T1-DCE MRI scans (n = 141) were acquired for a cohort of patients diagnosed with locally advanced or aggressive breast cancer. Tumor lesions of 111 scans were equally divided between two radiologists and segmented for training. The additional 30 scans were segmented independently by both radiologists for testing. Three 3D U-Net models were trained using either post-contrast images or a combination of post-contrast and subtraction images fused at either the image or the feature level. Segmentation accuracy was evaluated quantitatively using the Dice similarity coefficient (DSC) and the Hausdorff distance (HD95) and scored qualitatively by a radiologist as excellent, useful, helpful, or unacceptable. Based on this score, a visual ensemble approach selecting the best segmentation among these three models was proposed. RESULTS The mean and standard deviation of DSC and HD95 between the two radiologists were 77.8 ± 10.0% and 5.2 ± 5.9 mm, respectively. Using the visual ensemble selection, a DSC of 78.1 ± 16.2% and an HD95 of 14.1 ± 40.8 mm were reached. The qualitative assessment was rated excellent in 50% of cases and excellent or useful in 77%. CONCLUSION Using subtraction images in addition to post-contrast images provided complementary information for 3D segmentation of breast lesions by CNN. A visual ensemble selection allowing the radiologist to select the optimal segmentation among the three 3D U-Net models achieved results comparable to inter-radiologist agreement, yielding 77% segmented volumes considered excellent or useful. KEY POINTS • Deep convolutional neural networks were developed using T1-weighted post-contrast and subtraction MRI to perform automated 3D segmentation of breast tumors. • A visual ensemble selection allowing the radiologist to choose the best segmentation among the three 3D U-Net models outperformed each of the three models. • The visual ensemble selection provided clinically useful segmentations in 77% of cases, potentially allowing for a valuable reduction of the manual 3D segmentation workload for the radiologist and greatly facilitating quantitative studies on non-invasive biomarkers in breast MRI.
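Editor's note: fusing post-contrast and subtraction images "at the image level" typically means stacking the two volumes as input channels of the 3D U-Net; a sketch of that preprocessing step (array names are illustrative) follows.

```python
# Image-level fusion for a 3D U-Net: stack post-contrast and subtraction volumes as channels.
import numpy as np
import torch

def build_fused_input(post_contrast, pre_contrast):
    """post_contrast, pre_contrast: float32 arrays of shape (D, H, W) from the same exam."""
    subtraction = post_contrast - pre_contrast              # enhancement map
    stack = np.stack([post_contrast, subtraction], axis=0)  # (2, D, H, W)
    # Per-channel z-score normalization before feeding the network.
    mean = stack.mean(axis=(1, 2, 3), keepdims=True)
    std = stack.std(axis=(1, 2, 3), keepdims=True) + 1e-8
    stack = (stack - mean) / std
    return torch.from_numpy(stack).unsqueeze(0)              # (1, 2, D, H, W) batch tensor
```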
12
Jung W, Kim J, Ko J, Jeong G, Kim HG. Highly accelerated 3D MPRAGE using deep neural network-based reconstruction for brain imaging in children and young adults. Eur Radiol 2022. [PMID: 35319078] [DOI: 10.1007/s00330-022-08687-6]
Abstract
OBJECTIVES This study aimed to accelerate the 3D magnetization-prepared rapid gradient-echo (MPRAGE) sequence for brain imaging through a deep neural network (DNN). METHODS This retrospective study used the k-space data of 240 scans (160 for the training set, mean ± standard deviation age, 93 ± 80 months, 94 males; 80 for the test set, 106 ± 83 months, 44 males) of conventional MPRAGE (C-MPRAGE) and 102 scans (77 ± 74 months, 52 males) of both C-MPRAGE and accelerated MPRAGE. All scans were acquired with 3T scanners. The DNN was developed with simulated-acceleration data generated by under-sampling. Quantitative error metrics were compared between images reconstructed with DNN, GRAPPA, and E-SPIRIT using the paired t-test. Qualitative image quality was compared between C-MPRAGE and accelerated MPRAGE reconstructed with the DNN (DNN-MPRAGE) by two readers. Lesions were segmented and the agreement between C-MPRAGE and DNN-MPRAGE was assessed using linear regression. RESULTS Accelerated MPRAGE reduced scan times by 38% compared to C-MPRAGE (142 s vs. 320 s). For quantitative error metrics, the DNN showed better performance than GRAPPA and E-SPIRIT (p < 0.001). For qualitative evaluation, the overall image quality of DNN-MPRAGE was comparable to (p > 0.999) or better than (p = 0.025) that of C-MPRAGE, depending on the reader. Pixelation was reduced in DNN-MPRAGE (p < 0.001). Other qualitative parameters were comparable (p > 0.05). Lesions in C-MPRAGE and DNN-MPRAGE showed good agreement for the Dice similarity coefficient (0.68) and linear regression (R2 = 0.97; p < 0.001). CONCLUSIONS DNN-MPRAGE reduced acquisition time by 38% and showed image quality comparable to C-MPRAGE. KEY POINTS • DNN-MPRAGE reduced acquisition times by 38%. • DNN-MPRAGE outperformed conventional reconstruction on accelerated scans (SSIM of DNN-MPRAGE = 0.96, GRAPPA = 0.43, E-SPIRIT = 0.88; p < 0.001). • Compared to C-MPRAGE scans, DNN-MPRAGE showed improved mean scores for overall image quality (2.46 vs. 2.52; p < 0.001) and comparable perceived SNR (2.56 vs. 2.58; p = 0.08).
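Editor's note: simulated-acceleration training data of this kind are typically produced by retrospectively masking fully sampled k-space; the sketch below assumes Cartesian sampling with a fully sampled center, details not given in the abstract.

```python
# Retrospective under-sampling of fully sampled k-space to simulate an accelerated scan.
# Cartesian sampling with a fully sampled center is assumed for illustration.
import numpy as np

def undersample_kspace(kspace, accel=1.6, center_fraction=0.08, seed=0):
    """kspace: complex array (ny, nx); returns masked k-space and the sampling mask."""
    ny = kspace.shape[0]
    rng = np.random.default_rng(seed)
    mask = rng.random(ny) < (1.0 / accel)          # keep random phase-encode lines
    n_center = int(round(ny * center_fraction))
    start = (ny - n_center) // 2
    mask[start:start + n_center] = True            # always keep the low-frequency center
    return kspace * mask[:, None], mask

full_k = np.fft.fft2(np.random.rand(256, 256))     # stand-in for an acquired slice
under_k, mask = undersample_kspace(full_k)
zero_filled = np.abs(np.fft.ifft2(under_k))        # aliased input the DNN learns to correct
```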
13
Abstract
The field of artificial intelligence (AI) is rapidly advancing, and AI models are increasingly applied in the medical field, especially in medical imaging, pathology, natural language processing, and biosignal analysis. On the basis of these advances, telemedicine, which allows people to receive medical services outside of hospitals or clinics, is also developing in many countries. The mechanisms of deep learning used in medical AI include convolutional neural networks, residual neural networks, and generative adversarial networks. Herein, we investigate the possibility of using these AI methods in the field of craniofacial surgery, with potential applications including craniofacial trauma, congenital anomalies, and cosmetic surgery.
Affiliation(s)
- Jeong Yeop Ryu
- Department of Plastic and Reconstructive Surgery, School of Medicine, Kyungpook National University, Daegu, Korea
- Ho Yun Chung
- Department of Plastic and Reconstructive Surgery, School of Medicine, Kyungpook National University, Daegu, Korea
- Cell & Matrix Research Institute, School of Medicine, Kyungpook National University, Daegu, Korea
- Kang Young Choi
- Department of Plastic and Reconstructive Surgery, School of Medicine, Kyungpook National University, Daegu, Korea
14
Park SY, Kim K, Woo SH, Park JT, Jeong S, Kim J, Hong S. Artificial neural network approach for acute poisoning mortality prediction in emergency departments. Clin Exp Emerg Med 2021; 8:229-236. [PMID: 34649411] [PMCID: PMC8517465] [DOI: 10.15441/ceem.20.113]
Abstract
OBJECTIVE The number of deaths due to acute poisoning (AP) is on the increase. It is crucial to predict AP patient mortality to identify those requiring intensive care for providing appropriate patient care as well as preserving medical resources. The aim of this study is to predict the risk of in-hospital mortality associated with AP using an artificial neural network (ANN) model. METHODS In this multicenter retrospective study, ANN and logistic regression models were constructed using the clinical and laboratory data of 1,304 patients seeking emergency treatment for AP. The ANN model was first trained on 912/1,304 (70%) randomly selected patients and then tested on the remaining 392/1,304 (30%). Receiver operating characteristic curve analysis was used to evaluate the mortality prediction of the two models. RESULTS Age, endotracheal intubation status, and intensive care unit admission were significant predictors of mortality in patients with AP in the multivariate logistic regression model. The ANN model indicated age, Glasgow Coma Scale, intensive care unit admission, and endotracheal intubation status were critical factors among the 12 independent variables related to in-hospital mortality. The area under the receiver operating characteristic curve for mortality prediction was significantly higher in the ANN model compared to the logistic regression model. CONCLUSION This study establishes that the ANN model could be a valuable tool for predicting the risk of death following AP. Thus, it may facilitate effective patient triage and improve the outcomes.
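Editor's note: as an illustration of the modelling comparison described above, a scikit-learn sketch is given below; the features, class balance, and network size are invented stand-ins, not the study's variables.

```python
# Sketch: compare a small neural network with logistic regression for mortality prediction.
# Features and hyperparameters are illustrative, not those of the study.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1304, n_features=12, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

ann = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0).fit(X_train, y_train)
lr = LogisticRegression(max_iter=1000).fit(X_train, y_train)

for name, model in [("ANN", ann), ("Logistic regression", lr)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```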
Collapse
Affiliation(s)
- Seon Yeong Park
- Department of Emergency Medicine, Daejeon St. Mary’s Hospital, The Catholic University of Korea College of Medicine, Daejeon, Korea
| | | | - Seon Hee Woo
- Department of Emergency Medicine, Incheon St. Mary’s Hospital, The Catholic University of Korea College of Medicine, Incheon, Korea
| | - Jung Taek Park
- Department of Emergency Medicine, Uijeongbu St. Mary’s Hospital, The Catholic University of Korea College of Medicine, Uijeongbu, Korea
| | - Sikyoung Jeong
- Department of Emergency Medicine, Daejeon St. Mary’s Hospital, The Catholic University of Korea College of Medicine, Daejeon, Korea
| | - Jinwoo Kim
- Department of Emergency Medical Service, Daejeon Health Institute of Technology, Daejeon, Korea
| | - Sungyoup Hong
- Department of Emergency Medicine, Daejeon St. Mary’s Hospital, The Catholic University of Korea College of Medicine, Daejeon, Korea
| |
Collapse
|
15
|
Kim MW, Jung J, Park SJ, Park YS, Yi JH, Yang WS, Kim JH, Cho BJ, Ha SO. Application of convolutional neural networks for distal radio-ulnar fracture detection on plain radiographs in the emergency room. Clin Exp Emerg Med 2021; 8:120-127. [PMID: 34237817 PMCID: PMC8273672 DOI: 10.15441/ceem.20.091] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Accepted: 09/24/2020] [Indexed: 11/23/2022] Open
Abstract
OBJECTIVE Recent studies have suggested that deep-learning models can satisfactorily assist in fracture diagnosis. We aimed to evaluate the performance of two such models in wrist fracture detection. METHODS We collected image data of patients who visited the emergency department with wrist trauma. A dataset extracted from January 2018 to May 2020 was split into training (90%) and test (10%) datasets, and two types of convolutional neural networks (namely, DenseNet-161 and ResNet-152) were trained to detect wrist fractures. Gradient-weighted class activation mapping was used to highlight the regions of the radiographs that contributed to the decision of the model. Performance of the convolutional neural network models was evaluated using the area under the receiver operating characteristic curve. RESULTS For model training, we used 4,551 radiographs from 798 patients with fractures and 4,443 radiographs from 1,481 patients without fractures. The remaining 10% (300 radiographs from 100 patients with fractures and 690 radiographs from 230 patients without fractures) was used as the test dataset. In the test dataset, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 90.3%, 90.3%, 80.3%, 95.6%, and 90.3% for DenseNet-161 and 88.6%, 88.4%, 76.9%, 94.7%, and 88.5% for ResNet-152, respectively. The areas under the receiver operating characteristic curves of DenseNet-161 and ResNet-152 for wrist fracture detection were 0.962 and 0.947, respectively. CONCLUSION We demonstrated that the DenseNet-161 and ResNet-152 models could help detect wrist fractures in the emergency room with satisfactory performance.
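As a rough illustration of how the two networks named above can be set up for binary fracture classification, the sketch below adapts the torchvision implementations of DenseNet-161 and ResNet-152 by replacing their final layers; the pretrained weight versions, class count, and dummy input are assumptions (a recent torchvision is assumed), not the study's training setup.

```python
# Illustrative sketch: adapting ImageNet-pretrained DenseNet-161 and ResNet-152
# for a two-class (fracture / no fracture) output.
import torch
import torch.nn as nn
from torchvision import models

def build_densenet161(num_classes: int = 2) -> nn.Module:
    model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)  # downloads ImageNet weights
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

def build_resnet152(num_classes: int = 2) -> nn.Module:
    model = models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

# Forward pass on a dummy radiograph batch (3-channel, 224x224) just to show the output shape.
dummy = torch.randn(4, 3, 224, 224)
for name, net in [("DenseNet-161", build_densenet161()), ("ResNet-152", build_resnet152())]:
    net.eval()
    with torch.no_grad():
        logits = net(dummy)
    print(name, logits.shape)   # -> torch.Size([4, 2])
```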
Collapse
Affiliation(s)
- Min Woong Kim
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| | - Jaewon Jung
- Medical Artificial Intelligence Center, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| | - Se Jin Park
- Medical Artificial Intelligence Center, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| | - Young Sun Park
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| | - Jeong Hyeon Yi
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| | - Won Seok Yang
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| | - Jin Hyuck Kim
- Department of Neurology, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| | - Bum-Joo Cho
- Medical Artificial Intelligence Center, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea.,Department of Ophthalmology, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| | - Sang Ook Ha
- Department of Emergency Medicine, Hallym University Sacred Heart Hospital, Hallym University Medical Center, Anyang, Korea
| |
Collapse
|
16
|
Bang CS. [Deep Learning in Upper Gastrointestinal Disorders: Status and Future Perspectives]. Korean J Gastroenterol 2021; 75:120-131. [PMID: 32209800 DOI: 10.4166/kjg.2020.75.3.120] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/13/2020] [Revised: 03/01/2020] [Accepted: 03/02/2020] [Indexed: 12/18/2022]
Abstract
Artificial intelligence using deep learning has been applied to gastrointestinal disorders for the detection, classification, and delineation of various lesion images. With the accumulation of enormous medical records, the evolution of computation power with graphic processing units, and the widespread use of open-source libraries in large-scale machine learning processes, medical artificial intelligence is overcoming its traditional limitations. This paper explains the basic concepts of deep learning model establishment and summarizes previous studies on upper gastrointestinal disorders. The limitations and perspectives on future development are also discussed.
Collapse
Affiliation(s)
- Chang Seok Bang
- Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea
| |
Collapse
|
17
|
Yin Y, Yakar D, Dierckx RAJO, Mouridsen KB, Kwee TC, de Haas RJ. Liver fibrosis staging by deep learning: a visual-based explanation of diagnostic decisions of the model. Eur Radiol 2021; 31:9620-9627. [PMID: 34014382 PMCID: PMC8589780 DOI: 10.1007/s00330-021-08046-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2021] [Accepted: 05/04/2021] [Indexed: 12/14/2022]
Abstract
Objectives Deep learning has been shown to be able to stage liver fibrosis based on contrast-enhanced CT images. However, until now, the algorithm has been used as a black box and lacks transparency. This study aimed to provide a visual-based explanation of the diagnostic decisions made by deep learning. Methods The liver fibrosis staging network (LFS network) was developed on portal venous phase contrast-enhanced CT images from 252 patients with a histologically proven liver fibrosis stage. To give a visual explanation of the diagnostic decisions made by the LFS network, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to produce location maps indicating where the LFS network focuses when predicting liver fibrosis stage. Results The LFS network had areas under the receiver operating characteristic curve of 0.92, 0.89, and 0.88 for staging significant fibrosis (F2–F4), advanced fibrosis (F3–F4), and cirrhosis (F4), respectively, on the test set. The location maps indicated that the LFS network focused more on the liver surface in patients without liver fibrosis (F0), while it focused more on the parenchyma of the liver and spleen in cases of cirrhosis (F4). Conclusions Deep learning methods are able to exploit CT-based information from the liver surface, the liver parenchyma, and extrahepatic structures to predict liver fibrosis stage. Therefore, we suggest using the entire upper abdomen on CT images when developing deep learning–based liver fibrosis staging algorithms. Key Points • Deep learning algorithms can stage liver fibrosis using contrast-enhanced CT images, but the algorithm is still used as a black box and lacks transparency. • Location maps produced by Gradient-weighted Class Activation Mapping can indicate the focus of the liver fibrosis staging network. • Deep learning methods use CT-based information from the liver surface, the liver parenchyma, and extrahepatic structures to predict liver fibrosis stage. Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-08046-x.
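Grad-CAM, the visualization method named above, can be sketched in a few lines of PyTorch. The example below is illustrative only: an untrained ResNet-18 and a random input stand in for the LFS network and a CT slice, and hooks on the last convolutional block are used to build a location map.

```python
# Illustrative Grad-CAM sketch: capture activations and gradients of the last
# convolutional block, weight the feature maps by pooled gradients, and upsample.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)   # untrained stand-in for the LFS network
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["value"] = output
    # capture the gradient flowing back into this feature map during backward()
    output.register_hook(lambda grad: gradients.update(value=grad))

model.layer4.register_forward_hook(save_activation)   # last convolutional block

image = torch.randn(1, 3, 224, 224)                   # stands in for a CT slice
logits = model(image)
target_class = int(logits.argmax(dim=1))              # class to explain
logits[0, target_class].backward()

acts = activations["value"]                             # shape (1, C, H, W)
grads = gradients["value"]                              # shape (1, C, H, W)
channel_weights = grads.mean(dim=(2, 3), keepdim=True)  # global-average-pooled gradients
cam = F.relu((channel_weights * acts).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)   # (1, 1, 224, 224) location map to overlay on the input
```

The resulting map can be overlaid on the input slice to show which regions (e.g., liver surface versus parenchyma) drive the prediction, which is how the location maps in the abstract are interpreted.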
Collapse
Affiliation(s)
- Yunchao Yin
- Department of Radiology, Medical Imaging Center Groningen, University of Groningen, University Medical Center Groningen, PO Box 30001, 9700 RB, Groningen, The Netherlands
| | - Derya Yakar
- Department of Radiology, Medical Imaging Center Groningen, University of Groningen, University Medical Center Groningen, PO Box 30001, 9700 RB, Groningen, The Netherlands
| | - Rudi A J O Dierckx
- Department of Radiology, Medical Imaging Center Groningen, University of Groningen, University Medical Center Groningen, PO Box 30001, 9700 RB, Groningen, The Netherlands
| | - Kim B Mouridsen
- Department of Radiology, Medical Imaging Center Groningen, University of Groningen, University Medical Center Groningen, PO Box 30001, 9700 RB, Groningen, The Netherlands
- Department of Clinical Medicine - Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
| | - Thomas C Kwee
- Department of Radiology, Medical Imaging Center Groningen, University of Groningen, University Medical Center Groningen, PO Box 30001, 9700 RB, Groningen, The Netherlands
| | - Robbert J de Haas
- Department of Radiology, Medical Imaging Center Groningen, University of Groningen, University Medical Center Groningen, PO Box 30001, 9700 RB, Groningen, The Netherlands.
| |
Collapse
|
18
|
Nowak S, Mesropyan N, Faron A, Block W, Reuter M, Attenberger UI, Luetkens JA, Sprinkart AM. Detection of liver cirrhosis in standard T2-weighted MRI using deep transfer learning. Eur Radiol 2021; 31:8807-8815. [PMID: 33974149 PMCID: PMC8523404 DOI: 10.1007/s00330-021-07858-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Revised: 02/12/2021] [Accepted: 03/10/2021] [Indexed: 12/17/2022]
Abstract
Objectives To investigate the diagnostic performance of deep transfer learning (DTL) to detect liver cirrhosis from clinical MRI. Methods The dataset for this retrospective analysis consisted of 713 (343 female) patients who underwent liver MRI between 2017 and 2019. In total, 553 of these subjects had a confirmed diagnosis of liver cirrhosis, while the remainder had no history of liver disease. T2-weighted MRI slices at the level of the caudate lobe were manually exported for DTL analysis. Data were randomly split into training, validation, and test sets (70%/15%/15%). A ResNet50 convolutional neural network (CNN) pre-trained on the ImageNet archive was used for cirrhosis detection with and without upstream liver segmentation. Classification performance for detection of liver cirrhosis was compared to two radiologists with different levels of experience (4th-year resident, board-certified radiologist). Segmentation was performed using a U-Net architecture built on a pre-trained ResNet34 encoder. Differences in classification accuracy were assessed by the χ2-test. Results Dice coefficients for automatic segmentation were above 0.98 for both validation and test data. The classification accuracy of liver cirrhosis on validation (vACC) and test (tACC) data for the DTL pipeline with upstream liver segmentation (vACC = 0.99, tACC = 0.96) was significantly higher compared to the resident (vACC = 0.88, p < 0.01; tACC = 0.91, p = 0.01) and to the board-certified radiologist (vACC = 0.96, p < 0.01; tACC = 0.90, p < 0.01). Conclusion This proof-of-principle study demonstrates the potential of DTL for detecting cirrhosis based on standard T2-weighted MRI. The presented method for image-based diagnosis of liver cirrhosis demonstrated expert-level classification accuracy. Key Points • A pipeline consisting of two convolutional neural networks (CNNs) pre-trained on an extensive natural image database (ImageNet archive) enables detection of liver cirrhosis on standard T2-weighted MRI. • High classification accuracy can be achieved even without altering the pre-trained parameters of the convolutional neural networks. • Other abdominal structures apart from the liver were relevant for detection when the network was trained on unsegmented images. Supplementary Information The online version contains supplementary material available at 10.1007/s00330-021-07858-1.
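A minimal sketch of the transfer-learning idea described above (keeping the ImageNet-pretrained parameters unchanged and training only a new classification head) might look as follows; the optimizer, head size, and dummy batch are illustrative assumptions rather than the authors' pipeline.

```python
# Illustrative sketch: frozen ImageNet-pretrained ResNet50 as a feature extractor,
# with only a new two-class head (cirrhosis vs. no cirrhosis) being trained.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in model.parameters():            # keep the pretrained parameters unchanged
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new trainable classification head

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of T2-weighted slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"loss = {loss.item():.3f}")
```

Freezing the backbone mirrors the key point above that high accuracy can be achieved "without altering the pre-trained parameters"; a segmentation step before classification would be added upstream of this head in the full pipeline.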
Collapse
Affiliation(s)
- Sebastian Nowak
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn (Universitätsklinikum Bonn), Venusberg-Campus 1, 53127, Bonn, Germany
| | - Narine Mesropyan
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn (Universitätsklinikum Bonn), Venusberg-Campus 1, 53127, Bonn, Germany
| | - Anton Faron
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn (Universitätsklinikum Bonn), Venusberg-Campus 1, 53127, Bonn, Germany
| | - Wolfgang Block
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn (Universitätsklinikum Bonn), Venusberg-Campus 1, 53127, Bonn, Germany
| | - Martin Reuter
- Image Analysis, German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany.,A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Boston, MA, USA.,Department of Radiology, Harvard Medical School, Boston, MA, USA
| | - Ulrike I Attenberger
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn (Universitätsklinikum Bonn), Venusberg-Campus 1, 53127, Bonn, Germany
| | - Julian A Luetkens
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn (Universitätsklinikum Bonn), Venusberg-Campus 1, 53127, Bonn, Germany
| | - Alois M Sprinkart
- Department of Diagnostic and Interventional Radiology, Quantitative Imaging Lab Bonn (QILaB), University Hospital Bonn (Universitätsklinikum Bonn), Venusberg-Campus 1, 53127, Bonn, Germany.
| |
Collapse
|
19
|
Kyventidis N, Angelopoulos C. Intraoral radiograph anatomical region classification using neural networks. Int J Comput Assist Radiol Surg 2021; 16:447-455. [PMID: 33625664 DOI: 10.1007/s11548-021-02321-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2020] [Accepted: 01/27/2021] [Indexed: 10/22/2022]
Abstract
PURPOSE Dental radiography represents 13% of all radiological diagnostic imaging. Eliminating the need for manual classification of digital intraoral radiographs could be especially impactful in terms of time savings and metadata quality. However, automating the task can be challenging due to the limited variation and possible overlap of the depicted anatomy. This study attempted to use neural networks to automate the classification of anatomical regions in intraoral radiographs among 22 unique anatomical classes. METHODS Thirty-six literature-based neural network models were systematically developed and trained with full supervision and three different data augmentation strategies. Only free, open-source software and limited computational resources were used. The training and validation datasets consisted of 15,254 intraoral periapical and bite-wing radiographs, previously obtained for diagnostic purposes. All models were then comparatively evaluated on a separate dataset with regard to their classification performance. Top-1 accuracy, area under the curve (AUC), and F1-score were used as performance metrics. Pairwise comparisons were performed among all models with McNemar's test. RESULTS Cochran's Q test indicated a statistically significant difference in classification performance across all models (p < 0.001). Post hoc analysis showed that while most models performed adequately on the task, advanced deep learning architectures such as VGG16, MobileNetV2, and InceptionResNetV2 were more robust to image distortions than those in the baseline group (MLPs, 3-block convolutional models). Advanced models exhibited classification accuracy ranging from 81 to 89%, F1-scores between 0.71 and 0.86, and AUCs of 0.86 to 0.94. CONCLUSIONS According to our findings, automated classification of anatomical regions in digital intraoral radiographs is feasible with an expected top-1 classification accuracy of almost 90%, even for images with significant distortions or overlapping anatomy. Model architecture, data augmentation strategy, the use of pooling and normalization layers, and model capacity were identified as the factors contributing most to classification performance.
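The pairwise comparison mentioned above can be illustrated with McNemar's test on the per-image correctness of two classifiers. In this minimal sketch the predictions are synthetic and the statsmodels implementation is used; the accuracies and class count are only stand-ins for the study's 22-class task.

```python
# Illustrative sketch: McNemar's test on the 2x2 agreement/disagreement table
# built from two classifiers' per-image correctness.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
y_true = rng.integers(0, 22, size=500)   # 22 anatomical classes, toy labels
pred_a = np.where(rng.random(500) < 0.85, y_true, rng.integers(0, 22, size=500))
pred_b = np.where(rng.random(500) < 0.80, y_true, rng.integers(0, 22, size=500))

correct_a = pred_a == y_true
correct_b = pred_b == y_true
# 2x2 table: rows = model A correct/incorrect, columns = model B correct/incorrect
table = [[np.sum(correct_a & correct_b), np.sum(correct_a & ~correct_b)],
         [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)]]

result = mcnemar(table, exact=False, correction=True)
print(f"chi2 = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```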
Collapse
|
20
|
Hasenstab K, Cunha GM, Ichikawa S, Dehkordy SF, Lee MH, Kim SJ, Schlein A, Covarrubias Y, Sirlin CB, Fowler KJ. CNN color-coded difference maps accurately display longitudinal changes in liver MRI-PDFF. Eur Radiol 2021; 31:5041-5049. [PMID: 33449180 DOI: 10.1007/s00330-020-07649-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Revised: 11/24/2020] [Accepted: 12/18/2020] [Indexed: 01/19/2023]
Abstract
OBJECTIVES To assess the feasibility of a CNN-based liver registration algorithm to generate difference maps for visual display of spatiotemporal changes in liver PDFF, without the need for manual annotations. METHODS This retrospective exploratory study included 25 patients with suspected or confirmed NAFLD who underwent PDFF-MRI at two time points at our institution. PDFF difference maps were generated by applying a CNN-based liver registration algorithm and then subtracting the follow-up from the baseline PDFF maps. The difference maps were post-processed by smoothing (5-cm² round kernel) and applying a categorical color scale. Two fellowship-trained abdominal radiologists and one radiology resident independently reviewed the difference maps to visually determine segmental PDFF change. Their visual assessment was compared with manual ROI-based measurements of each Couinaud segment and whole-liver PDFF using intraclass correlation (ICC) and Bland-Altman analysis. Inter-reader agreement for visual assessment was calculated (ICC). RESULTS The mean patient age was 49 years (12 males). Baseline and follow-up PDFF ranged from 2.0 to 35.3% and 3.5 to 32.0%, respectively. PDFF changes ranged from -20.4 to 14.1%. ICCs against the manual reference exceeded 0.95 for each reader, except for segment 2 (two readers, ICC = 0.86-0.91) and segment 4a (reader 3, ICC = 0.94). Bland-Altman limits of agreement were within 5% across all three readers. Inter-reader agreement for visually assessed PDFF change (whole liver and segmental) was excellent (ICCs > 0.96), except for segment 2 (ICC = 0.93). CONCLUSIONS Visual assessment of liver segmental PDFF changes using a CNN-generated difference map strongly agreed with manual estimates performed by an expert reader and yielded high inter-reader agreement. KEY POINTS • Visual assessment of longitudinal changes in quantitative liver MRI can be performed using a CNN-generated difference map and yields strong agreement with manual estimates performed by expert readers.
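A minimal sketch of the difference-map post-processing described above (subtracting the follow-up from the baseline PDFF map, smoothing with a round kernel, and binning into a categorical color scale) is shown below. The kernel radius in pixels, the category thresholds, and the toy PDFF maps are assumptions for illustration; the paper specifies the kernel size in physical units, and the registration step itself is not reproduced here.

```python
# Illustrative sketch: baseline-minus-follow-up PDFF difference map,
# smoothed with a round (disk-shaped) kernel and binned into categories for display.
import numpy as np
from scipy.ndimage import convolve

def disk_kernel(radius_px: int) -> np.ndarray:
    """Normalized round averaging kernel."""
    y, x = np.ogrid[-radius_px:radius_px + 1, -radius_px:radius_px + 1]
    disk = (x**2 + y**2 <= radius_px**2).astype(float)
    return disk / disk.sum()

rng = np.random.default_rng(0)
pdff_baseline = rng.uniform(0, 35, size=(256, 256))              # toy PDFF maps in percent
pdff_followup = pdff_baseline - rng.normal(3, 2, size=(256, 256))

difference = pdff_baseline - pdff_followup                        # positive = PDFF decreased at follow-up
smoothed = convolve(difference, disk_kernel(12), mode="nearest")  # 12-pixel radius is an assumption

# Categorical color scale: bin the smoothed change into discrete categories.
bins = [-np.inf, -10, -5, -1.5, 1.5, 5, 10, np.inf]
categories = np.digitize(smoothed, bins)                          # integer category per pixel
print(np.unique(categories, return_counts=True))
# `categories` can then be rendered with a discrete colormap (e.g., matplotlib ListedColormap).
```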
Collapse
Affiliation(s)
- Kyle Hasenstab
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA.
- Department of Mathematics and Statistics, San Diego State University, San Diego, CA, USA.
| | - Guilherme Moura Cunha
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
| | | | - Soudabeh Fazeli Dehkordy
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
| | - Min Hee Lee
- Soonchunhyang University Bucheon Hospital, Gyeonggi-do, South Korea
| | - Soo Jin Kim
- National Cancer Center, Republic of Korea, Gyeonggi-do, South Korea
| | - Alexandra Schlein
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
| | - Yesenia Covarrubias
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
| | - Claude B Sirlin
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
| | - Kathryn J Fowler
- Liver Imaging Group, Department of Radiology, University of California, San Diego, La Jolla, CA, USA
| |
Collapse
|
21
|
Oestmann PM, Wang CJ, Savic LJ, Hamm CA, Stark S, Schobert I, Gebauer B, Schlachter T, Lin M, Weinreb JC, Batra R, Mulligan D, Zhang X, Duncan JS, Chapiro J. Deep learning-assisted differentiation of pathologically proven atypical and typical hepatocellular carcinoma (HCC) versus non-HCC on contrast-enhanced MRI of the liver. Eur Radiol 2021; 31:4981-4990. [PMID: 33409782 DOI: 10.1007/s00330-020-07559-1] [Citation(s) in RCA: 28] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 11/06/2020] [Accepted: 11/23/2020] [Indexed: 02/05/2023]
Abstract
OBJECTIVES To train a deep learning model to differentiate between pathologically proven hepatocellular carcinoma (HCC) and non-HCC lesions, including lesions with atypical imaging features, on MRI. METHODS This IRB-approved retrospective study included 118 patients with 150 lesions (93 (62%) HCC and 57 (38%) non-HCC) pathologically confirmed through biopsies (n = 72), resections (n = 29), liver transplants (n = 46), and autopsies (n = 3). Forty-seven percent of HCC lesions showed atypical imaging features (not meeting Liver Imaging Reporting and Data System [LI-RADS] criteria for definitive HCC/LR5). A 3D convolutional neural network (CNN) was trained on 140 lesions and tested for its ability to classify the 10 remaining lesions (5 HCC/5 non-HCC). Performance of the model was averaged over 150 runs with random sub-sampling to provide class-balanced test sets. A lesion grading system was developed to demonstrate the similarity between atypical HCC and non-HCC lesions prone to misclassification by the CNN. RESULTS The CNN demonstrated an overall accuracy of 87.3%. Sensitivities/specificities for HCC and non-HCC lesions were 92.7%/82.0% and 82.0%/92.7%, respectively. The area under the receiver operating characteristic curve was 0.912. The CNN's performance correlated with the lesion grading system: accuracy decreased as lesions showed more atypical imaging features. CONCLUSION This study provides proof-of-concept for CNN-based classification of both typical- and atypical-appearing HCC lesions on multi-phasic MRI, utilizing pathologically confirmed lesions as "ground truth." KEY POINTS • A CNN trained on atypical-appearing, pathologically proven HCC lesions not meeting LI-RADS criteria for definitive HCC (LR5) can correctly differentiate HCC lesions from other liver malignancies, potentially expanding the role of image-based diagnosis in primary liver cancer with atypical features. • The trained CNN demonstrated an overall accuracy of 87.3% and a computational time of < 3 ms, which paves the way for clinical application as a decision support instrument.
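The evaluation scheme described above (averaging performance over repeated runs, each with a randomly sub-sampled, class-balanced test set of 5 HCC and 5 non-HCC lesions) can be sketched as follows; a random forest on toy features stands in for the 3D CNN, so the numbers are purely illustrative.

```python
# Illustrative sketch: repeated random sub-sampling with a class-balanced test set
# (5 positives, 5 negatives per run), averaging accuracy over the runs.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.random((150, 32))                                               # toy features for 150 lesions
y = np.concatenate([np.ones(93, dtype=int), np.zeros(57, dtype=int)])  # 93 HCC, 57 non-HCC

accuracies = []
for run in range(150):
    hcc_idx = rng.permutation(np.flatnonzero(y == 1))
    non_idx = rng.permutation(np.flatnonzero(y == 0))
    test_idx = np.concatenate([hcc_idx[:5], non_idx[:5]])   # class-balanced test set
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)

    clf = RandomForestClassifier(n_estimators=50, random_state=run)
    clf.fit(X[train_idx], y[train_idx])
    accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean accuracy over {len(accuracies)} runs: {np.mean(accuracies):.3f}")
```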
Collapse
Affiliation(s)
- Paula M Oestmann
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.,Institute of Radiology, Berlin Institute of Health, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität, 10117, Berlin, Germany.,Faculty of Medicine, Heinrich-Heine-University Düsseldorf, Düsseldorf, Germany
| | - Clinton J Wang
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.,Department of Biomedical Engineering, Yale School of Engineering and Applied Science, New Haven, CT, 06520, USA
| | - Lynn J Savic
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.,Institute of Radiology, Berlin Institute of Health, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität, 10117, Berlin, Germany
| | - Charlie A Hamm
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.,Institute of Radiology, Berlin Institute of Health, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität, 10117, Berlin, Germany
| | - Sophie Stark
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.,Institute of Radiology, Berlin Institute of Health, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität, 10117, Berlin, Germany.,Faculty of Medicine, Albert-Ludwigs-University Freiburg, Freiburg, Germany
| | - Isabel Schobert
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.,Institute of Radiology, Berlin Institute of Health, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität, 10117, Berlin, Germany
| | - Bernhard Gebauer
- Institute of Radiology, Berlin Institute of Health, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität, 10117, Berlin, Germany
| | - Todd Schlachter
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA
| | - MingDe Lin
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA
| | - Jeffrey C Weinreb
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA
| | - Ramesh Batra
- Department of Transplantation and Immunology, 333 Cedar Street, New Haven, CT, 06520, USA
| | - David Mulligan
- Department of Transplantation and Immunology, 333 Cedar Street, New Haven, CT, 06520, USA
| | - Xuchen Zhang
- Department of Pathology, Yale School of Medicine, 310 Cedar Street, New Haven, CT, 06520, USA
| | - James S Duncan
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.,Department of Biomedical Engineering, Yale School of Engineering and Applied Science, New Haven, CT, 06520, USA
| | - Julius Chapiro
- Department of Radiology and Biomedical Imaging, Yale School of Medicine, 333 Cedar Street, New Haven, CT, 06520, USA.
| |
Collapse
|
22
|
Janjic T, Pereverzyev S, Hammerl M, Neubauer V, Lerchner H, Wallner V, Steiger R, Kiechl-Kohlendorfer U, Zimmermann M, Buchheim A, Grams AE, Gizewski ER. Feed-forward neural networks using cerebral MR spectroscopy and DTI might predict neurodevelopmental outcome in preterm neonates. Eur Radiol 2020; 30:6441-6451. [PMID: 32683551 PMCID: PMC7599175 DOI: 10.1007/s00330-020-07053-8] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2020] [Revised: 06/11/2020] [Accepted: 06/30/2020] [Indexed: 11/28/2022]
Abstract
Objectives We aimed to evaluate the ability of feed-forward neural networks (fNNs) to predict the neurodevelopmental outcome (NDO) of very preterm neonates (VPIs) at 12 months corrected age using biomarkers of cerebral MR proton spectroscopy (1H-MRS) and diffusion tensor imaging (DTI) at term-equivalent age (TEA). Methods In this prospective study, 300 VPIs born before 32 gestational weeks received an MRI scan at TEA between September 2013 and December 2017. Owing to missing or poor-quality spectroscopy data and missing neurodevelopmental tests, 173 VPIs were excluded. Data sets consisting of 103 and 115 VPIs were considered for the prediction of motor and cognitive developmental delay, respectively. Five metabolite ratios and two DTI characteristics in six different areas of the brain were evaluated. A feature selection algorithm was developed to identify a subset of characteristics prevalent among the VPIs with a developmental delay. Finally, the predictors were constructed employing multiple fNNs and fourfold cross-validation. Results Employing the constructed fNN predictors, we were able to predict cognitive delays of VPIs with 85.7% sensitivity, 100% specificity, 100% positive predictive value (PPV) and 99.1% negative predictive value (NPV). For the prediction of motor delay, we achieved a sensitivity of 76.9%, a specificity of 98.9%, a PPV of 90.9% and an NPV of 96.7%. Conclusion FNNs might be able to predict the motor and cognitive development of VPIs at 12 months corrected age when employing biomarkers of cerebral 1H-MRS and DTI quantified at TEA. Key Points • A feed-forward neural network is a promising tool for outcome prediction in premature infants. • Cerebral proton magnetic resonance spectroscopy and diffusion tensor imaging can be used for the construction of early prognostic biomarkers. • Premature infants who would benefit most from early intervention services can be identified during the period of optimal neuroplasticity. Electronic supplementary material The online version of this article (10.1007/s00330-020-07053-8) contains supplementary material, which is available to authorized users.
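A minimal sketch of the evaluation strategy described above (a small feed-forward network assessed with fourfold cross-validation and summarized by sensitivity, specificity, PPV, and NPV) is given below. The synthetic features stand in for the 42 quantified characteristics (five metabolite ratios and two DTI measures in six brain regions), and the network size, class balance, and feature selection step are assumptions for illustration.

```python
# Illustrative sketch: feed-forward network with fourfold cross-validation,
# reporting sensitivity, specificity, PPV, and NPV from the pooled confusion matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=115, n_features=42, weights=[0.8, 0.2], random_state=0)

y_pred = np.empty_like(y)
for train_idx, test_idx in StratifiedKFold(n_splits=4, shuffle=True, random_state=0).split(X, y):
    model = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0))
    model.fit(X[train_idx], y[train_idx])
    y_pred[test_idx] = model.predict(X[test_idx])

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.3f}, specificity = {tn / (tn + fp):.3f}")
print(f"PPV = {tp / (tp + fp):.3f}, NPV = {tn / (tn + fn):.3f}")
```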
Collapse
Affiliation(s)
- T Janjic
- Department of Neuroradiology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria. .,Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria.
| | - S Pereverzyev
- Department of Neuroradiology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria.,Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria
| | - M Hammerl
- Department of Paediatrics II, Neonatology, Medical University of Innsbruck, Innsbruck, Austria
| | - V Neubauer
- Department of Paediatrics II, Neonatology, Medical University of Innsbruck, Innsbruck, Austria
| | - H Lerchner
- Department of Neuroradiology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria.,Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria
| | - V Wallner
- Department of Neuroradiology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria
| | - R Steiger
- Department of Neuroradiology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria.,Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria
| | - U Kiechl-Kohlendorfer
- Department of Paediatrics II, Neonatology, Medical University of Innsbruck, Innsbruck, Austria
| | - M Zimmermann
- Department of Paediatrics II, Neonatology, Medical University of Innsbruck, Innsbruck, Austria
| | - A Buchheim
- Institute of Psychology, University of Innsbruck, Innsbruck, Austria
| | - A E Grams
- Department of Neuroradiology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria.,Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria
| | - E R Gizewski
- Department of Neuroradiology, Medical University of Innsbruck, Anichstraße 35, 6020, Innsbruck, Austria.,Neuroimaging Research Core Facility, Medical University of Innsbruck, Innsbruck, Austria
| |
Collapse
|