1
Pernaton L, Cellier D, Buono R, Pierre A, Sauzet M, Blay JY, Pérol O, Fervers B. [Cancer and nutritional management of overweight and obesity: Practice evaluation]. Bull Cancer 2025; 112:478-494. PMID: 39863506. DOI: 10.1016/j.bulcan.2024.10.011.
Abstract
CONTEXT: The aim of this practice evaluation was to assess weight trends during and after a nutritional intervention in cancer patients and survivors.
METHODS: This retrospective study was conducted between January 2014 and October 2020 in adults with various cancer types managed at the Léon-Bérard Cancer Center, either undergoing treatment or in post-treatment follow-up, with a BMI ≥ 25 kg/m² and at least three consultations with a nutrition physician. The nutritional management addressed behavioral, metabolic, and nutritional aspects. Anthropometric measurements (waist circumference, weight, and BMI) were monitored prospectively at each nutrition consultation. The aim of this study was to evaluate the impact of the nutritional intervention on these anthropometric measurements.
RESULTS: Overall, 247 patients were included in the analysis. The median duration of the nutritional intervention was 7.2 months. Between the first and the last nutrition consultation, waist circumference decreased in 97.2% of patients, with a median loss of 10 cm; weight and BMI decreased in 85.0% and 83.8% of patients, respectively. Six months after the end of the nutritional intervention, 53.7% of patients had stable weight or continued weight loss.
CONCLUSION: This practice evaluation shows a positive impact of a nutritional intervention during cancer treatment on anthropometric parameters, with weight maintenance or continued weight loss after the end of the intervention in half of the patients.
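For reference, a minimal illustrative sketch (Python) of the anthropometric quantities monitored in this study: BMI from weight and height, the BMI ≥ 25 kg/m² inclusion criterion, and the change in weight, waist circumference, and BMI between the first and last consultation. The data layout and field names are assumptions for illustration, not the study's actual data model.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index in kg/m^2."""
    return weight_kg / height_m ** 2

def anthropometric_change(first: dict, last: dict) -> dict:
    """Change in the monitored measurements between the first and last consultation."""
    return {
        "weight_change_kg": last["weight_kg"] - first["weight_kg"],
        "waist_change_cm": last["waist_cm"] - first["waist_cm"],
        "bmi_change": bmi(last["weight_kg"], first["height_m"]) - bmi(first["weight_kg"], first["height_m"]),
    }

if __name__ == "__main__":
    first = {"weight_kg": 92.0, "waist_cm": 108.0, "height_m": 1.70}   # hypothetical patient
    last = {"weight_kg": 86.5, "waist_cm": 98.0, "height_m": 1.70}
    assert bmi(first["weight_kg"], first["height_m"]) >= 25            # inclusion criterion of the study
    print(anthropometric_change(first, last))
```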
Affiliation(s)
- Léo Pernaton
- Département prévention cancer environnement, centre Léon-Bérard, 69008 Lyon, France; Université Claude-Bernard, Lyon, France
- Dominique Cellier
- Département prévention cancer environnement, centre Léon-Bérard, 69008 Lyon, France.
- Romain Buono
- Département prévention cancer environnement, centre Léon-Bérard, 69008 Lyon, France
- Antoine Pierre
- Département prévention cancer environnement, centre Léon-Bérard, 69008 Lyon, France
- Marine Sauzet
- Département prévention cancer environnement, centre Léon-Bérard, 69008 Lyon, France
- Jean-Yves Blay
- Département d'oncologie médicale, centre Léon-Bérard, 69008 Lyon, France; Université Claude-Bernard, Lyon, France
- Olivia Pérol
- Département prévention cancer environnement, centre Léon-Bérard, 69008 Lyon, France; Inserm U1296 rayonnements : défense, santé, environnement, centre Léon-Bérard, 69008 Lyon, France
- Béatrice Fervers
- Département prévention cancer environnement, centre Léon-Bérard, 69008 Lyon, France; Inserm U1296 rayonnements : défense, santé, environnement, centre Léon-Bérard, 69008 Lyon, France
2
Dai J, Liu T, Torigian DA, Tong Y, Han S, Nie P, Zhang J, Li R, Xie F, Udupa JK. GA-Net: A geographical attention neural network for the segmentation of body torso tissue composition. Med Image Anal 2024; 91:102987. PMID: 37837691. PMCID: PMC10841506. DOI: 10.1016/j.media.2023.102987.
Abstract
PURPOSE: Body composition analysis (BCA) of the body torso plays a vital role in the study of physical health and pathology and provides biomarkers that facilitate the diagnosis and treatment of many diseases, such as type 2 diabetes mellitus, cardiovascular disease, obstructive sleep apnea, and osteoarthritis. In this work, we propose a body composition tissue segmentation method that can automatically delineate the key tissues, including subcutaneous adipose tissue, skeleton, skeletal muscle tissue, and visceral adipose tissue, on positron emission tomography/computed tomography scans of the body torso.
METHODS: To provide the deep neural network with appropriate and precise semantic and spatial information strongly related to body composition tissues, we first introduce a new concept of the body area and integrate it into our proposed segmentation network, the Geographical Attention Network (GA-Net). The body areas are defined following anatomical principles such that the whole body torso region is partitioned into three non-overlapping body areas, and each body composition tissue of interest is fully contained in exactly one specific minimal body area. Second, the proposed GA-Net has a novel dual-decoder schema composed of a tissue decoder and an area decoder. The tissue decoder segments the body composition tissues, while the area decoder segments the body areas as an auxiliary task. The features of body areas and body composition tissues are fused through a soft attention mechanism to gain geographical attention relevant to the body tissues. Third, we propose a body composition tissue annotation approach that takes the body area labels as the region of interest, which significantly improves the reproducibility, precision, and efficiency of delineating body composition tissues.
RESULTS: Our evaluation on 50 low-dose unenhanced CT images indicates that GA-Net statistically significantly outperforms other architectures on the Dice metric, and also improves the 95% Hausdorff distance in most comparisons. Notably, GA-Net is more sensitive to subtle boundary information and produces more reliable and robust predictions for such structures, which are the most challenging parts to correct manually in practice, with potentially significant time savings in the post hoc correction of these subtle boundary placement errors. Owing to the prior knowledge provided by the body areas, GA-Net achieves competitive performance with less training data. Our extension of the dual-decoder schema to TransUNet and 3D U-Net demonstrates that the new schema significantly improves the performance of these classical networks as well. Heatmaps obtained from the attention gate layers further illustrate the geographical guidance that body areas provide for identifying body tissues.
CONCLUSIONS: (i) Prior anatomic knowledge supplied in the form of appropriately designed anatomic container objects significantly improves the segmentation of bodily tissues. (ii) Of particular note are the improvements achieved in the delineation of subtle boundary features, which would otherwise take considerable effort to correct manually. (iii) The method can easily be extended to existing networks to improve their accuracy for this application.
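For illustration, a minimal PyTorch sketch of the dual-decoder idea described in the abstract: an auxiliary area decoder segments body areas, and its features gate the tissue decoder through a soft attention mechanism. This is not the authors' GA-Net code; the depth, channel widths, class counts, and the exact form of the attention gate are assumptions.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class DualDecoderNet(nn.Module):
    """Tissue decoder (main task) gated by an area decoder (auxiliary task)."""
    def __init__(self, in_ch=1, n_tissues=5, n_areas=4):  # 4 tissues + background, 3 areas + background (assumed)
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, 32), conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        # area decoder: segments the anatomically defined body areas
        self.area_dec2, self.area_dec1 = conv_block(128 + 64, 64), conv_block(64 + 32, 32)
        self.area_head = nn.Conv2d(32, n_areas, 1)
        # tissue decoder: segments body composition tissues, modulated by area features
        self.tis_dec2, self.tis_dec1 = conv_block(128 + 64, 64), conv_block(64 + 32, 32)
        self.gate = nn.Sequential(nn.Conv2d(32, 32, 1), nn.Sigmoid())  # soft attention from area features
        self.tis_head = nn.Conv2d(32, n_tissues, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        a2 = self.area_dec2(torch.cat([self.up(b), e2], 1))   # area branch
        a1 = self.area_dec1(torch.cat([self.up(a2), e1], 1))
        t2 = self.tis_dec2(torch.cat([self.up(b), e2], 1))    # tissue branch
        t1 = self.tis_dec1(torch.cat([self.up(t2), e1], 1))
        t1 = t1 * self.gate(a1)                                # geographical attention
        return self.tis_head(t1), self.area_head(a1)

if __name__ == "__main__":
    tissues, areas = DualDecoderNet()(torch.randn(1, 1, 128, 128))
    print(tissues.shape, areas.shape)  # torch.Size([1, 5, 128, 128]) torch.Size([1, 4, 128, 128])
```

In training, the area head would be supervised as the auxiliary task alongside the tissue head, so the gate learns where each tissue can plausibly occur.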
Affiliation(s)
- Jian Dai
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America.
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America.
- Shiwei Han
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Pengju Nie
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Jing Zhang
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Ran Li
- School of Information Science and Engineering, Yanshan University, Qinhuangdao 066004, Hebei, China; The Key Laboratory for Computer Virtual Technology and System Integration of Hebei Province, Yanshan University, Qinhuangdao 066004, Hebei, China.
- Fei Xie
- School of AOAIR, Xidian University, Xi'an 710071, Shaanxi, China.
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia 19104, PA, United States of America.
3
Agrawal V, Udupa J, Tong Y, Torigian D. BRR-Net: A tandem architectural CNN-RNN for automatic body region localization in CT images. Med Phys 2020; 47:5020-5031. PMID: 32761899. DOI: 10.1002/mp.14439.
Abstract
PURPOSE: Automatic identification of consistently defined body regions in medical images is vital in many applications. In this paper, we describe a method to automatically demarcate the superior and inferior boundaries of the neck, thorax, abdomen, and pelvis body regions in computed tomography (CT) images.
METHODS: For any three-dimensional (3D) CT image I, following precise anatomic definitions, we denote the superior and inferior axial boundary slices of the neck, thorax, abdomen, and pelvis body regions by NS(I), NI(I), TS(I), TI(I), AS(I), AI(I), PS(I), and PI(I), respectively. Of these, by definition, AI(I) = PS(I), so the problem reduces to demarcating seven body region boundaries. Our method consists of a two-step approach. In the first step, a convolutional neural network (CNN) is trained to classify each axial slice in I into one of nine categories: the seven body region boundaries, legs (defined as all axial slices inferior to PI(I)), and none-of-the-above. This CNN uses a multichannel approach to exploit inter-slice contrast, providing the network with additional visual context at the body region boundaries. In the second step, to improve the predictions for body region boundaries that are very subtle and exhibit low contrast, a recurrent neural network (RNN) is trained on features extracted by the CNN, limited to a flexible window around the CNN's predictions.
RESULTS: The method is evaluated on low-dose CT images from 442 patient scans, divided into training and testing sets at a 70:30 ratio. Using only the CNN, the overall absolute localization error for NS(I), NI(I), TS(I), TI(I), AS(I), AI(I), and PI(I), expressed as a number of slices (mean ± SD), is 0.61 ± 0.58, 1.05 ± 1.13, 0.31 ± 0.46, 1.85 ± 1.96, 0.57 ± 2.44, 3.42 ± 3.16, and 0.50 ± 0.50, respectively. Using the RNN to refine the CNN's predictions for selected classes improved the accuracy for TI(I) and AI(I) to 1.35 ± 1.71 and 2.83 ± 2.75, respectively. This model outperforms the results achieved in our previous work by 2.4, 1.7, 3.1, 1.1, and 2 slices for the TS(I), TI(I), AS(I), AI(I) = PS(I), and PI(I) classes, respectively, with statistical significance. The model trained on low-dose CT images was also tested on diagnostic CT images for the NS(I), NI(I), and TS(I) classes; the resulting errors were 1.48 ± 1.33, 2.56 ± 2.05, and 0.58 ± 0.71, respectively.
CONCLUSIONS: Standardized body region definitions are a prerequisite for effective implementation of quantitative radiology, but the literature is severely lacking in the precise identification of body regions. The method presented in this paper outperforms earlier work by a large margin, and the deviations of our results from ground truth are comparable to the variations observed in manual labeling by experts. The solution presented here is critical to the adoption of standardized body regions and clears the path for applications requiring accurate demarcation of body regions. The work is indispensable for automatic anatomy recognition, delineation, and contouring for radiation therapy planning, as it not only automates an essential part of the process but also removes the dependency on experts for accurately demarcating body regions in a study.
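As a rough sketch of the two-step approach summarized above (not the authors' implementation), the following PyTorch code pairs a slice classifier that stacks neighbouring slices as extra input channels with a recurrent refinement module run over CNN features from a window of slices. Layer sizes, the number of neighbouring slices, and the window handling are assumptions.

```python
import torch
import torch.nn as nn

class SliceCNN(nn.Module):
    """Step 1: classify each axial slice (stacked with its neighbours as channels)
    into one of 9 categories: 7 body-region boundaries, legs, or none-of-the-above."""
    def __init__(self, n_neighbours=1, n_classes=9):
        super().__init__()
        in_ch = 2 * n_neighbours + 1                      # slice plus neighbours above and below
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                                 # x: (n_slices, in_ch, H, W)
        f = self.features(x).flatten(1)                   # per-slice feature vectors
        return f, self.classifier(f)                      # features + coarse class logits

class WindowRNN(nn.Module):
    """Step 2: refine labels over a window of slices around the CNN's boundary prediction."""
    def __init__(self, feat_dim=32, n_classes=9):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, feats):                             # feats: (batch, window_len, feat_dim)
        out, _ = self.rnn(feats)
        return self.head(out)                             # refined per-slice logits

if __name__ == "__main__":
    cnn, rnn = SliceCNN(), WindowRNN()
    window = torch.randn(11, 3, 256, 256)                 # 11 slices, each with 2 neighbouring slices
    feats, coarse = cnn(window)
    refined = rnn(feats.unsqueeze(0))                     # the 11-slice window treated as one sequence
    print(coarse.shape, refined.shape)                    # torch.Size([11, 9]) torch.Size([1, 11, 9])
```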
Affiliation(s)
- Vibhu Agrawal
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Jayaram Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
- Drew Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA, 19104, USA
4
Attanasio S, Forte SM, Restante G, Gabelloni M, Guglielmi G, Neri E. Artificial intelligence, radiomics and other horizons in body composition assessment. Quant Imaging Med Surg 2020; 10:1650-1660. PMID: 32742958. PMCID: PMC7378090. DOI: 10.21037/qims.2020.03.10.
Abstract
This paper offers a brief overview of common non-invasive methods for body composition assessment and of the ways in which images acquired with these methods can be processed with artificial intelligence (AI) and radiomic analysis. These techniques are becoming increasingly appealing in health care thanks to their ability to handle and process huge amounts of data, suggest new correlations between extracted imaging biomarkers and traits of several diseases, and enable increasingly personalized medicine. The idea is to use AI applications and radiomic analysis to search for features extracted from medical images [computed tomography (CT) and magnetic resonance imaging (MRI)] that may turn out to be good predictors of metabolic disorders and cancer. This could lead to patient-specific treatment and management of several diseases linked to excess body fat.
Affiliation(s)
- Simona Attanasio
- Department of Translational Research, University of Pisa, Pisa, Italy
- Sara Maria Forte
- Department of Translational Research, University of Pisa, Pisa, Italy
- Giuliana Restante
- Department of Translational Research, University of Pisa, Pisa, Italy
- Michela Gabelloni
- Department of Translational Research, University of Pisa, Pisa, Italy
- Giuseppe Guglielmi
- Department of Clinical and Experimental Medicine, University of Foggia, Foggia, Italy
- Emanuele Neri
- Department of Translational Research, University of Pisa, Pisa, Italy
5
Liu T, Pan J, Torigian DA, Xu P, Miao Q, Tong Y, Udupa JK. ABCNet: A new efficient 3D dense-structure network for segmentation and analysis of body tissue composition on body-torso-wide CT images. Med Phys 2020; 47:2986-2999. PMID: 32170754. DOI: 10.1002/mp.14141.
Abstract
PURPOSE: Quantification of body tissue composition is important for research and clinical purposes, given the association of the quantity and quality of body tissue composition with the presence and severity of several disease conditions, such as cardiovascular and metabolic disorders, and with outcomes such as survival after chemotherapy. In this work, we aim to automatically segment four key body tissues of interest, namely subcutaneous adipose tissue, visceral adipose tissue, skeletal muscle, and skeletal structures, from body-torso-wide low-dose computed tomography (CT) images.
METHODS: Based on the idea of a residual encoder-decoder architecture, we propose a novel neural network design named ABCNet. The proposed system makes full use of multiscale features from four resolution levels to improve segmentation accuracy. The network is built on a uniform convolutional unit and its derived units, which makes ABCNet easy to implement. Several parameter compression methods, including bottleneck layers, linearly increasing feature maps in dense blocks, and memory-efficient techniques, are employed to lighten the network while making it deeper. A dynamic soft Dice loss is introduced to optimize the network in coarse-to-fine tuning. The proposed segmentation algorithm is accurate, robust, and very efficient in terms of both time and memory.
RESULTS: A dataset of 38 low-dose unenhanced CT images, from 25 male and 13 female subjects aged 31-83 years and ranging from normal weight to overweight to obese, is used to evaluate ABCNet. We compare four state-of-the-art methods, DeepMedic, 3D U-Net, V-Net, and Dense V-Net, against ABCNet on this dataset, employing a shuffle-split fivefold cross-validation strategy: in each experimental group, 18, 5, and 15 of the 38 CT image sets are randomly selected for training, validation, and testing, respectively. The commonly used evaluation metrics precision, recall, and F1-score (or Dice) are employed to measure segmentation quality. The results show that ABCNet achieves superior accuracy in segmenting body tissues from body-torso-wide low-dose CT images compared to the other state-of-the-art methods, reaching 92-98% in common accuracy metrics such as F1-score. ABCNet is also time- and memory-efficient: it takes about 18 hours to train and an average of 12 seconds to segment the four tissue components from a body-torso-wide CT image on an ordinary desktop with a single ordinary GPU.
CONCLUSIONS: Motivated by applications of body tissue composition quantification in large population groups, our goal was to create an efficient and accurate body tissue segmentation method for body-torso-wide CT images. The proposed ABCNet achieves peak performance in both accuracy and efficiency that seems difficult to improve further. The experiments demonstrate that ABCNet can be run on an ordinary desktop with a single ordinary GPU, with practical training and testing times, and achieves superior accuracy compared to other state-of-the-art segmentation methods for body tissue composition analysis from low-dose CT images.
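The abstract mentions optimizing the network with a dynamic soft Dice loss in coarse-to-fine tuning. Below is a minimal Python sketch of a multi-class soft Dice loss together with one plausible dynamic class-weighting rule (up-weighting the classes with the lowest current Dice); the weighting rule is an assumption for illustration, not the authors' exact scheme.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(logits, target, weights=None, eps=1e-6):
    """Multi-class soft Dice loss.
    logits: (B, C, ...) raw network outputs; target: (B, ...) integer labels in [0, C)."""
    n_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, n_classes).movedim(-1, 1).float()
    spatial = tuple(range(2, probs.ndim))
    inter = (probs * onehot).sum(spatial)
    union = probs.sum(spatial) + onehot.sum(spatial)
    dice = (2 * inter + eps) / (union + eps)              # per-sample, per-class soft Dice
    if weights is None:
        weights = torch.full((n_classes,), 1.0 / n_classes, device=logits.device)
    return 1.0 - (dice * weights).sum(dim=1).mean()

def dynamic_weights(per_class_dice):
    """Assumed dynamic rule: give more weight to the classes currently segmented worst."""
    w = 1.0 - per_class_dice
    return w / w.sum()

if __name__ == "__main__":
    logits = torch.randn(2, 5, 64, 64)                    # background + 4 body tissues
    labels = torch.randint(0, 5, (2, 64, 64))
    w = dynamic_weights(torch.tensor([0.99, 0.95, 0.90, 0.85, 0.97]))
    print(soft_dice_loss(logits, labels).item(), soft_dice_loss(logits, labels, w).item())
```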
Affiliation(s)
- Tiange Liu
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, 066004, China
- Junwen Pan
- School of Information Science and Engineering, Yanshan University, Qinhuangdao, 066004, China; College of Intelligence and Computing, Tianjin University, Tianjin, 300072, China
- Drew A Torigian
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA
- Pengfei Xu
- School of Information Science and Technology, Northwest University, Xi'an, 710127, China
- Qiguang Miao
- School of Computer Science and Technology, Xidian University, Xi'an, 710126, China
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, 19104, PA, USA