1. Derathé A, Reche F, Guy S, Charrière K, Trilling B, Jannin P, Moreau-Gaudry A, Gibaud B, Voros S. LapEx: A new multimodal dataset for context recognition and practice assessment in laparoscopic surgery. Sci Data 2025; 12:342. PMID: 40011540; PMCID: PMC11865446; DOI: 10.1038/s41597-025-04588-7.
Abstract
In Surgical Data Science (SDS), there is an increasing demand for large, realistic annotated datasets to support the development of machine learning techniques. In laparoscopic surgery, however, most publicly available datasets focus on low-granularity procedural annotations (such as phases or steps) and on image segmentation of instruments or specific organs, often relying on animal models that lack clinical realism. Furthermore, annotation variability is seldom evaluated. In this work, we compiled 30 sleeve gastrectomy procedures and produced three levels of annotation for a specific step of the procedure (the fundus dissection): a procedural annotation of fine-grained activities, a semantic segmentation of the laparoscopic images, and an assessment of a surgical skill, namely the quality of exposure of the surgical scene. We also conducted a comprehensive analysis of annotation variability, highlighting the complexity of these tasks and providing a baseline for evaluating machine learning models. The dataset is publicly available and serves as a valuable resource for advancing SDS research.
Affiliation(s)
- Arthur Derathé
- Univ. Grenoble Alpes, CNRS, UMR 5525, VetAgro Sup, Grenoble INP, INSERM, TIMC, 38000, Grenoble, France
- Fabian Reche
- Univ. Grenoble Alpes, CNRS, UMR 5525, VetAgro Sup, Grenoble INP, INSERM, TIMC, 38000, Grenoble, France
- Department of digestive surgery, CHU de Grenoble, Grenoble, France
- Sylvain Guy
- Univ. Grenoble Alpes, CNRS, UMR 5525, VetAgro Sup, Grenoble INP, INSERM, TIMC, 38000, Grenoble, France
- Katia Charrière
- Clinical Investigation Center - Innovative Technology, CHU de Grenoble, Grenoble, France
- Bertrand Trilling
- Univ. Grenoble Alpes, CNRS, UMR 5525, VetAgro Sup, Grenoble INP, INSERM, TIMC, 38000, Grenoble, France
- Department of digestive surgery, CHU de Grenoble, Grenoble, France
- Pierre Jannin
- Université Rennes, INSERM, LTSI - UMR S 1099, 35000, Rennes, France
- Alexandre Moreau-Gaudry
- Univ. Grenoble Alpes, CNRS, UMR 5525, VetAgro Sup, Grenoble INP, INSERM, TIMC, 38000, Grenoble, France
- Clinical Investigation Center - Innovative Technology, CHU de Grenoble, Grenoble, France
- Bernard Gibaud
- Université Rennes, INSERM, LTSI - UMR S 1099, 35000, Rennes, France
- Sandrine Voros
- Univ. Grenoble Alpes, CNRS, UMR 5525, VetAgro Sup, Grenoble INP, INSERM, TIMC, 38000, Grenoble, France
2. Jung J, Lee H, Jung H, Kim H. Essential properties and explanation effectiveness of explainable artificial intelligence in healthcare: A systematic review. Heliyon 2023; 9:e16110. PMID: 37234618; PMCID: PMC10205582; DOI: 10.1016/j.heliyon.2023.e16110.
Abstract
Background: Significant advancements in information technology have driven the development of trustworthy explainable artificial intelligence (XAI) in healthcare. Despite improved performance, XAI techniques have not yet been integrated into real-time patient care.
Objective: The aim of this systematic review is to understand the trends and gaps in XAI research through an assessment of the essential properties of XAI and an evaluation of explanation effectiveness in the healthcare field.
Methods: The PubMed and Embase databases were searched for relevant peer-reviewed articles, published between January 1, 2011, and April 30, 2022, that developed an XAI model using clinical data and evaluated its explanation effectiveness. All retrieved papers were screened independently by the two authors. Relevant papers were also reviewed to identify the essential properties of XAI (e.g., stakeholders and objectives of XAI, quality of personalized explanations) and the measures of explanation effectiveness (e.g., mental model, user satisfaction, trust assessment, task performance, and correctability).
Results: Six of 882 articles met the eligibility criteria. Artificial intelligence (AI) users were the most frequently described stakeholders. XAI served various purposes, including evaluation, justification, improvement, and learning from AI. The quality of personalized explanations was evaluated in terms of fidelity, explanatory power, interpretability, and plausibility. User satisfaction was the most frequently used measure of explanation effectiveness, followed by trust assessment, correctability, and task performance. The methods of assessing these measures also varied.
Conclusion: XAI research should address the lack of a comprehensive, agreed-upon framework for explaining XAI and of standardized approaches for evaluating the effectiveness of the explanations that XAI provides to diverse AI stakeholders.
Affiliation(s)
- Jinsun Jung
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Center for Human-Caring Nurse Leaders for the Future by Brain Korea 21 (BK 21) Four Project, College of Nursing, Seoul National University, Seoul, Republic of Korea
- Hyungbok Lee
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Emergency Nursing Department, Seoul National University Hospital, Seoul, Republic of Korea
- Hyunggu Jung
- Department of Computer Science and Engineering, University of Seoul, Seoul, Republic of Korea
- Department of Artificial Intelligence, University of Seoul, Seoul, Republic of Korea
- Hyeoneui Kim
- College of Nursing, Seoul National University, Seoul, Republic of Korea
- Research Institute of Nursing Science, College of Nursing, Seoul National University, Seoul, Republic of Korea