1
Hernández I, Soberanis-Mukul R, Mangulabnan JE, Sahu M, Winter J, Vedula S, Ishii M, Hager G, Taylor RH, Unberath M. Investigating keypoint descriptors for camera relocalization in endoscopy surgery. Int J Comput Assist Radiol Surg 2023; 18:1135-1142. PMID: 37160580; PMCID: PMC10958396; DOI: 10.1007/s11548-023-02918-x.
Abstract
PURPOSE Recent advances in computer vision and machine learning have resulted in endoscopic video-based solutions for dense reconstruction of the anatomy. To effectively use these systems in surgical navigation, a reliable image-based technique is required to constantly track the endoscopic camera's position within the anatomy, despite frequent removal and re-insertion. In this work, we investigate the use of recent learning-based keypoint descriptors for six degree-of-freedom camera pose estimation in intraoperative endoscopic sequences and under changes in anatomy due to surgical resection. METHODS Our method employs a dense structure from motion (SfM) reconstruction of the preoperative anatomy, obtained with a state-of-the-art patient-specific learning-based descriptor. During the reconstruction step, each estimated 3D point is associated with a descriptor. This information is employed in the intraoperative sequences to establish 2D-3D correspondences for Perspective-n-Point (PnP) camera pose estimation. We evaluate this method in six intraoperative sequences that include anatomical modifications, obtained from two cadaveric subjects. RESULTS This approach led to translation and rotation errors of 3.9 mm and 0.2 radians, respectively, with 21.86% of localized cameras averaged over the six sequences. Compared with an additional learning-based descriptor (HardNet++), the selected descriptor achieves a better percentage of localized cameras with similar pose estimation performance. We further discuss potential error causes and limitations of the proposed approach. CONCLUSION Patient-specific learning-based descriptors can relocalize images that are well distributed across the inspected anatomy, even where the anatomy is modified. However, camera relocalization in endoscopic sequences remains a persistently challenging problem, and future research is necessary to increase the robustness and accuracy of this technique.
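The 2D-3D correspondence step described above (descriptors of intraoperative keypoints matched against the descriptors stored with each reconstructed 3D point, before PnP pose estimation) can be sketched as a nearest-neighbor search with a ratio test. This is an illustrative sketch, not the authors' implementation; the descriptor arrays and the ratio threshold are assumptions:

```python
import numpy as np

def match_descriptors(query, reference, ratio=0.8):
    """Match query (2D keypoint) descriptors to reference (3D point)
    descriptors by nearest neighbour with a ratio test: a match is kept
    only if it is clearly closer than the second-best candidate.
    Returns (query_index, reference_index) pairs."""
    # Pairwise Euclidean distances, shape (n_query, n_reference).
    d = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :2]          # two nearest references
    matches = []
    for i, (best, second) in enumerate(nn):
        if d[i, best] < ratio * d[i, second]:  # unambiguous match only
            matches.append((i, int(best)))
    return matches
```

The surviving index pairs would then feed a RANSAC-wrapped PnP solver (for example OpenCV's `solvePnPRansac`) to estimate the six degree-of-freedom camera pose.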
Affiliation(s)
- Manish Sahu
- Johns Hopkins University, Baltimore, MD 21211, USA
- Jonas Winter
- Johns Hopkins University, Baltimore, MD 21211, USA
- Masaru Ishii
- Johns Hopkins Medical Institutions, Baltimore, MD 21287, USA
- Russell H Taylor
- Johns Hopkins University, Baltimore, MD 21211, USA
- Johns Hopkins Medical Institutions, Baltimore, MD 21287, USA
- Mathias Unberath
- Johns Hopkins University, Baltimore, MD 21211, USA
- Johns Hopkins Medical Institutions, Baltimore, MD 21287, USA
2
Hira S, Singh D, Kim TS, Gupta S, Hager G, Sikder S, Vedula SS. Video-based assessment of intraoperative surgical skill. Int J Comput Assist Radiol Surg 2022; 17:1801-1811. PMID: 35635639; PMCID: PMC10323985; DOI: 10.1007/s11548-022-02681-5.
Abstract
PURPOSE Surgeons' skill in the operating room is a major determinant of patient outcomes. Assessment of surgeons' skill is necessary to improve patient outcomes and quality of care through surgical training and coaching. Methods for video-based assessment of surgical skill can provide objective and efficient tools for surgeons. Our work introduces a new method based on attention mechanisms and provides a comprehensive comparative analysis of state-of-the-art methods for video-based assessment of surgical skill in the operating room. METHODS Using a dataset of 99 videos of capsulorhexis, a critical step in cataract surgery, we evaluated image feature-based methods and two deep learning methods to assess skill using RGB videos. In the first method, we predict instrument tips as keypoints and predict surgical skill using temporal convolutional neural networks. In the second method, we propose a frame-wise encoder (2D convolutional neural network) followed by a temporal model (recurrent neural network), both of which are augmented by visual attention mechanisms. We computed the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and predictive values through fivefold cross-validation. RESULTS To classify a binary skill label (expert vs. novice), the range of AUC estimates was 0.49 (95% confidence interval; CI = 0.37 to 0.60) to 0.76 (95% CI = 0.66 to 0.85) for image feature-based methods. None of the methods achieved consistently high sensitivity and specificity. For the deep learning methods, the AUC was 0.79 (95% CI = 0.70 to 0.88) using keypoints alone, and 0.78 (95% CI = 0.69 to 0.88) and 0.75 (95% CI = 0.65 to 0.85) with and without attention mechanisms, respectively. CONCLUSION Deep learning methods are necessary for video-based assessment of surgical skill in the operating room. Attention mechanisms improved the discrimination ability of the network. Our findings should be evaluated for external validity in other datasets.
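The AUC used throughout this comparison is equivalent to the Mann-Whitney statistic: the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A minimal self-contained sketch of that computation (variable names are illustrative):

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    fraction of (positive, negative) pairs where the positive example
    receives the higher score, counting ties as one half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Compare every positive score against every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

In a fivefold cross-validation, this function would be applied to the held-out scores of each fold and the resulting estimates summarized with a confidence interval.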
Affiliation(s)
- Sanchit Hira
- Laboratory for Computational Sensing & Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Digvijay Singh
- Department of Computer Science, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Tae Soo Kim
- Department of Computer Science, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Malone Center for Engineering in Healthcare, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Shobhit Gupta
- Indian Institute of Technology, Hauz Khas, New Delhi, 110016, India
- Gregory Hager
- Laboratory for Computational Sensing & Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Department of Computer Science, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Malone Center for Engineering in Healthcare, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Shameema Sikder
- Laboratory for Computational Sensing & Robotics, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Malone Center for Engineering in Healthcare, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
- Wilmer Eye Institute, Johns Hopkins University School of Medicine, 615 N. Wolfe Street, Baltimore, MD 21287, USA
- S Swaroop Vedula
- Malone Center for Engineering in Healthcare, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
3
Holden MS, O'Brien M, Malpani A, Naz H, Tseng YW, Ishii L, Swaroop Vedula S, Ishii M, Hager G. Reconstructing the nasal septum from instrument motion during septoplasty surgery. J Med Imaging (Bellingham) 2021; 8:065001. PMID: 34796250; DOI: 10.1117/1.jmi.8.6.065001.
Abstract
Purpose: Surgery involves modifying anatomy to achieve a goal. Reconstructing anatomy can facilitate surgical care through surgical planning, real-time decision support, or anticipating outcomes. Tool motion is a rich source of data that can be used to quantify anatomy. Our work develops and validates a method for reconstructing the nasal septum from unstructured motion of the Cottle elevator during the elevation phase of septoplasty surgery, without need to explicitly delineate the surface of the septum. Approach: The proposed method uses iterative closest point registration to initially register a template septum to the tool motion. Subsequently, statistical shape modeling with iterative most likely oriented point registration is used to fit the reconstructed septum to Cottle tip position and orientation during flap elevation. Regularization of the shape model and transformation is incorporated. The proposed methods were validated on 10 septoplasty surgeries performed on cadavers by operators of varying experience level. Preoperative CT images of the cadaver septums were segmented as ground truth. Results: We estimated reconstruction error as the difference between the projections of the Cottle tip onto the surface of the reconstructed septum and the ground-truth septum segmented from the CT image. We found translational differences of 2.74 (2.06-2.81) mm and rotational differences of 8.95 (7.11-10.55) deg between the reconstructed septum and the ground-truth septum [median (interquartile range)], given the optimal regularization parameters. Conclusions: Accurate reconstruction of the nasal septum can be achieved from tool tracking data during septoplasty surgery on cadavers. This enables understanding of the septal anatomy without need for traditional medical imaging. This result may be used to facilitate surgical planning, intraoperative care, or skills assessment.
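The rigid-alignment update inside each iterative closest point step (used for the initial template registration described above) is the standard SVD-based Kabsch solution. A minimal sketch, not the paper's implementation:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R and translation t mapping point set P
    onto corresponding point set Q (both of shape (n, 3)); this is the
    transform-update step inside each ICP iteration."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

A full ICP loop would alternate this update with re-computing closest-point correspondences between the tool-tip samples and the template surface until convergence.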
Affiliation(s)
- Matthew S Holden
- Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States; Carleton University, School of Computer Science, Ottawa, Canada
- Molly O'Brien
- Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States
- Anand Malpani
- Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States
- Hajira Naz
- Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States
- Ya-Wei Tseng
- Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States
- Lisa Ishii
- Johns Hopkins University, School of Medicine, Department of Otolaryngology-Head and Neck Surgery, Baltimore, Maryland, United States
- S Swaroop Vedula
- Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States
- Masaru Ishii
- Johns Hopkins University, School of Medicine, Department of Otolaryngology-Head and Neck Surgery, Baltimore, Maryland, United States
- Gregory Hager
- Johns Hopkins University, Malone Center for Engineering in Healthcare, Baltimore, Maryland, United States
4
Meireles OR, Rosman G, Altieri MS, Carin L, Hager G, Madani A, Padoy N, Pugh CM, Sylla P, Ward TM, Hashimoto DA. SAGES consensus recommendations on an annotation framework for surgical video. Surg Endosc 2021; 35:4918-4929. PMID: 34231065; DOI: 10.1007/s00464-021-08578-9.
Abstract
BACKGROUND The growing interest in analysis of surgical video through machine learning has led to increased research efforts; however, common methods of annotating video data are lacking. There is a need to establish recommendations on the annotation of surgical video data to enable assessment of algorithms and multi-institutional collaboration. METHODS Four working groups were formed from a pool of participants that included clinicians, engineers, and data scientists. The working groups were focused on four themes: (1) temporal models, (2) actions and tasks, (3) tissue characteristics and general anatomy, and (4) software and data structure. A modified Delphi process was utilized to create a consensus survey based on suggested recommendations from each of the working groups. RESULTS After three Delphi rounds, consensus was reached on recommendations for annotation within each of these domains. A hierarchy for annotation of temporal events in surgery was established. CONCLUSIONS While additional work remains to achieve accepted standards for video annotation in surgery, the consensus recommendations on a general framework for annotation presented here lay the foundation for standardization. This type of framework is critical to enabling diverse datasets, performance benchmarks, and collaboration.
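One natural way to represent the hierarchical annotation of temporal events the recommendations describe (for example, phases containing steps containing actions) is a nested interval structure. The class and label names below are illustrative assumptions, not part of the SAGES framework itself:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Event:
    """One temporally bounded annotation (times in seconds of video)."""
    label: str
    start: float
    end: float
    children: List["Event"] = field(default_factory=list)

    def add(self, label, start, end):
        # A child event must lie inside its parent's interval.
        assert self.start <= start <= end <= self.end
        child = Event(label, start, end)
        self.children.append(child)
        return child

# A phase contains steps, which contain actions -- one possible hierarchy.
phase = Event("dissection", 120.0, 480.0)
step = phase.add("expose_structure", 130.0, 300.0)
step.add("retract", 130.0, 180.0)
```

Serializing such a tree to JSON would give a common, tool-independent interchange format for multi-institutional datasets.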
Affiliation(s)
- Ozanan R Meireles
- Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
- Guy Rosman
- Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, USA
- Maria S Altieri
- Department of Surgery, East Carolina University, Greenville, USA
- Lawrence Carin
- Department of Electrical and Computer Engineering, Duke University, Durham, USA
- Gregory Hager
- Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Amin Madani
- Department of Surgery, University Health Network, Toronto, Canada
- Nicolas Padoy
- ICube, University of Strasbourg, Strasbourg, France
- IHU Strasbourg, Strasbourg, France
- Carla M Pugh
- Department of Surgery, Stanford University, Stanford, USA
- Patricia Sylla
- Department of Surgery, Mount Sinai Medical Center, New York, USA
- Thomas M Ward
- Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
- Daniel A Hashimoto
- Department of Surgery, Massachusetts General Hospital, 15 Parkman Street, WAC460, Boston, MA, 02114, USA
5
6
Collins JW, Marcus HJ, Ghazi A, Sridhar A, Hashimoto D, Hager G, Arezzo A, Jannin P, Maier-Hein L, Marz K, Valdastri P, Mori K, Elson D, Giannarou S, Slack M, Hares L, Beaulieu Y, Levy J, Laplante G, Ramadorai A, Jarc A, Andrews B, Garcia P, Neemuchwala H, Andrusaite A, Kimpe T, Hawkes D, Kelly JD, Stoyanov D. Ethical implications of AI in robotic surgical training: A Delphi consensus statement. Eur Urol Focus 2021; 8:613-622. PMID: 33941503; DOI: 10.1016/j.euf.2021.04.006.
Abstract
CONTEXT As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. OBJECTIVES To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI, and available technologies, by seeking consensus from an expert committee. EVIDENCE ACQUISITION The project was carried out in 3 phases: (1) a steering group was formed to review the literature and summarise current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. Thirty experts in AI implementation and/or training, including clinicians, academics, and industry representatives, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥ 80% agreement. EVIDENCE SYNTHESIS There was a 100% response rate across all 3 rounds. The resulting guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: 1. Data protection and privacy; 2. Reproducibility and transparency; 3. Predictive analytics; 4. Inherent biases; 5. Areas of training most likely to benefit from AI.
CONCLUSIONS Using the Delphi methodology, we achieved international consensus among experts to develop and reach content validation for guidance on the ethical implications of AI in surgical training, providing an ethical foundation for launching narrow AI applications in surgical training. This guidance will require further validation. PATIENT SUMMARY As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. In this paper, we provide guidance on the ethical implications of AI in surgical training.
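The internal-consistency statistic reported above (a Cronbach alpha of >0.8) has a closed form over the respondents-by-items score matrix; a minimal sketch with illustrative data:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of survey items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of row sums
    return k / (k - 1) * (1 - item_vars / total_var)
```

Items that vary together across respondents push alpha toward 1, which is why a value above 0.8 is read as good internal consistency of the formulated guidance.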
Affiliation(s)
- Justin W Collins
- University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London; University College London Hospital, Division of Uro-oncology
- Hani J Marcus
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
- Ahmed Ghazi
- Simulation Innovation Laboratory, University of Rochester, USA
- Ashwin Sridhar
- University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; University College London Hospital, Division of Uro-oncology
- Daniel Hashimoto
- Surgical Artificial Intelligence and Innovation Laboratory, Massachusetts General Hospital, USA
- Gregory Hager
- Malone Center for Engineering in Healthcare, Department of Computer Science, Johns Hopkins University, Baltimore, USA
- Alberto Arezzo
- Department of Surgical Sciences, University of Torino, Italy
- Lena Maier-Hein
- Deutsches Krebsforschungszentrum, Division of Computer Assisted Medical Interventions, Heidelberg, Germany
- Keno Marz
- Deutsches Krebsforschungszentrum, Division of Computer Assisted Medical Interventions, Heidelberg, Germany
- Pietro Valdastri
- STORM Lab, School of Electronic and Electrical Engineering, University of Leeds, Leeds, UK
- Kensaku Mori
- Information Technology Center, Nagoya University, Japan
- Daniel Elson
- Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, UK
- Stamatia Giannarou
- Hamlyn Centre for Robotic Surgery, Department of Surgery and Cancer, Imperial College London, UK
- Mark Slack
- University of Cambridge, Cambridge, UK; CMR Surgical, Cambridge, UK
- Luke Hares
- CMR Surgical, Cambridge, UK
- Yanick Beaulieu
- Division of Cardiology and Critical Care, Sacré-Coeur Hospital, University of Montreal, Montreal, Canada
- Jeff Levy
- Institute for Surgical Excellence, Philadelphia, USA
- Guy Laplante
- Global Medical Affairs, Medtronic Minimally Invasive Therapies, Brampton, Canada
- Arvind Ramadorai
- Digital-Assisted Surgery (DAS), Medtronic Surgical Robotics, North Haven, CT, USA
- Anthony Jarc
- Applied Research, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Ben Andrews
- Strategy, Intuitive Surgical, Inc., Sunnyvale, CA, USA
- Tom Kimpe
- BARCO NV, Healthcare Division, Kortrijk, Belgium
- David Hawkes
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
- John D Kelly
- University College London, Division of Surgery and Interventional Science, Research Department of Targeted Intervention; Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London; University College London Hospital, Division of Uro-oncology
- Danail Stoyanov
- Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS), University College London
7
Gonzalez GT, Kaur U, Rahman M, Venkatesh V, Sanchez N, Hager G, Xue Y, Voyles R, Wachs J. From the Dexterous Surgical Skill to the Battlefield - A Robotics Exploratory Study. Mil Med 2021; 186:288-294. PMID: 33499518; DOI: 10.1093/milmed/usaa253.
Abstract
INTRODUCTION Short response time is critical for future military medical operations in austere settings or remote areas. Such effective patient care at the point of injury can greatly benefit from the integration of semi-autonomous robotic systems. To achieve autonomy, robots would require massive libraries of maneuvers collected with the goal of training machine learning algorithms. Although this is attainable in controlled settings, obtaining surgical data in austere settings can be difficult. Hence, in this article, we present the Dexterous Surgical Skill (DESK) database for knowledge transfer between robots. The peg transfer task was selected as it is one of the six main tasks of laparoscopic training. In addition, we provide a machine learning framework to evaluate novel transfer learning methodologies on this database. METHODS A set of surgical gestures was collected for a peg transfer task, composed of seven atomic maneuvers referred to as surgemes. The collected Dexterous Surgical Skill dataset comprises a set of surgical robotic skills using four robotic platforms: Taurus II, simulated Taurus II, YuMi, and the da Vinci Research Kit. We then explored two different learning scenarios: no-transfer and domain-transfer. In the no-transfer scenario, the training and testing data were obtained from the same domain; in the domain-transfer scenario, the training data are a blend of simulated and real robot data, tested on a real robot. RESULTS Using simulation data to train the learning algorithms enhances performance on the real robot where limited or no real data are available. The transfer model showed an accuracy of 81% for the YuMi robot when the ratio of real to simulated data was 22% to 78%. For the Taurus II and the da Vinci, the model showed accuracies of 97.5% and 93%, respectively, training only with simulation data.
CONCLUSIONS The results indicate that simulation can be used to augment training data to enhance the performance of learned models in real scenarios. This shows potential for the future use of surgical data from the operating room in deployable surgical robots in remote areas.
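The domain-transfer scenario above blends simulated and real data at a fixed ratio (for example, 22% real to 78% simulated). A hypothetical helper for building such a training pool, not the study's code; the sampling policy (keep all real examples, subsample simulation) is an assumption:

```python
import random

def blend_training_set(real, simulated, real_fraction, seed=0):
    """Mix real and simulated examples so that roughly `real_fraction`
    of the resulting pool comes from the real robot: every real example
    is kept and the simulated set is subsampled to make up the rest."""
    rng = random.Random(seed)  # fixed seed for a reproducible blend
    n_sim = round(len(real) * (1 - real_fraction) / real_fraction)
    n_sim = min(n_sim, len(simulated))
    pool = list(real) + rng.sample(list(simulated), n_sim)
    rng.shuffle(pool)
    return pool
```

Sweeping `real_fraction` would reproduce the kind of real-to-simulated ratio study reported for the YuMi robot.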
Affiliation(s)
- Glebys T Gonzalez
- Department of Industrial Engineering, Purdue University, West Lafayette, IN 47906, USA
- Upinder Kaur
- Department of Industrial Engineering, Purdue University, West Lafayette, IN 47906, USA
- Masudur Rahman
- Department of Industrial Engineering, Purdue University, West Lafayette, IN 47906, USA
- Natalia Sanchez
- Department of Industrial Engineering, Purdue University, West Lafayette, IN 47906, USA
- Gregory Hager
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA
- Yexiang Xue
- Department of Industrial Engineering, Purdue University, West Lafayette, IN 47906, USA
- Richard Voyles
- Department of Industrial Engineering, Purdue University, West Lafayette, IN 47906, USA
- Juan Wachs
- Department of Industrial Engineering, Purdue University, West Lafayette, IN 47906, USA; Department of Surgery, Indiana University School of Medicine, Indianapolis, IN 46202, USA
8
Kim TK, Yi PH, Wei J, Shin JW, Hager G, Hui FK, Sair HI, Lin CT. Deep Learning Method for Automated Classification of Anteroposterior and Posteroanterior Chest Radiographs. J Digit Imaging 2021; 32:925-930. PMID: 30972585; DOI: 10.1007/s10278-019-00208-0.
Abstract
Ensuring correct radiograph view labeling is important for machine learning algorithm development and quality control of studies obtained from multiple facilities. The purpose of this study was to develop and test the performance of a deep convolutional neural network (DCNN) for the automated classification of frontal chest radiographs (CXRs) into anteroposterior (AP) or posteroanterior (PA) views. We obtained 112,120 CXRs from the NIH ChestX-ray14 database, a publicly available database of CXRs performed in adult (106,179 (95%)) and pediatric (5941 (5%)) patients, consisting of 44,810 (40%) AP and 67,310 (60%) PA views. CXRs were used to train, validate, and test the ResNet-18 DCNN for classification of radiographs into anteroposterior and posteroanterior views. A second DCNN was developed in the same manner using only the pediatric CXRs (2885 (49%) AP and 3056 (51%) PA). Receiver operating characteristic (ROC) curves with area under the curve (AUC) and standard diagnostic measures were used to evaluate the DCNN's performance on the test dataset. The DCNNs trained on the entire CXR dataset and the pediatric CXR dataset had AUCs of 1.0 and 0.997, respectively, and accuracies of 99.6% and 98%, respectively, for distinguishing between AP and PA CXRs. Sensitivity and specificity were 99.6% and 99.5%, respectively, for the DCNN trained on the entire dataset, and 98% for both sensitivity and specificity for the DCNN trained on the pediatric dataset. The observed difference in performance between the two algorithms was not statistically significant (p = 0.17). Our DCNNs have high accuracy for classifying the AP/PA orientation of frontal CXRs, with only a slight reduction in performance when the training dataset was reduced by 95%. Rapid classification of CXRs by the DCNN can facilitate annotation of large image datasets for machine learning and quality assurance purposes.
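The sensitivity, specificity, and accuracy figures above all derive from the binary confusion matrix of the classifier's predictions. A small self-contained sketch; the label encoding (AP as the positive class 1, PA as 0) is an assumption for illustration:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels,
    with the positive class encoded as 1 (e.g. AP = 1, PA = 0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }
```

Thresholding the DCNN's output probability at different operating points and recomputing these metrics traces out the ROC curve summarized by the AUC.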
Affiliation(s)
- Tae Kyung Kim
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Paul H Yi
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Jinchi Wei
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Ji Won Shin
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Gregory Hager
- Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Ferdinand K Hui
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Haris I Sair
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Cheng Ting Lin
- The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA; Radiology Artificial Intelligence Lab (RAIL), Malone Center for Engineering in Healthcare, Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
9
Varticovski L, Kim S, Baek S, Prokunina L, Hager G. Global chromatin landscape identifies bladder cancer metastatic progression. Eur J Cancer 2020. DOI: 10.1016/s0959-8049(20)31148-5.
10
Berges A, Vedula S, Tanner E, Fader A, Scheib S, Hager G, Chen C, Malpani A. 98: Do attending physicians and trainees agree about what happens in the operating room during robot-assisted laparoscopic hysterectomy procedures? Am J Obstet Gynecol 2019. DOI: 10.1016/j.ajog.2019.01.128.
11
Krishnan S, Garg A, Patil S, Lea C, Hager G, Abbeel P, Goldberg K. Transition state clustering: Unsupervised surgical trajectory segmentation for robot learning. Int J Rob Res 2017. DOI: 10.1177/0278364917743319.
Abstract
Demonstration trajectories collected from a supervisor in teleoperation are widely used for robot learning, and temporally segmenting the trajectories into shorter, less-variable segments can improve the efficiency and reliability of learning algorithms. Trajectory segmentation algorithms can be sensitive to noise, spurious motions, and temporal variation. We present a new unsupervised segmentation algorithm, transition state clustering (TSC), which leverages repeated demonstrations of a task by clustering segment endpoints across demonstrations. TSC complements any motion-based segmentation algorithm by identifying candidate transitions, clustering them by kinematic similarity, and then correlating the kinematic clusters with available sensory and temporal features. TSC uses a hierarchical Dirichlet process Gaussian mixture model to avoid selecting the number of segments a priori. We present simulated results to suggest that TSC significantly reduces the number of false-positive segments in dynamical systems observed with noise as compared with seven probabilistic and non-probabilistic segmentation algorithms. We additionally compare algorithms that use piecewise linear segment models, and find that TSC recovers segments of a generated piecewise linear trajectory with greater accuracy in the presence of process and observation noise. At the maximum noise level, TSC recovers the ground truth 49% more accurately than alternatives. Furthermore, TSC runs 100× faster than the next most accurate alternative autoregressive models, which require expensive Markov chain Monte Carlo (MCMC)-based inference. We also evaluated TSC on 67 recordings of surgical needle passing and suturing. We supplemented the kinematic recordings with manually annotated visual features that denote grasp and penetration conditions. On this dataset, TSC finds 83% of needle passing transitions and 73% of the suturing transitions annotated by human experts.
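The two core steps, detecting candidate transitions from motion and clustering pooled transition states across demonstrations, can be caricatured as follows. Note this is a simplified stand-in: a speed-threshold detector replaces the motion-based change-point detection, and a greedy radius-based clustering replaces the hierarchical Dirichlet process GMM the paper actually uses; all thresholds are arbitrary:

```python
import numpy as np

def candidate_transitions(traj, speed_thresh=0.05):
    """Indices where end-effector speed drops below a threshold, a
    crude proxy for a motion-based change-point detector."""
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    return [i for i in range(1, len(speed))
            if speed[i] < speed_thresh <= speed[i - 1]]

def cluster_states(states, radius=0.1):
    """Greedy online clustering of transition states pooled across
    demonstrations: each state joins the nearest existing cluster
    within `radius` (updating its running-mean center) or starts a
    new one, so the cluster count is not fixed in advance."""
    centers, counts = [], []
    for s in states:
        s = np.asarray(s, dtype=float)
        d = [np.linalg.norm(s - c) for c in centers]
        if d and min(d) < radius:
            j = int(np.argmin(d))
            counts[j] += 1
            centers[j] = centers[j] + (s - centers[j]) / counts[j]
        else:
            centers.append(s)
            counts.append(1)
    return centers
```

In TSC proper, the DP-GMM plays the role the radius plays here, letting the data decide how many transition-state clusters exist.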
12
Abstract
This article explores the relationship between public opinion and nonincremental policy change by extending the analysis of Wright et al. (1985, 1987). We develop a two-step model in which we first relate the level of a relevant outcome measure of a policy to the degree of opinion liberalism, the strategy of Wright and his colleagues. Then we posit that policy shocks will move the policy system into greater policy-opinion congruence. The model is tested for two policy areas that have undergone nonincremental change over the late 1970s and early 1980s: tax policy and education policy.
13
Vedula SS, Malpani A, Ahmidi N, Khudanpur S, Hager G, Chen CCG. Task-Level vs. Segment-Level Quantitative Metrics for Surgical Skill Assessment. J Surg Educ 2016; 73:482-489. [PMID: 26896147] [DOI: 10.1016/j.jsurg.2015.11.009]
Abstract
OBJECTIVE Task-level metrics of time and motion efficiency are valid measures of surgical technical skill. Metrics may be computed for segments (maneuvers and gestures) within a task after hierarchical task decomposition. Our objective was to compare task-level and segment (maneuver and gesture)-level metrics for surgical technical skill assessment. DESIGN Our analyses include predictive modeling using data from a prospective cohort study. We used a hierarchical semantic vocabulary to segment a simple surgical task of passing a needle across an incision and tying a surgeon's knot into maneuvers and gestures. We computed time, path length, and movements for the task, maneuvers, and gestures using tool motion data. We fit logistic regression models to predict experience-based skill using the quantitative metrics. We compared the area under a receiver operating characteristic curve (AUC) for task-level, maneuver-level, and gesture-level models. SETTING Robotic surgical skills training laboratory. PARTICIPANTS In total, 4 faculty surgeons with experience in robotic surgery and 14 trainee surgeons with no or minimal experience in robotic surgery. RESULTS Experts performed the task in shorter time (49.74s; 95% CI = 43.27-56.21 vs. 81.97; 95% CI = 69.71-94.22), with shorter path length (1.63m; 95% CI = 1.49-1.76 vs. 2.23; 95% CI = 1.91-2.56), and with fewer movements (429.25; 95% CI = 383.80-474.70 vs. 728.69; 95% CI = 631.84-825.54) than novices. Experts differed from novices on metrics for individual maneuvers and gestures. The AUCs were 0.79; 95% CI = 0.62-0.97 for task-level models, 0.78; 95% CI = 0.6-0.96 for maneuver-level models, and 0.7; 95% CI = 0.44-0.97 for gesture-level models. There was no statistically significant difference in AUC between task-level and maneuver-level (p = 0.7) or gesture-level models (p = 0.17). 
CONCLUSIONS Maneuver-level and gesture-level metrics are discriminative of surgical skill and can be used to provide targeted feedback to surgical trainees.
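As a rough illustration of the task-level pipeline above (motion-derived metrics fed to a logistic regression scored by AUC), the sketch below computes path length from synthetic tool trajectories. The trajectories, scales, and group sizes are invented for illustration; they are not the study's data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

def path_length(traj):
    """Total distance travelled by the tool tip: sum of step lengths."""
    return float(np.linalg.norm(np.diff(traj, axis=0), axis=1).sum())

def make_traj(scale, n=200):
    """Synthetic stand-in for a tool-motion recording (3-D random walk)."""
    return np.cumsum(scale * rng.standard_normal((n, 3)), axis=0)

# "Experts" move along shorter paths than "novices" (illustrative scales).
X = np.array([[path_length(make_traj(s))] for s in [0.01] * 20 + [0.03] * 20])
y = np.array([1] * 20 + [0] * 20)  # 1 = expert, 0 = novice

clf = LogisticRegression().fit(X, y)
auc = roc_auc_score(y, clf.predict_proba(X)[:, 1])
print(round(auc, 2))
```

The same recipe applies at maneuver or gesture level by computing the metric over the corresponding sub-segment of the trajectory.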
Affiliation(s)
- S Swaroop Vedula
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland.
- Anand Malpani
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland
- Narges Ahmidi
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland
- Sanjeev Khudanpur
- Department of Electrical & Computer Engineering, Johns Hopkins University, Baltimore, Maryland
- Gregory Hager
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland
- Chi Chiung Grace Chen
- Department of Gynecology & Obstetrics, Johns Hopkins University School of Medicine, Baltimore, Maryland
14
Richa R, Linhares R, Comunello E, von Wangenheim A, Schnitzler JY, Wassmer B, Guillemot C, Thuret G, Gain P, Hager G, Taylor R. Fundus image mosaicking for information augmentation in computer-assisted slit-lamp imaging. IEEE Trans Med Imaging 2014; 33:1304-1312. [PMID: 24718569] [DOI: 10.1109/tmi.2014.2309440]
Abstract
Laser photocoagulation is currently the standard treatment for sight-threatening diseases worldwide, namely diabetic retinopathy and retinal vein occlusions. The slit lamp biomicroscope is the device most commonly used for this procedure, especially for treatment of the eye periphery. However, only a small portion of the retina can be visualized through the biomicroscope, complicating the task of localizing and identifying surgical targets and increasing treatment duration and patient discomfort. In order to assist surgeons, we propose a method for creating intraoperative retina maps for view expansion using a slit-lamp device. Based on the mosaicking method described by Richa et al. (2012), the proposed method is a combination of direct and feature-based methods, suited to the textured nature of the human retina. In this paper, we describe three major enhancements to the original formulation. The first is a visual tracking method using local illumination compensation to cope with the challenging visualization conditions. The second is an efficient pixel selection scheme for increased computational efficiency. The third is an entropy-based mosaic update method that dynamically improves the retina map during exploration. To evaluate the performance of the proposed method, we conducted several experiments on human subjects with a computer-assisted slit-lamp prototype. We also demonstrate the practical value of the system for photo documentation, diagnosis, and intraoperative navigation.
15
16
Fleming IN, Kut C, Macura KJ, Su LM, Rivaz H, Schneider CM, Hamper U, Lotan T, Taylor R, Hager G, Boctor E. Ultrasound elastography as a tool for imaging guidance during prostatectomy: initial experience. Med Sci Monit 2013; 18:CR635-42. [PMID: 23111738] [PMCID: PMC3560608] [DOI: 10.12659/msm.883540]
Abstract
BACKGROUND During laparoscopic or robotic-assisted laparoscopic prostatectomy, the surgeon lacks the tactile feedback that can help tailor the size of the excision. Ultrasound elastography (USE) is an emerging imaging technology that maps the stiffness of tissue. In this paper, we evaluate USE as a palpation-equivalent tool for intraoperative image-guided robotic-assisted laparoscopic prostatectomy. MATERIAL/METHODS Two studies were performed: 1) a laparoscopic ultrasound probe was used in a comparative study of manual palpation versus USE in detecting tumor surrogates in synthetic and ex-vivo tissue phantoms; N=25 participants (students) were asked to report the presence, size, and depth of these simulated lesions; and 2) a standard ultrasound probe was used to evaluate USE on ex-vivo human prostate specimens (N=10 lesions in N=6 specimens) to differentiate hard from soft lesions with pathology correlation. Results were validated against pathology findings, as well as by in-vivo and ex-vivo MR imaging correlation. RESULTS In the comparative study, USE displayed higher accuracy and specificity in tumor detection (sensitivity=84%, specificity=74%). Tumor diameters and depths were better estimated using USE than with manual palpation. USE also proved consistent in identifying lesions in ex-vivo prostate specimens: hard and soft, malignant and benign, central and peripheral. CONCLUSIONS USE is a strong candidate for assisting surgeons by providing palpation-equivalent evaluation of tumor location, boundaries, and extra-capsular extension. The results encourage us to pursue further testing in the robotic laparoscopic environment.
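For reference, the reported sensitivity and specificity follow from the standard confusion-matrix definitions; the counts below are illustrative numbers chosen to reproduce the reported 84%/74%, not the study's raw data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Detection metrics from confusion counts:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts matching sensitivity 84% and specificity 74%.
sens, spec = sensitivity_specificity(tp=84, fn=16, tn=74, fp=26)
print(sens, spec)
```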
17
Lea C, Facker J, Hager G, Taylor R, Saria S. 3D Sensing Algorithms Towards Building an Intelligent Intensive Care Unit. AMIA Jt Summits Transl Sci Proc 2013; 2013:136-40. [PMID: 24303253] [PMCID: PMC3845759]
Abstract
Intensive Care Units (ICUs) are chaotic places where hundreds of tasks are carried out by many different people. Timely and coordinated execution of these tasks is directly related to the quality of patient outcomes, and an improved understanding of the current care process can aid in improving quality. Our goal is to build towards a system that automatically catalogs the various tasks being performed at the bedside. We propose a set of techniques using computer vision and machine learning to develop a system that passively senses the environment and identifies seven common actions, such as documenting, checking up on a patient, and performing a procedure. Preliminary evaluation of our system on 5.5 hours of data from the Pediatric ICU obtains an overall task recognition accuracy of 70%. Furthermore, we show how the system can be used to summarize and visualize tasks. Our system represents a significant departure from current approaches used for quality improvement; with further improvement, we believe such a system could realistically be deployed in the ICU.
Affiliation(s)
- Colin Lea
- Computer Science Department, Johns Hopkins University, Baltimore, MD 21218
Primary contact: Colin Lea
- James Facker
- Department of Anesthesia and Critical Care, Johns Hopkins University, Baltimore, MD 21218
- Gregory Hager
- Computer Science Department, Johns Hopkins University, Baltimore, MD 21218
- Russell Taylor
- Computer Science Department, Johns Hopkins University, Baltimore, MD 21218
- Suchi Saria
- Computer Science Department, Johns Hopkins University, Baltimore, MD 21218
- Department of Health Policy and Management, Johns Hopkins University, Baltimore, MD 21218
18
Abstract
In retinal surgery, surgeons face difficulties such as indirect visualization of surgical targets, physiological tremor, and lack of tactile feedback, which increase the risk of retinal damage caused by incorrect surgical gestures. In this context, intraocular proximity sensing has the potential to overcome current technical limitations and increase surgical safety. In this paper, we present a system for detecting unintentional collisions between surgical tools and the retina using the visual feedback provided by the ophthalmic stereo microscope. Using stereo images, proximity between surgical tools and the retinal surface can be detected when their relative stereo disparity is small. For this purpose, we developed a system comprising two modules. The first tracks the surgical tool position in both stereo images. The second is a disparity tracking module that estimates a stereo disparity map of the retinal surface. Both modules were specially tailored to cope with the challenging visualization conditions in retinal surgery. The potential clinical value of the proposed method is demonstrated by extensive testing using a silicon phantom eye and recorded in-vivo rabbit data.
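The disparity cue described above can be sketched with simple SSD block matching along a stereo scanline: when the tool patch and the retinal-surface patch yield similar disparities, the system would flag proximity. The scanlines, window size, and threshold below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def patch_disparity(left_row, right_row, x, win=3, max_d=10):
    """Estimate the stereo disparity of the patch centred at `x` in the left
    scanline by SSD block matching against the right scanline."""
    patch = left_row[x - win : x + win + 1]
    ssd = [np.sum((patch - right_row[x - d - win : x - d + win + 1]) ** 2)
           for d in range(max_d + 1)]
    return int(np.argmin(ssd))

# Toy scanlines: the right image is the left image shifted by 4 pixels,
# i.e. every structure sits at disparity 4.
rng = np.random.default_rng(0)
left = rng.standard_normal(64)
right = np.roll(left, -4)

d_surface = patch_disparity(left, right, x=30)  # "retinal surface" patch
d_tool = patch_disparity(left, right, x=40)     # "tool tip" patch
# Proximity alert: tool and surface disparities nearly equal -> tool near retina.
in_proximity = abs(d_tool - d_surface) < 2
print(d_surface, d_tool, in_proximity)
```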
Affiliation(s)
- R Richa
- Laboratory of Computational Sensing and Robotics, The Johns Hopkins University, Baltimore, MD 21218, USA.
19
Aust S, Pils D, Cacsire Castillo-Tong D, Hager G, Obermayr E, Heinze G, Kohl M, Schuster E, Wolf A, Schiebel I, Sehouli J, Braicu I, Vergote I, Van Gorp T, Mahne S, Concin N, Speiser P, Zeillinger R. A combined blood-based gene expression and plasma protein signature for the diagnosis of epithelial ovarian carcinoma - a study of the OVCAD consortium. Geburtshilfe Frauenheilkd 2012. [DOI: 10.1055/s-0032-1309219]
20
Liu WP, Mirota DJ, Uneri A, Otake Y, Hager G, Reh DD, Ishii M, Gallia GL, Siewerdsen JH. A Clinical Pilot Study of a Modular Video-CT Augmentation System for Image-Guided Skull Base Surgery. Proc SPIE Int Soc Opt Eng 2012; 8316:831633. [PMID: 37476578] [PMCID: PMC10358450] [DOI: 10.1117/12.911724]
Abstract
Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)], can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy (0.7±0.3) pixels and mean target registration error of (2.3±1.5)mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (un-augmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
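The re-projection accuracy quoted above is the pixel distance between projected 3-D points and their detected 2-D locations. A minimal pinhole-camera sketch follows, with illustrative intrinsics and fiducial points rather than the study's calibration data.

```python
import numpy as np

def reproject(K, R, t, pts3d):
    """Project 3-D points through a pinhole camera (intrinsics K, pose R, t)."""
    cam = R @ pts3d.T + t[:, None]   # points expressed in the camera frame
    uv = K @ cam                     # homogeneous image coordinates
    return (uv[:2] / uv[2]).T        # pixel coordinates

# Hypothetical calibration check: mean re-projection error of known 3-D
# fiducials against their detected 2-D locations (values are illustrative).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts3d = np.array([[0.1, 0.0, 1.0], [0.0, 0.1, 1.2], [-0.1, -0.1, 0.9]])

detected = reproject(K, R, t, pts3d) + np.array([0.5, -0.5])  # detector noise
err = np.linalg.norm(reproject(K, R, t, pts3d) - detected, axis=1)
print(err.mean())  # mean re-projection error in pixels
```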
Affiliation(s)
- Wen P Liu
- Department of Computer Science, Johns Hopkins University, Baltimore MD
- Daniel J Mirota
- Department of Computer Science, Johns Hopkins University, Baltimore MD
- Ali Uneri
- Department of Computer Science, Johns Hopkins University, Baltimore MD
- Yoshito Otake
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
- Gregory Hager
- Department of Computer Science, Johns Hopkins University, Baltimore MD
- Douglas D Reh
- Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins Medical Institute, Baltimore MD
- Masaru Ishii
- Department of Otolaryngology-Head & Neck Surgery, Johns Hopkins Medical Institute, Baltimore MD
- Gary L Gallia
- Department of Neurosurgery and Oncology, Johns Hopkins Medical Institute, Baltimore MD
- Jeffrey H Siewerdsen
- Department of Computer Science, Johns Hopkins University, Baltimore MD
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore MD
21
Kumar R, Jog A, Vagvolgyi B, Nguyen H, Hager G, Chen CCG, Yuh D. Objective measures for longitudinal assessment of robotic surgery training. J Thorac Cardiovasc Surg 2011; 143:528-34. [PMID: 22172215] [DOI: 10.1016/j.jtcvs.2011.11.002]
Abstract
OBJECTIVES Current robotic training approaches lack the criteria for automatically assessing and tracking (over time) technical skills separately from clinical proficiency. We describe the development and validation of a novel automated and objective framework for the assessment of training. METHODS We are able to record all system variables (stereo instrument video, hand and instrument motion, buttons and pedal events) from the da Vinci surgical systems using a portable archival system integrated with the robotic surgical system. Data can be collected unsupervised, and the archival system does not change system operations in any way. Our open-ended multicenter protocol is collecting surgical skill benchmarking data from 24 trainees to surgical proficiency, subject only to their continued availability. Two independent experts performed structured (objective structured assessment of technical skills) assessments on longitudinal data from 8 novice and 4 expert surgeons to generate baseline data for training and to validate our computerized statistical analysis methods in identifying the ranges of operational and clinical skill measures. RESULTS Objective differences in operational and technical skill between known experts and other subjects were quantified. The longitudinal learning curves and statistical analysis for trainee performance measures are reported. Graphic representations of the skills developed for feedback to the trainees are also included. CONCLUSIONS We describe an open-ended longitudinal study and automated motion recognition system capable of objectively differentiating between clinical and technical operational skills in robotic surgery. Our results have demonstrated a convergence of trainee skill parameters toward those derived from expert robotic surgeons during the course of our training protocol.
Affiliation(s)
- Rajesh Kumar
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA.
22
Kumar R, Jog A, Malpani A, Vagvolgyi B, Yuh D, Nguyen H, Hager G, Chen CCG. Assessing system operation skills in robotic surgery trainees. Int J Med Robot 2011; 8:118-24. [PMID: 22114003] [DOI: 10.1002/rcs.449]
Abstract
BACKGROUND With increased use of robotic surgery in specialties including urology, development of training methods has also intensified. However, current approaches lack the ability to discriminate between operational and surgical skills. METHODS An automated recording system was used to longitudinally (monthly) acquire instrument motion/telemetry and video for four basic surgical skills: suturing, manipulation, transection, and dissection. Statistical models were then developed to discriminate the human-machine skill differences between practicing expert surgeons and trainees. RESULTS Data from six trainees and two experts were analyzed to validate the first statistical models of operational skills, demonstrating classification with very high accuracy (91.7% for masters, and 88.2% for camera motion) and sensitivity. CONCLUSIONS The paper reports on a longitudinal study aimed at tracking robotic surgery trainees to proficiency, and on methods capable of objectively assessing the operational and technical skills that would be used to assess trainee progress at the participating institutions.
Affiliation(s)
- Rajesh Kumar
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA.
23
Kumar R, Zhao Q, Seshamani S, Mullin G, Hager G, Dassopoulos T. Assessment of Crohn's disease lesions in wireless capsule endoscopy images. IEEE Trans Biomed Eng 2011; 59:355-62. [PMID: 22020661] [DOI: 10.1109/tbme.2011.2172438]
Abstract
Capsule endoscopy (CE) provides noninvasive access to a large part of the small bowel that is otherwise inaccessible without invasive and traumatic treatment. However, it also produces large amounts of data (approximately 50,000 images) that must then be manually reviewed by a clinician. Such large datasets provide an opportunity for the application of image analysis and supervised learning methods. Automated analysis of CE images has so far focused only on detection, and often only on bleeding. In contrast to these detection approaches, we explored the assessment of discrete lesions created by mucosal inflammation in Crohn's disease (CD). Our work is the first study to systematically explore supervised classification for CD lesions, a classifier cascade to classify discrete lesions, and quantitative assessment of lesion severity. We used a well-developed database of 47 studies to evaluate these methods. The developed methods show high agreement with ground-truth severity ratings manually assigned by an expert, and good precision (>90% for lesion detection) and recall (>90%) for lesions of varying severity.
Affiliation(s)
- Rajesh Kumar
- Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA.
24
Pils D, Hager G, Wolf A, Aust S, Grimm C, Speiser P, Sehouli J, Braicu I, Cadron I, Vergote I, Mahner S, Cacsire-Castillo Tong D, Zeillinger R. Molecular subclassification as an independent prognostic factor in patients with ovarian carcinoma - a study of the OVCAD consortium. Geburtshilfe Frauenheilkd 2011. [DOI: 10.1055/s-0031-1286469]
25
Pils D, Hager G, Wolf A, Aust S, Grimm C, Speiser P, Sehouli J, Braicu I, Cadron I, Vergote I, Mahner S, Tong D, Zeillinger R. Molecular subclassification as an independent prognostic factor in patients with ovarian carcinoma - a study of the OVCAD consortium. Geburtshilfe Frauenheilkd 2011. [DOI: 10.1055/s-0031-1278584]
26
Richa R, Balicki M, Meisner E, Sznitman R, Taylor R, Hager G. Visual Tracking of Surgical Tools for Proximity Detection in Retinal Surgery. Information Processing in Computer-Assisted Interventions 2011. [DOI: 10.1007/978-3-642-21504-9_6]
27
28
Mayr B, Hager G. Postmeiotic NOR-expression during spermiogenesis of the domestic pig (Sus scrofa domestica L.). Zentralbl Veterinarmed A 1980; 27:780-7. [PMID: 6784408] [DOI: 10.1111/j.1439-0442.1980.tb02031.x]
29
30
Seshamani S, Kumar R, Dassopoulos T, Mullin G, Hager G. Augmenting capsule endoscopy diagnosis: a similarity learning approach. Med Image Comput Comput Assist Interv 2010; 13:454-62. [PMID: 20879347] [DOI: 10.1007/978-3-642-15745-5_56]
Abstract
The current procedure for diagnosing Crohn's disease (CD) from capsule endoscopy is a tedious manual process that requires the clinician to visually inspect large video sequences to match and categorize diseased areas (lesions). Automated methods for matching and classification can help improve this process by reducing diagnosis time and improving the consistency of categorization. In this paper, we propose a novel SVM-based similarity learning method for distinguishing between correct and incorrect matches in capsule endoscopy (CE). We also show that this method can be used in conjunction with a voting scheme to categorize lesion images. Results show that our methods outperform standard classifiers in discriminating similar from dissimilar lesion images, as well as in lesion categorization. We also show that our methods drastically reduce training time by requiring only half of the data for training, without compromising the accuracy of the classifier.
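A minimal sketch of SVM-based similarity learning in the spirit described above: pairs of image descriptors are turned into a symmetric pair feature, and an SVM separates matching from non-matching pairs. The descriptors, pair feature, and kernel choice are assumptions for illustration, not the paper's design.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

def pair_feature(a, b):
    """Symmetric pair descriptor: element-wise absolute difference."""
    return np.abs(a - b)

# Synthetic lesion descriptors: matching pairs come from the same underlying
# lesion (small perturbation), non-matching pairs from different lesions.
lesions = rng.standard_normal((30, 8))
pos = [pair_feature(l, l + 0.05 * rng.standard_normal(8)) for l in lesions]
neg = [pair_feature(lesions[i], lesions[j])
       for i in range(15) for j in range(15, 30)]

X = np.vstack(pos + neg)
y = np.array([1] * len(pos) + [0] * len(neg))  # 1 = similar, 0 = dissimilar

clf = SVC(kernel="rbf").fit(X, y)
acc = clf.score(X, y)  # training accuracy on this toy similarity task
print(round(acc, 2))
```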
Affiliation(s)
- S Seshamani
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA
31
Ejima S, Hager G, Fehske H. Quantum phase transition in a 1D transport model with boson-affected hopping: Luttinger liquid versus charge-density-wave behavior. Phys Rev Lett 2009; 102:106404. [PMID: 19392136] [DOI: 10.1103/physrevlett.102.106404]
Abstract
We solve a very general two-channel fermion-boson model describing charge transport within some background medium by means of a refined pseudosite density-matrix renormalization group technique. Performing a careful finite-size scaling analysis, we determine the ground-state phase diagram and convincingly prove that the model exhibits a metal-insulator quantum phase transition for the half-filled band case. In order to characterize the metallic and insulating regimes we calculate, besides the local particle densities and fermion-boson correlation functions, the kinetic energy, the charge-structure factor, the Luttinger liquid charge exponent, and the single-particle excitation gap for a one-dimensional infinite system.
Affiliation(s)
- S Ejima
- Institut für Physik, Ernst-Moritz-Arndt-Universität Greifswald, 17489 Greifswald, Germany
32
Varadarajan B, Reiley C, Lin H, Khudanpur S, Hager G. Data-derived models for segmentation with application to surgical assessment and training. Med Image Comput Comput Assist Interv 2009; 12:426-34. [PMID: 20426016] [DOI: 10.1007/978-3-642-04268-3_53]
Abstract
This paper addresses automatic skill assessment in robotic minimally invasive surgery. Hidden Markov models (HMMs) are developed for individual surgical gestures (or surgemes) that comprise a typical bench-top surgical training task. It is known that such HMMs can be used to recognize and segment surgemes in previously unseen trials. Here, the topology of each surgeme HMM is designed in a data-driven manner, mixing trials from multiple surgeons with varying skill levels, resulting in HMM states that model skill-specific sub-gestures. The sequence of HMM states visited while performing a surgeme are therefore indicative of the surgeon's skill level. This expectation is confirmed by the average edit distance between the state-level "transcripts" of the same surgeme performed by two surgeons with different expertise levels. Some surgemes are further shown to be more indicative of skill than others.
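The state-transcript comparison described above reduces to an edit (Levenshtein) distance between symbol sequences; a minimal sketch with hypothetical HMM state labels follows.

```python
def edit_distance(a, b):
    """Levenshtein distance between two state-level transcripts."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

# Hypothetical transcripts: HMM states visited while performing one surgeme.
expert = ["s1", "s2", "s4"]
novice = ["s1", "s3", "s3", "s4"]
print(edit_distance(expert, novice))  # 2
```

A larger average distance between two surgeons' transcripts for the same surgeme would then indicate a skill gap.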
Affiliation(s)
- Balakrishnan Varadarajan
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD 21218, USA.
33
Rivaz H, Boctor E, Foroughi P, Zellars R, Fichtinger G, Hager G. Ultrasound elastography: a dynamic programming approach. IEEE Trans Med Imaging 2008; 27:1373-1377. [PMID: 18815089] [DOI: 10.1109/tmi.2008.917243]
Abstract
This paper introduces a 2-D strain imaging technique based on minimizing a cost function using dynamic programming (DP). The cost function incorporates similarity of echo amplitudes and displacement continuity. Since tissue deformations are smooth, the incorporation of the smoothness into the cost function results in reduced decorrelation noise. As a result, the method generates high-quality strain images of freehand palpation elastography with up to 10% compression, showing that the method is more robust to signal decorrelation (caused by scatterer motion in high axial compression and nonaxial motions of the probe) in comparison to the standard correlation techniques. The method operates in less than 1 s and is thus also potentially suitable for real time elastography.
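A 1-D sketch of the dynamic-programming idea described above: each sample's displacement is chosen to minimize amplitude mismatch plus a displacement-continuity penalty. The signals, search range, and smoothness weight are illustrative assumptions, not the paper's 2-D implementation.

```python
import numpy as np

def dp_displacement(pre, post, max_disp=5, smooth_w=0.5):
    """Estimate an integer displacement for each sample of `pre` within `post`
    by dynamic programming over a cost = amplitude mismatch + smoothness."""
    n = len(pre)
    disps = np.arange(-max_disp, max_disp + 1)
    K = len(disps)
    cost = np.full((n, K), np.inf)
    back = np.zeros((n, K), dtype=int)

    def data_cost(i, d):
        j = i + d
        return (pre[i] - post[j]) ** 2 if 0 <= j < len(post) else np.inf

    for k, d in enumerate(disps):
        cost[0, k] = data_cost(0, d)
    for i in range(1, n):
        for k, d in enumerate(disps):
            # smoothness term: penalize jumps between neighboring displacements
            trans = cost[i - 1] + smooth_w * (disps - d) ** 2
            back[i, k] = int(np.argmin(trans))
            cost[i, k] = data_cost(i, d) + trans[back[i, k]]

    # backtrack the minimum-cost displacement path
    path = np.zeros(n, dtype=int)
    k = int(np.argmin(cost[-1]))
    for i in range(n - 1, -1, -1):
        path[i] = disps[k]
        k = back[i, k]
    return path

# Toy RF lines: `post` is `pre` shifted by 2 samples plus a little noise.
rng = np.random.default_rng(0)
pre = rng.standard_normal(40)
post = np.roll(pre, 2) + 0.01 * rng.standard_normal(40)
est = dp_displacement(pre, post)
print(est[5:35])  # interior displacement estimates
```

The smoothness weight plays the role of the continuity term in the paper's cost function: larger values suppress decorrelation-induced jumps at the price of over-smoothing true strain gradients.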
Affiliation(s)
- Hassan Rivaz
- Engineering Research Center for Computer Integrated Surgery, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA
34
Abstract
BACKGROUND Recent advances in computational image processing have made it possible to reconstruct camera motion and scene geometry from a series of monocular images. By applying these methods to endoscopic image sequences, it is possible to create detailed, quantitative anatomic reconstructions. Such reconstructions have many potential clinical uses. Our objectives in this study are to (1) develop a process flow for reconstruction from endoscopic image sequences and (2) present results supporting the hypothesis that such reconstructions can be computed. METHODS We first outline the overall process flow for endoscopic reconstruction. We then present an instantiation of this process flow using recently developed methods in computational vision, and apply these methods to cadaveric specimens for which ground-truth endoscopic motion is known. RESULTS We are able to produce consistent estimates of endoscopic motion and dense reconstructions of the surrounding anatomy for >65% of 1373 image pairs. CONCLUSION Our study indicates that processing endoscopic images to produce anatomic structure is feasible. Such reconstructions have high potential clinical value for intraoperative navigation, diagnosis, and treatment planning.
Affiliation(s)
- Hanzi Wang
- Center for Computer-Integrated Surgical Systems and Technology, The Johns Hopkins University, Baltimore, Maryland 21218, USA
35
Fleming I, Balicki M, Koo J, Iordachita I, Mitchell B, Handa J, Hager G, Taylor R. Cooperative robot assistant for retinal microsurgery. Med Image Comput Comput Assist Interv 2008; 11:543-50. [PMID: 18982647] [DOI: 10.1007/978-3-540-85990-1_65]
Abstract
This paper describes the development and results of initial testing of a cooperative robot assistant for retinal microsurgery. In the cooperative control paradigm, the surgeon and the robot share control of a tool attached to the robot through a force sensor. The system senses forces exerted by the operator on the tool and uses this information in various control modes to provide smooth, tremor-free, precise positional control and force scaling. The robot manipulator is specifically designed with retinal microsurgery in mind, having high efficacy, flexibility and ergonomics while meeting the accuracy and safety requirements of microsurgery. We have tested this robot on a biological model and we report the results for reliably cannulating approximately 80 µm diameter veins (equivalent in size to human retinal veins). We also describe improvements to the robot and the experimental setup that facilitate a more advanced set of experiments.
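The shared-control scheme described above — robot velocity proportional to the force the surgeon applies through the tool — is an admittance-control law. A toy single-axis-per-component sketch follows; the gain, deadband, and loop period are illustrative assumptions, not the paper's values:

```python
import numpy as np

def admittance_step(pos, f_sensed, gain=0.002, f_deadband=0.1, dt=0.001):
    """One cycle of a cooperative (admittance) control loop.

    The robot moves with a velocity proportional to the force the operator
    applies to the tool handle; a deadband suppresses sensor noise and
    tremor-level forces below `f_deadband` newtons.
    """
    mag = np.linalg.norm(f_sensed)
    if mag < f_deadband:
        return pos                  # ignore tremor-level forces: no motion
    v = gain * f_sensed             # velocity command in m/s (force scaling)
    return pos + v * dt             # integrate over one control period
```

Force scaling falls out of the gain: a large applied force still produces only a small, smooth tool velocity, which is what gives the tremor-free precise positioning described in the abstract.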
Affiliation(s)
- Ioana Fleming
- ERC for Computer Integrated Surgery, Johns Hopkins University, Baltimore, MD, USA.
36
Rivaz H, Zellars R, Hager G, Fichtinger G, Boctor E. 9C-1 Beam Steering Approach for Speckle Characterization and Out-of-Plane Motion Estimation in Real Tissue. 2007 IEEE Ultrasonics Symposium Proceedings. [DOI: 10.1109/ultsym.2007.200]
37
Boctor E, deOliveira M, Choti M, Ghanem R, Taylor R, Hager G, Fichtinger G. Ultrasound monitoring of tissue ablation via deformation model and shape priors. Med Image Comput Comput Assist Interv 2007; 9:405-12. [PMID: 17354798] [DOI: 10.1007/11866763_50]
Abstract
A rapid approach to monitor ablative therapy through optimizing shape and elasticity parameters is introduced. Our motivating clinical application is targeting and intraoperative monitoring of hepatic tumor thermal ablation, but the method translates to the generic problem of encapsulated stiff masses (solid organs, tumors, ablated lesions, etc.) in ultrasound imaging. The approach involves the integration of the following components: a biomechanical computational model of the tissue, a correlation approach to estimate/track tissue deformation, and an optimization method to solve the inverse problem and recover the shape parameters in the volume of interest. Successful convergence and reliability studies were conducted on simulated data. Ex vivo studies were then performed on 18 bovine liver samples previously ablated under ultrasound monitoring in a controlled laboratory environment. While B-mode ultrasound does not clearly identify the development of necrotic lesions, the proposed technique can potentially segment the ablation zone. The same framework can also yield both partial and full elasticity reconstruction.
Affiliation(s)
- Emad Boctor
- Engineering Research Center, Johns Hopkins University, Baltimore, MD, USA.
38
Watrowski RA, Castillo-Tong DC, Hager G, Fabjani G, Zeillinger R. Ile/Val-655-Polymorphismus des HER-2-Gens als möglicher Risikofaktor für das Brust- und Ovarialkarzinom [Ile/Val-655 polymorphism of the HER-2 gene as a possible risk factor for breast and ovarian cancer]. Geburtshilfe Frauenheilkd 2006. [DOI: 10.1055/s-2006-952203]
39
Bash R, Wang H, Anderson C, Yodh J, Hager G, Lindsay SM, Lohr D. AFM imaging of protein movements: histone H2A-H2B release during nucleosome remodeling. FEBS Lett 2006; 580:4757-61. [PMID: 16876789] [DOI: 10.1016/j.febslet.2006.06.101]
Abstract
Being able to follow assembly/disassembly reactions of biomolecular complexes directly at the single molecule level would be very useful. Here, we use an AFM technique that can simultaneously obtain topographic images and identify the locations of a specific type of protein within those images to monitor the histone H2A component of nucleosomes acted on by human Swi-Snf, an ATP-dependent nucleosome remodeling complex. Activation of remodeling results in significant H2A release from nucleosomes, based on recognition imaging and nucleosome height changes, and changes in the recognition patterns of H2A associated directly with hSwi-Snf complexes.
Affiliation(s)
- R Bash
- Biodesign Institute, Arizona State University, Tempe, AZ 85287-5601, USA
40
Boctor EM, Iordachita I, Choti MA, Hager G, Fichtinger G. Bootstrapped ultrasound calibration. Stud Health Technol Inform 2006; 119:61-6. [PMID: 16404015]
Abstract
This paper introduces an enhanced (bootstrapped) method for tracked ultrasound probe calibration. Prior to calibration, a position sensor is used to track an ultrasound probe in 3D space, while the US image is used to determine calibration target locations within the image. From this information, an estimate of the transformation matrix of the scan plane with respect to the position sensor is computed. While all prior calibration methods terminate at this phase, we use this initial calibration estimate to bootstrap an additional optimization of the transformation matrix on independent data to yield the minimum reconstruction error on calibration targets. The bootstrapped workflow makes use of a closed-form calibration solver and associated sensitivity analysis, allowing for rapid and robust convergence to an optimal calibration matrix. Bootstrapping demonstrates superior reconstruction accuracy.
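The closed-form solver underpinning the initial calibration estimate is typically an Arun/Kabsch-style least-squares rigid registration of corresponding points. A numpy sketch of that step follows (the bootstrapped refinement on independent data is omitted; this is an assumed building block, not the paper's exact solver):

```python
import numpy as np

def rigid_transform(A, B):
    """Closed-form (Arun/Kabsch) least-squares rigid transform mapping
    point set A onto point set B, so that B ~= R @ a + t for each row a.

    A, B : (n, 3) arrays of corresponding 3D points.
    """
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guards against a reflection solution.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```

In the bootstrapped workflow this estimate would then seed a further optimization of the transform, scored by reconstruction error of calibration targets on held-out data.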
Affiliation(s)
- Emad M Boctor
- Engineering Research Center, Johns Hopkins University, USA
41
Abstract
With the advancement of minimally invasive techniques for surgical and diagnostic procedures, there is a growing need for the development of methods for improved visualization of internal body structures. Video mosaicking is one method for doing this. This approach provides a broader field of view of the scene by stitching together images in a video sequence. Of particular importance is the need for online processing to provide real-time feedback and visualization for image-guided surgery and diagnosis. We propose a method for online video mosaicking applied to endoscopic imagery, with examples in microscopic retinal imaging and catadioptric endometrial imaging.
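Homography-based stitching is one standard way to realize such mosaicking: estimate a per-frame planar homography from matched points, then warp each frame into the mosaic. A numpy sketch of the homography-estimation (DLT) step — illustrative, not necessarily the authors' exact method:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst from >= 4 point
    pairs via the direct linear transform (DLT); this is the per-frame
    alignment step in homography-based mosaicking."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Two linear constraints per correspondence on the 9 entries of H.
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    # H is the null vector of A: the last right-singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the arbitrary scale
```

For online operation, each new frame is registered only against the previous one (or the current mosaic), keeping per-frame cost constant so the mosaic can grow in real time.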
42
Wang H, Bash R, Yodh JG, Hager G, Lindsay SM, Lohr D. Using atomic force microscopy to study nucleosome remodeling on individual nucleosomal arrays in situ. Biophys J 2005; 87:1964-71. [PMID: 15345572] [PMCID: PMC1304599] [DOI: 10.1529/biophysj.104.042606]
Abstract
In eukaryotes, genomic processes like transcription, replication, repair, and recombination typically require alterations in nucleosome structure on specific DNA regions to operate. ATP-dependent nucleosome remodeling complexes provide a major mechanism for carrying out such alterations in vivo. To learn more about the action of these important complexes, we have utilized an atomic force microscopy in situ technique that permits comparison of the same individual molecules before and after activation of a particular process, in this case nucleosome remodeling. This direct approach was used to look for changes induced by the action of the human Swi-Snf remodeling complex on individual, single-copy mouse mammary tumor virus promoter nucleosomal arrays. Using this technique, we detect a variety of changes on remodeling. Many of these changes are larger in scale than suggested from previous studies and involve a number of DNA-mediated events, including a preference for the removal of a complete turn (80 basepairs) of nucleosomal DNA. The latter result raises the possibility of an unanticipated mode of human Swi-Snf interaction with the nucleosome, namely via the 11-nm histone surface.
Affiliation(s)
- H Wang
- Department of Physics and Astronomy, Arizona State University, Tempe, Arizona 85287, USA
43
Viswanathan A, Boctor EM, Taylor RH, Hager G, Fichtinger G. Immediate Ultrasound Calibration with Three Poses and Minimal Image Processing. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2004. [DOI: 10.1007/978-3-540-30136-3_55]
44
Bash R, Wang H, Yodh J, Hager G, Lindsay SM, Lohr D. Nucleosomal arrays can be salt-reconstituted on a single-copy MMTV promoter DNA template: their properties differ in several ways from those of comparable 5S concatameric arrays. Biochemistry 2003; 42:4681-90. [PMID: 12705831] [DOI: 10.1021/bi026887o]
Abstract
Subsaturated nucleosomal arrays were reconstituted on a single-copy MMTV promoter DNA fragment by salt dialysis procedures and studied by atomic force microscopy. Up to an occupation level of approximately eight nucleosomes on this 1900 bp template, salt reconstitution produces nucleosomal arrays which look very similar to comparably loaded 5S rDNA nucleosomal arrays; i.e., nucleosomes are dispersed on the DNA template. Thus, at these occupation levels, the single-copy MMTV template forms arrays suitable for biophysical analyses. A quantitative comparison of the population features of subsaturated MMTV and 5S arrays detects differences between the two: a requirement for higher histone levels to achieve a given level of nucleosome occupation on MMTV templates, indicating that nucleosome loading is thermodynamically less favorable on this template; a preference for pairwise nucleosome occupation of the MMTV (but not the 5S) template at midrange occupation levels; and an enhanced salt stability for nucleosomes on MMTV versus 5S arrays, particularly in the midrange of array occupation. When average occupation levels exceed approximately eight nucleosomes per template, MMTV arrays show a significant level of mainly intramolecular compaction; 5S arrays do not. Taken together, these results show clearly that the nature of the underlying DNA template can affect the physical properties of nucleosomal arrays. DNA sequence-directed differences in the physical properties of chromatin may have important consequences for functional processes such as gene regulation.
Affiliation(s)
- R Bash
- Department of Physics and Astronomy, Arizona State University, Tempe, Arizona 85287, USA
45
Affiliation(s)
- G Hager
- National Cancer Institute, Bethesda, MD, USA
46
Hager G, Formanek M, Gedlicka C, Knerer B, Kornfehl J. Ethanol decreases expression of p21 and increases hyperphosphorylated pRb in cell lines of squamous cell carcinomas of the head and neck. Alcohol Clin Exp Res 2001; 25:496-501. [PMID: 11329487]
Abstract
BACKGROUND Alcohol increases the risk of cancers of the upper aerodigestive tract, but the biological mechanisms of this ethanol effect are still unclear. We recently reported that ethanol is able to induce in vitro proliferation accompanied by an increased number of cells in the S phase of the cell cycle in squamous cell carcinoma cell lines of the head and neck (SCCHN). In the current study we investigated the influence of ethanol over a limited period of time (96 hr) on cell cycle-regulating proteins involved in G1/S phase transition. METHODS Synchronized cells of SCCHN cell lines JPPA (larynx) and SCC 9 and SCC 25 (tongue), as well as HaCaT (human immortalized keratinocytes)-used as a control-were cultured for 96 hr in the presence or absence of ethanol (10⁻³ M). At several time intervals the expression of cyclin D1 and p21 and the phosphorylation status of the retinoblastoma protein (pRb) were determined by Western or Northern blot analysis, or both. RESULTS Ethanol had no influence on the protein expression of cyclin D1. In contrast, a distinct downregulation of p21 at the protein as well as the mRNA level could be detected. Furthermore, as a downstream event, the hyperphosphorylated form of the pRb increased. CONCLUSIONS In the acute alcohol in vitro experiments, the marked downregulation of the important cell cycle inhibitor p21 and the corresponding increase of hyperphosphorylated pRb accelerate the progression of cells from the G1 to the S phase in the cell cycle. The importance of these data and their relevance to in vivo conditions remain speculative, but it could be a critical step in the multistep process of SCCHN carcinogenesis induced by ethanol.
Affiliation(s)
- G Hager
- Department of Oto-Rhino-Laryngology, Head and Neck Surgery, General Hospital, University of Vienna, Austria
47
Abstract
Motoneurons respond to peripheral nerve transection by either regenerative or degenerative events depending on their state of maturation. Since the expression of c-Jun has been involved in the early signalling of the regenerative process that follows nerve transection in adults, we have investigated c-Jun in rat neonatal axotomized motoneurons during the period in which neuronal death is induced. Changes in levels of c-Jun protein and its mRNA were determined by means of quantitative immunocytochemistry and in situ hybridization. Three hours after nerve transection performed on postnatal day (P)3, c-Jun protein and mRNA are induced in axotomized spinal cord motoneurons, and high levels were reached between 1 and 10 days after. This response is associated with a detectable c-Jun activation by phosphorylation on serine 63. No changes were found in the levels of activating transcription factor-2. Most dying motoneurons were not labelled by either a specific c-Jun antibody or a c-jun mRNA probe. However, dying motoneurons were specifically stained by a polyclonal anti c-Jun antibody, indicating that some c-Jun antibodies react with unknown epitopes, probably distinct from c-Jun p39, that are specifically associated with apoptosis.
Affiliation(s)
- A Casanovas
- Unitat de Neurobiologia Cellular, Departament de Ciències Mèdiques Bàsiques, Facultat de Medicina, Universitat de Lleida, Spain
48
Horvat A, Schwaiger F, Hager G, Brocker F, Streif R, Knyazev P, Ullrich A, Kreutzberg GW. A novel role for protein tyrosine phosphatase shp1 in controlling glial activation in the normal and injured nervous system. J Neurosci 2001; 21:865-74. [PMID: 11157073] [PMCID: PMC6762306]
Abstract
Tyrosine phosphorylation regulated by protein tyrosine kinases and phosphatases plays an important role in the activation of glial cells. Here we examined the expression of intracellular protein tyrosine phosphatase SHP1 in the normal and injured adult rat and mouse CNS. Our study showed that in the intact CNS, SHP1 was expressed in astrocytes as well as in pyramidal cells in hippocampus and cortex. Axotomy of peripheral nerves and direct cortical lesion led to a massive upregulation of SHP1 in activated microglia and astrocytes, whereas the neuronal expression of SHP1 was not affected. In vitro experiments revealed that in astrocytes, SHP1 associates with epidermal growth factor (EGF)-receptor, whereas in microglia, SHP1 associates with colony-stimulating factor (CSF)-1-receptor. In postnatal and adult motheaten viable (meᵛ/meᵛ) mice, which are characterized by reduced SHP1 activity, a strong increase in reactive astrocytes, defined by GFAP immunoreactivity, was observed throughout the intact CNS, whereas neither the morphology nor the number of microglial cells appeared modified. Absence of [³H]-thymidine-labeled nuclei indicated that astrocytic proliferation does not occur. In response to injury, cell number as well as proliferation of microglia were reduced in meᵛ/meᵛ mice, whereas the posttraumatic astrocytic reaction did not differ from wild-type littermates. The majority of activated microglia in mutant mice showed a rounded, ameboid morphology. However, the regeneration rate after facial nerve injury in meᵛ/meᵛ mice was similar to that in wild-type littermates. These results emphasize that SHP1, as a part of different signaling pathways, plays an important role in the global regulation of astrocytic and microglial activation in the normal and injured CNS.
MESH Headings
- Animals
- Astrocytes/metabolism
- Astrocytes/pathology
- Axotomy
- Cells, Cultured
- Cerebral Cortex/metabolism
- Cerebral Cortex/pathology
- Disease Models, Animal
- Glial Fibrillary Acidic Protein/metabolism
- Head Injuries, Penetrating/enzymology
- Head Injuries, Penetrating/pathology
- Immunohistochemistry
- Male
- Mice
- Mice, Inbred C57BL
- Mice, Mutant Strains
- Nerve Crush
- Nerve Regeneration
- Neuroglia/enzymology
- Neuroglia/pathology
- Peripheral Nerves/metabolism
- Peripheral Nerves/pathology
- Pyramidal Cells/metabolism
- Pyramidal Cells/pathology
- RNA, Messenger/metabolism
- Rats
- Rats, Wistar
- Trauma, Nervous System/enzymology
- Trauma, Nervous System/pathology
Affiliation(s)
- A Horvat
- Department of Neuromorphology, Max-Planck-Institute of Neurobiology, D-82152 Martinsried, Germany.
49
Abstract
Direct injury of the brain is followed by inflammatory responses regulated by cytokines and chemoattractants secreted from resident glia and invading cells of the peripheral immune system. In contrast, after remote lesion of the central nervous system, exemplified here by peripheral transection or crush of the facial and hypoglossal nerve, the locally observed inflammatory activation is most likely triggered by the damaged cells themselves, that is, the injured neurons. The authors investigated the expression of the chemoattractants monocyte chemoattractant protein MCP-1, regulated upon activation, normal T-cell expressed and secreted (RANTES), and interferon-gamma inducible protein IP10 after peripheral nerve lesion of the facial and hypoglossal nuclei. In situ hybridization and immunohistochemistry revealed an induction of neuronal MCP-1 expression within 6 hours postoperation, reaching a peak at 3 days and remaining up-regulated for up to 6 weeks. MCP-1 expression was almost exclusively confined to neurons but was also present on a few scattered glial cells. The authors found no alterations in the level of expression and cellular distribution of RANTES or IP10, which were both confined to neurons. Protein expression of the MCP-1 receptor CCR2 did not change. MCP-1, expressed by astrocytes and activated microglia, has been shown to be crucial for monocytic or T-cell chemoattraction, or both. Accordingly, expression of MCP-1 by neurons and of its corresponding receptor in microglia suggests that this chemokine is involved in neuron and microglia interaction.
Affiliation(s)
- A Flügel
- Department of Neuroimmunology, Max-Planck-Institute of Neurobiology, Martinsried, Germany
50
Hager G, Formanek M, Gedlicka C, Thurnher D, Knerer B, Kornfehl J. 1,25(OH)2 vitamin D3 induces elevated expression of the cell cycle-regulating genes P21 and P27 in squamous carcinoma cell lines of the head and neck. Acta Otolaryngol 2001; 121:103-9. [PMID: 11270487] [DOI: 10.1080/000164801300006353]
Abstract
The biologically active form of vitamin D3, 1,25-dihydroxyvitamin D3 [1,25(OH)2D3], inhibits proliferation and induces differentiation for various malignant cells, including squamous cell carcinoma cell lines of the head and neck (SCCHN). These effects are due to an arrest of cells in the G0/G1 phase of the cell cycle and are predominantly mediated by the vitamin D receptor. To further explore the molecular mechanisms of the antiproliferative activity in SCCHN we studied the influence of 1,25(OH)2D3 on the expression of the G1 phase-regulating proteins cyclin D1, p21 and p27. Furthermore, as a direct target of G1 protein complexes, we investigated the phosphorylation status of the retinoblastoma protein (pRb). Synchronized cells of 2 SCCHN cell lines [JPPA (laryngeal carcinoma) and SCC 9 (tongue carcinoma)] and human immortalized keratinocytes (HaCaT) were cultured for 96 h in the presence or absence (ethanol as control) of 1,25(OH)2D3 (10⁻⁷ M). At various time intervals the cell cycle status was detected by fluorescence-activated cell sorting (FACS) analysis and in parallel the expression of cell cycle-regulating proteins was determined at the protein and mRNA levels. In all cell lines tested 1,25(OH)2D3 caused an arrest of cells in the G0/G1 phase of the cell cycle and markedly induced the expression of the inhibitors p21 and p27. No influence was detectable on the expression of cyclin D1. Induction of p21 and p27 mRNA revealed transcriptional regulation by the vitamin D receptor. Simultaneously, hyperphosphorylated pRb was transformed to the hypophosphorylated form. Our results demonstrate that the biologically active form of vitamin D3 directly regulates the expression of p21 and p27, inducing a G0/G1 phase arrest: one mechanism by which 1,25(OH)2D3 controls cell proliferation in SCCHN.
Affiliation(s)
- G Hager
- Department of Oto-Rhino-Laryngology, Head and Neck Surgery, University of Vienna, Austria