1
Prasad V, Arashpour M. ShARP-WasteSeg: A shape-aware approach to real-time segmentation of recyclables from cluttered construction and demolition waste. Waste Management 2025; 195:231-239. [PMID: 39929039; DOI: 10.1016/j.wasman.2025.02.006]
Abstract
Instance segmentation is a fundamental computer vision task that facilitates robotic sorting by localizing object instances. The task becomes particularly challenging for Construction and Demolition Waste (CDW), as CDW objects often exhibit complex, non-uniform shapes and are frequently overlapped or occluded due to clutter. Current waste segmentation benchmarks, which rely on fully convolutional networks for pixel-wise classification, overlook crucial shape and boundary information; shape information must be used to guide mask prediction in order to improve waste segmentation accuracy. In response, this paper introduces ShARP-WasteSeg, a Shape-Aware Real-Time Precise Waste Segmentation framework. This conceptually straightforward approach jointly learns object masks and boundaries within a single network, resulting in sharper mask predictions for complex recyclables despite clutter. ShARP-WasteSeg enhances the segmentation process by extracting boundary features from depth maps, which are rich in shape and location information. These features complement RGB boundary features and guide the final mask predictions through feature fusion. Moreover, the framework leverages cross-stage partial networks to optimize feature extraction, permitting real-time application of the multi-modal approach. Tested on a challenging CDW dataset representing real conditions, ShARP-WasteSeg improved mask Average Precision (AP) by 7.91% and the boundary-sensitive Boundary AP by 11.44%, demonstrating the effectiveness of the proposed shape-aware approach in increasing the boundary quality of predicted masks for cluttered CDW recyclables.
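To make the fusion idea concrete, here is a minimal PyTorch sketch of extracting boundary cues from a depth map and fusing them with RGB features to guide mask prediction; the module name, Sobel-based edge extraction, and channel widths are illustrative assumptions, not the authors' published architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DepthBoundaryFusion(nn.Module):
        def __init__(self, rgb_channels=256):
            super().__init__()
            # Fixed Sobel filters extract boundary responses from the depth map.
            sobel = torch.tensor([[[[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]],
                                  [[[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]]]])
            self.register_buffer("sobel", sobel)  # shape (2, 1, 3, 3)
            self.depth_proj = nn.Conv2d(2, rgb_channels, kernel_size=1)
            self.fuse = nn.Sequential(
                nn.Conv2d(rgb_channels * 2, rgb_channels, 3, padding=1),
                nn.BatchNorm2d(rgb_channels),
                nn.ReLU(inplace=True))

        def forward(self, rgb_feat, depth):
            # depth: (B, 1, H, W); bring its edge map to the feature resolution.
            edges = F.conv2d(depth, self.sobel, padding=1)
            edges = F.interpolate(edges, size=rgb_feat.shape[-2:],
                                  mode="bilinear", align_corners=False)
            return self.fuse(torch.cat([rgb_feat, self.depth_proj(edges)], dim=1))

    fused = DepthBoundaryFusion()(torch.randn(2, 256, 64, 64),
                                  torch.randn(2, 1, 256, 256))  # (2, 256, 64, 64)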
Affiliation(s)
- Vineet Prasad
- Department of Civil Engineering, Monash University, Melbourne, VIC 3800, Australia.
- Mehrdad Arashpour
- Department of Civil Engineering, Monash University, Melbourne, VIC 3800, Australia.

2
Zhang S, Yuan Z, Zhou X, Wang H, Chen B, Wang Y. VENet: Variational energy network for gland segmentation of pathological images and early gastric cancer diagnosis of whole slide images. Computer Methods and Programs in Biomedicine 2024; 250:108178. [PMID: 38652995; DOI: 10.1016/j.cmpb.2024.108178]
Abstract
BACKGROUND AND OBJECTIVE Gland segmentation of pathological images is an essential but challenging step for adenocarcinoma diagnosis. Although deep learning methods have recently made tremendous progress in gland segmentation, they still give unsatisfactory boundary and region segmentation results for adjacent glands. Glandular appearance varies widely, and the statistical distributions of training and test sets are often inconsistent; as a result, networks generalize poorly to test data, which complicates gland segmentation and early cancer diagnosis. METHODS To address these problems, we propose a Variational Energy Network named VENet with a traditional variational energy Lv loss for gland segmentation of pathological images and early gastric cancer detection in whole slide images (WSIs). It effectively integrates a variational mathematical model with the data adaptability of deep learning methods to balance boundary and region segmentation. Furthermore, it can effectively segment and classify glands in large WSIs using reliable nucleus width and nucleus-to-cytoplasm ratio features. RESULTS VENet was evaluated on the 2015 MICCAI Gland Segmentation challenge (GlaS) dataset, the Colorectal Adenocarcinoma Glands (CRAG) dataset, and a self-collected Nanfang Hospital dataset. Compared with state-of-the-art methods, our method achieved excellent performance on GlaS Test A (object Dice 0.9562, object F1 0.9271, object Hausdorff distance 73.13), GlaS Test B (object Dice 0.9495, object F1 0.9560, object Hausdorff distance 59.63), and CRAG (object Dice 0.9508, object F1 0.9294, object Hausdorff distance 28.01). On the Nanfang Hospital dataset, our method achieved a kappa of 0.78, an accuracy of 0.9, a sensitivity of 0.98, and a specificity of 0.80 on the classification of 69 test WSIs. CONCLUSIONS The experimental results show that the proposed model accurately predicts boundaries and outperforms state-of-the-art methods. It can be applied to the early diagnosis of gastric cancer by detecting regions of high-grade gastric intraepithelial neoplasia in WSIs, assisting pathologists in analyzing large WSIs and making accurate diagnostic decisions.
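As a rough illustration of how a variational energy can act as a training loss, the sketch below implements a generic active-contour-style energy with a boundary-length term and inside/outside region terms; it is an assumed stand-in for the paper's Lv loss, whose exact form is not reproduced here.

    import torch

    def variational_energy_loss(pred, image, mu=1.0, lam=1.0):
        # pred: (B, 1, H, W) soft mask in [0, 1]; image: (B, 1, H, W) intensities.
        du_h = torch.abs(pred[:, :, 1:, :] - pred[:, :, :-1, :])
        du_w = torch.abs(pred[:, :, :, 1:] - pred[:, :, :, :-1])
        length = du_h.mean() + du_w.mean()                     # boundary-length term
        c_in = (pred * image).sum() / pred.sum().clamp(min=1e-6)
        c_out = ((1 - pred) * image).sum() / (1 - pred).sum().clamp(min=1e-6)
        region = (pred * (image - c_in) ** 2
                  + (1 - pred) * (image - c_out) ** 2).mean()  # region terms
        return mu * length + lam * region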
Affiliation(s)
- Shuchang Zhang
- Department of Mathematics, National University of Defense Technology, Changsha, China.
- Ziyang Yuan
- Academy of Military Sciences of the People's Liberation Army, Beijing, China.
- Xianchen Zhou
- Department of Mathematics, National University of Defense Technology, Changsha, China.
- Hongxia Wang
- Department of Mathematics, National University of Defense Technology, Changsha, China.
- Bo Chen
- Suzhou Research Center, Institute of Automation, Chinese Academy of Sciences, Suzhou, China.
- Yadong Wang
- Department of Laboratory Pathology, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou, China.

3
Liu M, Han Y, Wang J, Wang C, Wang Y, Meijering E. LSKANet: Long Strip Kernel Attention Network for Robotic Surgical Scene Segmentation. IEEE Transactions on Medical Imaging 2024; 43:1308-1322. [PMID: 38015689; DOI: 10.1109/tmi.2023.3335406]
Abstract
Surgical scene segmentation is a critical task in robot-assisted surgery. However, the complexity of the surgical scene, which mainly involves local feature similarity (e.g., between different anatomical tissues), complex intraoperative artifacts, and indistinguishable boundaries, poses significant challenges to accurate segmentation. To tackle these problems, we propose the Long Strip Kernel Attention network (LSKANet), which includes two well-designed modules, the Dual-block Large Kernel Attention module (DLKA) and the Multiscale Affinity Feature Fusion module (MAFF), to achieve precise segmentation of surgical images. Specifically, by introducing strip convolutions with different topologies (cascaded and parallel) in two blocks and a large kernel design, DLKA can make full use of region- and strip-like surgical features and extract both visual and structural information to reduce the false segmentation caused by local feature similarity. In MAFF, affinity matrices calculated from multiscale feature maps are applied as feature fusion weights, which helps address interference from artifacts by suppressing the activations of irrelevant regions. In addition, a hybrid loss with a Boundary Guided Head (BGH) is proposed to help the network segment indistinguishable boundaries effectively. We evaluate the proposed LSKANet on three datasets with different surgical scenes. The experimental results show that our method achieves new state-of-the-art results on all three datasets, with improvements of 2.6%, 1.4%, and 3.4% mIoU, respectively. Furthermore, our method is compatible with different backbones and can significantly increase their segmentation accuracy. Code is available at https://github.com/YubinHan73/LSKANet.
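The sketch below illustrates the cascaded strip-convolution idea in PyTorch: depthwise 1 x k and k x 1 convolutions approximate a large k x k kernel cheaply, and the response gates the input features. The class name and kernel size are assumptions; this is not the published DLKA module.

    import torch
    import torch.nn as nn

    class StripKernelAttention(nn.Module):
        def __init__(self, channels, k=11):
            super().__init__()
            p = k // 2
            # Depthwise strip convolutions: 1 x k then k x 1 (cascaded topology).
            self.strip_h = nn.Conv2d(channels, channels, (1, k), padding=(0, p),
                                     groups=channels)
            self.strip_v = nn.Conv2d(channels, channels, (k, 1), padding=(p, 0),
                                     groups=channels)
            self.proj = nn.Conv2d(channels, channels, 1)

        def forward(self, x):
            attn = self.proj(self.strip_v(self.strip_h(x)))  # large-kernel response
            return x * torch.sigmoid(attn)                   # gate the input features

    x = torch.randn(1, 64, 32, 32)
    print(StripKernelAttention(64)(x).shape)  # torch.Size([1, 64, 32, 32])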

4
Du H, Wang J, Liu M, Wang Y, Meijering E. SwinPA-Net: Swin Transformer-Based Multiscale Feature Pyramid Aggregation Network for Medical Image Segmentation. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:5355-5366. [PMID: 36121961; DOI: 10.1109/tnnls.2022.3204090]
Abstract
The precise segmentation of medical images is one of the key challenges in pathology research and clinical practice. However, many medical image segmentation tasks suffer from large differences between lesion types and from similarity in shape and color between lesions and surrounding tissues, which severely limits segmentation accuracy. In this article, a novel method called the Swin Pyramid Aggregation network (SwinPA-Net) is proposed, combining two designed modules with the Swin Transformer to learn more powerful and robust features. The two modules, the dense multiplicative connection (DMC) module and the local pyramid attention (LPA) module, aggregate the multiscale context information of medical images. The DMC module cascades multiscale semantic feature information through dense multiplicative feature fusion, which minimizes the interference of shallow background noise, improves feature expression, and addresses the large variation in lesion size and type. The LPA module guides the network to focus on the region of interest by merging global and local attention, which helps distinguish lesions from visually similar surrounding tissue. The proposed network is evaluated on two public benchmark datasets for polyp segmentation and skin lesion segmentation, as well as a private clinical dataset for laparoscopic image segmentation. Compared with existing state-of-the-art (SOTA) methods, SwinPA-Net achieves state-of-the-art performance, outperforming the second-best method in mean Dice score by 1.68%, 0.8%, and 1.2% on the three tasks, respectively.
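A minimal sketch of multiplicative multi-scale fusion in the spirit of the DMC module: deeper, more semantic maps are upsampled and multiplied into shallower ones so that activations survive only where scales agree, damping shallow background noise. Channel widths, depth count, and the class name are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiplicativeFusion(nn.Module):
        def __init__(self, channels=64, num_scales=3):
            super().__init__()
            self.convs = nn.ModuleList(
                [nn.Conv2d(channels, channels, 3, padding=1)
                 for _ in range(num_scales - 1)])

        def forward(self, feats):
            # feats: list of (B, C, H_i, W_i), ordered shallow (large) -> deep (small).
            fused = feats[-1]
            for conv, f in zip(self.convs, reversed(feats[:-1])):
                up = F.interpolate(fused, size=f.shape[-2:], mode="bilinear",
                                   align_corners=False)
                fused = conv(f * up)  # multiplicative gating, then refinement
            return fused

    feats = [torch.randn(1, 64, 64, 64), torch.randn(1, 64, 32, 32),
             torch.randn(1, 64, 16, 16)]
    print(MultiplicativeFusion()(feats).shape)  # torch.Size([1, 64, 64, 64])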

5
Li S, Shi S, Fan Z, He X, Zhang N. Deep information-guided feature refinement network for colorectal gland segmentation. International Journal of Computer Assisted Radiology and Surgery 2023; 18:2319-2328. [PMID: 36934367; DOI: 10.1007/s11548-023-02857-7]
Abstract
PURPOSE Reliable quantification of colorectal histopathological images depends on precise gland segmentation, which is challenging: glandular morphology varies widely across histological grades, malignant glands and non-gland tissues can be too similar to distinguish, and tightly connected glands are easily segmented incorrectly as a single gland. METHODS A deep information-guided feature refinement network is proposed to improve gland segmentation. Specifically, the backbone deepens the network structure to obtain effective features while maximizing retained information, and a Multi-Scale Fusion module is proposed to increase the receptive field. In addition, to segment dense glands individually, a Multi-Scale Edge-Refined module is designed to strengthen gland boundaries. RESULTS Comparative experiments against eight recently proposed deep learning methods demonstrate that the proposed network has better overall performance and is particularly competitive on Test B. The F1 scores on Test A and Test B are 0.917 and 0.876, respectively; the object-level Dice scores are 0.921 and 0.884; and the object-level Hausdorff distances are 43.428 and 87.132. CONCLUSION The proposed colorectal gland segmentation network can effectively extract features with high representational ability and enhance edge features while retaining detail, dramatically improving segmentation of malignant glands and yielding better results on multi-scale and tightly connected glands.
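The sketch below shows one way a multi-scale edge-refinement step could look: a fixed depthwise Laplacian filter applied at several dilation rates extracts edge responses that are added back to sharpen boundaries. It is an illustrative assumption, not the paper's Multi-Scale Edge-Refined module.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MultiScaleEdgeRefine(nn.Module):
        def __init__(self, channels, rates=(1, 2, 3)):
            super().__init__()
            # One fixed depthwise Laplacian kernel, applied at several dilations.
            lap = torch.tensor([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])
            self.register_buffer("lap",
                                 lap.view(1, 1, 3, 3).repeat(channels, 1, 1, 1))
            self.rates = rates
            self.channels = channels
            self.merge = nn.Conv2d(channels * len(rates), channels, 1)

        def forward(self, x):
            edges = [F.conv2d(x, self.lap, padding=r, dilation=r,
                              groups=self.channels) for r in self.rates]
            return x + self.merge(torch.cat(edges, dim=1))  # sharpen boundaries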
Affiliation(s)
- Sheng Li
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China.
- Shuling Shi
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China.
- Zhenbang Fan
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China.
- Xiongxiong He
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China.
- Ni Zhang
- College of Information Engineering, Zhejiang University of Technology, Hangzhou, 310014, Zhejiang, China.

6
Shen W, Wang Y, Liu M, Wang J, Ding R, Zhang Z, Meijering E. Branch Aggregation Attention Network for Robotic Surgical Instrument Segmentation. IEEE Transactions on Medical Imaging 2023; 42:3408-3419. [PMID: 37342952; DOI: 10.1109/tmi.2023.3288127]
Abstract
Surgical instrument segmentation is of great significance to robot-assisted surgery, but noise caused by reflection, water mist, and motion blur during surgery, together with the varied forms of surgical instruments, greatly increases the difficulty of precise segmentation. A novel method called the Branch Aggregation Attention network (BAANet) is proposed to address these challenges; it adopts a lightweight encoder and two designed modules, the Branch Balance Aggregation module (BBA) and the Block Attention Fusion module (BAF), for efficient feature localization and denoising. In the BBA module, features from multiple branches are balanced and optimized through a combination of addition and multiplication to complement strengths and effectively suppress noise. Furthermore, to fully integrate contextual information and capture the region of interest, the BAF module is proposed in the decoder; it receives adjacent feature maps from the BBA module and localizes the surgical instruments from both global and local perspectives using a dual-branch attention mechanism. Experimental results show that the proposed method is lightweight while outperforming the second-best state-of-the-art method by 4.03%, 1.53%, and 1.34% in mIoU on three challenging surgical instrument datasets, respectively. Code is available at https://github.com/SWT-1014/BAANet.
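A minimal sketch of the addition-plus-multiplication aggregation idea: the sum preserves weak but valid responses from either branch, while the product is high only where the branches agree, suppressing noise. The real BBA module is more elaborate; the class name and fusion head are assumptions.

    import torch
    import torch.nn as nn

    class BranchBalanceAggregation(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.fuse = nn.Sequential(
                nn.Conv2d(channels * 2, channels, 1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True))

        def forward(self, a, b):
            added = a + b  # addition preserves weak but valid responses
            gated = a * b  # multiplication is high only where branches agree
            return self.fuse(torch.cat([added, gated], dim=1))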

7
Das R, Bose S, Chowdhury RS, Maulik U. Dense Dilated Multi-Scale Supervised Attention-Guided Network for histopathology image segmentation. Computers in Biology and Medicine 2023; 163:107182. [PMID: 37379615; DOI: 10.1016/j.compbiomed.2023.107182]
Abstract
Over the last couple of decades, the introduction and proliferation of whole-slide scanners has led to increasing interest in digital pathology research. Although manual analysis of histopathological images is still the gold standard, the process is often tedious and time-consuming and suffers from intra- and inter-observer variability. Separating structures or grading morphological changes can be difficult due to the architectural variability of these images. Deep learning techniques have shown great potential in histopathology image segmentation, drastically reducing the time needed for downstream analysis and supporting accurate diagnosis; however, few algorithms have seen clinical implementation. In this paper, we propose a new deep learning model, the Dense Dilated Multi-Scale Supervised Attention-Guided (D2MSA) network, for histopathology image segmentation that makes use of deep supervision coupled with a hierarchical system of novel attention mechanisms. The proposed model surpasses state-of-the-art performance while using similar computational resources. Its performance has been evaluated on gland segmentation and nuclei instance segmentation, both clinically relevant tasks for assessing the state and progression of malignancy, using histopathology image datasets for three different types of cancer. We have also performed extensive ablation tests and hyperparameter tuning to ensure the validity and reproducibility of the model's performance. The proposed model is available at www.github.com/shirshabose/D2MSA-Net.
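As a concrete illustration of deep supervision, the sketch below attaches auxiliary 1 x 1 heads to several decoder depths and sums their upsampled losses with decaying weights; the weights, depths, and channel sizes are assumptions, not the paper's configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def deeply_supervised_loss(decoder_feats, target, heads,
                               weights=(1.0, 0.5, 0.25)):
        # decoder_feats: list of (B, C_i, H_i, W_i); target: (B, 1, H, W) in {0, 1}.
        total = 0.0
        for feat, head, w in zip(decoder_feats, heads, weights):
            logits = F.interpolate(head(feat), size=target.shape[-2:],
                                   mode="bilinear", align_corners=False)
            total = total + w * F.binary_cross_entropy_with_logits(logits, target)
        return total

    heads = nn.ModuleList([nn.Conv2d(c, 1, 1) for c in (64, 128, 256)])
    feats = [torch.randn(2, 64, 64, 64), torch.randn(2, 128, 32, 32),
             torch.randn(2, 256, 16, 16)]
    target = torch.randint(0, 2, (2, 1, 128, 128)).float()
    print(deeply_supervised_loss(feats, target, heads))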
Affiliation(s)
- Rangan Das
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
- Shirsha Bose
- Department of Informatics, Technical University of Munich, Munich, Bavaria 85748, Germany.
- Ritesh Sur Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.
- Ujjwal Maulik
- Department of Computer Science Engineering, Jadavpur University, Kolkata, 700032, West Bengal, India.

8
Dabass M, Dabass J. An Atrous Convolved Hybrid Seg-Net Model with residual and attention mechanism for gland detection and segmentation in histopathological images. Computers in Biology and Medicine 2023; 155:106690. [PMID: 36827788; DOI: 10.1016/j.compbiomed.2023.106690]
Abstract
PURPOSE A clinically compatible computerized segmentation model is presented that aims to supply clinically informative gland details by capturing small and intricate variations in medical images, providing a second opinion, and reducing human error. APPROACH The model's enhanced learning capability extracts denser multi-scale gland-specific features, bridges the semantic gap during concatenation, and effectively handles resolution degradation and vanishing-gradient problems. It has three proposed modules: an Atrous Convolved Residual Learning Module in the encoder and decoder, a Residual Attention Module in the skip-connection paths, and an Atrous Convolved Transitional Module as the transitional and output layer. Pre-processing techniques such as patch sampling, stain normalization, and augmentation are employed to develop its generalization capability. To verify robustness and invariance to digital variability, extensive experiments are carried out on three public datasets, GlaS (Gland Segmentation Challenge), CRAG (Colorectal Adenocarcinoma Gland), and LC-25000 (Lung Colon-25000), and a private HosC (Hospital Colon) dataset. RESULTS The presented model achieved competitive gland detection F1-scores of 0.957 (GlaS Test A), 0.926 (GlaS Test B), 0.935 (CRAG), 0.922 (LC-25000), and 0.963 (HosC), and gland segmentation Object-Dice indices of 0.961 (GlaS Test A), 0.933 (GlaS Test B), 0.961 (CRAG), 0.940 (LC-25000), and 0.929 (HosC), with Object-Hausdorff distances of 21.77, 69.74, 87.63, 95.85, and 83.29, respectively. In addition, validation scores of 0.945 (GlaS Test A), 0.937 (GlaS Test B), 0.934 (CRAG), 0.911 (LC-25000), and 0.928 (HosC) supplied by proficient pathologists corroborate the applicability and appropriateness of the final segmentation results for clinical-level assistance. CONCLUSION The proposed system will assist pathologists in devising precise diagnoses by offering a referential perspective during morphology assessment of colon histopathology images.
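A minimal sketch of an atrous residual block of the general kind described: parallel dilated convolutions widen the receptive field at multiple rates and a residual shortcut eases gradient flow. The rates, widths, and class name are illustrative, not the paper's exact module.

    import torch
    import torch.nn as nn

    class AtrousResidualBlock(nn.Module):
        def __init__(self, channels, rates=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                [nn.Conv2d(channels, channels, 3, padding=r, dilation=r)
                 for r in rates])
            self.merge = nn.Conv2d(channels * len(rates), channels, 1)
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            multi = torch.cat([b(x) for b in self.branches], dim=1)
            return self.act(x + self.merge(multi))  # residual shortcut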
Affiliation(s)
- Manju Dabass
- EECE Department, The NorthCap University, Gurugram, India.
- Jyoti Dabass
- DBT Centre of Excellence Biopharmaceutical Technology, IIT, Delhi, India.

9
Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artificial Intelligence Review 2022. [DOI: 10.1007/s10462-022-10372-5]

10
Gao E, Jiang H, Zhou Z, Yang C, Chen M, Zhu W, Shi F, Chen X, Zheng J, Bian Y, Xiang D. Automatic multi-tissue segmentation in pancreatic pathological images with selected multi-scale attention network. Computers in Biology and Medicine 2022; 151:106228. [PMID: 36306579; DOI: 10.1016/j.compbiomed.2022.106228]
Abstract
The morphology of tissues in pathological images is routinely used by pathologists to assess the degree of malignancy of pancreatic ductal adenocarcinoma (PDAC). Automatic and accurate segmentation of tumor cells and their surrounding tissues is often a crucial step in obtaining reliable morphological statistics. Nonetheless, it remains a challenge due to the great variation in appearance and morphology. In this paper, a selected multi-scale attention network (SMANet) is proposed to segment tumor cells, blood vessels, nerves, islets, and ducts in pancreatic pathological images. The selected multi-scale attention module is proposed to enhance effective information, supplement useful information, and suppress redundant information at different scales from the encoder and decoder. It consists of a selection unit (SU) module and a multi-scale attention (MA) module. The selection unit module effectively filters features. The multi-scale attention module enhances effective information through spatial and channel attention and combines features from different levels to supplement useful information. This helps the network learn information at different receptive fields, improving the segmentation of tumor cells, blood vessels, and nerves. An original-feature fusion unit is also proposed to supplement original image information and reduce the under-segmentation of small tissues such as islets and ducts. The proposed method outperforms state-of-the-art deep learning algorithms on our PDAC pathological images and achieves competitive results on the GlaS challenge dataset, reaching an mDice of 0.769 and an mIoU of 0.665 on our PDAC dataset.
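The sketch below is a simplified stand-in for the SU and MA modules: a sigmoid selection gate filters features, then squeeze-excite-style channel attention and a convolutional spatial attention reweight what remains. All sizes and names are assumptions rather than the published design.

    import torch
    import torch.nn as nn

    class SelectedAttention(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.select = nn.Conv2d(channels, channels, 1)        # selection gate
            self.channel = nn.Sequential(                          # channel attention
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid())
            self.spatial = nn.Sequential(                          # spatial attention
                nn.Conv2d(channels, 1, 7, padding=3), nn.Sigmoid())

        def forward(self, x):
            x = x * torch.sigmoid(self.select(x))  # filter features
            x = x * self.channel(x)                # reweight channels
            return x * self.spatial(x)             # reweight locations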
Affiliation(s)
- Enting Gao
- School of Electronic and Information Engineering, Suzhou University of Science and Technology, Suzhou, China.
- Hui Jiang
- Department of Pathology, Changhai Hospital, The Navy Military Medical University, Shanghai, China.
- Zhibang Zhou
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.
- Changxing Yang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.
- Muyang Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.
- Weifang Zhu
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.
- Fei Shi
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.
- Xinjian Chen
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.
- Jian Zheng
- Department of Medical Imaging, Suzhou Institute of Biomedical Engineering and Technology, Chinese Academy of Sciences, Jiangsu 215163, China.
- Yun Bian
- Department of Radiology, Changhai Hospital, The Navy Military Medical University, Shanghai, China.
- Dehui Xiang
- School of Electronic and Information Engineering, Soochow University, Jiangsu 215006, China.

11
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Applied Soft Computing 2022. [DOI: 10.1016/j.asoc.2022.109279]

12
Li Y, Xue Y, Li L, Zhang X, Qian X. Domain Adaptive Box-Supervised Instance Segmentation Network for Mitosis Detection. IEEE Transactions on Medical Imaging 2022; 41:2469-2485. [PMID: 35389862; DOI: 10.1109/tmi.2022.3165518]
Abstract
The number of mitotic cells present in histopathological slides is an important predictor of tumor proliferation in the diagnosis of breast cancer. However, current approaches can hardly perform precise pixel-level prediction on mitosis datasets with only weak labels (i.e., only the centroid locations of mitotic cells), and they take no account of the large domain gap across histopathological slides from different pathology laboratories. In this work, we propose a Domain adaptive Box-supervised Instance segmentation Network (DBIN) to address these issues. In DBIN, we propose a high-performance Box-supervised Instance-Aware (BIA) head whose core idea is the redesign of three box-supervised mask loss terms. Furthermore, we add a Pseudo-Mask-supervised Semantic (PMS) head to enrich the characteristics extracted from the underlying feature maps. Besides, we align pixel-level feature distributions between source and target domains with a Cross-Domain Adaptive Module (CDAM), so that a detector learned in one laboratory works well on unlabeled data from another. The proposed method achieves state-of-the-art performance across four mainstream datasets. A series of analyses and experiments shows that the proposed BIA and PMS heads accomplish pixel-wise mitosis localization under weak supervision, and that CDAM boosts the generalization ability of the model.
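For intuition about box-supervised mask losses in general, the sketch below implements one common projection term (in the spirit of BoxInst): the predicted soft mask, max-projected onto each axis, is matched to the projections of the ground-truth box via a Dice loss. The paper redesigns three such terms; their exact forms are not reproduced here.

    import torch

    def dice_loss(p, t, eps=1e-6):
        inter = (p * t).sum(dim=-1)
        return 1 - (2 * inter + eps) / (p.sum(dim=-1) + t.sum(dim=-1) + eps)

    def box_projection_loss(mask, box_mask):
        # mask: (B, H, W) soft prediction; box_mask: (B, H, W) binary box fill.
        loss_x = dice_loss(mask.max(dim=1).values, box_mask.max(dim=1).values)
        loss_y = dice_loss(mask.max(dim=2).values, box_mask.max(dim=2).values)
        return (loss_x + loss_y).mean()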

13
Wu Y, Cheng M, Huang S, Pei Z, Zuo Y, Liu J, Yang K, Zhu Q, Zhang J, Hong H, Zhang D, Huang K, Cheng L, Shao W. Recent Advances of Deep Learning for Computational Histopathology: Principles and Applications. Cancers (Basel) 2022; 14:1199. [PMID: 35267505; PMCID: PMC8909166; DOI: 10.3390/cancers14051199]
Abstract
With the remarkable success of digital histopathology, we have witnessed a rapid expansion in the use of computational methods for the analysis of digital pathology and biopsy image patches. However, the unprecedented scale and heterogeneous patterns of histopathological images have presented critical computational bottlenecks requiring new computational histopathology tools. Recently, deep learning technology has been extremely successful in the field of computer vision, which has also boosted considerable interest in digital pathology applications. Deep learning and its extensions have opened several avenues to tackle many challenging histopathological image analysis problems, including color normalization, image segmentation, and the diagnosis/prognosis of human cancers. In this paper, we provide a comprehensive, up-to-date review of deep learning methods for digital H&E-stained pathology image analysis. Specifically, we first describe recent literature that uses deep learning for color normalization, one essential research direction for H&E-stained histopathological image analysis. Following the discussion of color normalization, we review applications of deep learning to various H&E-stained image analysis tasks such as nuclei and tissue segmentation. We also summarize several key clinical studies that use deep learning for the diagnosis and prognosis of human cancers from H&E-stained histopathological images. Finally, online resources and open research problems in pathological image analysis are provided in this review for the convenience of researchers interested in this exciting field.
Affiliation(s)
- Yawen Wu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- Michael Cheng
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA.
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA.
- Shuo Huang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- Zongxiang Pei
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- Yingli Zuo
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- Jianxin Liu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- Kai Yang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- Qi Zhu
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- Jie Zhang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA.
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA.
- Honghai Hong
- Department of Clinical Laboratory, The Third Affiliated Hospital of Guangzhou Medical University, Guangzhou 510006, China.
- Daoqiang Zhang
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.
- Kun Huang
- Department of Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA.
- Regenstrief Institute, Indiana University, Indianapolis, IN 46202, USA.
- Liang Cheng
- Departments of Pathology and Laboratory Medicine, Indiana University School of Medicine, Indianapolis, IN 46202, USA.
- Wei Shao
- MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.

14
Xie Y, Zhang J, Liao Z, Verjans J, Shen C, Xia Y. Intra- and Inter-Pair Consistency for Semi-Supervised Gland Segmentation. IEEE Transactions on Image Processing 2022; 31:894-905. [PMID: 34951847; DOI: 10.1109/tip.2021.3136716]
Abstract
Accurate gland segmentation in histology tissue images is a critical but challenging task. Although deep models have demonstrated superior performance in medical image segmentation, they commonly require a large amount of annotated data, which is hard to obtain due to the extensive labor costs and expertise required. In this paper, we propose an intra- and inter-pair consistency-based semi-supervised (I2CS) model that can be trained on both labeled and unlabeled histology images for gland segmentation. Considering that each image contains glands, and hence different images could potentially share consistent semantics in the feature space, we introduce a novel intra- and inter-pair consistency module to exploit such consistency for learning from unlabeled data. It first characterizes the pixel-level relation between a pair of images in the feature space to create an attention map that highlights regions with the same semantics on different images. Then, it imposes a consistency constraint on the attention maps obtained from multiple image pairs, filtering out low-confidence attention regions to generate refined attention maps that are merged with the original features to improve their representation ability. In addition, we design an object-level loss to address the issues caused by touching glands. We evaluated our model against several recent gland segmentation methods and three typical semi-supervised methods on the GlaS and CRAG datasets. Our results not only demonstrate the effectiveness of the proposed consistency module and Obj-Dice loss, but also indicate that the proposed I2CS model achieves state-of-the-art gland segmentation performance on both benchmarks.
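A simplified sketch of the pairwise idea: pixel-level affinity between the feature maps of two images yields an attention map over shared semantics, which is used to refine the features. This is an illustrative stand-in for the intra- and inter-pair consistency module, not the published code.

    import torch
    import torch.nn.functional as F

    def pairwise_attention(fa, fb):
        # fa, fb: (B, C, H, W) feature maps of an image pair.
        B, C, H, W = fa.shape
        qa = fa.flatten(2).transpose(1, 2)           # (B, HW, C)
        kb = fb.flatten(2)                           # (B, C, HW)
        affinity = torch.bmm(qa, kb) / C ** 0.5      # (B, HW, HW) pixel relations
        attn = F.softmax(affinity, dim=-1)
        # Aggregate fb's features at the positions fa attends to, then reshape.
        refined = torch.bmm(attn, fb.flatten(2).transpose(1, 2))
        return refined.transpose(1, 2).reshape(B, C, H, W) + fa  # merge w/ original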

15
Wang H, Xian M, Vakanski A. TA-Net: Topology-Aware Network for Gland Segmentation. IEEE Winter Conference on Applications of Computer Vision 2022; 2022:3241-3249. [PMID: 35509894; PMCID: PMC9063467; DOI: 10.1109/wacv51458.2022.00330]
Abstract
Gland segmentation is a critical step in quantitatively assessing gland morphology in histopathology image analysis. However, it is challenging to separate densely clustered glands accurately. Existing deep learning-based approaches have attempted to use contour-based techniques to alleviate this issue but achieved only limited success. To address this challenge, we propose a novel topology-aware network (TA-Net) to accurately separate densely clustered and severely deformed glands. The proposed TA-Net has a multitask learning architecture and enhances the generalization of gland segmentation by learning a shared representation from two tasks: instance segmentation and gland topology estimation. The proposed topology loss computes gland topology using gland skeletons and markers and drives the network to generate segmentation results that comply with the true gland topology. We validate the proposed approach on the GlaS and CRAG datasets using three quantitative metrics: F1-score, object-level Dice coefficient, and object-level Hausdorff distance. Extensive experiments demonstrate that TA-Net achieves state-of-the-art performance on both datasets and outperforms other approaches in the presence of densely clustered glands.
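The sketch below shows one plausible skeleton-based topology penalty: the soft prediction should stay high along the ground-truth gland skeleton, so breaks that would change topology are punished. The paper's loss also uses markers; this captures only the core idea, with scikit-image's skeletonize assumed available.

    import numpy as np
    import torch
    from skimage.morphology import skeletonize

    def topology_loss(pred, gt_mask, eps=1e-6):
        # pred: (B, H, W) soft mask in [0, 1]; gt_mask: (B, H, W) binary numpy array.
        skel = np.stack([skeletonize(m.astype(bool)) for m in gt_mask])
        skel = torch.from_numpy(skel).to(pred.device, pred.dtype)
        # Mean predicted probability along skeleton pixels, turned into a loss.
        recall = (pred * skel).sum() / skel.sum().clamp(min=eps)
        return 1 - recall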

16
Li W, Li J, Polson J, Wang Z, Speier W, Arnold C. High resolution histopathology image generation and segmentation through adversarial training. Medical Image Analysis 2021; 75:102251. [PMID: 34814059; DOI: 10.1016/j.media.2021.102251]
Abstract
Semantic segmentation of histopathology images can be a vital aspect of computer-aided diagnosis, and deep learning models have been effectively applied to this task with varying levels of success. However, their impact has been limited due to the small size of fully annotated datasets. Data augmentation is one avenue to address this limitation. Generative Adversarial Networks (GANs) have shown promise in this respect, but previous work has focused mostly on classification tasks applied to MR and CT images, both of which have lower resolution and scale than histopathology images. There is limited research that applies GANs as a data augmentation approach for large-scale image semantic segmentation, which requires high-quality image-mask pairs. In this work, we propose a multi-scale conditional GAN for high-resolution, large-scale histopathology image generation and segmentation. Our model consists of a pyramid of GAN structures, each responsible for generating and segmenting images at a different scale. Using semantic masks, the generative component of our model is able to synthesize histopathology images that are visually realistic. We demonstrate that these synthesized images along with their masks can be used to boost segmentation performance, especially in the semi-supervised scenario.
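For a concrete picture of mask-conditioned adversarial training at a single scale, the sketch below trains a placeholder generator to synthesize an image from a semantic mask while a discriminator judges (image, mask) pairs; the tiny convnets stand in for the paper's pyramid of GAN structures and are assumptions, not its architecture.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Placeholder generator (mask -> image) and discriminator ((image, mask) -> score).
    G = nn.Sequential(nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(64, 3, 3, padding=1), nn.Tanh())
    D = nn.Sequential(nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                      nn.Conv2d(64, 1, 4, stride=2, padding=1))

    mask = torch.randint(0, 2, (2, 1, 64, 64)).float()
    real = torch.randn(2, 3, 64, 64)

    fake = G(mask)
    d_real = D(torch.cat([real, mask], 1))
    d_fake = D(torch.cat([fake.detach(), mask], 1))
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    g_out = D(torch.cat([fake, mask], 1))
    g_loss = F.binary_cross_entropy_with_logits(g_out, torch.ones_like(g_out))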
Affiliation(s)
- Wenyuan Li
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Electrical and Computer Engineering, UCLA, Los Angeles, USA.
- Jiayun Li
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA.
- Jennifer Polson
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA.
- Zichen Wang
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA.
- William Speier
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA; The Department of Radiological Sciences, UCLA, Los Angeles, USA.
- Corey Arnold
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Electrical and Computer Engineering, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA; The Department of Radiological Sciences, UCLA, Los Angeles, USA; The Department of Pathology & Laboratory Medicine, UCLA, Los Angeles, USA.

17
Yi J, Wu P, Tang H, Liu B, Huang Q, Qu H, Han L, Fan W, Hoeppner DJ, Metaxas DN. Object-Guided Instance Segmentation With Auxiliary Feature Refinement for Biological Images. IEEE Transactions on Medical Imaging 2021; 40:2403-2414. [PMID: 33945472; DOI: 10.1109/tmi.2021.3077285]
Abstract
Instance segmentation is of great importance for many biological applications, such as the study of neural cell interactions, plant phenotyping, and quantitative measurement of how cells react to drug treatment. In this paper, we propose a novel box-based instance segmentation method. Box-based instance segmentation methods capture objects via bounding boxes and then perform individual segmentation within each bounding box region. However, existing methods can hardly differentiate the target from its neighboring objects within the same bounding box region due to their similar textures and low-contrast boundaries. To deal with this problem, we propose an object-guided approach: our method first detects the center points of the objects, from which the bounding box parameters are then predicted. To perform segmentation, an object-guided coarse-to-fine segmentation branch is built alongside the detection branch. The segmentation branch reuses the object features as guidance to separate the target object from neighboring ones within the same bounding box region. To further improve segmentation quality, we design an auxiliary feature refinement module that densely samples and refines point-wise features in the boundary regions. Experimental results on three biological image datasets demonstrate the advantages of our method. The code will be available at https://github.com/yijingru/ObjGuided-Instance-Segmentation.
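As an illustration of the center-point detection step, the sketch below decodes object centers from a predicted heatmap using max-pool non-maximum suppression and top-k selection, in the style of keypoint-based detectors; the function name and thresholds are assumptions, not the paper's implementation.

    import torch
    import torch.nn.functional as F

    def decode_centers(heatmap, k=100, thresh=0.3):
        # heatmap: (B, 1, H, W) with values in [0, 1].
        pooled = F.max_pool2d(heatmap, 3, stride=1, padding=1)
        peaks = (heatmap == pooled).float()            # max-pool NMS
        scores = (heatmap * peaks).flatten(1)
        top, idx = scores.topk(k, dim=1)
        W = heatmap.shape[-1]
        ys = torch.div(idx, W, rounding_mode="floor")  # flat index -> (y, x)
        xs = idx % W
        keep = top > thresh
        return [list(zip(xs[b][keep[b]].tolist(), ys[b][keep[b]].tolist()))
                for b in range(heatmap.shape[0])]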

18
Xie Y, Zhang J, Lu H, Shen C, Xia Y. SESV: Accurate Medical Image Segmentation by Predicting and Correcting Errors. IEEE Transactions on Medical Imaging 2021; 40:286-296. [PMID: 32956049; DOI: 10.1109/tmi.2020.3025308]
Abstract
Medical image segmentation is an essential task in computer-aided diagnosis. Despite their prevalence and success, deep convolutional neural networks (DCNNs) still need improvement to produce segmentation results accurate and robust enough for clinical use. In this paper, we propose a novel and generic framework called Segmentation-Emendation-reSegmentation-Verification (SESV) that improves the accuracy of existing DCNNs in medical image segmentation instead of designing a more accurate segmentation model. Our idea is to predict the segmentation errors produced by an existing model and then correct them. Since predicting segmentation errors is challenging, we design two ways to tolerate mistakes in the error prediction. First, rather than using the predicted segmentation error map to correct the segmentation mask directly, we treat the error map only as a prior indicating where segmentation errors are prone to occur, and concatenate it with the image and segmentation mask as the input of a re-segmentation network. Second, we introduce a verification network to determine whether to accept or reject the refined mask produced by the re-segmentation network on a region-by-region basis. Experimental results on the CRAG, ISIC, and IDRiD datasets suggest that our SESV framework substantially improves the accuracy of DeepLabv3+ and achieves advanced performance in the segmentation of glands, skin lesions, and retinal microaneurysms. Consistent conclusions are drawn when using PSPNet, U-Net, and FPN as the segmentation network, respectively. Our SESV framework is therefore capable of improving the accuracy of different DCNNs on different medical image segmentation tasks.
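The data flow of the framework can be sketched as below: an emendation network predicts where the first mask is likely wrong, and the error map is concatenated with the image and mask (rather than applied directly) as input to a re-segmentation network. The single-convolution networks are placeholders, and the verification stage is omitted.

    import torch
    import torch.nn as nn

    class SESVSketch(nn.Module):
        def __init__(self, in_ch=3):
            super().__init__()
            self.segment = nn.Conv2d(in_ch, 1, 3, padding=1)        # stand-in
            self.emend = nn.Conv2d(in_ch + 1, 1, 3, padding=1)      # error prior
            self.resegment = nn.Conv2d(in_ch + 2, 1, 3, padding=1)  # refinement

        def forward(self, image):
            mask = torch.sigmoid(self.segment(image))
            err = torch.sigmoid(self.emend(torch.cat([image, mask], 1)))
            refined = torch.sigmoid(
                self.resegment(torch.cat([image, mask, err], 1)))
            # A verification net (omitted) would accept/reject `refined` per region.
            return refined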

19
Dabass M, Vashisth S, Vig R. Attention-Guided deep atrous-residual U-Net architecture for automated gland segmentation in colon histopathology images. Informatics in Medicine Unlocked 2021. [DOI: 10.1016/j.imu.2021.100784]

20
Hamamoto R, Suvarna K, Yamada M, Kobayashi K, Shinkai N, Miyake M, Takahashi M, Jinnai S, Shimoyama R, Sakai A, Takasawa K, Bolatkan A, Shozu K, Dozen A, Machino H, Takahashi S, Asada K, Komatsu M, Sese J, Kaneko S. Application of Artificial Intelligence Technology in Oncology: Towards the Establishment of Precision Medicine. Cancers (Basel) 2020; 12:3532. [PMID: 33256107; PMCID: PMC7760590; DOI: 10.3390/cancers12123532]
Abstract
In recent years, advances in artificial intelligence (AI) have led to the rapid clinical implementation of AI-equipped devices in the medical field. More than 60 AI-equipped medical devices have already been approved by the Food and Drug Administration (FDA) in the United States, and the active introduction of AI technology is considered an inevitable trend in the future of medicine. In the field of oncology, clinical applications of medical devices using AI technology are already underway, mainly in radiology, and AI is expected to become an important core technology. In particular, "precision medicine," which selects the most appropriate treatment for each patient based on a vast amount of medical data such as genome information, has become a worldwide trend; AI technology is expected to be used to extract truly useful information from large amounts of medical data and apply it to diagnosis and treatment. In this review, we introduce the history of AI technology and the current state of medical AI, especially in the oncology field, and discuss the possibilities and challenges of AI technology in the medical field.
Affiliation(s)
- Ryuji Hamamoto
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan.
- Kruthi Suvarna
- Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India.
- Masayoshi Yamada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Department of Endoscopy, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Kazuma Kobayashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan.
- Norio Shinkai
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan.
- Mototaka Miyake
- Department of Diagnostic Radiology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Masamichi Takahashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Department of Neurosurgery and Neuro-Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Shunichi Jinnai
- Department of Dermatologic Oncology, National Cancer Center Hospital, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Ryo Shimoyama
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Akira Sakai
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Department of NCC Cancer Science, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan.
- Ken Takasawa
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Amina Bolatkan
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Kanto Shozu
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Ai Dozen
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Hidenori Machino
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Satoshi Takahashi
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Ken Asada
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Masaaki Komatsu
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.
- Jun Sese
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Humanome Lab, 2-4-10 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Syuzo Kaneko
- Division of Molecular Modification and Cancer Biology, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan.
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan.