1. Liu L, Nishikawa H, Zhou J, Taniguchi I, Onoye T. Computer-Vision-Oriented Adaptive Sampling in Compressive Sensing. Sensors (Basel) 2024; 24:4348. PMID: 39001127; PMCID: PMC11244518; DOI: 10.3390/s24134348.
Abstract
Compressive sensing (CS) is recognized for its adeptness at compressing signals, making it a pivotal technology for sensor data acquisition. With the proliferation of image data in Internet of Things (IoT) systems, CS is expected to reduce the transmission cost of signals captured by various sensor devices. However, the quality of CS-reconstructed signals inevitably degrades as the sampling rate decreases, which reduces inference accuracy in downstream computer vision (CV) tasks. This limitation obstructs the real-world application of existing CS techniques, especially for reducing transmission costs in sensor-rich environments. In response, this paper contributes a CV-oriented adaptive CS framework based on saliency detection that enables sensor systems to intelligently prioritize and transmit the most relevant data. Unlike existing CS techniques, the proposal prioritizes the accuracy of reconstructed images for CV tasks rather than visual quality alone, aiming to preserve information critical for CV tasks while optimizing the utilization of sensor data. Experiments on realistic datasets collected by real sensor devices demonstrate superior performance over existing CS sampling techniques on the STL10, Intel, and Imagenette datasets for classification and on KITTI for object detection. Compared with the baseline uniform sampling technique, average classification accuracy improves by up to 26.23%, 11.69%, and 18.25%, respectively, at specific sampling rates. Even at very low sampling rates, the proposal remains robust for classification and detection compared to state-of-the-art CS techniques, ensuring that essential information for CV tasks is retained and improving the efficacy of sensor-based data acquisition systems.
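The framework's core idea, spending more measurements where a saliency detector says the content matters for CV, can be sketched in a few lines of NumPy. Everything here is an illustrative stand-in for the paper's method: 8×8 blocks, a Gaussian measurement matrix, a stubbed saliency score, and pseudo-inverse reconstruction in place of a proper sparse solver or learned decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_block(block, rate):
    """Compressively sample a flattened image block.

    Measurements y = Phi @ x, where Phi is a random Gaussian
    measurement matrix whose row count is set by the sampling rate.
    """
    x = block.ravel().astype(float)
    n = x.size
    m = max(1, int(round(rate * n)))
    phi = rng.normal(size=(m, n)) / np.sqrt(m)
    return phi @ x, phi

def reconstruct(y, phi):
    """Minimum-norm least-squares reconstruction (placeholder for a
    sparse solver such as OMP or a learned decoder)."""
    return np.linalg.pinv(phi) @ y

def adaptive_rate(saliency, low=0.1, high=0.5):
    """Map a block's saliency score in [0, 1] to a sampling rate:
    salient blocks are sampled more densely."""
    return low + (high - low) * float(saliency)

block = rng.random((8, 8))
y, phi = sample_block(block, adaptive_rate(0.9))  # 29 measurements for 64 pixels
x_hat = reconstruct(y, phi)
```

At rate 1.0 the measurement matrix is square and the reconstruction is exact up to numerics; at lower rates the residual error is the quantity the saliency-driven allocation is trying to keep small where it matters.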
Affiliation(s)
- Luyang Liu: Graduate School of Information Science and Technology, Osaka University, Osaka 5650871, Japan
- Hiroki Nishikawa: Graduate School of Science and Engineering, Hosei University, Tokyo 1848584, Japan
- Jinjia Zhou: Graduate School of Information Science and Technology, Osaka University, Osaka 5650871, Japan
- Ittetsu Taniguchi: Graduate School of Information Science and Technology, Osaka University, Osaka 5650871, Japan
- Takao Onoye: Graduate School of Information Science and Technology, Osaka University, Osaka 5650871, Japan
2. Zhang Y, Ye X, Wu W, Luo Y, Chen M, Du Y, Wen Y, Song H, Liu Y, Zhang G, Wang L. Morphological Rule-Constrained Object Detection of Key Structures in Infant Fundus Image. IEEE/ACM Trans Comput Biol Bioinform 2024; 21:1031-1041. PMID: 37018340; DOI: 10.1109/tcbb.2023.3234100.
Abstract
The detection of the optic disc and macula is an essential step in retinopathy of prematurity (ROP) zone segmentation and disease diagnosis. This paper aims to enhance deep-learning-based object detection with domain-specific morphological rules. Based on fundus morphology, we define five rules: number restriction (at most one optic disc and one macula), size restriction (e.g., optic disc width: 1.05 ± 0.13 mm), distance restriction (distance between the optic disc and macula/fovea: 4.4 ± 0.4 mm), angle/slope restriction (the optic disc and macula should lie roughly on the same horizontal line), and position restriction (in OD, the macula is on the left side of the optic disc; vice versa for OS). A case study on 2953 infant fundus images (with 2935 optic disc instances and 2892 macula instances) proves the effectiveness of the proposed method. Without the morphological rules, naïve object detection accuracies for the optic disc and macula are 0.955 and 0.719, respectively. With the proposed method, false-positive ROIs (regions of interest) are ruled out, and the accuracy for the macula rises to 0.811. The IoU (intersection over union) and RCE (relative center error) metrics are also improved.
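The five rules translate directly into a post-filter over candidate detections. The sketch below is hedged: the candidate tuple format, the mm-per-pixel scale, the 3σ tolerance bands, and the convention that image x grows to the right are all illustrative assumptions; only the thresholds come from the abstract.

```python
def filter_candidates(discs, maculas, mm_per_px, eye="OD"):
    """Keep at most one optic disc and one macula that jointly satisfy
    the size, distance, slope and position rules.

    Each candidate is (x_center, y_center, width, score) in pixels.
    """
    # Rule 1 (number): rank candidates so the best-scoring pair wins.
    discs = sorted(discs, key=lambda d: d[3], reverse=True)
    maculas = sorted(maculas, key=lambda m: m[3], reverse=True)
    # Rule 2 (size): optic disc width 1.05 +/- 0.13 mm (3-sigma band).
    discs = [d for d in discs if abs(d[2] * mm_per_px - 1.05) <= 3 * 0.13]
    for d in discs:
        for m in maculas:
            dx = (m[0] - d[0]) * mm_per_px
            dy = (m[1] - d[1]) * mm_per_px
            # Rule 3 (distance): disc-to-fovea distance 4.4 +/- 0.4 mm.
            if abs((dx * dx + dy * dy) ** 0.5 - 4.4) > 3 * 0.4:
                continue
            # Rule 4 (slope): roughly on the same horizontal line.
            if abs(dy) > abs(dx):
                continue
            # Rule 5 (position): macula left of the disc in OD (dx < 0),
            # right of it in OS (assumes x increases to the right).
            if (eye == "OD") != (dx < 0):
                continue
            return d, m
    # No pair satisfies all rules: fall back to the best singles.
    return (discs[0] if discs else None), (maculas[0] if maculas else None)

disc = (500.0, 300.0, 105.0, 0.9)   # 1.05 mm wide at 0.01 mm/px
macula = (60.0, 300.0, 30.0, 0.8)   # 4.4 mm to the left of the disc
best = filter_candidates([disc], [macula], mm_per_px=0.01, eye="OD")
```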
3. Zeng C, Zhao S, Chen B, Zeng A, Li S. Feature-correlation-aware history-preserving-sparse-coding framework for automatic vertebra recognition. Comput Biol Med 2023; 160:106977. PMID: 37163964; DOI: 10.1016/j.compbiomed.2023.106977.
Abstract
Automatic vertebra recognition from magnetic resonance imaging (MRI) is of significance in disease diagnosis and surgical treatment of spinal patients. Although modern methods have achieved remarkable progress, vertebra recognition still faces two challenges in practice: (1) Vertebral appearance challenge: The vertebral repetitive nature causes similar appearance among different vertebrae, while pathological variation causes different appearance among the same vertebrae; (2) Field of view (FOV) challenge: The FOVs of the input MRI images are unpredictable, which exacerbates the appearance challenge because there may be no specific-appearing vertebrae to assist recognition. In this paper, we propose a Feature-cOrrelation-aware history-pReserving-sparse-Coding framEwork (FORCE) to extract highly discriminative features and alleviate these challenges. FORCE is a recognition framework with two elaborated modules: (1) A feature similarity regularization (FSR) module to constrain the features of the vertebrae with the same label (but potentially with different appearances) to be closer in the latent feature space in an Eigenmap-based regularization manner. (2) A cumulative sparse representation (CSR) module to achieve feed-forward sparse coding while preventing historical features from being erased, which leverages both the intrinsic advantages of sparse codes and the historical features for obtaining more discriminative sparse codes encoding each vertebra. These two modules are embedded into the vertebra recognition framework in a plug-and-play manner to improve feature discrimination. FORCE is trained and evaluated on a challenging dataset containing 600 MRI images. The evaluation results show that FORCE achieves high performance in vertebra recognition and outperforms other state-of-the-art methods.
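The FSR idea, pulling same-label vertebra features together in the latent space, can be written as a simple pairwise penalty. This is a toy rendering with binary same-label weights; FORCE's Eigenmap-based weighting and training loop are not reproduced here.

```python
import numpy as np

def feature_similarity_loss(features, labels):
    """Sum of squared distances between features that share a label:
    loss = sum over pairs i < j with y_i == y_j of ||f_i - f_j||^2.
    Minimizing it pulls same-label features closer together."""
    loss = 0.0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            if labels[i] == labels[j]:
                loss += float(np.sum((features[i] - features[j]) ** 2))
    return loss

# Two "L1" vertebra features one unit apart, one unrelated "L2" feature.
feats = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
print(feature_similarity_loss(feats, ["L1", "L1", "L2"]))  # -> 1.0
```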
Affiliation(s)
- Chenyi Zeng: Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China
- Shen Zhao: Department of Artificial Intelligence, Sun Yat-sen University, Guangzhou 510006, China
- Bin Chen: Zhejiang University, Hangzhou, Zhejiang, China
- An Zeng: Guangdong University of Technology, Guangzhou, Guangdong, China
4. Li Y, Xue Y, Li L, Zhang X, Qian X. Domain Adaptive Box-Supervised Instance Segmentation Network for Mitosis Detection. IEEE Trans Med Imaging 2022; 41:2469-2485. PMID: 35389862; DOI: 10.1109/tmi.2022.3165518.
Abstract
The number of mitotic cells present in histopathological slides is an important predictor of tumor proliferation in the diagnosis of breast cancer. However, current approaches can hardly perform precise pixel-level prediction for mitosis datasets with only weak labels (i.e., only the centroid locations of mitotic cells), and they take no account of the large domain gap across histopathological slides from different pathology laboratories. In this work, we propose a Domain adaptive Box-supervised Instance segmentation Network (DBIN) to address these issues. In DBIN, we propose a high-performance Box-supervised Instance-Aware (BIA) head whose core idea is to redesign three box-supervised mask loss terms. Furthermore, we add a Pseudo-Mask-supervised Semantic (PMS) head to enrich the characteristics extracted from the underlying feature maps. Besides, we align the pixel-level feature distributions between source and target domains with a Cross-Domain Adaptive Module (CDAM), so that a detector learned in one lab works well on unlabeled data from another lab. The proposed method achieves state-of-the-art performance across four mainstream datasets. A series of analyses and experiments shows that the proposed BIA and PMS heads accomplish pixel-wise mitosis localization under weak supervision, and that CDAM boosts the generalization ability of our model.
5. Yu C, Hu J, Li G, Zhu S, Bai S, Yi Z. Segmentation for regions of interest in radiotherapy by self-supervised learning. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2022.109370.
6. Graph-Embedded Online Learning for Cell Detection and Tumour Proportion Score Estimation. Electronics 2022. DOI: 10.3390/electronics11101642.
Abstract
Cell detection in microscopy images can provide useful clinical information. Most deep-learning methods for cell detection are fully supervised, and without enough labelled samples their accuracy drops rapidly. Semi-supervised learning methods have been developed to handle limited annotations and massive unlabelled data, but many of these are trained offline and cannot process new incoming data as clinical diagnosis requires. We therefore propose a novel graph-embedded online learning network (GeoNet) for cell detection. It locates and classifies cells with dot annotations, saving considerable manpower. Trained on both historical data and reliable new samples, the online network predicts nuclear locations for upcoming new images while being optimized. To adapt more easily to open data, it employs dynamic graph regularization and learns the inherent nonlinear structure of cells. Moreover, GeoNet can be applied to downstream tasks such as quantitative estimation of the tumour proportion score (TPS), a useful indicator for lung squamous cell carcinoma treatment and prognostics. Experimental results on five large datasets with great variability in cell type and morphology validate the effectiveness and generalizability of the proposed method. For the lung squamous cell carcinoma (LUSC) dataset, the detection F1-scores of GeoNet for negative and positive tumour cells are 0.734 and 0.769, respectively, and its relative error in TPS estimation is 11.1%.
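Once cells have been detected and classified, TPS itself is a simple ratio: the fraction of viable tumour cells classified as positive, expressed as a percentage. A minimal sketch with illustrative counts:

```python
def tumour_proportion_score(n_positive, n_negative):
    """TPS (%) = positive tumour cells / all viable tumour cells * 100."""
    total = n_positive + n_negative
    if total == 0:
        raise ValueError("no tumour cells detected")
    return 100.0 * n_positive / total

# Counts as they might come from a cell detector on one slide.
print(tumour_proportion_score(380, 620))  # -> 38.0
```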
7. Zhao S, Chen B, Chang H, Chen B, Li S. Reasoning discriminative dictionary-embedded network for fully automatic vertebrae tumor diagnosis. Med Image Anal 2022; 79:102456. DOI: 10.1016/j.media.2022.102456.
8. Xue Y, Liu S, Li Y, Wang P, Qian X. A new weakly supervised strategy for surgical tool detection. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2021.107860.
9. Windisch A, Gallien T, Schwarzlmüller C. A machine learning pipeline for autonomous numerical analytic continuation of Dyson-Schwinger equations. EPJ Web Conf 2022. DOI: 10.1051/epjconf/202225809003.
Abstract
Dyson-Schwinger equations (DSEs) are a non-perturbative way to express n-point functions in quantum field theory. Working in Euclidean space and in Landau gauge, for example, one can study the quark propagator Dyson-Schwinger equation in the real and complex domain, given that a suitable and tractable truncation has been found. When solving these equations in the complex domain, that is, for complex external momenta, one has to deform the integration contour of the radial component of the loop momentum, expressed in hyper-spherical coordinates, in the complex plane. This is necessary to avoid poles and branch cuts in the integrand of the self-energy loop. Since Dyson-Schwinger equations must be solved self-consistently, analyzing the analytic properties of the integrand by hand after every iteration step is not feasible. In these proceedings, we suggest a machine learning pipeline, based on deep learning (DL) approaches to computer vision (CV) as well as deep reinforcement learning (DRL), that could solve this problem autonomously by detecting poles and branch cuts in the numerical integrand after every iteration step and by suggesting suitable integration contour deformations that avoid these obstructions. We sketch a proof of principle for both tasks, that is, pole and branch cut detection as well as contour deformation.
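The pole-detection step can be caricatured without any learning at all: evaluate the integrand on a grid in the complex plane and flag points where its magnitude blows up. A toy sketch under illustrative assumptions (a rational integrand with poles at ±z0 and a fixed magnitude threshold); the proceedings propose replacing such hand-tuned heuristics with CV and DRL models.

```python
import numpy as np

def flag_pole_candidates(f, grid, threshold=50.0):
    """Return grid points where |f| exceeds a blow-up threshold --
    a crude numerical stand-in for a learned pole detector."""
    vals = np.abs(f(grid))
    return grid[vals > threshold]

# Toy integrand with a pair of poles at +/- z0 (chosen off the grid).
z0 = 0.505 + 0.505j
f = lambda z: 1.0 / ((z - z0) * (z + z0))

x = np.linspace(-1.0, 1.0, 201)
grid = (x[None, :] + 1j * x[:, None]).ravel()
candidates = flag_pole_candidates(f, grid)
```

Every flagged point clusters around one of the two poles, which is exactly the information a contour-deformation step would need.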
10. Liang H, Cheng Z, Zhong H, Qu A, Chen L. A region-based convolutional network for nuclei detection and segmentation in microscopy images. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103276.
11. Xue Y, Li Y, Liu S, Wang P, Qian X. Oriented Localization of Surgical Tools by Location Encoding. IEEE Trans Biomed Eng 2021; 69:1469-1480. PMID: 34652994; DOI: 10.1109/tbme.2021.3120430.
Abstract
Surgical tool localization is the foundation of a series of advanced surgical functions, e.g., image-guided surgical navigation. In precise scenarios such as surgical tool localization, sophisticated tools and sensitive tissues can be quite close together, which demands higher localization accuracy than general object localization; it is also meaningful to know the orientation of the tools. To achieve this, this paper proposes a Compressive Sensing based Location Encoding (CSLE) scheme, which reformulates surgical tool localization in pixel space as vector regression in an encoding space. With this scheme, the method captures the orientation of surgical tools rather than simply outputting horizontal bounding boxes. To prevent vanishing gradients, a novel back-propagation rule for sparse reconstruction is derived; it is applicable to different implementations of sparse reconstruction and renders the entire network end-to-end trainable. Finally, the proposed approach gives more accurate bounding boxes, captures tool orientation, and achieves state-of-the-art performance compared with nine competitive oriented and non-oriented localization methods (RRD, RefineDet, etc.) on a mainstream surgical image dataset, m2cai16-tool-locations. A range of experiments supports our claim that regression in CSLE space performs better than traditional bounding-box detection in pixel space.
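The encode/decode idea behind such compressive-sensing location encodings can be illustrated with a toy sparse-recovery example: object locations on a coarse grid form a sparse indicator vector, a random matrix compresses it into a short code, and Orthogonal Matching Pursuit (OMP) recovers the locations. The grid size, code length, and plain-NumPy OMP solver are illustrative assumptions, not the paper's trained network or its custom back-propagation rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse recovery of x from y = A x."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Encode: two tool-tip locations on a 16x16 grid -> 2-sparse indicator vector.
n, m = 16 * 16, 100
x = np.zeros(n)
x[[37, 200]] = 1.0                      # hypothetical tool-tip grid cells
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
y = A @ x                               # compressed location code

x_hat = omp(A, y, k=2)
print(sorted(np.flatnonzero(np.abs(x_hat) > 0.5).tolist()))
```

Decoding a 256-dimensional location map from a 100-dimensional code is what makes regression in the encoding space compact; the paper's contribution is making this decoding step differentiable for end-to-end training.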
12. Xu Y, Wu T, Charlton JR, Gao F, Bennett KM. Small Blob Detector Using Bi-Threshold Constrained Adaptive Scales. IEEE Trans Biomed Eng 2021; 68:2654-2665. PMID: 33347401; PMCID: PMC8461780; DOI: 10.1109/tbme.2020.3046252.
Abstract
Recent advances in medical imaging technology hold great promise for medical practice. Imaging biomarkers are being discovered to inform disease diagnosis, prognosis, and treatment assessment, and detecting and segmenting objects in images is often the first step in their quantitative measurement. The challenges of detecting objects in images, particularly small objects known as blobs, include low image resolution, image noise, and overlap among the blobs. This research proposes a Bi-Threshold Constrained Adaptive Scale (BTCAS) blob detector that uncovers the relationship between the U-Net threshold and the Difference of Gaussian (DoG) scale to derive a multi-threshold, multi-scale small blob detector. With lower and upper bounds on the probability thresholds from U-Net, two binarized maps of the distance between blob centers are rendered. Each blob is transformed to a DoG space with an adaptively identified local optimum scale, a Hessian convexity map is rendered using the adaptive scale, and the under-segmentation typical of U-Net is resolved. To validate the performance of the proposed BTCAS, a 3D simulated dataset of blobs (n = 20), a 3D MRI dataset of human kidneys, and a 3D MRI dataset of mouse kidneys are studied. BTCAS is compared against four state-of-the-art methods (HDoG, U-Net with standard thresholding, U-Net with optimal thresholding, and UH-DoG) using precision, recall, F-score, Dice, and IoU. We conclude that BTCAS statistically outperforms the compared detectors.
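The DoG stage that BTCAS builds on is easy to sketch: subtract two Gaussian-smoothed copies of the image and take thresholded local maxima as blob centres. This is a minimal single-scale sketch; the fixed σ, ratio, threshold, and window size are illustrative, whereas BTCAS adapts the scale per blob and adds U-Net probability thresholds and a Hessian convexity test.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def dog_blob_centers(img, sigma=3.0, k=1.6, thresh=0.05):
    """Detect bright blobs as local maxima of a Difference-of-Gaussian
    response: narrow smoothing minus wide smoothing gives a
    centre-surround filter that peaks at blob centres."""
    dog = gaussian_filter(img, sigma) - gaussian_filter(img, k * sigma)
    peaks = (dog == maximum_filter(dog, size=7)) & (dog > thresh)
    return np.argwhere(peaks)  # (row, col) coordinates

# Synthetic image: two Gaussian blobs on a dark background.
yy, xx = np.mgrid[0:64, 0:64]
img = np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / (2 * 4.0 ** 2))
img += np.exp(-((yy - 45) ** 2 + (xx - 40) ** 2) / (2 * 4.0 ** 2))
centers = dog_blob_centers(img)
```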
Affiliation(s)
- Yanzhe Xu: School of Computing, Informatics and Decision Systems Engineering, and ASU-Mayo Center for Innovative Imaging, Arizona State University, Tempe, AZ 85281, USA
- Teresa Wu: School of Computing, Informatics and Decision Systems Engineering, and ASU-Mayo Center for Innovative Imaging, Arizona State University, Tempe, AZ 85281, USA
- Jennifer R. Charlton: Department of Pediatrics, Division of Nephrology, University of Virginia, Charlottesville, 22908-0386, USA
- Fei Gao: School of Computing, Informatics and Decision Systems Engineering, and ASU-Mayo Center for Innovative Imaging, Arizona State University, Tempe, AZ 85281, USA
- Kevin M. Bennett: Department of Radiology, Washington University, St. Louis, MO 63130, USA
13. Zhang Q, Yun KK, Wang H, Yoon SW, Lu F, Won D. Automatic cell counting from stimulated Raman imaging using deep learning. PLoS One 2021; 16:e0254586. PMID: 34288972; PMCID: PMC8294532; DOI: 10.1371/journal.pone.0254586.
Abstract
In this paper, we propose an automatic cell counting framework for stimulated Raman scattering (SRS) images, which can assist tumor tissue characterization, cancer diagnosis, and surgery planning. SRS microscopy has advanced tumor diagnosis and surgery by mapping lipids and proteins from fresh specimens and rapidly revealing fundamental diagnostic hallmarks of tumors at high resolution. However, cell counting from label-free SRS images has been challenging due to the limited contrast between cells and tissue, along with the heterogeneity of tissue morphology and biochemical composition. To this end, a deep-learning-based cell counting scheme is proposed by modifying and applying U-Net, an effective medical image semantic segmentation model that uses a small number of training samples. The distance transform and watershed segmentation algorithms are then applied to yield cell instance segmentation and cell counting results. On SRS images of real human brain tumor specimens, promising results are obtained, with an area under the curve (AUC) above 98% and a cell counting correlation of R = 0.97 between SRS and hematoxylin-and-eosin (H&E)-stained histological images. The proposed scheme illustrates the potential of near-real-time automatic cell counting and encourages the application of deep learning techniques in biomedical and pathological image analysis.
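The distance-transform step after segmentation can be sketched as follows. This toy version feeds in a simulated probability map in place of U-Net output and stops at counting distance-peak markers; a full watershed would additionally assign every foreground pixel to its nearest marker, as the paper's pipeline does.

```python
import numpy as np
from scipy import ndimage as ndi

def count_cells(prob_map, p_thresh=0.5, peak_window=5):
    """Count cell instances from a segmentation probability map.

    Threshold -> Euclidean distance transform -> local distance maxima
    as cell markers. Touching cells are separated because each keeps
    its own distance peak.
    """
    mask = prob_map > p_thresh
    dist = ndi.distance_transform_edt(mask)
    peaks = (dist == ndi.maximum_filter(dist, size=peak_window)) & (dist > 0)
    _, n_cells = ndi.label(peaks)
    return n_cells

# Simulated "U-Net output": two overlapping disc-shaped cells.
yy, xx = np.mgrid[0:60, 0:60]
prob = np.zeros((60, 60))
prob[(yy - 25) ** 2 + (xx - 20) ** 2 < 100] = 0.9
prob[(yy - 25) ** 2 + (xx - 36) ** 2 < 100] = 0.9
print(count_cells(prob))  # the two touching discs are counted separately
```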
Affiliation(s)
- Qianqian Zhang: Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, USA
- Kyung Keun Yun: Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, USA
- Hao Wang: Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, USA
- Sang Won Yoon: Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, USA
- Fake Lu: Department of Biomedical Engineering, State University of New York at Binghamton, Binghamton, NY, USA
- Daehan Won: Department of System Science and Industrial Engineering, State University of New York at Binghamton, Binghamton, NY, USA
14. Ciorîță A, Tripon SC, Mircea IG, Podar D, Barbu-Tudoran L, Mircea C, Pârvu M. The Morphological and Anatomical Traits of the Leaf in Representative Vinca Species Observed on Indoor- and Outdoor-Grown Plants. Plants (Basel) 2021; 10:622. PMID: 33805226; PMCID: PMC8064346; DOI: 10.3390/plants10040622.
Abstract
The morphological and anatomical traits of the Vinca leaf were examined using microscopy techniques. Outdoor Vinca minor and V. herbacea plants and greenhouse-cultivated V. major and V. major var. variegata plants showed interspecific variation. The leaves of all Vinca species are hypostomatic; however, except in V. minor, a few stomata were also present on the upper epidermis. V. minor leaves had the highest stomatal index and V. major the lowest, while the distribution of trichomes on the upper epidermis was species-specific. Differentiated palisade and spongy parenchyma tissues were present in the leaves of all Vinca species, although V. minor and V. herbacea leaves had a more organized anatomical aspect than V. major and V. major var. variegata leaves. Additionally, as a novelty, the cellular-to-intercellular space ratio of the Vinca leaf mesophyll was revealed with the help of computational analysis. Lipid droplets of different sizes and aspects were localized in the spongy parenchyma cells. Ultrastructural characteristics of the cuticle and its epicuticular waxes were described for the first time. Moreover, thick layers of cutin seemed to be characteristic of the outdoor plants only. This could be an adaptation to unpredictable environmental conditions, but it might also influence the chemical composition of the plants.
Affiliation(s)
- Alexandra Ciorîță: Faculty of Biology and Geology, Babeș-Bolyai University, 44 Republicii Street, 400015 Cluj-Napoca, Romania; Electron Microscopy Center, Faculty of Biology and Geology, Babeș-Bolyai University, 5-7 Clinicilor Street, 400006 Cluj-Napoca, Romania; Integrated Electron Microscopy Laboratory, National Institute for Research and Development of Isotopic and Molecular Technologies, 67-103 Donat Street, 400293 Cluj-Napoca, Romania
- Septimiu Cassian Tripon: Electron Microscopy Center, Faculty of Biology and Geology, Babeș-Bolyai University, 5-7 Clinicilor Street, 400006 Cluj-Napoca, Romania; Integrated Electron Microscopy Laboratory, National Institute for Research and Development of Isotopic and Molecular Technologies, 67-103 Donat Street, 400293 Cluj-Napoca, Romania
- Ioan Gabriel Mircea: Faculty of Mathematics and Informatics, Babeș-Bolyai University, 1 M. Kogalniceanu Street, 400084 Cluj-Napoca, Romania
- Dorina Podar: Faculty of Biology and Geology, Babeș-Bolyai University, 44 Republicii Street, 400015 Cluj-Napoca, Romania
- Lucian Barbu-Tudoran: Electron Microscopy Center, Faculty of Biology and Geology, Babeș-Bolyai University, 5-7 Clinicilor Street, 400006 Cluj-Napoca, Romania; Integrated Electron Microscopy Laboratory, National Institute for Research and Development of Isotopic and Molecular Technologies, 67-103 Donat Street, 400293 Cluj-Napoca, Romania
- Cristina Mircea: Faculty of Biology and Geology, Babeș-Bolyai University, 44 Republicii Street, 400015 Cluj-Napoca, Romania
- Marcel Pârvu: Faculty of Biology and Geology, Babeș-Bolyai University, 44 Republicii Street, 400015 Cluj-Napoca, Romania
15. Nateghi R, Danyali H, Helfroush MS. A deep learning approach for mitosis detection: Application in tumor proliferation prediction from whole slide images. Artif Intell Med 2021; 114:102048. PMID: 33875159; DOI: 10.1016/j.artmed.2021.102048.
Abstract
Tumor proliferation, which is correlated with tumor grade, is a crucial biomarker of breast cancer patients' prognosis. The most commonly used method of predicting tumor proliferation speed is counting mitotic figures in Hematoxylin and Eosin (H&E) histological slides, but manual mitosis counting is known to suffer from reproducibility problems. This paper presents a fully automated system for tumor proliferation prediction from whole slide images via mitosis counting. First, treating epithelial tissue as the mitosis activity region, we build a deep-learning-based region-of-interest detection method to select high mitosis activity regions from whole slide images. Second, we learn a set of deep neural networks to detect mitoses in the selected areas; the proposed detection system overcomes the challenges of mitosis detection through two novel approaches, a deep preprocessing step and two-step hard negative mining. Third, we train a Support Vector Machine (SVM) classifier to predict the final tumor proliferation score. The proposed method was evaluated on the Tumor Proliferation Assessment Challenge (TUPAC16) dataset and achieved a 73.81% F-measure and a 0.612 weighted kappa score, significantly outperforming all previous approaches. Experimental results demonstrate that the proposed system considerably improves tumor proliferation prediction accuracy and provides a reliable automated tool to support healthcare decision-making.
Affiliation(s)
- Ramin Nateghi: Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran
- Habibollah Danyali: Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran
- Mohammad Sadegh Helfroush: Department of Electrical and Electronics Engineering, Shiraz University of Technology, Shiraz, Iran
16. Xue Y, Li Y, Liu S, Zhang X, Qian X. Crowd Scene Analysis Encounters High Density and Scale Variation. IEEE Trans Image Process 2021; 30:2745-2757. PMID: 33502976; DOI: 10.1109/tip.2021.3049963.
Abstract
Crowd scene analysis is receiving growing attention due to its wide applications, and grasping accurate crowd locations is important for identifying high-risk regions. In this article, we propose a Compressed Sensing based Output Encoding (CSOE) scheme, which casts detecting the pixel coordinates of small objects into a task of signal regression in an encoding space. To prevent vanishing gradients, we derive our own sparse reconstruction backpropagation rule, which adapts to distinct implementations of sparse reconstruction and makes the whole model end-to-end trainable. With the support of CSOE and this backpropagation rule, the proposed method is more robust to deep model training error, which is especially harmful to crowd counting and localization. The method achieves state-of-the-art performance across four mainstream datasets, with especially strong results in highly crowded scenes. A series of analyses and experiments supports our claim that, for highly crowded scenes, regression in CSOE space outperforms traditional detection of small-object coordinates in pixel space.