1
Automated reporting of cervical biopsies using artificial intelligence. PLOS Digit Health 2024; 3:e0000381. PMID: 38648217; PMCID: PMC11034655; DOI: 10.1371/journal.pdig.0000381
Abstract
When detected at an early stage, the 5-year survival rate for people with invasive cervical cancer is 92%. Awareness of the signs and symptoms of cervical cancer, together with early detection, greatly improves the chances of successful treatment. We have developed an Artificial Intelligence (AI) algorithm, trained and evaluated on cervical biopsies, for automated reporting of digital diagnostics. The aim is to increase the overall efficiency of pathological diagnosis, with performance tuned for high sensitivity to malignant cases. A tool for triage, identifying cancer and high-grade lesions, may reduce reporting time by highlighting areas of interest in a slide for the pathologist, thereby improving efficiency. We trained and validated our algorithm on 1738 cervical whole slide images (WSIs), with one WSI per patient. On an independent test set of 811 WSIs, we achieved 93.4% sensitivity to malignancy when classifying slides. Processing a WSI with our algorithm takes approximately 1.5 minutes on an NVIDIA Tesla V100 GPU. Whole slide images of different formats (TIFF, iSyntax, and CZI) can be processed using this code, and it is easily extendable to other formats.
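The high-sensitivity operating point reported above comes down to the choice of a decision threshold on per-slide malignancy scores. A minimal sketch of one way this can be done (the scoring model, the data, and the exact procedure are illustrative assumptions, not the paper's published pipeline):

```python
import numpy as np

def threshold_for_sensitivity(scores, labels, target_sensitivity=0.934):
    """Largest score threshold whose validation sensitivity is at
    least `target_sensitivity`.

    scores -- per-slide malignancy scores, shape (n,)
    labels -- ground truth, 1 = malignant, 0 = benign
    """
    malignant = np.sort(scores[labels == 1])
    # Flagging slides with score >= t, choosing t as the k-th smallest
    # malignant score yields sensitivity of at least (n - k) / n.
    k = int(np.floor((1.0 - target_sensitivity) * len(malignant)))
    return malignant[k]

# Illustrative validation data (random scores, not the paper's).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 500)
scores = np.clip(0.6 * labels + rng.normal(0.3, 0.2, 500), 0.0, 1.0)
t = threshold_for_sensitivity(scores, labels)
flagged = scores >= t
sensitivity = (flagged & (labels == 1)).sum() / (labels == 1).sum()
print(f"threshold = {t:.3f}, validation sensitivity = {sensitivity:.3f}")
```

On held-out data the achieved sensitivity will fluctuate around the validation target, which is why a separately reported test-set figure, as above, is the meaningful one.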
2
Magnifying Networks for Histopathological Images with Billions of Pixels. Diagnostics (Basel) 2024; 14:524. PMID: 38472996; DOI: 10.3390/diagnostics14050524
Abstract
Amongst the benefits conferred by the shift from traditional to digital pathology is the potential to use machine learning for diagnosis, prognosis, and personalization. A major challenge in the realization of this potential emerges from the extremely large size of digitized images, which are often in excess of 100,000 × 100,000 pixels. In this paper, we tackle this challenge head-on by diverging from the existing approaches in the literature, which rely on the splitting of the original images into small patches, and introducing magnifying networks (MagNets). By using an attention mechanism, MagNets identify the regions of the gigapixel image that benefit from an analysis on a finer scale. This process is repeated, resulting in an attention-driven coarse-to-fine analysis of only a small portion of the information contained in the original whole-slide images. Importantly, this is achieved using minimal ground truth annotation, namely only global, slide-level labels. The results from our tests on the publicly available Camelyon16 and Camelyon17 datasets demonstrate the effectiveness of MagNets, as well as of the proposed optimization framework, in the task of whole-slide image classification. Importantly, MagNets process at least five times fewer patches from each whole-slide image than any of the existing end-to-end approaches.
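A toy sketch of the attention-driven coarse-to-fine traversal described above: at each level only the highest-attention cells are descended into, so the number of patches examined grows with the depth of the recursion rather than with the area of the slide. The attention function is a trained network in MagNets; the stand-in scoring function and all sizes below are illustrative:

```python
import numpy as np

def magnify(image, attention_fn, depth=3, top_k=2, grid=4):
    """Recursively descend into high-attention cells of a gigapixel image.

    At each level the current region is divided into a grid x grid mesh,
    an attention score is computed per cell, and only the top_k cells are
    analysed at the next, finer, level. Returns the (y, x, h, w) regions
    reached at the finest level.
    """
    h, w = image.shape[:2]
    if depth == 0 or min(h, w) < grid:
        return [(0, 0, h, w)]
    ch, cw = h // grid, w // grid
    cells = [(y * ch, x * cw) for y in range(grid) for x in range(grid)]
    scores = [attention_fn(image[y:y + ch, x:x + cw]) for y, x in cells]
    regions = []
    for i in np.argsort(scores)[-top_k:]:
        y, x = cells[i]
        for ry, rx, rh, rw in magnify(image[y:y + ch, x:x + cw],
                                      attention_fn, depth - 1, top_k, grid):
            regions.append((y + ry, x + rx, rh, rw))
    return regions

# Toy run: attention = mean intensity (a learned network in MagNets).
img = np.random.rand(4096, 4096)
regions = magnify(img, attention_fn=np.mean)
print(len(regions), "regions of 64 x 64 px out of", (4096 // 64) ** 2, "possible")
```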
3
Disease: An ill-founded concept at odds with the principle of patient-centred medicine. J Eval Clin Pract 2024. PMID: 38368599; DOI: 10.1111/jep.13973
Abstract
BACKGROUND Despite the decades-long record of philosophical recognition and interest, the intricacy of the deceptively familiar concepts of 'disease', 'disorder', 'disability', and so forth has only recently begun showing itself with clarity in popular discourse, wherein its newly emerging prominence stems from the liberties and restrictions contingent upon it. Whether a person is deemed to be afflicted by a disease or a disorder governs their ability to access health care, be it free at the point of use or provided by an insurer; it also influences the treatment of individuals by the judicial system and employers; it even affects one's own perception of self. AIMS All existing philosophical definitions of disease struggle with coherency, causing much confusion and strife, and leading to inconsistencies in real-world practice. Hence, there is a real need for an alternative. MATERIALS AND METHODS In the present article I analyse the variety of contemporary views of disease, showing them all to be inadequate, lacking in firm philosophical foundations, and failing to meet the desideratum of patient-driven care. RESULTS Illuminated by the insights emanating from the said analysis, I introduce a novel approach with firm ethical foundations rooted in sentience, that is, the subjective experience of sentient beings. DISCUSSION I argue that the notion of disease is at best superfluous, and likely even harmful, in the provision of compassionate and patient-centred care. CONCLUSION Using a series of presently contentious cases, I illustrate the power of the proposed framework, which is capable of providing actionable and humane solutions to problems that leave the current theories confounded.
4
Localization and phenotyping of tuberculosis bacteria using a combination of deep learning and SVMs. Comput Biol Med 2023; 167:107573. PMID: 37913616; DOI: 10.1016/j.compbiomed.2023.107573
Abstract
Successful treatment of pulmonary tuberculosis (TB) depends on early diagnosis and careful monitoring of treatment response. Identification of acid-fast bacilli by fluorescence microscopy of sputum smears is a common tool for both tasks. Microscopy-based analysis of the intracellular lipid content and dimensions of individual Mycobacterium tuberculosis (Mtb) cells also describes phenotypic changes which may improve our biological understanding of antibiotic therapy for TB. However, fluorescence microscopy is a challenging, time-consuming and subjective procedure. In this work, we automate the examination of fields of view (FOVs) from microscopy images to determine the lipid content and dimensions (length and width) of Mtb cells. We introduce an adapted variation of the UNet model to efficiently localise bacteria within FOVs stained by two fluorescence dyes: auramine O to identify Mtb and LipidTox Red to identify intracellular lipids. Thereafter, we propose a feature extractor in conjunction with feature descriptors to feed a representation into a support vector multi-regressor and estimate the length and width of each bacterium. Using a real-world data corpus from Tanzania, the proposed method (i) outperformed previous methods for bacterial detection with an 8% improvement (Dice coefficient) and (ii) estimated the cell length and width with a root mean square error of less than 0.01%. Our network can be used to examine phenotypic characteristics of Mtb cells visualised by fluorescence microscopy, improving the consistency and time efficiency of this procedure compared to manual methods.
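The final regression stage can be sketched with scikit-learn, where a support vector multi-regressor predicts length and width jointly from per-bacterium descriptors. The 32-dimensional features and the synthetic sizes below are placeholders; the paper's feature extractor and descriptors are not reproduced:

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 32))  # 200 bacteria, 32-D descriptors (placeholder)
sizes = np.column_stack([              # synthetic (length, width), e.g. in microns
    2.0 + 0.3 * features[:, 0] + rng.normal(0, 0.05, 200),
    0.5 + 0.1 * features[:, 1] + rng.normal(0, 0.02, 200),
])

# One SVR per target dimension, predicting length and width jointly.
model = MultiOutputRegressor(
    make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0)))
model.fit(features[:150], sizes[:150])
pred = model.predict(features[150:])
rmse = np.sqrt(((pred - sizes[150:]) ** 2).mean(axis=0))
print("RMSE (length, width):", rmse)
```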
5
Resolving the ethical quagmire of the persistent vegetative state. J Eval Clin Pract 2023; 29:1108-1118. PMID: 37157947; DOI: 10.1111/jep.13848
Abstract
BACKGROUND A patient is diagnosed with the persistent vegetative state (PVS) when they show no evidence of awareness of the self or the environment for an extended period of time. The chance of recovery of any mental function or the ability to interact in a meaningful way is low. Though rare, the condition, being a state outwith the realm of the conscious and coupled with the trauma experienced by the patient's kin as well as by health care staff confronted with painful decisions regarding the patient's care, has attracted a considerable amount of discussion within the bioethics community. AIMS At present, there is a wealth of literature that discusses the relevant neurology, elucidates the plethora of ethical challenges in understanding and dealing with the condition, and analyses the real-world cases which have prominently featured in the mainstream media as a result of emotionally charged, divergent views concerning the provision of care to the patient. However, there is scarcely anything in the published scholarly literature that proposes concrete and practically actionable solutions to the now widely recognized moral conundrums. The present article describes a step in that direction. MATERIALS & METHODS I start from the very foundations, laying out a sentientist approach which serves as the basis for the consequent moral decision-making, and then proceed to systematically identify and deconstruct the different cases of discord, using the aforementioned foundations as the basis for their resolution. RESULTS A major intellectual contribution concerns the fluidity of the duty of care, which I argue is demanded by the sentientist focus. DISCUSSION The said duty is shown initially to have for its object the patient but, depending on the circumstances, it can change to the patient's kin or the health care staff themselves. CONCLUSION The proposed framework represents the first comprehensive proposal regarding the decision-making processes involved in deliberation on the provision of life-sustaining treatment to a patient in a PVS.
6
A Siamese Transformer Network for Zero-Shot Ancient Coin Classification. J Imaging 2023; 9:107. PMID: 37367455; PMCID: PMC10299244; DOI: 10.3390/jimaging9060107
Abstract
Ancient numismatics, the study of ancient coins, has in recent years become an attractive domain for the application of computer vision and machine learning. Though rich in research problems, the predominant focus in this area to date has been on the task of attributing a coin from an image, that is, of identifying its issue. This may be considered the cardinal problem in the field and it continues to challenge automatic methods. In the present paper, we address a number of limitations of previous work. Firstly, the existing methods approach the problem as a classification task. As such, they are unable to deal with classes with no or few exemplars (which would be most, given over 50,000 issues of Roman Imperial coins alone), and require retraining when exemplars of a new class become available. Hence, rather than seeking to learn a representation that distinguishes a particular class from all the others, herein we seek a representation that is overall best at distinguishing classes from one another, thus relinquishing the demand for exemplars of any specific class. This leads to our adoption of the paradigm of pairwise coin matching by issue, rather than the usual classification paradigm, and the specific solution we propose in the form of a Siamese neural network. Furthermore, while adopting deep learning, motivated by its successes in the field and its unchallenged superiority over classical computer vision approaches, we also seek to leverage the advantages that transformers have over the previously employed convolutional neural networks, and in particular their non-local attention mechanisms, which ought to be particularly useful in ancient coin analysis by associating semantically but not visually related distal elements of a coin's design. Evaluated on a large data corpus of 14,820 images and 7,605 issues, using transfer learning and only a small training set of 542 images of 24 issues, our Double Siamese ViT model is shown to surpass the state of the art by a large margin, achieving an overall accuracy of 81%. Moreover, our further investigation of the results shows that the majority of the method's errors are unrelated to the intrinsic aspects of the algorithm itself, but are rather a consequence of unclean data, which is a problem that can be easily addressed in practice by simple pre-processing and quality checking.
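The pairwise matching paradigm is independent of the particular backbone: two images pass through the same shared-weight encoder and are declared the same issue when their embeddings are sufficiently similar. A minimal PyTorch sketch with a stand-in encoder (the paper's Double Siamese ViT and its training are not reproduced):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseMatcher(nn.Module):
    """Shared-weight encoder; a pair matches if the embeddings agree."""

    def __init__(self, encoder: nn.Module):
        super().__init__()
        self.encoder = encoder  # a ViT backbone in the actual paper

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        za = F.normalize(self.encoder(a), dim=-1)
        zb = F.normalize(self.encoder(b), dim=-1)
        return (za * zb).sum(-1)  # cosine similarity in [-1, 1]

# Stand-in encoder: flatten + linear projection (purely illustrative).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 128))
matcher = SiameseMatcher(encoder)

coin_a = torch.randn(1, 3, 224, 224)
coin_b = torch.randn(1, 3, 224, 224)
same_issue = matcher(coin_a, coin_b) > 0.5  # threshold set on validation data
print(bool(same_issue))
```

Because the decision is a comparison rather than a class prediction, a new issue requires no retraining, only an exemplar image to compare against.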
7
Caveat Medicus: It's Time to Re-Think Stratification, You May Not Be Helping. Biomark Insights 2023; 18:11772719231174746. PMID: 37200865; PMCID: PMC10186568; DOI: 10.1177/11772719231174746
Abstract
Background The focus of the present Letter is on the large and seemingly fertile body of work captured under the umbrella of 'patient stratification'. Objectives I identify and explain a fundamental methodological flaw underlying the manner in which the development of an increasingly large number of new stratification strategies is approached. Design I show an inherent conflict between the assumptions made and the very purpose of stratification and its application in practice. Methods I analyse the methodological underpinnings of stratification as presently done and draw parallels with conceptually similarly flawed precedents which are now widely recognized. Results The highlighted flaw is shown to undermine the overarching goal of improved patient outcomes by undue fixation on an ill-founded proxy. Conclusion I issue a call for a re-think of the problem and of the processes leading to the adoption of new stratification strategies in the clinic.
8
Lymphocyte Classification from Hoechst Stained Slides with Deep Learning. Cancers (Basel) 2022; 14:5957. PMID: 36497439; PMCID: PMC9738034; DOI: 10.3390/cancers14235957
Abstract
Multiplex immunofluorescence and immunohistochemistry benefit patients by allowing cancer pathologists to identify proteins expressed on the surface of cells. This enables cell classification, better understanding of the tumour microenvironment, and more accurate diagnoses, prognoses, and tailored immunotherapy based on the immune status of individual patients. However, these techniques are expensive and time-consuming, requiring complex staining and imaging performed by expert technicians. Hoechst staining is far cheaper and easier to perform, but is not typically used as it binds to DNA rather than to the proteins targeted by immunofluorescence techniques. In this work we show that, through the use of deep learning, it is possible to identify an immune cell subtype without immunofluorescence. We train a deep convolutional neural network to identify cells expressing the T lymphocyte marker CD3 from Hoechst 33342 stained tissue alone. CD3-expressing cells are often used in key prognostic metrics such as the assessment of immune cell infiltration, and by identifying them without the need for costly immunofluorescence, we present a promising new approach to cheaper prediction and improvement of patient outcomes. We also show that, by using deep learning interpretability techniques, we can gain insight into the previously unknown morphological features which make this possible.
9
A systemic challenge in dietetics: Methodological inadequacies, erroneous claims, and misleading interpretations, and transparency of post-publication scrutiny. Nutr Health 2022; 28:319-323. PMID: 35414320; PMCID: PMC9388950; DOI: 10.1177/02601060221094126
Abstract
Background: Obesity is sweeping across the developed world. Yet, the public remains largely confused when it comes to the nature of dietary habits which would serve to counteract this trend. Aim: I highlight the responsibility that the scientific community bears for the confusion, and explain the kind of actions that are needed if public trust in science is to be maintained. Methods: Starting from an example of a recently published and prominently featured article in a leading journal, I analyse various common methodological aspects of dietetics research and the consequent claims, contextualizing this within the broader environment which includes the scientific publishing process and the mainstream media. Results: Methodological inadequacies, erroneous claims, and misleading interpretations of findings are often found in dietetics research, highlighting the deficiencies of a system which fails to uphold the fundamental principles of scientific inquiry. Conclusion: It is imperative that individual scientists speak out and challenge poor science, unsatisfactory publishing processes, and bombastic and misleading communication of research.
10
Abstract
Identifying the configuration of chess pieces from an image of a chessboard is a computer vision problem that has not yet been solved accurately. It is nonetheless important for helping amateur chess players improve their game by facilitating automatic computer analysis without the overhead of manually entering the pieces. Current approaches are limited by the lack of large datasets and are not designed to adapt to unseen chess sets. This paper puts forth a new dataset, synthesised from a 3D model, that is an order of magnitude larger than existing ones. Trained on this dataset, a novel end-to-end chess recognition system is presented that combines traditional computer vision techniques with deep learning. It localises the chessboard using a RANSAC-based algorithm that computes a projective transformation of the board onto a regular grid. Using two convolutional neural networks, it then predicts an occupancy mask for the squares in the warped image and finally classifies the pieces. The described system achieves an error rate of 0.23% per square on the test set, 28 times better than the current state of the art. Further, a few-shot transfer learning approach is developed that is able to adapt the inference system to a previously unseen chess set using just two photos of the starting position, obtaining a per-square accuracy of 99.83% on images of that new chess set. The code, dataset, and trained models are made available online.
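The board localisation step, warping the detected board onto a regular grid so that every square becomes a fixed-size crop, can be sketched with OpenCV. The four corner points are assumed inputs here; in the described system they come from a RANSAC-based line detection stage:

```python
import cv2
import numpy as np

def warp_board(image, corners, square_px=100):
    """Warp a chessboard to a regular 8 x 8 grid of fixed-size squares.

    corners -- four (x, y) board corners in the order top-left,
               top-right, bottom-right, bottom-left (assumed given).
    """
    side = 8 * square_px
    dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    H = cv2.getPerspectiveTransform(np.float32(corners), dst)
    warped = cv2.warpPerspective(image, H, (side, side))
    squares = [warped[r * square_px:(r + 1) * square_px,
                      c * square_px:(c + 1) * square_px]
               for r in range(8) for c in range(8)]
    return warped, squares  # squares feed the occupancy/piece classifiers

img = np.zeros((480, 640, 3), np.uint8)                  # placeholder image
corners = [(100, 50), (540, 60), (560, 430), (80, 420)]  # assumed detections
_, squares = warp_board(img, corners)
print(len(squares), "squares of shape", squares[0].shape)
```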
11
Assessment of Immunological Features in Muscle-Invasive Bladder Cancer Prognosis Using Ensemble Learning. Cancers (Basel) 2021; 13:1624. PMID: 33915698; PMCID: PMC8036815; DOI: 10.3390/cancers13071624
Abstract
The clinical staging and prognosis of muscle-invasive bladder cancer (MIBC) routinely includes the assessment of patient tissue samples by a pathologist. Recent studies corroborate the importance of image analysis in identifying and quantifying immunological markers from tissue samples that can provide further insight into patient prognosis. In this paper, we apply multiplex immunofluorescence to MIBC tissue sections to capture whole-slide images and quantify potential prognostic markers related to lymphocytes, macrophages, tumour buds, and PD-L1. We propose a machine-learning-based approach for the prediction of 5 year prognosis with different combinations of image, clinical, and spatial features. An ensemble model comprising several functionally different models successfully stratifies MIBC patients into two risk groups with high statistical significance (p-value < 1×10⁻⁵). Critical to improving MIBC survival rates, our method correctly classifies 71.4% of the patients who succumb to MIBC, which is significantly more than the 28.6% of the current clinical gold standard, the TNM staging system.
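The ensemble idea, several functionally different base learners combined into one risk-group predictor, can be sketched with scikit-learn. The features, labels, and choice of base models below are illustrative placeholders, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))           # image + clinical + spatial features
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic 5-year outcome

# Functionally different base models, combined by soft voting.
ensemble = VotingClassifier(
    estimators=[
        ("lr", make_pipeline(StandardScaler(), LogisticRegression())),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ],
    voting="soft",
)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```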
12
Corrigendum: Deep Learning for Whole Slide Image Analysis: An Overview. Front Med (Lausanne) 2020; 7:419. PMID: 32974358; PMCID: PMC7466414; DOI: 10.3389/fmed.2020.00419
13
Abstract LB-368: Applications of automated image analysis, machine learning and spatial statistics for the improvement of stage II colorectal cancer prognosis. Cancer Res 2020; 80(16 Suppl):LB-368. DOI: 10.1158/1538-7445.am2020-lb-368
Abstract
Background and Objectives: The tumor microenvironment (TME) plays an important role in tumor progression and patient survival outcome. The TME varies significantly amongst patients as well as within individual tumors. Although a number of studies have reported the prognostic significance of various TME components, only a very small number of those address the issue of intra-tumor heterogeneity. In this study, we evaluate the densities and interactions of tumor infiltrating lymphocytes, macrophages and tumor buds (TBs) in order to create a more personalized prognosis for patients with stage II colorectal cancer (CRC). This was achieved through the use of multiplexed immunofluorescence, automated image analysis and machine learning approaches. In addition, we developed an objective methodology for studying intra-tumor heterogeneity and assessing its impact on patient survival outcome. Methods: Multiplexed immunofluorescence and automated image analysis using HALO® software were applied for the quantification of CD3+ and CD8+ T cells, CD68+ and CD163+ macrophages, and TBs, across 2 sequential whole slide images (WSIs). This was performed on 230 stage II CRC patient samples from Scotland and Japan. Density and spatial relationships between the cellular subpopulations were averaged across the WSI to form input for a prognostic model. To evaluate intra-patient heterogeneity, a further analysis method was developed which divided the WSI into grids with a fixed tile area of 0.785 mm². Tiles with significantly large or small numbers of the feature of interest were considered hotspots or coldspots, respectively. The number of each object's hotspots and coldspots within each patient was then calculated. Two machine learning algorithms were employed for the analysis of the data from each analysis method, which led to the development of two new prognostic risk models. Results: The first combinatorial prognostic model, utilizing the averaged data, consisted of lymphocyte infiltration, the number of lymphocytes within 50 µm of TBs, and the CD68+/CD163+ macrophage cell ratio. This model was shown to identify a subpopulation of patients who exhibit 100% survival over a 5-year follow-up period. This finding was confirmed in an independent, international validation cohort. The second prognostic model, using the results from the spatial heatmap analysis, included the number of TB hotspots as well as the number of hotspots for the proximity of lymphocytes to TBs. This model was shown to be of high prognostic significance. Conclusion: This work demonstrates how, by applying digital pathology and machine learning approaches, it is possible to identify stage II CRC patients for whom surgical resection alone may be curative. Furthermore, we report a new methodology to evaluate intra-tumor heterogeneity which was found to improve stage II CRC patient stratification when compared to the current clinical gold standards.
Citation Format: Ines P. Nearchou, Daniel A. Soutar, Kate Lillard, Ueno Hideki, Ognjen Arandjelović, David J. Harrison, Peter D. Caie. Applications of automated image analysis, machine learning and spatial statistics for the improvement of stage II colorectal cancer prognosis [abstract]. In: Proceedings of the Annual Meeting of the American Association for Cancer Research 2020; 2020 Apr 27-28 and Jun 22-24. Philadelphia (PA): AACR; Cancer Res 2020;80(16 Suppl):Abstract nr LB-368.
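The spatial heatmap analysis lends itself to a compact sketch: the slide is divided into fixed-area tiles, objects are counted per tile, and tiles with unusually large or small counts are flagged. The counts and the significance rule below are illustrative; the abstract does not specify the exact statistical criterion:

```python
import numpy as np

def hot_cold_spots(counts, z=1.96):
    """Flag tiles whose object count deviates strongly from the slide mean.

    counts -- 2-D array of per-tile object counts (e.g. tumour buds).
    Returns boolean masks (hotspots, coldspots).
    """
    mu, sd = counts.mean(), counts.std()
    return counts > mu + z * sd, counts < mu - z * sd

rng = np.random.default_rng(0)
tile_counts = rng.poisson(5, size=(20, 30))  # synthetic tile grid
tile_counts[4, 7] = 40                       # implant one obvious hotspot
hot, cold = hot_cold_spots(tile_counts)
print("hotspots:", int(hot.sum()), "coldspots:", int(cold.sum()))
```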
14
Tracking of Deformable Objects Using Dynamically and Robustly Updating Pictorial Structures. J Imaging 2020; 6:61. PMID: 34460654; PMCID: PMC8321174; DOI: 10.3390/jimaging6070061
Abstract
The problem posed by complex, articulated or deformable objects has been the focus of much tracking research for a considerable length of time. However, it remains a major challenge, fraught with numerous difficulties. The increased ubiquity of technology in all realms of our society has made the need for effective solutions all the more urgent. In this article, we describe a novel method which systematically addresses the aforementioned difficulties and in practice outperforms the state of the art. Global spatial flexibility and robustness to deformations are achieved by adopting a pictorial-structure-based geometric model, and localized appearance changes by a subspace-based model of part appearance underlain by a gradient-based representation. In addition to one-off learning of both the geometric constraints and part appearances, we introduce a continual learning framework which implements information discounting, i.e. the discarding of historical appearances in favour of the more recent ones. Moreover, as a means of ensuring robustness to transient occlusions (including self-occlusions), we propose a solution for detecting unlikely appearance changes which allows for unreliable data to be rejected. A comprehensive evaluation of the proposed method, an analysis and discussion of findings, and a comparison with several state-of-the-art methods demonstrate the major superiority of our algorithm.
15
Big Data Driven Detection of Trees in Suburban Scenes Using Visual Spectrum Eye Level Photography. Sensors (Basel) 2020; 20:3051. PMID: 32481523; PMCID: PMC7308893; DOI: 10.3390/s20113051
Abstract
The aim of the work described in this paper is to detect trees in eye-level view images. Unlike previous work, which universally considers highly constrained environments, such as natural parks and wooded areas, or simple scenes with little clutter and clear tree separation, our focus is on much more challenging suburban scenes, which are rich in clutter and highly variable in type and appearance (houses, walls, shrubs, cars, bicycles, pedestrians, hydrants, lamp posts, etc.). Thus, we motivate and introduce three different approaches: (i) a conventional computer vision based approach, employing manually engineered steps and making use of explicit human knowledge of the application domain, (ii) a more machine learning oriented approach, which learns from densely extracted local features in the form of scale invariant features (SIFT), and (iii) a machine learning based approach which employs both colour and appearance models as a means of making the most of the available discriminative information. We also make a significant contribution with regard to the collection of training and evaluation data. In contrast to the existing work, which relies on manual data collection (thus risking unintended bias) or corpora constrained in variability and limited in size (thus not allowing for reliable generalisation inferences to be made), we show how large amounts of representative data can be collected automatically using freely available tools, such as Google's Street View, and equally automatically processed to produce a large corpus of minimally biased imagery. Using a large data set collected in this manner, comprising tens of thousands of images, we confirm the theoretical arguments that motivated our machine learning based, colour-aware histograms of oriented gradients method, which achieved a recall of 95% and a precision of 97%.
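The colour-aware histograms-of-oriented-gradients idea can be sketched by concatenating per-channel HOG descriptors with a coarse colour histogram and training a linear classifier. This assumes pre-cropped positive and negative patches and is far simpler than the full pipeline described above:

```python
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def colour_aware_features(patch):
    """Per-channel HOG descriptors concatenated with a colour histogram."""
    hogs = [hog(patch[:, :, c], pixels_per_cell=(16, 16),
                cells_per_block=(2, 2)) for c in range(3)]
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(8, 8, 8),
                             range=((0, 1),) * 3)
    return np.concatenate(hogs + [hist.ravel() / hist.sum()])

rng = np.random.default_rng(0)
patches = rng.random((40, 64, 64, 3))  # stand-in RGB patches in [0, 1]
labels = rng.integers(0, 2, 40)        # 1 = tree, 0 = background
X = np.stack([colour_aware_features(p) for p in patches])
clf = LinearSVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```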
16
Cold beverage-induced vasovagal syncope in a healthy young adult man: a case report. J Med Case Rep 2020; 14:37. PMID: 32111239; PMCID: PMC7048024; DOI: 10.1186/s13256-020-2358-3
Abstract
Background Swallowing-induced syncope is rare and there are few case reports of it in the existing medical literature. Even rarer are instances involving young and healthy individuals with no pre-existing conditions or apparent risk factors. Hence the value of such case reports in understanding the phenomenon better, and potentially inferring patterns of practical interest, is significant; here we describe an unusual case of swallowing-induced syncope in a young, healthy, and active white man. Case presentation A healthy 32-year-old white man experienced a syncopal episode following the ingestion of a cold carbonated beverage on a hot day. He rapidly recovered consciousness and, save for mild lightheadedness, all ill effects disappeared within minutes. On examination no concerns were detected and he was discharged, with the cause ascribed to vagus nerve overactivation effected by esophageal stimulation. Conclusions The suddenness and unpredictability of swallowing-induced syncope make it a potentially dangerous condition, with risks both to the patient and, depending on the context, to others. However, it is poorly understood due to its infrequency. The present case report adds to the body of much needed evidence which should help facilitate an improved understanding of the phenomenon.
17
Deep Learning for Whole Slide Image Analysis: An Overview. Front Med (Lausanne) 2019; 6:264. PMID: 31824952; PMCID: PMC6882930; DOI: 10.3389/fmed.2019.00264
Abstract
The widespread adoption of whole slide imaging has increased the demand for effective and efficient gigapixel image analysis. Deep learning is at the forefront of computer vision, showcasing significant improvements over previous methodologies on visual understanding. However, whole slide images have billions of pixels and suffer from high morphological heterogeneity as well as from different types of artifacts. Collectively, these impede the conventional use of deep learning. For the clinical translation of deep learning solutions to become a reality, these challenges need to be addressed. In this paper, we review work on the interdisciplinary attempt of training deep neural networks using whole slide images, and highlight the different ideas underlying these methodologies.
18
A more principled use of the p-value? Not so fast: a critique of Colquhoun's argument. R Soc Open Sci 2019; 6:181519. PMID: 31218019; PMCID: PMC6549987; DOI: 10.1098/rsos.181519
Abstract
The usefulness of the statistic known as the p-value, as a means of quantifying the strength of evidence for the presence of an effect from empirical data, has long been questioned in the statistical community. In recent years, there has been a notable increase in the awareness of both fundamental and practical limitations of the statistic within the target research fields, and especially biomedicine. In this article, I analyse the recently published article (Colquhoun 2017 R. Soc. open sci. 4, 171085 (doi:10.1098/rsos.171085)) which, in summary, argues that with a better understanding, and thus more appropriate use, of the statistic, many of the aforementioned limitations can be addressed. In particular, I demonstrate that the (often implicit) premises of this counterargument are questionable, in some cases arguably inconsistent, and that the counterargument therefore provides little if any justification for the continued use of the p-value. Additionally, my analysis should help researchers seeking to interpret their empirical data, by illustrating the nuanced nature and the multiplicity of statistical, methodological and epistemological issues which must be considered in this process.
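The quantity at the heart of the exchange, the probability that a 'significant' result is a false positive, follows from Bayes' rule given the prior probability of a real effect, the test's power, and the significance level. A small sketch with illustrative values:

```python
def false_positive_risk(prior, power=0.8, alpha=0.05):
    """P(no real effect | p < alpha), by Bayes' rule."""
    true_pos = power * prior
    false_pos = alpha * (1 - prior)
    return false_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    print(f"prior = {prior}: false positive risk = {false_positive_risk(prior):.2f}")
```

The strong dependence on an unknowable prior is one of the premises that such calculations must assume, and it is premises of this kind that the article scrutinises.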
19
20
Diagnosis Prediction from Electronic Health Records Using the Binary Diagnosis History Vector Representation. J Comput Biol 2017; 24:767-786. DOI: 10.1089/cmb.2017.0023
21
Data Mining Approach to Estimate the Duration of Drug Therapy from Longitudinal Electronic Medical Records. Open Bioinformatics J 2017. DOI: 10.2174/1875036201709010001
Abstract
Background:
Electronic Medical Records (EMRs) from primary/ambulatory care systems present a new and promising source of information for conducting clinical and translational research.
Objectives:
To address the methodological and computational challenges of extracting reliable medication information from raw data, which is often complex, incomplete and erroneous. To assess whether the use of specific chaining fields of medication information may further improve data quality.
Methods:
Guided by a range of challenges associated with missing and internally inconsistent data, we introduce two methods for the robust extraction of patient-level medication data. The first method relies on chaining fields to estimate the duration of treatment ("chaining"), while the second disregards chaining fields and relies on the chronology of records ("continuous"). The Centricity EMR database was used to estimate treatment duration with both methods for two widely prescribed drug classes among type 2 diabetes patients: insulin and glucagon-like peptide-1 receptor agonists.
Results:
At the individual patient level, the "chaining" approach could identify treatment alterations longitudinally and produced more robust estimates of treatment duration for individual drugs, while the "continuous" method was unable to capture those dynamics. At the population level, both methods produced similar estimates of average treatment duration; however, notable differences were observed at the individual-patient level.
Conclusion:
The proposed algorithms explicitly identify and handle erroneous or missing longitudinal entries and estimate the duration of treatment with specific drug(s) of interest, which makes them a valuable tool for future EMR-based clinical and pharmaco-epidemiological studies. To improve the accuracy of real-world studies, implementing chaining fields of medication information is recommended.
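The contrast between the two methods can be illustrated on toy prescription records: the "chaining" method follows explicit links from one record to its continuation, while the "continuous" method merges records by chronology alone. All column names, the chaining field, and the 45-day tolerance are hypothetical; the Centricity schema is not reproduced:

```python
import pandas as pd

# Toy prescription records; `next_id` is a hypothetical chaining field
# linking each record to its continuation (NaN = end of chain).
rx = pd.DataFrame({
    "id": [1, 2, 3, 4],
    "start": pd.to_datetime(["2015-01-01", "2015-03-15",
                             "2015-06-01", "2015-07-01"]),
    "days": [30, 30, 30, 30],
    "next_id": [2, None, 4, None],
})

# "Chaining": total duration of each explicitly linked chain of records.
by_id = rx.set_index("id")
for _, row in rx[~rx["id"].isin(rx["next_id"].dropna())].iterrows():
    total, cur = 0, row
    while True:
        total += cur["days"]
        if pd.isna(cur["next_id"]):
            break
        cur = by_id.loc[int(cur["next_id"])]
    print(f"chain starting {row['start'].date()}: {total} days")

# "Continuous": merge consecutive records whose gap is under a tolerance.
gaps = rx.sort_values("start")["start"].diff().dt.days.fillna(0)
episodes = (gaps > 45).cumsum()  # 45-day tolerance, illustrative
print("continuous episodes (days):", rx.groupby(episodes)["days"].sum().tolist())
```

Here chaining recovers two 60-day episodes, whereas the chronology-only method splits the first episode across the 73-day gap, mirroring the divergence at the individual-patient level noted in the Results.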
22
Complex temporal topic evolution modelling using the Kullback-Leibler divergence and the Bhattacharyya distance. EURASIP J Bioinform Syst Biol 2016; 2016:16. PMID: 27746813; PMCID: PMC5042987; DOI: 10.1186/s13637-016-0050-0
Abstract
The rapidly expanding corpus of medical research literature presents major challenges in the understanding of previous work, the extraction of maximum information from collected data, and the identification of promising research directions. We present a case for the use of advanced machine learning techniques as an aid in this task and introduce a novel methodology that is shown to be capable of extracting meaningful information from large longitudinal corpora and of tracking complex temporal changes within them. Our framework is based on (i) the discretization of time into epochs, (ii) epoch-wise topic discovery using a hierarchical Dirichlet process-based model, and (iii) a temporal similarity graph which allows for the modelling of complex topic changes. More specifically, this is the first work that discusses and distinguishes between two groups of particularly challenging topic evolution phenomena: topic splitting and speciation, and topic convergence and merging, in addition to the more widely recognized emergence and disappearance, and gradual evolution. The proposed framework is evaluated on a public medical literature corpus.
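Both similarity measures that underpin the temporal graph are simple to compute between topics represented as distributions over a shared vocabulary; a minimal sketch with illustrative distributions:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) between discrete distributions; asymmetric."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def bhattacharyya_distance(p, q):
    """-ln of the Bhattacharyya coefficient; symmetric, 0 iff p == q."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    bc = np.sum(np.sqrt((p / p.sum()) * (q / q.sum())))
    return float(-np.log(bc))

# Two topics over a shared 5-word vocabulary, one epoch apart.
topic_t = [0.40, 0.30, 0.15, 0.10, 0.05]
topic_t1 = [0.35, 0.25, 0.20, 0.10, 0.10]
print("KL divergence: ", kl_divergence(topic_t, topic_t1))
print("Bhattacharyya: ", bhattacharyya_distance(topic_t, topic_t1))
```

The KL divergence is asymmetric, which suits directed comparisons from one epoch to the next, while the Bhattacharyya distance is symmetric.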
23
On the discovery of hospital admission patterns-a clarification. Bioinformatics 2016; 32:2078. PMID: 26819471; DOI: 10.1093/bioinformatics/btw049
24
Abstract
I describe a simple modification which can be applied to any citation count based index (e.g. Hirsch's h-index) quantifying a researcher's publication output. The key idea behind the proposed approach is that the merit for the citations of a paper should be distributed amongst its authors according to their relative contributions. In addition to producing inherently fairer metrics, I show that the proposed modification has the potential to normalize partially for the unfair effects of honorary authorship and thus discourage this practice.
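One possible concrete realisation of the idea: each paper's citation count is multiplied by the author's contribution share before the h-index is computed. The shares below are illustrative; the article discusses the general principle rather than any one split:

```python
def weighted_h_index(papers):
    """papers -- (citations, author_share) pairs; author_share in (0, 1]
    is the researcher's relative contribution to the paper."""
    credited = sorted((c * s for c, s in papers), reverse=True)
    return sum(1 for rank, c in enumerate(credited, start=1) if c >= rank)

# Three papers: sole-authored, an equal two-author split, and an
# honorary 10% share on a highly cited paper.
papers = [(40, 1.0), (30, 0.5), (100, 0.1)]
print(weighted_h_index(papers))  # credited counts 40, 15, 10 -> index 3
```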
25
Abstract
In the context of resistance training, the so-called "sticking point" is commonly understood as the position in a lift at which a disproportionately large increase in the difficulty of continuing the lift is experienced. If the lift is taken to the point of momentary muscular failure, the sticking point is usually where the failure occurs. Hence the sticking point is associated with an increased chance of exercise form deterioration or breakdown. Understanding the mechanisms that lead to the occurrence of sticking points, as well as the different training strategies that can be used to overcome them, is important to strength practitioners (trainees and coaches alike) and instrumental for the avoidance of injury and continued progress. In this article we survey and consolidate the body of existing research on the topic: we discuss the different definitions of the sticking point adopted in the literature and propose a more precise definition, describe the different muscular and biomechanical aspects that give rise to sticking points, and review the effectiveness of the different training modalities used to address them.
26
Discovering hospital admission patterns using models learnt from electronic hospital records. Bioinformatics 2015; 31:3970-3976. DOI: 10.1093/bioinformatics/btv508
27
Clinical Trial Adaptation by Matching Evidence in Complementary Patient Sub-groups of Auxiliary Blinding Questionnaire Responses. PLoS One 2015; 10:e0131524. PMID: 26161797; PMCID: PMC4498692; DOI: 10.1371/journal.pone.0131524
Abstract
Clinical trial adaptation refers to any adjustment of the trial protocol after the onset of the trial. Such adjustment may take on various forms, including a change in the dose of administered medicines, the frequency of administering an intervention, the number of trial participants, or the duration of the trial, to name just some possibilities. The main goal is to make the process of introducing new medical interventions to patients more efficient, either by reducing the cost or the time associated with evaluating their safety and efficacy. The principal challenge, which is an outstanding research problem, is to be found in the question of how adaptation should be performed so as to minimize the chance of distorting the outcome of the trial. In this paper we propose a novel method for achieving this. Unlike most of the previously published work, our approach focuses on trial adaptation by sample size adjustment, i.e. by reducing the number of trial participants in a statistically informed manner. We adopt a stratification framework recently proposed for the analysis of trial outcomes in the presence of imperfect blinding, based on the administration of a generic auxiliary questionnaire that allows the participants to express their belief concerning the assigned intervention (treatment or control). We show that these data, together with the primary measured variables, can be used to make the probabilistically optimal choice of the particular sub-group a participant should be removed from if trial size reduction is desired. Extensive experiments on a series of simulated trials are used to illustrate the effectiveness of our method.
28
29
Detection of dynamic background due to swaying movements from motion features. IEEE Trans Image Process 2015; 24:332-344. PMID: 25494505; DOI: 10.1109/tip.2014.2378034
Abstract
Dynamically changing background (dynamic background) still presents a great challenge to many motion-based video surveillance systems. In the context of event detection, it is a major source of false alarms. There is a strong need from the security industry either to detect and suppress these false alarms, or to dampen the effects of background changes, so as to increase the sensitivity to meaningful events of interest. In this paper, we restrict our focus to one of the most common causes of dynamic background changes: (1) swaying tree branches and (2) their shadows under windy conditions. Considering the ultimate goal in a video analytics pipeline, we formulate a new dynamic background detection problem as a signal processing alternative to the previously described but unreliable computer vision-based approaches. Within this new framework, we directly reduce the number of false alarms by testing whether the detected events are due to characteristic background motions. In addition, we introduce a new data set suitable for the evaluation of dynamic background detection. It consists of real-world events detected by a commercial surveillance system from two static surveillance cameras. The research question we address is whether dynamic background can be detected reliably and efficiently using simple motion features and in the presence of similar but meaningful events, such as loitering. Inspired by tree aerodynamics theory, we propose a novel method named local variation persistence (LVP), which captures the key characteristics of swaying motions. The method is posed as a convex optimization problem, whose variable is the local variation. We derive a computationally efficient algorithm for solving the optimization problem, the solution of which is then used to form a powerful detection statistic. On our newly collected data set, we demonstrate that the proposed LVP achieves excellent detection results and outperforms the best alternative adapted from the existing art in the dynamic background literature.
30
A risky business or a safe BET? A Fuzzy Set Event Tree for estimating hazard in biotelemetry studies. Anim Behav 2014. DOI: 10.1016/j.anbehav.2014.04.025
31
32
On self-propagating methodological flaws in performance normalization for strength and power sports. Sports Med 2013; 43:451-461. PMID: 23526417; DOI: 10.1007/s40279-013-0035-z
Abstract
Performance in strength and power sports is greatly affected by a variety of anthropometric factors. The goal of performance normalization is to factor out the effects of confounding factors and compute a canonical (normalized) performance measure from the observed absolute performance. Performance normalization is applied in the ranking of elite athletes, as well as in the early stages of youth talent selection. Consequently, it is crucial that the process is principled and fair. The corpus of previous work on this topic, which is significant, is uniform in the methodology adopted. Performance normalization is universally reduced to a regression task: the collected performance data are used to fit a regression function that is then used to scale future performances. The present article demonstrates that this approach is fundamentally flawed. It inherently creates a bias that unfairly penalizes athletes with certain allometric characteristics, and, by virtue of its adoption in the ranking and selection of elite athletes, propagates and strengthens this bias over time. The main flaws are shown to originate in the criteria for selecting the data used for regression, as well as in the manner in which the regression model is applied in normalization. This analysis brings to light the aforesaid methodological flaws and motivates further work on the development of principled methods, the foundations of which are also laid out in this work.
33
A new framework for interpreting the outcomes of imperfectly blinded controlled clinical trials. PLoS One 2012; 7:e48984. PMID: 23236350; PMCID: PMC3516527; DOI: 10.1371/journal.pone.0048984
Abstract
It is well known that the outcome of an intervention is affected both by the inherent effects of the intervention and by the patient's expectations. For this reason, in comparative clinical trials an effort is made to conceal the nature of the administered intervention from the participants in the trial, i.e. to blind the trial. Yet, in practice, perfect blinding is impossible to ensure or even to verify post hoc. The current clinical standard is to follow up the trial with an auxiliary questionnaire, which allows trial participants to express in closed form their belief concerning the intervention, i.e. trial group assignment (treatment or control). Auxiliary questionnaire responses are then used to compute the extent of blinding in the trial in the form of a blinding index. If the estimated extent of blinding exceeds a particular threshold, the trial is deemed sufficiently blinded; otherwise, the strength of evidence of the trial is brought into question. This may necessitate that the trial is repeated. In this paper we make several contributions. Firstly, we identify a series of problems with the aforesaid clinical practice and discuss them in the context of the most commonly used blinding indexes. Secondly, we formulate a novel approach for handling imperfectly blinded trials. We adopt a feedback questionnaire of the same form as that currently in use, but interpret the collected data using a novel statistical method, significantly different from that proposed in previous work. Unlike the previously proposed approaches, our method is void of any ad hoc free parameters and robust to small changes in the participants' feedback responses. Our method also does not discard any data and is not predicated on any strong assumptions used to interpret participants' feedback. The key idea behind the present method is that it is meaningful to compare only the corresponding treatment and control participant sub-groups, that is, sub-groups matched by their auxiliary responses. A series of experiments on simulated trials is used to demonstrate the effectiveness of the proposed approach and its superiority over those currently in use.
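The key idea, comparing treatment and control participants only within sub-groups matched by their auxiliary (belief) responses, can be sketched as a stratified comparison. The synthetic data, effect sizes, and pooling rule below are illustrative, not the paper's statistical method:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
group = rng.integers(0, 2, n)  # 1 = treatment, 0 = control
# Imperfect blinding: treated participants guess "treatment" more often.
belief = np.where(rng.random(n) < 0.3 + 0.3 * group, "treatment", "control")
outcome = 1.0 * group + 0.5 * (belief == "treatment") + rng.normal(0, 1, n)

# Compare treatment vs control only within matching belief strata,
# then pool the per-stratum differences weighted by stratum size.
diffs, weights = [], []
for b in np.unique(belief):
    sel = belief == b
    treated = outcome[sel & (group == 1)]
    control = outcome[sel & (group == 0)]
    if len(treated) and len(control):
        diffs.append(treated.mean() - control.mean())
        weights.append(sel.sum())

naive = outcome[group == 1].mean() - outcome[group == 0].mean()
matched = np.average(diffs, weights=weights)
print(f"naive effect: {naive:.2f}, belief-matched effect: {matched:.2f}")
```

In this synthetic example the naive difference absorbs part of the expectation effect, because treated participants guess their assignment more often, while the belief-matched estimate largely does not.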
34
Common variants of the resistance mechanism in the Smith machine: analysis of mechanical loading characteristics and application to strength-oriented and hypertrophy-oriented training. J Strength Cond Res 2012; 26:350-363. PMID: 22228113; DOI: 10.1519/jsc.0b013e318220e6d2
Abstract
The Smith machine is a pervasive weight-training apparatus, used extensively by a wide population of weight trainers, from novices to high-level athletes. The advantages of using a Smith machine over free-weight resistance are disputed, with conflicting findings reported in the literature. In this study, we are interested in the practical differences between 3 types of loading mechanisms found in modern Smith machines. In addition to the basic design comprising a constrained weighted barbell, alterations with a counterweight and with a viscous resistance component are examined. The approach taken is that of employing a recently proposed representation of the force characteristics that may be exhibited by a trainee, and a predictive model of the adaptation thus effected. A computer simulation is used to predict the effects of the 3 linear Smith machine designs in the framework of different exercise protocols. Our results demonstrate that each resistance component (vertically constrained load, counterweight, and viscous resistance) can be matched with a particular training context in which it should be preferred. Thus, a number of practical guidelines for weight-training practitioners are recommended. In summary, (a) at low intensities (55-75% of 1 repetition maximum [1RM]) used in strength-endurance training, a Smith machine containing a viscous resistance component was found to offer advantages over both the constrained-load-only and counterweighted designs; (b) at medium intensities (75-85% of 1RM) typically employed in hypertrophy-specific training, the counterweighted Smith machine design was found to offer the best choice in terms of high force development and total external work performed; finally, (c) at high training intensity (90-100% of 1RM), the optimal prescription was found to be more dependent on the specific athlete's weaknesses, highlighting the need for continual monitoring of the athlete's force production capabilities. To ensure that appropriate adjustments are made to the athlete's training regimen, the practitioner should consider the full set of findings of this article and the accompanying discussion.
35
Multiple-object Tracking in Cluttered and Crowded Public Spaces. Advances in Visual Computing 2010. DOI: 10.1007/978-3-642-17277-9_10
36
Face Recognition from Video Using the Generic Shape-Illumination Manifold. Computer Vision – ECCV 2006. DOI: 10.1007/11744085_3