1
Cai ZM, Li ZZ, Zhong NN, Cao LM, Xiao Y, Li JQ, Huo FY, Liu B, Xu C, Zhao Y, Rao L, Bu LL. Revolutionizing lymph node metastasis imaging: the role of drug delivery systems and future perspectives. J Nanobiotechnology 2024; 22:135. PMID: 38553735; PMCID: PMC10979629; DOI: 10.1186/s12951-024-02408-5.
Abstract
The deployment of imaging examinations has evolved into a robust approach for the diagnosis of lymph node metastasis (LNM). The advancement of technology, coupled with the introduction of innovative imaging drugs, has led to the incorporation of an increasingly diverse array of imaging techniques into clinical practice. Nonetheless, conventional methods of administering imaging agents persist in presenting certain drawbacks and side effects. The employment of controlled drug delivery systems (DDSs) as a conduit for transporting imaging agents offers a promising solution to ameliorate these limitations intrinsic to metastatic lymph node (LN) imaging, thereby augmenting diagnostic precision. Within the scope of this review, we elucidate the historical context of LN imaging and encapsulate the frequently employed DDSs in conjunction with a variety of imaging techniques, specifically for metastatic LN imaging. Moreover, we engage in a discourse on the conceptualization and practical application of fusing diagnosis and treatment by employing DDSs. Finally, we venture into prospective applications of DDSs in the realm of LNM imaging and share our perspective on the potential trajectory of DDS development.
Affiliation(s)
- Ze-Min Cai
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Zi-Zhan Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Nian-Nian Zhong
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Lei-Ming Cao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Yao Xiao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Jia-Qi Li
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Fang-Yi Huo
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Bing Liu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Department of Oral & Maxillofacial Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430079, Hubei, China
- Chun Xu
- School of Dentistry, The University of Queensland, Brisbane, QLD, 4066, Australia
- Yi Zhao
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China
- Department of Prosthodontics, School and Hospital of Stomatology, Wuhan University, Wuhan, China
- Lang Rao
- Institute of Biomedical Health Technology and Engineering, Shenzhen Bay Laboratory, Shenzhen, 518132, China.
- Lin-Lin Bu
- State Key Laboratory of Oral & Maxillofacial Reconstruction and Regeneration, Key Laboratory of Oral Biomedicine Ministry of Education, Hubei Key Laboratory of Stomatology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430072, China.
- Department of Oral & Maxillofacial Head Neck Oncology, School & Hospital of Stomatology, Wuhan University, Wuhan, 430079, Hubei, China.
2
Chu B, Chen Z, Shi H, Wu X, Wang H, Dong F, He Y. Fluorescence, ultrasonic and photoacoustic imaging for analysis and diagnosis of diseases. Chem Commun (Camb) 2023; 59:2399-2412. PMID: 36744435; DOI: 10.1039/d2cc06654h.
Abstract
Biomedical imaging technology, which allows us to peer deeply within living subjects and visually explore the delivery and distribution of agents in living systems, is creating tremendous opportunities for the early diagnosis and precise therapy of diseases. In this feature article, based on a review of the latest representative examples of progress together with our recent efforts in the bioimaging field, we introduce three typical non-invasive imaging technologies, i.e., fluorescence, ultrasonic and photoacoustic imaging, in which optical and/or acoustic signals are employed for analyzing various diseases. In particular, fluorescence imaging possesses a series of outstanding advantages, such as high temporal resolution as well as rapid and sensitive feedback. Hence, in the first section, we introduce the latest studies on developing novel fluorescence imaging methods for imaging bacterial infections, cancer and lymph node metastasis in a long-term and real-time manner. However, the limited imaging penetration depth caused by photon scattering and light attenuation in biological tissue restricts their widespread in vivo application. Taking advantage of the excellent penetration depth of acoustic signals, ultrasonic imaging has been widely applied in hospitals for determining the location, size and shape of organs, identifying normal and abnormal tissues, and delineating the edges of lesions. Thus, in the second section, we briefly summarize recent advances in ultrasonic imaging techniques for diagnosing diseases in deep tissues. Nevertheless, the absence of lesion targeting and the dependence on a professional operator may lead to false-positive diagnoses. By combining the merits of both optical and acoustic signals, newly developed photoacoustic imaging, which simultaneously features higher temporal and spatial resolution with good sensitivity as well as deeper penetration depth, is discussed in the third section. In the final part, we further discuss the major challenges and prospects for developing imaging technology for accurate disease diagnosis. We believe that these non-invasive imaging technologies will provide a new perspective for the precise diagnosis of various diseases in the future.
Affiliation(s)
- Binbin Chu
- Suzhou Key Laboratory of Nanotechnology and Biomedicine, Institute of Functional Nano and Soft Materials (FUNSOM), Soochow University, Suzhou, Jiangsu 215123, China.
- Zhiming Chen
- Department of Ultrasound, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China.
- Haoliang Shi
- Suzhou Key Laboratory of Nanotechnology and Biomedicine, Institute of Functional Nano and Soft Materials (FUNSOM), Soochow University, Suzhou, Jiangsu 215123, China.
- Xiaofeng Wu
- Department of Ultrasound, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China.
- Houyu Wang
- Suzhou Key Laboratory of Nanotechnology and Biomedicine, Institute of Functional Nano and Soft Materials (FUNSOM), Soochow University, Suzhou, Jiangsu 215123, China.
- Fenglin Dong
- Department of Ultrasound, The First Affiliated Hospital of Soochow University, Suzhou, Jiangsu 215006, China.
- Yao He
- Suzhou Key Laboratory of Nanotechnology and Biomedicine, Institute of Functional Nano and Soft Materials (FUNSOM), Soochow University, Suzhou, Jiangsu 215123, China.
3
Segmentation of breast lesion in DCE-MRI by multi-level thresholding using sine cosine algorithm with quasi opposition-based learning. Pattern Anal Appl 2022. DOI: 10.1007/s10044-022-01099-8.
4
Yaothak J, Simpson JC, Heffernan LF, Tsai YS, Lin CC. 2D-GolgiTrack - a semi-automated tracking system to quantify morphological changes and dynamics of the Golgi apparatus and Golgi-derived membrane tubules. Med Biol Eng Comput 2021; 60:151-169. PMID: 34783979; DOI: 10.1007/s11517-021-02460-5.
Abstract
The Golgi apparatus and membrane tubules derived from this organelle play essential roles in membrane trafficking in eukaryotic cells. High-resolution live cell imaging is one highly suitable method for studying the molecular mechanisms of organelle dynamics during membrane trafficking events. Due to the complex morphological changes and dynamic movements of the Golgi apparatus and associated membrane tubules during membrane trafficking, it is challenging to quantify them accurately. In this study, a semi-automated 2D tracking system, 2D-GolgiTrack, has been established for quantifying morphological changes and movements of Golgi elements, specifically encompassing the Golgi apparatus and its associated tubules, the fission and fusion of Golgi tubules, and the kinetics of formation of Golgi tubules and redistribution of the Golgi-associated protein Rab6A to the endoplasmic reticulum. The Golgi apparatus and associated tubules are segmented by a combination of Otsu's method and adaptive local normalization thresholding. Curvilinear skeletons and tips of skeletons of segmented tubules are used for calculating tubule length by the geodesic method. The k-nearest neighbor method is applied to search for possible candidate objects in the next frame and link the correct objects of adjacent frames by a tracking algorithm to calculate changes in morphological features of each Golgi object or tubule, e.g., number, length, shape, branch point and position, and fission or fusion events. Tracked objects are classified into morphological subtypes, and the Track-Map function of morphological evolution visualizes events of fission and fusion. Our 2D-GolgiTrack not only provides tracking results with 95% accuracy, but also maps morphological evolution for fast visual interpretation of fission and fusion events. Our tracking system is able to characterize key morphological and dynamic features of the Golgi apparatus and associated tubules, enabling biologists to gain a greater understanding of the molecular mechanisms of membrane traffic involving this essential organelle.
Graphical Abstract: Overview of the semi-automated 2D tracking system. There are two main parts to the system, namely detection and tracking. The workflow requires a raw sequence of images (a), which is filtered by the Gaussian filter method (c) and threshold intensity (b) to segment elements of Golgi cisternae (d) and tubules (e). Post-processing outputs are binary images of the cisternae area and tubule skeletons. The tubules are classified into three lengths, namely short, medium, and long tubules (f). Outputs of segmentation are calculated as morphological features (g). The tracking process starts by loading the segmented outputs (h) and key inputs of a direction reference (i; DR) and an interval setting of the start (S) and end (E) frame numbers (j). A tubule of interest is selected by the user (k; GTinterest,S) as the tubule input (GTIN) at the current frame (i = S). The tracking algorithm tracks and links the correct tubules at each subsequent frame (i = i + 1). The locations of tubule tips are determined for detecting tubule branches, using the DR to identify the direction of tubule growth (l: (1); GTtipBr,i; Golgi cisternae: white area; Golgi tubule: white skeleton; tubule tips: green dots; branched tubules: two branches due to the DR of growth of the simulated tubule moving from left to right away from the Golgi cisternae location). According to the position of the GTIN, five candidates (GTcandidates,i) are searched using the k-nearest neighbor method (l: (2)). Matching of tubules between the GTIN and the GTcandidates,i uses the bounding box technique to check the amount of tubule overlap based on the tracking conditions (l: (3)). If there is tubule overlap, the system collects that tubule as the final output (GTOUT,i). Otherwise, shape (see the Extent feature in Table 1) and distance features are used to generate the tracked output, giving priority to the minimum of both of these features (MinDIST,EXTENT); failing that, it is taken from the minimum of the distance (MinDIST). Once a loop of the interval track to the last frame is finished (i = E + 1), a Track-Map is generated allowing visualization of the morphological pattern of tubule formation and movement, including identification of fission and fusion events (m). Dynamic features are calculated (n). Related outputs are saved, and all features obtained from the detection and tracking processing are exported as MS Excel files (o).
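Editor's note: to make the segmentation and linking steps above concrete, the following minimal Python sketch illustrates the general idea (Otsu thresholding, skeleton-based length measurement, and k-nearest-neighbour candidate search with scikit-image/SciPy). It is an illustrative assumption, not the authors' released 2D-GolgiTrack code, and the function names (segment_golgi, tubule_lengths, link_objects) are hypothetical.

# Minimal sketch (not the authors' 2D-GolgiTrack code): Otsu segmentation,
# skeleton-length measurement and k-nearest-neighbour frame linking.
import numpy as np
from scipy.spatial import cKDTree
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops
from skimage.morphology import skeletonize

def segment_golgi(frame: np.ndarray) -> np.ndarray:
    """Denoise a fluorescence frame and return a binary Golgi mask."""
    smoothed = gaussian(frame, sigma=1.0)
    return smoothed > threshold_otsu(smoothed)

def tubule_lengths(mask: np.ndarray) -> list:
    """Approximate each object's tubule length by its skeleton pixel count."""
    skeleton = skeletonize(mask)
    return [region.area for region in regionprops(label(skeleton))]

def link_objects(prev_centroids: np.ndarray, curr_centroids: np.ndarray, k: int = 5):
    """For every object in the previous frame, return the k nearest candidates
    in the current frame (distances and indices), as in a kNN tracking step."""
    tree = cKDTree(curr_centroids)
    distances, indices = tree.query(prev_centroids, k=min(k, len(curr_centroids)))
    return distances, indices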
Affiliation(s)
- Jindaporn Yaothak
- Department of Biomedical Engineering, Chung Yuan Christian University, Taoyuan, Taiwan
- Jeremy C Simpson
- Cell Screening Laboratory, School of Biology and Environmental Science, Science Centre West, University College Dublin, Dublin 4, Ireland
- Linda F Heffernan
- Cell Screening Laboratory, School of Biology and Environmental Science, Science Centre West, University College Dublin, Dublin 4, Ireland
- Yuh-Show Tsai
- Department of Biomedical Engineering, Chung Yuan Christian University, Taoyuan, Taiwan.
- Chung-Chih Lin
- Department of Life Sciences and Institute of Genome Sciences, National Yang-Ming University, Taipei, Taiwan.
- Biophotonics Interdisciplinary Research Center, National Yang-Ming University, Taipei, Taiwan.
- Brain Research Center, National Yang-Ming University, Taipei, Taiwan.
5
Xing J, Li Z, Wang B, Qi Y, Yu B, Zanjani FG, Zheng A, Duits R, Tan T. Lesion Segmentation in Ultrasound Using Semi-Pixel-Wise Cycle Generative Adversarial Nets. IEEE/ACM Trans Comput Biol Bioinform 2021; 18:2555-2565. PMID: 32149651; DOI: 10.1109/tcbb.2020.2978470.
Abstract
Breast cancer is the most common invasive cancer, with the highest cancer occurrence in females. Handheld ultrasound is one of the most efficient ways to identify and diagnose breast cancer. The area and shape information of a lesion is very helpful for clinicians to make diagnostic decisions. In this study we propose a new deep-learning scheme, the semi-pixel-wise cycle generative adversarial net (SPCGAN), for segmenting lesions in 2D ultrasound. The method takes advantage of a fully convolutional neural network (FCN) and a generative adversarial net to segment a lesion by using prior knowledge. We compared the proposed method to a fully connected neural network and the level set segmentation method on a test dataset consisting of 32 malignant lesions and 109 benign lesions. Our proposed method achieved a Dice similarity coefficient (DSC) of 0.92, while the FCN and the level set method achieved 0.90 and 0.79, respectively. In particular, for malignant lesions, our method significantly increases the DSC of the fully connected neural network from 0.90 to 0.93 (p < 0.001). The results show that our SPCGAN can obtain robust segmentation results. The framework of SPCGAN is particularly effective compared to the FCN when sufficient training samples are not available. Our proposed method may be used to relieve the radiologists' burden of annotation.
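Editor's note: the Dice similarity coefficient (DSC) quoted above is a standard overlap metric; the short sketch below shows how it is typically computed for binary masks. It is illustrative only (the toy masks are invented), not the authors' evaluation code.

import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

# Example with toy 4x4 masks (hypothetical data, not from the study):
pred = np.array([[0, 1, 1, 0]] * 4)
truth = np.array([[0, 1, 0, 0]] * 4)
print(f"DSC = {dice_coefficient(pred, truth):.2f}")  # 2*4 / (8 + 4) = 0.67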
6
Meng Y, Dincer H, Yüksel S. Understanding the innovative developments with two-stage technology S-curve of nuclear energy projects. Prog Nucl Energy 2021. DOI: 10.1016/j.pnucene.2021.103924.
7
Breast DCE-MRI segmentation for lesion detection by multi-level thresholding using student psychological based optimization. Biomed Signal Process Control 2021. DOI: 10.1016/j.bspc.2021.102925.
8
Lei YM, Yin M, Yu MH, Yu J, Zeng SE, Lv WZ, Li J, Ye HR, Cui XW, Dietrich CF. Artificial Intelligence in Medical Imaging of the Breast. Front Oncol 2021; 11:600557. PMID: 34367938; PMCID: PMC8339920; DOI: 10.3389/fonc.2021.600557.
Abstract
Artificial intelligence (AI) has invaded our daily lives, and in the last decade, there have been very promising applications of AI in the field of medicine, including medical imaging, in vitro diagnosis, intelligent rehabilitation, and prognosis. Breast cancer is one of the common malignant tumors in women and seriously threatens women's physical and mental health. Early screening for breast cancer via mammography, ultrasound and magnetic resonance imaging (MRI) can significantly improve the prognosis of patients. AI has shown excellent performance in image recognition tasks and has been widely studied in breast cancer screening. This paper introduces the background of AI and its application in breast medical imaging (mammography, ultrasound and MRI), such as in the identification, segmentation and classification of lesions; breast density assessment; and breast cancer risk assessment. In addition, we also discuss the challenges and future perspectives of the application of AI in medical imaging of the breast.
Affiliation(s)
- Yu-Meng Lei
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Miao Yin
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Mei-Hui Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Jing Yu
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Shu-E Zeng
- Department of Medical Ultrasound, Hubei Cancer Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wen-Zhi Lv
- Department of Artificial Intelligence, Julei Technology, Wuhan, China
- Jun Li
- Department of Medical Ultrasound, The First Affiliated Hospital of Medical College, Shihezi University, Xinjiang, China
- Hua-Rong Ye
- Department of Medical Ultrasound, China Resources & Wisco General Hospital, Academic Teaching Hospital of Wuhan University of Science and Technology, Wuhan, China
- Xin-Wu Cui
- Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Christoph F. Dietrich
- Department Allgemeine Innere Medizin (DAIM), Kliniken Beau Site, Salem und Permanence, Bern, Switzerland
9
Shen YT, Chen L, Yue WW, Xu HX. Artificial intelligence in ultrasound. Eur J Radiol 2021; 139:109717. PMID: 33962110; DOI: 10.1016/j.ejrad.2021.109717.
Abstract
Ultrasound (US), a flexible green imaging modality, is expanding globally as a first-line imaging technique in various clinical fields, following the continual emergence of advanced ultrasonic technologies and the well-established US-based digital health system. In US practice, qualified physicians must manually collect and visually evaluate images for the detection, identification and monitoring of diseases. The diagnostic performance is inevitably reduced by the intrinsically high operator dependence of US. In contrast, artificial intelligence (AI) excels at automatically recognizing complex patterns and providing quantitative assessment of imaging data, showing high potential to assist physicians in acquiring more accurate and reproducible results. In this article, we first provide a general understanding of AI, machine learning (ML) and deep learning (DL) technologies. We then review the rapidly growing applications of AI, especially DL technology, in the field of US, organized by the following anatomical regions: thyroid, breast, abdomen and pelvis, obstetrics, heart and blood vessels, musculoskeletal system and other organs, covering image quality control, anatomy localization, object detection, lesion segmentation, and computer-aided diagnosis and prognosis evaluation. Finally, we offer our perspective on the challenges and opportunities for the clinical practice of biomedical AI systems in US.
Affiliation(s)
- Yu-Ting Shen
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China
- Liang Chen
- Department of Gastroenterology, Shanghai Tenth People's Hospital, Tongji University School of Medicine, Shanghai, 200072, PR China
- Wen-Wen Yue
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
- Hui-Xiong Xu
- Department of Medical Ultrasound, Shanghai Tenth People's Hospital, Ultrasound Research and Education Institute, Tongji University School of Medicine, Tongji University Cancer Center, Shanghai Engineering Research Center of Ultrasound Diagnosis and Treatment, National Clinical Research Center of Interventional Medicine, Shanghai, 200072, PR China.
10
de Cesare I, Zamora-Chimal CG, Postiglione L, Khazim M, Pedone E, Shannon B, Fiore G, Perrino G, Napolitano S, di Bernardo D, Savery NJ, Grierson C, di Bernardo M, Marucci L. ChipSeg: An Automatic Tool to Segment Bacterial and Mammalian Cells Cultured in Microfluidic Devices. ACS Omega 2021; 6:2473-2476. PMID: 33553865; PMCID: PMC7859942; DOI: 10.1021/acsomega.0c03906.
Abstract
Extracting quantitative measurements from time-lapse images is necessary in external feedback control applications, where segmentation results are used to inform control algorithms. We describe ChipSeg, a computational tool that segments bacterial and mammalian cells cultured in microfluidic devices and imaged by time-lapse microscopy, which can be used also in the context of external feedback control. The method is based on thresholding and uses the same core functions for both cell types. It allows us to segment individual cells in high cell density microfluidic devices, to quantify fluorescent protein expression over a time-lapse experiment, and to track individual mammalian cells. ChipSeg enables robust segmentation in external feedback control experiments and can be easily customized for other experimental settings and research aims.
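Editor's note: since ChipSeg is built around thresholding, the sketch below illustrates the general approach (Otsu threshold on a smoothed frame, then per-cell mean fluorescence via scikit-image region properties). It is a hypothetical minimal example, not the published ChipSeg implementation, and the function name quantify_cells is invented.

# Illustrative thresholding-based segmentation and per-cell fluorescence
# quantification (a sketch of the general approach, not the ChipSeg code).
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops

def quantify_cells(fluo: np.ndarray, min_area: int = 20) -> list:
    """Segment bright cells in a fluorescence frame by global thresholding and
    return the mean fluorescence of each segmented cell above a size cut-off."""
    smoothed = gaussian(fluo, sigma=1.0)
    mask = smoothed > threshold_otsu(smoothed)
    labels = label(mask)
    return [
        region.mean_intensity
        for region in regionprops(labels, intensity_image=fluo)
        if region.area >= min_area
    ]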
Affiliation(s)
- Irene de Cesare
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- Criseida G. Zamora-Chimal
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- Lorena Postiglione
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- Mahmoud Khazim
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- School of Cellular and Molecular Medicine, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
- Elisa Pedone
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- School of Cellular and Molecular Medicine, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
- Barbara Shannon
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- School of Biochemistry, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
- Gianfranco Fiore
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- Giansimone Perrino
- Telethon Institute of Genetics and Medicine, Via Campi Flegrei 34, 80078 Pozzuoli, Italy
- Sara Napolitano
- Telethon Institute of Genetics and Medicine, Via Campi Flegrei 34, 80078 Pozzuoli, Italy
- Diego di Bernardo
- Telethon Institute of Genetics and Medicine, Via Campi Flegrei 34, 80078 Pozzuoli, Italy
- Department of Chemical, Materials and Industrial Production Engineering, University of Naples Federico II, 80125 Naples, Italy
- Nigel J. Savery
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- School of Biochemistry, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
- Claire Grierson
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- School of Biological Sciences, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- Mario di Bernardo
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- Department of EE and ICT, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy
- Lucia Marucci
- Department of Engineering Mathematics, University of Bristol, Woodland Road, Bristol BS8 1UB, U.K.
- BrisSynBio, Life Sciences Building, University of Bristol, Tyndall Avenue, Bristol BS8 1TQ, U.K.
- School of Cellular and Molecular Medicine, University of Bristol, University Walk, Bristol BS8 1TD, U.K.
11
Generation of a local lung respiratory motion model using a weighted sparse algorithm and motion prior-based registration. Comput Biol Med 2020; 123:103913. PMID: 32768049; DOI: 10.1016/j.compbiomed.2020.103913.
Abstract
Respiration-induced tumor location uncertainty is a challenge in lung percutaneous interventions, especially for the respiratory motion estimation of the tumor and surrounding vessel structures. In this work, a local motion modeling method is proposed based on whole-chest computed tomography (CT) and CT-fluoroscopy (CTF) scans. A weighted sparse statistical modeling (WSSM) method that can accurately capture location errors for each landmark point is proposed for lung motion prediction. By varying the sparse weight coefficients of the WSSM method, newly input motion information is approximately represented by a sparse linear combination of the respiratory motion repository and employed as prior knowledge for the subsequent registration process. We have also proposed an adaptive motion prior-based registration method to improve the motion prediction accuracy of the motion model in the region of interest (ROI). This registration method adopts a B-spline scheme to interactively weight the relative influence of the prior knowledge, model surface and image intensity information by locally controlling the deformation in the CTF image region. The proposed method has been evaluated on 15 image pairs between the end-expiratory (EE) and end-inspiratory (EI) phases and 31 four-dimensional CT (4DCT) datasets. The results reveal that the proposed WSSM method achieved better motion prediction performance than other existing lung statistical motion modeling methods, and the motion prior-based registration method can generate more accurate local motion information in the ROI.
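Editor's note: the core idea of representing newly observed motion as a sparse linear combination of a motion repository can be sketched with a generic LASSO solver, as below. This is a structural illustration under simplifying assumptions (an ordinary l1 penalty rather than the authors' weighted sparse formulation and landmark-wise error model), not the WSSM implementation; the function name sparse_motion_prior is hypothetical.

# Schematic sketch of the sparse-representation idea behind the motion prior:
# a new motion observation is approximated as a sparse linear combination of
# motion fields stored in a repository. Generic LASSO, not the authors' WSSM.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_motion_prior(repository: np.ndarray, observation: np.ndarray,
                        alpha: float = 0.01) -> np.ndarray:
    """repository: (n_points, n_subjects) matrix of stacked motion fields;
    observation: (n_points,) observed motion; returns the reconstructed motion
    prior from the sparse combination of repository columns."""
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(repository, observation)
    weights = lasso.coef_            # sparse combination weights over subjects
    return repository @ weights      # reconstructed motion prior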
12
A statistical weighted sparse-based local lung motion modelling approach for model-driven lung biopsy. Int J Comput Assist Radiol Surg 2020; 15:1279-1290. PMID: 32347465; DOI: 10.1007/s11548-020-02154-7.
Abstract
PURPOSE Lung biopsy is currently the most effective procedure for cancer diagnosis. However, respiration-induced location uncertainty presents a challenge in precise lung biopsy. To reduce the medical image requirements for motion modelling, in this study, local lung motion information in the region of interest (ROI) is extracted from whole chest computed tomography (CT) and CT-fluoroscopy scans to predict the motion of potentially cancerous tissue and important vessels during the model-driven lung biopsy process. METHODS The motion prior of the ROI was generated via a sparse linear combination of a subset of motion information from a respiratory motion repository, and a weighted sparse-based statistical model was used to preserve the local respiratory motion details. We also employed a motion prior-based registration method to improve the motion estimation accuracy in the ROI and designed adaptive variable coefficients to interactively weigh the relative influence of the prior knowledge and image intensity information during the registration process. RESULTS The proposed method was applied to ten test subjects for the estimation of the respiratory motion field. The quantitative analysis resulted in a mean target registration error of 1.5 (0.8) mm and an average symmetric surface distance of 1.4 (0.6) mm. CONCLUSIONS The proposed method shows remarkable advantages over traditional methods in preserving local motion details and reducing the estimation error in the ROI. These results also provide a benchmark for lung respiratory motion modelling in the literature.
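Editor's note: the two reported error metrics, target registration error (TRE) and average symmetric surface distance (ASSD), can be computed as in the following generic sketch; it reflects the standard definitions rather than the authors' evaluation pipeline, and the function names are illustrative.

# Illustrative computation of target registration error (TRE) between
# corresponding landmarks and average symmetric surface distance (ASSD)
# between two surface point sets. A generic sketch, not the authors' code.
import numpy as np
from scipy.spatial import cKDTree

def target_registration_error(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Mean Euclidean distance between corresponding landmark positions (mm)."""
    return float(np.linalg.norm(predicted - reference, axis=1).mean())

def average_symmetric_surface_distance(surf_a: np.ndarray, surf_b: np.ndarray) -> float:
    """Average of nearest-neighbour distances from A to B and from B to A (mm)."""
    d_ab, _ = cKDTree(surf_b).query(surf_a)
    d_ba, _ = cKDTree(surf_a).query(surf_b)
    return float((d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba)))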
13
Automatic Identification of Breast Ultrasound Image Based on Supervised Block-Based Region Segmentation Algorithm and Features Combination Migration Deep Learning Model. IEEE J Biomed Health Inform 2020; 24:984-993. DOI: 10.1109/jbhi.2019.2960821.
14
Gorbenko I, Mikołajczyk K, Jasionowska M, Narloch J, Kałużyński K. Automatic segmentation of facial soft tissue in MRI data based on non-rigid normalization in application to soft tissue thickness measurement. Biomed Signal Process Control 2020. DOI: 10.1016/j.bspc.2019.101698.
15
Qi G, Guan W, He Z, Huang A. Adaptive kernel fuzzy C-Means clustering algorithm based on cluster structure. J Intell Fuzzy Syst 2019. DOI: 10.3233/jifs-182750.
Affiliation(s)
- Geqi Qi
- Key Laboratory of Transport Industry of Big Data Application Technologies for Comprehensive Transport, Ministry of Transport, Beijing Jiaotong University, Beijing, P. R. China
- Wei Guan
- Key Laboratory of Transport Industry of Big Data Application Technologies for Comprehensive Transport, Ministry of Transport, Beijing Jiaotong University, Beijing, P. R. China
- Zhengbing He
- College of Metropolitan Transportation, Beijing University of Technology, Beijing, China
- Ailing Huang
- Key Laboratory of Transport Industry of Big Data Application Technologies for Comprehensive Transport, Ministry of Transport, Beijing Jiaotong University, Beijing, P. R. China
16
Keatmanee C, Chaumrattanakul U, Kotani K, Makhanov SS. Initialization of active contours for segmentation of breast cancer via fusion of ultrasound, Doppler, and elasticity images. Ultrasonics 2019; 94:438-453. PMID: 29477236; DOI: 10.1016/j.ultras.2017.12.008.
Abstract
Active contours (snakes) are an efficient method for segmentation of ultrasound (US) images of breast cancer. However, the method produces inaccurate results if the seeds are initialized improperly (far from the true boundaries and close to the false boundaries). Therefore, we propose a novel initialization method based on the fusion of a conventional US image with elasticity and Doppler images. The proposed fusion method (FM) has been tested against four state-of-the-art initialization methods on 90 ultrasound images from a database collected by the Thammasat University Hospital of Thailand. The ground truth was hand-drawn by three leading radiologists of the hospital. The reference methods are: center of divergence (CoD), force field segmentation (FFS), Poisson Inverse Gradient Vector Flow (PIG), and quasi-automated initialization (QAI). A variety of numerical tests proves the advantages of the FM. For the raw US images, the percentage of correctly initialized contours is: FM-94.2%, CoD-0%, FFS-0%, PIG-26.7%, QAI-42.2%. The percentage of correctly segmented tumors is FM-84.4%, CoD-0%, FFS-0%, PIG-16.67%, QAI-22.44%. For reduced field of view US images, the percentage of correctly initialized contours is: FM-94.2%, CoD-0%, FFS-0%, PIG-65.6%, QAI-67.8%. The correctly segmented tumors are FM-88.9%, CoD-0%, FFS-0%, PIG-48.9%, QAI-44.5%. The accuracy, in terms of the average Hausdorff distance, is respectively 2.29 pixels, 33.81, 34.71, 7.7, and 8.4, whereas in terms of the Jaccard index, it is 0.9, 0.18, 0.19, 0.63, and 0.48.
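Editor's note: for reference, the two accuracy measures quoted above can be computed roughly as in the sketch below: a symmetric average nearest-point contour distance (one common "average Hausdorff" variant; the paper does not specify its exact definition) and the Jaccard index. Illustrative only, not the authors' code.

import numpy as np
from scipy.spatial import cKDTree

def average_contour_distance(contour_a: np.ndarray, contour_b: np.ndarray) -> float:
    """Symmetric mean of nearest-point distances between two contours (N x 2 arrays of pixel coordinates)."""
    d_ab, _ = cKDTree(contour_b).query(contour_a)
    d_ba, _ = cKDTree(contour_a).query(contour_b)
    return 0.5 * (d_ab.mean() + d_ba.mean())

def jaccard_index(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0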
Affiliation(s)
- Chadaporn Keatmanee
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand; Japan Advanced Institute of Science and Technology, Ishikawa, Japan
- Kazunori Kotani
- Japan Advanced Institute of Science and Technology, Ishikawa, Japan
- Stanislav S Makhanov
- Sirindhorn International Institute of Technology, Thammasat University, Pathum Thani, Thailand.
17
Wu GG, Zhou LQ, Xu JW, Wang JY, Wei Q, Deng YB, Cui XW, Dietrich CF. Artificial intelligence in breast ultrasound. World J Radiol 2019; 11:19-26. PMID: 30858931; PMCID: PMC6403465; DOI: 10.4329/wjr.v11.i2.19.
Abstract
Artificial intelligence (AI) is gaining extensive attention for its excellent performance in image-recognition tasks and is increasingly being applied in breast ultrasound. AI can conduct a quantitative assessment by recognizing imaging information automatically and can make more accurate and reproducible imaging diagnoses. Breast cancer is the most commonly diagnosed cancer in women and severely threatens women's health, and its early screening is closely related to the prognosis of patients. Therefore, the utilization of AI in breast cancer screening and detection is of great significance; it can not only save time for radiologists, but also compensate for the lack of experience and skill in some beginners. This article illustrates the basic technical knowledge regarding AI in breast ultrasound, including early machine learning algorithms and deep learning algorithms, and their application in the differential diagnosis of benign and malignant masses. Finally, we discuss the future perspectives of AI in breast ultrasound.
Affiliation(s)
- Ge-Ge Wu
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Li-Qiang Zhou
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Jian-Wei Xu
- Department of Ultrasound, The First Affiliated Hospital of Zhengzhou University, Zhengzhou 450052, Henan Province, China
- Jia-Yu Wang
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Qi Wei
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- You-Bin Deng
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Xin-Wu Cui
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Christoph F Dietrich
- Sino-German Tongji-Caritas Research Center of Ultrasound in Medicine, Department of Medical Ultrasound, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, Hubei Province, China
- Medical Clinic 2, Caritas-Krankenhaus Bad Mergentheim, Academic Teaching Hospital of the University of Würzburg, Würzburg 97980, Germany
18
Lin Z, Xu H, Zhang D. Automated Muscle Segmentation from Dynamic Computed Tomographic Angiography Images for Diagnosis of Peripheral Arterial Occlusive Disease. IEEE Access 2019; 7:146506-146511. DOI: 10.1109/access.2019.2944935.
19
Fuzzy entropy based on differential evolution for breast gland segmentation. Australas Phys Eng Sci Med 2018; 41:1101-1114. PMID: 30203178; DOI: 10.1007/s13246-018-0672-5.
Abstract
For the diagnosis and treatment of breast tumors, the automatic detection of glands is a crucial step. Accurate segmentation of the gland is directly related to the effectiveness of the patient's treatment. Therefore, it is necessary to propose an automatic segmentation algorithm based on mammary gland features. A segmentation method based on differential evolution (DE) and fuzzy entropy is proposed in this paper. First, the evaluation function for image segmentation is constructed from the image fuzzy entropy. The method then adopts DE, with the image fuzzy entropy parameters treated as the initial population of individuals. After the three evolutionary processes of mutation, crossover and selection search for the parameters that maximize the fuzzy entropy, the optimal thresholds for segmenting the gland are obtained. Finally, the mammary gland is segmented by thresholding at the maximum fuzzy entropy. Eight breast images with four tissue types were each tested 100 times, with accuracy (Acc), sensitivity (Sen), specificity (Spe), positive predictive value (PPV), negative predictive value (NPV), and average structural similarity (Mssim) used to measure the segmentation results. The Acc of the proposed algorithm for the eight images is 98.46 ± 8.02E-03%, 95.93 ± 2.38E-02%, 93.88 ± 6.59E-02%, 94.73 ± 1.82E-01%, 96.19 ± 1.15E-02%, 97.51 ± 1.36E-02%, 96.64 ± 6.35E-02%, and 94.76 ± 6.21E-02%, respectively. The mean Mssim values over the 100 tests were 0.985, 0.933, 0.924, 0.907, 0.984, 0.928, 0.938, and 0.941, respectively. Our proposed algorithm is more effective and robust than other fuzzy entropy methods based on swarm intelligence optimization algorithms. The experimental results show that the proposed algorithm has higher accuracy in the segmentation of mammary glands and may serve as a gold standard in the analysis of treatment of breast tumors.
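Editor's note: the structure of such an evolutionary threshold search can be sketched with SciPy's differential_evolution. For brevity, the objective below is Kapur's entropy rather than the paper's fuzzy-entropy criterion, so it should be read as an illustration of the DE-plus-entropy-maximization scheme, not as the authors' method; the function names are invented.

# Sketch of evolutionary multi-level threshold search: differential evolution
# maximizes an entropy criterion over candidate thresholds (Kapur's entropy
# here, standing in for the paper's fuzzy-entropy objective).
import numpy as np
from scipy.optimize import differential_evolution

def kapur_entropy(hist: np.ndarray, thresholds: np.ndarray) -> float:
    """Sum of class entropies for the classes induced by the sorted thresholds."""
    p = hist / hist.sum()
    edges = [0, *sorted(int(t) for t in thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()
        if w > 0:
            q = p[lo:hi][p[lo:hi] > 0] / w
            total += -(q * np.log(q)).sum()
    return total

def de_thresholds(image: np.ndarray, n_thresholds: int = 2) -> np.ndarray:
    """Search for n_thresholds gray levels that maximize the entropy criterion."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    result = differential_evolution(
        lambda t: -kapur_entropy(hist, t),      # DE minimizes, so negate
        bounds=[(1, 255)] * n_thresholds, seed=0)
    return np.sort(result.x.astype(int))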
20
Hann A, Bettac L, Haenle MM, Graeter T, Berger AW, Dreyhaupt J, Schmalstieg D, Zoller WG, Egger J. Algorithm guided outlining of 105 pancreatic cancer liver metastases in Ultrasound. Sci Rep 2017; 7:12779. PMID: 28986569; PMCID: PMC5630585; DOI: 10.1038/s41598-017-12925-z.
Abstract
Manual segmentation of hepatic metastases in ultrasound images acquired from patients suffering from pancreatic cancer is common practice. Semiautomatic measurements that promise assistance in this process are often assessed on a small number of lesions by examiners who already know the algorithm. In this work, we present the application of an algorithm for the segmentation of liver metastases due to pancreatic cancer using a set of 105 different images of metastases. The algorithm and the two examiners had never assessed the images before. The examiners first performed a manual segmentation and, after five weeks, a semiautomatic segmentation using the algorithm. They were satisfied with the semiautomatic segmentation results in up to 90% of the cases. Using the algorithm was significantly faster and resulted in a median Dice similarity score of over 80%. Estimation of the inter-operator variability using the intraclass correlation coefficient yielded a good value of 0.8. In conclusion, the algorithm facilitates fast and accurate segmentation of liver metastases, comparable to the current gold standard of manual segmentation.
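Editor's note: the inter-operator agreement reported above is an intraclass correlation coefficient (ICC). The paper does not state which ICC variant was used; the sketch below computes one common choice, the two-way random-effects single-rater form ICC(2,1) of Shrout and Fleiss, for a lesions-by-examiners matrix of measurements (illustrative only).

import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) for an (n targets x k raters) matrix of measurements."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)     # between targets
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)     # between raters
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                          # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)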
Affiliation(s)
- Alexander Hann
- Department of Internal Medicine I, Ulm University, Ulm, Germany; Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstraße 60, 70174, Stuttgart, Germany
- Lucas Bettac
- Department of Internal Medicine I, Ulm University, Ulm, Germany
- Mark M Haenle
- Department of Internal Medicine I, Ulm University, Ulm, Germany
- Tilmann Graeter
- Department of Diagnostic and Interventional Radiology, Ulm University, Ulm, Germany
- Jens Dreyhaupt
- Institute of Epidemiology & Medical Biometry, Ulm University, Ulm, Germany
- Dieter Schmalstieg
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria
- Wolfram G Zoller
- Department of Internal Medicine and Gastroenterology, Katharinenhospital, Kriegsbergstraße 60, 70174, Stuttgart, Germany
- Jan Egger
- Institute for Computer Graphics and Vision, Graz University of Technology, Inffeldgasse 16, 8010, Graz, Austria; BioTechMed, Krenngasse 37/1, 8010, Graz, Austria
21
Li J, Lin D, Wang YP. Segmentation of multicolor fluorescence in situ hybridization images using an improved fuzzy C-means clustering algorithm by incorporating both spatial and spectral information. J Med Imaging (Bellingham) 2017; 4:044001. PMID: 29021991; PMCID: PMC5633778; DOI: 10.1117/1.jmi.4.4.044001.
Abstract
Multicolor fluorescence in situ hybridization (M-FISH) is a multichannel imaging technique for rapid detection of chromosomal abnormalities. It is a critical and challenging step to segment chromosomes from M-FISH images toward better chromosome classification. Recently, several fuzzy C-means (FCM) clustering-based methods have been proposed for M-FISH image segmentation or classification, e.g., adaptive fuzzy C-means (AFCM) and improved AFCM (IAFCM), but most of these methods used only one channel imaging information with limited accuracy. To improve the segmentation for better accuracy and more robustness, we proposed an FCM clustering-based method, denoted by spatial- and spectral-FCM. Our method has the following advantages: (1) it is able to exploit information from neighboring pixels (spatial information) to reduce the noise and (2) it can incorporate pixel information across different channels simultaneously (spectral information) into the model. We evaluated the performance of our method by comparing with other FCM-based methods in terms of both accuracy and false-positive detection rate on synthetic, hybrid, and real images. The comparisons on 36 M-FISH images have shown that our proposed method results in higher segmentation accuracy ([Formula: see text]) and a lower false-positive ratio ([Formula: see text]) than conventional FCM (accuracy: [Formula: see text], and false-positive ratio: [Formula: see text]) and the IAFCM (accuracy: [Formula: see text] and false-positive ratio: [Formula: see text]) methods by incorporating both spatial and spectral information from M-FISH images.
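Editor's note: as background for the FCM-based methods compared above, the following sketch implements only the baseline fuzzy C-means updates on multichannel pixel vectors; the paper's spatial- and spectral-FCM adds neighbourhood and cross-channel terms on top of these updates, which are not reproduced here.

# Baseline fuzzy C-means (FCM) clustering sketch on multichannel pixel vectors.
import numpy as np

def fcm(pixels: np.ndarray, n_clusters: int, m: float = 2.0,
        n_iter: int = 100, seed: int = 0):
    """pixels: (n_pixels, n_channels) array; returns (memberships, centers)."""
    rng = np.random.default_rng(seed)
    u = rng.random((pixels.shape[0], n_clusters))
    u /= u.sum(axis=1, keepdims=True)                 # initial fuzzy memberships
    for _ in range(n_iter):
        um = u ** m
        centers = (um.T @ pixels) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        u = 1.0 / (dist ** (2.0 / (m - 1)))           # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers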
Affiliation(s)
- Jingyao Li
- Tulane University, Department of Biomedical Engineering, New Orleans, Louisiana, United States
- Dongdong Lin
- Tulane University, Department of Biomedical Engineering, New Orleans, Louisiana, United States
- Yu-Ping Wang
- Tulane University, Department of Biomedical Engineering, New Orleans, Louisiana, United States
- Tulane University, Department of Global Biostatistics and Data Sciences, New Orleans, Louisiana, United States