1. Alexandra Mészáros L, Madarász L, Kádár S, Ficzere M, Farkas A, Kristóf Nagy Z. Machine vision-based non-destructive dissolution prediction of meloxicam-containing tablets. Int J Pharm 2024; 655:124013. PMID: 38503398. DOI: 10.1016/j.ijpharm.2024.124013. (Received 12/22/2023; Revised 03/15/2024; Accepted 03/15/2024)
Abstract
Machine vision systems have emerged as tools for quality assessment of solid dosage forms in the pharmaceutical industry. They offer a versatile option for continuous manufacturing while supporting the frameworks of process analytical technology (PAT), quality-by-design, and real-time release testing. The aim of this work was to develop a digital UV/VIS imaging-based system for predicting the in vitro dissolution of meloxicam-containing tablets. Samples with different dissolution profiles were produced by varying critical process parameters, namely the compression force and the particle size and content of the API. These process parameters were predicted non-destructively by multivariate analysis of UV/VIS images of the tablets. The dissolution profiles themselves were also predicted solely from the image data using artificial neural networks; the prediction error (RMSE) of the dissolution profile points was less than 5%. The API content directly affected the maximum concentration observed at the end of the dissolution tests, and this parameter was predicted with a relative error of less than 10% by PLS models based on the color components of the UV and VIS images. In conclusion, this paper presents a modern, non-destructive PAT solution for real-time dissolution testing of tablets.
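The acceptance metric reported above, the root-mean-square error between predicted and measured dissolution profile points, can be sketched as follows. This is an illustration only; the profile values are hypothetical, not data from the paper, and merely show how an under-5% criterion would be checked:

```python
import math

def profile_rmse(predicted, measured):
    """Root-mean-square error between two dissolution profiles (% API dissolved)."""
    if len(predicted) != len(measured):
        raise ValueError("profiles must share the same sampling time points")
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# Hypothetical % dissolved at successive sampling times (not the paper's data)
measured  = [12.0, 35.0, 58.0, 74.0, 86.0, 93.0]
predicted = [10.5, 37.0, 55.5, 76.0, 84.0, 94.5]

rmse = profile_rmse(predicted, measured)
print(f"RMSE = {rmse:.2f}%")  # the paper reports RMSE below 5% for its models
```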
Affiliation(s)
- Lilla Alexandra Mészáros, Lajos Madarász, Szabina Kádár, Máté Ficzere, Attila Farkas, Zsombor Kristóf Nagy: Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111 Budapest, Műegyetem rakpart 3, Hungary
2. Péterfi O, Mészáros LA, Szabó-Szőcs B, Ficzere M, Sipos E, Farkas A, Galata DL, Nagy ZK. UV-VIS imaging-based investigation of API concentration fluctuation caused by the sticking behaviour of pharmaceutical powder blends. Int J Pharm 2024; 655:124010. PMID: 38493839. DOI: 10.1016/j.ijpharm.2024.124010. (Received 02/15/2024; Revised 03/14/2024; Accepted 03/14/2024)
Abstract
Surface powder sticking in pharmaceutical mixing vessels poses a risk to the uniformity and quality of drug formulations. This study explores methods for quantifying the amount of pharmaceutical powder adhering to metallic surfaces. Binary powder blends of amlodipine and microcrystalline cellulose (MCC) were used to investigate the effect of mixing order on adherence to the vessel wall. Elevated API concentrations were measured both on the wall and within the material dislodged from it, regardless of the mixing order of the components. UV imaging was used to determine the particle size and the distribution of the API on the metallic surface, and the results were compared with chemical maps obtained by Raman chemical imaging. The combination of UV and VIS imaging enabled rapid acquisition of chemical maps covering an area large enough to be representative of the analysed sample. UV imaging was also applied to tablet inspection to detect tablets failing the content uniformity criteria. The results identify powder adherence as a possible source of poor content uniformity, highlighting the need for 100% inspection of pharmaceutical products to ensure product quality and safety.
Affiliation(s)
- Orsolya Péterfi, Lilla Alexandra Mészáros, Bence Szabó-Szőcs, Máté Ficzere, Attila Farkas, Dorián László Galata, Zsombor Kristóf Nagy: Department of Organic Chemistry and Technology, Faculty of Chemical Technology and Biotechnology, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
- Emese Sipos: Department of Pharmaceutical Industry and Management, Faculty of Pharmacy, George Emil Palade University of Medicine, Pharmacy, Sciences and Technology of Targu Mures, Gheorghe Marinescu Street 38, 540142 Targu Mures, Romania
3. Li Q, Wen X, Liang S, Sun X, Ma H, Zhang Y, Tan Y, Hong H, Luo Y. Enhancing bighead carp cutting: Chilled storage insights and machine vision-based segmentation algorithm development. Food Chem 2024; 450:139280. PMID: 38631209. DOI: 10.1016/j.foodchem.2024.139280. (Received 12/17/2023; Revised 03/21/2024; Accepted 04/05/2024)
Abstract
Cutting processing is essential for enhancing the market demand and utilization of fish. Bighead carp were cut into four primary cuts: head, dorsal, belly, and tail, together accounting for 77.03% of the fish's total weight. The cuts were refrigerated at 4 °C for 10 days, during which the muscle from each cut was analyzed. Pseudomonas fragi proliferated most rapidly and was most abundant in eye muscle (EM), while Aeromonas sobria showed similar growth patterns in tail muscle (TM). Notably, EM exhibited the highest rate of fat oxidation, and TM experienced the most rapid protein degradation. Furthermore, to support cutting in mechanical processing, a machine vision-based algorithm was developed that uses color thresholds and morphological parameters to segment the image background and delineate the bighead carp regions. In conclusion, each cut of bighead carp showed a different storage quality, and the machine vision-based algorithm proved effective for processing bighead carp.
Affiliation(s)
- Qing Li, Xinyi Wen, Shijie Liang, Xiaoyue Sun, Yihan Zhang, Yuqing Tan, Hui Hong, Yongkang Luo: Beijing Laboratory for Food Quality and Safety, College of Food Science and Nutritional Engineering, China Agricultural University, Beijing 100083, China
- Huawei Ma: ASEAN Key Laboratory of Comprehensive Exploitation and Utilization of Aquatic Germplasm Resources, Guangxi Academy of Fishery Sciences, Nanning 530021, China
4. Heng Q, Yu S, Zhang Y. A new AI-based approach for automatic identification of tea leaf disease using deep neural network based on hybrid pooling. Heliyon 2024; 10:e26465. PMID: 38434404. PMCID: PMC10906319. DOI: 10.1016/j.heliyon.2024.e26465. Open access. (Received 06/14/2023; Revised 02/09/2024; Accepted 02/14/2024)
Abstract
Diseases in tea leaves can directly reduce both production efficiency and the quality of the resulting product. This diagnostic procedure can now be automated with artificial intelligence tools, and a number of approaches have been proposed to meet this need; current research efforts focus on improving diagnostic accuracy and expanding the variety of detectable tea leaf diseases. In this article, a new method is proposed for accurately diagnosing tea leaf diseases using artificial intelligence techniques. The input images are first preprocessed to remove redundant information. Then, a hybrid pooling-based Convolutional Neural Network (CNN) is employed to extract image features: the pooling layers of the CNN model are randomly assigned either max pooling or average pooling functions, a strategy that can enhance the efficiency of the CNN-based feature extraction model. After feature extraction, a weighted Random Forest (WRF) model is used to detect tea leaf diseases. In this classification model, each tree in the random forest is given a weight depending on how well it performs, and the weighted outputs of the decision trees identify the disease; the weights are assigned using the Cuckoo Search Optimization (CSO) method. The Tea Sickness Dataset (TSD) was used to evaluate the proposed method's effectiveness. The findings show an average accuracy of 92.47% in identifying seven different tea leaf diseases, with recall and precision of 92.35% and 92.26%, respectively, improvements over earlier techniques.
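The hybrid-pooling idea described above can be sketched in a few lines; this is an illustration of the concept, not the authors' code, and the feature map is made up. Each pooling layer simply gets either a max or an average aggregation at random:

```python
import random

def pool2x2(feature_map, mode):
    """2x2 pooling with stride 2; mode is 'max' or 'avg'."""
    out = []
    for i in range(0, len(feature_map) - 1, 2):
        row = []
        for j in range(0, len(feature_map[0]) - 1, 2):
            window = [feature_map[i][j], feature_map[i][j + 1],
                      feature_map[i + 1][j], feature_map[i + 1][j + 1]]
            row.append(max(window) if mode == "max" else sum(window) / 4.0)
        out.append(row)
    return out

def hybrid_pool(feature_map, rng):
    """Randomly assign the pooling function, as in the hybrid-pooling strategy."""
    return pool2x2(feature_map, rng.choice(["max", "avg"]))

fmap = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(hybrid_pool(fmap, random.Random(42)))
```

In a full CNN the same random assignment would be drawn once per pooling layer when the network is built.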
Affiliation(s)
- Qidong Heng: School of Public Administration, Beijing City University, Beijing 100083, China
- Sibo Yu: School of Information Science and Engineering, Beijing City University, Beijing 100083, China
- Yandong Zhang: Anxi College of Tea Science (Anxi Campus), Fujian Agriculture and Forestry University, Fuzhou 350000, Fujian, China
5. Murray J, Heng D, Lygate A, Porto L, Abade A, Manica S, Franco A. Applying artificial intelligence to determination of legal age of majority from radiographic data. Morphologie 2024; 108:100723. PMID: 37897941. DOI: 10.1016/j.morpho.2023.100723. (Received 07/16/2023; Accepted 08/24/2023)
Abstract
Forensic odontologists use biological patterns to estimate chronological age for the judicial system. The age of majority is a legally significant threshold with a limited set of reliable oral landmarks. Currently, experts rely on the questionable development of third molars to assess whether litigants can be prosecuted as legal adults. Identification of novel patterns may illuminate features more dependably indicative of chronological age which have, until now, remained unseen. Unfortunately, biased perceptions and limited cognitive capacity compromise the ability of researchers to notice new patterns. The present study demonstrates how artificial intelligence can break through these identification barriers and generate new estimation modalities. A convolutional neural network was trained with 4003 panoramic radiographs to sort subjects into 'under-18' and 'over-18' age categories. The resulting architecture identified legal adults with high predictive accuracy, equally balanced between precision, specificity and recall. Moving forward, AI-based methods could improve courtroom efficiency, serve as automated assessment methods and contribute to our understanding of biological ageing.
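The balance between precision, specificity and recall for a binary 'under-18' / 'over-18' classifier comes directly from the confusion matrix. A small sketch with made-up counts (not the study's data) shows how the three figures relate:

```python
def binary_metrics(tp, fp, tn, fn):
    """Precision, recall (sensitivity) and specificity for a binary classifier."""
    return {
        "precision":   tp / (tp + fp),   # of predicted adults, how many truly are
        "recall":      tp / (tp + fn),   # of true adults, how many were caught
        "specificity": tn / (tn + fp),   # of true minors, how many were cleared
    }

# Hypothetical counts, treating 'over-18' as the positive class
m = binary_metrics(tp=450, fp=50, tn=430, fn=70)
print({k: round(v, 3) for k, v in m.items()})
# → {'precision': 0.9, 'recall': 0.865, 'specificity': 0.896}
```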
Affiliation(s)
- J Murray, D Heng, A Lygate, S Manica: Department of Forensic Odontology, University of Dundee, Nethergate, Dundee DD1 4HN, UK
- L Porto: Department of Mechanical Engineering, University of Brasilia, Federal District 70910-900, Brazil
- A Abade: Departamento de Computação, Instituto Federal de Educação, Ciência e Tecnologia de Mato Grosso, Cuiabá, Mato Grosso, Brazil
- A Franco: Department of Forensic Odontology, University of Dundee, Nethergate, Dundee DD1 4HN, UK; Division of Forensic Dentistry, Faculdade São Leopoldo Mandic, Campinas, Brazil
6. Jiang Z, Zhang L, Li W. A machine vision method for the evaluation of ship-to-ship collision risk. Heliyon 2024; 10:e25105. PMID: 38317916. PMCID: PMC10838803. DOI: 10.1016/j.heliyon.2024.e25105. Open access. (Received 09/18/2023; Revised 01/12/2024; Accepted 01/20/2024)
Abstract
Advances in ship technology and information technology have been driving the continuous improvement of ship intelligence, and safety is an inevitable requirement in the shipping industry. A machine vision-based ship collision warning method is proposed to address the high monitoring-system cost and limited information acquisition in the safety design of autonomous ship navigation. The method combines machine learning with image algorithms. First, the backbone of the YOLOv7 detector is replaced by the EfficientFormerV2 network to achieve a lightweight model while preserving detection accuracy; the network is trained on a dataset combining the public SeaShips and Flow datasets with self-made ship pictures, and StrongSORT is used for target tracking. Second, a data fusion algorithm is introduced to determine the target point at the bow-bottom of the ship based on the time-varying attitude of the camera and the time-series features of the bounding boxes, and the ship's navigation trajectory is estimated using image algorithms. Finally, a collision evaluation model is established to calculate a collision risk index. Experimental results demonstrate that the improved YOLOv7 network maintains mAP@0.5 and recall similar to the original model while reducing the parameters by 31.2% and GFLOPs by 58.4%. The accuracy of target ship trajectory estimation is high, with MAE values below 1.5% and RMSE values below 2% in experiments. In ship collision warning experiments, the proposed method accurately identifies navigating vessels, estimates their trajectories, and provides timely warnings of imminent collisions. Compared with traditional ship collision warning methods, this paper offers a more intelligent and lightweight solution.
Affiliation(s)
- Zhiqiang Jiang, Lingyu Zhang, Weijia Li: School of Naval Architecture and Ocean Engineering, Huazhong University of Science and Technology, 1037 Luoyu Road, Hongshan District, Wuhan, Hubei 430074, China
7. Ficzere M, Péterfi O, Farkas A, Nagy ZK, Galata DL. Image-based simultaneous particle size distribution and concentration measurement of powder blend components with deep learning and machine vision. Eur J Pharm Sci 2023; 191:106611. PMID: 37844806. DOI: 10.1016/j.ejps.2023.106611. (Received 04/11/2023; Revised 08/21/2023; Accepted 10/14/2023)
Abstract
This work presents a system in which deep learning was applied to images captured with a digital camera to simultaneously determine the API concentration and the particle size distribution (PSD) of two components of a powder blend. The blend consisted of acetylsalicylic acid (ASA) and calcium hydrogen phosphate (CHP); the predicted API concentration was found to correspond with HPLC measurements, and the PSDs determined with the method corresponded with those measured by laser diffraction particle size analysis. This novel method provides fast and simple measurements and could be suitable for detecting segregation in the powder. By examining the powders discharged from a batch blender, the API concentrations at the top and bottom of the container could be measured, yielding information about the adequacy of the blending and improving the quality control of the manufacturing process.
Affiliation(s)
- Máté Ficzere, Orsolya Péterfi, Attila Farkas, Zsombor Kristóf Nagy, Dorián László Galata: Department of Organic Chemistry and Technology, Faculty of Chemical Technology and Biotechnology, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
8. Péterfi O, Madarász L, Ficzere M, Lestyán-Goda K, Záhonyi P, Erdei G, Sipos E, Nagy ZK, Galata DL. In-line particle size measurement during granule fluidization using convolutional neural network-aided process imaging. Eur J Pharm Sci 2023; 189:106563. PMID: 37582409. DOI: 10.1016/j.ejps.2023.106563. (Received 05/12/2023; Revised 07/24/2023; Accepted 08/12/2023)
Abstract
This paper presents a machine learning-based image analysis method for monitoring the particle size distribution of fluidized granules. The key components of the direct imaging system are a rigid fiber-optic endoscope, a light source and a high-speed camera, which allow real-time monitoring of the granules. The system was implemented in a custom-made 3D-printed device that could reproduce the particle movement characteristic of a fluidized-bed granulator. The suitability of the method was evaluated by determining the particle size distribution (PSD) of various granule mixtures within the 100-2000 μm size range. The convolutional neural network-based software successfully detected the granules that were in focus despite the dense particle flow. The volumetric PSDs were compared with off-line reference measurements obtained by dynamic image analysis and laser diffraction, and similar trends were observed across all three methods. The results demonstrate the feasibility of real-time particle size analysis using machine vision as an in-line process analytical technology (PAT) tool.
Affiliation(s)
- Orsolya Péterfi, Lajos Madarász, Máté Ficzere, Katalin Lestyán-Goda, Petra Záhonyi, Zsombor Kristóf Nagy, Dorián László Galata: Department of Organic Chemistry and Technology, Faculty of Chemical Technology and Biotechnology, Budapest University of Technology and Economics, Műegyetem rkp. 3., H-1111 Budapest, Hungary
- Gábor Erdei: Department of Atomic Physics, Faculty of Natural Sciences, Budapest University of Technology and Economics, Budafoki 8, H-1111 Budapest, Hungary
- Emese Sipos: Department of Pharmaceutical Industry and Management, Faculty of Pharmacy, George Emil Palade University of Medicine, Pharmacy, Sciences and Technology of Targu Mures, Gheorghe Marinescu Street 38, 540142 Targu Mures, Romania
9. Ang Z. Application of IoT technology based on neural networks in basketball training motion capture and injury prevention. Prev Med 2023; 175:107660. PMID: 37573953. DOI: 10.1016/j.ypmed.2023.107660. (Received 06/07/2023; Revised 08/08/2023; Accepted 08/10/2023)
Abstract
Basketball players must frequently perform various physical movements during a game, which burdens their bodies and can easily lead to sports injuries; preventing such injuries is therefore crucial in basketball teaching. This paper also studies basketball motion trajectory capture, which preserves the motion posture information of the target person in three-dimensional space. Because machine vision-based motion capture systems often encounter occlusion or self-occlusion in application scenes, human motion capture remains a challenging research problem. This article designs a multi-perspective human motion trajectory capture framework that uses a deep learning-based two-dimensional human pose estimation algorithm to estimate the position distribution of human joint points on the two-dimensional image from each perspective. By combining knowledge of the camera poses from the multiple perspectives, the joint points are lifted into three-dimensional space, yielding the final estimate of the target person's 3D pose. The article applies these neural network and IoT results to basketball motion capture methods, further developing basketball motion capture systems.
Affiliation(s)
- Zhao Ang: Hui Shang Vocational College, Hefei 230022, China
10. Jing Y, Li C, Du T, Jiang T, Sun H, Yang J, Shi L, Gao M, Grzegorzek M, Li X. A comprehensive survey of intestine histopathological image analysis using machine vision approaches. Comput Biol Med 2023; 165:107388. PMID: 37696178. DOI: 10.1016/j.compbiomed.2023.107388. (Received 06/08/2023; Revised 08/06/2023; Accepted 08/25/2023)
Abstract
Colorectal cancer (CRC) is currently one of the most common and deadly cancers: it is the third most common malignancy and the fourth leading cause of cancer death worldwide, and it ranks as the second most frequent cause of cancer-related death in the United States and other developed countries. Because histopathological images contain rich phenotypic information, they play an indispensable role in the diagnosis and treatment of CRC. To improve the objectivity and diagnostic efficiency of intestinal histopathology image analysis, computer-aided diagnosis (CAD) methods based on machine learning (ML) are widely applied. This investigation presents a comprehensive study of recent ML-based methods for intestinal histopathology image analysis. First, commonly used datasets from basic research studies are discussed, together with the medical background of intestinal histopathology. Second, traditional ML methods commonly used in intestinal histopathology are introduced, as well as deep learning (DL) methods. A comprehensive review then covers recent developments in ML methods for segmentation, classification, detection, and recognition, among other tasks, in histopathological images of the intestine. Finally, the existing methods are analysed and their application prospects in this field are given.
Affiliation(s)
- Yujie Jing, Chen Li, Tianming Du, Liyu Shi, Minghe Gao: Microscopic Image and Medical Image Analysis Group, College of Medicine and Biological Information Engineering, Northeastern University, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Tao Jiang: School of Intelligent Medicine, Chengdu University of Traditional Chinese Medicine, Chengdu, China; International Joint Institute of Robotics and Intelligent Systems, Chengdu University of Information Technology, Chengdu, China
- Hongzan Sun: Shengjing Hospital of China Medical University, Shenyang, China
- Jinzhu Yang: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China
- Marcin Grzegorzek: Institute for Medical Informatics, University of Luebeck, Luebeck, Germany; Department of Knowledge Engineering, University of Economics in Katowice, Katowice, Poland
- Xiaoyan Li: Cancer Hospital of China Medical University, Liaoning Cancer Hospital, Shenyang, China
11. Shen C, Ding M, Wu X, Cai G, Cai Y, Gai S, Wang B, Liu D. Identifying the quality characteristics of pork floss structure based on deep learning framework. Curr Res Food Sci 2023; 7:100587. PMID: 37727873. PMCID: PMC10506091. DOI: 10.1016/j.crfs.2023.100587. Open access. (Received 07/06/2023; Revised 09/04/2023; Accepted 09/05/2023)
Abstract
Pork floss is a traditional Chinese food with a long history and is known to consumers today as a leisure food. It is made from pork through a unique process in which the muscle fibers become flaky or granular and tangled. In this study, a deep learning-based approach is proposed to detect the quality characteristics of pork floss structure. The experiments were conducted using widely recognized brands of pork floss available on the grocery market. A total of 8000 images of eight commercially available pork flosses were collected and processed using sharpening, gray-scale conversion, real-time shading correction, and binarization. After the machine learning model had learned the features of the pork floss, the images were labeled using a manual mask. A deep learning framework coupling a residual enhancement mask with a region-based convolutional neural network (CRE-MRCNN) was used to segment the images. The results showed that CRE-MRCNN could identify the knot and pore features of different brands of pork floss to evaluate their quality. The combined results of the sensory tests and the machine vision models showed that the pork floss from TC was the best, followed by YJJ, DD and HQ. This demonstrates the potential of machine vision to help people recognize the quality characteristics of pork floss structure.
Affiliation(s)
- Che Shen
- College of Food Science and Technology, Bohai University, Jinzhou 121013, China
- Key Laboratory for Agricultural Products Processing of Anhui Province, School of Food Science and Engineering, Hefei University of Technology, Hefei 230009, China
- Meiqi Ding
- College of Food Science and Technology, Bohai University, Jinzhou 121013, China
- Xinnan Wu
- College of Food Science and Technology, Bohai University, Jinzhou 121013, China
- Guanhua Cai
- College of Food Science and Technology, Bohai University, Jinzhou 121013, China
- Yun Cai
- College of Food Science and Technology, Bohai University, Jinzhou 121013, China
- Shengmei Gai
- College of Food Science and Technology, Bohai University, Jinzhou 121013, China
- Bo Wang
- College of Food Science and Technology, Bohai University, Jinzhou 121013, China
- Key Laboratory of Meat Processing and Quality Control, MOE, Key Laboratory of Meat Processing, MARA, College of Food Science and Technology, Nanjing Agricultural University, Nanjing 210095, China
- Institute of Ocean Research, Bohai University, Jinzhou 121013, Liaoning, China
- Dengyong Liu
- College of Food Science and Technology, Bohai University, Jinzhou 121013, China
12
Chiang HY, Lin CH. The use of RGB-tracking of color changes during indigo-reduction processes based on LabVIEW machine vision. Anal Sci 2023; 39:1607-1612. PMID: 37223873. DOI: 10.1007/s44211-023-00353-1.
Abstract
The use of an RGB-tracking chart for monitoring the reduction of indigo (color changes) based on LabVIEW machine vision is demonstrated for the first time. In contrast to a normal analytical chromatographic chart, the X-axis remains a time scale, but the Y-axis shows the sum of RGB pixel values instead of signal intensity. The RGB-tracking chart was obtained from an investigation of the indigo-reduction process, in which a PC camera served as the detector while LabVIEW machine vision ran simultaneously. When sodium dithionite (Na2S2O4) and yeast were used, respectively, to drive the indigo-reduction processes, two types of reduction behavior were found, and the optimal timing for dyeing could be easily determined from the RGB-tracking charts. Furthermore, based on the changes in HSV (hue, saturation, value), the use of sodium dithionite yielded higher hue and saturation values when cloth and fabric were dyed, whereas the yeast solution required a longer time to reach the same values. After comparing several series of dyed fabrics, we found that the RGB-tracking chart is indeed a reliable novel tool for measuring the color changes that occur during the chemical reactions associated with this process.
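The chart construction described here is simple to reproduce outside LabVIEW; the following is a minimal sketch (not the authors' code) that computes one RGB-tracking data point per video frame by summing each channel, which plotted against frame time gives the chart's Y-axis.

```python
import numpy as np

def rgb_tracking_point(frame):
    """Summed R, G, B pixel values of one (H, W, 3) uint8 video frame.

    Plotting these sums against time gives the Y-axis of an
    RGB-tracking chart, in place of chromatographic signal intensity.
    """
    frame = frame.astype(np.int64)      # avoid uint8 overflow when summing
    return frame[..., 0].sum(), frame[..., 1].sum(), frame[..., 2].sum()

# A 4x4 frame that is purely blue (oxidized indigo is blue):
blue_frame = np.zeros((4, 4, 3), dtype=np.uint8)
blue_frame[..., 2] = 200
r, g, b = rgb_tracking_point(blue_frame)
```

As the vat shifts from blue (oxidized indigo) toward the yellow-green leuco form, the blue-channel sum falls while the red and green sums rise, producing the reduction-process curves.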
Affiliation(s)
- Hui-Yu Chiang
- Department of Chemistry, National Taiwan Normal University, 88 Sec. 4, Tingchow Rd., Taipei, 10677, Taiwan
- Cheng-Huang Lin
- Department of Fine Arts, National Taiwan University, No.1, Shida Rd., Da'an Dist., Taipei, 10645, Taiwan.
- Department of Chemistry, National Taiwan Normal University, 88 Sec. 4, Tingchow Rd., Taipei, 10677, Taiwan.
13
Jin S, Yang Z, Królczyk G, Liu X, Gardoni P, Li Z. Garbage detection and classification using a new deep learning-based machine vision system as a tool for sustainable waste recycling. Waste Manag 2023; 162:123-130. PMID: 36989995. DOI: 10.1016/j.wasman.2023.02.014.
Abstract
Waste recycling is a critical issue for environmental pollution management, and garbage classification determines the recycling efficiency. In order to reduce labor costs and increase garbage classification capacity, a machine vision system is established based on deep learning and transfer learning. In this new method, an improved MobileNetV2 deep learning model is proposed for garbage detection and classification, where an attention mechanism is introduced into the first and last convolution layers of the MobileNetV2 model to improve recognition accuracy, and transfer learning uses a set of pre-trained weight parameters to extend the model's generalization ability. In addition, principal component analysis (PCA) is employed to reduce the dimension of the last fully connected layer to enable real-time operation of the developed model on an edge device. The experimental results demonstrate that the proposed method achieves 90.7% garbage classification accuracy on the "Huawei Cloud" datasets, the average inference time is 600 ms on a Raspberry Pi 4B, and the compressed model volume is 30.1% of the basic MobileNetV2 model. Furthermore, a garbage sorting prototype was designed and manufactured to evaluate the performance of the proposed MobileNetV2 model on real-world garbage identification, yielding an average garbage classification accuracy of 89.26%. Hence, the developed garbage sorting prototype can be used as an effective tool for sustainable waste recycling.
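The PCA step on the last fully connected layer can be illustrated generically; below is a minimal NumPy sketch of PCA via SVD applied to hypothetical feature vectors. The 1280-dimensional width matches MobileNetV2's final feature size, but the function, data, and target dimension here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pca_reduce(features, k):
    """Project feature vectors onto their top-k principal components.

    features: (n_samples, n_features) array, e.g. activations of a
    network's last fully connected layer; returns (n_samples, k).
    """
    centered = features - features.mean(axis=0)
    # Right singular vectors of the centered data are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(0)
feats = rng.normal(size=(32, 1280))   # hypothetical batch of feature vectors
reduced = pca_reduce(feats, 16)
```

Shrinking the classifier input this way cuts the parameter count of the final layer, which is what makes the model small enough for an edge device.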
Affiliation(s)
- Shoufeng Jin
- College of Mechanical and Electrical Engineering, Xi'an Polytechnic University, Xi'an 710600, China
- Zixuan Yang
- College of Mechanical and Electrical Engineering, Xi'an Polytechnic University, Xi'an 710600, China
- Grzegorz Królczyk
- Faculty of Mechanical Engineering, Opole University of Technology, Opole 45-758, Poland
- Xinying Liu
- College of Mechanical and Electrical Engineering, Xi'an Polytechnic University, Xi'an 710600, China
- Paolo Gardoni
- Department of Civil and Environmental Engineering, University of Illinois at Urbana-Champaign, Champaign, IL 61820, USA
- Zhixiong Li
- Donghai Laboratory, Zhoushan 316021, Zhejiang, China
- Yonsei Frontier Lab, Yonsei University, Seoul 03722, South Korea
14
Diltz ZR, Sheffer BJ. Intraoperative Navigation and Robotics in Pediatric Spinal Deformity. Orthop Clin North Am 2023; 54:201-207. PMID: 36894292. DOI: 10.1016/j.ocl.2022.11.005.
Abstract
Current technologies for image-guided navigation and robotic assistance in spinal surgery are improving rapidly, with several systems commercially available. Newer machine vision technology has several potential advantages: limited studies have shown outcomes similar to traditional navigation platforms, with decreased intraoperative radiation and registration time. However, there are currently no active robotic arms that can be coupled with machine vision navigation. Further research is needed to justify the cost and to address potentially increased operative time and workflow issues, but the use of navigation and robotics will only continue to expand given the growing body of evidence supporting their use.
Affiliation(s)
- Zachary R Diltz
- Department of Orthopedic Surgery, LeBonheur Children's Hospital, 848 Adams Avenue, Memphis, TN 38103, USA; Department of Orthopedic Surgery, Campbell Clinic, University of Tennessee Health Science Center, 1211 Union Avenue, Memphis, TN 38104, USA; Campbell Clinic Orthopedics, 1400 South Germantown Road, Germantown, TN 38138, USA
- Benjamin J Sheffer
- Department of Orthopedic Surgery, LeBonheur Children's Hospital, 848 Adams Avenue, Memphis, TN 38103, USA; Department of Orthopedic Surgery, Campbell Clinic, University of Tennessee Health Science Center, 1211 Union Avenue, Memphis, TN 38104, USA; Campbell Clinic Orthopedics, 1400 South Germantown Road, Germantown, TN 38138, USA.
15
Nguyen LH, Pham NT, Do VH, Nguyen LT, Nguyen TT, Nguyen H, Nguyen ND, Nguyen TT, Nguyen SD, Bhatti A, Lim CP. Fruit-CoV: An efficient vision-based framework for speedy detection and diagnosis of SARS-CoV-2 infections through recorded cough sounds. Expert Syst Appl 2023; 213:119212. PMID: 36407848. PMCID: PMC9639421. DOI: 10.1016/j.eswa.2022.119212.
Abstract
COVID-19 is an infectious disease caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). This deadly virus has spread worldwide, leading to a global pandemic since March 2020. A recent variant of SARS-CoV-2 named Delta is highly contagious and responsible for more than four million deaths globally. Therefore, developing an efficient self-testing service for SARS-CoV-2 at home is vital. In this study, a two-stage vision-based framework named Fruit-CoV is introduced for detecting SARS-CoV-2 infections through recorded cough sounds. Specifically, in the first stage, audio signals are converted into Log-Mel spectrograms and the EfficientNet-V2 network is used to extract their visual features. In the second stage, 14 convolutional layers extracted from the large-scale Pretrained Audio Neural Networks for audio pattern recognition (PANNs) and the Wavegram-Log-Mel-CNN are employed to aggregate feature representations of the Log-Mel spectrograms and the waveform. Finally, the combined features are used to train a binary classifier. A dataset provided by the AICovidVN 115M Challenge, comprising 7,371 recorded cough sounds collected across Vietnam, India, and Switzerland, is employed for evaluation. Experimental results indicate that the proposed model achieves an Area Under the Receiver Operating Characteristic Curve (AUC) score of 92.8% and ranks first on the final leaderboard of the AICovidVN 115M Challenge. Our code is publicly available.
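The first preprocessing step, converting audio to a Log-Mel spectrogram, can be sketched without the authors' code. Below is a self-contained NumPy version: framed STFT power passed through a triangular mel filterbank, then log-compressed. The frame size, hop, and mel-band count are illustrative defaults, not the parameters used by Fruit-CoV.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(audio, sr=16000, n_fft=512, hop=256, n_mels=64):
    """Log-Mel spectrogram: framed STFT power through a triangular mel filterbank."""
    n_frames = 1 + (len(audio) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = audio[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # (n_frames, n_fft//2 + 1)
    # Triangular filters spaced evenly on the mel scale from 0 Hz to sr/2.
    mel_pts = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fb[m - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    return np.log(power @ fb.T + 1e-10)                      # (n_frames, n_mels)

sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2.0 * np.pi * 440.0 * t)   # one second of a 440 Hz tone
logmel = log_mel_spectrogram(audio, sr=sr)
```

The resulting 2-D array is what gets treated as an image by the vision backbone in the first stage.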
Affiliation(s)
- Long H Nguyen
- Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Vietnam
- Nhat Truong Pham
- Division of Computational Mechatronics, Institute for Computational Science, Ton Duc Thang University, Ho Chi Minh City, Vietnam
- Faculty of Electrical and Electronics Engineering, Ton Duc Thang University, Ho Chi Minh City, Vietnam
- Liu Tai Nguyen
- Faculty of Information Technology, Ton Duc Thang University, Ho Chi Minh City, Vietnam
- Thanh Tin Nguyen
- Human Computer Interaction Lab, Sejong University, Seoul, South Korea
- Hai Nguyen
- Khoury College of Computer Sciences, Northeastern University, Boston, USA
- Ngoc Duy Nguyen
- Institute for Intelligent Systems Research and Innovation, Deakin University, Victoria, Australia
- Thanh Thi Nguyen
- School of Information Technology, Deakin University, Victoria, Australia
- Sy Dzung Nguyen
- Laboratory for Computational Mechatronics, Institute for Computational Science and Artificial Intelligence, Van Lang University, Ho Chi Minh City, Vietnam
- Faculty of Mechanical-Electrical and Computer Engineering, Van Lang University, Ho Chi Minh City, Vietnam
- Asim Bhatti
- Institute for Intelligent Systems Research and Innovation, Deakin University, Victoria, Australia
- Chee Peng Lim
- Institute for Intelligent Systems Research and Innovation, Deakin University, Victoria, Australia
16
Dierks C, Söldner R, Prühl K, Wagner N, Delmdahl N, Dominik A, Olszowy MW, Austerjost J. Towards an automated approach for smart sterility test examination. SLAS Technol 2022; 27:339-343. PMID: 36183997. DOI: 10.1016/j.slast.2022.09.005.
Abstract
As new technologies emerge, deep learning applications are often integral parts of new products, as features and often as differentiating benefits. This is especially notable in everyday commercial consumer products such as voice assistants and streaming content recommendation systems. Given the power and applicability of these deep learning technologies, significant efforts are being directed toward developing and integrating appropriate models into science and engineering applications, to supplant manual processes that may be highly prone to human error. Here we present an innovative, low-cost approach to advance the sterility assessment workflows that are required and regulated within drug release and manufacturing processes. The model system leverages off-the-shelf hardware and deep learning models to detect and classify different microbial contaminations in test containers. The paired hardware and software tools were evaluated in experiments using common model organisms (C. sporogenes, P. aeruginosa, S. aureus). With this approach we detected all three test organisms across 40 experiments; furthermore, we classified the organisms present with an average classification accuracy of over 87%.
17
Li H, Zeng N, Wu P, Clawson K. Cov-Net: A computer-aided diagnosis method for recognizing COVID-19 from chest X-ray images via machine vision. Expert Syst Appl 2022; 207:118029. PMID: 35812003. PMCID: PMC9252868. DOI: 10.1016/j.eswa.2022.118029.
Abstract
In the context of the global coronavirus disease 2019 (COVID-19) pandemic, which threatens the lives of all human beings, early detection of COVID-19 among symptomatic patients is of vital importance. In this paper, a computer-aided diagnosis (CAD) model, Cov-Net, is proposed for accurate recognition of COVID-19 from chest X-ray images via machine vision techniques, concentrating on powerful and robust feature learning. In particular, a modified residual network with asymmetric convolution and an embedded attention mechanism is selected as the backbone of the feature extractor, after which skip-connected dilated convolution with varying dilation rates is applied to achieve sufficient feature fusion between high-level semantic and low-level detailed information. Experimental results on two public COVID-19 radiography databases demonstrate the practicality of Cov-Net for accurate COVID-19 recognition, with accuracies of 0.9966 and 0.9901, respectively. Furthermore, under the same experimental conditions, Cov-Net outperforms six other state-of-the-art computer vision algorithms, which validates its superiority and competitiveness in building highly discriminative features. Hence, Cov-Net is deemed to have good generalization ability and to be applicable to other CAD scenarios. This work thus has both practical value, in providing a reliable reference to the radiologist, and theoretical significance, in developing methods to build robust features with strong representation ability.
Affiliation(s)
- Han Li
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Nianyin Zeng
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Peishu Wu
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian 361102, China
- Kathy Clawson
- School of Computer Science, University of Sunderland, Saint Peter Campus, United Kingdom
18
So JH, Joe SY, Hwang SH, Hong SJ, Lee SH. Current advances in detection of abnormal egg: a review. J Anim Sci Technol 2022; 64:813-829. PMID: 36287780. PMCID: PMC9574607. DOI: 10.5187/jast.2022.e56.
Abstract
Internal and external defects of eggs should be detected to prevent cross-contamination of intact eggs by abnormal eggs during storage. Emerging detection technologies for abnormal eggs have been introduced as an alternative to human inspection and can rapidly detect abnormal eggs. Abnormal egg detection technologies using acoustic response, machine vision, and spectroscopy have been commercialized in the poultry industry. Non-destructive egg quality assessment methods can meanwhile preserve the value of eggs and improve detection efficiency. To improve detection efficiency, it is essential to select a proper algorithm for classifying the types of abnormal eggs. This review covers the performance of detection technologies for various types of abnormal eggs in recently published resources, and surveys the discriminant methods and detection algorithms for abnormal eggs reported in the published literature. Although the majority of the studies were conducted on a laboratory scale, the developed detection technologies for internal and external defects in eggs were technically feasible and achieved excellent detection accuracy. To apply these technologies in the poultry industry, it is necessary to achieve the detection rates the industry requires.
Affiliation(s)
- Jun-Hwi So
- Department of Smart Agriculture Systems, Chungnam National University, Daejeon 34134, Korea
- Sung Yong Joe
- Department of Biosystems Machinery Engineering, Chungnam National University, Daejeon 34134, Korea
- Seon Ho Hwang
- Department of Smart Agriculture Systems, Chungnam National University, Daejeon 34134, Korea
- Soon Jung Hong
- Department of Liberal Arts, Korea National University of Agriculture and Fisheries, Jeonju 54874, Korea
- Seung Hyun Lee
- Department of Smart Agriculture Systems, Chungnam National University, Daejeon 34134, Korea
- Department of Biosystems Machinery Engineering, Chungnam National University, Daejeon 34134, Korea
- Corresponding author: Seung Hyun Lee, Department of Smart Agriculture Systems, Chungnam National University, Daejeon 34134, Korea. Tel: +82-42-821-6718, E-mail:
19
Chiu TL, Lin SZ, Ahmed T, Huang CY, Chen CH. Pilot study of a new type of machine vision-assisted stereotactic neurosurgery for EVD placement. Acta Neurochir (Wien) 2022; 164:2385-2393. PMID: 35788905. DOI: 10.1007/s00701-022-05287-7.
Abstract
BACKGROUND The use of machine vision technologies for image-based analysis and inspection is increasing. With the ability to process high-dimensional data instantly, the possibilities of machine vision multiply, and robots now use this technology to assist in surgery. OBJECTIVE The aim of this study is to explore the efficacy of the Surgical Navigation Robot NaoTrac (Brain Navi Biotechnology Co., Ltd.), which utilizes machine vision-inspired technology for patient registration and stereotactic external ventricular drainage (EVD) placement by a robotic arm. METHODS Preoperative and postoperative computed tomography (CT) scans were acquired for each case. The surgeons planned the targets and trajectories on the preoperative CT images; the postoperative CT images were used for the accuracy measurements. RESULTS In all 14 cases, cerebrospinal fluid (CSF) was drained through the catheter. NaoTrac placed the catheter into the frontal horn in one attempt in 13 cases and was able to drain CSF in 12 cases. No case had bleeding or intraoperative complications. The average patient registration time was 142.8 s. The mean target deviation was 1.68 mm and the mean angular deviation was 1.99°, both within the accepted tolerance for minimal tissue damage. CONCLUSION These results demonstrate that machine vision-inspired patient registration is feasible and fast. NaoTrac demonstrated its accuracy and safety in frameless catheter placement in these clinical cases. Other stereotactic neurosurgical operations, such as stereotactic biopsy, depth electrode placement, deep brain stimulation electrode positioning, and neuroendoscopy, may also benefit from the assistance of NaoTrac.
20
Li M, Wang B, Yang J, Cao J, You C, Sun Y, Wang J, Wu D. Multistage adaptive control strategy based on image contour data for autonomous endoscope navigation. Comput Biol Med 2022; 149:105946. PMID: 36030721. DOI: 10.1016/j.compbiomed.2022.105946.
Abstract
Physician burnout and poor ergonomics are hardly conducive to sustainable, high-quality colonoscopy. In order to reduce doctors' workload and improve patients' experiences during colonoscopy, this paper proposes a multistage adaptive control approach based on image contour data to guide the autonomous navigation of endoscopes. First, fast image preprocessing and contour extraction algorithms are designed. Second, different processing algorithms are developed according to the different contour information that can be clearly extracted, to compute the endoscope control parameters. Third, for frames where a clear contour cannot be extracted, a triple control method inspired by the turning of a novice car driver is devised to help the endoscope capture clear contours. The proposed multistage adaptive control approach is tested in an intestinal model over a variety of curved configurations and verified on actual colonoscopy images. The results show the strategy succeeds both in the straight sections of this intestinal model and in tightly curved sections with radii of curvature as small as 6 cm. In the experiments, the processing time for a single image is 20-25 ms, and the steering-judgment accuracy on intestinal model images is 96.7%. Additionally, the average velocity reaches 3.04 cm/s in straight sections and 2.49 cm/s in curved sections.
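The abstract does not give the control equations, but the idea of deriving a steering input from image data can be illustrated. The following is a hypothetical sketch, not the authors' algorithm: it steers toward the dark lumen region of a frame by thresholding and taking a centroid offset from the image center; the function name and threshold are invented for illustration.

```python
import numpy as np

def lumen_offset(gray, thresh=60):
    """(dx, dy) offset of the dark lumen centroid from the image centre.

    The colon lumen appears dark in an endoscopic frame; thresholding the
    grayscale image and taking the centroid of the dark pixels gives a
    crude steering target. Returns None if no pixel is dark enough,
    mirroring the paper's fallback when no clear contour is extracted.
    """
    ys, xs = np.nonzero(gray < thresh)
    if xs.size == 0:
        return None
    h, w = gray.shape
    return xs.mean() - w / 2.0, ys.mean() - h / 2.0

# Synthetic frame: bright tissue everywhere, dark lumen patch upper-right.
frame = np.full((100, 100), 200, dtype=np.uint8)
frame[10:20, 70:80] = 10
dx, dy = lumen_offset(frame)   # positive dx: steer right; negative dy: steer up
```

A real controller would map such offsets through the multistage logic the paper describes rather than using them raw.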
Affiliation(s)
- Mingqiang Li
- State Key Lab of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Boquan Wang
- State Key Lab of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Jianlin Yang
- State Key Lab of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Jia Cao
- State Key Lab of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Chenzhi You
- State Key Lab of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Yizhe Sun
- Department of Biomedical Engineering, University of Southern California, Los Angeles, CA, USA
- Jing Wang
- State Key Lab of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
- Dawei Wu
- State Key Lab of Mechanics and Control of Mechanical Structures, Nanjing University of Aeronautics and Astronautics, Nanjing, 210016, China
21
Liang X, Chen B, Wei C, Zhang X. Inter-row navigation line detection for cotton with broken rows. Plant Methods 2022; 18:90. PMID: 35780217. PMCID: PMC9250195. DOI: 10.1186/s13007-022-00913-y.
Abstract
BACKGROUND The application of autopilot technology is conducive to achieving path-planned navigation and freeing up labor. In addition, self-driving vehicles can drive according to the growth state of crops to ensure spraying accuracy and pesticide effectiveness. Navigation line detection is the core of self-driving technology and plays an increasingly important role in the development of Chinese intelligent agriculture. General algorithms for seedling line extraction in agricultural fields target large seedling crops, and current work focuses mainly on reducing the impact of crop row adhesion on crop row extraction. For seedling crops, however, especially double-row-sown seedling crops, navigation lines cannot be extracted effectively because of missing plants or interference from rut marks left by wheels pressing on seedlings. To solve these problems, this paper proposes an algorithm that combines edge detection and OTSU thresholding to determine the seedling column contours of the two narrow rows of cotton sown in wide-narrow row spacing. Least squares fitting is then used to fit the navigation line in the gap between the two narrow rows, which adapts well to missing seedlings and rut-mark interference. RESULTS The algorithm was developed using images of cotton at the seedling stage, and detection accuracy was also tested under different lighting conditions and on maize and soybean at the seedling stage. The navigation line detection accuracy was 99.2% for seedling cotton (average processing time 6.63 ms per frame), 98.1% for seedling maize (6.97 ms per frame), and 98.4% for seedling soybean (6.72 ms per frame). In addition, the standard deviation of the lateral deviation was 2 cm, and that of the heading deviation was 0.57°. CONCLUSION The proposed row detection algorithm achieves state-of-the-art performance and maintains normal spraying speed by adapting to different shadow interference and the randomness of crop row growth. It can serve as a reference for navigation line fitting for other growing crops in complex, shadow-disturbed environments.
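The least-squares fitting step described above is standard; here is a minimal sketch (illustrative, not the authors' code) that fits a navigation line through points sampled along the gap between two crop rows. The point data and function name are hypothetical.

```python
import numpy as np

def fit_navigation_line(points):
    """Least-squares line x = a*y + b through (x, y) pixel points.

    Fitting x as a function of y suits near-vertical navigation lines
    in the image, where a slope in x would be unstable the other way.
    """
    pts = np.asarray(points, dtype=float)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)
    return a, b

# Gap midpoints lying exactly on x = 0.5*y + 10:
ys = np.arange(0, 100, 10)
pts = np.stack([0.5 * ys + 10.0, ys], axis=1)
a, b = fit_navigation_line(pts)
```

In practice the points would be midpoints between the two narrow-row contours found by edge detection and OTSU thresholding, and the least-squares fit smooths over rows broken by missing seedlings.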
Affiliation(s)
- Xihuizi Liang
- Institute of Intelligent Manufacturing, Suzhou Chien-Shiung Institute of Technology, Suzhou, Jiangsu, China
- Bingqi Chen
- College of Engineering, China Agricultural University, Beijing, China
- Chaojie Wei
- College of Engineering, China Agricultural University, Beijing, China
- Xiongchu Zhang
- College of Engineering, China Agricultural University, Beijing, China
22
He W, Liu T, Han Y, Ming W, Du J, Liu Y, Yang Y, Wang L, Jiang Z, Wang Y, Yuan J, Cao C. A review: The detection of cancer cells in histopathology based on machine vision. Comput Biol Med 2022; 146:105636. PMID: 35751182. DOI: 10.1016/j.compbiomed.2022.105636.
Abstract
Machine vision is employed in defect detection, size measurement, pattern recognition, image fusion, target tracking, and 3D reconstruction. Traditional cancer detection is dominated by manual inspection, which wastes time and manpower and relies heavily on the pathologist's skill and experience. These manual approaches make it difficult to pass on domain knowledge and are ill-suited to the rapid development of medical care. Machine vision can iteratively update and learn the domain knowledge of cancer cell pathology detection to achieve automated, high-precision, and consistent detection. Consequently, this paper reviews the use of machine vision to detect cancer cells in histopathology images, together with the benefits and drawbacks of the various detection approaches. First, we review the application of image preprocessing and image segmentation in histopathology for the detection of cancer cells and compare the benefits and drawbacks of different algorithms. Second, research progress on shape, color, and texture features of histopathological cancer cell images, among other methods, is reviewed. Furthermore, for the classification of histopathological cancer cell images, the benefits and drawbacks of traditional machine vision approaches and deep learning methods are compared and analyzed. Finally, the reviewed research is discussed and future development trends are outlined as a guide for further work.
Affiliation(s)
- Wenbin He
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Ting Liu
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongjie Han
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Wuyi Ming
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
- Jinguang Du
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yinxia Liu
- Laboratory Medicine of Dongguan Kanghua Hospital, Dongguan, 523808, China
- Yuan Yang
- Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120, China
- Leijie Wang
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, 523808, China
- Zhiwen Jiang
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongqiang Wang
- Zhengzhou Coal Mining Machinery Group Co., Ltd, Zhengzhou, 450016, China
- Jie Yuan
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Chen Cao
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
Collapse
23. Ficzere M, Alexandra Mészáros L, Kállai-Szabó N, Kovács A, Antal I, Kristóf Nagy Z, László Galata D. Real-time coating thickness measurement and defect recognition of film coated tablets with machine vision and deep learning. Int J Pharm 2022:121957. PMID: 35760260; DOI: 10.1016/j.ijpharm.2022.121957.
Abstract
This paper presents a system in which images acquired with a digital camera are coupled with image analysis and deep learning to identify and categorize film coating defects and to measure the film coating thickness of tablets. Five classes of defective tablets were defined, and the YOLOv5 algorithm was used to recognize the defects with a classification accuracy of 98.2%. To characterize coating thickness, the diameter of the tablets was measured in pixels and used to estimate the thickness of the coating. The proposed system can easily be scaled up to match the production capability of continuous film coaters. With the developed technique, complete screening of the produced tablets can be achieved in real time, improving quality control.
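The diameter-based thickness estimate described above can be sketched as follows. The function name and calibration factor are assumptions, since the abstract does not give the exact conversion the authors use; the only physical point encoded is that coating grows the diameter on both sides, so radial thickness is half the diameter increase.

```python
def coating_thickness_mm(core_diameter_px, coated_diameter_px, mm_per_px):
    """Estimate film thickness from the apparent growth in tablet diameter.

    The coating adds material on both sides of the tablet, so the radial
    film thickness is half of the diameter increase. The calibration
    factor mm_per_px would come from imaging a reference of known size.
    """
    growth_px = coated_diameter_px - core_diameter_px
    return 0.5 * growth_px * mm_per_px
```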
24. Rosa da Silva N, Deklerck V, Baetens JM, Van den Bulcke J, De Ridder M, Rousseau M, Bruno OM, Beeckman H, Van Acker J, De Baets B, Verwaeren J. Improved wood species identification based on multi-view imagery of the three anatomical planes. Plant Methods 2022; 18:79. PMID: 35690828; PMCID: PMC9188236; DOI: 10.1186/s13007-022-00910-1.
Abstract
BACKGROUND The identification of tropical African wood species from microscopic imagery is challenging owing to the heterogeneous composition of wood combined with the vast number of candidate species. Image classification methods that rely on machine learning can facilitate this identification, provided that sufficient training material is available. Although all three main anatomical sections contain information relevant for species identification, current methods rely only on transverse sections. Moreover, commonly used evaluation procedures neglect the fact that multiple images often originate from the same tree, leading to overly optimistic performance estimates. RESULTS We introduce a new dataset containing microscopic images of the three main anatomical sections of 77 Congolese wood species. A dedicated multi-view image classification method is developed and obtains an accuracy (computed using the naive but common approach) of 95%, outperforming the single-view methods by a large margin. An in-depth analysis shows that naive estimates can overstate the accuracy dramatically, by up to 60%. CONCLUSIONS Additional images from non-transverse sections can boost the performance of machine-learning-based wood species identification methods, and care should be taken when evaluating such methods to avoid overestimating their performance.
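A simple way to avoid the tree-level leakage the authors warn about is to split by group (tree) rather than by image, so all images of one tree land entirely in train or entirely in test. A minimal sketch; the function name and splitting policy are illustrative, not the paper's exact protocol:

```python
import numpy as np

def group_train_test_split(groups, test_fraction=0.3, seed=0):
    """Split sample indices so that each group (e.g. all images of one
    tree) falls entirely into the train set or entirely into the test set.

    Evaluating on such a split avoids the over-optimistic accuracy that
    arises when images of the same tree appear in both sets.
    """
    rng = np.random.default_rng(seed)
    uniq = np.unique(groups)
    n_test = max(1, int(len(uniq) * test_fraction))
    test_groups = set(rng.choice(uniq, size=n_test, replace=False).tolist())
    test_idx = [i for i, g in enumerate(groups) if g in test_groups]
    train_idx = [i for i, g in enumerate(groups) if g not in test_groups]
    return train_idx, test_idx
```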
Affiliation(s)
- Núbia Rosa da Silva: KERMIT, Department of Data Analysis and Mathematical Modelling, Ghent University, Ghent, Belgium; Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil; Institute of Biotechnology, Federal University of Catalão, Catalão, Goiás, Brazil
- Jan M Baetens: KERMIT, Department of Data Analysis and Mathematical Modelling, Ghent University, Ghent, Belgium
- Jan Van den Bulcke: Laboratory of Wood Technology, Department of Environment, Ghent University, Ghent, Belgium
- Maaike De Ridder: Service of Wood Biology, Royal Museum for Central Africa, Tervuren, Belgium
- Mélissa Rousseau: Service of Wood Biology, Royal Museum for Central Africa, Tervuren, Belgium
- Odemir Martinez Bruno: Institute of Mathematics and Computer Science, University of São Paulo, São Carlos, Brazil; São Carlos Institute of Physics, University of São Paulo, São Carlos, Brazil
- Hans Beeckman: Service of Wood Biology, Royal Museum for Central Africa, Tervuren, Belgium
- Joris Van Acker: Laboratory of Wood Technology, Department of Environment, Ghent University, Ghent, Belgium
- Bernard De Baets: KERMIT, Department of Data Analysis and Mathematical Modelling, Ghent University, Ghent, Belgium
- Jan Verwaeren: KERMIT, Department of Data Analysis and Mathematical Modelling, Ghent University, Ghent, Belgium
25. Nansen C, Imtiaz MS, Mesgaran MB, Lee H. Experimental data manipulations to assess performance of hyperspectral classification models of crop seeds and other objects. Plant Methods 2022; 18:74. PMID: 35658997; PMCID: PMC9164469; DOI: 10.1186/s13007-022-00912-z.
Abstract
BACKGROUND Optical sensing solutions are being developed and adopted to classify a wide range of biological objects, including crop seeds. Performance assessment of optical classification models remains both a priority and a challenge. METHODS As training data, we acquired hyperspectral imaging data from 3646 individual tomato seeds (germination yes/no) from two tomato varieties. We performed three experimental data manipulations: (1) object assignment error: the effect of individual objects in the training data being assigned to the wrong class; (2) spectral repeatability: the effect of introducing known ranges (0-10%) of stochastic noise to individual reflectance values; (3) size of the training data set: the effect of reducing the number of observations in the training data. The effects of each manipulation were characterized and quantified based on classifications with two functions, linear discriminant analysis (LDA) and support vector machine (SVM). RESULTS For both classification functions, accuracy decreased linearly in response to the introduction of object assignment error and to the experimental reduction of spectral repeatability. We also demonstrated that an experimental reduction of the training data by 20% had a negligible effect on classification accuracy. The LDA and SVM classification algorithms were applied to independent validation seed samples: LDA-based classifications predicted seed germination with RMSE = 10.56 (variety 1) and 26.15 (variety 2), and SVM-based classifications with RMSE = 10.44 (variety 1) and 12.58 (variety 2). CONCLUSION We believe this is the first optical seed classification study to include both a thorough, manipulation-based performance evaluation of two separate classification functions and an application of the classification models to validation seed samples not included in the training data. The proposed experimental data manipulations are discussed in broader contexts, and they are suggested as methods for in-depth performance assessment of optical classification models.
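The three manipulations can each be sketched in a few lines. Function names are illustrative, and the details (binary 0/1 labels, uniform multiplicative noise) are assumptions consistent with the abstract, not the authors' exact procedures:

```python
import numpy as np

def flip_labels(y, fraction, seed=0):
    """Manipulation 1: assign a fraction of training objects to the wrong class
    (binary germination yes/no labels assumed)."""
    rng = np.random.default_rng(seed)
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

def add_spectral_noise(X, level, seed=0):
    """Manipulation 2: perturb each reflectance value by stochastic noise of a
    known range, e.g. level=0.05 means up to +/-5% per value."""
    rng = np.random.default_rng(seed)
    return X * (1.0 + rng.uniform(-level, level, size=X.shape))

def subsample(X, y, keep_fraction, seed=0):
    """Manipulation 3: reduce the number of observations in the training set."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(y), size=int(keep_fraction * len(y)), replace=False)
    return X[idx], y[idx]
```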
Affiliation(s)
- Christian Nansen: Department of Entomology and Nematology, University of California, Davis, USA; UC Davis Briggs Hall, Room 367, Davis, CA, 95616, USA
- Mohammad S Imtiaz: Department of Electrical & Computer Engineering, Bradley University, Peoria, USA
- Hyoseok Lee: Department of Entomology and Nematology, University of California, Davis, USA
26. Zheng C, Cao D, Hu C. A similarity-guided segmentation model for garbage detection under road scene. Front Optoelectron 2022; 15:22. PMID: 36637526; PMCID: PMC9756228; DOI: 10.1007/s12200-022-00004-9.
Abstract
The development of computer vision technology offers a possible path toward intelligent control of road sweepers to reduce the energy wasted in urban street cleaning. For garbage segmentation into seven categories under road scenes, we introduce an efficient deep-learning-based method. Our model follows a lightweight structure, with a feature pyramid attention (FPA) module employed in the decoder to enhance multi-level feature integration. In addition, a similarity guidance (SG) module is added to the decoder branches; it computes the cosine distance between learned prototypes and feature maps to guide the segmentation results from a metric-learning perspective. Our model has fewer than 3 M parameters and runs at over 65 FPS on an RTX 2070 GPU. Experimental results demonstrate that our method yields a competitive speed-accuracy trade-off, with overall mean intersection-over-union (mIoU) reaching 0.87 and 0.67, respectively, on the two garbage datasets we built. Moreover, by introducing the SG module, our model can perform acceptable category-balanced segmentation from fewer than 20 annotated images per category.
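The SG module's core operation, cosine similarity between learned class prototypes and decoder feature maps, can be sketched as follows. This is a minimal numpy stand-in for the idea, not the authors' implementation:

```python
import numpy as np

def similarity_guidance(features, prototypes):
    """Cosine similarity between per-pixel feature vectors and class prototypes.

    features   : (C, H, W) feature map from the decoder
    prototypes : (K, C) one learned prototype vector per garbage class
    Returns (K, H, W) similarity maps; argmax over K yields a segmentation.
    """
    C, H, W = features.shape
    f = features.reshape(C, -1)                                   # (C, H*W)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)     # unit pixels
    p = prototypes / (np.linalg.norm(prototypes, axis=1, keepdims=True) + 1e-8)
    sim = p @ f                                                   # (K, H*W)
    return sim.reshape(-1, H, W)
```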
Affiliation(s)
- Caiyun Zheng: School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, China
- Danhua Cao: School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, China
- Cheng Hu: School of Optical and Electronic Information, Huazhong University of Science and Technology, Wuhan, 430074, China
27. Liu Z, Zhang R, Yang C, Hu B, Luo X, Li Y, Dong C. Research on moisture content detection method during green tea processing based on machine vision and near-infrared spectroscopy technology. Spectrochim Acta A Mol Biomol Spectrosc 2022; 271:120921. PMID: 35091181; DOI: 10.1016/j.saa.2022.120921.
Abstract
Moisture content is an important indicator in green tea processing. In this study, taking Chuyeqi tea as the research object, a quantitative model for predicting the changes in moisture content during green tea processing was constructed based on machine vision and near-infrared spectroscopy. First, spectra and images were collected during spreading, fixation, first drying, carding, and second drying. The competitive adaptive reweighted sampling (CARS) method was then used to extract characteristic wavelengths from the spectra, which were combined with 9 color features and 6 texture features from the images to establish linear PLSR and nonlinear SVR prediction models by fusing the data from the two sensors. The results show that, compared to single-sensor data, the PLSR and SVR models based on low-level data fusion did not improve prediction accuracy and in fact produced poorer predictions. In contrast, the PLSR and SVR models established by middle-level data fusion improved the prediction accuracy of moisture content in green tea processing. Among them, the SVR model performed best: the correlation coefficient of the calibration set (Rc) and the root mean square error of calibration (RMSEC) were 0.9804 and 0.0425, respectively; the correlation coefficient of the prediction set (Rp) and the root mean square error of prediction (RMSEP) were 0.9777 and 0.0490, respectively; and the relative percent deviation was 4.5002. These results show that middle-level data fusion based on machine vision and near-infrared spectroscopy can effectively predict moisture content during green tea processing, which is of important guiding significance for overcoming the low prediction accuracy of a single sensor.
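Middle-level fusion, extracting features from each sensor first and then concatenating them into one regression matrix, can be sketched as below. The z-score step and parameter names are assumptions, and the CARS wavelength selection itself is not reproduced here:

```python
import numpy as np

def middle_level_fusion(spectra, images_color, images_texture, wl_idx):
    """Middle-level data fusion: per-sensor features are extracted first,
    then concatenated into one matrix for PLSR/SVR modelling.

    spectra        : (n, n_wavelengths) NIR spectra
    images_color   : (n, 9) color features per sample
    images_texture : (n, 6) texture features per sample
    wl_idx         : indices of the CARS-selected characteristic wavelengths
    """
    selected = spectra[:, wl_idx]                       # keep informative bands
    fused = np.hstack([selected, images_color, images_texture])
    # z-score each column so that no sensor dominates the regression
    return (fused - fused.mean(axis=0)) / (fused.std(axis=0) + 1e-12)
```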
Affiliation(s)
- Zhongyuan Liu: Tea Research Institute, The Chinese Academy of Agricultural Sciences, Hangzhou, Zhejiang 310008, China; College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832003, China
- Rentian Zhang: Tea Research Institute, The Chinese Academy of Agricultural Sciences, Hangzhou, Zhejiang 310008, China; College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832003, China
- Chongshan Yang: Tea Research Institute, The Chinese Academy of Agricultural Sciences, Hangzhou, Zhejiang 310008, China
- Bin Hu: College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832003, China
- Xin Luo: College of Mechanical and Electrical Engineering, Shihezi University, Shihezi 832003, China
- Yang Li: Tea Research Institute, The Chinese Academy of Agricultural Sciences, Hangzhou, Zhejiang 310008, China
- Chunwang Dong: Tea Research Institute, The Chinese Academy of Agricultural Sciences, Hangzhou, Zhejiang 310008, China
28. Hou X, Sivashanmugan K, Zhao Y, Zhang B, Wang AX. Multiplex Sensing of Complex Mixtures by Machine Vision Analysis of TLC-SERS Images. Sens Actuators B Chem 2022; 357:131355. PMID: 35221529; PMCID: PMC8880841; DOI: 10.1016/j.snb.2021.131355.
Abstract
Thin layer chromatography in tandem with surface-enhanced Raman scattering (TLC-SERS) has demonstrated tremendous potential as an analytical chemistry tool for detecting a wide range of substances in real-world samples. However, it still faces significant challenges in multiplex sensing of complex mixtures due to imperfect separation by TLC and the resulting interference in SERS detection. In this article, we propose a multiplex sensing method for complex mixtures based on machine vision analysis of scanned images of TLC-SERS results. Briefly, pure substances in solution and the complex mixture solution are separated by TLC, followed by one-dimensional SERS scanning of the entire TLC plate, which generates TLC-SERS images of all target substances along the chromatography path. A machine vision method is then employed to extract template images from the TLC-SERS images of the pure substance solutions. Finally, we apply a feature point matching strategy based on the winner-take-all principle, which matches the template image of each pure substance against the mixture image to confirm the presence and derive the position of each target substance on the TLC plate. Our experimental results on a mixture of five different substances show that the proposed machine vision analysis is highly selective and sensitive and does not require manual analysis of the SERS spectra. We therefore envision machine vision analysis of TLC-SERS imaging as an objective, accurate, and efficient method for multiplex sensing of trace levels of target substances in complex mixtures.
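The winner-take-all matching idea can be illustrated on a 1-D trace with normalized cross-correlation: slide the pure-substance template along the mixture trace and keep only the single highest-scoring offset. This is a simplified stand-in for the paper's 2-D feature point matching, not the authors' algorithm:

```python
import numpy as np

def best_match_position(mixture, template):
    """Slide a pure-substance template along the mixture's 1-D TLC-SERS trace
    and return (best_offset, best_score) by normalized cross-correlation.
    Winner-take-all: only the single highest-scoring offset is kept."""
    t = template - template.mean()
    tn = np.linalg.norm(t) + 1e-12
    best = (-1, -np.inf)
    for off in range(len(mixture) - len(template) + 1):
        w = mixture[off:off + len(template)]
        w = w - w.mean()
        score = float(w @ t) / ((np.linalg.norm(w) + 1e-12) * tn)
        if score > best[1]:
            best = (off, score)
    return best
```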
Affiliation(s)
- Xingwei Hou: School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR, 97331, USA; State Key Laboratory of Precision Measurement Technology and Instrument and School of Precision Instruments & Opto-Electronics Engineering, Tianjin University, Tianjin 300072, P.R. China
- Kundan Sivashanmugan: School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR, 97331, USA
- Yong Zhao: School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR, 97331, USA; School of Electrical Engineering, The Key Laboratory of Measurement Technology and Instrumentation of Hebei Province, Yanshan University, Qinhuangdao, Hebei 066004, P.R. China
- Boxin Zhang: School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR, 97331, USA
- Alan X. Wang: School of Electrical Engineering and Computer Science, Oregon State University, Corvallis, OR, 97331, USA
29. Malham GM, Munday NR. Comparison of novel machine vision spinal image guidance system with existing 3D fluoroscopy-based navigation system: a randomized prospective study. Spine J 2022; 22:561-569. PMID: 34666179; DOI: 10.1016/j.spinee.2021.10.002.
Abstract
BACKGROUND CONTEXT The use of spinal image guidance systems (IGS) has increased patient safety, accuracy, and operative efficiency, and has reduced revision rates in pedicle screw placement procedures. Traditional intraoperative 3D fluoroscopy or CT imaging produces potentially harmful ionizing radiation and increases operative time for patient registration. One IGS, FLASH Navigation, uses machine vision through high-resolution stereoscopic cameras and structured visible light to build a 3D topographical map of the patient's bony surface anatomy, enabling navigation without ionizing radiation. PURPOSE We aimed to compare the FLASH navigation system with a widely used 3D fluoroscopic navigation (3D) platform in terms of radiation exposure and pedicle screw accuracy. DESIGN A randomized prospective comparative cohort study of consecutive patients undergoing open posterior lumbar instrumented fusion. PATIENT SAMPLE Adults diagnosed with spinal pathology requiring surgical treatment and planned for open posterior lumbar fusion with pedicle screws implanted into 1-4 vertebral levels. OUTCOME MEASURES Outcome measures included mean intraoperative fluoroscopy time and dose, mean CT dose length product (DLP) for preoperative and day-2 CT, pedicle screw accuracy by CT, estimated blood loss (EBL), and revision surgery rate. METHODS Consecutive patients were randomized 1:1 to FLASH or 3D and underwent posterior lumbar instrumented fusion. Radiation doses were recorded from pre- and postoperative CT and intraoperative 3D fluoroscopy. Two independent, blinded radiologists reviewed pedicle screw accuracy on CT. RESULTS A total of 429 pedicle screws (n=210 FLASH, n=219 3D) were placed in 90 patients (n=45 FLASH, n=45 3D) over the 18-month study period. Mean age and indication for surgery were similar between the groups, with a non-significantly higher proportion of males in the 3D group. Mean intraoperative fluoroscopy time and dose were significantly reduced in FLASH compared to 3D (4.51±3.71 s vs 79.6±23.0 s, p<.001, and 80.9±68.1 cGycm2 vs 3704.1±3442.4 cGycm2, p<.001, respectively), representing relative reductions of 94.3% in total intraoperative radiation time and 97.8% in total intraoperative radiation dose. Mean preoperative CT DLP and mean day-2 postoperative CT DLP were also significantly reduced in FLASH compared to 3D (662.0±440.4 mGy-cm vs 1008.9±616.3 mGy-cm, p<.001, and 577.9±294.3 mGy-cm vs 980.7±441.6 mGy-cm, p<.001, respectively), representing relative reductions of 34.4% in the preoperative CT dose and 41.0% in the postoperative total DLP. The FLASH group required an average of 1.2 registrations per case, with an average of 2447 (±961.3) data points registered and a mean registration time of 106 s (±52.1). A rapid re-registration mechanism was used in 22% (n=10/45) of FLASH cases and took 22.7 s (±11.3); re-registration was used in 7% (n=3/45) of the 3D group. Pedicle screw accuracy was high in both the FLASH (98.1%) and 3D (97.3%) groups, with no pedicle breach >2 mm in either group (p<.001). EBL was not statistically different between the groups (p=.38). No neurovascular injuries occurred, and no patients required a return to theatre for screw repositioning. CONCLUSIONS FLASH and 3D IGS demonstrate high accuracy for pedicle screw placement. FLASH showed a significant reduction in intraoperative radiation time and dose, with lower but non-significant blood loss. FLASH also showed a significant reduction in preoperative and postoperative radiation, though this may be related to the lower proportion of males in this group. FLASH provides accuracy similar to contemporary IGS without requiring 3D fluoroscopy or radiolucent operating tables. Reducing registration time and specialized equipment may reduce costs.
Affiliation(s)
- Gregory M Malham: Epworth Hospital, Richmond, Melbourne, Australia; Swinburne University of Technology, Melbourne, Australia
30. Vagvolgyi BP, Jayakumar RP, Madhav MS, Knierim JJ, Cowan NJ. Wide-angle, monocular head tracking using passive markers. J Neurosci Methods 2022; 368:109453. PMID: 34968626; PMCID: PMC8857048; DOI: 10.1016/j.jneumeth.2021.109453.
Abstract
BACKGROUND Camera images can encode large amounts of visual information about an animal and its environment, enabling high-fidelity 3D reconstruction of both using computer vision methods. Most systems, both markerless (e.g., deep-learning based) and marker-based, require multiple cameras to track features across multiple points of view to enable such reconstruction. However, such systems can be expensive and are challenging to set up in small-animal research apparatuses. NEW METHODS We present an open-source, marker-based system for tracking the head of a rodent in behavioral research that requires only a single camera with a potentially wide field of view. The system features a lightweight visual target and computer vision algorithms that together enable high-accuracy tracking of the six-degree-of-freedom position and orientation of the animal's head. The system, which requires only a single camera positioned above the behavioral arena, robustly reconstructs the pose over a wide range of head angles (360° in yaw, and approximately ±120° in roll and pitch). RESULTS Experiments with live animals demonstrate that the system can reliably identify rat head position and orientation. Evaluations using a commercial optical tracker show that the system achieves accuracy rivaling commercial multi-camera systems. COMPARISON WITH EXISTING METHODS Our solution significantly improves upon existing monocular marker-based tracking methods in both accuracy and allowable range of motion. CONCLUSIONS The proposed system enables the study of complex behaviors by providing robust, fine-scale measurements of rodent head motion over a wide range of orientations.
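The rigid pose recovery underlying marker-based tracking can be illustrated with the Kabsch algorithm, which recovers rotation and translation from matched 3-D marker coordinates. This is an illustration only: the paper's contribution is estimating the same 6-DoF pose monocularly from 2-D image projections of the marker target, which this sketch does not attempt.

```python
import numpy as np

def rigid_pose(model_pts, observed_pts):
    """Recover rotation R and translation t with observed ≈ R @ model + t
    from matched 3-D marker coordinates (Kabsch algorithm).

    model_pts, observed_pts : (N, 3) arrays of corresponding points.
    """
    mc = model_pts.mean(axis=0)
    oc = observed_pts.mean(axis=0)
    H = (model_pts - mc).T @ (observed_pts - oc)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = oc - R @ mc
    return R, t
```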
Affiliation(s)
- Balazs P. Vagvolgyi (corresponding author): Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA
- Ravikrishnan P. Jayakumar: Mind/Brain Institute, Laboratory for Computational Sensing and Robotics, and Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, USA
- Manu S. Madhav: Mind/Brain Institute, Kavli Neuroscience Discovery Institute, and Laboratory for Computational Sensing and Robotics, Johns Hopkins University, Baltimore, MD, USA; School of Biomedical Engineering, Djawad Mowafaghian Centre for Brain Health, University of British Columbia, BC, Canada
- James J. Knierim: Mind/Brain Institute and Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, USA
- Noah J. Cowan: Laboratory for Computational Sensing and Robotics and Mechanical Engineering Department, Johns Hopkins University, Baltimore, MD, USA
31. Huang Z, Omwange KA, Tsay LWJ, Saito Y, Maai E, Yamazaki A, Nakano R, Nakazaki T, Kuramoto M, Suzuki T, Ogawa Y, Kondo N. UV excited fluorescence image-based non-destructive method for early detection of strawberry (Fragaria × ananassa) spoilage. Food Chem 2022; 368:130776. PMID: 34425344; DOI: 10.1016/j.foodchem.2021.130776.
Abstract
Strawberries that will soon spoil need to be distinguished from healthy fruit at an early stage. In this research, a machine vision system is proposed for inspecting strawberry quality under ultraviolet (UV) light, based on excitation-emission matrix (EEM) measurements. Of 100 fruits harvested and stored at 10 °C for 7 days, 7 were confirmed as spoiled using a firmness meter. The EEM results show that a fluorescent compound contributes to the whitish surface of the spoiled fruits. Based on these results, UV fluorescence images of the bottom of the strawberries were used to separate spoiled from healthy fruits within 1 day after harvest. These results demonstrate that UV fluorescence imaging can be a fast, non-destructive, and low-cost method for identifying fruits that will soon spoil. The proposed index, related to the time to spoilage, can serve as a new indicator of strawberry quality.
Affiliation(s)
- Zichen Huang: Laboratory of Biosensing Engineering, Graduate School of Agriculture, Kyoto University, Kitashirakawa, Kyoto 606-8502, Japan
- Ken Abamba Omwange: Laboratory of Biosensing Engineering, Graduate School of Agriculture, Kyoto University, Kitashirakawa, Kyoto 606-8502, Japan
- Lok Wai Jacky Tsay: Laboratory of Biosensing Engineering, Graduate School of Agriculture, Kyoto University, Kitashirakawa, Kyoto 606-8502, Japan
- Yoshito Saito: Laboratory of Biosensing Engineering, Graduate School of Agriculture, Kyoto University, Kitashirakawa, Kyoto 606-8502, Japan; Research Fellow of Japan Society for the Promotion of Science, Graduate School of Agriculture, Kyoto University, Kyoto 606-8502, Japan
- Eri Maai: Laboratory of Plant Production Control (Experimental Farm), Graduate School of Agriculture, Kyoto University, Kizugawa, Kyoto 619-0218, Japan; Faculty of International Agriculture and Food Studies, Tokyo University of Agriculture, Setagaya, Tokyo 156-8502, Japan
- Akira Yamazaki: Laboratory of Plant Production Control (Experimental Farm), Graduate School of Agriculture, Kyoto University, Kizugawa, Kyoto 619-0218, Japan
- Ryohei Nakano: Laboratory of Plant Production Control (Experimental Farm), Graduate School of Agriculture, Kyoto University, Kizugawa, Kyoto 619-0218, Japan
- Tetsuya Nakazaki: Laboratory of Plant Production Control (Experimental Farm), Graduate School of Agriculture, Kyoto University, Kizugawa, Kyoto 619-0218, Japan
- Makoto Kuramoto: Advanced Research Support Center, Ehime University, 2-5 Bunkyo-cho, Matsuyama, Ehime 790-8577, Japan
- Tetsuhito Suzuki: Laboratory of Biosensing Engineering, Graduate School of Agriculture, Kyoto University, Kitashirakawa, Kyoto 606-8502, Japan
- Yuichi Ogawa: Laboratory of Biosensing Engineering, Graduate School of Agriculture, Kyoto University, Kitashirakawa, Kyoto 606-8502, Japan
- Naoshi Kondo: Laboratory of Biosensing Engineering, Graduate School of Agriculture, Kyoto University, Kitashirakawa, Kyoto 606-8502, Japan
32. Yang JJ, Klinkenberg C, Pan JZ, Wyss HM, den Toonder JMJ, Fang Q. An integrated system for automated measurement of airborne pollen based on electrostatic enrichment and image analysis with machine vision. Talanta 2022; 237:122908. PMID: 34736645; DOI: 10.1016/j.talanta.2021.122908.
Abstract
Here we describe an automated and compact pollen detection system that integrates enrichment, in-situ detection and self-cleaning modules. The system can achieve continuous capture and enrichment of pollen grains in air samples by electrostatic adsorption. The captured pollen grains are imaged with a digital camera, and an automated image analysis based on machine vision is performed, which enables a quantification of the number of pollen particles as well as a preliminary classification into two types of pollen grains. In order to optimize and evaluate the system performance, we developed a testing approach that utilizes an airflow containing a precisely metered amount of pollen particles surrounded by a sheath flow to achieve the generation and lossless transmission of standard gas samples. We studied various factors affecting the pollen capture efficiency, including the applied voltage, air flow rate and humidity. Under optimized conditions, the system was successfully used in the measurement of airborne pollen particles within a wide range of concentrations, spanning 3 orders of magnitude.
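The counting step of such an image analysis can be illustrated with thresholding followed by connected-component labelling. This is a toy stand-in for the system's actual machine-vision pipeline; the threshold value, 4-connectivity, and function name are all assumptions for illustration:

```python
import numpy as np

def count_particles(gray, threshold):
    """Count bright blobs (candidate pollen grains) in a grayscale image by
    thresholding followed by 4-connected component labelling."""
    binary = gray > threshold
    visited = np.zeros_like(binary, dtype=bool)
    H, W = binary.shape
    count = 0
    for i in range(H):
        for j in range(W):
            if binary[i, j] and not visited[i, j]:
                count += 1                     # new component found
                stack = [(i, j)]               # flood-fill this component
                visited[i, j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and binary[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
    return count
```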
Affiliation(s)
- Jia-Jing Yang: Institute of Microanalytical Systems, Department of Chemistry, Zhejiang University, Hangzhou, 310058, China; Microsystems Section, Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, 5600MB, the Netherlands
- Christian Klinkenberg: Microsystems Section, Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, 5600MB, the Netherlands
- Jian-Zhang Pan: Institute of Microanalytical Systems, Department of Chemistry, Zhejiang University, Hangzhou, 310058, China
- Hans M Wyss: Microsystems Section, Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, 5600MB, the Netherlands
- Jaap M J den Toonder: Microsystems Section, Department of Mechanical Engineering, Eindhoven University of Technology, Eindhoven, 5600MB, the Netherlands
- Qun Fang: Institute of Microanalytical Systems, Department of Chemistry, Zhejiang University, Hangzhou, 310058, China
|
33
|
Nguyen LLP, Baranyai L, Nagy D, Mahajan PV, Zsom-Muha V, Zsom T. Color analysis of horticultural produces using hue spectra fingerprinting. MethodsX 2022; 8:101594. [PMID: 35004226 PMCID: PMC8720896 DOI: 10.1016/j.mex.2021.101594] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2021] [Accepted: 11/25/2021] [Indexed: 10/28/2022] Open
Abstract
Color has great importance in agriculture due to its relationship with plant pigments and, therefore, with plant development and biochemical changes. Owing to trichromatic vision, instruments equipped with CCD or CMOS sensors represent color as a mixture of red, green and blue signals. These values are often transformed into the HSL (hue, saturation, luminance) color space. Beyond the average color of the visible surface area, histograms can represent the color distribution, but interpreting such distributions can be challenging because of the information shared among histograms. Hue spectra fingerprinting offers color information that is suitable for analysis with common chemometric methods and easy to understand. The algorithm is presented with GNU Octave code.
•The hue spectrum is a histogram of hue angle over the captured scene, but it sums saturation instead of counting pixels. Important colors appear as peaks, while low-saturation colors disappear. Neutral backgrounds such as white, black or gray are removed without the need for segmentation.
•Color changes of fruits and vegetables are represented by the displacement of color peaks. Since saturation usually changes during ripening, storage and shelf life, the peaks also change shape in terms of peak value and width.
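The saturation-weighted hue histogram described above is simple to implement. The paper supplies GNU Octave code; the following is only a minimal NumPy sketch of the same idea, with an illustrative function name and toy data:

```python
import numpy as np

def hue_spectrum(hue_deg, saturation, n_bins=360):
    """Saturation-weighted hue histogram ("hue spectrum").

    Instead of counting pixels per hue bin, each pixel contributes its
    saturation, so vivid colors form peaks while near-neutral pixels
    (white/gray/black backgrounds) contribute almost nothing.
    """
    bins = (hue_deg.astype(int) % 360) * n_bins // 360
    spectrum = np.zeros(n_bins)
    np.add.at(spectrum, bins.ravel(), saturation.ravel())
    return spectrum

# Toy scene: a saturated red patch (hue 0) on a neutral gray background.
hue = np.zeros((4, 4))
sat = np.zeros((4, 4))
sat[:2, :2] = 0.9          # only the 2x2 patch is saturated
spec = hue_spectrum(hue, sat)
```

Because the gray background has zero saturation, it vanishes from the spectrum with no explicit segmentation step, which is the key property the abstract highlights.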
Affiliation(s)
- Lien Le Phuong Nguyen
- Hungarian University of Agriculture and Life Sciences, Institute of Food Science and Technology, Budapest, Hungary; Industrial University of Ho Chi Minh City, Institute of Biotechnology and Food Technology, Ho Chi Minh, Viet Nam
- László Baranyai
- Hungarian University of Agriculture and Life Sciences, Institute of Food Science and Technology, Budapest, Hungary
- Dávid Nagy
- Hungarian University of Agriculture and Life Sciences, Institute of Food Science and Technology, Budapest, Hungary
- Pramod V Mahajan
- Department of Horticultural Engineering, Leibniz Institute for Agricultural Engineering and Bioeconomy (ATB), Potsdam, Germany
- Viktória Zsom-Muha
- Hungarian University of Agriculture and Life Sciences, Institute of Food Science and Technology, Budapest, Hungary
- Tamás Zsom
- Hungarian University of Agriculture and Life Sciences, Institute of Food Science and Technology, Budapest, Hungary

34
Kvæstad B, Hansen BH, Davies E. Automated morphometrics on microscopy images of Atlantic cod larvae using Mask R-CNN and classical machine vision techniques. MethodsX 2021; 9:101598. [PMID: 34917490 PMCID: PMC8666706 DOI: 10.1016/j.mex.2021.101598] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Accepted: 11/30/2021] [Indexed: 11/29/2022] Open
Abstract
Measurements of morphometric parameters on, for example, fish larvae are useful for assessing the quality and condition of specimens in environmental research, or optimal growth in the cultivation industry. Manually acquiring morphometric parameters from microscopy images is time-consuming and tedious, which can be a limiting factor when acquiring samples for an experiment. Mask R-CNN, an instance segmentation neural network architecture, was implemented to find the outlines of parts of interest on fish larvae (Atlantic cod, Gadus morhua). Applying classical machine vision techniques to these outlines makes it possible to acquire morphometrics such as area, diameter, length, and height. The combination of these techniques provides accurate, consistent, high-volume information on the morphometrics of small organisms, making it possible to sample more data for morphometric analysis.
•Capable of automatically analysing a set of microscopy images in approximately 2-3 seconds per image, with results that closely match morphometrics acquired manually by an expert.
•Can be implemented for other species of ichthyoplankton or zooplankton, and has successfully been tested on ballan wrasse, zebrafish, lumpsucker and calanoid copepods.
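Once an instance mask is available (from Mask R-CNN or any other segmenter), the classical machine vision step reduces to simple geometry on the mask. The sketch below is not the authors' code; the function name, toy mask, and choice of circle-equivalent diameter are illustrative assumptions:

```python
import numpy as np

def morphometrics(mask, pixel_size_um=1.0):
    """Basic morphometrics from one binary instance mask
    (e.g. a single segmented part of a larva)."""
    ys, xs = np.nonzero(mask)
    area = float(mask.sum()) * pixel_size_um ** 2
    length = float(xs.max() - xs.min() + 1) * pixel_size_um  # horizontal extent
    height = float(ys.max() - ys.min() + 1) * pixel_size_um  # vertical extent
    equiv_diameter = float(np.sqrt(4.0 * area / np.pi))      # circle-equivalent
    return {"area": area, "length": length,
            "height": height, "equiv_diameter": equiv_diameter}

# Toy "larva": a 4 x 16 pixel rectangle in a 10 x 20 image.
mask = np.zeros((10, 20), dtype=bool)
mask[3:7, 2:18] = True
m = morphometrics(mask)
# area = 64, length = 16, height = 4 (in pixel units here)
```

With a calibrated `pixel_size_um`, the same arithmetic yields physical units instead of pixels.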
Affiliation(s)
- Bjarne Kvæstad
- SINTEF Ocean, Environment and New Resources, Brattørkaia 17C, Trondheim NO-7010, Norway
- Bjørn Henrik Hansen
- SINTEF Ocean, Environment and New Resources, Brattørkaia 17C, Trondheim NO-7010, Norway
- Emlyn Davies
- SINTEF Ocean, Environment and New Resources, Brattørkaia 17C, Trondheim NO-7010, Norway

35
Hamann T, Wiest M, Mislevics A, Bondarenko A, Zweifel S. At the Pulse of Time: Machine Vision in Retinal Videos. Acta Neurochir Suppl 2022; 134:303-11. [PMID: 34862554 DOI: 10.1007/978-3-030-85292-4_34] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
Spontaneous venous pulsations (SVP) are a common finding in healthy people. The absence of SVP is associated with rapid progression in glaucoma and with increased intracranial pressure. Traditionally, SVP has been documented qualitatively by clinicians during biomicroscopy. Nowadays, numerous imaging devices exist for recording the fundus, so video data for the objectification of SVP is readily available. Still, these clinical datasets are afflicted with various quality issues and artifacts. In this machine vision-based study, we explore methods to overcome the challenges of identifying SVP in fundus videos of varying quality and provide a detailed protocol. We thereby aim to lower the barrier to applying machine vision to clinical video datasets and to quantifying SVP.
36
Hu Q, Zhang Y, Ma R, An J, Huang W, Wu Y, Hou J, Zhang D, Lin F, Xu R, Sun Q, Sun L. Genetic dissection of seed appearance quality using recombinant inbred lines in soybean. Mol Breed 2021; 41:72. [PMID: 37309518 PMCID: PMC10236129 DOI: 10.1007/s11032-021-01262-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/09/2021] [Accepted: 11/01/2021] [Indexed: 06/14/2023]
Abstract
Soybean seed appearance quality greatly affects marketability. The objective of this study was to identify quantitative trait loci (QTLs) that control the appearance quality of soybean seeds. A total of 256 recombinant inbred lines from Qi Huang No.34 × Ji Dou No.17 were used for QTL mapping, and a machine vision system was applied, as a novel approach, to quantify the seed appearance of each line. In total, 145 QTLs for the machine vision parameters were detected across three environments. Overlapping QTLs were integrated into 16 QTL hotspots. Of these, hotspot-4-1 appears to be a major locus controlling seed size, and hotspot-15 was found to affect seed color and texture; QTL mapping of the principal components of seed appearance supported these findings. This study comprehensively characterized QTLs for the seed appearance quality of soybean cultivars while providing an efficient method for phenotyping seed appearance. These results should contribute to dissecting the genetic bases of seed appearance quality for the improvement of soybean. Supplementary Information The online version contains supplementary material available at 10.1007/s11032-021-01262-9.
Affiliation(s)
- Quan Hu
- State Key Laboratory of Agrobiotechnology, and College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Beijing Key Laboratory for Crop Genetic Improvement, College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Yanwei Zhang
- Crop Research Institute, Shandong Academy of Agricultural Sciences, Jinan, 250131 Shandong China
- Ruirui Ma
- State Key Laboratory of Agrobiotechnology, and College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Beijing Key Laboratory for Crop Genetic Improvement, College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Jie An
- State Key Laboratory of Agrobiotechnology, and College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Beijing Key Laboratory for Crop Genetic Improvement, College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Wenxuan Huang
- State Key Laboratory of Agrobiotechnology, and College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Beijing Key Laboratory for Crop Genetic Improvement, College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Yueying Wu
- State Key Laboratory of Agrobiotechnology, and College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Beijing Key Laboratory for Crop Genetic Improvement, College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Jingjing Hou
- State Key Laboratory of Agrobiotechnology, and College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Beijing Key Laboratory for Crop Genetic Improvement, College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Dajian Zhang
- College of Agronomy, Shandong Agricultural University, Tai’an, 271018 Shandong China
- Feng Lin
- Department of Plant, Soil and Microbial Sciences, Michigan State University, East Lansing, MI 48824 USA
- Ran Xu
- Crop Research Institute, Shandong Academy of Agricultural Sciences, Jinan, 250131 Shandong China
- Qun Sun
- Beijing Key Laboratory for Crop Genetic Improvement, College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Lianjun Sun
- State Key Laboratory of Agrobiotechnology, and College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China
- Beijing Key Laboratory for Crop Genetic Improvement, College of Agronomy and Biotechnology, China Agricultural University, Beijing, 100193 China

37
Yang Y, Nie J, Kan Z, Yang S, Zhao H, Li J. Cotton stubble detection based on wavelet decomposition and texture features. Plant Methods 2021; 17:113. [PMID: 34727933 PMCID: PMC8561878 DOI: 10.1186/s13007-021-00809-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Accepted: 10/12/2021] [Indexed: 06/13/2023]
Abstract
BACKGROUND At present, residual film pollution in cotton fields is a serious problem. The commonly used recovery method relies on manually driven recycling machines, which is laborious and time-consuming. Developing a visual navigation system for residual film recovery would help to improve work efficiency. The key technology in such a visual navigation system is cotton stubble detection: successful stubble detection ensures the stability and reliability of the visual navigation system. METHODS Firstly, three types of texture features (GLCM, GLRLM and LBP) are extracted from three types of images: stubble, residual film and broken leaves between rows. Three classifiers, Random Forest, Back Propagation Neural Network and Support Vector Machine, are then built to classify the sample images. Finally, the possibility of improving the classification accuracy using texture features extracted from wavelet decomposition coefficients is examined. RESULTS The experiments show that the GLCM texture features of the original image perform best with the Back Propagation Neural Network classifier. Among the different wavelet bases, the texture features of the vertical coefficients of the coif3 wavelet decomposition, combined with the texture features of the original image, give the best classification results. Compared with the original image texture features alone, classification accuracy increases by 3.8%, sensitivity by 4.8%, and specificity by 1.2%. CONCLUSIONS The algorithm can detect stubble at different locations, in different periods and under abnormal driving conditions, showing that wavelet coefficient texture features combined with original image texture features form a useful fused feature for stubble detection and can serve as a reference for stubble detection in other crops.
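The GLCM features named in the Methods are easy to illustrate. The sketch below is a pure-NumPy toy (illustrative values and function name; in practice a library such as scikit-image would be used) computing a co-occurrence matrix for one pixel offset and the classical contrast feature:

```python
import numpy as np

def glcm_contrast(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for a single pixel offset
    (dx, dy) and the classical GLCM 'contrast' feature."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1   # count gray-level pairs
    g /= g.sum()                                     # normalize to probabilities
    i, j = np.indices(g.shape)
    contrast = float(((i - j) ** 2 * g).sum())       # sum_ij (i-j)^2 p(i,j)
    return g, contrast

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
g, contrast = glcm_contrast(img)   # horizontal neighbor pairs
```

GLRLM and LBP features follow the same pattern: a local statistic is accumulated over the image and summarized into a fixed-length feature vector fed to the classifiers.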
Affiliation(s)
- Yukun Yang
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, 832000, Xinjiang, China
- Industrial Technology Research Institute - XPCC, Xinjiang Production and Construction Corps (XPCC), Shihezi, 832000, Xinjiang, China
- Jing Nie
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, 832000, Xinjiang, China
- Industrial Technology Research Institute - XPCC, Xinjiang Production and Construction Corps (XPCC), Shihezi, 832000, Xinjiang, China
- Za Kan
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, 832000, Xinjiang, China
- Industrial Technology Research Institute - XPCC, Xinjiang Production and Construction Corps (XPCC), Shihezi, 832000, Xinjiang, China
- Shuo Yang
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, 832000, Xinjiang, China
- Industrial Technology Research Institute - XPCC, Xinjiang Production and Construction Corps (XPCC), Shihezi, 832000, Xinjiang, China
- Hangxing Zhao
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, 832000, Xinjiang, China
- Industrial Technology Research Institute - XPCC, Xinjiang Production and Construction Corps (XPCC), Shihezi, 832000, Xinjiang, China
- Jingbin Li
- College of Mechanical and Electrical Engineering, Shihezi University, Shihezi, 832000, Xinjiang, China
- Industrial Technology Research Institute - XPCC, Xinjiang Production and Construction Corps (XPCC), Shihezi, 832000, Xinjiang, China

38
Ficzere M, Mészáros LA, Madarász L, Novák M, Nagy ZK, Galata DL. Indirect monitoring of ultralow dose API content in continuous wet granulation and tableting by machine vision. Int J Pharm 2021; 607:121008. [PMID: 34391851 DOI: 10.1016/j.ijpharm.2021.121008] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 07/12/2021] [Accepted: 08/10/2021] [Indexed: 10/20/2022]
Abstract
This paper presents new machine vision-based methods for the indirect real-time quantification of ultralow drug content during continuous twin-screw wet granulation and tableting. Granulation was performed with a solution containing carvedilol (CAR) as the API in the ultralow dose range (0.05 w/w% in the granules), with riboflavin (RI) added as a coloured tracer. An in-line calibration in the range of 0.047-0.058 w/w% was prepared for measuring the CAR concentration using colour analysis (CA) and particle size analysis (PSA); validation with HPLC yielded relative errors of 2.62% and 2.30%, respectively, showing good accuracy. To refine the technique, a second in-line calibration was conducted over a broader CAR concentration range of 0.039-0.063 w/w%, using only half the amount of RI (0.045 w/w%) while doubling the output of the granulation line to 2 kg/h; this produced relative errors of 4.51% and 4.29%, respectively. Finally, it was shown that the CA technique can also be carried over to monitor the CAR content of tablets in the 42-62 μg dose range, with a relative error of 5.20%. Machine vision thus proved to be a potent indirect method for the in-line determination and monitoring of ultralow API content during continuous manufacturing.
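The in-line calibration concept, relating an image-derived colour signal to API concentration and then inverting it to predict concentration, can be sketched as a first-order fit. All numbers below are invented for illustration only; the paper's actual models use colour and particle-size features:

```python
import numpy as np

# Hypothetical calibration points: an image-derived colour signal
# (e.g. a mean colour-channel intensity) vs. known CAR concentration
# in w/w%. Values are illustrative, not measured data.
conc = np.array([0.047, 0.050, 0.053, 0.056, 0.058])
signal = np.array([112.0, 118.5, 124.9, 131.6, 135.8])

# First-order calibration line: signal = a * conc + b
a, b = np.polyfit(conc, signal, 1)

def predict_conc(s):
    """Invert the calibration line to predict concentration from the signal."""
    return (s - b) / a

# Relative prediction error (in %) at one calibration point.
rel_err = abs(predict_conc(124.9) - 0.053) / 0.053 * 100
```

Relative errors like the 2.62% and 2.30% reported above are obtained by comparing such predictions against a reference method (HPLC in the paper) rather than against the calibration points themselves.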
Affiliation(s)
- Máté Ficzere
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, Műegyetem rakpart 3, H-1111 Budapest, Hungary
- Lilla Alexandra Mészáros
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, Műegyetem rakpart 3, H-1111 Budapest, Hungary
- Lajos Madarász
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, Műegyetem rakpart 3, H-1111 Budapest, Hungary
- Márk Novák
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, Műegyetem rakpart 3, H-1111 Budapest, Hungary
- Zsombor Kristóf Nagy
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, Műegyetem rakpart 3, H-1111 Budapest, Hungary
- Dorián László Galata
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, Műegyetem rakpart 3, H-1111 Budapest, Hungary

39
Hou Y, Cai X, Miao P, Li S, Shu C, Li P, Li W, Li Z. A feasibility research on the application of machine vision technology in appearance quality inspection of Xuesaitong dropping pills. Spectrochim Acta A Mol Biomol Spectrosc 2021; 258:119787. [PMID: 33932636 DOI: 10.1016/j.saa.2021.119787] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/05/2021] [Revised: 03/29/2021] [Accepted: 04/01/2021] [Indexed: 06/12/2023]
Abstract
Defect detection is a critical issue in the quality control of dropping pills, a special dosage form of traditional Chinese medicine. Machine vision is a non-destructive, cost-effective testing technology with high accuracy that can detect defects of both the interior and exterior of a sample using a camera. In this research, a machine vision system was developed to rapidly and accurately evaluate the appearance quality of Xuesaitong dropping pills (XDPs), covering non-spherical shapes and abnormal sizes and colors. Firstly, 270 images of XDPs containing qualified pills and three different types of defects were collected. Subsequently, the XDP images were processed. Finally, three defect-classification models based on contour and color features were developed and compared. The experimental results showed that Random Forest outperformed all the other models explored, with classification accuracies for non-spherical shape, abnormal size and abnormal color of 98.52%, 100.00% and 100.00%, respectively. In summary, the method established in this research is sound, reliable, fast and accurate; it has great application potential and can provide technical support for the automatic defect detection of dropping pills.
Affiliation(s)
- Yizhe Hou
- College of Pharmaceutical Engineering of Traditional Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China; State Key Laboratory of Component-based Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China
- Xiang Cai
- Langtian Pharmaceutical (Hubei) Co., Ltd., Huangshi 435000, China
- Peiqi Miao
- Tianjin Modern Innovative TCM Technology Co., Ltd., Tianjin 301617, China
- Shunan Li
- College of Pharmaceutical Engineering of Traditional Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China; State Key Laboratory of Component-based Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China
- Chengren Shu
- Langtian Pharmaceutical (Hubei) Co., Ltd., Huangshi 435000, China
- Pian Li
- Langtian Pharmaceutical (Hubei) Co., Ltd., Huangshi 435000, China
- Wenlong Li
- College of Pharmaceutical Engineering of Traditional Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China; State Key Laboratory of Component-based Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China
- Zheng Li
- College of Pharmaceutical Engineering of Traditional Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China; State Key Laboratory of Component-based Chinese Medicine, Tianjin University of Traditional Chinese Medicine, Tianjin 301617, China

40
Jahanbakhshi A, Abbaspour-Gilandeh Y, Heidarbeigi K, Momeny M. Detection of fraud in ginger powder using an automatic sorting system based on image processing technique and deep learning. Comput Biol Med 2021; 136:104764. [PMID: 34426164 DOI: 10.1016/j.compbiomed.2021.104764] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2021] [Revised: 08/10/2021] [Accepted: 08/10/2021] [Indexed: 12/01/2022]
Abstract
Ginger is a well-known product in the food and pharmaceutical industries and one of the spices adulterated for economic gain. The lack of marketability of grade 3 chickpeas (small and broken chickpeas) and their very low price make them a likely adulterant to be mixed into ginger powder and sold on the market. The demand for non-destructive methods of measuring food quality, such as machine vision, together with the growing need for food and spices, motivated this study. This study classified ginger powder images to detect fraud using convolutional neural networks (CNN) improved with a gated pooling function. The main improvement to the CNN is a pooling function that combines average pooling and max pooling; the batch normalization (BN) technique is also used to improve classification results. We show empirically that this combined pooling operation increases the accuracy of ginger powder classification compared to the baseline pooling methods. For this purpose, 3360 image samples of ginger powder were prepared in 7 categories (pure ginger powder, chickpea powder, and 10%, 20%, 30%, 40%, and 50% adulteration of ginger powder). Moreover, MLP, Fuzzy, SVM, GBT, and EDT algorithms were used to compare the proposed CNN with other classifiers. The results showed that, using batch normalization with gated pooling, the proposed CNN graded the images of ginger powder with 99.70% accuracy, outperforming the other classifiers. The CNN method and image processing technique can therefore increase marketability, help prevent ginger powder fraud, and improve on traditional methods of fraud detection.
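The gated pooling idea, mixing max pooling and average pooling, is compact to state. In the paper the gate is a learned parameter inside the CNN; in this NumPy sketch it is fixed, and all names and values are illustrative:

```python
import numpy as np

def gated_pool(x, gate=0.5):
    """Gated pooling over non-overlapping 2x2 windows: a gate in [0, 1]
    mixes max pooling (gate=1) and average pooling (gate=0). The gate
    is learned in the paper; it is fixed here for the sketch."""
    h, w = x.shape
    x = x[:h - h % 2, :w - w % 2]                    # crop to even size
    windows = (x.reshape(h // 2, 2, w // 2, 2)       # split into 2x2 tiles
                .transpose(0, 2, 1, 3)
                .reshape(h // 2, w // 2, 4))
    return gate * windows.max(-1) + (1 - gate) * windows.mean(-1)

x = np.array([[1., 2., 0., 0.],
              [3., 4., 0., 8.]])
y = gated_pool(x)   # one row of two pooled 2x2 windows
```

With gate = 0.5 the two windows pool to 0.5·max + 0.5·mean, retaining both the peak response and the overall activation level of each window.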
Affiliation(s)
- Ahmad Jahanbakhshi
- Department of Biosystems Engineering, University of Mohaghegh Ardabili, Ardabil, Iran
- Mohammad Momeny
- Department of Computer Engineering, Yazd University, Yazd, Iran

41
Zhang G, Yu X, Huang G, Lei D, Tong M. An improved automated zebrafish larva high-throughput imaging system. Comput Biol Med 2021; 136:104702. [PMID: 34352455 DOI: 10.1016/j.compbiomed.2021.104702] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Revised: 07/09/2021] [Accepted: 07/26/2021] [Indexed: 12/16/2022]
Abstract
As a typical multicellular model organism, the zebrafish is increasingly used in biological research. Despite efforts to develop automated zebrafish larva imaging systems, existing ones still fall short in reliability and automation. This paper presents an improved high-throughput zebrafish larva imaging system, which improves on existing designs in the following aspects. Firstly, a single-larva extraction strategy is developed to make larva loading more reliable: aggregated larvae are identified, classified by their number and pattern, and separated by the aspiration pipette or a water stream. Secondly, the dynamic model of larva motion in the capillary is established, and an adaptive robust controller is designed to decelerate fast-moving larvae and thus ensure the survival rate. Thirdly, rotating the larva to the desired orientation is automated with an algorithm that estimates the larva's initial rotation angle. To validate the improved system, a real-time heart rate monitoring experiment is conducted as an application example. Experimental results demonstrate that the goals of the improvements have been achieved: the system remarkably reduces human intervention and increases the efficiency and success/survival rates of larva imaging.
Affiliation(s)
- Gefei Zhang
- Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin, 150001, China
- Xinghu Yu
- Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin, 150001, China; Ningbo Institute of Intelligent Equipment Technology Co. Ltd., Ningbo, China
- Gang Huang
- Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin, 150001, China
- Dongxu Lei
- Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin, 150001, China
- Mingsi Tong
- Research Institute of Intelligent Control and Systems, Harbin Institute of Technology, Harbin, 150001, China

42
Schneider C, Allam M, Stoyanov D, Hawkes DJ, Gurusamy K, Davidson BR. Performance of image guided navigation in laparoscopic liver surgery - A systematic review. Surg Oncol 2021; 38:101637. [PMID: 34358880 DOI: 10.1016/j.suronc.2021.101637] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Revised: 07/04/2021] [Accepted: 07/24/2021] [Indexed: 02/07/2023]
Abstract
BACKGROUND Compared to open surgery, minimally invasive liver resection has improved short-term outcomes. It is, however, technically more challenging. Navigated image guidance systems (IGS) are being developed to overcome these challenges. The aim of this systematic review is to provide an overview of their current capabilities and limitations. METHODS Medline, Embase and Cochrane databases were searched using free-text terms and corresponding controlled vocabulary. Titles and abstracts of retrieved articles were screened for inclusion criteria. Due to the heterogeneity of the retrieved data, it was not possible to conduct a meta-analysis; therefore, results are presented in tabulated and narrative format. RESULTS Out of 2015 articles, 17 pre-clinical and 33 clinical papers met the inclusion criteria. Data from 24 articles that reported on accuracy indicate that in recent years navigation accuracy has been in the range of 8-15 mm. Due to discrepancies in evaluation methods, it is difficult to compare accuracy metrics between different systems. Surgeon feedback suggests that current state-of-the-art IGS may be useful as a supplementary navigation tool, especially for small liver lesions that are difficult to locate. They are, however, not able to reliably localise all relevant anatomical structures. Only one article investigated the impact of IGS on clinical outcomes. CONCLUSIONS Further improvements in navigation accuracy are needed to enable reliable visualisation of tumour margins with the precision required for oncological resections. To enhance comparability between different IGS, it is crucial to reach a consensus on the assessment of navigation accuracy as a minimum reporting standard.
Affiliation(s)
- C Schneider
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- M Allam
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK; General Surgery Department, Tanta University, Egypt
- D Stoyanov
- Department of Computer Science, University College London, London, UK; Centre for Medical Image Computing (CMIC), University College London, London, UK
- D J Hawkes
- Centre for Medical Image Computing (CMIC), University College London, London, UK; Wellcome / EPSRC Centre for Surgical and Interventional Sciences (WEISS), University College London, London, UK
- K Gurusamy
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK
- B R Davidson
- Department of Surgical Biotechnology, University College London, Pond Street, NW3 2QG, London, UK

43
Staartjes VE, Volokitin A, Regli L, Konukoglu E, Serra C. Machine Vision for Real-Time Intraoperative Anatomic Guidance: A Proof-of-Concept Study in Endoscopic Pituitary Surgery. Oper Neurosurg (Hagerstown) 2021; 21:242-247. [PMID: 34131753 DOI: 10.1093/ons/opab187] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2020] [Accepted: 04/04/2021] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Current intraoperative orientation methods either rely on preoperative imaging, are resource-intensive to implement, or are difficult to interpret. Real-time, reliable anatomic recognition would constitute another strong pillar on which neurosurgeons could rest for intraoperative orientation. OBJECTIVE To assess, in a proof-of-concept study, the feasibility of machine vision algorithms that identify anatomic structures using only the endoscopic camera, without prior explicit anatomo-topographic knowledge. METHODS We developed and validated a deep learning algorithm to detect the nasal septum, the middle turbinate, and the inferior turbinate during endoscopic endonasal approaches, based on endoscopy videos from 23 different patients. The model was trained in a weakly supervised manner on 18 patients and validated on 5. Performance was compared against a baseline consisting of the average positions of the training ground truth labels, using a semiquantitative 3-tiered system. RESULTS We used 367 images extracted from the videos of 18 patients for training, and 182 test images extracted from the videos of another 5 patients for testing the fully developed model. The prototype machine vision algorithm identified the 3 endonasal structures qualitatively well. Compared to the baseline model based on location priors, the algorithm demonstrated slightly but statistically significantly (P < .001) improved annotation performance. CONCLUSION Automated recognition of anatomic structures in endoscopic videos by means of a machine vision model, using only the endoscopic camera without prior explicit anatomo-topographic knowledge, is feasible. This proof of concept encourages further development of fully automated software for real-time intraoperative anatomic guidance during surgery.
Affiliation(s)
- Victor E Staartjes
- Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, University Hospital Zurich, Clinical Neuroscience Centre, University of Zurich, Zurich, Switzerland
- Anna Volokitin
- Computer Vision Lab (CVL), ETH Zurich, Zurich, Switzerland
- Luca Regli
- Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, University Hospital Zurich, Clinical Neuroscience Centre, University of Zurich, Zurich, Switzerland
- Carlo Serra
- Machine Intelligence in Clinical Neuroscience (MICN) Laboratory, Department of Neurosurgery, University Hospital Zurich, Clinical Neuroscience Centre, University of Zurich, Zurich, Switzerland
44
Wang Y, Wang DF, Wang HF, Wang JW, Pan JZ, Guo XG, Fang Q. A microfluidic robot for rare cell sorting based on machine vision identification and multi-step sorting strategy. Talanta 2021; 226:122136. [PMID: 33676690 DOI: 10.1016/j.talanta.2021.122136] [Received: 10/27/2020] [Revised: 01/14/2021] [Accepted: 01/16/2021]
Abstract
The identification, sorting, and analysis of rare target single cells in human blood has long been a clinically meaningful challenge. Here, we developed a microfluidic robot platform for sorting specific rare cells from complex clinical blood samples, based on machine vision image identification, liquid-handling robotics, and droplet-based microfluidic techniques. The robot integrated a cell capture and droplet generation module, a laser-induced fluorescence imaging module, a target cell identification and data analysis module, and a system control module, which together automated the scanning imaging of the cell array as well as the identification, capture, and droplet generation of rare target cells from blood samples containing large numbers of normal cells. On this platform, a novel "gold panning" multi-step sorting strategy was proposed to sort rare target cells from large-scale cell samples with high operating efficiency and high sorting purity (>90%). The platform and the multi-step sorting strategy were applied to the sorting of circulating endothelial progenitor cells (CEPCs) in human blood to demonstrate their feasibility and application potential for sorting and analyzing rare specific cells. Approximately 1,000 CEPCs were automatically identified among 3,000,000 blood cells at a scanning speed of ca. 4,000 cells/s, and twenty 25-nL droplets containing single CEPCs were generated.
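The "gold panning" idea, repeated enrichment rounds that each retain target cells far more efficiently than background cells, can be illustrated with a simple purity calculation. The per-round retention rates below are hypothetical, not the paper's measured figures; the starting counts mirror the abstract's roughly 1,000 targets among 3,000,000 cells:

```python
def purity_after_rounds(targets, background, keep_target, keep_bg, rounds):
    """Fraction of target cells in the pool after repeated enrichment,
    assuming fixed per-round retention rates for each population."""
    for _ in range(rounds):
        targets *= keep_target
        background *= keep_bg
    return targets / (targets + background)

# Hypothetical rates: each round keeps 90% of targets but only 1% of
# the background cells.
for r in range(1, 4):
    p = purity_after_rounds(1_000, 3_000_000, 0.9, 0.01, r)
    print(f"after round {r}: purity = {p:.1%}")
```

Under these assumed rates, purity climbs from a few percent after one round to well above 90% after three, which is the qualitative behaviour a multi-step strategy exploits.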
Affiliation(s)
- Yu Wang
- Institute of Microanalytical Systems, Department of Chemistry, Zhejiang University, Hangzhou, 310058, China
- Dong-Fei Wang
- Department of Cardiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, China
- Hui-Feng Wang
- Institute of Microanalytical Systems, Department of Chemistry, Zhejiang University, Hangzhou, 310058, China
- Jian-Wei Wang
- Institute of Microanalytical Systems, Department of Chemistry, Zhejiang University, Hangzhou, 310058, China
- Jian-Zhang Pan
- Institute of Microanalytical Systems, Department of Chemistry, Zhejiang University, Hangzhou, 310058, China
- Xiao-Gang Guo
- Department of Cardiology, The First Affiliated Hospital, Zhejiang University School of Medicine, Hangzhou, 310003, China
- Qun Fang
- Institute of Microanalytical Systems, Department of Chemistry, Zhejiang University, Hangzhou, 310058, China
45
Zhu L, Spachos P, Pensini E, Plataniotis KN. Deep learning and machine vision for food processing: A survey. Curr Res Food Sci 2021; 4:233-249. [PMID: 33937871 PMCID: PMC8079277 DOI: 10.1016/j.crfs.2021.03.009] [Received: 12/21/2020] [Revised: 03/24/2021] [Accepted: 03/25/2021]
Abstract
Food quality and safety are important to society as a whole, underpinning human health, social development, and stability. Ensuring food quality and safety is a complex process in which all stages of food processing must be considered, from cultivation, harvesting, and storage to preparation and consumption. These processes, however, are often labour-intensive. The development of machine vision can now greatly assist researchers and industry in improving the efficiency of food processing, and machine vision has accordingly been applied across all aspects of the field. Image processing, an important component of machine vision, can take advantage of machine learning and deep learning models to effectively identify the type and quality of food. Downstream stages of the machine vision system can then address tasks such as food grading, detecting the locations of defective spots or foreign objects, and removing impurities. In this paper, we provide an overview of traditional machine learning and deep learning methods, as well as the machine vision techniques applicable to food processing, and we present current approaches, challenges, and future trends.
Affiliation(s)
- Lili Zhu
- School of Engineering, University of Guelph, Guelph, ON, N1G 2W1, Canada
- Petros Spachos
- School of Engineering, University of Guelph, Guelph, ON, N1G 2W1, Canada
- Erica Pensini
- School of Engineering, University of Guelph, Guelph, ON, N1G 2W1, Canada
46
Šiaulys A, Vaičiukynas E, Medelytė S, Olenin S, Šaškov A, Buškus K, Verikas A. A fully-annotated imagery dataset of sublittoral benthic species in Svalbard, Arctic. Data Brief 2021; 35:106823. [PMID: 33604435 PMCID: PMC7873376 DOI: 10.1016/j.dib.2021.106823] [Received: 12/17/2020] [Revised: 01/25/2021] [Accepted: 01/28/2021]
Abstract
Underwater imagery is widely used for a variety of applications in marine biology and the environmental sciences, such as classification and mapping of seabed habitats, marine environment monitoring and impact assessment, and biogeographic reconstructions in the context of climate change. The approach is relatively simple and cost-effective, allowing large amounts of data to be collected rapidly. However, because manual analysis is laborious and time-consuming, only a small part of the information stored in archives of underwater images is ever retrieved. Emerging deep learning methods open the opportunity for more effective, accurate, and rapid analysis of seabed images than ever before. We present annotated images of bottom macrofauna obtained from underwater video recorded in the European Arctic waters around Spitsbergen island, Svalbard Archipelago. The videos were filmed in both the photic and aphotic zones of polar waters, often influenced by melting glaciers. We used artificial lighting and shot close to the seabed (<1 m) to preserve natural colours and avoid the distorting effect of muddy water. The footage was captured using a remotely operated vehicle (ROV) and a drop-down camera and was converted into 2D mosaic images of the seabed. The 2D mosaics were manually annotated by several experts using the Labelbox tool, and co-annotations were refined using the SurveyJS platform. This set of carefully annotated underwater images, linked to the original videos, can serve marine biologists as a biological atlas, and practitioners in machine vision, pattern recognition, and deep learning as training material for developing tools for automatic analysis of underwater imagery.
Affiliation(s)
- Andrius Šiaulys
- Marine Research Institute, Klaipeda University, Klaipeda, Lithuania
- Saulė Medelytė
- Marine Research Institute, Klaipeda University, Klaipeda, Lithuania
- Sergej Olenin
- Marine Research Institute, Klaipeda University, Klaipeda, Lithuania
- Aleksej Šaškov
- Marine Research Institute, Klaipeda University, Klaipeda, Lithuania
- Kazimieras Buškus
- Faculty of Mathematics and Natural Sciences, Kaunas University of Technology, Kaunas, Lithuania
- Antanas Verikas
- Faculty of Electrical and Electronics Engineering, Kaunas University of Technology, Kaunas, Lithuania
47
Galata DL, Mészáros LA, Ficzere M, Vass P, Nagy B, Szabó E, Domokos A, Farkas A, Csontos I, Marosi G, Nagy ZK. Continuous blending monitored and feedback controlled by machine vision-based PAT tool. J Pharm Biomed Anal 2021; 196:113902. [PMID: 33486449 DOI: 10.1016/j.jpba.2021.113902] [Received: 11/10/2020] [Revised: 01/11/2021] [Accepted: 01/12/2021]
Abstract
In this work, machine vision is utilized as a Process Analytical Technology (PAT) tool in a continuous powder blending process. While near-infrared (NIR) and Raman spectroscopy are reliable methods in this field, measurements become challenging when concentrations below 2 w/w% must be quantified. An active pharmaceutical ingredient (API) with an intense color, however, can be quantified at even lower levels from images recorded with a digital camera. Riboflavin (RI) was used as an orange-colored model API; its limit of detection was found to be 0.015 w/w% and its limit of quantification 0.046 w/w%, using a calibration based on the pixel values of the images. A calibration for in-line measurement of RI concentration was prepared in the range of 0.2-0.45 w/w%; validation against UV/VIS spectrometry showed good accuracy, with a relative error of 2.53%. The developed method was then used for a residence time distribution (RTD) measurement to characterize the dynamics of the blending process. Finally, the technique was applied to real-time feedback control of a continuous powder blending process. Machine vision-based direct or indirect API concentration determination is a promising, fast method with great potential for monitoring and controlling continuous pharmaceutical processes.
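Limit-of-detection and limit-of-quantification figures like those quoted above typically come from a linear calibration, here of pixel value against concentration, via the standard ICH formulas LOD = 3.3 sigma/S and LOQ = 10 sigma/S, where sigma is the residual standard deviation and S is the slope. A sketch with invented calibration data (not the study's measurements):

```python
import numpy as np

# Hypothetical calibration: mean pixel value of blend images vs API
# concentration (w/w%). These numbers are invented for illustration.
conc = np.array([0.05, 0.10, 0.20, 0.30, 0.40, 0.45])
pixel = np.array([12.1, 24.3, 48.0, 72.6, 96.2, 108.4])

slope, intercept = np.polyfit(conc, pixel, 1)
residuals = pixel - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # residual standard deviation

lod = 3.3 * sigma / slope              # ICH Q2(R1) limit of detection
loq = 10.0 * sigma / slope             # ICH Q2(R1) limit of quantification
print(f"LOD = {lod:.3f} w/w%, LOQ = {loq:.3f} w/w%")
```

The same fitted line, inverted, turns an in-line pixel reading back into a concentration estimate, which is what makes real-time feedback control possible.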
Affiliation(s)
- Dorián László Galata
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- Lilla Alexandra Mészáros
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- Máté Ficzere
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- Panna Vass
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- Brigitta Nagy
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- Edina Szabó
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- András Domokos
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- Attila Farkas
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- István Csontos
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- György Marosi
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
- Zsombor Kristóf Nagy
- Department of Organic Chemistry and Technology, Budapest University of Technology and Economics, H-1111, Budapest, Műegyetem rakpart 3, Hungary
48
Wang C, Li Z, Wang T, Xu X, Zhang X, Li D. Intelligent fish farm-the future of aquaculture. Aquac Int 2021; 29:2681-2711. [PMID: 34539102 PMCID: PMC8435764 DOI: 10.1007/s10499-021-00773-8] [Received: 05/25/2021] [Accepted: 08/27/2021]
Abstract
With the continuous expansion of aquaculture scale and density, contemporary aquaculture methods have been pushed to overproduce, accelerating the imbalance of the water environment, increasing the frequency of fish diseases, and degrading aquatic product quality. Moreover, because the agricultural workforce in many parts of the world is aging, fishery production faces a looming labor shortage, and aquaculture methods are in urgent need of change. Modern information technology has gradually penetrated the various fields of agriculture, and the concept of the intelligent fish farm has begun to take shape. The intelligent fish farm addresses the precise work of increasing oxygen, optimizing feeding, reducing disease incidence, and accurately harvesting through the idea of "replacing humans with machines," so as to free up labor entirely and realize green, sustainable aquaculture. This paper reviews the application of intelligent fishery equipment, IoT, edge computing, 5G, and artificial intelligence algorithms in modern aquaculture, and analyzes existing problems and future development prospects. Based on different business requirements, design frameworks for the key functional modules in the construction of an intelligent fish farm are also proposed.
Affiliation(s)
- Cong Wang
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing, China
- Beijing Engineering and Technology Research Centre for Internet of Things in Agriculture, China Agricultural University, Beijing 100083, China
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Zhen Li
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing, China
- Beijing Engineering and Technology Research Centre for Internet of Things in Agriculture, China Agricultural University, Beijing 100083, China
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Tan Wang
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing, China
- Beijing Engineering and Technology Research Centre for Internet of Things in Agriculture, China Agricultural University, Beijing 100083, China
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Xianbao Xu
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing, China
- Beijing Engineering and Technology Research Centre for Internet of Things in Agriculture, China Agricultural University, Beijing 100083, China
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
- Xiaoshuan Zhang
- College of Engineering, China Agricultural University, Beijing 100083, China
- Daoliang Li
- National Innovation Center for Digital Fishery, China Agricultural University, Beijing, China
- Beijing Engineering and Technology Research Centre for Internet of Things in Agriculture, China Agricultural University, Beijing 100083, China
- College of Information and Electrical Engineering, China Agricultural University, Beijing 100083, China
49
Lee JH, Yu HJ, Kim MJ, Kim JW, Choi J. Automated cephalometric landmark detection with confidence regions using Bayesian convolutional neural networks. BMC Oral Health 2020; 20:270. [PMID: 33028287 PMCID: PMC7541217 DOI: 10.1186/s12903-020-01256-7] [Received: 04/22/2020] [Accepted: 09/20/2020]
Abstract
Background Despite the integral role of cephalometric analysis in orthodontics, cephalometric landmark tracing has been limited by concerns about reliability and accuracy. Attempts at developing automatic plotting systems have been made continuously, but they remain insufficient for clinical application owing to the low reliability of specific landmarks. In this study, we aimed to develop a novel framework for locating cephalometric landmarks with confidence regions using Bayesian convolutional neural networks (BCNN). Methods We trained our model on the dataset from the ISBI 2015 grand challenge in dental X-ray image analysis. The overall algorithm consisted of region-of-interest (ROI) extraction around the landmarks followed by landmark estimation under uncertainty. Predictions produced by the Bayesian model were post-processed with respect to pixel probabilities and uncertainties. Results Our framework showed a mean landmark error (LE) of 1.53 ± 1.74 mm and achieved successful detection rates (SDR) of 82.11%, 92.28%, and 95.95% within the 2, 3, and 4 mm ranges, respectively. Notably, the error for Gonion, the most error-prone landmark in preceding studies, was nearly halved relative to earlier reports. Our results also demonstrated significantly higher performance in identifying anatomical abnormalities. By providing 95% confidence regions that account for uncertainty, our framework offers clinical convenience and can contribute to better decision-making. Conclusion Our framework provides cephalometric landmarks together with their confidence regions, and could serve as a tool for computer-aided diagnosis and education.
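The successful detection rate (SDR) reported above is simply the share of predicted landmarks whose radial error falls within a given threshold. A minimal sketch with synthetic landmark coordinates (not the study's data):

```python
import numpy as np

def sdr(pred, truth, thresholds_mm=(2.0, 3.0, 4.0)):
    """Successful detection rate: the fraction of landmarks whose
    Euclidean error is within each threshold (in mm)."""
    errors = np.linalg.norm(np.asarray(pred) - np.asarray(truth), axis=1)
    return {t: float(np.mean(errors <= t)) for t in thresholds_mm}

# Synthetic example: 4 landmarks with errors of 1.0, 2.5, 3.5, and 5.0 mm.
pred  = [(0.0, 1.0), (2.5, 0.0), (0.0, 3.5), (3.0, 4.0)]
truth = [(0.0, 0.0), (0.0, 0.0), (0.0, 0.0), (0.0, 0.0)]
print(sdr(pred, truth))   # → {2.0: 0.25, 3.0: 0.5, 4.0: 0.75}
```

Because the thresholds are nested, SDR is necessarily non-decreasing from the 2 mm to the 4 mm range, as in the figures the abstract reports.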
Affiliation(s)
- Jeong-Hoon Lee
- School of Mechanical Engineering, Yonsei University, 50 Yonsei Ro, Seodaemun Gu, Seoul, 03722, Republic of Korea
- Hee-Jin Yu
- School of Mechanical Engineering, Yonsei University, 50 Yonsei Ro, Seodaemun Gu, Seoul, 03722, Republic of Korea
- Min-Ji Kim
- Department of Orthodontics, School of Medicine, Ewha Womans University, Anyangcheon-ro 1071, Yangcheon-gu, Seoul, 07985, Republic of Korea
- Jin-Woo Kim
- Department of Oral and Maxillofacial Surgery, School of Medicine, Ewha Womans University, Anyangcheon-ro 1071, Yangcheon-gu, Seoul, 07985, Republic of Korea
- Jongeun Choi
- School of Mechanical Engineering, Yonsei University, 50 Yonsei Ro, Seodaemun Gu, Seoul, 03722, Republic of Korea
50
Kvæstad B, Nordtug T, Hagemann A. A machine vision system for tracking population behavior of zooplankton in small-scale experiments: a case study on salmon lice (Lepeophtheirus salmonis Krøyer, 1838) copepodite population responses to different light stimuli. Biol Open 2020; 9:bio050724. [PMID: 32554485 PMCID: PMC7328005 DOI: 10.1242/bio.050724] [Received: 01/06/2020] [Accepted: 05/28/2020]
Abstract
To achieve efficient, preventive measures against salmon lice (Lepeophtheirus salmonis Krøyer, 1838) infestation, a better understanding of the behavioral patterns of the planktonic life stages is key. To investigate light responses in L. salmonis copepodites, a non-intrusive experimental system was designed that measures behavioral responses in a 12.5-l volume using machine vision methods. The system successfully tracked the collective movement patterns of the sea lice population during exposure to different light stimuli emitted from alternating zones of the setup. The system could further be used to study the behavioral responses of various developmental stages of sea lice, or of other zooplankton, to different physical cues.
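Population-level tracking of this kind, collective activity per illumination zone rather than individual trajectories, can be sketched with frame differencing and per-zone counting. The frames, threshold, and zone layout below are toy assumptions, not the published system's parameters:

```python
import numpy as np

def zone_counts(prev_frame, frame, n_zones=3, thresh=30):
    """Count moving pixels (a crude proxy for copepodite activity) in each
    vertical zone of the tank image via frame differencing."""
    # Cast to int before subtracting to avoid uint8 wrap-around.
    diff = np.abs(frame.astype(int) - prev_frame.astype(int)) > thresh
    zones = np.array_split(diff, n_zones, axis=1)   # split into column bands
    return [int(z.sum()) for z in zones]

# Toy 6x6 grayscale frames: movement only in the left-most zone.
prev = np.zeros((6, 6), dtype=np.uint8)
curr = prev.copy()
curr[2:4, 0:2] = 200   # bright "animals" appear on the left
print(zone_counts(prev, curr))   # → [4, 0, 0]
```

Logging such counts over time, while the light stimulus alternates between zones, yields the population-level response curves the study describes.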
Affiliation(s)
- Bjarne Kvæstad
- SINTEF Ocean, Environment and New Resources, Brattørkaia 17C, NO-7010 Trondheim, Norway
- Trond Nordtug
- SINTEF Ocean, Environment and New Resources, Brattørkaia 17C, NO-7010 Trondheim, Norway
- Andreas Hagemann
- SINTEF Ocean, Environment and New Resources, Brattørkaia 17C, NO-7010 Trondheim, Norway