52. On Combining DeepSnake and Global Saliency for Detection of Orchard Apples. Applied Sciences-Basel 2021. [DOI: 10.3390/app11146269]
Abstract
To enable fast detection and recognition of apple fruit targets, this paper builds on the real-time DeepSnake deep-learning instance segmentation model and provides an algorithmic basis for the practical application of apple-picking robots. Because the initial detection results strongly influence the subsequent edge prediction, an automatic detection method for apple targets in natural environments is proposed that combines saliency detection with traditional color-difference methods. The histogram backprojection algorithm, applied together with the original image, further refines the saliency results. To handle possible overlapping fruit regions in the saliency map, a dynamic adaptive overlapping-target separation algorithm locates each single fruit and determines its initial contour for DeepSnake. Finally, target fruits are labeled based on the instance segmentation results. In the experiments, 300 training samples were used to train the DeepSnake model, and a self-built dataset of 1036 apple images covering various natural conditions was used for testing. The detection accuracies for non-overlapping fruits, overlapping fruits, fruits shaded by branches and leaves, and poor illumination conditions were 99.12%, 94.78%, 90.71%, and 94.46%, respectively. The overall detection accuracy was 95.66%, and the average processing time over the 1036 test images was 0.42 s, showing that the proposed algorithm can effectively separate overlapping fruits from a modest number of training samples and achieve rapid, accurate detection of apple targets.
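As a concrete illustration of the backprojection refinement step described above, the sketch below uses OpenCV to learn a hue-saturation histogram from salient pixels and backproject it onto the image; the function name, histogram bins, and threshold are illustrative assumptions, not the authors' implementation.

```python
import cv2

# Minimal sketch: refine a saliency result with histogram backprojection,
# roughly in the spirit described in the abstract (not the authors' code).
# `image` is a BGR orchard photo; `salient_mask` is a uint8 binary mask
# from the saliency / color-difference stage.
def refine_with_backprojection(image, salient_mask):
    hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    # Build a hue-saturation histogram from the salient (fruit-like) pixels.
    hist = cv2.calcHist([hsv], [0, 1], salient_mask, [30, 32],
                        [0, 180, 0, 256])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    # Backproject: each pixel receives the likelihood of belonging to the
    # fruit color model learned above.
    backproj = cv2.calcBackProject([hsv], [0, 1], hist,
                                   [0, 180, 0, 256], scale=1)
    # Smooth and threshold to obtain a cleaner fruit-region mask.
    disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    backproj = cv2.filter2D(backproj, -1, disc)
    _, refined = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    return refined
```

In a pipeline like the one described, the refined mask would feed the overlapping-target separation stage that supplies initial contours to DeepSnake.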
53. Adamopoulos A, Chatzopoulos EG, Anastassopoulos G, Detorakis E. Eyelid basal cell carcinoma classification using shallow and deep learning artificial neural networks. Evolving Systems 2021. [DOI: 10.1007/s12530-021-09383-4]
54. Pan Z, Lu W, Fan Y, Li J. Identification of groundwater contamination sources and hydraulic parameters based on Bayesian regularization deep neural network. Environmental Science and Pollution Research International 2021; 28:16867-16879. [PMID: 33398760] [DOI: 10.1007/s11356-020-11614-1]
Abstract
Simultaneous identification of groundwater contamination source characteristics and hydraulic parameters, such as hydraulic conductivities, can produce a highly nonlinear inverse problem that significantly hinders identification. Surrogate models are commonly used to relieve the computational burden caused by the many calls to the simulation model during identification, but shallow-learning surrogates may have limited capacity to fit highly nonlinear problems. In this study, a simulation-optimization method based on a Bayesian regularization deep neural network (BRDNN) surrogate model was therefore proposed to solve the highly nonlinear inverse problem efficiently. The method identified eight variables: the locations and release intensities of two pollution sources and the hydraulic conductivities of two partitions. Three hidden layers were employed in the BRDNN surrogate, which markedly improved its capacity to fit the nonlinear mapping of the simulation model, and Bayesian regularization was applied during network training to prevent overfitting. The results indicated that the BRDNN captured the input-output relationship of the highly nonlinear inverse problem, substantially reducing computational cost while maintaining a desirable level of accuracy, and that BRDNN-based simulation-optimization yielded stable and reliable inversion results for the groundwater contamination sources and hydraulic parameters.
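To make the surrogate setup concrete, here is a minimal PyTorch sketch of a three-hidden-layer network mapping the eight inversion variables to simulated observations; plain L2 weight decay is used as a simplified stand-in for full Bayesian regularization (which additionally re-estimates the penalty weight from the evidence), and the output dimension and layer width are assumed placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch of a surrogate like the one described: a deep network
# mapping 8 inversion variables (two source locations/intensities, two
# conductivities) to simulated observations.
class SurrogateNet(nn.Module):
    def __init__(self, n_in=8, n_out=20, width=64):
        super().__init__()
        self.net = nn.Sequential(          # three hidden layers, as in the paper
            nn.Linear(n_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_out),
        )

    def forward(self, x):
        return self.net(x)

# x: sampled parameter sets, y: corresponding simulation-model outputs.
# weight_decay provides the L2 penalty standing in for Bayesian
# regularization in this sketch.
def train_surrogate(x, y, epochs=2000):
    model = SurrogateNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return model
```

Once trained on simulation-model input-output pairs, such a surrogate replaces the costly simulator inside the optimization loop.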
Affiliation(s)
- Zidong Pan, Wenxi Lu, Yue Fan, Jiuhui Li: Key Laboratory of Groundwater Resources and Environment, Ministry of Education, Jilin University, Changchun, 130021, China; Jilin Provincial Key Laboratory of Water Resources and Environment, Jilin University, Changchun, 130021, China; College of New Energy and Environment, Jilin University, Changchun, 130021, China
55. Aouiti C, Hui Q, Jallouli H, Moulay E. Sliding mode control-based fixed-time stabilization and synchronization of inertial neural networks with time-varying delays. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05833-x]
56. Tomasiello S, Loia V, Khaliq A. A granular recurrent neural network for multiple time series prediction. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05791-4]
57. Liu S, Ni’mah I, Menkovski V, Mocanu DC, Pechenizkiy M. Efficient and effective training of sparse recurrent neural networks. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05727-y]
Abstract
Recurrent neural networks (RNNs) have achieved state-of-the-art performance on various applications. However, RNNs tend to be memory-bandwidth limited in practice and require long training and inference times. These problems are at odds with training and deploying RNNs on resource-limited devices, where the memory and floating-point operation (FLOP) budgets are strictly constrained. Conventional model compression techniques usually focus on reducing inference cost and operate on a costly pre-trained model. Recently, dynamic sparse training has been proposed to accelerate training by learning sparse neural networks directly from scratch, but previous sparse-training techniques were designed mainly for convolutional neural networks and multi-layer perceptrons. In this paper, we introduce a method to train intrinsically sparse RNN models with a fixed number of parameters and FLOPs throughout training. We demonstrate state-of-the-art sparse performance with long short-term memory and recurrent highway networks on widely used language-modeling and text-classification tasks. The results show that, contrary to the common belief that training a sparse neural network from scratch yields worse performance than a dense network, sparse training with adaptive connectivity can usually outperform dense models for RNNs.
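The prune-and-regrow cycle at the heart of dynamic sparse training can be sketched as follows; this follows the general SET-style recipe such methods build on, with an illustrative pruning fraction and zero-initialized regrowth rather than the authors' exact settings.

```python
import torch

# Minimal sketch of one prune-and-regrow step of dynamic sparse training
# (in the spirit of SET; details are illustrative, not the paper's exact
# algorithm). `weight` is one 2-D RNN weight matrix and `mask` its binary
# sparsity pattern; the total number of active parameters stays fixed.
def prune_and_regrow(weight, mask, prune_frac=0.3):
    with torch.no_grad():
        active = mask.bool()
        n_prune = int(prune_frac * active.sum().item())
        if n_prune == 0:
            return weight, mask
        # 1) Prune: drop the smallest-magnitude active weights.
        threshold = weight[active].abs().kthvalue(n_prune).values
        drop = active & (weight.abs() <= threshold)
        mask[drop] = 0.0
        # 2) Regrow: activate the same number of random inactive positions.
        inactive = (mask == 0).nonzero(as_tuple=False)
        idx = inactive[torch.randperm(len(inactive))[:int(drop.sum())]]
        mask[idx[:, 0], idx[:, 1]] = 1.0
        # New connections start at zero; pruned weights are zeroed out.
        weight.mul_(mask)
    return weight, mask
```

Applied periodically to each RNN weight matrix during training, this keeps the number of active parameters, and hence the FLOP count, fixed while letting the connectivity pattern adapt to the task.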