1. Explainable artificial intelligence for the automated assessment of the retinal vascular tortuosity. Med Biol Eng Comput 2024; 62:865-881. PMID: 38060101; PMCID: PMC10881731; DOI: 10.1007/s11517-023-02978-w.
Abstract
Retinal vascular tortuosity is an excessive bending and twisting of the blood vessels in the retina that is associated with numerous health conditions. We propose a novel methodology for the automated assessment of retinal vascular tortuosity from color fundus images. Our methodology takes several anatomical factors into consideration to weigh the importance of each individual blood vessel. First, we use deep neural networks to produce a robust extraction of the different anatomical structures. Then, the weighting coefficients that are required for the integration of the different anatomical factors are adjusted using evolutionary computation. Finally, the proposed methodology also provides visual representations that explain the contribution of each individual blood vessel to the predicted tortuosity, hence allowing us to understand the decisions of the model. We validate our proposal on a dataset of color fundus images providing a consensus ground truth as well as the annotations of five clinical experts. Our proposal outperforms previous automated methods and offers a performance that is comparable to that of the clinical experts. Our methodology is therefore a viable alternative for the assessment of retinal vascular tortuosity, which could facilitate the use of this biomarker in clinical practice and medical research.
2. Reverse Time Migration and Genetic Algorithms Combined for Reconstruction in Transluminal Shear Wave Elastography: An In Silico Case Study. Ultrasonics 2024; 138:107206. PMID: 38008004; DOI: 10.1016/j.ultras.2023.107206.
Abstract
A new reconstruction approach that combines Reverse Time Migration (RTM) and Genetic Algorithms (GAs) is proposed for solving the inverse problem associated with transluminal shear wave elastography. The transurethral identification of the first thermal lesion generated by transrectal High Intensity Focused Ultrasound (HIFU) for the treatment of prostate cancer was used to preliminarily test the combined reconstruction method in silico. The RTM method was optimised by comparing reconstruction images from several cross-correlation techniques, including a newly proposed one, and from different device configurations in terms of the number and arrangement of emitters and receivers on the conceptual transurethral probe. The best results were obtained for the newly proposed cross-correlation method and a device configuration with 3 emitters and 32 receivers. The RTM reconstructions did not completely contour the shape of the HIFU lesion; however, as planned for the combined approach, the areas in the RTM images with high levels of correlation were used to narrow down the search space of the GA-based technique. The GA-based technique was set to find the location of the HIFU lesion and the increments in stiffness and viscosity due to thermal damage. Overall, the combined approach achieves a lower level of error in the reconstructed values, and in a shorter computational time, than the GA-based technique alone. The lowest errors were achieved for the location of the HIFU lesion, followed by the contrast ratio of stiffness between thermally treated tissue and non-treated normal tissue; the corresponding ratio of viscosity showed a higher level of error. Further investigation considering diverse reconstruction scenarios and experimental data is required to fully evaluate the feasibility of the combined approach.
3. Soft computing applications in the field of human factors and ergonomics: A review of the past decade of research. Applied Ergonomics 2024; 114:104132. PMID: 37672916; DOI: 10.1016/j.apergo.2023.104132.
Abstract
The main objectives of this study were to (1) review the literature on the applications of soft computing concepts to the field of human factors and ergonomics (HFE) between 2013 and 2022 and (2) highlight future developments and trends. Multiple soft computing methods and techniques have been investigated for their ability to address various applications in HFE effectively. These techniques include fuzzy logic, artificial neural networks, genetic algorithms, and their combinations. Applications of these methods in HFE are highlighted in 104 articles selected from 406 papers. The results of this study help address the challenges of complexity, vagueness, and imprecision in human factors and ergonomics research through the application of soft computing methodologies.
4. How to use machine learning and fuzzy cognitive maps to test hypothetical scenarios in health behavior change interventions: a case study on fruit intake. BMC Public Health 2023; 23:2478. PMID: 38082297; PMCID: PMC10714655; DOI: 10.1186/s12889-023-17367-z.
Abstract
BACKGROUND Intervention planners use logic models to design evidence-based health behavior interventions. Logic models that capture the complexity of health behavior necessitate additional computational techniques to inform decisions with respect to the design of interventions. OBJECTIVE Using empirical data from a real intervention, the present paper demonstrates how machine learning can be used together with fuzzy cognitive maps to assist in designing health behavior change interventions. METHODS A modified real-coded genetic algorithm was applied to longitudinal data from a real intervention study. The dataset contained information about 15 determinants of fruit intake among 257 adults in the Netherlands. Fuzzy cognitive maps were used to analyze the effect of two hypothetical intervention scenarios designed by domain experts. RESULTS Simulations showed that the specified hypothetical interventions would have a small impact on fruit intake. The results are consistent with the empirical evidence used in this paper. CONCLUSIONS Machine learning together with fuzzy cognitive maps can assist in building health behavior interventions with complex logic models. Testing hypothetical scenarios may help interventionists fine-tune the intervention components, thus increasing their potential effectiveness.
5. Optimizing Sanitation Network Upgrading Projects in Slum Areas. J Urban Health 2023; 100:811-833. PMID: 37535302; PMCID: PMC10447308; DOI: 10.1007/s11524-023-00751-w.
Abstract
Infrastructure upgrading projects are a key element in enhancing the livelihood of residents in slum areas. These projects face significant constructability challenges common to dense-urban construction coupled with the unique socioeconomic challenges of operating in slums. This research focuses on sanitation network upgrading projects in slum areas and proposes a novel methodology capable of (1) accounting for the unique constructability challenges for these projects, (2) accelerating the provision of sanitation services, and (3) optimizing construction decisions. The key contribution of this research to the body of knowledge is in developing a comprehensive construction planning framework capable of achieving these three objectives. The proposed framework focuses specifically on sewer line upgrading within the larger sanitation network upgrading projects. This framework consists of five main models that can guide planners in selecting the appropriate equipment sizes, trench system configuration, and optimal equipment routing, in addition to identifying all possible execution sequences along with the corresponding construction cost and duration of each sequence. Most notably, this framework proposes an approach to assess the serviceability of different construction plans, measured by how fast sanitary services can be provided to slum dwellers. A multi-objective genetic algorithm optimization model is developed to identify the optimal construction plans that accelerate the sanitary service provision to residents while minimizing construction costs. A real-world example is presented to demonstrate the model capabilities in optimizing construction plans.
6. Adaptive genetic algorithm for user preference discovery in multi-criteria recommender systems. Heliyon 2023; 9:e18183. PMID: 37501952; PMCID: PMC10368822; DOI: 10.1016/j.heliyon.2023.e18183.
Abstract
A Multi-Criteria Recommender System (MCRS) represents users' preferences on several factors of products and utilizes these preferences while making product recommendations. In recent studies, MCRS has demonstrated the potential of applying Multi-Criteria Decision Making methods to make effective recommendations in several application domains. However, eliciting actual user preferences is still a major challenge in MCRS since there are many criteria for each product. Therefore, this paper proposes a three-phase adaptive genetic algorithm-based approach to discover user preferences in MCRS. Initially, we build a model by assigning weights to multi-criteria features and then learn the preferences on each criterion during similarity computation among users through a genetic algorithm. This allows us to know the actual preference of the user on each criterion and to find other like-minded users for decision making. Finally, products are recommended after making predictions. The comparative results demonstrate that the proposed genetic algorithm-based approach outperforms both multi-criteria and single-criterion recommender systems on the Yahoo! Movies dataset across various evaluation measures.
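The weight-learning step described in this abstract can be sketched as a small genetic algorithm that evolves per-criterion weights so that a weighted sum of criteria ratings approximates the overall rating. Everything below (the criteria names, the toy ratings, and the GA settings) is an illustrative assumption, not data or code from the paper:

```python
import random

random.seed(0)

# Hypothetical toy data: per-item criteria ratings and an overall rating (1-5 scale).
CRITERIA = ["story", "acting", "visuals"]
RATINGS = [
    ([4, 5, 3], 4.2),
    ([2, 3, 5], 3.0),
    ([5, 4, 4], 4.5),
    ([1, 2, 2], 1.6),
]

def fitness(weights):
    """Negative mean squared error of the weighted prediction (higher is better)."""
    total = sum(weights)
    w = [x / total for x in weights]  # normalise so the weights sum to 1
    err = 0.0
    for crits, overall in RATINGS:
        pred = sum(wi * ci for wi, ci in zip(w, crits))
        err += (pred - overall) ** 2
    return -err / len(RATINGS)

def evolve(pop_size=30, generations=60):
    pop = [[random.random() for _ in CRITERIA] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]            # averaging crossover
            i = random.randrange(len(child))
            child[i] = max(1e-6, child[i] + random.gauss(0, 0.1))  # Gaussian mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    s = sum(best)
    return [x / s for x in best]  # learned, normalised criteria weights

weights = evolve()
print(dict(zip(CRITERIA, (round(w, 2) for w in weights))))
```

The learned weights can then feed a weighted similarity between users, which is the role they play in the paper's pipeline.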
7. Microalgal cultures for the remediation of wastewaters with different nitrogen to phosphorus ratios: Process modelling using artificial neural networks. Environmental Research 2023; 231:116076. PMID: 37156357; DOI: 10.1016/j.envres.2023.116076.
Abstract
Microalgae have remarkable potential for wastewater bioremediation since they can efficiently uptake nitrogen and phosphorus in a sustainable and environmentally friendly treatment system. However, wastewater composition greatly depends on its source and has significant seasonal variability. This study aimed to evaluate the impact of different N:P molar ratios on the growth of Chlorella vulgaris and nutrient removal from synthetic wastewater. Furthermore, artificial neural network (ANN) threshold models, optimised by genetic algorithms (GAs), were used to model biomass productivity (BP) and nitrogen/phosphorus removal rates (RRN/RRP), and the impact of various input culture variables on these parameters was evaluated. Microalgal growth was not nutrient limited, since the average biomass productivities and specific growth rates were similar between the experiments. Nutrient removal efficiencies/rates reached 92.0 ± 0.6%/6.15 ± 0.01 mgN L-1 d-1 for nitrogen and 98.2 ± 0.2%/0.92 ± 0.03 mgP L-1 d-1 for phosphorus. Low nitrogen concentration limited phosphorus uptake at low N:P ratios (e.g., 2 and 3, yielding 36 ± 2 mgDW mgP-1 and 39 ± 3 mgDW mgP-1, respectively), while low phosphorus concentration limited nitrogen uptake at high ratios (e.g., 66 and 67, yielding 9.0 ± 0.4 mgDW mgN-1 and 8.8 ± 0.3 mgDW mgN-1, respectively). ANN models showed high fitting performance, with coefficients of determination of 0.951, 0.800, and 0.793 for BP, RRN, and RRP, respectively. In summary, this study demonstrated that microalgae can successfully grow and adapt to N:P molar ratios between 2 and 67, but nutrient uptake was impacted by these variations, especially at the lowest and highest N:P molar ratios. Furthermore, GA-ANN models proved to be relevant tools for microalgal growth modelling and control: their high fitting performance in characterising this biological system can help reduce the experimental effort for culture monitoring (human resources and consumables), thus decreasing the costs of microalgae production.
8. Evaluation of adaptive neuro-fuzzy inference system-genetic algorithm in the prediction and optimization of NOx emission in cement precalcining kiln. Environmental Science and Pollution Research International 2023; 30:54835-54845. PMID: 36882651; DOI: 10.1007/s11356-023-26282-0.
Abstract
The increasing demand for cement due to urbanization growth in African countries may result in an upsurge of pollutants associated with its production. One major air pollutant in cement production is nitrogen oxides (NOx), which are reported to cause serious damage to human health and the ecosystem. NOx emissions from the operation of a cement rotary kiln were studied with plant data using the ASPEN Plus software. It is essential to understand the effects of calciner temperature, tertiary air pressure, fuel gas, raw feed material, and fan damper on NOx emissions from a precalcining kiln. In addition, the capability of a combined adaptive neuro-fuzzy inference system and genetic algorithm (ANFIS-GA) to predict and optimize NOx emissions from a precalcining cement kiln is evaluated. The simulation results were in good agreement with the experimental results, with a root mean square error of 2.05, variance accounted for (VAF) of 96.0%, average absolute error (AAE) of 0.4097, and correlation coefficient of 0.963. Further, the optimal NOx emission was 273.0 mg/m3, with the corresponding parameters determined by the algorithm being a calciner temperature of 845 °C, tertiary air pressure of -4.50 mbar, fuel gas flow of 8550 m3/h, raw feed material of 200 t/h, and damper opening of 60%. Consequently, it is recommended that ANFIS be combined with GA for effective prediction and optimization of NOx emission in cement plants.
9. Student timetabling genetic algorithm accounting for student preferences. PeerJ Comput Sci 2023; 9:e1200. PMID: 37346570; PMCID: PMC10280284; DOI: 10.7717/peerj-cs.1200.
Abstract
Universities face a constant challenge when distributing students and allocating them to their required classes, especially for a large mass of students. Generating feasible timetables is a strenuous task that requires plenty of resources, which makes it impractical to take student preferences into consideration during the process. Timetabling and scheduling problems are proven to be NP-hard due to their complex nature and large search spaces. A genetic algorithm (GA) that assigns students to their classes based on their preferences is proposed as a solution to this problem and is implemented in this article. The GA's performance is enhanced by applying different metaheuristic concepts and by tailoring the genetic operators to the given problem. The quality of the solutions generated is boosted further with the unique repair and improvement functions that were implemented in conjunction with the genetic algorithm. The success of the GA was evaluated using datasets of varying complexity and by assessing the quality of the solutions generated. The results obtained were promising: the algorithm guarantees feasible solutions and satisfies more than 90% of student preferences even for the most complex problems.
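The assign-then-repair idea described in this abstract can be sketched on a toy instance. The section counts, capacities, and preference data below are invented for illustration, and the repair function here is a simple greedy stand-in for the paper's more elaborate repair and improvement operators:

```python
import random

random.seed(1)

# Hypothetical toy instance: 6 students, 2 sections, capacity 3 each.
N_STUDENTS, N_SECTIONS, CAPACITY = 6, 2, 3
PREFS = [0, 0, 0, 0, 1, 1]  # assumed preferred section of each student

def repair(assign):
    """Greedy repair: move surplus students out of over-full sections."""
    assign = assign[:]
    for s in range(N_SECTIONS):
        members = [i for i, a in enumerate(assign) if a == s]
        while len(members) > CAPACITY:
            moved = members.pop()
            others = [t for t in range(N_SECTIONS) if t != s]
            target = min(others, key=lambda t: sum(1 for a in assign if a == t))
            assign[moved] = target  # send the surplus student to the least-loaded section
    return assign

def fitness(assign):
    """Number of students placed in their preferred section, after repair."""
    a = repair(assign)
    return sum(1 for i, s in enumerate(a) if s == PREFS[i])

def ga(pop_size=20, generations=40):
    pop = [[random.randrange(N_SECTIONS) for _ in range(N_STUDENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_STUDENTS)
            child = a[:cut] + b[cut:]                 # one-point crossover
            if random.random() < 0.3:                 # mutation: reassign one student
                child[random.randrange(N_STUDENTS)] = random.randrange(N_SECTIONS)
            children.append(child)
        pop = survivors + children
    return repair(max(pop, key=fitness))

best = ga()
print(best, fitness(best))
```

Because the repair step runs inside the fitness evaluation, every candidate is scored as a feasible timetable, which mirrors how the paper guarantees feasibility while maximizing satisfied preferences.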
10. AUTO-HAR: An adaptive human activity recognition framework using an automated CNN architecture design. Heliyon 2023; 9:e13636. PMID: 36852018; PMCID: PMC9958436; DOI: 10.1016/j.heliyon.2023.e13636.
Abstract
Convolutional neural networks (CNNs) have demonstrated exceptional results in the analysis of time-series data when used for Human Activity Recognition (HAR). The manual design of such neural architectures is an error-prone and time-consuming process. The search for optimal CNN architectures is considered a revolution in the design of neural networks. By means of Neural Architecture Search (NAS), network architectures can be designed and optimized automatically; thus, the optimal CNN architecture representation can be found without the limitations of human experience and thinking modes. Evolutionary algorithms, which are derived from evolutionary mechanisms such as natural selection and genetics, have been widely employed to develop and optimize NAS because they can handle a black-box optimization process, designing appropriate solution representations and search paradigms without explicit mathematical formulations or gradient information. The genetic algorithm (GA) is widely used to find optimal or near-optimal solutions to difficult problems. Considering these characteristics, an efficient human activity recognition architecture (AUTO-HAR) is presented in this study. Using an evolutionary GA to select the optimal CNN architecture, the current study proposes a novel encoding schema and a novel search space with a much broader range of operations to effectively search for the best architectures for HAR tasks. In addition, the proposed search space allows a reasonable degree of depth because it does not limit the maximum length of the devised task architecture. To test the effectiveness of the proposed framework for HAR tasks, three datasets were utilized: UCI-HAR, Opportunity, and DAPHNET. Based on the results of this study, the proposed method can efficiently recognize human activity, with average accuracies of 98.5% (±1.1), 98.3%, and 99.14% (±0.8) for UCI-HAR, Opportunity, and DAPHNET, respectively.
11. A machine learning framework for discovering high entropy alloys phase formation drivers. Heliyon 2023; 9:e12859. PMID: 36704292; PMCID: PMC9871219; DOI: 10.1016/j.heliyon.2023.e12859.
Abstract
In the past years, high entropy alloys (HEAs) have attracted great interest because of their superior properties. Phase prediction using machine learning (ML) methods has been one of the main research themes in HEAs in the past three years. Although various ML-based phase prediction works exhibited high accuracy, only a few studied the variables that drive phase formation in HEAs, and those studies did so by incorporating domain knowledge into the feature engineering part of the ML framework. In this work, we tackle this problem from a different direction by predicting the phase of HEAs based only on the concentrations of the alloy constituent elements. Then, pruned tree models and linear correlation are used to develop simple primitive prediction rules, which are used with self-organizing maps (SOMs) and constructed Euclidean spaces to formulate the discovery of the phase formation drivers as an optimization problem. Genetic algorithm (GA) optimization results reveal that phase formation is affected by the electron affinity, molar volume, and resistivity of the constituent elements. Moreover, one of the primitive prediction rules reveals that FCC phase formation in the AlCoCrFeNiTiCu family of high entropy alloys can be predicted with 87% accuracy by knowing only the concentrations of Al and Cu.
12. Gene signature for the prediction of the trajectories of sepsis-induced acute kidney injury. Crit Care 2022; 26:398. PMID: 36544199; PMCID: PMC9773539; DOI: 10.1186/s13054-022-04234-3.
Abstract
BACKGROUND Acute kidney injury (AKI) is a common complication in sepsis. However, the trajectories of sepsis-induced AKI and their transcriptional profiles are not well characterized. METHODS Sepsis patients admitted to centres participating in Chinese Multi-omics Advances In Sepsis (CMAISE) from November 2020 to December 2021 were enrolled, and gene expression in peripheral blood mononuclear cells was measured on Day 1. The renal function trajectory was measured by the renal component of the SOFA score (SOFArenal) on Days 1 and 3. Transcriptional profiles on Day 1 were compared between these renal function trajectories, and a support vector machine (SVM) was developed to distinguish transient from persistent AKI. RESULTS A total of 172 sepsis patients were enrolled during the study period. The renal function trajectory was classified into four types: non-AKI (SOFArenal = 0 on Days 1 and 3, n = 50), persistent AKI (SOFArenal > 0 on Days 1 and 3, n = 62), transient AKI (SOFArenal > 0 on Day 1 and SOFArenal = 0 on Day 3, n = 50) and worsening AKI (SOFArenal = 0 on Day 1 and SOFArenal > 0 on Day 3, n = 10). The persistent AKI group showed severe organ dysfunction and prolonged requirements for organ support. The worsening AKI group showed the least organ dysfunction on Day 1 but had higher serum lactate and more prolonged use of vasopressors than the non-AKI and transient AKI groups. There were 2,091 upregulated and 1,902 downregulated genes (adjusted p < 0.05) between the persistent and transient AKI groups, with enrichment in the plasma membrane complex, receptor complex, and T-cell receptor complex. A 43-gene SVM model was developed using the genetic algorithm, which showed significantly greater performance predicting persistent AKI than the model based on clinical variables in a holdout subset (AUC: 0.948 [0.912, 0.984] vs. 0.739 [0.648, 0.830]; p < 0.01 for DeLong's test). CONCLUSIONS Our study identified four subtypes of sepsis-induced AKI based on kidney injury trajectories. The landscape of host response aberrations across these subtypes was characterized. An SVM model based on a gene signature was developed to predict renal function trajectories, and showed better performance than the clinical variable-based model. Future studies are warranted to validate the gene model in distinguishing persistent from transient AKI.
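The GA-driven signature search behind the 43-gene model can be illustrated with a wrapper-style genetic algorithm that evolves a bit mask over candidate genes. Everything below is an invented stand-in: the data are synthetic, only three "genes" are informative by construction, and a nearest-centroid classifier replaces the paper's SVM to keep the sketch dependency-free:

```python
import random

random.seed(2)

# Synthetic stand-in for expression data: 20 "genes", only genes 0-2 informative.
N_GENES, N_SAMPLES = 20, 60

def make_sample(label):
    x = [random.gauss(0, 1) for _ in range(N_GENES)]
    if label == 1:                # shift the informative genes for the positive class
        for g in range(3):
            x[g] += 2.0
    return x

DATA = [(make_sample(i % 2), i % 2) for i in range(N_SAMPLES)]

def accuracy(mask):
    """Nearest-centroid training accuracy using only the genes selected by the mask."""
    sel = [g for g in range(N_GENES) if mask[g]]
    if not sel:
        return 0.0
    cents = {}
    for lbl in (0, 1):
        pts = [x for x, y in DATA if y == lbl]
        cents[lbl] = [sum(p[g] for p in pts) / len(pts) for g in sel]
    correct = 0
    for x, y in DATA:
        d = {lbl: sum((x[g] - c) ** 2 for g, c in zip(sel, cents[lbl]))
             for lbl in (0, 1)}
        correct += (min(d, key=d.get) == y)
    return correct / len(DATA)

def fitness(mask):
    # Penalise large signatures to favour a compact gene panel.
    return accuracy(mask) - 0.01 * sum(mask)

def ga(pop_size=24, generations=30):
    pop = [[random.randint(0, 1) for _ in range(N_GENES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        keep = pop[: pop_size // 2]
        kids = []
        while len(keep) + len(kids) < pop_size:
            a, b = random.sample(keep, 2)
            cut = random.randrange(1, N_GENES)
            kid = a[:cut] + b[cut:]              # one-point crossover
            g = random.randrange(N_GENES)
            kid[g] = 1 - kid[g]                  # bit-flip mutation
            kids.append(kid)
        pop = keep + kids
    return max(pop, key=fitness)

best = ga()
print("selected genes:", [g for g in range(N_GENES) if best[g]])
```

A real signature search would score each mask by cross-validated performance of the downstream classifier rather than training accuracy; the structure of the loop is the same.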
13. Predicting the winning team in basketball: A novel approach. Heliyon 2022; 8:e12189. PMID: 36561688; PMCID: PMC9764182; DOI: 10.1016/j.heliyon.2022.e12189.
Abstract
Predicting the winner of a basketball game is a difficult task, due to the inherent complexity of team sports. All 10 players on the court interact with each other and this intricate web of relationships makes the prediction task difficult, especially if the prediction model aims to account for how different players amplify or inhibit other players. Building our approach on complex systems and prototype heuristics, we identify player types through clustering and use cluster memberships to train prediction models. We achieve a prediction accuracy of ∼76% over a period of five NBA seasons and a prediction accuracy of ∼71% over a season not used for model training. Our best models outperform human experts on prediction accuracy. Our research contributes to the literature by showing that player stereotypes extracted from individual statistics are a valid approach to predict game winners.
14. Regret-Based Nash Equilibrium Sorting Genetic Algorithm for Combinatorial Game Theory Problems with Multiple Players. Evolutionary Computation 2022; 30:447-478. PMID: 35231120; DOI: 10.1162/evco_a_00308.
Abstract
We introduce a regret-based fitness assignment strategy for evolutionary algorithms to find Nash equilibria in noncooperative simultaneous combinatorial game theory problems where it is computationally intractable to enumerate all decision options of the players involved in the game. Applications of evolutionary algorithms to noncooperative simultaneous games have been limited due to challenges in guiding the evolutionary search toward equilibria, which are usually inferior points in the objective space. We propose a regret-based approach to select candidate decision options of the players for the next generation in a multipopulation genetic algorithm called the Regret-Based Nash Equilibrium Sorting Genetic Algorithm (RNESGA). We show that RNESGA can converge to multiple Nash equilibria in a single run using two- and three-player competitive knapsack games and other games from the literature. We also show that pure payoff-based fitness assignment strategies perform poorly in three-player games.
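The regret notion at the heart of this fitness assignment can be illustrated on a small bimatrix game. A profile's regret is the largest unilateral gain any player forgoes by not deviating, and a profile with zero regret is a pure Nash equilibrium; the Prisoner's Dilemma payoffs below are a standard textbook example, not taken from the paper:

```python
# Payoff matrices for a 2-player game, indexed [row action][column action].
# Actions: 0 = Cooperate, 1 = Defect (Prisoner's Dilemma payoffs).
P1 = [[3, 0], [5, 1]]   # row player's payoffs
P2 = [[3, 5], [0, 1]]   # column player's payoffs

def regret(i, j):
    """Max unilateral improvement available to either player at profile (i, j)."""
    r1 = max(P1[k][j] for k in range(2)) - P1[i][j]   # row player's best deviation gain
    r2 = max(P2[i][k] for k in range(2)) - P2[i][j]   # column player's best deviation gain
    return max(r1, r2)

# Profiles with zero regret are pure Nash equilibria.
equilibria = [(i, j) for i in range(2) for j in range(2) if regret(i, j) == 0]
print(equilibria)  # [(1, 1)] — mutual defection
```

In RNESGA-style algorithms, minimising this regret (rather than maximising raw payoff) is what steers the population toward equilibria, which are typically not payoff-optimal points.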
15. Evolving neural networks through bio-inspired parent selection in dynamic environments. Biosystems 2022; 218:104686. PMID: 35525435; DOI: 10.1016/j.biosystems.2022.104686.
Abstract
Environmental variability often degrades the performance of algorithms designed to capture the global convergence of a given search space. Several approaches have been developed to challenge environmental uncertainty by incorporating biologically inspired notions, focusing on crossover, mutation, and selection. This study proposes a bio-inspired approach called NEAT-HD, which focuses on parent selection based on genetic similarity. The originality of the proposed approach rests on its use of a sigmoid function to accelerate species formation and contribute to population diversity. Experiments on two classic control tasks were performed to demonstrate the performance of the proposed method. The results show that NEAT-HD can dynamically adapt to its environment by forming hybrid individuals originating from genetically distinct parents. Additionally, an increase in diversity within the population was observed due to the formation of hybrids and novel individuals, which had not been observed before. Across the two tasks, the characteristics of NEAT-HD improved when the algorithm was set to account for the distribution of genetic distance within the population. Our key finding is the inherent potential of newly formed individuals for robustness in dynamic environments.
16. Novel design of weighted differential evolution for parameter estimation of Hammerstein-Wiener systems. J Adv Res 2022; 43:123-136. PMID: 36585102; PMCID: PMC9811373; DOI: 10.1016/j.jare.2022.02.010.
Abstract
INTRODUCTION The strengths of evolutionary computing heuristics have been exploited extensively for system modeling and parameter estimation of complex nonlinear systems due to their legacy of reliable convergence, accurate performance, simple conceptual design, ease of implementation, and wide applicability. OBJECTIVES The aim of the presented study is to investigate the evolutionary heuristic of weighted differential evolution (WDE) for estimating the parameters of the Hammerstein-Wiener model (HWM), along with a comparative evaluation against state-of-the-art counterparts. The objective function of the HWM for controlled autoregressive systems is formulated by approximating the error in the mean-square sense, computed as the difference between the true and estimated parameters. METHODS The adjustable parameters of the HWM are estimated through WDE and genetic algorithms (GAs) for different degrees of freedom and noise levels, providing an exhaustive, comprehensive, and robust analysis over multiple autonomous trials. RESULTS Comparison through a sufficiently large number of graphical and numerical illustrations of outcomes for single and multiple executions of WDE and GAs, using performance metrics of precision, convergence, and complexity, demonstrates the worth of the designed WDE algorithm. Statistical assessment studies further prove the efficacy of the proposed scheme. CONCLUSION Extensive simulation-based experiments on measures of central tendency and variance confirm the effectiveness of the designed WDE methodology as a precise, efficient, stable, and robust computing platform for system identification of the HWM in controlled autoregressive scenarios.
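The parameter-estimation setup (minimising a mean-squared-error objective between true and estimated model outputs) can be sketched with a plain differential-evolution loop. The toy two-parameter static model and all DE settings below are assumptions, and the paper's weighted DE variant is replaced here by the canonical DE/rand/1/bin scheme:

```python
import random

random.seed(3)

TRUE = [1.5, -0.7]   # hypothetical system parameters to recover

def model(theta, u):
    # Toy static nonlinearity standing in for a block-oriented model output.
    return theta[0] * u + theta[1] * u ** 3

# Noise-free input/output data on a grid of inputs.
DATA = [(u / 10.0, model(TRUE, u / 10.0)) for u in range(-20, 21)]

def mse(theta):
    """Mean-squared error between measured and model outputs."""
    return sum((model(theta, u) - y) ** 2 for u, y in DATA) / len(DATA)

def differential_evolution(dim=2, pop_size=20, generations=100, F=0.6, CR=0.9):
    pop = [[random.uniform(-3, 3) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = []
            for d in range(dim):
                if random.random() < CR:
                    trial.append(a[d] + F * (b[d] - c[d]))   # DE/rand/1 mutation
                else:
                    trial.append(pop[i][d])                  # inherit the parent gene
            if mse(trial) < mse(pop[i]):                     # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=mse)

est = differential_evolution()
print([round(t, 3) for t in est], round(mse(est), 6))
```

Because this toy objective is convex in the parameters, DE recovers them closely; the Hammerstein-Wiener case in the paper is harder because the unknown parameters enter through cascaded nonlinear and dynamic blocks.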
17. Immunotherapy treatment outcome prediction in metastatic melanoma through an automated multi-objective delta-radiomics model. Comput Biol Med 2021; 138:104916. PMID: 34656867; DOI: 10.1016/j.compbiomed.2021.104916.
Abstract
Based on recent studies, immunotherapy led by immune checkpoint inhibitors has significantly improved the patient survival rate and effectively reduced the recurrence risk. However, immunotherapy has different therapeutic effects in different patients, making the treatment response difficult to predict. Meanwhile, delta-radiomic features, which quantify the difference between pre- and post-treatment images through quantitative image features, have proven to be promising descriptors for treatment outcome prediction. Consequently, we developed an effective model, termed the automated multi-objective delta-radiomics (Auto-MODR) model, for the prediction of the immunotherapy response in metastatic melanoma. In Auto-MODR, delta-radiomic features and traditional radiomic features were used as inputs, and a novel automated multi-objective model was developed to obtain more reliable and balanced results between sensitivity and specificity. We conducted extensive comparisons with existing studies on treatment outcome prediction. Our method achieved an area under the curve (AUC) of 0.86 in a cross-validation study and an AUC of 0.73 in an independent study. Compared with the model using conventional radiomic features (pre- and post-treatment) only, better performance can be obtained when conventional radiomic and delta-radiomic features are combined. Furthermore, Auto-MODR outperformed the currently available radiomic strategies.
|
18
|
Preventive maintenance for the flexible flowshop scheduling under uncertainty: a waste-to-energy system. ENVIRONMENTAL SCIENCE AND POLLUTION RESEARCH INTERNATIONAL 2021:10.1007/s11356-021-16234-x. [PMID: 34519989 DOI: 10.1007/s11356-021-16234-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Accepted: 08/25/2021] [Indexed: 06/13/2023]
Abstract
Nowadays, an efficient and robust plan for maintenance activities can significantly reduce the total cost in the equipment-driven industry. Maintenance activities directly affect plant output, production quality, production cost, safety, and environmental performance. To address this challenge more broadly, this paper presents an optimization model for the problem of flexible flowshop scheduling in a series-parallel waste-to-energy (WTE) system. To this end, a preventive maintenance (PM) policy is proposed to find an optimal sequence for processing tasks and minimize delays. To deal with the uncertainty of flexible flowshop scheduling for waste-to-energy in practice, the work processing time is modeled as uncertain in this study, and a robust optimization model is applied to address the proposed problem. Due to the computational complexity of this model, a novel scenario-based genetic algorithm is proposed to solve it. The applicability of this research is shown by a real-life case study of a WTE system in Iran. The proposed algorithm is compared against an exact optimization method and a canonical genetic algorithm. The findings confirm the competitive performance of the proposed method in terms of time savings, which will ultimately reduce the cost of the proposed PM policy.
|
19
|
Local network connectivity optimization: an evaluation of heuristics applied to complex spatial networks, a transportation case study, and a spatial social network. PeerJ Comput Sci 2021; 7:e605. [PMID: 34239982 PMCID: PMC8237331 DOI: 10.7717/peerj-cs.605] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Accepted: 05/31/2021] [Indexed: 06/13/2023]
Abstract
Optimizing global connectivity in spatial networks, either through rewiring or adding edges, can increase the flow of information and the resilience of the network to failures. Yet rewiring is not feasible for systems with fixed edges, and optimizing global connectivity may not result in optimal local connectivity in systems where the latter is wanted. We describe the local network connectivity optimization problem, in which costly edges are added to a system with an established and fixed edge network to increase connectivity to a specific location, as in transportation and telecommunication systems. Solutions to this problem maximize the number of nodes within a given distance of a focal node in the network while minimizing the number and length of additional connections. We compare several heuristics applied to random networks, including two novel planar random networks that are useful for spatial network simulation research, a real-world transportation case study, and a set of real-world social network data. Across network types, significant variation between nodal characteristics and the optimal connections was observed. These characteristics, along with the computational cost of the search for optimal solutions, highlight the need for effective heuristics. We offer a novel formulation of the genetic algorithm, which outperforms existing techniques, and we describe how this heuristic can be applied to other combinatorial and dynamic problems.
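The local connectivity objective can be sketched under illustrative assumptions: a fixed path network, a hypothetical list of candidate shortcut edges with costs, and a plain bitstring genetic algorithm (not the novel formulation proposed in the paper) that trades the number of nodes reached within a hop radius of the focal node against the cost of the edges built.

```python
import random
from collections import deque

random.seed(1)

# Small fixed network: a path graph 0-1-2-...-11 with focal node 0.
N = 12
edges = {i: {i - 1, i + 1} & set(range(N)) for i in range(N)}

# Hypothetical candidate "shortcut" edges the planner may build, with costs.
candidates = [(0, 5, 2.0), (0, 9, 3.0), (2, 7, 1.5), (4, 11, 2.5), (1, 6, 1.8)]

def coverage(added, radius=3):
    """Count nodes within `radius` hops of node 0, given added candidate edges."""
    adj = {i: set(edges[i]) for i in range(N)}
    for keep, (u, v, _) in zip(added, candidates):
        if keep:
            adj[u].add(v); adj[v].add(u)
    seen, frontier = {0}, deque([(0, 0)])
    while frontier:
        node, d = frontier.popleft()
        if d == radius:
            continue
        for nb in adj[node]:
            if nb not in seen:
                seen.add(nb); frontier.append((nb, d + 1))
    return len(seen)

def fitness(bits):
    cost = sum(c for keep, (_, _, c) in zip(bits, candidates) if keep)
    return coverage(bits) - 0.5 * cost   # reward reach, penalize built length

def ga(pop_size=20, gens=40):
    pop = [[random.randint(0, 1) for _ in candidates] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]        # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(candidates))
            child = a[:cut] + b[cut:]        # one-point crossover
            if random.random() < 0.2:
                i = random.randrange(len(candidates))
                child[i] ^= 1                # single-bit mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print("edges built:", [e for keep, e in zip(best, candidates) if keep])
print("fitness:", fitness(best))
```

With these toy numbers the cheapest way to reach all twelve nodes within three hops is to build the two shortcuts incident to the focal node, and the GA reliably converges to that subset.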
|
20
|
Self-selection of evolutionary strategies: adaptive versus non-adaptive forces. Heliyon 2021; 7:e06997. [PMID: 34041384 PMCID: PMC8141468 DOI: 10.1016/j.heliyon.2021.e06997] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2020] [Revised: 04/03/2021] [Accepted: 04/30/2021] [Indexed: 12/18/2022] Open
Abstract
The evolution of complex genetic networks is shaped over the course of many generations through multiple mechanisms. These mechanisms can be broken into two predominant categories: adaptive forces, such as natural selection, and non-adaptive forces, such as recombination, genetic drift, and random mutation. Adaptive forces are influenced by the environment, where individuals better suited for their ecological niche are more likely to reproduce. This adaptive force results in a selective pressure which creates a bias in the reproduction of individuals with beneficial traits. Non-adaptive forces, in contrast, are not influenced by the environment: Random mutations occur in offspring regardless of whether they improve the fitness of the offspring. Both adaptive and non-adaptive forces play critical roles in the development of a species over time, and both forces are intrinsically linked to one another. We hypothesize that even under a simple sexual reproduction model, selective pressure will result in changes in the mutation rate and genome size. We tested this hypothesis by evolving Boolean networks using a modified genetic algorithm. Our results demonstrate that changes in environmental signals can result in selective pressure which affects mutation rate.
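The hypothesis that selective pressure can act on the mutation rate itself can be sketched with a toy self-adaptive genetic algorithm, in which each genome carries its own mutation-rate gene that is inherited and perturbed along with the bits. The all-ones target, rates, and truncation selection below are illustrative assumptions, not the Boolean-network model used in the paper.

```python
import random

random.seed(2)

GENOME_LEN = 30

def fitness(ind):
    return sum(ind["bits"])            # simple target: an all-ones genome

def reproduce(parent):
    # The mutation rate is itself heritable and mutates multiplicatively.
    child_rate = max(0.001, parent["rate"] * (2 ** random.gauss(0, 0.2)))
    bits = [b ^ (random.random() < child_rate) for b in parent["bits"]]
    return {"bits": bits, "rate": child_rate}

pop = [{"bits": [random.randint(0, 1) for _ in range(GENOME_LEN)], "rate": 0.2}
       for _ in range(50)]

rates = []
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:25]                        # truncation selection
    pop = survivors + [reproduce(random.choice(survivors)) for _ in range(25)]
    rates.append(sum(p["rate"] for p in pop) / len(pop))

print("mean fitness:", sum(fitness(p) for p in pop) / len(pop))
print("mean mutation rate: %.3f -> %.3f" % (rates[0], rates[-1]))
```

Because the rate gene hitchhikes with the bits it mutates, selection near the optimum tends to favor lineages whose rate has drifted lower, which is the kind of indirect pressure on a non-adaptive force that the abstract describes.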
|
21
|
Comparison of various approaches to combine logistic regression with genetic algorithms in survival prediction of hepatocellular carcinoma. Comput Biol Med 2021; 134:104431. [PMID: 34015670 DOI: 10.1016/j.compbiomed.2021.104431] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 04/14/2021] [Accepted: 04/21/2021] [Indexed: 11/18/2022]
Abstract
Hepatocellular carcinoma (HCC) is the most common liver cancer in adults. Many different factors make it difficult to diagnose in humans. In this paper, a novel diagnostic approach based on machine learning techniques is presented. Logistic regression is one of the most classic machine learning models used to solve binary classification problems. In typical implementations, logistic regression coefficients are optimized using iterative methods. Additionally, parameters such as the solver, the regularization parameter C, or the number of iterations must be selected. In our research, we propose a combination of logistic regression with genetic algorithms, and we present three experiments showing the fusion of these methods. In the first experiment, we genetically select the logistic regression parameters, while the second experiment extends this approach by adding a genetic selection of features. The third experiment presents a novel approach to training the logistic regression model: genetic selection of the coefficients (weights). Our models are tested on the survival prediction of hepatocellular carcinoma, based on patient data collected at Coimbra Hospital and University Center (CHUC), Portugal. The model we propose achieved a classification accuracy of 94.55% and an F1-score of 93.56%. Our algorithm shows that machine learning techniques optimized by the proposed concept can bring a new and accurate approach to HCC diagnosis.
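The third experiment's idea (evolving the logistic regression coefficients directly, with classification accuracy as the fitness, instead of fitting them iteratively) can be sketched on a synthetic two-feature problem. This is not the CHUC data or the paper's exact operators; all settings below are illustrative.

```python
import math
import random

random.seed(3)

# Synthetic 2-feature binary problem: class = 1 when x0 + x1 > 1 (plus noise).
data = []
for _ in range(200):
    x = [random.uniform(0, 1), random.uniform(0, 1)]
    y = 1 if x[0] + x[1] + random.gauss(0, 0.1) > 1 else 0
    data.append((x, y))

def predict(w, x):
    z = w[0] + w[1] * x[0] + w[2] * x[1]          # bias + weights
    p = 1 / (1 + math.exp(-max(-30.0, min(30.0, z))))  # clamped sigmoid
    return 1 if p > 0.5 else 0

def accuracy(w):
    return sum(predict(w, x) == y for x, y in data) / len(data)

# Fitness is classification accuracy; the coefficient vector is evolved.
def evolve(pop_size=30, gens=50):
    pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=accuracy, reverse=True)
        elite = pop[:10]
        pop = elite + [
            [g + random.gauss(0, 0.1) for g in random.choice(elite)]
            for _ in range(pop_size - 10)
        ]
    return max(pop, key=accuracy)

w = evolve()
print("accuracy: %.3f" % accuracy(w))
```

Because the fitness is the discrete accuracy rather than a differentiable loss, no gradient is needed, which is precisely what makes a genetic search a drop-in replacement for the usual iterative solvers.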
|
22
|
Design of Spline-Evolutionary Computing Paradigm for Nonlinear Thin Film Flow Model. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING 2021; 46:9279-9299. [PMID: 34230873 PMCID: PMC8249438 DOI: 10.1007/s13369-021-05830-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Accepted: 06/04/2021] [Indexed: 02/06/2023]
Abstract
A novel stochastic numerical computing method is introduced for a computational fluid dynamics problem governed by a nonlinear thin film flow (TFF) system, exploiting the competency of polynomial splines for discretization and evolutionary computing for optimization, aided by the brilliance of local search. The TFF model of a second grade fluid is represented by a nonlinear second-order differential system. The aim of the present work is to exploit the cubic spline approach (CSA) to transform the differential equations of the TFF model into an equivalent set of nonlinear equations. Approximation in the mean squared error sense is introduced to formulate a cost function for solving the nonlinear system of equations representing the TFF model. Optimization of the decision variables of the cost function is carried out with the global search efficacy of evolution by genetic algorithms (GAs), integrated with sequential quadratic programming (SQP) for speedy adjustments. The designed spline-evolutionary computing paradigm, CSA-GA-SQP, is evaluated for different scenarios of the TFF model by varying the second grade and magnetic parameters, as well as the length of the splines. Results endorsed the worth of the CSA-GA-SQP solver as an efficient, reliable, stable, and accurate alternative framework for variants of nonlinear TFF systems, on the basis of multiple autonomous executions. The designed spline computing paradigm CSA-GA-SQP is a promising alternative numerical solver to be implemented for the solution of stiff nonlinear systems representing complex scenarios of computational fluid dynamics problems.
|
23
|
Introducing a novel multi-layer perceptron network based on stochastic gradient descent optimized by a meta-heuristic algorithm for landslide susceptibility mapping. THE SCIENCE OF THE TOTAL ENVIRONMENT 2020; 742:140549. [PMID: 32629264 DOI: 10.1016/j.scitotenv.2020.140549] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/19/2020] [Revised: 06/16/2020] [Accepted: 06/25/2020] [Indexed: 06/11/2023]
Abstract
The main objective of the current study was to present a methodological approach that combines Information Theory, a neural network and meta-heuristic techniques so as to generate a landslide susceptibility map. Specifically, the methodology involved three important tasks: classifying the landslide-related variables, weighting them, and optimizing the structural parameters of the neural network. Shannon's entropy index was used to estimate, for each landslide-related variable, the number of classes that maximized the information coefficient, whereas the Certainty Factor method was used to weight the variables. A neural network (NN) using stochastic gradient descent (SGD), with structural parameters optimized by a genetic algorithm (GA), was implemented to generate the landslide susceptibility map. A well-defined spatial database, which included 380 landslides and fourteen related variables (elevation, slope, aspect, plan curvature, profile curvature, topographic wetness index, stream power index, stream transport index, land use cover, distance to road, distance to faults, distance to river, lithology and soil cover), was considered for implementing the NN-SGD-GA model in Yanshan County, Shangrao Municipality, in the north-east of Jiangxi Province, China. To validate the predictive power of the novel model, a Logistic Regression (LR) and a Random Forest (RF) model were used for comparison. The results showed that the NN-SGD-GA model achieved the highest prediction accuracy (88.10%), followed by the RF (86.26%) and the LR (85.82%) models. Furthermore, by analyzing the validation data concerning the spatial distribution of landslides and the susceptibility index, the proposed model showed an area under the curve value of 0.8212, followed by the RF (0.8124) and the LR (0.8020) models. Finally, the proposed model showed the highest relative landslide density value of 65.09, followed by the RF (62.51) and the LR (61.76) models, when using the validation dataset. The novelty of our approach is the use of an intelligent way to select and classify the most appropriate prognostic variables, together with the implementation of an evolutionary wrapper procedure that automatically generates prediction models with reduced complexity and adequate generalization capacity. Overall, the proposed model can be successfully used for landslide susceptibility mapping as an alternative spatial investigation tool.
|
24
|
Present and future of artificial intelligence in dentistry. J Oral Biol Craniofac Res 2020; 10:391-396. [PMID: 32775180 PMCID: PMC7394756 DOI: 10.1016/j.jobcr.2020.07.015] [Citation(s) in RCA: 30] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 07/16/2020] [Accepted: 07/19/2020] [Indexed: 11/27/2022] Open
Abstract
The last decade has marked a breakthrough in the advancement of technology with the evolution of artificial intelligence, which is rapidly gaining the attention of researchers across the globe. Every field has adopted artificial intelligence with huge enthusiasm, and the field of dental science is no exception. With huge increases in documented patient information and data, it is the need of the hour to use intelligent software to compile and save these data. From the basic step of taking a patient's history, to data processing, and then to extracting information from the data for diagnosis, artificial intelligence has many applications in dental and medical science. While artificial intelligence can in no case replace the role of a dental surgeon, it is important to be acquainted with its scope so as to amalgamate this technological advancement into the future betterment of dental practice.
|
25
|
A Genetic Attack Against Machine Learning Classifiers to Steal Biometric Actigraphy Profiles from Health Related Sensor Data. J Med Syst 2020; 44:187. [PMID: 32929615 PMCID: PMC7497442 DOI: 10.1007/s10916-020-01646-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/22/2019] [Accepted: 08/20/2020] [Indexed: 11/28/2022]
Abstract
In this work, we propose the use of a genetic-algorithm-based attack against machine learning classifiers with the aim of 'stealing' users' biometric actigraphy profiles from health-related sensor data. The target classification model uses daily actigraphy patterns for user identification. The biometric profiles are modeled as what we call impersonator examples, which are generated solely from the prediction confidence scores obtained by repeatedly querying the target classifier. We conducted experiments in a black-box setting on a public dataset that contains actigraphy profiles from 55 individuals. The data consist of daily motion patterns recorded with an actigraphy device, and these patterns can be used as biometric profiles to identify each individual. Our attack was able to generate examples capable of impersonating a target user with a success rate of 94.5%. Furthermore, we found that the impersonator examples have high transferability to other classifiers trained with the same training set. We also show that the generated biometric profiles closely resemble the ground truth profiles, which can lead to sensitive data exposure, such as revealing the time of day an individual wakes up and goes to bed.
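The black-box setting can be sketched with a toy stand-in: here the target "classifier" is a simple nearest-centroid user identifier over synthetic profiles, and the attacker evolves an input using only the returned confidence score, never the centroids themselves. The model, dimensions, and operators are illustrative assumptions, not the actigraphy setup from the paper.

```python
import math
import random

random.seed(4)

DIM = 24  # one activity value per hour, a stand-in for an actigraphy profile

# Hidden per-user centroids; the attacker never sees these directly.
centroids = {u: [random.uniform(0, 1) for _ in range(DIM)] for u in range(5)}

def confidence(sample, user):
    """Target model's confidence score: softmax over negative distances."""
    dists = {u: math.dist(sample, c) for u, c in centroids.items()}
    expd = {u: math.exp(-d) for u, d in dists.items()}
    return expd[user] / sum(expd.values())

TARGET_USER = 2

# Black-box attack: evolve a profile using only the returned confidence.
pop = [[random.uniform(0, 1) for _ in range(DIM)] for _ in range(30)]
for _ in range(80):
    pop.sort(key=lambda s: confidence(s, TARGET_USER), reverse=True)
    elite = pop[:10]
    pop = elite + [
        [g + random.gauss(0, 0.05) for g in random.choice(elite)]
        for _ in range(20)
    ]

impersonator = pop[0]
print("confidence for target user: %.3f" % confidence(impersonator, TARGET_USER))
```

The essential point the sketch preserves is that fitness is computed purely from query responses, so the attack works against any model that exposes per-class confidence scores.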
|
26
|
Identification of risk factors for mortality associated with COVID-19. PeerJ 2020; 8:e9885. [PMID: 32953279 PMCID: PMC7473053 DOI: 10.7717/peerj.9885] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Accepted: 08/16/2020] [Indexed: 02/05/2023] Open
Abstract
OBJECTIVES Coronavirus Disease 2019 (COVID-19) has become a pandemic outbreak. Risk stratification at hospital admission is of vital importance for medical decision making and resource allocation, yet there is no sophisticated tool for this purpose. This study aimed to develop neural network models with predictors selected by genetic algorithms (GA). METHODS This study was conducted in Wuhan Third Hospital from January 2020 to March 2020. Predictors were collected on day 1 of hospital admission. The primary outcome was vital status at hospital discharge. Predictors were selected using GA, and neural network models were built with the cross-validation method. The final neural network models were compared with conventional logistic regression models. RESULTS A total of 246 patients with COVID-19 were included for analysis. The mortality rate was 17.1% (42/246). Non-survivors were significantly older (median (IQR): 69 (57, 77) vs. 55 (41, 63) years; p < 0.001), had higher high-sensitivity troponin I (0.03 (0, 0.06) vs. 0 (0, 0.01) ng/L; p < 0.001), C-reactive protein (85.75 (57.39, 164.65) vs. 23.49 (10.1, 53.59) mg/L; p < 0.001), D-dimer (0.99 (0.44, 2.96) vs. 0.52 (0.26, 0.96) mg/L; p < 0.001), and α-hydroxybutyrate dehydrogenase (306.5 (268.75, 377.25) vs. 194.5 (160.75, 247.5); p < 0.001), and a lower lymphocyte count (0.74 (0.41, 0.96) vs. 0.98 (0.77, 1.26) × 10⁹/L; p < 0.001) than survivors. The GA identified a 9-variable model (NNet1) and a 32-variable model (NNet2). The NNet1 model was parsimonious, at a cost in accuracy; the NNet2 model had the maximum accuracy. NNet1 (AUC: 0.806; 95% CI [0.693-0.919]) and NNet2 (AUC: 0.922; 95% CI [0.859-0.985]) outperformed the logistic regression models. CONCLUSIONS Our study included a cohort of COVID-19 patients. Several risk factors were identified considering both clinical and statistical significance. We further developed two neural network models with the variables selected by GA. These models perform much better than the conventional generalized linear models.
|
27
|
Convolutional neural networks and genetic algorithm for visual imagery classification. Phys Eng Sci Med 2020; 43:973-983. [PMID: 32662039 DOI: 10.1007/s13246-020-00894-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2019] [Accepted: 06/29/2020] [Indexed: 10/23/2022]
Abstract
Brain-Computer Interface (BCI) systems establish a channel for direct communication between the brain and the outside world without having to use the peripheral nervous system. While most BCI systems use evoked potentials and motor imagery, in the present work we present a technique that employs visual imagery. Our technique uses neural networks to classify the signals produced in visual imagery. To this end, we have used densely connected neural and convolutional networks, together with a genetic algorithm to find the best parameters for these networks. The results we obtained are a 60% success rate in the classification of four imagined objects (a tree, a dog, an airplane and a house) plus a state of relaxation, thus outperforming the state of the art in visual imagery classification.
|
28
|
Optimized fuzzy inference system to enhance prediction accuracy for influent characteristics of a sewage treatment plant. THE SCIENCE OF THE TOTAL ENVIRONMENT 2020; 722:137878. [PMID: 32199382 DOI: 10.1016/j.scitotenv.2020.137878] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Revised: 02/26/2020] [Accepted: 03/10/2020] [Indexed: 06/10/2023]
Abstract
Sewage treatment plants (STPs) keep sewage contamination within safe levels and minimize the risk of environmental disasters. To achieve optimum operation of an STP, it is necessary for influent parameters to be measured or estimated precisely. In this research, six well-known influent chemical and biological characteristics, i.e., biochemical oxygen demand (BOD), chemical oxygen demand (COD), ammoniacal nitrogen (NH3-N), pH, oil and grease (OG) and suspended solids (SS), were modeled and predicted using the Sugeno fuzzy logic model. The membership function ranges of the fuzzy model were optimized by ANFIS, an integrated genetic algorithm (GA), and an integrated particle swarm optimization (PSO) algorithm. The results were evaluated with different indices to determine the accuracy of each algorithm. To ensure prediction accuracy, outliers in the predicted data were identified and replaced with reasonable values. The results showed that the integrated GA-FIS and PSO-FIS algorithms performed at almost the same level and both had fewer errors than ANFIS. As the GA-FIS algorithm predicts BOD with fewer errors than PSO-FIS, and the aim of this study is to provide an accurate prediction of missing data, GA-FIS was used only to predict the BOD parameter; the other parameters were predicted with the PSO-FIS algorithm. As a result, the model could successfully provide outstanding performance for predicting BOD, COD, NH3-N, OG, pH and SS, with MAE equal to 3.79, 5.14, 0.4, 0.27, 0.02, and 3.16, respectively.
|
29
|
GARS: Genetic Algorithm for the identification of a Robust Subset of features in high-dimensional datasets. BMC Bioinformatics 2020; 21:54. [PMID: 32046651 PMCID: PMC7014945 DOI: 10.1186/s12859-020-3400-6] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2019] [Accepted: 02/07/2020] [Indexed: 11/06/2022] Open
Abstract
BACKGROUND Feature selection is a crucial step in machine learning analysis. Currently, many feature selection approaches do not ensure satisfactory results, in terms of accuracy and computational time, when the amount of data is huge, such as in 'Omics' datasets. RESULTS Here, we propose an innovative implementation of a genetic algorithm, called GARS, for the fast and accurate identification of informative features in multi-class and high-dimensional datasets. In all simulations, GARS outperformed two standard filter-based methods, two 'wrapper' methods and one 'embedded' selection method, showing high classification accuracies in a reasonable computational time. CONCLUSIONS GARS proved to be a suitable tool for performing feature selection on high-dimensional data. Therefore, GARS could be adopted when standard feature selection approaches do not provide satisfactory results or when there is a huge amount of data to be analyzed.
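The general idea of GA-driven feature selection, where chromosomes encode feature subsets and the fitness rewards the accuracy of a cheap classifier while mildly penalizing subset size, can be sketched as follows. This is an illustrative stand-in, not GARS's actual chromosome encoding or scoring; the data are synthetic, with three planted informative features.

```python
import random

random.seed(5)

N_FEATURES, INFORMATIVE = 20, {2, 7, 11}   # only these features carry signal

def make_sample():
    y = random.randint(0, 1)
    x = [random.gauss(0, 1) for _ in range(N_FEATURES)]
    for i in INFORMATIVE:
        x[i] += 2.0 * y                     # shift informative features by class
    return x, y

train = [make_sample() for _ in range(150)]
test_set = [make_sample() for _ in range(150)]

def centroid_accuracy(mask, fit_data, eval_data):
    """Nearest-class-mean classifier restricted to the selected features."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    means = {}
    for cls in (0, 1):
        rows = [x for x, y in fit_data if y == cls]
        means[cls] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    correct = 0
    for x, y in eval_data:
        d = {cls: sum((x[i] - m) ** 2 for i, m in zip(feats, means[cls]))
             for cls in (0, 1)}
        correct += (min(d, key=d.get) == y)
    return correct / len(eval_data)

def fitness(mask):
    # Penalize large subsets slightly so parsimony is rewarded.
    return centroid_accuracy(mask, train, train) - 0.01 * sum(mask)

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(30)]
for _ in range(40):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:10]
    children = []
    for _ in range(20):
        a, b = random.sample(elite, 2)
        cut = random.randrange(1, N_FEATURES)
        child = a[:cut] + b[cut:]           # one-point crossover
        i = random.randrange(N_FEATURES)
        child[i] ^= 1                       # single-gene mutation
        children.append(child)
    pop = elite + children

best = max(pop, key=fitness)
print("selected features:", [i for i, m in enumerate(best) if m])
print("held-out accuracy: %.3f" % centroid_accuracy(best, train, test_set))
```

The size penalty plays the role of the parsimony pressure that keeps wrapper-style GA selection from accumulating uninformative features.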
|
30
|
Simultaneous estimation of amlodipine and atorvastatin by micelle-augmented first derivative synchronous spectrofluorimetry and multivariate analysis. SPECTROCHIMICA ACTA. PART A, MOLECULAR AND BIOMOLECULAR SPECTROSCOPY 2020; 224:117430. [PMID: 31382228 DOI: 10.1016/j.saa.2019.117430] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/10/2019] [Revised: 07/25/2019] [Accepted: 07/25/2019] [Indexed: 06/10/2023]
Abstract
Five selective, rapid and sensitive spectrofluorimetric methods were developed in this study for the simultaneous estimation of amlodipine besylate (AML) and atorvastatin (ATR) in their binary mixtures and in combination polypills that are used for the management of cardiovascular conditions. The first method depends on micelle-enhanced first derivative synchronous fluorimetric analysis (method I), and the other four methods are multivariate analysis techniques based on factor-based calibration prediction methods, comprising partial least squares (PLS), principal component regression (PCR), genetic algorithm PLS (GA-PLS) and genetic algorithm PCR (GA-PCR). The synchronous fluorescence spectra of the solutions were measured at a constant wavelength difference of Δλ = 100 nm. The magnitudes of the peaks of the first derivative spectra (1D) were measured at 292 nm and 387 nm for ATR and AML, respectively. The multivariate models were constructed utilizing fifteen mixtures as a calibration set and ten mixtures as a validation set. The linearity of all the methods was in the concentration ranges of 0.1-4.0 μg mL⁻¹ and 0.4-10.0 μg mL⁻¹ for AML and ATR, respectively. Statistical analysis revealed no significant difference between the proposed methods and the reference method. The validity of the proposed methods makes them suitable for quality control work. All the analysis settings were optimized, and all the suggested procedures were applied successfully for the determination of both drugs in synthetic mixtures, the validation set, and combination polypills.
|
31
|
ARX model decomposed on Meixner-Like orthonormal bases. ISA TRANSACTIONS 2019; 95:278-294. [PMID: 31146964 DOI: 10.1016/j.isatra.2019.05.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/16/2018] [Revised: 05/21/2019] [Accepted: 05/21/2019] [Indexed: 06/09/2023]
Abstract
The present study provides a new modeling approach for linear, slowly starting systems. More precisely, this new approach extends the technique of filtering only the input using Meixner-Like (M-L) filters to filtering both the output and the input of a system described by an ARX model. The idea is therefore to develop the input and output parameters of the ARX model over two M-L bases. To ensure an optimal representation, the two M-L poles are optimized using the Newton-Raphson (N-R) and Genetic Algorithms (GA) methods. A new method is proposed for Model Predictive Control (MPC) using the obtained optimal model, called ARX Meixner-Like (ARXM-L). A numerical example of a system with delay and three experimental case studies are presented: a supersonic jet engine inlet, a Process Trainer PT326, and a Quanser Aero experiment with one-degree-of-freedom attitude control.
|
32
|
Computational analysis of viable parameter regions in models of synthetic biological systems. J Biol Eng 2019; 13:75. [PMID: 31548864 PMCID: PMC6751877 DOI: 10.1186/s13036-019-0205-0] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2019] [Accepted: 09/05/2019] [Indexed: 01/22/2023] Open
Abstract
Background Gene regulatory networks with different topological and/or dynamical properties might exhibit similar behavior. A system that is less sensitive to perturbations of its internal and external factors should be preferred. Methods for sensitivity and robustness assessment have already been developed and can be roughly divided into local and global approaches. Local methods focus only on the local area around nominal parameter values. This can be problematic when the system exhibits the desired behavior over a large range of parameter perturbations or when parameter values are unknown. Global methods, on the other hand, investigate the whole space of parameter values and mostly rely on different sampling techniques, which can be computationally inefficient. To address these shortcomings, 'glocal' approaches were developed that apply global and local approaches in an effective and rigorous manner. Results Herein, we present a computational approach for the 'glocal' analysis of viable parameter regions in biological models. The methodology is based on the exploration of high-dimensional viable parameter spaces with global and local sampling, clustering and dimensionality reduction techniques. The proposed methodology allows us to efficiently investigate the viable parameter space regions, evaluate the regions which exhibit the largest robustness, and gather new insights regarding the size and connectivity of the viable parameter regions. We evaluate the proposed methodology on three different synthetic gene regulatory network models, i.e. the repressilator model, the model of the AC-DC circuit and the model of the edge-triggered master-slave D flip-flop.
Conclusions The proposed methodology provides a rigorous assessment of the shape and size of viable parameter regions based on (1) the mathematical description of the biological system of interest, (2) constraints that define feasible parameter regions and (3) a cost function that defines the desired or observed behavior of the system. These insights can be used to assess the robustness of biological systems, even when parameter values are unknown and, more importantly, even when there are multiple poorly connected viable parameter regions in the solution space. Moreover, the methodology can be efficiently applied to the analysis of biological systems that exhibit multiple modes of the targeted behavior.
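The global-then-local ('glocal') exploration can be illustrated on a toy two-parameter cost surface: coarse uniform sampling locates viable seeds anywhere in the space, and dense Gaussian sampling around each seed refines the picture of the viable region. The full pipeline additionally uses clustering and dimensionality reduction; the cost function and thresholds below are illustrative assumptions.

```python
import random

random.seed(6)

# Toy "model": a 2-parameter cost surface with a curved viable region.
def cost(k1, k2):
    return (k2 - k1 ** 2) ** 2 + 0.05 * (1 - k1) ** 2

VIABLE = 0.05  # viability threshold on the cost

# Global phase: coarse uniform sampling of the whole parameter space.
global_hits = []
for _ in range(2000):
    k1, k2 = random.uniform(-2, 2), random.uniform(-1, 4)
    if cost(k1, k2) < VIABLE:
        global_hits.append((k1, k2))

# Local phase: dense Gaussian sampling around each globally found seed.
local_hits = []
for seed_pt in global_hits:
    for _ in range(20):
        k1 = seed_pt[0] + random.gauss(0, 0.1)
        k2 = seed_pt[1] + random.gauss(0, 0.1)
        if cost(k1, k2) < VIABLE:
            local_hits.append((k1, k2))

print("global viable samples:", len(global_hits))
print("local refinement added:", len(local_hits))
```

The global phase guards against missing disconnected viable regions, while the local phase is where most of the sampling budget pays off, since the hit rate near a known viable point is far higher than in the space at large.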
|
33
|
Multi-scale optimisation vs. genetic algorithms in the gradient separation of diuretics by reversed-phase liquid chromatography. J Chromatogr A 2019; 1609:460427. [PMID: 31439441 DOI: 10.1016/j.chroma.2019.460427] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2019] [Revised: 07/30/2019] [Accepted: 08/03/2019] [Indexed: 01/07/2023]
Abstract
Multi-linear gradients are a convenient solution for separating complex samples: by carefully modulating the gradient slope, the local selectivity needs of each particular solute cluster can be accommodated. These gradients can be designed by trial and error according to the chromatographer's experience, but this strategy quickly becomes inappropriate for complex separations. More evolved solutions imply the sequential construction of multi-segmented gradients. However, this strategy discards part of the search space in each step of the construction and, again, cannot deal properly with very complex samples. When the complexity is too large, the only valid alternative for finding the best gradient is the use of global search methods, such as genetic algorithms (GAs). Recently, a new global approach in which the level of detail is increased along the search has been proposed, namely multi-scale optimisation (MSO). In this strategy, cubic splines are used to build intermediate curves that define any arbitrary solvent variation function, and subdivision schemes are used to generate the cubic splines and control their level of detail. The search was subjected to a number of restrictions, such as avoiding long elution and favouring a balanced peak distribution. The aim of this work is to evaluate and compare the results of GAs and MSO. Both approaches were tested with a set of 14 diuretics and probenecid, eluted with acetonitrile-water mixtures using a C18 column. Satisfactory baseline resolution was obtained with an analysis time of 15-16 min. We found that GA optimisation offered results equivalent to those provided by MSO when the penalisation parameters were included in the cost function.
Collapse
|
34
|
Backtracking search optimization heuristics for nonlinear Hammerstein controlled auto regressive auto regressive systems. ISA TRANSACTIONS 2019; 91:99-113. [PMID: 30770155 DOI: 10.1016/j.isatra.2019.01.042] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Revised: 12/13/2018] [Accepted: 01/31/2019] [Indexed: 06/09/2023]
Abstract
In this work, a novel application of evolutionary computational heuristics is presented for the parameter identification problem of nonlinear Hammerstein controlled autoregressive autoregressive (NHCARAR) systems, exploiting the global search competency of the backtracking search algorithm (BSA), differential evolution (DE) and genetic algorithms (GAs). The mean squared error between actual and approximated design variables is used as the fitness function for the NHCARAR system. The cost function is optimised with BSA for the NHCARAR model for varying degrees of freedom and noise variances. To verify and validate the worth of the presented scheme, comparative studies are carried out against its counterparts DE and GAs through statistical observations of the weight deviation factor, root mean squared error, and Theil's inequality coefficient, as well as complexity measures.
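The MSE-based fitness idea can be sketched on a toy system. As an assumption, a Hammerstein-style static cubic nonlinearity with made-up true parameters replaces the actual NHCARAR model, and a plain elitist GA replaces the BSA/DE update rules:

```python
import random

# Toy "system": a Hammerstein-style static cubic with illustrative
# true parameters a=2.0, b=0.5 (not taken from the paper).
def system(u, a=2.0, b=0.5):
    return a * u + b * u ** 3

random.seed(1)
inputs = [random.uniform(-2, 2) for _ in range(100)]
targets = [system(u) for u in inputs]

def mse(theta):
    """Fitness: mean squared error between actual and estimated outputs."""
    a, b = theta
    return sum((system(u, a, b) - y) ** 2 for u, y in zip(inputs, targets)) / len(inputs)

def ga(pop_size=30, generations=150):
    pop = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=mse)
        parents = pop[:10]                      # keep the best estimates
        pop = parents + [[p + random.gauss(0, 0.1) for p in random.choice(parents)]
                         for _ in range(pop_size - 10)]
    return min(pop, key=mse)

a_hat, b_hat = ga()
```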
Collapse
|
35
|
User-interfaces layout optimization using eye-tracking, mouse movements and genetic algorithms. APPLIED ERGONOMICS 2019; 78:197-209. [PMID: 31046951 DOI: 10.1016/j.apergo.2019.03.004] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2018] [Revised: 02/11/2019] [Accepted: 03/13/2019] [Indexed: 06/09/2023]
Abstract
Establishing the best layout configuration for software-generated interfaces and control panels is a complex problem when they include many controls and indicators. Several methods have been developed for arranging the interface elements; however, the results are usually conceptual designs that must be manually adjusted to obtain layouts valid for real situations. Based on these considerations, in this work we propose a new automated procedure to obtain optimal layouts for software-based interfaces. Eye-tracking and mouse-tracking data collected during the use of the interface are used to obtain the best configuration for its elements. The solutions are generated using a slicing-tree-based genetic algorithm. This algorithm is able to obtain directly applicable configurations that respect the geometrical restrictions of the elements in the interface. Results show that this procedure increases the effectiveness, efficiency and satisfaction of users when they interact with the resulting interfaces.
Collapse
|
36
|
Flash flood susceptibility modeling using an optimized fuzzy rule based feature selection technique and tree based ensemble methods. THE SCIENCE OF THE TOTAL ENVIRONMENT 2019; 668:1038-1054. [PMID: 31018446 DOI: 10.1016/j.scitotenv.2019.02.422] [Citation(s) in RCA: 55] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/19/2018] [Revised: 02/26/2019] [Accepted: 02/27/2019] [Indexed: 06/09/2023]
Abstract
The main objective of the present study was to provide a novel methodological approach to flash flood susceptibility modeling based on a feature selection method (FSM) and tree-based ensemble methods. The FSM used the fuzzy rule-based algorithm FURIA as attribute evaluator, with a genetic algorithm (GA) as the search method, in order to obtain the optimal set of variables for flood susceptibility modeling. The novel FURIA-GA was combined with the LogitBoost, Bagging and AdaBoost ensemble algorithms. The performance of the developed methodology was evaluated in the Bao Yen and Bac Ha districts of Lao Cai Province in the northeast region of Vietnam. For the case study, 654 floods and twelve geo-environmental variables were used. The predictive performance of each model was estimated through the classification accuracy, the sensitivity, the specificity, the success and prediction rate curves and the area under the curves (AUC). The FURIA-GA FSM gave more accurate predictive results than a conventional rule-based method. The FURIA-GA-based models also presented higher learning and predictive ability than ensemble models that had not undergone an FSM. Based on predictive classification accuracy, FURIA-GA-Bagging (93.37%) outperformed FURIA-GA-LogitBoost (92.35%) and FURIA-GA-AdaBoost (89.03%). FURIA-GA-Bagging also showed the highest sensitivity (96.94%) and specificity (89.80%). On the other hand, FURIA-GA-LogitBoost showed the lowest percentage in the very high susceptibility zone and the highest relative flash-flood density, whereas FURIA-GA-AdaBoost achieved the highest prediction AUC value (0.9740), based on the prediction rate curve, followed by FURIA-GA-Bagging (0.9566) and FURIA-GA-LogitBoost (0.8955). It can be concluded that different statistical metrics yield different conclusions about the best prediction model, which could mainly be attributed to site-specific settings. The proposed models can be considered novel alternative investigation tools for flash flood susceptibility mapping.
Collapse
|
37
|
Abstract
Background In order to fully characterize the genome of an individual, the reconstruction of the two distinct copies of each chromosome, called haplotypes, is essential. The computational problem of inferring the full haplotype of a cell from read sequencing data is known as haplotype assembly, and consists in assigning all heterozygous Single Nucleotide Polymorphisms (SNPs) to exactly one of the two chromosomes. Indeed, knowledge of complete haplotypes is generally more informative than analyzing single SNPs and plays a fundamental role in many medical applications. Results To reconstruct the two haplotypes, we addressed the weighted Minimum Error Correction (wMEC) problem, a successful approach to haplotype assembly. This NP-hard problem consists in computing the two haplotypes that partition the sequencing reads into two disjoint sub-sets with the least number of corrections to the SNP values. To this aim, we propose GenHap, a novel computational method for haplotype assembly based on genetic algorithms, yielding optimal solutions by means of a global search process. In order to evaluate the effectiveness of our approach, we ran GenHap on two synthetic (yet realistic) datasets, based on the Roche/454 and PacBio RS II sequencing technologies. We compared the performance of GenHap against HapCol, an efficient state-of-the-art algorithm for haplotype phasing. Our results show that GenHap always obtains highly accurate solutions (in terms of haplotype error rate), and is up to 4× faster than HapCol on the Roche/454 instances and up to 20× faster on the PacBio RS II dataset. Finally, we assessed the performance of GenHap on two different real datasets. Conclusions Future-generation sequencing technologies, producing longer reads with higher coverage, can benefit greatly from GenHap, thanks to its capability of efficiently solving large instances of the haplotype assembly problem. Moreover, the optimization approach proposed in GenHap can be extended to the study of allele-specific genomic features, such as expression, methylation and chromatin conformation, by exploiting multi-objective optimization techniques. The source code and the full documentation are available at the following GitHub repository: https://github.com/andrea-tango/GenHap.
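The MEC formulation lends itself to a compact GA sketch: each chromosome assigns every read to one of the two haplotypes, and the fitness counts the minority-allele flips needed per SNP column. The read matrix below is a made-up toy instance; it illustrates the (unweighted) MEC objective, not GenHap's actual encoding or operators:

```python
import random

# Toy read matrix: rows are reads, columns are heterozygous SNP sites
# (0/1 alleles, None = site not covered by the read). Data is illustrative.
reads = [
    [0, 0, 1, None], [0, 0, 1, 1], [0, 1, 1, 1],   # noisy copies of haplotype 0011
    [1, 1, 0, 0], [1, 1, 0, None], [1, 1, 1, 0],   # noisy copies of haplotype 1100
]
n_sites = 4

def corrections(partition):
    """MEC cost: flips needed so each group agrees at every SNP column."""
    total = 0
    for group in (0, 1):
        for j in range(n_sites):
            col = [r[j] for r, g in zip(reads, partition) if g == group and r[j] is not None]
            if col:
                ones = sum(col)
                total += min(ones, len(col) - ones)  # flip the minority allele
    return total

def ga(pop_size=20, generations=60, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in reads] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=corrections)
        elite = pop[:5]
        pop = elite + [[bit ^ (random.random() < 0.1) for bit in random.choice(elite)]
                       for _ in range(pop_size - 5)]
    return min(pop, key=corrections)

best = ga()
```

On this instance the optimal partition separates the first three reads from the last three at a cost of two corrections (one sequencing error per haplotype group).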
Collapse
|
38
|
Neutron spectrum unfolding using three artificial intelligence optimization methods. Appl Radiat Isot 2019; 147:136-143. [PMID: 30878774 DOI: 10.1016/j.apradiso.2019.03.009] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2018] [Revised: 02/22/2019] [Accepted: 03/04/2019] [Indexed: 10/27/2022]
Abstract
Many methods have been proposed and developed for neutron spectrum unfolding. In this work, three artificial intelligence optimization methods (genetic algorithms, radial basis function neural networks and generalized regression neural networks) were developed on the basis of previous research to retrieve the neutron spectrum. Sixty-three neutron spectra were unfolded with the three methods on the basis of the same response functions, and three indexes (the mean squared error, the spectral quality and the sphere reading quality) were applied to compare the generalized unfolding performance. The results show that the unfolded neutron spectra are mostly acceptable with all three methods without initial guess spectra, and that the generalized regression neural network is the fastest and most accurate method, with the most powerful generalization ability.
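Of the three methods, the generalized regression neural network is essentially Nadaraya-Watson kernel regression, which is why it needs no iterative training and is the fastest. A minimal one-dimensional sketch, with illustrative sine-wave data rather than real detector response functions, looks like this:

```python
import math

# Minimal generalized regression neural network: a Gaussian-kernel
# weighted average of the training targets (Nadaraya-Watson form).
# The 1-D sine data are illustrative, not real detector responses.
def grnn_predict(x, train_x, train_y, sigma=0.05):
    weights = [math.exp(-((x - xi) ** 2) / (2 * sigma ** 2)) for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

train_x = [i / 50 for i in range(51)]
train_y = [math.sin(2 * math.pi * x) for x in train_x]

pred = grnn_predict(0.25, train_x, train_y)   # query near the sine peak
```

The smoothing parameter `sigma` plays the same role as the spread of the GRNN's pattern layer; in spectrum unfolding the scalar `x` would be replaced by a vector of sphere readings.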
Collapse
|
39
|
Application of Genetic Algorithms for Pixel Selection in MIA-QSAR Studies on Anti-HIV HEPT Analogues for New Design Derivatives. IRANIAN JOURNAL OF PHARMACEUTICAL RESEARCH : IJPR 2019; 18:1239-1252. [PMID: 32641935 PMCID: PMC6934972 DOI: 10.22037/ijpr.2019.1100731] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
Quantitative structure-activity relationship (QSAR) analysis was carried out on a series of 107 anti-HIV HEPT compounds with antiviral activity using chemometric methods. Bi-dimensional images were used to calculate pixel descriptors, and multivariate image analysis was applied to QSAR modelling of the anti-HIV potential of HEPT analogues by means of multivariate calibration, namely principal component regression (PCR) and partial least squares (PLS). In this paper, we investigated the effect of pixel selection by genetic algorithms (GAs) on the PLS model. GAs are very useful for variable selection in modelling and calibration because of the strong relationship between the presence or absence of variables in a calibration model and the prediction ability of the model itself. The subset of pixels that resulted in the lowest prediction error was selected by the GA. The resulting GA-PLS model had high statistical quality (RMSEP = 0.0423 and R2 = 0.9412) in comparison with PCR (RMSEP = 0.4559, R2 = 0.7929) and PLS (RMSEP = 0.3275, R2 = 0.8427) for predicting the activity of the compounds. Because of the high correlation between predicted and experimental activities, MIA-QSAR proved to be a highly predictive approach.
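GA-driven variable selection can be sketched as a wrapper over a binary chromosome, one bit per pixel. As assumptions, a leave-one-out 1-nearest-neighbour regression error stands in for the PLS cross-validation error, and the synthetic data (only two of six "pixels" informative) are illustrative:

```python
import random

# GA wrapper for variable (pixel) selection, in the spirit of GA-PLS.
# The data are synthetic: only features 0 and 1 determine the response.
random.seed(7)
n, n_feat = 40, 6
X = [[random.random() for _ in range(n_feat)] for _ in range(n)]
y = [x[0] + x[1] for x in X]

def loo_error(mask):
    """Leave-one-out 1-NN error in the selected-feature subspace."""
    sel = [i for i, m in enumerate(mask) if m]
    if not sel:
        return float("inf")
    err = 0.0
    for i in range(n):
        j = min((k for k in range(n) if k != i),
                key=lambda k: sum((X[i][f] - X[k][f]) ** 2 for f in sel))
        err += (y[j] - y[i]) ** 2
    return err / n + 0.01 * len(sel)   # small penalty favours sparse subsets

def ga(pop_size=20, generations=30):
    pop = [[random.randint(0, 1) for _ in range(n_feat)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loo_error)
        elite = pop[:5]
        pop = elite + [[b ^ (random.random() < 0.15) for b in random.choice(elite)]
                       for _ in range(pop_size - 5)]
    return min(pop, key=loo_error)

mask = ga()
```

The selected mask retains the informative features; in the MIA-QSAR setting the bits would index image pixels and the inner model would be PLS.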
Collapse
|
40
|
Delimiting the knowledge space and the design space of nanostructured lipid carriers through Artificial Intelligence tools. Int J Pharm 2018; 553:522-530. [PMID: 30442594 DOI: 10.1016/j.ijpharm.2018.10.058] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2018] [Revised: 10/24/2018] [Accepted: 10/25/2018] [Indexed: 12/24/2022]
Abstract
Nanostructured lipid carriers (NLC) are biocompatible and biodegradable nanoscale systems with extensive application in controlled drug release. However, the development of optimal nanosystems along with a reproducible manufacturing process is still challenging. In this study, a two-step experimental design was performed and the databases were successfully modelled using Artificial Intelligence techniques as an innovative method to obtain optimal, reproducible and stable NLC. The initial approach, covering a wide range of values for the different variables, was followed by a second set of experiments with variable values in a narrower range, better suited to the characteristics of the system. NLC loaded with rifabutin, a hydrophobic model drug, were produced by hot homogenization and fully characterized in terms of particle size, size distribution, zeta potential, encapsulation efficiency and drug loading. The use of Artificial Intelligence tools made it possible to elucidate the key parameters that modulate each formulation property. Stable nanoparticles with small sizes, low polydispersity, negative zeta potentials and high drug loadings were obtained when the proportions of lipid components, drug and surfactants and the stirring speed were optimized by FormRules® and INForm®. The successful application of Artificial Intelligence tools to NLC formulation optimization constitutes a pioneering approach in the field of lipid nanoparticles.
Collapse
|
41
|
Optimisation of ANN topology for predicting the rehydrated apple cubes colour change using RSM and GA. Neural Comput Appl 2018; 30:1795-1809. [PMID: 30220793 PMCID: PMC6132921 DOI: 10.1007/s00521-016-2801-y] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2016] [Accepted: 12/18/2016] [Indexed: 11/29/2022]
Abstract
In this study, an efficient optimisation method combining response surface methodology (RSM) and a genetic algorithm (GA) is introduced to find the optimal topology of artificial neural networks (ANNs) for predicting colour changes in rehydrated apple cubes. A multi-layered feed-forward backpropagation ANN model was developed to correlate one output (colour change) with four input variables (drying air temperature, drying air velocity, temperature of distilled water and rehydration time). A predictive model for ANN topology in terms of the best mean squared error (MSE) performance on validation samples was created using RSM. The RSM model was then integrated with an effective GA to find the optimum topology of the ANN. The optimum ANN had minimum MSE when the number of hidden neurons, learning rate, momentum constant, number of epochs and number of training runs were 13, 0.33, 0.89, 3869 and 3, respectively. The MSE of the optimal ANN topology on validation samples was 0.0072095. The optimal ANN topology can therefore be considered more precise for predicting colour change in the rehydrated apple cubes. The mean absolute error and regression coefficient (R) of the optimal ANN topology were 0.0259 and 0.96475 for training, 0.0399 and 0.95243 for testing and 0.0264 and 0.95151 for the validation data sets. Testing the model on new samples showed excellent agreement between actual and predicted data, with a coefficient of determination R2 = 0.97.
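The RSM-plus-GA step can be sketched as a GA searching a fitted response surface for the topology with minimum predicted MSE. The quadratic surrogate below uses invented coefficients; only the location of its optimum (13 neurons, learning rate 0.33, momentum 0.89) mirrors the reported result, and the integer/real mixed encoding is an assumption:

```python
import random

# Toy quadratic response surface standing in for the fitted RSM model:
# predicted validation MSE as a function of three hyperparameters.
def rsm_mse(neurons, lr, momentum):
    return (0.002 * (neurons - 13) ** 2
            + 0.5 * (lr - 0.33) ** 2
            + 0.3 * (momentum - 0.89) ** 2
            + 0.007)

def ga(pop_size=30, generations=100, seed=2):
    random.seed(seed)
    def fitness(ind):
        return rsm_mse(*ind)
    pop = [[random.randint(2, 30), random.uniform(0.01, 1.0), random.uniform(0.0, 1.0)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[:6]
        def child():
            p = list(random.choice(elite))
            p[0] = max(2, min(30, p[0] + random.choice([-1, 0, 1])))   # integer gene
            p[1] = max(0.01, min(1.0, p[1] + random.gauss(0, 0.03)))
            p[2] = max(0.0, min(1.0, p[2] + random.gauss(0, 0.03)))
            return p
        pop = elite + [child() for _ in range(pop_size - 6)]
    return min(pop, key=fitness)

neurons, lr, momentum = ga()
```

Evaluating the cheap surrogate instead of retraining an ANN for every candidate is precisely what makes the RSM-GA combination efficient.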
Collapse
|
42
|
[Optimization of the prediction of financial problems in Spanish private health companies using genetic algorithms]. GACETA SANITARIA 2018; 33:462-467. [PMID: 30143246 DOI: 10.1016/j.gaceta.2018.01.001] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Revised: 01/07/2018] [Accepted: 01/09/2018] [Indexed: 10/28/2022]
Abstract
OBJECTIVE This paper presents a methodology to optimize, using Altman's Z-Score for private companies, the prediction of private companies of the Spanish health sector entering a situation of bankruptcy. METHOD The proposed method consists of applying genetic algorithms (GAs) to find the coefficients of Altman's ratio formula, in the version of the score for private companies, that optimize the prediction for Spanish private health companies, maximizing sensitivity and specificity and thereby reducing type I and type II errors. For this purpose, a sample of 5,903 companies from the Spanish private health sector, obtained from the database of the Iberian Balance Analysis System (SABI) between 2007 and 2015, was used. RESULTS The results show that the predictive model obtained with the GA presents greater accuracy, sensitivity and specificity than that proposed by Altman for private companies, both on the test data and on the whole sample. CONCLUSIONS The most important finding of this study was to establish a methodology that can identify optimized coefficients for the Altman Z-Score, allowing a more accurate prediction of bankruptcy in Spanish private healthcare companies.
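The coefficient search can be sketched as a GA maximizing sensitivity plus specificity (Youden's index) over the ratio weights. The firm data, hidden "true" weights and cutoff below are entirely synthetic stand-ins for the SABI sample and Altman's formula:

```python
import random

# Synthetic stand-in for the SABI sample: four financial ratios per firm,
# bankruptcy labels generated from hidden "true" weights (not Altman's).
random.seed(3)
TRUE_W = [0.7, 0.8, 3.1, 0.4]
firms = [[random.uniform(-1, 2) for _ in range(4)] for _ in range(300)]
labels = [1 if sum(w * x for w, x in zip(TRUE_W, f)) < 1.8 else 0 for f in firms]

def youden(w, cutoff=1.8):
    """Sensitivity + specificity - 1 when score < cutoff flags distress."""
    tp = fn = tn = fp = 0
    for f, y in zip(firms, labels):
        pred = 1 if sum(wi * xi for wi, xi in zip(w, f)) < cutoff else 0
        if y == 1:
            tp, fn = tp + pred, fn + (1 - pred)
        else:
            tn, fp = tn + (1 - pred), fp + pred
    return tp / (tp + fn) + tn / (tn + fp) - 1

def ga(pop_size=40, generations=120):
    pop = [[random.uniform(0, 4) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=youden, reverse=True)      # maximise Youden's index
        elite = pop[:8]
        pop = elite + [[wi + random.gauss(0, 0.1) for wi in random.choice(elite)]
                       for _ in range(pop_size - 8)]
    return max(pop, key=youden)

best = ga()
```

Using Youden's index as the fitness directly encodes the paper's goal of jointly reducing type I and type II errors.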
Collapse
|
43
|
Abstract
Background Triclustering has shown itself to be a valuable tool for the analysis of microarray data since its appearance as an improvement over classical clustering and biclustering techniques. The standard for validation of triclustering is based on three different measures: correlation, graphic similarity of the patterns and functional annotations for the genes extracted from the Gene Ontology project (GO). Results We propose TRIQ, a single evaluation measure that combines the three measures previously described (correlation, graphic validation and functional annotation), providing a single value as the result of the validation of a tricluster solution and therefore simplifying the comparison and selection of solutions. TRIQ has been applied to three datasets already studied and evaluated with single measures based on correlation, graphic similarity and GO terms. Triclusters were extracted from these three datasets using two different algorithms: TriGen and OPTricluster. Conclusions TRIQ successfully provided the same results as the three single evaluation measures. Furthermore, we applied TRIQ to results from another algorithm, OPTricluster, showing that TRIQ is a valid tool to compare results from different algorithms in a quantitative, straightforward manner. It therefore appears to be a valid measure to represent and summarize the quality of tricluster solutions. It is also feasible for the evaluation of non-biological triclusters, thanks to the parametrization of each component of TRIQ.
Collapse
|
44
|
Comparison of support vector machine based on genetic algorithm with logistic regression to diagnose obstructive sleep apnea. JOURNAL OF RESEARCH IN MEDICAL SCIENCES 2018; 23:65. [PMID: 30181747 PMCID: PMC6091128 DOI: 10.4103/jrms.jrms_357_17] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/19/2017] [Revised: 01/01/2018] [Accepted: 04/29/2018] [Indexed: 11/04/2022]
Abstract
Background Diagnosing obstructive sleep apnea (OSA) is an important subject in medicine. This study aimed to compare the performance of two data mining techniques, support vector machines (SVM) and logistic regression (LR), in diagnosing OSA. The best-fit model was used as a substitute for polysomnography (PSG), the gold standard for diagnosing this disease. Materials and Methods A total of 250 patients with complaints of sleep problems, whose disease had been diagnosed by PSG and who had been referred to the Sleep Disorders Research Center of Farabi Hospital, Kermanshah, between 2012 and 2015, were recruited in this study. To fit the best LR model, a model was first fitted with all variables and then compared with a model built from the significant variables using Akaike's information criterion (AIC). An SVM model with a radial basis function (RBF) kernel, whose parameters had been optimized by a genetic algorithm, was used to diagnose OSA. Results Based on AIC, the best LR model obtained in this study was the model fitted with all variables. The performance of the final LR model was compared with the SVM model, revealing an accuracy of 0.797 versus 0.729, sensitivity of 0.714 versus 0.777, and specificity of 0.847 versus 0.702, respectively. Conclusion Both models were found to perform appropriately. However, considering accuracy as an important criterion for comparing the performance of models in this domain, it can be argued that SVM could be more efficient than LR in diagnosing OSA.
Collapse
|
45
|
Abstract
Background A huge and continuous increase in the number of completely sequenced chloroplast genomes, available for evolutionary and functional studies in plants, has been observed during the past years. Consequently, it appears possible to build large-scale phylogenetic trees of plant species. However, building such a tree that is well supported can be a difficult task, even when a subset of close plant species is considered. Usually, the difficulty arises from a few core genes disturbing the phylogenetic information, for example due to problems of homoplasy. Fortunately, a reliable phylogenetic tree can be obtained once these problematic genes are identified and removed from the analysis. Therefore, in this paper we address the problem of finding the largest subset of core genomes which allows us to build the best-supported tree. Results As an exhaustive study of all core gene combinations is intractable in practice, we investigate three well-known metaheuristics to solve this optimization problem. More precisely, we design and compare distributed approaches using genetic algorithms, particle swarm optimization and simulated annealing. The latter approach is a new contribution and is therefore described in detail, whereas the two former ones have been studied in previous works. They have been redesigned de novo in a new platform, and new experiments have been run on a larger set of chloroplasts, to compare the three metaheuristics. Conclusions The ways genes affect both tree topology and supports are assessed using statistical tools like the Lasso or dummy logistic regression, in a hybrid approach with the genetic algorithm. By doing so, we are able to provide the most supported trees based on the largest subsets of core genes.
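The simulated-annealing contribution can be sketched on a toy version of the subset-selection problem, where one hypothetical gene degrades tree support when included. The scoring function is a made-up stand-in for the actual bootstrap-support computation; only the search structure (flip one gene in or out, accept worse moves with temperature-dependent probability) reflects the method:

```python
import math
import random

# Toy instance: 12 core genes, of which gene 7 is hypothetically
# homoplasic and sharply degrades support when included.
N_GENES = 12
BAD = {7}

def score(subset):
    support = 100 - 40 * len(BAD & subset)   # made-up support score
    return support + len(subset)             # prefer larger subsets

def anneal(steps=2000, t0=5.0, seed=5):
    random.seed(seed)
    cur = set(range(N_GENES))                # start from the full core
    best = set(cur)
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3      # linear cooling schedule
        cand = set(cur)
        cand.symmetric_difference_update({random.randrange(N_GENES)})
        if cand and (score(cand) >= score(cur)
                     or random.random() < math.exp((score(cand) - score(cur)) / t)):
            cur = cand
            if score(cur) > score(best):
                best = set(cur)
    return best

best = anneal()
```

Here the annealer recovers the largest well-supported subset, i.e. every gene except the problematic one.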
Collapse
|
46
|
Control of a manipulator robot by neuro-fuzzy subsets form approach control optimized by the genetic algorithms. ISA TRANSACTIONS 2018; 77:133-145. [PMID: 29661551 DOI: 10.1016/j.isatra.2018.03.023] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/20/2017] [Revised: 01/25/2018] [Accepted: 03/29/2018] [Indexed: 06/08/2023]
Abstract
In this paper, we describe a new form of neuro-fuzzy-genetic controller design for a nonlinear system derived from a manipulator robot. The proposed method combines fuzzy logic and neural networks, which are of growing interest in robotics; the neuro-fuzzy controller does not require knowledge of the robot parameter values. Furthermore, genetic algorithms (GAs) for complex motion planning of robots require an evaluation function that takes multiple factors into account. An optimizing algorithm based on GAs is applied in order to provide the most adequate shapes of the fuzzy subsets, which are considered as interpolation functions. The proposed approach provides effective learning of the manipulator robot dynamics whatever the assigned task. Simulation and practical results illustrate the effectiveness of the proposed strategy. The advantages of the proposed method and possibilities for further improvement are discussed.
Collapse
|
47
|
Centralized bundle generation in auction-based collaborative transportation. OR SPECTRUM : QUANTITATIVE APPROACHES IN MANAGEMENT 2018; 40:613-635. [PMID: 31258228 PMCID: PMC6560701 DOI: 10.1007/s00291-018-0516-4] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/29/2017] [Accepted: 03/23/2018] [Indexed: 05/29/2023]
Abstract
In horizontal collaborations, carriers form coalitions in order to perform parts of their logistics operations jointly. By exchanging transportation requests among each other, they can operate more efficiently and more sustainably. This exchange of requests can be organized through combinatorial auctions, where collaborators submit requests for exchange to a common pool. The requests in the pool are grouped into bundles, and these are offered to participating carriers. From a practical point of view, offering all possible bundles is not manageable, since the number of bundles grows exponentially with the number of traded requests. We show how the complete set of bundles can be efficiently reduced to a subset of attractive ones. For this we define the Bundle Generation Problem (BuGP). The aim is to provide a reduced set of offered bundles that maximizes the total coalition profit while guaranteeing a feasible assignment of bundles to carriers. The objective function, however, could only be evaluated if carriers revealed sensitive information, which is unrealistic. Thus, we develop a proxy for the objective function to assess the attractiveness of bundles under incomplete information. This is used in a genetic-algorithm-based framework that aims at producing attractive and feasible bundles, such that all requirements of the BuGP are met. We achieve very good solution quality while reducing the computational time of the auction procedure significantly. This is an important step towards running combinatorial auctions of real-world size, which were previously intractable due to their computational complexity. The strengths but also the limitations of the proposed approach are discussed.
Collapse
|
48
|
Multiple crack detection in 3D using a stable XFEM and global optimization. COMPUTATIONAL MECHANICS 2018; 62:835-852. [PMID: 30220758 PMCID: PMC6132880 DOI: 10.1007/s00466-017-1532-y] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 02/24/2017] [Accepted: 12/18/2017] [Indexed: 06/08/2023]
Abstract
A numerical scheme is proposed for the detection of multiple cracks in three dimensional (3D) structures. The scheme is based on a variant of the extended finite element method (XFEM) and a hybrid optimizer solution. The proposed XFEM variant is particularly well-suited for the simulation of 3D fracture problems, and as such serves as an efficient solution to the so-called forward problem. A set of heuristic optimization algorithms are recombined into a multiscale optimization scheme. The introduced approach proves effective in tackling the complex inverse problem involved, where identification of multiple flaws is sought on the basis of sparse measurements collected near the structural boundary. The potential of the scheme is demonstrated through a set of numerical case studies of varying complexity.
Collapse
|
49
|
Design of sparse Halbach magnet arrays for portable MRI using a genetic algorithm. IEEE TRANSACTIONS ON MAGNETICS 2018; 54:5100112. [PMID: 29749974 PMCID: PMC5937527 DOI: 10.1109/tmag.2017.2751001] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
Permanent magnet arrays offer several attributes attractive for the development of a low-cost portable MRI scanner for brain imaging. They offer the potential for a relatively lightweight, low- to mid-field system with no cryogenics, a small fringe field, and no electrical power requirements or heat dissipation needs. The cylindrical Halbach array, however, requires external shimming or mechanical adjustments to produce B0 fields with standard MRI homogeneity levels (e.g., 0.1 ppm over the FOV), particularly when constrained or truncated geometries are needed, such as a head-only magnet where the magnet length is constrained by the shoulders. For portable scanners using rotation of the magnet for spatial encoding with generalized projections, the spatial pattern of the field is important, since it acts as the encoding field. In either a static or rotating magnet, it will be important to be able to optimize the field pattern of cylindrical Halbach arrays in a way that retains construction simplicity. To achieve this, we present a method for designing an optimized sparse cylindrical Halbach magnet using the genetic algorithm to achieve either homogeneity (for standard MRI applications) or a favorable spatial encoding field pattern (for rotational spatial encoding applications). We compare the chosen designs against a standard, fully populated Halbach design, and evaluate the optimized spatial encoding fields using point-spread-function and image simulations. We validate the calculations by comparing to the measured field of a constructed magnet. The experimentally implemented design produced fields in good agreement with the predicted fields, and the genetic algorithm was successful in improving the chosen metrics. For the uniform target field, an order of magnitude homogeneity improvement was achieved compared to the un-optimized, fully populated design. For the rotational encoding design, the resolution uniformity is improved by 95% compared to a uniformly populated design.
Collapse
|
50
|
Using RGB-D sensors and evolutionary algorithms for the optimization of workstation layouts. APPLIED ERGONOMICS 2017; 65:530-540. [PMID: 28159113 DOI: 10.1016/j.apergo.2017.01.012] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/14/2016] [Revised: 01/09/2017] [Accepted: 01/19/2017] [Indexed: 05/28/2023]
Abstract
RGB-D sensors can collect postural data in an automated way. However, the application of these devices in real work environments requires overcoming problems such as lack of accuracy or occlusion of body parts. This work presents the use of RGB-D sensors and genetic algorithms for the optimization of workstation layouts. RGB-D sensors are used to capture workers' movements as they reach for objects on workbenches. The collected data are then used to optimize the workstation layout by means of genetic algorithms considering multiple ergonomic criteria. Results show that the typical drawbacks of using RGB-D sensors for body tracking are not a problem for this application, and that the combination with intelligent algorithms can automate the layout design process. The described procedure can be used to automatically suggest new layouts when workers or production processes change, to adapt layouts to specific workers based on the way they perform their tasks, or to obtain layouts simultaneously optimized for several production processes.
Collapse
|