1
Gosti G, Milanetti E, Folli V, de Pasquale F, Leonetti M, Corbetta M, Ruocco G, Della Penna S. A recurrent Hopfield network for estimating meso-scale effective connectivity in MEG. Neural Netw 2024; 170:72-93. PMID: 37977091. DOI: 10.1016/j.neunet.2023.11.027.
Abstract
The architecture of communication within the brain, represented by the human connectome, has gained a paramount role in the neuroscience community. Several features of this communication, e.g., the frequency content, spatial topology, and temporal dynamics, are currently well established. However, identifying generative models that provide the underlying patterns of inhibition/excitation is very challenging. To address this issue, we present a novel generative model to estimate large-scale effective connectivity from MEG. The dynamic evolution of this model is determined by a recurrent Hopfield neural network with asymmetric connections, and it is thus denoted the Recurrent Hopfield Mass Model (RHoMM). Since RHoMM operates on binary neurons, it is suited to analyzing Band Limited Power (BLP) dynamics after a binarization step. We trained RHoMM to predict the MEG dynamics through gradient descent minimization and validated it in two steps. First, we showed a significant agreement between the similarity of the effective connectivity patterns and that of the interregional BLP correlations, demonstrating RHoMM's ability to capture individual variability of BLP dynamics. Second, we showed that the simulated BLP correlation connectomes, obtained from RHoMM evolutions of BLP, preserved important topological features of the real data, e.g., centrality, supporting the reliability of RHoMM. Compared to other biophysical models, RHoMM is based on recurrent Hopfield neural networks and thus has the advantage of being data-driven, less demanding in terms of hyperparameters, and scalable to encompass large-scale system interactions. These features are promising for investigating the dynamics of inhibition/excitation at different spatial scales.
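To make the model class concrete, here is a minimal Python sketch, not the authors' implementation, of the idea described in the abstract: an asymmetric Hopfield-like network over binary states whose couplings are fitted by gradient descent to predict the next binarized (BLP-like) state. The toy data, region count, loss, and learning rate are illustrative assumptions.

```python
import numpy as np

# Minimal sketch (not the RHoMM code): an asymmetric Hopfield-like network with
# binary units s_t in {-1, +1}, trained by gradient descent on a logistic
# next-step prediction loss so that W captures directed couplings between regions.
rng = np.random.default_rng(0)
n_regions, T = 10, 2000
blp = rng.normal(size=(T, n_regions)).cumsum(axis=0)        # toy BLP time series
s = np.where(blp > np.median(blp, axis=0), 1.0, -1.0)       # binarization step

W = np.zeros((n_regions, n_regions))                        # asymmetric couplings
b = np.zeros(n_regions)
lr = 0.05
for epoch in range(200):
    h = s[:-1] @ W.T + b                                    # field predicting s[t+1]
    p = 1.0 / (1.0 + np.exp(-2.0 * h))                      # P(s[t+1] = +1)
    err = (s[1:] + 1.0) / 2.0 - p                           # negative gradient of the loss
    W += lr * err.T @ s[:-1] / (T - 1)
    b += lr * err.mean(axis=0)

# W[i, j] is a data-driven estimate of the directed influence of region j on region i;
# surrogate dynamics can be generated by iterating s[t+1] = sign(W @ s[t] + b).
```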
Affiliation(s)
- Giorgio Gosti
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Soft and Living Matter Laboratory, Institute of Nanotechnology, Consiglio Nazionale delle Ricerche, Piazzale Aldo Moro, 5, 00185, Rome, Italy; Istituto di Scienze del Patrimonio Culturale, Sede di Roma, Consiglio Nazionale delle Ricerche, CNR-ISPC, Via Salaria km, 34900 Rome, Italy.
- Edoardo Milanetti
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Department of Physics, Sapienza University of Rome, Piazzale Aldo Moro, 5, 00185, Rome, Italy.
- Viola Folli
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; D-TAILS srl, Via di Torre Rossa, 66, 00165, Rome, Italy.
- Francesco de Pasquale
- Faculty of Veterinary Medicine, University of Teramo, 64100 Piano D'Accio, Teramo, Italy.
- Marco Leonetti
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Soft and Living Matter Laboratory, Institute of Nanotechnology, Consiglio Nazionale delle Ricerche, Piazzale Aldo Moro, 5, 00185, Rome, Italy; D-TAILS srl, Via di Torre Rossa, 66, 00165, Rome, Italy.
- Maurizio Corbetta
- Department of Neuroscience, University of Padova, Via Belzoni, 160, 35121, Padova, Italy; Padova Neuroscience Center (PNC), University of Padova, Via Orus, 2/B, 35129, Padova, Italy; Veneto Institute of Molecular Medicine (VIMM), Via Orus, 2, 35129, Padova, Italy.
- Giancarlo Ruocco
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena, 291, 00161, Rome, Italy; Department of Physics, Sapienza University of Rome, Piazzale Aldo Moro, 5, 00185, Rome, Italy.
- Stefania Della Penna
- Department of Neuroscience, Imaging and Clinical Sciences, and Institute for Advanced Biomedical Technologies, "G. d'Annunzio" University of Chieti-Pescara, Via Luigi Polacchi, 11, 66100 Chieti, Italy.
2
Leonetti M, Gosti G, Ruocco G. Photonic Stochastic Emergent Storage for deep classification by scattering-intrinsic patterns. Nat Commun 2024; 15:505. PMID: 38218858. PMCID: PMC10787794. DOI: 10.1038/s41467-023-44498-z.
Abstract
Disorder is a pervasive characteristic of natural systems, offering a wealth of non-repeating patterns. In this study, we present a novel storage method that harnesses naturally occurring random structures to store an arbitrary pattern in a memory device. This method, Stochastic Emergent Storage (SES), builds upon the concept of emergent archetypes, where a training set of imperfect examples (prototypes) is employed to instantiate an archetype in a Hopfield-like network through emergent processes. We demonstrate this non-Hebbian paradigm in the photonic domain by utilizing random transmission matrices, which govern light scattering in a white-paint turbid medium, as prototypes. Through the implementation of programmable hardware, we successfully realize and experimentally validate the capability to store an arbitrary archetype and perform classification at the speed of light. Leveraging the vast number of modes excited by mesoscopic diffusion, our approach enables the simultaneous storage of thousands of memories without requiring any additional fabrication effort. As in a content-addressable memory, all stored memories can be collectively assessed against a given pattern to identify the matching element. Furthermore, by organizing memories spatially into distinct classes, they become features within a higher-level categorical (deeper) optical classification layer.
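The emergent-archetype idea named above can be illustrated with a hedged software toy (the photonic transmission-matrix hardware is not modelled here): many imperfect prototypes of an archetype are stored Hebbian-style in a Hopfield-like network, and the archetype, which is never stored explicitly, emerges as the retrieved attractor. The sizes and noise levels are assumptions.

```python
import numpy as np

# Software analogue of emergent archetype storage (not the photonic SES device).
rng = np.random.default_rng(1)
N, n_prototypes, flip = 400, 50, 0.25
archetype = rng.choice([-1.0, 1.0], size=N)
noise = rng.random((n_prototypes, N)) < flip
prototypes = archetype * np.where(noise, -1.0, 1.0)          # imperfect examples

W = prototypes.T @ prototypes / N                            # store only the prototypes
np.fill_diagonal(W, 0.0)

state = archetype * np.where(rng.random(N) < 0.3, -1.0, 1.0)  # corrupted query
for _ in range(20):                                           # synchronous recall
    state = np.where(W @ state >= 0, 1.0, -1.0)
print("overlap with never-stored archetype:", float(state @ archetype) / N)  # close to 1
```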
Affiliation(s)
- Marco Leonetti
- Soft and Living Matter Laboratory, Institute of Nanotechnology, 00185, Rome, Italy.
- Center for Life Nano- & Neuro-Science, Italian Institute of Technology, Rome, Italy.
- Rebel Dynamics-IIT CLN2S Jointlab, 00161, Roma, Italy.
- Giorgio Gosti
- Soft and Living Matter Laboratory, Institute of Nanotechnology, 00185, Rome, Italy
- Center for Life Nano- & Neuro-Science, Italian Institute of Technology, Rome, Italy
- Istituto di Scienze del Patrimonio Culturale, Sede di Roma, Consiglio Nazionale delle Ricerche, 00010, Montelibretti (RM), Italy
- Giancarlo Ruocco
- Center for Life Nano- & Neuro-Science, Italian Institute of Technology, Rome, Italy
- Department of Physics, University Sapienza, I-00185, Roma, Italy
3
Abernot M, Azemard N, Todri-Sanial A. Oscillatory neural network learning for pattern recognition: an on-chip learning perspective and implementation. Front Neurosci 2023; 17:1196796. PMID: 37397448. PMCID: PMC10308018. DOI: 10.3389/fnins.2023.1196796.
Abstract
In the human brain, learning is continuous, whereas current AI models are pre-trained, leaving them static and predetermined. However, the environment and input data of AI models also change over time, so continual learning algorithms need to be studied, and in particular, how to implement such algorithms on-chip. In this work, we focus on Oscillatory Neural Networks (ONNs), a neuromorphic computing paradigm that performs auto-associative memory tasks like Hopfield Neural Networks (HNNs). We study the adaptability of HNN unsupervised learning rules to on-chip learning with ONNs. In addition, we propose a first solution to implement unsupervised on-chip learning using a digital ONN design. We show that the architecture enables efficient ONN on-chip learning with the Hebbian and Storkey learning rules in hundreds of microseconds for networks with up to 35 fully connected digital oscillators.
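For reference, the two unsupervised rules mentioned in the abstract can be written compactly in software. The sketch below covers only the weight computations and a toy recall test on 35 units (the network size quoted above); the digital ONN hardware and its on-chip implementation are not modelled.

```python
import numpy as np

def hebbian(patterns):
    # Classic Hebbian (outer-product) rule; patterns is (p, n) with entries in {-1, +1}.
    p, n = patterns.shape
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def storkey(patterns):
    # Incremental Storkey rule, which gives a higher capacity than Hebbian learning.
    p, n = patterns.shape
    W = np.zeros((n, n))
    for xi in patterns:
        H = W @ xi
        # local field h_ij excludes the self-contributions of units i and j
        hij = H[:, None] - np.diag(W)[:, None] * xi[:, None] - W * xi[None, :]
        W = W + (np.outer(xi, xi) - xi[:, None] * hij.T - hij * xi[None, :]) / n
    np.fill_diagonal(W, 0.0)
    return W

# Toy recall test: store 4 random patterns on 35 units and retrieve one from a noisy cue.
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(4, 35))
for rule in (hebbian, storkey):
    W = rule(patterns)
    state = patterns[0] * np.where(rng.random(35) < 0.2, -1.0, 1.0)   # 20% flipped cue
    for _ in range(10):
        state = np.where(W @ state >= 0, 1.0, -1.0)
    print(rule.__name__, "overlap after recall:", float(state @ patterns[0]) / 35)
```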
Affiliation(s)
- Madeleine Abernot
- Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Department of Microelectronics, University of Montpellier, CNRS, Montpellier, France
- Nadine Azemard
- Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Department of Microelectronics, University of Montpellier, CNRS, Montpellier, France
- Aida Todri-Sanial
- Laboratoire d'Informatique de Robotique et de Microélectronique de Montpellier (LIRMM), Department of Microelectronics, University of Montpellier, CNRS, Montpellier, France
- Electrical Engineering Department, Eindhoven University of Technology, Eindhoven, Netherlands
4
Zamri NE, Azhar SA, Mansor MA, Alway A, Kasihmuddin MSM. Weighted Random k Satisfiability for k=1,2 (r2SAT) in Discrete Hopfield Neural Network. Appl Soft Comput 2022. DOI: 10.1016/j.asoc.2022.109312.
5
Multi-discrete genetic algorithm in Hopfield neural network with weighted random k satisfiability. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07541-6.
6
A Novel Multi-Objective Hybrid Election Algorithm for Higher-Order Random Satisfiability in Discrete Hopfield Neural Network. Mathematics 2022. DOI: 10.3390/math10121963.
Abstract
Hybridized algorithms are commonly employed to improve the performance of existing methods. However, an optimal learning algorithm that combines evolutionary and swarm intelligence, and could radically improve the quality of the final neuron states, has not yet received much attention. Considering this issue, this paper presents a novel multi-objective metaheuristic, introduced as the Hybrid Election Algorithm (HEA), with strong results in solving optimization and combinatorial problems over a binary search space. The core ideas of the proposed HEA are inspired by socio-political phenomena and consist of creative and powerful mechanisms for reaching the optimal result. In logic programming, non-systematic logical structures give rise to richer behavior than systematic ones; accordingly, a non-systematic structure known as higher-order Random k Satisfiability (RANkSAT) is hosted here to overcome the limited interpretability and dissimilarity of systematic logical structures in a Discrete Hopfield Neural Network (DHNN). The novelty of this study is a new multi-objective Hybrid Election Algorithm that achieves the highest fitness value and can boost the storage capacity of the DHNN, together with a diversified logical structure embedded in the RANkSAT representation. To attain these goals, the proposed algorithm was compared against four other models: an evolutionary model (Genetic Algorithm, GA), a swarm-intelligence model (Artificial Bee Colony algorithm), a population-based model (the traditional Election Algorithm, EA), and the Exhaustive Search (ES) model. To check the performance of the proposed HEA model, several performance metrics were examined, including training and testing performance, energy, similarity analysis, and statistical analysis (the Friedman test with convergence analysis). Based on the experimental and statistical results, the proposed HEA model outperformed all four reference models.
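As a hedged illustration of the kind of objective these metaheuristics maximize, the sketch below generates a small random 2-SAT instance, standing in for the higher-order RANkSAT structure, and scores a bipolar neuron assignment by the fraction of satisfied clauses. The DHNN energy mapping and the Hybrid Election Algorithm itself are not reproduced, and all sizes are assumptions.

```python
import numpy as np

# Toy fitness for a random 2-SAT instance (a stand-in for higher-order RANkSAT).
rng = np.random.default_rng(2)
n_vars, n_clauses = 20, 40
clause_vars = np.array([rng.choice(n_vars, size=2, replace=False) for _ in range(n_clauses)])
signs = rng.choice([-1, 1], size=(n_clauses, 2))             # -1 marks a negated literal

def fitness(state):
    # state: bipolar vector in {-1, +1}; a literal is true when sign * state == +1
    literals = signs * state[clause_vars]                    # shape (n_clauses, 2)
    return float(np.mean((literals == 1).any(axis=1)))       # fraction of satisfied clauses

random_assignment = rng.choice([-1, 1], size=n_vars)
print("fitness of a random assignment:", fitness(random_assignment))
# A GA / ABC / EA / HEA-style search would mutate and recombine assignments to push
# this fitness toward 1, i.e., toward a fully consistent logical interpretation.
```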
7
Zhao S, Chen B, Wang H, Luo Z, Zhang T. A Feed-Forward Neural Network for Increasing the Hopfield-Network Storage Capacity. Int J Neural Syst 2022; 32:2250027. PMID: 35534937. DOI: 10.1142/s0129065722500277.
Abstract
In the hippocampal dentate gyrus (DG), pattern separation is thought to depend mainly on the concept of 'expansion recoding', i.e., random mixing of the different DG input channels. However, recent advances in neurophysiology have challenged this account of pattern separation. In this study, we propose a novel feed-forward neural network, inspired by the structure of the DG and by neural oscillatory analysis, to increase the storage capacity of the Hopfield network. Unlike previously published feed-forward neural networks, our bio-inspired network is designed to take advantage of both the biological structure and the functions of the DG. To better understand the computational principles of pattern separation in the DG, we established a mouse model of environmental enrichment and, using neural oscillatory analysis, obtained a possible computational model of the DG associated with better pattern separation ability. Furthermore, we developed a new algorithm, based on Hebbian learning and the coupling direction of neural oscillations, to train the proposed network. The simulation results show that our proposed network significantly expands the storage capacity of the Hopfield network and achieves more effective pattern separation. The storage capacity rises from 0.13 for the standard Hopfield network to 0.32 with our model when the overlap between patterns is 10%.
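The ~0.13 capacity figure quoted for the standard Hebbian Hopfield network can be checked with a short numerical experiment. The sketch below does only that; the DG-inspired feed-forward architecture and oscillation-based training proposed in the paper are not reproduced, and the network size is an arbitrary choice.

```python
import numpy as np

# Numerical check of the ~0.13-0.14 capacity limit of the standard Hebbian Hopfield network.
rng = np.random.default_rng(3)
N = 300

def recall_overlap(load):
    p = max(1, int(load * N))
    patterns = rng.choice([-1.0, 1.0], size=(p, N))
    W = patterns.T @ patterns / N                  # Hebbian weights
    np.fill_diagonal(W, 0.0)
    probe = patterns[0].copy()
    probe[: N // 10] *= -1                         # cue with 10% of the bits flipped
    for _ in range(30):
        probe = np.where(W @ probe >= 0, 1.0, -1.0)
    return float(probe @ patterns[0]) / N

for load in (0.05, 0.10, 0.14, 0.20, 0.30):
    print(f"load {load:.2f}: overlap {recall_overlap(load):.2f}")
# Overlap stays near 1 for loads below roughly 0.13-0.14 and degrades sharply above.
```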
Affiliation(s)
- Shaokai Zhao
- College of Life Sciences, Nankai University, 300071 Tianjin, P. R. China
- Bin Chen
- College of Life Sciences, Nankai University, 300071 Tianjin, P. R. China
- Hui Wang
- College of Life Sciences, Nankai University, 300071 Tianjin, P. R. China
- Zhiyuan Luo
- Department of Computer Science, Royal Holloway, University of London, Egham, Surrey TW20 0EX, UK
- Tao Zhang
- College of Life Sciences, Nankai University, 300071 Tianjin, P. R. China
8
Monti M, Fiorentino J, Milanetti E, Gosti G, Tartaglia GG. Prediction of Time Series Gene Expression and Structural Analysis of Gene Regulatory Networks Using Recurrent Neural Networks. Entropy 2022; 24:141. PMID: 35205437. PMCID: PMC8871363. DOI: 10.3390/e24020141.
Abstract
Methods for time series prediction and for the classification of gene regulatory networks (GRNs) from gene expression data have so far been treated separately. The recent emergence of attention-based recurrent neural network (RNN) models has boosted the interpretability of RNN parameters, making them appealing for understanding gene interactions. In this work, we generated synthetic time series gene expression data from a range of archetypal GRNs and relied on a dual-attention RNN to predict the gene temporal dynamics. We show that the prediction is extremely accurate for GRNs with different architectures. Next, we focused on the attention mechanism of the RNN and, using tools from graph theory, found that its graph properties allow one to hierarchically distinguish different GRN architectures. We also show that different GRNs respond differently to the addition of noise in the RNN prediction, and we relate this noise response to the analysis of the attention mechanism. In conclusion, this work provides a way to understand and exploit the attention mechanism of RNNs and paves the way for RNN-based methods for time series prediction and inference of GRNs from gene expression data.
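The data-generation step described above, synthetic time series from archetypal GRNs, can be sketched with simple noisy linear dynamics on a hand-picked adjacency matrix. This is an assumption-laden stand-in for the authors' GRN simulator, and the dual-attention RNN used for prediction is not reproduced here.

```python
import numpy as np

# Toy synthetic gene-expression time series from a small, hand-picked GRN.
rng = np.random.default_rng(4)
n_genes, T = 5, 500
A = np.zeros((n_genes, n_genes))                 # A[i, j] != 0 means gene j regulates gene i
A[1, 0], A[2, 0], A[3, 1], A[4, 2] = 0.8, -0.6, 0.7, 0.5    # cascade-like archetype

x = np.zeros((T, n_genes))
x[0] = rng.normal(size=n_genes)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + x[t - 1] @ A.T + 0.05 * rng.normal(size=n_genes)

# x can now be split into input/target windows for any one-step-ahead predictor,
# e.g. targets x[1:] predicted from inputs x[:-1].
```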
Affiliation(s)
- Michele Monti
- RNA System Biology Lab, Department of Neuroscience and Brain Technologies, Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genoa, Italy
- Centre for Genomic Regulation (CRG), The Barcelona Institute for Science and Technology, Dr. Aiguader 88, 08003 Barcelona, Spain
- Jonathan Fiorentino
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
- Edoardo Milanetti
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
- Department of Physics, Sapienza University of Rome, 00185 Rome, Italy
- Giorgio Gosti
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
- Department of Physics, Sapienza University of Rome, 00185 Rome, Italy
- Gian Gaetano Tartaglia
- RNA System Biology Lab, Department of Neuroscience and Brain Technologies, Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genoa, Italy
- Centre for Genomic Regulation (CRG), The Barcelona Institute for Science and Technology, Dr. Aiguader 88, 08003 Barcelona, Spain
- Center for Life Nano- & Neuro-Science, Istituto Italiano di Tecnologia, Viale Regina Elena 291, 00161 Rome, Italy
- Department of Biology and Biotechnology Charles Darwin, Sapienza University of Rome, 00185 Rome, Italy
9
Abernot M, Gil T, Jiménez M, Núñez J, Avellido MJ, Linares-Barranco B, Gonos T, Hardelin T, Todri-Sanial A. Digital Implementation of Oscillatory Neural Network for Image Recognition Applications. Front Neurosci 2021; 15:713054. PMID: 34512246. PMCID: PMC8427800. DOI: 10.3389/fnins.2021.713054.
Abstract
Computing paradigms based on von Neumann architectures cannot keep up with ever-increasing data growth (the so-called “data deluge gap”). This has prompted the investigation of novel computing paradigms and design approaches at all levels, from materials to system-level implementations and applications. An alternative computing approach based on artificial neural networks uses oscillators to compute: Oscillatory Neural Networks (ONNs). ONNs can perform computations efficiently and can be used to build larger neuromorphic systems. Here, we address a fundamental question: can we efficiently perform artificial intelligence applications with ONNs? We present a digital ONN implementation as a proof of concept of the “computing-in-phase” ONN approach for pattern recognition applications. To the best of our knowledge, this is the first attempt to implement an FPGA-based, fully digital ONN. We report ONN accuracy, training, inference, memory capacity, operating frequency, and hardware resources, based on simulations and implementations of 5 × 3 and 10 × 6 ONNs. We present the digital ONN implementation on FPGA for pattern recognition applications such as digit recognition from a camera stream, and we discuss practical challenges and future directions in implementing digital ONNs.
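The “computing-in-phase” principle can be illustrated with a small software model of phase-coupled oscillators that retrieves a stored binary pattern as in-phase/anti-phase oscillator groups. This Kuramoto-style sketch is not the paper's FPGA digital ONN design; the network size, coupling form, and time step are assumptions.

```python
import numpy as np

# Kuramoto-style phase model: a binary pattern is stored in Hebbian couplings and
# retrieved as groups of in-phase (0) and anti-phase (pi) oscillators.
rng = np.random.default_rng(5)
N = 15                                            # e.g. a 5 x 3 binary image, flattened
pattern = rng.choice([-1.0, 1.0], size=N)
W = np.outer(pattern, pattern) / N                # Hebbian phase couplings
np.fill_diagonal(W, 0.0)

theta = np.pi * (pattern < 0).astype(float)       # ideal phase encoding of the pattern
theta[N // 2:] = rng.uniform(0.0, 2.0 * np.pi, size=N - N // 2)   # corrupt half the phases
dt = 0.1
for _ in range(1000):
    # d(theta_i)/dt = sum_j W_ij * sin(theta_j - theta_i)
    coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = theta + dt * coupling

# Decode relative to oscillator 0: oscillators in the same phase group share the sign of pattern[0].
decoded = np.where(np.cos(theta - theta[0]) > 0, 1.0, -1.0) * pattern[0]
print("fraction of the stored pattern recovered:", float(np.mean(decoded == pattern)))
```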
Affiliation(s)
- Madeleine Abernot
- Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier, University of Montpellier, CNRS, Montpellier, France
- Thierry Gil
- Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier, University of Montpellier, CNRS, Montpellier, France
- Manuel Jiménez
- Instituto de Microelectronica de Sevilla, IMSE-CNM, CSIC, Universidad de Sevilla, Sevilla, Spain
- Juan Núñez
- Instituto de Microelectronica de Sevilla, IMSE-CNM, CSIC, Universidad de Sevilla, Sevilla, Spain
- María J Avellido
- Instituto de Microelectronica de Sevilla, IMSE-CNM, CSIC, Universidad de Sevilla, Sevilla, Spain
- Bernabé Linares-Barranco
- Instituto de Microelectronica de Sevilla, IMSE-CNM, CSIC, Universidad de Sevilla, Sevilla, Spain
- Aida Todri-Sanial
- Laboratoire d'Informatique, de Robotique et de Microélectronique de Montpellier, University of Montpellier, CNRS, Montpellier, France
10
Miotto M, Monacelli L. TOLOMEO, a Novel Machine Learning Algorithm to Measure Information and Order in Correlated Networks and Predict Their State. Entropy 2021; 23:1138. PMID: 34573763. PMCID: PMC8470539. DOI: 10.3390/e23091138.
Abstract
We present ToloMEo (TOpoLogical netwOrk Maximum Entropy Optimization), a program implemented in C and Python that exploits a maximum entropy algorithm to evaluate network topological information. ToloMEo can study any system defined on a connected network whose nodes can assume N discrete values, by approximating the system probability distribution with a Potts Hamiltonian on a graph. The software computes entropy through a thermodynamic integration from the mean-field solution to the final distribution. The nature of the algorithm guarantees that the evaluated entropy is variational (i.e., it always provides an upper bound to the exact entropy). The program also performs machine learning, inferring the system's behavior by providing the probability of unknown states of the network. These features make our method very general and applicable to a broad class of problems. Here, we focus on three case studies: (i) an agent-based model of a minimal ecosystem defined on a square lattice, where we show how topological entropy captures a crossover between hunting behaviors; (ii) an example of image processing, where, starting from discretized pictures of cell populations, we extract information about the ordering and interactions between cell types and reconstruct the most likely positions of cells when data are missing; and (iii) an application to recurrent neural networks, in which we measure the information stored in different realizations of the Hopfield model, extending our method to describe dynamical out-of-equilibrium processes.
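A hedged sketch of the basic objects mentioned above, not the ToloMEo code: a Potts-like Hamiltonian for nodes taking N discrete values on a graph, and the mean-field (independent-node) entropy that upper-bounds the exact entropy and serves as the starting point of the thermodynamic integration. The toy graph, couplings, and data are assumptions.

```python
import numpy as np

# Toy Potts Hamiltonian on a small graph plus a mean-field entropy estimate.
rng = np.random.default_rng(6)
n_nodes, n_states = 6, 3
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]    # a ring graph
J = {e: 1.0 for e in edges}                                  # ferromagnetic couplings

def potts_energy(config):
    # H = -sum over edges of J_ij * delta(s_i, s_j)
    return -sum(J[(i, j)] for (i, j) in edges if config[i] == config[j])

samples = rng.integers(0, n_states, size=(1000, n_nodes))    # stand-in observations
print("energy of the first sample:", potts_energy(samples[0]))

marginals = np.stack([(samples == q).mean(axis=0) for q in range(n_states)], axis=1)
mean_field_entropy = -np.sum(marginals * np.log(marginals + 1e-12))
print("mean-field entropy (nats):", mean_field_entropy)      # >= the exact joint entropy
```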
Affiliation(s)
- Mattia Miotto
- Department of Physics, Sapienza University of Rome, 00184 Rome, Italy
- Center for Life Nano- & Neuro Science, Istituto Italiano di Tecnologia, 00161 Rome, Italy
- Lorenzo Monacelli
- Department of Physics, Sapienza University of Rome, 00184 Rome, Italy
11
Curado EMF, Melgar NB, Nobre FD. External Stimuli on Neural Networks: Analytical and Numerical Approaches. Entropy 2021; 23:1034. PMID: 34441174. PMCID: PMC8393424. DOI: 10.3390/e23081034.
Abstract
Based on the behavior of living beings, which react mostly to external stimuli, we introduce a neural-network model that uses external patterns as a fundamental tool for the process of recognition. In this proposal, external stimuli appear as an additional field, and basins of attraction, representing memories, arise in accordance with this new field. This is in contrast to the more common attractor neural networks, where memories are attractors inside well-defined basins of attraction. We show that this procedure considerably increases the storage capabilities of the neural network; this property is illustrated with the standard Hopfield model, for which the recognition capacity of our model may be enlarged, typically, by a factor of 10². The primary challenge here consists in calibrating the influence of the external stimulus, in order to attenuate the noise generated by memories that are not correlated with the external pattern. The system is analyzed primarily through numerical simulations. However, since analytical calculations are possible for the Hopfield model, the agreement between the two approaches can be tested, and matching results are indicated in some cases. We also show that the present proposal exhibits a crucial attribute of living beings, namely their ability to react promptly to changes in the external environment. Additionally, we illustrate that this new approach may significantly enlarge the recognition capacity of neural networks in various situations: with correlated and uncorrelated memories, as well as with diluted, symmetric, or asymmetric interactions (synapses). This demonstrates that it can be implemented easily on a wide variety of models.
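The mechanism described above can be sketched on the standard Hopfield model in a few lines: the external stimulus enters the update as an additional field that biases recall toward the memory correlated with it. The sizes, load, and field strength below are assumptions, and the paper's analytical treatment is not reproduced.

```python
import numpy as np

# External stimulus as an extra field in standard Hopfield dynamics (toy sizes).
rng = np.random.default_rng(7)
N, p = 200, 60                                    # load 0.3, above the ~0.14 Hopfield limit
memories = rng.choice([-1.0, 1.0], size=(p, N))
W = memories.T @ memories / N
np.fill_diagonal(W, 0.0)

target = memories[0]
stimulus = target.copy()                          # external pattern correlated with memory 0
state = rng.choice([-1.0, 1.0], size=N)           # uninformative initial state
lam = 0.6                                         # stimulus strength (needs calibration)
for _ in range(50):
    state = np.where(W @ state + lam * stimulus >= 0, 1.0, -1.0)
print("overlap with the stimulated memory:", float(state @ target) / N)
# With lam = 0, recall from a random state fails at this load; the external field
# restores recognition of the correlated memory.
```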