1. Gowen SD. Training allostery-inspired mechanical response in disordered elastic networks. Soft Matter 2025;21:3527-3533. PMID: 40207387. DOI: 10.1039/d4sm01340a.
Abstract
Disordered elastic networks are a model material system in which it is possible to achieve tunable and trainable functions. This work investigates the modification of local mechanical properties in disordered networks, inspired by allosteric interactions in proteins: applying strain locally to a set of source nodes triggers a strain response at a distant set of target nodes. This is demonstrated first by using directed aging to modify the existing mechanical coupling between pairs of distant source and target nodes, and then by using it to induce coupling between formerly isolated source-target pairs. The experimental results are compared with those predicted by simulations.
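The response described in the abstract can be made concrete with a toy linear-response calculation. Below is a minimal sketch, not the paper's code: a small harmonic spring network is built on a jittered grid (an assumed geometry), the strain on every bond is computed while a chosen pair of source nodes is pinched together, and a directed-aging-style rule (an assumed form and rate) repeatedly softens the bonds carrying the most strain while that source strain is held. The strain transmitted to an arbitrarily chosen distant target bond is printed before and after aging.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small jittered grid of nodes connected to near neighbours (illustrative geometry).
n_side = 6
gx, gy = np.meshgrid(np.arange(n_side, dtype=float), np.arange(n_side, dtype=float))
pos = np.c_[gx.ravel(), gy.ravel()] + 0.2 * rng.standard_normal((n_side**2, 2))
bonds = np.array([(i, j) for i in range(len(pos)) for j in range(i + 1, len(pos))
                  if np.linalg.norm(pos[i] - pos[j]) < 1.6])
k = np.ones(len(bonds))                          # bond stiffnesses (the aging variables)

def geometry(pos, bonds):
    d = pos[bonds[:, 1]] - pos[bonds[:, 0]]
    L = np.linalg.norm(d, axis=1)
    return d / L[:, None], L

def stiffness_matrix(pos, bonds, k):
    """Linearized stiffness matrix of the harmonic spring network (2N x 2N)."""
    n = len(pos)
    K = np.zeros((2 * n, 2 * n))
    nhat, _ = geometry(pos, bonds)
    for (i, j), kb, nb in zip(bonds, k, nhat):
        block = kb * np.outer(nb, nb)
        for a, b, s in ((i, i, 1), (j, j, 1), (i, j, -1), (j, i, -1)):
            K[2 * a:2 * a + 2, 2 * b:2 * b + 2] += s * block
    return K

def response(pos, bonds, k, source_nodes, u_source):
    """Node displacements when the source nodes are displaced and the rest relax."""
    n = len(pos)
    K = stiffness_matrix(pos, bonds, k) + 1e-8 * np.eye(2 * n)   # regularize floppy modes
    fixed = np.concatenate([[2 * s, 2 * s + 1] for s in source_nodes])
    free = np.setdiff1d(np.arange(2 * n), fixed)
    u = np.zeros(2 * n)
    u[fixed] = u_source.ravel()
    u[free] = np.linalg.solve(K[np.ix_(free, free)], -K[np.ix_(free, fixed)] @ u[fixed])
    return u.reshape(-1, 2)

def bond_strain(pos, bonds, u):
    nhat, L = geometry(pos, bonds)
    du = u[bonds[:, 1]] - u[bonds[:, 0]]
    return np.einsum('ij,ij->i', du, nhat) / L

source_nodes = [0, 1]                            # a nearby "source" pair (arbitrary choice)
target_bond = len(bonds) - 1                     # a distant "target" bond (arbitrary choice)
u_src = np.array([[0.1, 0.0], [-0.1, 0.0]])      # pinch the source pair together

eps = bond_strain(pos, bonds, response(pos, bonds, k, source_nodes, u_src))
print("target strain before aging:", eps[target_bond])

for epoch in range(200):                         # aging while the source strain is held
    eps = bond_strain(pos, bonds, response(pos, bonds, k, source_nodes, u_src))
    k = np.clip(k - 1.0 * k * eps**2, 0.05, None)   # soften the most-strained bonds (assumed rule)

eps = bond_strain(pos, bonds, response(pos, bonds, k, source_nodes, u_src))
print("target strain after aging: ", eps[target_bond])
```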
Affiliation(s)
- Savannah D Gowen
- Department of Physics and The James Franck and Enrico Fermi Institutes, University of Chicago, Chicago, IL 60637, USA.
2. Jaeger HM, Murugan A, Nagel SR. Training physical matter to matter. Soft Matter 2024;20:6695-6701. PMID: 39140794. DOI: 10.1039/d4sm00629a.
Abstract
Biological systems offer a great many examples of how sophisticated, highly adapted behavior can emerge from training. Here we discuss how training might be used to impart similarly adaptive properties to physical matter. As a special form of materials processing, training differs in important ways from standard approaches for obtaining sought-after material properties. In particular, rather than designing or programming the local configurations and interactions of constituents, training uses externally applied stimuli to evolve material properties. This makes it possible to obtain different functionalities from the same starting material (pluripotency). Furthermore, training evolves a material in situ or under conditions similar to those during the intended use; thus, material performance can improve rather than degrade over time. We discuss requirements for trainability, outline recently developed training strategies for creating soft materials with multiple, targeted, and adaptable functionalities, and provide examples where the concept of training has been applied to materials on length scales from the molecular to the macroscopic.
Affiliation(s)
- Heinrich M Jaeger
- The James Franck Institute and Department of Physics, The University of Chicago, 929 E 57th St., Chicago, Illinois 60637, USA.
- Arvind Murugan
- The James Franck Institute and Department of Physics, The University of Chicago, 929 E 57th St., Chicago, Illinois 60637, USA.
- Sidney R Nagel
- The James Franck Institute and Department of Physics, The University of Chicago, 929 E 57th St., Chicago, Illinois 60637, USA.
3. Stern M, Liu AJ, Balasubramanian V. Physical effects of learning. Phys Rev E 2024;109:024311. PMID: 38491658. DOI: 10.1103/PhysRevE.109.024311.
Abstract
Interacting many-body physical systems ranging from neural networks in the brain to folding proteins to self-modifying electrical circuits can learn to perform diverse tasks. This learning, both in nature and in engineered systems, can occur through evolutionary selection or through dynamical rules that drive active learning from experience. Here, we show that learning in linear physical networks with weak input signals leaves architectural imprints on the Hessian of a physical system. Compared to a generic organization of the system components, (a) the effective physical dimension of the response to inputs decreases, (b) the response of physical degrees of freedom to random perturbations (or system "susceptibility") increases, and (c) the low-eigenvalue eigenvectors of the Hessian align with the task. Overall, these effects embody the typical scenario for learning processes in physical systems in the weak input regime, suggesting ways of discovering whether a physical network may have been trained.
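Each of the three imprints listed above can be turned into a simple numerical diagnostic. The sketch below is illustrative only: the Hessian and task vector are random placeholders rather than a trained physical network, and the specific measures (a participation ratio for the effective response dimension, the trace of the inverse-squared Hessian for the susceptibility to random forcing, and the squared overlap of the task with the softest eigenvectors) are plausible choices, not necessarily the ones used in the paper.

```python
import numpy as np

def hessian_diagnostics(H, t, n_low=5):
    evals, evecs = np.linalg.eigh(H)              # eigenvalues in ascending order
    # (a) participation ratio of the response u = H^{-1} t (effective dimension)
    u = np.linalg.solve(H, t)
    w = u**2 / np.sum(u**2)
    eff_dim = 1.0 / np.sum(w**2)
    # (b) mean squared response to unit-variance random forcing: sum_i 1/lambda_i^2
    susceptibility = np.sum(1.0 / evals**2)
    # (c) alignment of the n_low softest modes with the task direction
    t_hat = t / np.linalg.norm(t)
    alignment = np.sum((evecs[:, :n_low].T @ t_hat)**2)
    return eff_dim, susceptibility, alignment

# toy usage: a random positive-definite "Hessian" and a random task vector
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
H = A @ A.T + 0.1 * np.eye(50)
t = rng.standard_normal(50)
print(hessian_diagnostics(H, t))
```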
Affiliation(s)
- Menachem Stern
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Andrea J Liu
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Center for Computational Biology, Flatiron Institute, Simons Foundation, New York, New York 10010, USA
- Vijay Balasubramanian
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Santa Fe Institute, 1399 Hyde Park Road, Santa Fe, New Mexico 87501, USA
- Theoretische Natuurkunde, Vrije Universiteit Brussel, Pleinlaan 2, B-1050 Brussels, Belgium
4. Kedia H, Pan D, Slotine JJ, England JL. Drive-specific selection in multistable mechanical networks. J Chem Phys 2023;159:214106. PMID: 38047510. DOI: 10.1063/5.0171993.
Abstract
Systems with many stable configurations abound in nature, both in living and inanimate matter, encoding a rich variety of behaviors. In equilibrium, a multistable system is more likely to be found in configurations with lower energy, but the presence of an external drive can alter the relative stability of different configurations in unexpected ways. Living systems are examples par excellence of metastable nonequilibrium attractors whose structure and stability are highly dependent on the specific form and pattern of the energy flow sustaining them. Taking this distinctively lifelike behavior as inspiration, we sought to investigate the more general physical phenomenon of drive-specific selection in nonequilibrium dynamics. To do so, we numerically studied driven disordered mechanical networks of bistable springs possessing a vast number of stable configurations arising from the two stable rest lengths of each spring, thereby capturing the essential physical properties of a broad class of multistable systems. We found that there exists a range of forcing amplitudes for which the attractor states of driven disordered multistable mechanical networks are fine-tuned with respect to the pattern of external forcing to have low energy absorption from it. Additionally, we found that these drive-specific attractor states are further stabilized by precise matching between the multidimensional shape of their orbit and that of the potential energy well they inhabit. Lastly, we showed evidence of drive-specific selection in an experimental system and proposed a general method to estimate the range of drive amplitudes for drive-specific selection.
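The paper studies networks of bistable springs under spatially patterned forcing; the sketch below strips this down to a single bistable element (with assumed rest lengths, mobility, and drive parameters) simply to make the amplitude dependence of state selection and of the work absorbed from the drive concrete.

```python
import numpy as np

l1, l2 = 1.0, 2.0                                 # the spring's two rest lengths (assumed)

def dV(x):
    """Derivative of the double-well energy V(x) = (x - l1)^2 (x - l2)^2."""
    return 4.0 * (x - l1) * (x - l2) * (x - 0.5 * (l1 + l2))

def drive(A, omega=1.0, x0=1.0, dt=1e-3, n_cycles=20):
    x, work = x0, 0.0
    for t in np.arange(0.0, 2 * np.pi * n_cycles / omega, dt):
        f = A * np.sin(omega * t)
        v = -dV(x) + f                            # overdamped dynamics, unit mobility
        work += f * v * dt                        # work done on the spring by the drive
        x += v * dt
    return x, work / n_cycles

for A in (0.1, 0.5, 1.0, 2.0):
    x_final, w = drive(A)
    print(f"amplitude {A:.1f}: final extension {x_final:.2f}, work absorbed per cycle {w:.3f}")
```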
Affiliation(s)
- Hridesh Kedia
- Physics of Living Systems Group, Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Deng Pan
- Physics of Living Systems Group, Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
- Jean-Jacques Slotine
- Nonlinear Systems Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA
5. Mendels D, Byléhn F, Sirk TW, de Pablo JJ. Systematic modification of functionality in disordered elastic networks through free energy surface tailoring. Sci Adv 2023;9:eadf7541. PMID: 37285442. DOI: 10.1126/sciadv.adf7541.
Abstract
A combined machine learning-physics-based approach is explored for molecular and materials engineering. Specifically, collective variables, akin to those used in enhanced sampling simulations, are constructed using a machine learning model trained on data gathered from a single system. Through the constructed collective variables, it becomes possible to identify critical molecular interactions in the considered system, the modulation of which enables a systematic tailoring of the system's free energy landscape. To explore the efficacy of the proposed approach, we use it to engineer allosteric regulation and uniaxial strain fluctuations in a complex disordered elastic network. Its successful application in these two cases provides insights regarding how functionality is governed in systems characterized by extensive connectivity and points to its potential for the design of complex molecular systems.
Affiliation(s)
- Dan Mendels
- Pritzker School of Molecular Engineering, University of Chicago, 5640 S. Ellis Avenue, Chicago, IL 60637, USA
- Fabian Byléhn
- Pritzker School of Molecular Engineering, University of Chicago, 5640 S. Ellis Avenue, Chicago, IL 60637, USA
- Timothy W Sirk
- Polymers Branch, U.S. CCDC Army Research Laboratory, Aberdeen Proving Ground, MD 21005, USA
- Juan J de Pablo
- Pritzker School of Molecular Engineering, University of Chicago, 5640 S. Ellis Avenue, Chicago, IL 60637, USA
6. Lindeman CW, Hagh VF, Ip CI, Nagel SR. Competition between energy and dynamics in memory formation. Phys Rev Lett 2023;130:197201. PMID: 37243648. DOI: 10.1103/PhysRevLett.130.197201.
Abstract
Bistable objects that are pushed between states by an external field are often used as a simple model to study memory formation in disordered materials. Such systems, called hysterons, are typically treated quasistatically. Here, we generalize hysterons to explore the effect of dynamics in a simple spring system with tunable bistability and study how the system chooses a minimum. Changing the timescale of the forcing allows the system to transition from a situation where its fate is determined by following the local energy minimum to one where it is trapped in a shallow well determined by the path taken through configuration space. Oscillatory forcing can lead to transients lasting many cycles, a behavior not possible for a single quasistatic hysteron.
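A minimal caricature of a hysteron with dynamics, not the tunable spring geometry used in the paper: a single overdamped coordinate in a tilted double well whose tilt is cycled once. Sweeping the driving period (all values arbitrary) shows the crossover from dynamics-limited behavior, where the coordinate cannot follow the field, to quasistatic behavior, where it tracks the instantaneous local minimum and switches wells.

```python
import numpy as np

def run(period, amplitude=0.6, dt=1e-3, x0=-1.0):
    """Overdamped coordinate in V(x) = x^4/4 - x^2/2 - h(t) x, driven for one cycle."""
    x, x_max = x0, x0
    for t in np.arange(0.0, period, dt):
        h = amplitude * np.sin(2.0 * np.pi * t / period)
        x += (-(x**3 - x) + h) * dt       # xdot = -dV/dx, unit mobility
        x_max = max(x_max, x)
    return x, x_max

for period in (1.0, 20.0, 300.0):
    x_final, x_max = run(period)
    print(f"period {period:>6}: max x = {x_max:+.2f}, final x = {x_final:+.2f}")
```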
Affiliation(s)
- Chloe W Lindeman
- Department of Physics and The James Franck and Enrico Fermi Institutes, The University of Chicago, Chicago, Illinois 60637, USA
- Varda F Hagh
- Department of Physics and The James Franck and Enrico Fermi Institutes, The University of Chicago, Chicago, Illinois 60637, USA
- Chi Ian Ip
- Department of Physics and The James Franck and Enrico Fermi Institutes, The University of Chicago, Chicago, Illinois 60637, USA
- Sidney R Nagel
- Department of Physics and The James Franck and Enrico Fermi Institutes, The University of Chicago, Chicago, Illinois 60637, USA
7. Kroo LA, Bull MS, Prakash M. Active foam: the adaptive mechanics of 2D air-liquid foam under cyclic inflation. Soft Matter 2023;19:2539-2553. PMID: 36942719. DOI: 10.1039/d3sm00019b.
Abstract
Foam is a canonical example of disordered soft matter where local force balance leads to the competition of many metastable configurations. We present an experimental and theoretical framework for "active foam" where an individual voxel inflates and deflates periodically. Local periodic activity leads to irreversible and reversible T1 transitions throughout the foam, eventually reaching a reversible limit cycle. Individual vertices displace outwards and subsequently return to approximately their original radial position; this radial displacement follows an inverse law. Surprisingly, each return trajectory does not retrace its outbound path but encloses a finite area, with a clockwise (CW) or counterclockwise (CCW) direction, which we define as a local swirl. These swirls form coherent patterns spanning the scale of the material. Using a dynamical model, we demonstrate that swirl arises from disorder in the local micro-structure. We demonstrate that disorder and strain-rate control a crossover between cooperation and competition between swirls in adjacent vertices. Over 5-10 cycles, the region around the active voxel structurally adapts from a higher-energy metastable state to a lower-energy state, locally ordering and stiffening the structure. The coherent domains of CW/CCW swirl become smaller as the system stabilizes, indicative of a process similar to the Hall-Petch effect. Finally, we introduce a statistical model that evolves edge lengths with a set of rules to explore how this class of materials adapts as a function of initial structure. Adding activity to foam couples structural disorder and adaptive dynamics to encourage the development of a new class of abiotic, cellularized active matter.
Affiliation(s)
- L A Kroo
- Department of Mechanical Engineering, Stanford University, USA
- Manu Prakash
- Department of Bioengineering, Stanford University, USA
8. Hexner D. Training precise stress patterns. Soft Matter 2023;19:2120-2126. PMID: 36861892. DOI: 10.1039/d2sm01487d.
Abstract
We introduce a training rule that enables a network composed of springs and dashpots to learn precise stress patterns. Our goal is to control the tensions on a fraction of "target" bonds, which are chosen randomly. The system is trained by applying stresses to the target bonds, causing the remaining bonds, which act as the learning degrees of freedom, to evolve. Different criteria for selecting the target bonds affect whether frustration is present. When there is at most a single target bond per node, the error converges to computer precision. Additional targets on a single node may lead to slow convergence and failure. Nonetheless, training is successful even when approaching the limit predicted by the Maxwell-Calladine theorem. We demonstrate the generality of these ideas by considering dashpots with yield stresses. We show that training converges, albeit with a slower, power-law decay of the error. Furthermore, dashpots with yield stresses prevent the system from relaxing after training, enabling permanent memories to be encoded.
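The sketch below illustrates the division into target bonds and learning degrees of freedom on a toy star-shaped network, but it is not the paper's local training rule: here the non-target rest lengths are simply updated along a numerical gradient of the tension error, with a full physical relaxation between updates. The geometry, desired tensions, and rates are invented.

```python
import numpy as np
from scipy.optimize import minimize

n = 8                                            # springs from fixed anchors to one central node
angles = 2 * np.pi * np.arange(n) / n
anchors = np.c_[np.cos(angles), np.sin(angles)]
k = np.ones(n)                                   # spring constants
rest = np.full(n, 0.8)                           # rest lengths
target_bonds = [0, 1]                            # bonds whose tensions we want to prescribe
learn_bonds = [i for i in range(n) if i not in target_bonds]
T_desired = np.array([0.3, -0.1])                # desired tensions (+ stretched, - compressed)

def relax(rest):
    """Equilibrium position of the central node for the given rest lengths."""
    def energy(x):
        lengths = np.linalg.norm(anchors - x, axis=1)
        return 0.5 * np.sum(k * (lengths - rest) ** 2)
    return minimize(energy, x0=np.zeros(2), tol=1e-10).x

def tensions(rest):
    x = relax(rest)
    lengths = np.linalg.norm(anchors - x, axis=1)
    return k * (lengths - rest)

eta, d = 1.0, 1e-4
for step in range(500):
    err = tensions(rest)[target_bonds] - T_desired
    if np.abs(err).max() < 1e-3:
        break
    grad = np.zeros(len(learn_bonds))            # numerical gradient of the squared tension
    for a, b in enumerate(learn_bonds):          # error w.r.t. the learning rest lengths
        trial = rest.copy()
        trial[b] += d
        err_trial = tensions(trial)[target_bonds] - T_desired
        grad[a] = (np.sum(err_trial ** 2) - np.sum(err ** 2)) / d
    rest[learn_bonds] -= eta * grad

print("achieved target tensions:", tensions(rest)[target_bonds])
print("desired target tensions :", T_desired)
```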
Affiliation(s)
- Daniel Hexner
- Faculty of Mechanical Engineering, Technion, 320000 Haifa, Israel.
9. Bhattacharyya K, Zwicker D, Alim K. Memory capacity of adaptive flow networks. Phys Rev E 2023;107:034407. PMID: 37073018. DOI: 10.1103/PhysRevE.107.034407.
Abstract
Biological flow networks adapt their network morphology to optimize flow while being exposed to external stimuli from different spatial locations in their environment. These adaptive flow networks retain a memory of the stimulus location in the network morphology. Yet, what limits this memory and how many stimuli can be stored are unknown. Here, we study a numerical model of adaptive flow networks by applying multiple stimuli subsequently. We find strong memory signals for stimuli imprinted for a long time into young networks. Consequently, networks can store many stimuli at intermediate stimulus durations, which balance imprinting and aging.
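A deliberately minimal two-channel caricature of such an adaptive flow network (not the model studied in the paper): two parallel channels share a fixed total flow, each conductance is reinforced sublinearly by the flow it carries and decays otherwise, and a stimulus temporarily reroutes extra flow through one channel. The conductance asymmetry that survives a fixed aging period serves as the memory signal; the exponents, rates, and durations are arbitrary.

```python
import numpy as np

def memory_signal(t_stim, t_age=5.0, dt=0.01, I=1.0, boost=0.5):
    """Imprint a stimulus for t_stim, then age for t_age; return the surviving asymmetry."""
    C = np.array([1.0, 1.0])                     # conductances of the two channels
    for t in np.arange(0.0, t_stim + t_age, dt):
        Q = I * C / C.sum()                      # flow splits in proportion to conductance
        if t < t_stim:
            Q = Q + np.array([boost, 0.0])       # stimulus: extra flow through channel 0
        C += (Q ** 0.5 - C) * dt                 # sublinear reinforcement vs. linear decay
    return C[0] - C[1]

for t_stim in (1.0, 5.0, 15.0):
    print(f"stimulus duration {t_stim:>5}: memory signal = {memory_signal(t_stim):+.4f}")
```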
Affiliation(s)
- Komal Bhattacharyya
- Max Planck Institute for Dynamics and Self-Organisation, 37077 Göttingen, Germany
- David Zwicker
- Max Planck Institute for Dynamics and Self-Organisation, 37077 Göttingen, Germany
- Karen Alim
- Max Planck Institute for Dynamics and Self-Organisation, 37077 Göttingen, Germany
- Center for Protein Assemblies and Department of Bioscience, School of Natural Sciences, Technische Universität München, 85748 Garching, Germany
10. Pashine N, Nasab AM, Kramer-Bottiglio R. Reprogrammable allosteric metamaterials from disordered networks. Soft Matter 2023;19:1617-1623. PMID: 36752560. DOI: 10.1039/d2sm01284g.
Abstract
Prior works on disordered mechanical metamaterial networks-consisting of fixed nodes connected by discrete bonds-have shown that auxetic and allosteric responses can be achieved by pruning a specific set of the bonds from an originally random network. However, bond pruning is irreversible and yields a single bulk response. Using material stiffness as a tunable design parameter, we create metamaterial networks where allosteric responses are achieved without bond removal. Such systems are experimentally realized through variable stiffness bonds that can strengthen and weaken on-demand. In a disordered mechanical network with variable stiffness bonds, different subsets of bonds can be strategically softened to achieve different bulk responses, enabling a multiplicity of reprogrammable input/output allosteric responses.
Affiliation(s)
- Nidhi Pashine
- School of Engineering & Applied Science, Yale University, New Haven, CT 06520, USA.
- Amir Mohammadi Nasab
- School of Engineering & Applied Science, Yale University, New Haven, CT 06520, USA.
11. Arinze C, Stern M, Nagel SR, Murugan A. Learning to self-fold at a bifurcation. Phys Rev E 2023;107:025001. PMID: 36932611. DOI: 10.1103/PhysRevE.107.025001.
Abstract
Disordered mechanical systems can deform along a network of pathways that branch and recombine at special configurations called bifurcation points. Multiple pathways are accessible from these bifurcation points; consequently, computer-aided design algorithms have been sought to achieve a specific structure of pathways at bifurcations by rationally designing the geometry and material properties of these systems. Here, we explore an alternative physical training framework in which the topology of folding pathways in a disordered sheet is changed in a desired manner due to changes in crease stiffnesses induced by prior folding. We study the quality and robustness of such training for different "learning rules," that is, different quantitative ways in which local strain changes the local folding stiffness. We experimentally demonstrate these ideas using sheets with epoxy-filled creases whose stiffnesses change due to folding before the epoxy sets. Our work shows how specific forms of plasticity in materials enable them to learn nonlinear behaviors through their prior deformation history in a robust manner.
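The role of the learning rule can be caricatured with just two competing creases, far simpler than the sheets studied in the paper: whichever crease is softer folds first at the bifurcation, and folding then changes that crease's stiffness according to the chosen rule. A softening rule locks the pathway in, while a stiffening rule destabilizes it; the numbers below are arbitrary.

```python
def fold_sequence(rule, n_folds=8, eta=0.2):
    k = [1.0, 1.01]                          # stiffnesses of two competing creases
    sequence = []
    for _ in range(n_folds):
        branch = 0 if k[0] <= k[1] else 1    # the softer crease folds at the bifurcation
        k[branch] *= (1 - eta) if rule == "soften" else (1 + eta)
        sequence.append(branch)
    return sequence, [round(v, 3) for v in k]

for rule in ("soften", "stiffen"):
    print(rule, fold_sequence(rule))
```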
Affiliation(s)
- Chukwunonso Arinze
- Department of Physics, University of Chicago, Chicago, Illinois 60637, USA
- Menachem Stern
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- Sidney R Nagel
- Department of Physics, University of Chicago, Chicago, Illinois 60637, USA
- Arvind Murugan
- Department of Physics, University of Chicago, Chicago, Illinois 60637, USA
12. Model architecture can transform catastrophic forgetting into positive transfer. Sci Rep 2022;12:10736. PMID: 35750768. PMCID: PMC9232654. DOI: 10.1038/s41598-022-14348-x.
Abstract
The work of McCloskey and Cohen popularized the concept of catastrophic interference. They used a neural network that tried to learn addition using two groups of examples as two different tasks. In their case, learning the second task rapidly deteriorated the acquired knowledge about the previous one. We hypothesize that this could be a symptom of a fundamental problem: addition is an algorithmic task that should not be learned through pattern recognition. Therefore, other model architectures better suited for this task would avoid catastrophic forgetting. We use a neural network with a different architecture that can be trained to recover the correct algorithm for the addition of binary numbers. This neural network includes conditional clauses that are naturally treated within the back-propagation algorithm. We test it in the setting proposed by McCloskey and Cohen, training on random additions one at a time. The neural network not only avoids catastrophic forgetting but also improves its predictive power on unseen pairs of numbers as training progresses. We also show that this effect is robust and persists when averaging over many simulations. This work emphasizes the importance of neural network architecture for the emergence of catastrophic forgetting and introduces a neural network that is able to learn an algorithm.
13. Tociu L, Rassolov G, Fodor E, Vaikuntanathan S. Mean-field theory for the structure of strongly interacting active liquids. J Chem Phys 2022;157:014902. DOI: 10.1063/5.0096710.
Abstract
Active systems, which are driven out of equilibrium by local non-conservative forces, exhibit unique behaviors and structures with potential utility for the design of novel materials. An important and difficult challenge along the path towards such a goal is to precisely predict how the structure of active systems is modified as their driving forces push them out of equilibrium. Here, we use tools from liquid-state theories to approach this challenge for a classic minimal isotropic active matter model. First, we construct a nonequilibrium mean-field framework which can predict the structure of systems of weakly interacting particles. Second, motivated by equilibrium solvation theories, we modify this theory to extend it with surprisingly high accuracy to strongly interacting particles, distinguishing it from most existing similarly tractable approaches. Our results provide insight into spatial organization in strongly interacting out-of-equilibrium systems and strategies to control them.
Affiliation(s)
- Laura Tociu
- The University of Chicago, United States of America
14. Hagh VF, Nagel SR, Liu AJ, Manning ML, Corwin EI. Transient learning degrees of freedom for introducing function in materials. Proc Natl Acad Sci U S A 2022;119:e2117622119. PMID: 35512090. PMCID: PMC9171605. DOI: 10.1073/pnas.2117622119.
Abstract
Significance: Many protocols used in material design and training have a common theme: they introduce new degrees of freedom, often by relaxing away existing constraints, and then evolve these degrees of freedom based on a rule that leads the material to a desired state, at which point these new degrees of freedom are frozen out. By creating a unifying framework for these protocols, we can now understand that some protocols work better than others because the choice of new degrees of freedom matters. For instance, introducing particle sizes as degrees of freedom to the minimization of a jammed particle packing can lead to a highly stable state, whereas particle stiffnesses do not have nearly the same impact.
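A minimal sketch of the transient-degrees-of-freedom idea (not the protocol or systems used in the paper): a small periodic packing of soft disks is relaxed once with fixed radii, and once with the radii opened up as extra degrees of freedom while a penalty holds the total disk area fixed, after which the radii are frozen and the positions are re-relaxed. The box size, packing fraction, and penalty weight are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
N, L, phi = 16, 5.0, 0.95                       # particle number, box size, packing fraction
R0 = np.full(N, np.sqrt(phi * L**2 / (np.pi * N)))
x_init = rng.uniform(0.0, L, size=(N, 2)).ravel()
area0 = np.pi * np.sum(R0**2)

def overlap_energy(pos, R):
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)                    # minimum-image convention (periodic box)
    r = np.linalg.norm(d, axis=2)
    sigma = R[:, None] + R[None, :]
    ov = np.clip(1.0 - r / sigma, 0.0, None)    # harmonic overlap of soft disks
    np.fill_diagonal(ov, 0.0)
    return 0.25 * np.sum(ov**2)                 # 1/2 per contact; pairs counted twice

def E_positions(p, R):
    return overlap_energy(p.reshape(N, 2), R)

def E_positions_and_radii(q):
    pos, R = q[:2 * N].reshape(N, 2), q[2 * N:]
    return overlap_energy(pos, R) + 100.0 * (np.pi * np.sum(R**2) - area0)**2

# (1) ordinary relaxation: positions only, radii fixed
fixed = minimize(E_positions, x_init, args=(R0,), method="L-BFGS-B")

# (2) relaxation with transient degrees of freedom: positions AND radii evolve together
bounds = [(None, None)] * (2 * N) + [(0.2 * R0[0], 2.0 * R0[0])] * N
joint = minimize(E_positions_and_radii, np.concatenate([x_init, R0]),
                 method="L-BFGS-B", bounds=bounds)

# (3) freeze the evolved radii and re-relax the positions only
R_frozen = joint.x[2 * N:]
frozen = minimize(E_positions, joint.x[:2 * N], args=(R_frozen,), method="L-BFGS-B")

print("energy, radii fixed throughout :", fixed.fun)
print("energy, transient radii frozen :", frozen.fun)
```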
Affiliation(s)
- Varda F. Hagh
- James Franck Institute, University of Chicago, Chicago, IL 60637, USA
- Department of Physics and Materials Science Institute, University of Oregon, Eugene, OR 97403, USA
- Sidney R. Nagel
- James Franck Institute, University of Chicago, Chicago, IL 60637, USA
- Andrea J. Liu
- Department of Physics, University of Pennsylvania, Philadelphia, PA 19104, USA
- M. Lisa Manning
- Department of Physics, Syracuse University, Syracuse, NY 13244, USA
- BioInspired Institute, Syracuse University, Syracuse, NY 13244, USA
- Eric I. Corwin
- Department of Physics and Materials Science Institute, University of Oregon, Eugene, OR 97403, USA
15. Wycoff JF, Dillavou S, Stern M, Liu AJ, Durian DJ. Desynchronous learning in a physics-driven learning network. J Chem Phys 2022;156:144903. DOI: 10.1063/5.0084631.
Abstract
In a biological neural network, synapses update individually using local information, allowing for entirely decentralized learning. In contrast, elements in an artificial neural network are typically updated simultaneously using a central processor. Here, we investigate the feasibility and effect of desynchronous learning in a recently introduced decentralized, physics-driven learning network. We show that desynchronizing the learning process does not degrade the performance for a variety of tasks in an idealized simulation. In experiment, desynchronization actually improves the performance by allowing the system to better explore the discretized state space of solutions. We draw an analogy between desynchronization and mini-batching in stochastic gradient descent and show that they have similar effects on the learning process. Desynchronizing the learning process establishes physics-driven learning networks as truly fully distributed learning machines, promoting better performance and scalability in deployment.
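The synchronous/desynchronous distinction can be mimicked on an ordinary optimization problem; the sketch below is a loose analogy, not the electrical learning network used in the paper. Gradient descent is run on a toy least-squares problem in which either every parameter updates at each step ("synchronous") or each parameter updates independently with some probability ("desynchronous"), echoing the mini-batching analogy drawn above. The sizes, rates, and probabilities are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((80, 20))                 # toy regression problem (arbitrary sizes)
y = rng.standard_normal(80)

def train(p_update, eta=1e-2, steps=4000):
    w = np.zeros(20)
    for _ in range(steps):
        grad = A.T @ (A @ w - y) / len(y)
        mask = rng.random(20) < p_update          # which "elements" get to update this step
        w -= eta * grad * mask
    return 0.5 * np.mean((A @ w - y) ** 2)

print("synchronous   (p=1.0) loss:", train(1.0))
print("desynchronous (p=0.3) loss:", train(0.3))
```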
Affiliation(s)
- J. F. Wycoff
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- S. Dillavou
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- M. Stern
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- A. J. Liu
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
- D. J. Durian
- Department of Physics and Astronomy, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA
16. Liu C, Ferrero EE, Jagla EA, Martens K, Rosso A, Talon L. The fate of shear-oscillated amorphous solids. J Chem Phys 2022;156:104902. DOI: 10.1063/5.0079460.
Affiliation(s)
- Chen Liu
- Department of Chemistry, Columbia University, United States of America
- Eduardo A. Jagla
- Teoría de Sólidos, Centro Atómico Bariloche, Comisión Nacional de Energía Atómica, Argentina
17. Hexner D. Training nonlinear elastic functions: nonmonotonic, sequence dependent and bifurcating. Soft Matter 2021;17:4407-4412. PMID: 33908450. DOI: 10.1039/d0sm02189j.
Abstract
The elastic behavior of materials operating in the linear regime is constrained, by definition, to operations that are linear in the imposed deformation. Although the nonlinear regime holds promise for new functionality, design in this regime is challenging. In this paper, we demonstrate that a recent approach based on training [Hexner et al., PNAS 2020, 201922847] allows responses that are inherently nonlinear. By applying designer strains, a disordered solid evolves through plastic deformations that alter its response. We show examples of elaborate nonlinear training paths that lead to the following functions: (1) frequency conversion, (2) a logic gate, and (3) expansion or contraction along one axis, depending on the sequence of imposed transverse compressions. We study the convergence rate and find that it depends on the trained function.
Affiliation(s)
- Daniel Hexner
- Faculty of Mechanical Engineering, Technion, 320000 Haifa, Israel.