1
Peng Y, Hietala K, Tao R, Li L, Rand R, Hicks M, Wu X. A formally certified end-to-end implementation of Shor's factorization algorithm. Proc Natl Acad Sci U S A 2023;120:e2218775120. PMID: 37186832; PMCID: PMC10214188; DOI: 10.1073/pnas.2218775120.
Abstract
Quantum computing technology may soon deliver revolutionary improvements in algorithmic performance, but it is useful only if computed answers are correct. While hardware-level decoherence errors have garnered significant attention, a less recognized obstacle to correctness is that of human programming errors: "bugs." Techniques familiar to most programmers from the classical domain for avoiding, discovering, and diagnosing bugs do not easily transfer, at scale, to the quantum domain because of its unique characteristics. To address this problem, we have been working to adapt formal methods to quantum programming. With such methods, a programmer writes a mathematical specification alongside the program and semiautomatically proves the program correct with respect to it. The proof's validity is automatically confirmed, or certified, by a "proof assistant." Formal methods have successfully yielded high-assurance classical software artifacts, and the underlying technology has produced certified proofs of major mathematical theorems. As a demonstration of the feasibility of applying formal methods to quantum programming, we present a formally certified end-to-end implementation of Shor's prime factorization algorithm, developed as part of a framework for applying the certified approach to general applications. By leveraging our framework, one can significantly reduce the effects of human errors and obtain a high-assurance implementation of large-scale quantum applications in a principled way.
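For readers unfamiliar with the algorithm being certified, the classical skeleton of Shor's factorization can be sketched as follows. This is an illustrative sketch only, not the paper's certified implementation: the quantum order-finding subroutine is replaced here by classical brute force, so it only works for tiny inputs.

```python
from math import gcd

def order(a: int, n: int) -> int:
    """Multiplicative order of a mod n, found by brute force.
    The quantum speedup in Shor's algorithm replaces exactly this step."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n: int, a: int):
    """Classical post-processing of Shor's algorithm: given a base `a`,
    use the order r of a mod n to split n into two nontrivial factors.
    Returns None when this base fails (r odd, or a^(r/2) == -1 mod n)."""
    g = gcd(a, n)
    if g != 1:
        return g, n // g  # lucky guess: a already shares a factor with n
    r = order(a, n)
    if r % 2 == 1:
        return None
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None
    p = gcd(y - 1, n)
    if 1 < p < n:
        return p, n // p
    q = gcd(y + 1, n)
    if 1 < q < n:
        return q, n // q
    return None
```

For example, factoring n = 15 with base a = 7 finds the order r = 4, so 7² mod 15 = 4 yields the factors gcd(3, 15) = 3 and gcd(5, 15) = 5.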
Affiliation(s)
- Yuxiang Peng
  - Department of Computer Science, University of Maryland, College Park, MD 20740
  - Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD 20740
- Kesha Hietala
  - Department of Computer Science, University of Maryland, College Park, MD 20740
- Runzhou Tao
  - Department of Computer Science, Columbia University, New York, NY 10027
- Liyi Li
  - Department of Computer Science, University of Maryland, College Park, MD 20740
  - Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD 20740
- Robert Rand
  - Department of Computer Science, University of Chicago, Chicago, IL 60637
- Michael Hicks
  - Department of Computer Science, University of Maryland, College Park, MD 20740
  - Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD 20740
- Xiaodi Wu
  - Department of Computer Science, University of Maryland, College Park, MD 20740
  - Joint Center for Quantum Information and Computer Science, University of Maryland, College Park, MD 20740
2
Luo F, Jiang Y, Wang J, Li Z, Zhang X. A Framework for Cybersecurity Requirements Management in the Automotive Domain. Sensors (Basel) 2023;23:4979. PMID: 37430891; DOI: 10.3390/s23104979.
Abstract
The rapid development of intelligent connected vehicles has increased the attack surface of vehicles and made vehicle systems unprecedentedly complex. Original equipment manufacturers (OEMs) need to accurately represent and identify threats and match them to corresponding security requirements. Meanwhile, the fast iteration cycle of modern vehicles requires development engineers to quickly obtain cybersecurity requirements for new features in their systems so that they can develop code that satisfies those requirements. However, existing threat-identification and cybersecurity-requirement methods in the automotive domain cannot accurately describe and identify threats for a new feature while also quickly matching appropriate cybersecurity requirements. This article proposes a cybersecurity requirements management system (CRMS) framework to help OEM security experts conduct comprehensive, automated threat analysis and risk assessment, and to help development engineers identify security requirements before software development begins. The framework lets development engineers quickly model their systems using the UML-based Eclipse Modeling Framework, and lets security experts encode their experience in a threat library and a security-requirement library expressed in the Alloy formal language. To ensure accurate matching between the two, a middleware communication framework designed specifically for the automotive domain, the component channel messaging and interface (CCMI) framework, is proposed. CCMI aligns the development engineers' rapid models with the security experts' formal models, enabling accurate, automated threat and risk identification and security-requirement matching.
To validate the work, we conducted experiments on the proposed framework and compared the results with the HEAVENS approach. The results show that the proposed framework achieves higher threat-detection rates and better coverage of security requirements. It also saves analysis time for large and complex systems, and the savings become more pronounced as system complexity grows.
Affiliation(s)
- Feng Luo
  - School of Automotive Studies, Tongji University, Shanghai 201804, China
- Yifan Jiang
  - School of Automotive Studies, Tongji University, Shanghai 201804, China
- Jiajia Wang
  - School of Automotive Studies, Tongji University, Shanghai 201804, China
- Zhihao Li
  - School of Automotive Studies, Tongji University, Shanghai 201804, China
- Xiaoxian Zhang
  - iSOFT Infrastructure Software Co., Ltd., Shanghai 200125, China
3
Mostafa N, Kotb Y, Al-Arnaout Z, Alabed S, Shdefat AY. Replicating File Segments between Multi-Cloud Nodes in a Smart City: A Machine Learning Approach. Sensors (Basel) 2023;23:4639. PMID: 37430552; DOI: 10.3390/s23104639.
Abstract
The design and management of smart cities and the IoT is a multidimensional problem, and cloud and edge computing management is one of its dimensions. Because of the problem's complexity, resource sharing is a vital component: when it is enhanced, the performance of the whole system improves. Research on data access and storage in multi-clouds and edge servers can broadly be classified into data centers and computational centers. The main aim of data centers is to provide services for accessing, sharing and modifying large databases, whereas the aim of computational centers is to provide services for sharing resources. Present and future distributed applications need to deal with very large, multi-petabyte datasets and increasing numbers of associated users and resources. The emergence of IoT-based multi-cloud systems as a potential solution for large computational and data-management problems has initiated significant research activity in the area. Given the considerable increase in data production and data sharing within scientific communities, the need for improvements in data access and data availability cannot be overlooked. It can be argued that current approaches to large-dataset management do not solve all problems associated with big data, and the heterogeneity and veracity of big data require careful management. One issue in managing big data in a multi-cloud system is the scalability and expandability of the system under consideration. Data replication ensures server load balancing, data availability and improved data access time. The proposed model minimises the cost of data services by minimising a cost function that takes storage cost, host access cost and communication cost into consideration; the relative weights of the components are learned from history and differ from one cloud to another.
The model ensures that data are replicated in a way that increases availability while decreasing the overall cost of data storage and access time. Using the proposed model avoids the overheads of traditional full-replication techniques, and the model is mathematically proven to be sound and valid.
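The weighted cost function described in this abstract can be illustrated with a minimal sketch. This is a toy model under stated assumptions: a simple weighted sum over per-node storage, access and communication costs, with hand-picked weights; the paper's actual cost function and weight-learning procedure are not reproduced here.

```python
def replica_cost(node: dict, weights: tuple) -> float:
    """Weighted-sum cost of hosting a replica on `node`.
    `weights` holds the relative importance of (storage, access, comm);
    in the paper's setting these weights are learned per cloud."""
    w_storage, w_access, w_comm = weights
    return (w_storage * node["storage"]
            + w_access * node["access"]
            + w_comm * node["comm"])

def best_replica_site(nodes: list, weights: tuple) -> dict:
    """Pick the candidate node that minimises the cost function."""
    return min(nodes, key=lambda n: replica_cost(n, weights))
```

With equal weights, a node whose three cost components sum to the smallest total is chosen; changing the weights shifts the choice, which is the point of learning them per cloud.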
Affiliation(s)
- Nour Mostafa
  - College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
- Yehia Kotb
  - College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
- Zakwan Al-Arnaout
  - College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
- Samer Alabed
  - Biomedical Engineering Department, School of Applied Medical Sciences, German Jordanian University, Amman 11180, Jordan
- Ahmed Younes Shdefat
  - College of Engineering and Technology, American University of the Middle East, Egaila 54200, Kuwait
4
Shieh MZ, Lin YB, Hsu YJ. VerificationTalk: A Verification and Security Mechanism for IoT Applications. Sensors (Basel) 2021;21:7449. PMID: 34833525; PMCID: PMC8619704; DOI: 10.3390/s21227449.
Abstract
An Internet of Things (IoT) application typically involves implementations in both the device domain and the network domain. In this two-domain environment, application developers may implement the wrong network functions and/or connect IoT devices that should never be linked, resulting in the execution of wrong operations on network functions. To resolve these issues, we propose the VerificationTalk mechanism to prevent inappropriate IoT application deployment. VerificationTalk consists of two subsystems: BigraphTalk, which verifies IoT device configuration, and AFLtalk, which validates the network functions. VerificationTalk conducts anomaly detection both online, using a runtime monitor, and offline, using American Fuzzy Lop (AFL). The runtime monitor is capable of intercepting potentially harmful data targeting IoT devices. When VerificationTalk detects errors, it provides feedback for debugging. VerificationTalk also assists in building secure IoT applications by identifying security loopholes in network applications. With an appropriate design of the IoTtalk execution engine, the testing capacity of AFLtalk is three times that of traditional AFL approaches.
Affiliation(s)
- Min-Zheng Shieh (corresponding author)
  - Information Technology Service Center, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
- Yi-Bing Lin
  - Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
  - Miin Wu School of Computing, National Cheng Kung University, Tainan 701401, Taiwan
  - College of Humanities and Sciences, China Medicine University, Taichung 406040, Taiwan
  - Department of Computer Science and Information Engineering, Asia University, Taichung 413305, Taiwan
  - College of Artificial Intelligence and Green Energy, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
- Yin-Jui Hsu
  - Institute of Network Engineering, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
5
Affiliation(s)
- Carlos Gershenson
  - Departamento de Ciencias de la Computación, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas, Universidad Nacional Autónoma de Mexico, Mexico, Mexico
  - Centro de Ciencias de la Complejidad, Universidad Nacional Autónoma de Mexico, Mexico, Mexico
  - Lakeside Labs GmbH, Klagenfurt, Austria
- Daniel Polani
  - School of Engineering and Computer Science, University of Hertfordshire, Hatfield, United Kingdom
- Georg Martius
  - Autonomous Learning Group, Max Planck Institute for Intelligent Systems, Tübingen, Germany
6
Santone A, Belfiore MP, Mercaldo F, Varriano G, Brunese L. On the Adoption of Radiomics and Formal Methods for COVID-19 Coronavirus Diagnosis. Diagnostics (Basel) 2021;11:293. PMID: 33673394; PMCID: PMC7917767; DOI: 10.3390/diagnostics11020293.
Abstract
Considering the current pandemic caused by the spread of the novel coronavirus, there is an urgent need for methods to quickly and automatically diagnose infection. To assist pathologists and radiologists in detecting the novel coronavirus, in this paper we propose a two-tiered method based on formal methods (to the best of the authors' knowledge, never previously applied in this context), which aims to (i) detect whether the patient's lungs are healthy or present a generic pulmonary infection and (ii), when the first tier detects a generic pulmonary disease, identify whether the patient under analysis is affected by the novel Coronavirus disease. The proposed approach relies on the extraction of radiomic features from medical images and on the generation of a formal model that can be automatically checked using the model checking technique. We perform an experimental analysis on a set of computed tomography medical images obtained by the authors, achieving an accuracy higher than 81% in disease detection.
Affiliation(s)
- Antonella Santone
  - Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
- Maria Paola Belfiore
  - Department of Precision Medicine, University of Campania “Luigi Vanvitelli”, 80138 Napoli, Italy
- Francesco Mercaldo
  - Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
- Giulia Varriano
  - Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
- Luca Brunese
  - Department of Medicine and Health Sciences “Vincenzo Tiberio”, University of Molise, 86100 Campobasso, Italy
7
Sung K, Min KW, Choi J, Kim BC. A Formal and Quantifiable Log Analysis Framework for Test Driving of Autonomous Vehicles. Sensors (Basel) 2020;20:1356. PMID: 32121632; PMCID: PMC7085529; DOI: 10.3390/s20051356.
Abstract
We propose a log analysis framework for test driving of autonomous vehicles. A vehicle's log is a fundamental source for detecting and analyzing events during driving. Dumped logs are, however, usually mixed and fragmented, since they are generated concurrently by many modules such as sensors, actuators and programs. This makes it hard to analyze them to discover latent errors that could arise from complex chain reactions among those modules. Our framework provides a logging architecture based on formal specifications, which hierarchically organizes the logs to establish a priori relationships between them; algorithmic or implementation errors can then be detected by examining a posteriori relationships. However, a test under a particular set of parameters, a so-called oracle test, does not necessarily trigger latent violations of the relationships. Our framework remedies this by adopting metamorphic testing to quantitatively verify the formal specification. As a working proof, we define three metamorphic relations critical for testing autonomous vehicles and verify them quantitatively based on our logging system.
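The metamorphic-testing idea used here can be illustrated with a minimal sketch. The relation below is invented for illustration and is not one of the paper's three relations: a follow-up test case is derived from an existing log, and the check is a relation between the two outputs rather than a per-case expected value (the missing "oracle").

```python
def average_speed(log: list) -> float:
    """Average speed from a log of (timestamp_s, position_m) records."""
    (t0, x0), (t1, x1) = log[0], log[-1]
    return (x1 - x0) / (t1 - t0)

def mr_time_shift(log: list, dt: float) -> bool:
    """Metamorphic relation: translating every timestamp by dt must
    leave the computed speed unchanged. No expected speed is needed;
    only the relation between the original and follow-up runs is checked."""
    shifted = [(t + dt, x) for (t, x) in log]
    return abs(average_speed(log) - average_speed(shifted)) < 1e-9
```

A violation of such a relation on real logs points to a latent error in the logging or analysis pipeline even when no single test case has a known correct answer.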
Affiliation(s)
- Kyungbok Sung
  - Autonomous Driving Intelligence Research Section, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea
- Kyoung-Wook Min
  - Autonomous Driving Intelligence Research Section, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea
- Jeongdan Choi
  - Autonomous Driving Intelligence Research Section, Artificial Intelligence Research Laboratory, Electronics and Telecommunications Research Institute, Daejeon 34129, Korea
- Byung-Cheol Kim (corresponding author; Tel.: +82-31-8075-1608)
  - School of Software Engineering, Joongbu University-Goyang, Goyang 10279, Korea
8
White N, Matthews S, Chapman R. Formal verification: will the seedling ever flower? Philos Trans A Math Phys Eng Sci 2017;375:20150402. PMID: 28871051; PMCID: PMC5597725; DOI: 10.1098/rsta.2015.0402.
Abstract
In one sense, formal specification and verification have been highly successful: techniques have been developed in pioneering academic research, transferred to software companies through training and partnerships, and successfully deployed in systems with national significance. Altran UK has been in the vanguard of this movement. This paper summarizes some of our key deployments of formal techniques over the past 20 years, including both security- and safety-critical systems. The impact of formal techniques, however, remains within an industrial niche, and while government and suppliers across industry search for solutions to the problems of poor-quality software, the wider software industry remains resistant to adoption of this proven solution. We conclude by reflecting on some of the challenges we face as a community in ensuring that formal techniques achieve their true potential impact on society. This article is part of the themed issue 'Verified trustworthy software systems'.
Affiliation(s)
- Neil White
  - Altran UK, 22 St Lawrence Street, Bath BA1 1AN, UK
9
Fisher K, Launchbury J, Richards R. The HACMS program: using formal methods to eliminate exploitable bugs. Philos Trans A Math Phys Eng Sci 2017;375:20150401. PMID: 28871050; PMCID: PMC5597724; DOI: 10.1098/rsta.2015.0401.
Abstract
For decades, formal methods have offered the promise of verified software that does not have exploitable bugs. Until recently, however, it has not been possible to verify software of sufficient complexity to be useful. Recently, that situation has changed. seL4 is an open-source operating system microkernel efficient enough to be used in a wide range of practical applications. Its designers proved it to be fully functionally correct, ensuring the absence of buffer overflows, null pointer exceptions, use-after-free errors, etc., and guaranteeing integrity and confidentiality. The CompCert verifying C compiler maps source C programs to provably equivalent assembly language, ensuring the absence of exploitable bugs in the compiler. A number of factors have enabled this revolution, including faster processors, increased automation, more extensive infrastructure, specialized logics and the decision to co-develop code and correctness proofs rather than verify existing artefacts. In this paper, we explore the promise and limitations of current formal-methods techniques. We discuss these issues in the context of DARPA's HACMS program, which had as its goal the creation of high-assurance software for vehicles, including quadcopters, helicopters and automobiles. This article is part of the themed issue 'Verified trustworthy software systems'.
Affiliation(s)
- Kathleen Fisher
  - Department of Computer Science, Tufts University, Medford, MA, USA
10
Hunt WA, Kaufmann M, Moore JS, Slobodova A. Industrial hardware and software verification with ACL2. Philos Trans A Math Phys Eng Sci 2017;375:20150399. PMID: 28871049; PMCID: PMC5597723; DOI: 10.1098/rsta.2015.0399.
Abstract
The ACL2 theorem prover has seen sustained industrial use since the mid-1990s. Companies that have used ACL2 regularly include AMD, Centaur Technology, IBM, Intel, Kestrel Institute, Motorola/Freescale, Oracle and Rockwell Collins. This paper introduces ACL2 and focuses on how and why ACL2 is used in industry. ACL2 is well suited to industrial application across numerous software and hardware systems because it is an integrated programming/proof environment supporting a subset of the ANSI standard Common Lisp programming language. As a programming language, ACL2 permits the coding of efficient and robust programs; as a prover, ACL2 can be fully automatic but provides many features permitting domain-specific human-supplied guidance at various levels of abstraction. ACL2 specifications and models often serve as efficient execution engines for the modelled artefacts while permitting formal analysis and proof of properties. Crucially, ACL2 also provides support for the development and verification of other formal analysis tools. However, ACL2 did not find its way into industrial use merely because of its technical features. The core ACL2 user/development community has a shared vision of making mechanized verification routine when appropriate and has been committed to this vision for the quarter century since the Computational Logic, Inc., Verified Stack. The community has focused on demonstrating the viability of the tool by taking on industrial projects (often at the expense of not being able to publish much). This article is part of the themed issue 'Verified trustworthy software systems'.
Affiliation(s)
- Warren A Hunt
  - Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Matt Kaufmann
  - Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- J Strother Moore
  - Department of Computer Science, University of Texas at Austin, Austin, TX, USA
- Anna Slobodova
  - Centaur Technology, Inc., 7600-C N. Capital of Texas Hwy, Suite 300, Austin, TX 78731, USA
11
Appel AW, Beringer L, Chlipala A, Pierce BC, Shao Z, Weirich S, Zdancewic S. Position paper: the science of deep specification. Philos Trans A Math Phys Eng Sci 2017;375:20160331. PMID: 28871056; PMCID: PMC5597730; DOI: 10.1098/rsta.2016.0331.
Abstract
We introduce our efforts within the project 'The science of deep specification' to work out the key formal underpinnings of industrial-scale formal specifications of software and hardware components, anticipating a world where large verified systems are routinely built out of smaller verified components that are also used by many other projects. We identify an important class of specification that has already been used in a few experiments that connect strong component-correctness theorems across the work of different teams. To help popularize the unique advantages of that style, we dub it deep specification, and we say that it encompasses specifications that are rich, two-sided, formal and live (terms that we define in the article). Our core team is developing a proof-of-concept system (based on the Coq proof assistant) whose specification and verification work is divided across largely decoupled subteams at our four institutions, encompassing hardware microarchitecture, compilers, operating systems and applications, along with cross-cutting principles and tools for effective specification. We also aim to catalyse interest in the approach, not just by basic researchers but also by users in industry. This article is part of the themed issue 'Verified trustworthy software systems'.
Affiliation(s)
- Andrew W Appel
  - Department of Computer Science, Princeton University, Princeton, NJ 08540, USA
- Lennart Beringer
  - Department of Computer Science, Princeton University, Princeton, NJ 08540, USA
- Adam Chlipala
  - Computer Science and Artificial Intelligence Laboratory, MIT, 77 Mass Avenue, 32-G842, Cambridge, MA 02139, USA
- Benjamin C Pierce
  - Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Zhong Shao
  - Department of Computer Science, Yale University, New Haven, CT 06520, USA
- Stephanie Weirich
  - Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
- Steve Zdancewic
  - Department of Computer and Information Science, University of Pennsylvania, Philadelphia, PA 19104, USA
12
Testa A, Cinque M, Coronato A, Augusto JC. A Formal Methodology to Design and Deploy Dependable Wireless Sensor Networks. Sensors (Basel) 2016;17:19. PMID: 28025568; PMCID: PMC5298592; DOI: 10.3390/s17010019.
Abstract
Wireless Sensor Networks (WSNs) are increasingly being adopted in critical applications, where verifying the correct operation of sensor nodes is a major concern. Undesired events may undermine the mission of a WSN, so their effects need to be properly assessed before deployment, to obtain a good level of expected performance, and during operation, to avoid dangerous unexpected results. In this paper, we propose a methodology that aims to assess and improve the dependability level of WSNs by means of an event-based formal verification technique. The methodology includes a process to guide designers towards the realization of a dependable WSN and a tool ("ADVISES") to simplify its adoption. The tool is applicable to homogeneous WSNs with static routing topologies. It allows the automatic generation of formal specifications used to check correctness properties and evaluate dependability metrics, at design time and at runtime, for WSNs where an acceptable percentage of faults can be defined. At runtime, we can check the behavior of the WSN against the results obtained at design time and detect sudden, unexpected failures in order to trigger recovery procedures. The effectiveness of the methodology is shown in two proof-of-concept case studies, illustrating how the tool helps drive design choices and check the correctness properties of the WSN at runtime. Although the method scales up to very large WSNs, its applicability may be limited by state-space explosion in the reasoning model, which must be addressed by partitioning large topologies into sub-topologies.
Affiliation(s)
- Marcello Cinque
  - Dipartimento di Ingegneria Elettrica e delle Tecnologie dell'Informazione, University of Naples "Federico II", Naples 80125, Italy
- Juan Carlos Augusto
  - Department of Computer Science and R.G. on Development of Intelligent Environments, Middlesex University of London, London NW4 2SH, UK
13
Silva LC, Almeida HO, Perkusich A, Perkusich M. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems. Sensors (Basel) 2015;15:27625-70. PMID: 26528982; DOI: 10.3390/s151127625.
Abstract
Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.
14
Anastasio TJ. Computational identification of potential multitarget treatments for ameliorating the adverse effects of amyloid-β on synaptic plasticity. Front Pharmacol 2014;5:85. PMID: 24847263; PMCID: PMC4021136; DOI: 10.3389/fphar.2014.00085.
Abstract
The leading hypothesis on Alzheimer's disease (AD) is that it is caused by buildup of the peptide amyloid-β (Aβ), which initially causes dysregulation of synaptic plasticity and eventually causes destruction of synapses and neurons. Pharmacological efforts to limit Aβ buildup have proven ineffective, and this raises the twin challenges of understanding the adverse effects of Aβ on synapses and of suggesting pharmacological means to prevent them. The purpose of this paper is to initiate a computational approach to understanding the dysregulation by Aβ of synaptic plasticity and to offer suggestions whereby combinations of various chemical compounds could be arrayed against it. This data-driven approach confronts the complexity of synaptic plasticity by representing findings from the literature in a coarse-grained manner, and focuses on understanding the aggregate behavior of many molecular interactions. The same set of interactions is modeled by two different computer programs, each written using a different programming modality: one imperative, the other declarative. Both programs compute the same results over an extensive test battery, providing an essential crosscheck. Then the imperative program is used for the computationally intensive purpose of determining the effects on the model of every combination of ten different compounds, while the declarative program is used to analyze model behavior using temporal logic. Together these two model implementations offer new insights into the mechanisms by which Aβ dysregulates synaptic plasticity and suggest many drug combinations that potentially may reduce or prevent it.
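The crosscheck strategy described here, implementing one specification twice in different programming modalities and comparing the outputs over a test battery, can be sketched in miniature. The toy "activation" specification below is invented for illustration and is not the paper's synaptic-plasticity model.

```python
from functools import reduce

# Two independent implementations of the same toy specification: the
# running "activation level" after a sequence of signed interaction
# weights, clamped to [0, 1] at each step, starting from 0.5.

def activation_imperative(weights: list) -> float:
    """Imperative modality: explicit loop with mutable state."""
    level = 0.5
    for w in weights:
        level = min(1.0, max(0.0, level + w))
    return level

def activation_declarative(weights: list) -> float:
    """Declarative/functional modality: a fold with no mutation."""
    clamp_step = lambda level, w: min(1.0, max(0.0, level + w))
    return reduce(clamp_step, weights, 0.5)

def crosscheck(battery: list) -> bool:
    """True iff both implementations agree on every case in the battery.
    A disagreement flags a bug in at least one implementation."""
    return all(
        abs(activation_imperative(c) - activation_declarative(c)) < 1e-12
        for c in battery
    )
```

The value of the technique is that the two programs share only the specification, so a coding error in one modality is unlikely to be reproduced in the other.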
Affiliation(s)
- Thomas J Anastasio
- Department of Molecular and Integrative Physiology, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
15
Anastasio TJ. Computational search for hypotheses concerning the endocannabinoid contribution to the extinction of fear conditioning. Front Comput Neurosci 2013; 7:74. [PMID: 23761759 PMCID: PMC3669745 DOI: 10.3389/fncom.2013.00074] [Received: 01/10/2013] [Accepted: 05/17/2013]
Abstract
Fear conditioning, in which a cue is conditioned to elicit a fear response, and extinction, in which a previously conditioned cue no longer elicits a fear response, depend on neural plasticity occurring within the amygdala. Projection neurons in the basolateral amygdala (BLA) learn to respond to the cue during fear conditioning, and they mediate fear responding by transferring cue signals to the output stage of the amygdala. Some BLA projection neurons retain their cue responses after extinction. Recent work shows that activation of the endocannabinoid system is necessary for extinction, and it leads to long-term depression (LTD) of the GABAergic synapses that inhibitory interneurons make onto BLA projection neurons. Such GABAergic LTD would enhance the responses of the BLA projection neurons that mediate fear responding, so it would seem to oppose, rather than promote, extinction. To address this paradox, a computational analysis of two well-known conceptual models of amygdaloid plasticity was undertaken. The analysis employed exhaustive state-space search conducted within a declarative programming environment. The analysis reveals that GABAergic LTD actually increases the number of synaptic strength configurations that achieve extinction while preserving the cue responses of some BLA projection neurons in both models. The results suggest that GABAergic LTD helps the amygdala retain cue memory during extinction even as the amygdala learns to suppress the previously conditioned response. The analysis also reveals which features of both models are essential for their ability to achieve extinction with some cue memory preservation, and suggests experimental tests of those features.
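The exhaustive state-space search this abstract describes — enumerating every configuration of discretized synaptic strengths and counting those that satisfy a behavioral criterion — can be sketched as follows. The weights, discretization, and extinction-with-memory criterion here are illustrative toys, not the paper's models.

```python
from itertools import product

# Enumerate every configuration of three discretized synaptic strengths and
# count those meeting a hypothetical "extinction with cue memory" criterion.
LEVELS = range(3)  # each synaptic strength discretized as 0, 1, or 2

def extinction_with_memory(w_cue, w_inhib, w_ext):
    """Toy criterion: net fear output is suppressed, yet the cue input to the
    projection neuron remains strong (cue memory is preserved)."""
    fear_output = w_cue - w_inhib - w_ext
    return fear_output <= 0 and w_cue >= 2

configs = [c for c in product(LEVELS, repeat=3) if extinction_with_memory(*c)]
print(len(configs))  # number of strength configurations satisfying the criterion
```

The paper's finding — that GABAergic LTD enlarges this set of satisfying configurations — corresponds to rerunning such a count under modified constraints and comparing the totals.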
Affiliation(s)
- Thomas J Anastasio
- Computational Neurobiology Laboratory, Department of Molecular and Integrative Physiology, Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL, USA
16
Anastasio TJ. Exploring the contribution of estrogen to amyloid-Beta regulation: a novel multifactorial computational modeling approach. Front Pharmacol 2013; 4:16. [PMID: 23459573 PMCID: PMC3585711 DOI: 10.3389/fphar.2013.00016] [Received: 12/06/2012] [Accepted: 01/31/2013]
Abstract
According to the amyloid hypothesis, Alzheimer Disease results from the accumulation beyond normative levels of the peptide amyloid-β (Aβ). Perhaps because of its pathological potential, Aβ and the enzymes that produce it are heavily regulated by the molecular interactions occurring within cells, including neurons. This regulation involves a highly complex system of intertwined normative and pathological processes, and the sex hormone estrogen contributes to it by influencing the Aβ-regulation system at many different points. Owing to its high complexity, Aβ regulation and the contribution of estrogen are very difficult to reason about. This report describes a computational model of the contribution of estrogen to Aβ regulation that provides new insights and generates experimentally testable and therapeutically relevant predictions. The computational model is written in the declarative programming language known as Maude, which allows not only simulation but also analysis of the system using temporal logic. The model illustrates how the various effects of estrogen could work together to reduce Aβ levels, or prevent them from rising, in the presence of pathological triggers. The model predicts that estrogen itself should be more effective in reducing Aβ than agonists of estrogen receptor α (ERα), and that agonists of ERβ should be ineffective. The model shows how estrogen itself could dramatically reduce Aβ, and predicts that non-steroidal anti-inflammatory drugs should provide a small additional benefit. It also predicts that certain compounds, but not others, could augment the reduction in Aβ due to estrogen. The model is intended as a starting point for a computational/experimental interaction in which model predictions are tested experimentally, the results are used to confirm, correct, and expand the model, new predictions are generated, and the process continues, producing a model of ever increasing explanatory power and predictive value.
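The kind of temporal-logic query a Maude model supports — "from the initial state, is a state satisfying property P eventually reachable?" — reduces to reachability search over the model's transition system. A minimal sketch, assuming a hypothetical discretized Aβ level as the state and two illustrative transitions (an estrogen pathway lowering Aβ, a pathological trigger raising it):

```python
from collections import deque

def successors(abeta, estrogen_on):
    """Transitions of a hypothetical discretized Abeta level (0..3)."""
    nxt = set()
    if estrogen_on and abeta > 0:
        nxt.add(abeta - 1)   # estrogen pathways lower Abeta by one level
    if abeta < 3:
        nxt.add(abeta + 1)   # a pathological trigger raises Abeta by one level
    return nxt

def eventually(initial, prop, estrogen_on):
    """Breadth-first reachability: does some path reach a state where prop holds?"""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        s = frontier.popleft()
        if prop(s):
            return True
        for t in successors(s, estrogen_on):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return False

print(eventually(3, lambda a: a <= 1, estrogen_on=True))   # True
print(eventually(3, lambda a: a <= 1, estrogen_on=False))  # False
```

Maude's built-in search and LTL model checker perform this exploration over the full rewriting-logic model; the sketch only conveys the shape of the question being asked.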
Affiliation(s)
- Thomas J Anastasio
- Computational Neurobiology Laboratory, Beckman Institute, Department of Molecular and Integrative Physiology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
17
Bolton ML, Bass EJ, Siminiceanu RI. Generating Phenotypical Erroneous Human Behavior to Evaluate Human-automation Interaction Using Model Checking. Int J Hum Comput Stud 2012; 70:888-906. [PMID: 23105914 PMCID: PMC3480525 DOI: 10.1016/j.ijhcs.2012.05.010]
Abstract
Breakdowns in complex systems often occur as a result of system elements interacting in unanticipated ways. In systems with human operators, human-automation interaction associated with both normative and erroneous human behavior can contribute to such failures. Model-driven design and analysis techniques provide engineers with formal methods tools and techniques capable of evaluating how human behavior can contribute to system failures. This paper presents a novel method for automatically generating task analytic models encompassing both normative and erroneous human behavior from normative task models. The generated erroneous behavior is capable of replicating Hollnagel's zero-order phenotypes of erroneous action for omissions, jumps, repetitions, and intrusions. Multiple phenotypical acts can occur in sequence, thus allowing for the generation of higher order phenotypes. The task behavior model pattern capable of generating erroneous behavior can be integrated into a formal system model so that system safety properties can be formally verified with a model checker. This allows analysts to prove that a human-automation interactive system (as represented by the model) will or will not satisfy safety properties with both normative and generated erroneous human behavior. We present benchmarks related to the size of the state space and verification time of models to show how the erroneous human behavior generation process scales. We demonstrate the method with a case study: the operation of a radiation therapy machine. A potential problem resulting from a generated erroneous human action is discovered. A design intervention is presented that prevents this problem from occurring. We discuss how our method could be used to evaluate larger applications and recommend future paths of development.
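The generation step this abstract describes — deriving erroneous variants of a normative action sequence — can be sketched for three of Hollnagel's zero-order phenotypes (omission, repetition, intrusion; jumps, i.e. reorderings, would be produced analogously). The task steps and the intrusion action below are hypothetical, not taken from the paper's case study.

```python
# Toy generator for zero-order phenotypes of erroneous action: from one
# normative sequence, derive variants with omissions, repetitions, and
# intrusions. Jumps (reorderings) are omitted for brevity.
NORMATIVE = ["select_mode", "enter_dose", "confirm", "start"]
INTRUSIONS = ["cancel"]  # hypothetical off-task action

def zero_order_phenotypes(seq):
    variants = []
    for i in range(len(seq)):
        variants.append(seq[:i] + seq[i + 1:])           # omission of step i
        variants.append(seq[:i + 1] + seq[i:])           # repetition of step i
        for act in INTRUSIONS:
            variants.append(seq[:i] + [act] + seq[i:])   # intrusion before step i
    return variants

errs = zero_order_phenotypes(NORMATIVE)
print(len(errs))  # 4 omissions + 4 repetitions + 4 intrusions = 12
```

In the paper's method the analogous pattern is embedded in the formal task model itself, so the model checker explores all such variants (and their compositions into higher-order phenotypes) automatically rather than as an explicit list.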
Affiliation(s)
- Matthew L. Bolton
- San José State University Research Foundation, NASA Ames Research Center, Moffett Field, CA, USA
- Ellen J. Bass
- Department of Systems and Information Engineering, University of Virginia, Charlottesville, VA, USA
18
Bolton ML, Bass EJ. Formally verifying human-automation interaction as part of a system model: limitations and tradeoffs. Innov Syst Softw Eng 2010; 6:219-231. [PMID: 21572930 PMCID: PMC3092438 DOI: 10.1007/s11334-010-0129-9]
Abstract
Both the human factors engineering (HFE) and formal methods communities are concerned with improving the design of safety-critical systems. This work discusses a modeling effort that leveraged methods from both fields to perform formal verification of human-automation interaction with a programmable device. This effort utilizes a system architecture composed of independent models of the human mission, human task behavior, human-device interface, device automation, and operational environment. The goals of this architecture were to allow HFE practitioners to perform formal verifications of realistic systems that depend on human-automation interaction in a reasonable amount of time using representative models, intuitive modeling constructs, and decoupled models of system components that could be easily changed to support multiple analyses. This framework was instantiated using a patient controlled analgesia pump in a two-phase process where models in each phase were verified using a common set of specifications. The first phase focused on the mission, human-device interface, and device automation, and included a simple, unconstrained human task behavior model. The second phase replaced the unconstrained task model with one representing normative pump programming behavior. Because models produced in the first phase were too large for the model checker to verify, a number of model revisions were undertaken that affected the goals of the effort. While the use of human task behavior models in the second phase helped mitigate model complexity, verification time increased. Additional modeling tools and technological developments are necessary for model checking to become a more usable technique for HFE.
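The architectural idea in this abstract — a device model composed with either an unconstrained human model or a normative task model, where the task model prunes the reachable state space — can be sketched with a heavily simplified, hypothetical pump-like device (not the actual patient controlled analgesia pump model):

```python
# Minimal sketch: a device model composed with two alternative human models.
# States are (mode, dose); the device and its actions are hypothetical toys.
def device_step(state, action):
    mode, dose = state
    if action == "program" and mode == "idle":
        return ("programming", dose)
    if action == "set_dose" and mode == "programming":
        return ("programming", dose + 1) if dose < 2 else state
    if action == "start" and mode == "programming":
        return ("running", dose)
    if action == "stop" and mode == "running":
        return ("idle", dose)
    return state  # the action has no effect in this mode

ACTIONS = ["program", "set_dose", "start", "stop"]

def reachable(allowed):
    """Explore the composed system; `allowed(state)` is the human model,
    returning the actions the human may take in a given device state."""
    seen, frontier = {("idle", 0)}, [("idle", 0)]
    while frontier:
        s = frontier.pop()
        for a in allowed(s):
            t = device_step(s, a)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

def normative_allowed(state):
    """Normative task model: never start before a dose has been set."""
    mode, dose = state
    if mode == "idle":
        return ["program"]
    if mode == "programming":
        return ["set_dose"] if dose == 0 else ["set_dose", "start"]
    return ["stop"]

unconstrained = reachable(lambda s: ACTIONS)  # human may do anything, anytime
normative = reachable(normative_allowed)
print(len(unconstrained), len(normative))  # the task model prunes the state space
```

Here the normative model excludes the "running with dose 0" state that the unconstrained model reaches, mirroring the paper's observation that substituting a task behavior model for an unconstrained one helps mitigate state-space complexity.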