1. Decentralised Global Service Discovery for the Internet of Things. Sensors (Basel) 2024; 24:2196. PMID: 38610407; PMCID: PMC11014208; DOI: 10.3390/s24072196.
Abstract
The Internet of Things (IoT) consists of millions of devices deployed over hundreds of thousands of different networks, providing an ever-expanding resource to improve our understanding of and interactions with the physical world. Global service discovery is key to realising the opportunities of the IoT, spanning disparate networks and technologies to enable the sharing, discovery, and utilisation of services and data outside of the context in which they are deployed. In this paper, we present Decentralised Service Registries (DSRs), a novel trustworthy decentralised approach to global IoT service discovery and interaction, building on DSF-IoT to allow users to simply create and share public and private service registries, to register and query for relevant services, and to access both current and historical data published by the services they discover. In DSR, services are registered and discovered using signed objects that are cryptographically associated with the registry service, linked into a signature chain, and stored and queried for using a novel verifiable DHT overlay. In contrast to existing centralised and decentralised approaches, DSRs decouple registries from supporting infrastructure, provide privacy and multi-tenancy, and support the verification of registry entries and history, service information, and published data to mitigate risks of service impersonation or the alteration of data. This decentralised approach is demonstrated through the creation and use of a DSR to register and search for real-world IoT devices and their data, and is qualified using a scalable cluster-based testbench for the high-fidelity emulation of peer-to-peer applications. DSRs are evaluated against existing approaches, demonstrating their novelty and utility in addressing key IoT challenges and enabling the sharing, discovery, and use of IoT services.
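The signature-chain idea in this entry can be sketched in a few lines. This is an illustrative sketch only: it uses an HMAC as a stand-in for the registry's public-key signatures, the payloads and key below are invented, and the real DSR additionally stores and queries these objects over a verifiable DHT overlay.

```python
import hashlib
import hmac

def entry_digest(prev_digest: bytes, payload: bytes) -> bytes:
    """Each registry entry commits to its predecessor, forming a chain."""
    return hashlib.sha256(prev_digest + payload).digest()

def sign(key: bytes, digest: bytes) -> bytes:
    # HMAC stands in for the registry's public-key signature.
    return hmac.new(key, digest, hashlib.sha256).digest()

def build_chain(key, payloads):
    digest, chain = b"\x00" * 32, []
    for payload in payloads:
        digest = entry_digest(digest, payload)
        chain.append((payload, digest, sign(key, digest)))
    return chain

def verify_chain(key, chain):
    """Recompute the chain and check every digest and signature."""
    digest = b"\x00" * 32
    for payload, stored_digest, sig in chain:
        digest = entry_digest(digest, payload)
        if stored_digest != digest:
            return False
        if not hmac.compare_digest(sig, sign(key, digest)):
            return False
    return True
```

Because every entry's digest covers all earlier entries, altering any historical registry entry invalidates every later digest, so tampering with registry history is detectable from the chain alone.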
2. Leveraging extreme scale analytics, AI and digital twins for maritime digitalization: the VesselAI architecture. Front Big Data 2023; 6:1220348. PMID: 37576115; PMCID: PMC10413560; DOI: 10.3389/fdata.2023.1220348.
Abstract
The modern maritime industry is producing data at an unprecedented rate. The capturing and processing of such data is integral to creating added value for maritime companies and other maritime stakeholders, but their true potential can only be unlocked by innovative technologies such as extreme-scale analytics, AI, and digital twins, given that existing systems and traditional approaches are unable to effectively collect, store, and process big data. Such innovative systems are not only projected to deal effectively with maritime big data but also to create various tools that can assist maritime companies in an evolving and complex environment that requires maritime vessels to increase their overall safety and performance and reduce their consumption and emissions. An integral challenge for developing these next-generation maritime applications lies in effectively combining and incorporating the aforementioned innovative technologies in an integrated system. In this context, the current paper presents the architecture of VesselAI, an EU-funded project that aims to develop, validate, and demonstrate a novel holistic framework based on a combination of state-of-the-art HPC, Big Data, and AI technologies, capable of performing extreme-scale and distributed analytics for fuelling the next-generation digital twins in maritime applications and beyond.
3. Local Approval Processes in a Federated and Distributed Research Infrastructure - Lessons Learned from the AKTIN-Project. Stud Health Technol Inform 2023; 302:362-363. PMID: 37203685; DOI: 10.3233/shti230141.
Abstract
The AKTIN Emergency Department Registry is a federated, distributed health data network that uses a two-step process for local approval of received data queries and result transmission. For distributed research infrastructures currently being established, we present lessons learned from five years of operation.
4. Development and Analysis of a Distributed Leak Detection and Localisation System for Crude Oil Pipelines. Sensors (Basel) 2023; 23:4298. PMID: 37177501; PMCID: PMC10181705; DOI: 10.3390/s23094298.
Abstract
Crude oil leakages and spills (OLS) are some of the problems attributed to pipeline failures in the oil and gas industry's midstream sector. Consequently, they are monitored via several leakage detection and localisation techniques (LDTs), comprising classical methods and, recently, Internet of Things (IoT)-based systems built on wireless sensor networks (WSNs). Although the latter techniques have proven more efficient, they are susceptible to other types of failures, such as high false-alarm rates or a single point of failure (SPOF), due to their centralised implementations. Therefore, in this work, we present a hybrid distributed leakage detection and localisation technique (HyDiLLEch), which combines multiple classical LDTs. The technique is implemented in two versions, a single-hop and a double-hop version. The evaluation of the results is based on the resilience to SPOFs, the accuracy of detection and localisation, and communication efficiency. The results obtained from the placement strategy and the distributed spatial data correlation include increased sensitivity to leakage detection and localisation and the elimination of the SPOF associated with centralised LDTs, by increasing the number of nodes detecting and localising (NDL) a leakage to four and six in the single-hop and double-hop versions, respectively. In addition, leakages are localised with an accuracy of 0 to 32 m by nodes physically close to the leakage points, while keeping the communication overhead minimal.
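One of the classical LDTs that such hybrid techniques combine is negative-pressure-wave localisation from arrival-time differences at two sensors bracketing the leak. The sketch below is illustrative only: the sensor spacing, wave speed, and timing values are invented, and HyDiLLEch itself fuses several classical techniques across neighbouring WSN nodes.

```python
def npw_leak_position(span_m: float, wave_speed_ms: float, dt_s: float) -> float:
    """Locate a leak between sensors A and B a distance span_m apart.

    The leak emits a pressure wave in both directions; with
    dt_s = (arrival time at A) - (arrival time at B), the leak position x
    measured from A satisfies x/v - (span - x)/v = dt, so
    x = (span + v * dt) / 2.
    """
    x = (span_m + wave_speed_ms * dt_s) / 2.0
    if not 0.0 <= x <= span_m:
        raise ValueError("arrival-time difference inconsistent with sensor span")
    return x
```

For example, with sensors 1000 m apart and a wave speed of 1000 m/s, a wave arriving at A half a second earlier than at B (dt = -0.5 s) places the leak 250 m from A.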
5. Emergent regulation of ant foraging frequency through a computationally inexpensive forager movement rule. eLife 2023; 12:e77659. PMID: 37067884; PMCID: PMC10110237; DOI: 10.7554/eLife.77659.
Abstract
Ant colonies regulate foraging in response to their collective hunger, yet the mechanism behind this distributed regulation remains unclear. Previously, by imaging food flow within ant colonies we showed that the frequency of foraging events declines linearly with colony satiation (Greenwald et al., 2018). Our analysis implied that as a forager distributes food in the nest, two factors affect her decision to exit for another foraging trip: her current food load and its rate of change. Sensing these variables can be attributed to the forager's individual cognitive ability. Here, new analyses of the foragers' trajectories within the nest imply a different way to achieve the observed regulation. Instead of an explicit decision to exit, foragers merely tend toward the depth of the nest when their food load is high and toward the nest exit when it is low. Thus, the colony shapes the forager's trajectory by controlling her unloading rate, while she senses only her current food load. Using an agent-based model and mathematical analysis, we show that this simple mechanism robustly yields emergent regulation of foraging frequency. These findings demonstrate how the embedding of individuals in physical space can reduce their cognitive demands without compromising their computational role in the group.
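The movement rule described above can be caricatured in a one-dimensional simulation. This is an illustrative sketch, not the authors' model: the nest geometry, drift probabilities, and unloading rates are invented. It shows the emergent effect that a colony which empties its forager quickly (a hungry colony) induces far more foraging trips than one that unloads her slowly, even though the forager senses only her current food load.

```python
import random

def forage_trips(unload_rate, steps=20000, nest_depth=50, seed=1):
    """Count exits of one forager following the load-dependent drift rule:
    drift into the nest while crop load is high, toward the exit when low.
    The colony controls unload_rate; the forager senses only her load."""
    rng = random.Random(seed)
    pos, load, trips = 0, 1.0, 0        # pos 0 = nest exit
    for _ in range(steps):
        load = max(0.0, load - unload_rate)       # colony unloads the forager
        drift = 1 if load > 0.5 else -1           # inward if loaded, else outward
        step = drift if rng.random() < 0.75 else -drift  # noisy biased walk
        pos = min(nest_depth, max(0, pos + step))
        if pos == 0 and load <= 0.5:              # reached exit with low load
            trips += 1                            # ...so she forages again
            load = 1.0                            # and returns with a full crop
    return trips
```

Running this with a high unloading rate (hungry colony) versus a low one (sated colony) reproduces the qualitative result: foraging frequency falls as the colony satiates, with no explicit exit decision by the forager.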
6. Smart Transportation: An Overview of Technologies and Applications. Sensors (Basel) 2023; 23:3880. PMID: 37112221; PMCID: PMC10143476; DOI: 10.3390/s23083880.
Abstract
As technology continues to evolve, our society is becoming enriched with more intelligent devices that help us perform our daily activities more efficiently and effectively. One of the most significant technological advancements of our time is the Internet of Things (IoT), which interconnects various smart devices (such as smart mobiles, intelligent refrigerators, smartwatches, smart fire alarms, smart door locks, and many more), allowing them to communicate with each other and exchange data seamlessly. We now use IoT technology to carry out our daily activities, for example, transportation. In particular, the field of smart transportation has intrigued researchers due to its potential to revolutionize the way we move people and goods. IoT provides drivers in a smart city with many benefits, including traffic management, improved logistics, efficient parking systems, and enhanced safety measures. Smart transportation is the integration of all these benefits into applications for transportation systems. However, as a way of further improving the benefits provided by smart transportation, other technologies have been explored, such as machine learning, big data, and distributed ledgers. Some examples of their application are the optimization of routes, parking, street lighting, accident prevention, detection of abnormal traffic conditions, and maintenance of roads. In this paper, we aim to provide a detailed understanding of the developments in the applications mentioned earlier and examine current research that bases its applications on these sectors. We aim to conduct a self-contained review of the different technologies used in smart transportation today and their respective challenges. Our methodology encompassed identifying and screening articles on smart transportation technologies and their applications.
To identify articles addressing our topic of review, we searched four major databases: IEEE Xplore, ACM Digital Library, Science Direct, and Springer. We then examined the communication mechanisms, architectures, and frameworks that enable these smart transportation applications and systems. We also explored the communication protocols enabling smart transportation, including Wi-Fi, Bluetooth, and cellular networks, and how they contribute to seamless data exchange. We delved into the different architectures and frameworks used in smart transportation, including cloud computing, edge computing, and fog computing. Lastly, we outlined current challenges in the smart transportation field and suggested potential future research directions, examining data privacy and security issues, network scalability, and interoperability between different IoT devices.
7. A Novel OpenBCI Framework for EEG-Based Neurophysiological Experiments. Sensors (Basel) 2023; 23:3763. PMID: 37050823; PMCID: PMC10098804; DOI: 10.3390/s23073763.
Abstract
An Open Brain-Computer Interface (OpenBCI) provides unparalleled freedom and flexibility through open-source hardware and firmware at a low-cost implementation. It exploits robust hardware platforms and powerful software development kits to create customized drivers with advanced capabilities. Still, several restrictions may significantly reduce the performance of OpenBCI, including the need for more effective communication between computers and peripheral devices and for more flexible, rapid configuration under specific neurophysiological protocols. This paper describes a flexible and scalable OpenBCI framework for electroencephalographic (EEG) data experiments using the Cyton acquisition board with updated drivers to maximize the hardware benefits of ADS1299 platforms. The framework handles distributed computing tasks and supports multiple sampling rates, communication protocols, free electrode placement, and single-marker synchronization. As a result, the OpenBCI system delivers real-time feedback and controlled execution of EEG-based clinical protocols for implementing the steps of neural recording, decoding, stimulation, and real-time analysis. In addition, the system incorporates automatic background configuration and user-friendly widgets for stimuli delivery. A motor imagery task is used to test the closed-loop BCI, confirming real-time streaming within the required latency and jitter ranges. Therefore, the presented framework offers a promising solution for tailored neurophysiological data processing.
8. Multitier Web System Reliability: Identifying Causative Metrics and Analyzing Performance Anomaly Using a Regression Model. Sensors (Basel) 2023; 23:1919. PMID: 36850514; PMCID: PMC9965331; DOI: 10.3390/s23041919.
Abstract
With the development of the Internet and communication technologies, the types of services provided by multitier Web systems are becoming more diverse and complex compared to those of the past. Ensuring continuous availability of business services is crucial for multitier Web system providers, as service performance issues immediately affect customer experience and satisfaction. Large companies attempt to monitor the system performance indicator (SPI) that summarizes the status of multitier Web systems to detect performance anomalies at an early stage. However, current anomaly detection methods are designed to monitor a single specific SPI. Moreover, existing approaches consider performance anomaly detection and its root cause analysis separately, thereby aggravating the burden of resolving the performance anomaly. To support the system provider in diagnosing the performance anomaly, we propose an advanced causative metric analysis (ACMA) framework. First, we identify 191 performance metrics (PMs) closely related to the target SPI. Among these PMs, the ACMA determines 62 vital PMs that have the most influence on the variance of the target SPI using several statistical methods. Then, we implement a performance anomaly detection model to identify the causative metrics (CMs) among the vital PMs using random forest regression. Even if the target SPI changes, our detection model does not require any change in its model structure and can derive the PMs closely related to the new target SPI. Based on our experiments, wherein we applied the ACMA to the business services in an enterprise system, we observed that the proposed ACMA could correctly detect various performance anomalies and their CMs.
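The metric-ranking step can be illustrated with a much simpler stand-in for the paper's random-forest model: ranking candidate PMs by the absolute Pearson correlation of each metric with the target SPI. The metric names and values below are invented, and this sketch does not replicate the ACMA's actual statistical pipeline.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def rank_causative_metrics(spi, metrics):
    """Rank candidate PMs by |correlation| with the SPI series.
    A toy stand-in for the paper's random-forest importance ranking."""
    scores = {name: abs(pearson(vals, spi)) for name, vals in metrics.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

A metric that tracks the SPI closely (e.g. a hypothetical `cpu_util` series) will rank ahead of an uncorrelated one, which is the intuition behind selecting vital PMs before the regression step.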
9. A Blockchain-Based End-to-End Data Protection Model for Personal Health Records Sharing: A Fully Homomorphic Encryption Approach. Sensors (Basel) 2022; 23:14. PMID: 36616613; PMCID: PMC9823636; DOI: 10.3390/s23010014.
Abstract
Personal health records (PHR) represent health data managed by a specific individual. Traditional solutions rely on centralized architectures to store and distribute PHR, which are more vulnerable to security breaches. To address such problems, distributed network technologies, including blockchain and distributed hash tables (DHT), are used for processing, storing, and sharing health records. Furthermore, fully homomorphic encryption (FHE) is a set of techniques that allows computation on encrypted data, which can help to protect personal privacy in data sharing. In this context, we propose an architectural model that applies a DHT technique, the InterPlanetary File System (IPFS), and blockchain networks to store and distribute data and metadata separately; two new elements, called the data steward and the shared data vault, are introduced in this regard. These new modules are responsible for segregating responsibilities from health institutions and promoting end-to-end encryption; therefore, a person can manage data encryption and requests for data sharing in addition to restricting access to data for a predefined period. In addition to supporting calculations on encrypted data, our contribution can be summarized as follows: (i) mitigation of risk to personal privacy by reducing the use of unencrypted data, and (ii) improvement of semantic interoperability among health institutions by using distributed networks for standardized PHR. We evaluated performance and storage occupation using a database with 1.3 million COVID-19 registries, which showed that combining FHE with distributed networks could redefine e-health paradigms.
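The end-to-end idea that an intermediary can aggregate readings it cannot read can be illustrated with a toy additively homomorphic scheme (a keyed masking trick, not real FHE): ciphertexts are summed, and only the key holder decrypts the total. The key, modulus, and blood-pressure readings below are invented for the example; the paper relies on genuine FHE schemes.

```python
import random

MOD = 2**32

def enc(m: int, k: int) -> int:
    return (m + k) % MOD           # additively masked "ciphertext"

def dec(c: int, k: int, count: int = 1) -> int:
    return (c - count * k) % MOD   # remove one mask per summed ciphertext

k = random.Random(42).randrange(MOD)             # patient-held secret key
readings = (120, 80, 135)                        # e.g. systolic pressure samples
ciphertexts = [enc(m, k) for m in readings]      # what an intermediary stores
# The intermediary sums ciphertexts without learning any reading;
# only the key holder can decrypt the aggregate.
total = dec(sum(ciphertexts) % MOD, k, count=len(ciphertexts))
```

The intermediary sees only uniformly masked values, yet the decrypted aggregate equals the sum of the plaintexts, which is the property (in vastly stronger form) that FHE brings to the architecture.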
10. Communication Efficient Algorithms for Bounding and Approximating the Empirical Entropy in Distributed Systems. Entropy (Basel) 2022; 24:1611. PMID: 36359705; PMCID: PMC9689552; DOI: 10.3390/e24111611.
Abstract
The empirical entropy is a key statistical measure of data frequency vectors, enabling one to estimate how diverse the data are. From the computational point of view, it is important to quickly compute, approximate, or bound the entropy. In a distributed system, the representative ("global") frequency vector is the average of the "local" frequency vectors, each residing in a distinct node. Typically, the trivial solution of aggregating the local vectors and computing their average incurs a huge communication overhead. Hence, the challenge is to approximate, or bound, the entropy of the global vector, while reducing communication overhead. In this paper, we develop algorithms which achieve this goal.
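The flavour of such bounds can be illustrated with the concavity of entropy: if every node sends just one number (the entropy of its local frequency vector) instead of the whole vector, the average of those numbers already lower-bounds the entropy of the global (average) vector, by Jensen's inequality. The local vectors below are invented; the paper's algorithms obtain tighter approximations with modest communication.

```python
from math import log

def entropy(p):
    """Shannon entropy (in nats) of a frequency vector."""
    return -sum(x * log(x) for x in p if x > 0.0)

def average(vectors):
    """Component-wise average of the local frequency vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

local_vectors = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]   # one vector per node
h_global = entropy(average(local_vectors))             # needs every full vector
h_bound = sum(entropy(p) for p in local_vectors) / len(local_vectors)  # one float/node
```

Here `h_global` requires shipping whole vectors to an aggregator, while `h_bound` needs one scalar per node, illustrating the communication/precision trade-off the paper studies.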
11. Cloud Servers: Resource Optimization Using Different Energy Saving Techniques. Sensors (Basel) 2022; 22:8384. PMID: 36366082; PMCID: PMC9659174; DOI: 10.3390/s22218384.
Abstract
Currently, researchers are working to contribute to the emerging fields of cloud computing, edge computing, and distributed systems, with a major area of interest being to examine and understand their performance. Globally leading companies, such as Google, Amazon, ONLIVE, Giaki, and eBay, are deeply concerned about the impact of energy consumption. These cloud computing companies use huge data centers, consisting of virtual computers positioned worldwide, which necessitate exceptionally high power costs to maintain. The increased requirement for energy consumption in IT firms has posed many challenges for cloud computing companies pertinent to power expenses. Energy utilization depends on numerous aspects, for example, the service level agreement, the technique for choosing virtual machines, the applied optimization strategies and policies, and the kind of workload. The present paper addresses energy-saving challenges with the assistance of dynamic voltage and frequency scaling (DVFS) techniques for gaming data centers, and evaluates DVFS against non-power-aware and static threshold detection techniques. The findings will help service providers meet quality-of-service and quality-of-experience requirements while fulfilling service level agreements. For this purpose, the CloudSim platform is applied to a scenario in which game traces are employed as the workload for analyzing the procedure. The findings evidenced that a selection of good-quality techniques can help gaming servers conserve energy expenditure and sustain the best quality of service for consumers located worldwide. The originality of this research lies in examining which procedure performs best (for example, dynamic, static, or non-power-aware). The findings validate that the DVFS method uses less energy, causes fewer service level agreement violations, and delivers better quality of service and experience than static threshold consolidation or the non-power-aware technique.
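The core DVFS argument can be made concrete with the textbook CMOS dynamic-power model P = C·V²·f: a job takes longer at a lower frequency, but energy per cycle falls with V², so scaling voltage down alongside frequency saves energy overall. The capacitance, voltage, and frequency figures below are invented, and CloudSim's power models are considerably richer than this sketch.

```python
def dynamic_power(c_eff, voltage, freq_hz):
    """Classical CMOS dynamic power model: P = C_eff * V^2 * f."""
    return c_eff * voltage**2 * freq_hz

def energy_for_work(cycles, c_eff, voltage, freq_hz):
    """Energy = P * t with t = cycles / f, so E = C_eff * V^2 * cycles."""
    return dynamic_power(c_eff, voltage, freq_hz) * (cycles / freq_hz)

# Halving frequency alone leaves energy per cycle unchanged; lowering the
# voltage with it (roughly proportionally, as DVFS allows) cuts energy
# quadratically for the same amount of work.
full = energy_for_work(1e9, 1e-9, 1.2, 2e9)   # full speed: 1.2 V at 2 GHz
dvfs = energy_for_work(1e9, 1e-9, 0.6, 1e9)   # scaled: 0.6 V at 1 GHz
```

With voltage halved along with frequency, the same 10⁹ cycles cost a quarter of the energy, which is why DVFS beats the non-power-aware baseline whenever the workload tolerates the longer runtime.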
12. Multi-Agent Systems for Resource Allocation and Scheduling in a Smart Grid. Sensors (Basel) 2022; 22:8099. PMID: 36365795; PMCID: PMC9656614; DOI: 10.3390/s22218099.
Abstract
Multi-Agent Systems (MAS) have drawn interest from computer science and civil engineering researchers as a way to tackle complex problems by subdividing them into smaller assignments. Each agent has individual responsibilities and selects the best action based on its activity history, interactions with neighbouring agents, and objective. MAS are used to model complex systems, smart grids, and computer networks; yet despite their extensive use, they still face challenges in agent coordination, security, and work distribution. This study examines MAS definitions, characteristics, applications, issues, communications, and evaluation, together with a classification of MAS applications and difficulties and accompanying research references, and should be a helpful resource for MAS researchers and practitioners. The use of MAS in controlling smart grids is examined, including energy management, energy marketing, pricing, energy scheduling, reliability, network security, fault-handling capability, agent-to-agent communication, SG-electric cars, SG building energy systems, and soft grids. More than 100 MAS-based smart grid control publications have been reviewed, categorized, and compiled.
13. A Distributed Method for Self-Calibration of Magnetoresistive Angular Position Sensor within a Servo System. Sensors (Basel) 2022; 22:5974. PMID: 36015735; PMCID: PMC9412424; DOI: 10.3390/s22165974.
Abstract
Magnetoresistive angular position sensors are, besides Hall-effect sensors, especially suitable for use within servo systems due to their reliability, longevity, and resilience to unfavorable environmental conditions. The proposed distributed method for self-calibration of a magnetoresistive angular position sensor uses data collected during shaft movement at the highest allowed speed for the identification of the measurement process model parameters. Data acquisition and initial data processing are realized as part of the control process of the servo system, whereas the identification of the model parameters is a service of an application server. Minimization of the sum of algebraic distances between the sensor readings and the parametrized model is employed for the identification of the linear compensation parameters, whereas the average shaft rotation speed is used as a high-precision reference for the identification of the harmonic compensation parameters. The proposed method, in addition to fast convergence, increases measurement accuracy by an order of magnitude. The experimentally obtained measurement uncertainty was better than 0.5°, with a residual variance of less than 0.02°, comparable to the sensor resolution.
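The reference idea (constant-speed rotation makes the true angle sweep uniformly, so channel imperfections can be identified against it) can be sketched for the simplest imperfection, sin/cos channel offsets: over whole revolutions the true sin and cos average to zero, so the channel means expose the offsets. The offsets, sample count, and noise-free channels below are invented; the paper's method also identifies gain and harmonic terms and runs the identification on an application server.

```python
from math import sin, cos, atan2, pi

def calibrate_offsets(sin_ch, cos_ch):
    """Over whole revolutions at constant speed, the true sin/cos average
    to zero, so the channel means estimate the DC offsets."""
    n = len(sin_ch)
    return sum(sin_ch) / n, sum(cos_ch) / n

def angle(s, c, s_off=0.0, c_off=0.0):
    """Angle from (optionally offset-compensated) sin/cos channels."""
    return atan2(s - s_off, c - c_off)

def max_angle_error(true, sin_ch, cos_ch, s_off=0.0, c_off=0.0):
    """Worst-case wrapped angle error over all samples."""
    worst = 0.0
    for a, s, c in zip(true, sin_ch, cos_ch):
        est = angle(s, c, s_off, c_off)
        worst = max(worst, abs(((est - a + pi) % (2 * pi)) - pi))
    return worst

# One revolution sampled at constant speed, with invented channel offsets.
N, S_OFF, C_OFF = 360, 0.05, -0.03
true = [2 * pi * k / N for k in range(N)]
sin_ch = [sin(a) + S_OFF for a in true]
cos_ch = [cos(a) + C_OFF for a in true]
s_est, c_est = calibrate_offsets(sin_ch, cos_ch)
```

Feeding the estimated offsets back into the angle computation removes the offset-induced error entirely in this idealized setting, mirroring (in miniature) the accuracy gain the paper reports.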
14. Unified InterPlanetary Smart Parking Network for Maximum End-User Flexibility. Sensors (Basel) 2021; 22:221. PMID: 35009764; PMCID: PMC8749736; DOI: 10.3390/s22010221.
Abstract
Technological breakthroughs have offered innovative solutions for smart parking systems, whether based on computer vision, smart sensors, gap sensing, or other variations. We now have a high degree of confidence in spot classification or object detection at the parking level. The only thing missing is end-user satisfaction, as users are forced to use multiple interfaces to find a parking spot in a geographical area. We propose a trustless federated model that adds a layer of abstraction between the technology and the human interface to facilitate user adoption and responsible data acquisition by leveraging a federated identity protocol based on Zero Knowledge Cryptography. No central authority is needed for the model to work; thus, it is trustless. Chained trust relationships generate a graph of trustworthiness, which is necessary to bridge the gap from individual smart parking programs to an intelligent system that enables smart cities. With the help of Zero Knowledge Cryptography, end users can attain a high degree of mobility and anonymity while using a diverse array of service providers. From an investor's standpoint, the usage of IPFS (InterPlanetary File System) lowers operational costs, increases service resilience, and decentralizes the network of smart parking solutions. A peer-to-peer content addressing system ensures that the data are moved close to the users without deploying expensive cloud-based infrastructure. The result is a macro system with independent actors that feed each other data and expose information in a common protocol. Different client implementations can offer the same experience, even though the parking providers use different technologies. We call this InterPlanetary Smart Parking Architecture IPSPAN.
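The content-addressing property the abstract leans on can be sketched in a few lines: the address of a datum is a digest of the datum itself, so any peer can verify what it serves. This sketch uses a plain SHA-256 hex digest as the address; real IPFS CIDs are multihash-encoded and carry codec metadata, and the parking payload below is invented.

```python
import hashlib

def content_address(data: bytes) -> str:
    """IPFS-style content addressing (sketch): the address is a digest
    of the data itself, so retrieval is self-verifying."""
    return hashlib.sha256(data).hexdigest()

store = {}  # stands in for a peer-to-peer block store

def put(data: bytes) -> str:
    cid = content_address(data)
    store[cid] = data
    return cid

def get(cid: str) -> bytes:
    data = store[cid]
    # Any peer can serve the block; the requester re-hashes to verify it.
    assert content_address(data) == cid
    return data
```

Because identical data yield identical addresses, duplicate records from different parking providers deduplicate automatically, and untrusted peers cannot substitute altered data without changing the address.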
15. Infrastructure as Software in Micro Clouds at the Edge. Sensors (Basel) 2021; 21:7001. PMID: 34770308; PMCID: PMC8588097; DOI: 10.3390/s21217001.
Abstract
Edge computing offers cloud services closer to data sources and end-users, forming the foundation for novel applications. Infrastructure deployment is taking off, bringing new challenges: how can geo-distribution be used properly, and how can the advantages of having resources at a specific location be harnessed? New real-time applications require a multi-tier infrastructure, preferably doing data preprocessing locally but using the cloud for heavy workloads. We present a model able to organize geo-distributed nodes into micro clouds dynamically, allowing resources to be reorganized to best serve population needs. Such elasticity is achieved by relying on cloud organization principles, adapted for a different environment. The desired state is specified descriptively, and the system handles the rest. As such, infrastructure is abstracted to the software level, thus enabling “infrastructure as software” at the edge. We argue for blending the proposed model into existing tools, allowing cloud providers to offer future micro clouds as a service.
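The idea that the desired state is specified descriptively while the system handles the rest is, in essence, a reconciliation loop. The sketch below is an invented minimal illustration (the node names and roles are hypothetical), not the paper's actual model: it diffs the declared micro-cloud layout against the observed one and emits the corrective actions.

```python
def reconcile(desired: dict, actual: dict) -> list:
    """Declarative 'infrastructure as software' sketch: compute the actions
    that drive the actual micro-cloud layout toward the desired state."""
    actions = []
    for node, role in desired.items():
        if actual.get(node) != role:        # missing node or wrong role
            actions.append(("assign", node, role))
    for node in actual:
        if node not in desired:             # node no longer needed
            actions.append(("release", node))
    return actions
```

Running such a loop repeatedly lets geo-distributed nodes be regrouped into micro clouds simply by editing the declared state, with no imperative orchestration steps written by the operator.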
16. Examining the Performance of Fog-Aided, Cloud-Centered IoT in a Real-World Environment. Sensors (Basel) 2021; 21:6950. PMID: 34770256; PMCID: PMC8587892; DOI: 10.3390/s21216950.
Abstract
The fog layer provides substantial benefits in cloud-based IoT applications because it can serve as an aggregation layer and it moves the computation resources nearer to the IoT devices; however, it is important to ensure adequate performance is achieved in such applications, as the devices usually communicate frequently and authenticate with the cloud. This can cause performance and availability issues, which can be dangerous in critical applications such as in the healthcare sector. In this paper, we analyze the efficacy of the fog layer in different architectures in a real-world environment by examining performance metrics for the cloud and fog layers using different numbers of IoT devices. We also implement the fog layer using two methods to determine whether different fog implementation frameworks can affect the performance. The results show that including a fog layer with semi-heavyweight computation capability results in higher capital costs, although in the long run resources, time, and money are saved. This study can serve as a reference for fundamental fog computing concepts. It can also be used to walk practitioners through different implementation frameworks of fog-aided IoT and to show tradeoffs in order to inform when to use each implementation framework based on one's objectives.
17. Integrating multiple blockchains to support distributed personal health records. Health Informatics J 2021; 27:14604582211007546. PMID: 33853403; DOI: 10.1177/14604582211007546.
Abstract
Blockchain technologies have evolved in recent years, as has the use of personal health record (PHR) data. Initially, only the financial domain benefited from Blockchain technologies; owing to their efficient distribution format and data integrity guarantees, however, these technologies have demonstrated potential in other areas, such as PHR data in the healthcare domain. Applying Blockchain to PHR data faces different challenges than applying it to financial transactions via crypto-currency. We propose and discuss an architectural model of a Blockchain platform named "OmniPHR Multi-Blockchain" to address key challenges associated with the geographical distribution of PHR data. We analyzed the current literature to identify critical barriers faced when applying Blockchain technologies to distribute PHR data, and we propose an architecture model and describe a prototype developed to evaluate and address these challenges. The OmniPHR Multi-Blockchain architecture yielded promising results for scenarios involving distributed PHR data, demonstrating a viable and beneficial alternative for processing geographically distributed PHR data with performance comparable to conventional methods. Blockchain implementation tools have evolved, but the healthcare domain still faces many challenges concerning distribution and interoperability. This study empirically demonstrates an alternative architecture that enables the distributed processing of PHR data via Blockchain technologies.
|
18
|
Deep-Framework: A Distributed, Scalable, and Edge-Oriented Framework for Real-Time Analysis of Video Streams. SENSORS 2021; 21:s21124045. [PMID: 34208327 PMCID: PMC8231160 DOI: 10.3390/s21124045] [Received: 05/12/2021] [Revised: 06/08/2021] [Accepted: 06/09/2021]
Abstract
Edge computing is the best approach for meeting the exponential demand and the real-time requirements of many video analytics applications. Since most of the recent advances in extracting information from images and video rely on computation-heavy deep learning algorithms, there is a growing need for solutions that allow the deployment and use of new models on scalable and flexible edge architectures. In this work, we present Deep-Framework, a novel open-source framework for developing edge-oriented real-time video analytics applications based on deep learning. Deep-Framework has a scalable multi-stream architecture based on Docker and abstracts away from the user the complexity of cluster configuration, orchestration of services, and GPU resource allocation. It provides Python interfaces for integrating deep learning models developed with the most popular frameworks, as well as high-level APIs based on standard HTTP and WebRTC interfaces for consuming the extracted video data on clients running in browsers or any other web-based platform.
|
19
|
Abstract
A robot swarm is a decentralized system characterized by locality of sensing and communication, self-organization, and redundancy. These characteristics allow robot swarms to achieve scalability, flexibility, and fault tolerance, properties that are especially valuable in the context of simultaneous localization and mapping (SLAM), specifically in unknown environments that evolve over time. So far, research in SLAM has mainly focused on single-robot and centralized multi-robot systems, i.e., non-swarm systems. While these systems can produce accurate maps, they are typically not scalable, cannot easily adapt to unexpected changes in the environment, and are prone to failure in hostile environments. Swarm SLAM is a promising approach, as it could leverage the decentralized nature of a robot swarm to achieve scalable, flexible, and fault-tolerant exploration and mapping. However, at the moment of writing, swarm SLAM is a rather novel idea, and the field lacks definitions, frameworks, and results. In this work, we present the concept of swarm SLAM and its constraints, from both a technical and an economic point of view. In particular, we highlight the main challenges of swarm SLAM in gathering, sharing, and retrieving information. We also discuss the strengths and weaknesses of this approach compared to traditional multi-robot SLAM. We believe that swarm SLAM will be particularly useful for producing abstract maps, such as topological or simple semantic maps, and for operating under time or cost constraints.
|
20
|
Plant tropisms as a window on plant computational processes. THE NEW PHYTOLOGIST 2021; 229:1911-1916. [PMID: 33219510 DOI: 10.1111/nph.17091] [Received: 08/04/2020] [Accepted: 10/10/2020]
Abstract
Plants are living information-processing organisms with highly adaptive behavior, allowing them to prosper in a harsh and fluctuating environment in spite of being sessile. Lacking a central nervous system, plants are distributed systems orchestrating complex computational processes performed at the tissue level. Here I consider plant tropisms as a useful input-output system boasting a robust mathematical description, naturally permitting a dialogue between mathematical modeling and biological observations. I propose tropisms as an ideal framework for the study of plant computational processes, allowing us to infer the relationship between observed tropic responses and known stimuli. I concentrate on macroscopic models, and elucidate this approach by presenting recent examples focusing on computational processes involved at different hierarchical levels of interactions: a plant's interaction with itself and its internal state, with the abiotic environment, and with neighboring plants.
|
21
|
DDR-coin: An Efficient Probabilistic Distributed Trigger Counting Algorithm. SENSORS 2020; 20:s20226446. [PMID: 33187349 PMCID: PMC7696785 DOI: 10.3390/s20226446] [Received: 08/21/2020] [Revised: 10/30/2020] [Accepted: 11/05/2020]
Abstract
A distributed trigger counting (DTC) problem is to detect w triggers in a distributed system consisting of n nodes. DTC algorithms can be used in monitoring systems that use sensors to detect a significant global change. When designing an efficient DTC algorithm, the following goals should be considered: minimizing the total number of messages exchanged for counting triggers, and distributing the communication load evenly among nodes. In this paper, we present an efficient DTC algorithm, DDR-coin (Deterministic Detection of Randomly generated coins). The message complexity of DDR-coin, i.e., the total number of exchanged messages, is O(n log_n(w/n)) on average, and its MaxRcvLoad, the maximum number of messages any single node receives while detecting w triggers, is O(log_n(w/n)) on average. DDR-coin is not an exact algorithm: even though w triggers are received by the n nodes, it can fail to raise an alarm, albeit with negligible probability. However, DDR-coin is more efficient than exact DTC algorithms on average, and the gap widens for larger n. We implemented a prototype of the proposed scheme using NetLogo 6.1.1 and confirmed that the experimental results are close to our mathematical analysis. Compared with the previous schemes TreeFill, CoinRand, and RingRand, DDR-coin shows smaller message complexity and MaxRcvLoad.
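The probabilistic flavor of trigger counting can be illustrated with a much-simplified sketch. This is an illustrative stand-in, not the DDR-coin algorithm itself; the parameters w, k, the oversampling factor, and the coin probability are invented for the example. Each of the w triggers independently emits a "coin" message with a small probability, and a coordinator raises an alarm once k coins have arrived, so only on the order of k messages are exchanged instead of w, at the cost of a small failure probability:

```python
import random

def simulate_coin_counting(w, k, oversample=2.0, trials=500, seed=1):
    """Toy sketch: each of the w triggers emits a 'coin' message with
    probability oversample * k / w; the coordinator alarms on the k-th
    coin. Returns (fraction of trials in which the alarm fired,
    mean number of messages exchanged per trial)."""
    rng = random.Random(seed)
    p = oversample * k / w
    fired, msgs = 0, 0
    for _ in range(trials):
        # Count how many of the w triggers emitted a coin in this trial.
        coins = sum(1 for _ in range(w) if rng.random() < p)
        msgs += coins
        if coins >= k:
            fired += 1
    return fired / trials, msgs / trials

rate, mean_msgs = simulate_coin_counting(w=5000, k=50)
print(rate, mean_msgs)  # the alarm fires in virtually every trial with ~100 messages, far fewer than w=5000
```

With oversampling, the chance that fewer than k coins appear among w triggers is negligible, which mirrors the "fails only with negligible probability" property claimed for DDR-coin, though the real algorithm distributes the counting across nodes rather than using a single coordinator.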
|
22
|
Scalability and cost-effectiveness analysis of whole genome-wide association studies on Google Cloud Platform and Amazon Web Services. J Am Med Inform Assoc 2020; 27:1425-1430. [PMID: 32719837 PMCID: PMC7534581 DOI: 10.1093/jamia/ocaa068] [Received: 01/23/2020] [Revised: 03/20/2020] [Accepted: 04/17/2020]
Abstract
Objective: Advancements in human genomics have generated a surge of available data, fueling the growth and accessibility of databases for more comprehensive, in-depth genetic studies. Methods: We provide a straightforward and innovative methodology to optimize cloud configuration for conducting genome-wide association studies. We utilized Spark clusters on both Google Cloud Platform and Amazon Web Services, as well as Hail (http://doi.org/10.5281/zenodo.2646680), for analysis and exploration of genomic variant datasets. Results: Comparative evaluation of numerous cloud-based cluster configurations demonstrates a successful and unprecedented compromise between speed and cost for performing genome-wide association studies on 4 distinct whole-genome sequencing datasets. Results are consistent across the 2 cloud providers and could be highly useful for accelerating research in genetics. Conclusions: We present a timely piece for one of the most frequently asked questions when moving to the cloud: what is the trade-off between speed and cost?
|
23
|
A Generalized Threat Model for Visual Sensor Networks. SENSORS 2020; 20:s20133629. [PMID: 32605274 PMCID: PMC7374518 DOI: 10.3390/s20133629] [Received: 05/15/2020] [Revised: 06/16/2020] [Accepted: 06/23/2020]
Abstract
Today, visual sensor networks (VSNs) are pervasively used in smart environments such as intelligent homes, industrial automation, and surveillance. A major concern in the use of sensor networks in general is their reliability in the presence of security threats and cyberattacks. Compared to traditional networks, sensor networks typically face numerous additional vulnerabilities due to their dynamic and distributed network topology, resource-constrained nodes, potentially large network scale, and lack of global network knowledge. These vulnerabilities allow attackers to launch more severe and complicated attacks. Since the state of the art lacks studies on vulnerabilities in VSNs, a thorough investigation of attacks that can be launched against VSNs is required. This paper presents a general threat model for the attack surfaces of visual sensor network applications and their components. The outlined threats are classified by the STRIDE taxonomy, and the underlying weaknesses are classified using CWE, a common taxonomy for security weaknesses.
|
24
|
An overview of blockchain science and engineering. ROYAL SOCIETY OPEN SCIENCE 2020; 7:200168. [PMID: 32742687 PMCID: PMC7353972 DOI: 10.1098/rsos.200168] [Received: 01/30/2020] [Accepted: 05/12/2020]
Abstract
This is the preface to a special issue in the journal Royal Society Open Science, themed around blockchain technology. Since this is still an emergent and interdisciplinary field, we first provide a gentle introduction into that larger topic. Then, we discuss why this technology has been criticized for not being energy-efficient. Next, we provide an analysis of recent developments in blockchain research that may help with making blockchain technology truly sustainable. Finally, we highlight some of the contributions made by papers in this special issue.
|
25
|
Multi-GPU, Multi-Node Algorithms for Acceleration of Image Reconstruction in 3D Electrical Capacitance Tomography in Heterogeneous Distributed System. SENSORS 2020; 20:s20020391. [PMID: 32284509 PMCID: PMC7013565 DOI: 10.3390/s20020391] [Received: 09/28/2019] [Revised: 12/19/2019] [Accepted: 01/04/2020]
Abstract
Electrical capacitance tomography (ECT) is a non-invasive visualization technique that can be used for industrial process monitoring. However, acquiring images through 3D ECT often requires performing time-consuming, complex computations on large matrices. Therefore, a new parallel approach for 3D ECT image reconstruction is proposed, based on multi-GPU, multi-node algorithms in a heterogeneous distributed system, which speeds up the required data processing. The paper presents a distributed measurement system with a new framework for parallel computing and a special plugin dedicated to ECT. The computing system architecture and its main features are described, and both data distribution and transmission between the computing nodes are discussed. System performance was measured using the LBP and Landweber reconstruction algorithms, which were implemented as part of the ECT plugin. Applying the framework with a new network communication layer reduced data transfer times significantly and improved the overall system efficiency.
|
26
|
Evaluation of Three Different Approaches for Automated Time Delay Estimation for Distributed Sensor Systems of Electric Vehicles. SENSORS 2020; 20:s20020351. [PMID: 31936363 PMCID: PMC7013948 DOI: 10.3390/s20020351] [Received: 12/03/2019] [Revised: 12/24/2019] [Accepted: 12/26/2019]
Abstract
Deviations between High-Voltage (HV) current measurements and the corresponding real values provoke serious problems in the power trains of Electric Vehicles (EVs). Examples of these problems include inaccurate performance coordination and unnecessary power limitations during driving or charging. The main reason for the deviations is time delays. By correcting these delays with accurate Time Delay Estimation (TDE), our data show that we can reduce the measurement deviations from 25% of the maximum current to below 5%. In this paper, we present three different approaches for TDE and evaluate all of them with real data from the power trains of EVs. To enable execution on automotive Electronic Control Units (ECUs), our evaluation focuses not only on the accuracy of the TDE but also on computational efficiency. The proposed Linear Regression (LR) approach suffers even from small noise and offsets in the measurement data and is unsuited for our purpose. A better alternative is the Variance Minimization (VM) approach: it is not only more noise-resistant but also very efficient after the first execution. Another interesting approach is Adaptive Filters (AFs), introduced by Emadzadeh et al.; unfortunately, AFs do not reach the accuracy and efficiency of VM in our experiments. Thus, we recommend VM for TDE of HV current signals in the power trains of EVs and present an additional optimization to enable its execution on ECUs.
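The variance-minimization idea can be sketched in a few lines: shift one signal against the other and pick the integer shift that minimizes the variance of their difference. This is a minimal sketch under invented data, not the paper's ECU implementation; the synthetic signal, the 3-sample delay, and the search range are assumptions for illustration:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def estimate_delay(reference, delayed, max_shift):
    """Return the integer shift in [0, max_shift] that minimizes the
    variance of the difference between the reference signal and the
    shifted candidate signal."""
    best_shift, best_var = 0, float("inf")
    for shift in range(max_shift + 1):
        n = len(reference) - shift
        diff = [reference[i] - delayed[i + shift] for i in range(n)]
        v = variance(diff)
        if v < best_var:
            best_shift, best_var = shift, v
    return best_shift

# Synthetic current-like signal, delayed by 3 samples.
signal = [(i * 7) % 13 for i in range(200)]
delayed = [0, 0, 0] + signal[:-3]
print(estimate_delay(signal, delayed, max_shift=10))  # -> 3
```

Once the delay is estimated, the delayed measurement can simply be re-indexed by that shift before comparing it with the reference, which is what makes the approach cheap after the first execution.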
|
27
|
Environmental Monitoring with Distributed Mesh Networks: An Overview and Practical Implementation Perspective for Urban Scenario. SENSORS 2019; 19:s19245548. [PMID: 31888131 PMCID: PMC6960639 DOI: 10.3390/s19245548] [Received: 10/23/2019] [Revised: 12/11/2019] [Accepted: 12/13/2019]
Abstract
Almost inevitable climate change and increasing pollution levels around the world are the most significant drivers of the evolution of environmental monitoring. Recent activities in the field of wireless sensor networks have made tremendous progress over the conventional centralized sensor networks known for decades. However, most systems developed today still face challenges in balancing their flexibility and security. In this work, we provide an overview of environmental monitoring strategies and applications and conclude that the wireless sensor networks of tomorrow will mostly have a distributed nature. Furthermore, we present the results of the developed secure distributed monitoring framework from both hardware and software perspectives. The developed mechanisms allow sensors to communicate in both infrastructure and mesh modes. The system allows each sensor node to act as a relay, which increases failure resistance and improves scalability. Moreover, we employ an authentication mechanism to ensure the transparent migration of nodes between different network segments while maintaining a high level of system security. Finally, we report on real-life deployment results.
|
28
|
Towards Evaluating Proactive and Reactive Approaches on Reorganizing Human Resources in IoT-Based Smart Hospitals. SENSORS 2019; 19:s19173800. [PMID: 31480772 PMCID: PMC6749393 DOI: 10.3390/s19173800] [Received: 07/12/2019] [Revised: 08/26/2019] [Accepted: 08/30/2019]
Abstract
Hospitals play an important role in ensuring proper treatment of human health. One of the problems to be faced is increasingly overcrowded patient care queues, in which patients end up waiting longer without proper treatment for their health problems. The allocation of health professionals in hospital environments is not able to adapt to patient demand: at times, underused rooms have idle professionals while overused rooms have fewer professionals than necessary. Previous works have not solved this problem, since they focus on understanding the evolution of doctor supply and patient demand so as to better adjust one to the other, without proposing concrete techniques for better allocating the available human resources. Moreover, elasticity is one of the most important features of cloud computing, referring to the ability to add or remove resources according to the needs of the application or service. Based on this background, we introduce Elastic allocation of human resources in Healthcare environments (ElHealth), an IoT-focused model able to monitor patients' usage of hospital rooms and adapt these rooms to patient demand. Using reactive and proactive elasticity approaches, ElHealth identifies when a room will have a demand that exceeds its capacity of care and proposes actions to move human resources to adapt to patient demand. Our main contribution is the definition of Human Resources IoT-based Elasticity, an extension of the concept of resource elasticity in Cloud Computing to manage the use of human resources in a healthcare environment, where health professionals are allocated and deallocated according to patient demand. Another contribution is a cost-benefit analysis of reactive and predictive strategies for human resource reorganization. ElHealth was simulated in a hospital environment using data from a Brazilian polyclinic and obtained promising results, decreasing waiting time by up to 96.4% and 96.73% with the reactive and proactive approaches, respectively.
|
29
|
Single-Board-Computer Clusters for Cloudlet Computing in Internet of Things. SENSORS (BASEL, SWITZERLAND) 2019; 19:s19133026. [PMID: 31324039 PMCID: PMC6650845 DOI: 10.3390/s19133026] [Received: 06/03/2019] [Revised: 07/03/2019] [Accepted: 07/08/2019]
Abstract
The number of connected sensors and devices is expected to increase to billions in the near future. However, centralised cloud-computing data centres present various challenges in meeting the requirements inherent to Internet of Things (IoT) workloads, such as low latency, high throughput, and bandwidth constraints. Edge computing is becoming the standard computing paradigm for latency-sensitive, real-time IoT workloads, since it addresses the aforementioned limitations of centralised cloud-computing models. This paradigm relies on bringing computation close to the source of data, which presents serious operational challenges for large-scale cloud-computing providers. In this work, we present an architecture composed of low-cost Single-Board-Computer clusters near data sources and centralised cloud-computing data centres. The proposed cost-efficient model may be employed as an alternative to fog computing to meet real-time IoT workload requirements while maintaining scalability. We include an extensive empirical analysis to assess the suitability of Single-Board-Computer clusters as cost-effective edge-computing micro data centres. Additionally, we compare the proposed architecture with traditional cloudlet and cloud architectures and evaluate them through extensive simulation. We finally show that acquisition costs can be drastically reduced while keeping performance levels in data-intensive IoT use cases.
|
30
|
On Providing Multi-Level Quality of Service for Operating Rooms of the Future. SENSORS (BASEL, SWITZERLAND) 2019; 19:E2303. [PMID: 31109073 PMCID: PMC6566186 DOI: 10.3390/s19102303] [Received: 03/15/2019] [Revised: 04/26/2019] [Accepted: 05/06/2019]
Abstract
The Operating Room (OR) plays an important role in delivering vital medical services to patients in hospitals. Such environments contain several medical devices, equipment, and systems producing valuable information that can be combined for biomedical and surgical workflow analysis. Considering the sensitivity of data from sensors in the OR, the middleware that provides data from these sensors has to respect applications' quality of service (QoS) demands, independently of processing and network loads. In an OR middleware, there are two main bottlenecks that might suffer QoS problems and, consequently, directly impact user experience: (i) simultaneous user applications connecting to the middleware; and (ii) a high number of sensors generating information from the environment. Many middlewares that support QoS have been proposed across many fields; however, to the best of our knowledge, there is no research on this topic for the OR environment. OR environments are characterized by being crowded with people and equipment, some of it specific to such environments, such as mobile X-ray machines. Therefore, this article proposes QualiCare, an adaptable middleware model to provide multi-level QoS, improve user experience, and increase hardware utilization for middlewares in OR environments. Our main contributions are a middleware model and an orchestration engine in charge of changing the middleware behavior to guarantee performance. Results demonstrate that adapting middleware parameters on demand reduces network usage and improves resource consumption while maintaining data provisioning.
|
31
|
Strength of Crowd (SOC)-Defeating a Reactive Jammer in IoT with Decoy Messages. SENSORS 2018; 18:s18103492. [PMID: 30332848 PMCID: PMC6210481 DOI: 10.3390/s18103492] [Received: 09/10/2018] [Revised: 09/27/2018] [Accepted: 10/12/2018]
Abstract
We propose Strength of Crowd (SoC), a distributed Internet of Things (IoT) protocol that guarantees message broadcast from an initiator to all network nodes in the presence of either a reactive or a proactive jammer that targets a variable portion of the radio spectrum. SoC exploits a simple yet innovative and effective idea: nodes not (currently) involved in the broadcast process transmit decoy messages that the jammer cannot distinguish from the real ones. The jammer therefore has to implement a best-effort strategy, jamming as many of the concurrent communications as its frequency/energy budget allows. SoC exploits the inherent parallelism that stems from massive deployments of IoT nodes to guarantee a high number of concurrent communications, exhausting the jammer's capabilities and hence leaving a subset of the communications unjammed. SoC could be adopted in several wireless scenarios; however, we focus on its application to the Wireless Sensor Network (WSN) domain, including IoT, Machine-to-Machine (M2M), and Device-to-Device (D2D) communications, to name a few. In this framework, we provide several contributions: firstly, we show the details of the SoC protocol, as well as its integration with the IEEE 802.15.4-2015 MAC protocol; secondly, we study the broadcast delay to deliver the message to all the nodes in the network; and finally, we run an extensive simulation and experimental campaign to test our solution. We consider the state-of-the-art OpenMote-B experimental platform, adopting the OpenWSN open-source protocol stack. Experimental results confirm the quality and viability of our solution.
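The decoy idea above can be illustrated with a toy simulation (the node counts, jammer budget, and slot model are invented assumptions, not parameters from the paper): a budget-limited jammer must pick targets blindly among indistinguishable real and decoy transmissions, so with enough decoys some real message almost always gets through.

```python
import random

def surviving_real(n_real, n_decoy, budget, rng):
    """One slot: the jammer jams up to `budget` of the n_real + n_decoy
    indistinguishable concurrent transmissions, chosen blindly.
    Returns how many real transmissions escape jamming."""
    total = n_real + n_decoy
    jammed = set(rng.sample(range(total), min(budget, total)))
    # Transmissions 0..n_real-1 are the real ones (unknown to the jammer).
    return sum(1 for i in range(n_real) if i not in jammed)

rng = random.Random(7)
slots = 1000
ok = sum(1 for _ in range(slots) if surviving_real(4, 60, 16, rng) > 0)
print(ok / slots)  # in the vast majority of slots, at least one real message survives
```

With 4 real messages hidden among 60 decoys and a jammer able to hit 16 transmissions per slot, the chance that all real messages are jammed in a given slot is well below 1%, which is the "strength of crowd" effect in miniature.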
|
32
|
Distributed One Time Password Infrastructure for Linux Environments. ENTROPY 2018; 20:e20050319. [PMID: 33265409 PMCID: PMC7512839 DOI: 10.3390/e20050319] [Received: 03/02/2018] [Revised: 04/14/2018] [Accepted: 04/23/2018]
Abstract
Nowadays, a great deal of critical information and many services are hosted on computer systems. Proper access control to these resources is essential to avoid malicious actions that could cause huge losses to home and professional users. Access control systems have evolved from the first password-based systems to modern mechanisms using smart cards, certificates, tokens, biometric systems, etc. However, when designing a system, it is necessary to take into account its particular limitations, such as connectivity, infrastructure, or budget. In addition, one of the main objectives must be to ensure the system's usability, but this property is usually at odds with security; thus, the use of passwords is still common. In this paper, we present a new password-based access control system that aims to improve password security with minimal impact on system usability.
|
33
|
A Framework to Design the Computational Load Distribution of Wireless Sensor Networks in Power Consumption Constrained Environments. SENSORS 2018; 18:s18040954. [PMID: 29570645 PMCID: PMC5949029 DOI: 10.3390/s18040954] [Received: 01/26/2018] [Revised: 03/10/2018] [Accepted: 03/20/2018]
Abstract
In this paper, we present a work based on distributing the computational load among the homogeneous nodes and the Hub/Sink of Wireless Sensor Networks (WSNs). The main contribution of the paper is an early decision-support framework helping WSN designers make decisions about computational load distribution for WSNs where power consumption is a key issue. (When we refer to a "framework" in this work, we mean a support tool for making decisions, in which executive judgment can be included along with the WSN designer's set of mathematical tools; this work shows the need to include load distribution as an integral component of the WSN system when making early decisions regarding energy consumption.) The framework builds on the idea that balancing the computational load between sensor nodes and the Hub/Sink can improve energy consumption for the whole WSN, or at least for its battery-powered nodes. The approach is not trivial: it takes into account related issues such as the required data distribution and the connectivity and availability of nodes and Hub/Sink due to their connectivity features and duty cycle. For a practical demonstration, the proposed framework is applied to an agriculture case study, a sector very relevant in our region. In this kind of rural context, distances, the low margins imposed by vegetable selling prices, and the lack of continuous power supplies can make sensing solutions viable or inviable for farmers. The proposed framework systematizes and facilitates the complex calculations WSN designers require, taking into account the most relevant variables regarding power consumption and avoiding full, partial, or prototype implementations and measurements of the different candidate computational load distributions for a specific WSN.
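The core trade-off the framework reasons about can be sketched as a back-of-the-envelope energy model (a minimal sketch: all energy constants, payload sizes, and cycle counts below are invented for illustration, not taken from the paper): a battery-powered node either ships raw samples to the Hub/Sink or pays CPU energy to aggregate locally and transmit a smaller payload.

```python
def node_energy(raw_bits, e_tx_per_bit, cpu_cycles, e_cpu_per_cycle,
                compressed_bits=None):
    """Energy (joules) a node spends in one reporting period.
    If compressed_bits is given, the node computes locally (paying CPU
    energy) and transmits the smaller payload; otherwise it sends raw."""
    if compressed_bits is None:
        return raw_bits * e_tx_per_bit
    return cpu_cycles * e_cpu_per_cycle + compressed_bits * e_tx_per_bit

# Hypothetical numbers: 10 kbit of raw samples vs a 200-bit aggregate
# that costs 50k CPU cycles to compute on the node.
raw = node_energy(10_000, e_tx_per_bit=0.5e-6, cpu_cycles=0,
                  e_cpu_per_cycle=0)
local = node_energy(10_000, e_tx_per_bit=0.5e-6, cpu_cycles=50_000,
                    e_cpu_per_cycle=1e-9, compressed_bits=200)
print(raw, local)  # with these assumed constants, local processing wins
```

With radios typically costing far more energy per bit than CPUs cost per cycle, local aggregation often wins, but the balance flips as the computation grows or the payload reduction shrinks, which is exactly the kind of early what-if calculation the framework is meant to systematize.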
|
34
|
Individual crop loads provide local control for collective food intake in ant colonies. eLife 2018; 7:31730. [PMID: 29506650 PMCID: PMC5862530 DOI: 10.7554/elife.31730] [Received: 09/03/2017] [Accepted: 02/19/2018]
Abstract
Nutritional regulation by ants emerges from a distributed process: food is collected by a small fraction of workers, stored within the crops of individuals, and spread via local ant-to-ant interactions. The precise individual-level underpinnings of this collective regulation have remained unclear, mainly due to difficulties in measuring the food within ants' crops. Here we image fluorescent liquid food in individually tagged Camponotus sanctus ants and track the real-time food flow from foragers to their gradually satiating colonies. We show how the feedback between colony satiation level and food inflow is mediated by individual crop loads; specifically, the crop loads of recipient ants control food flow rates, while those of foragers regulate the frequency of foraging trips. Interestingly, these effects do not arise from pure physical limitations of crop capacity. Our findings suggest that the emergence of food intake regulation does not require individual foragers to assess the global state of the colony.
|
35
|
A Decentralized Framework for Multi-Agent Robotic Systems. SENSORS 2018; 18:s18020417. [PMID: 29389849 PMCID: PMC5855891 DOI: 10.3390/s18020417] [Received: 12/23/2017] [Revised: 01/14/2018] [Accepted: 01/17/2018]
Abstract
Over the past few years, decentralization of multi-agent robotic systems has become an important research area. These systems do not depend on a central control unit, which enables distributed, asynchronous, and robust task control and assignment. However, in some cases the network communication process between robotic agents is overlooked, creating a dependency in which each agent must maintain a permanent link with nearby units to be able to fulfill its goals. This article describes a communication framework in which each agent in the system can leave the network or accept new connections, sending its information based on the transfer history of all nodes in the network. To this end, each agent follows four processes to participate in the system, plus a fifth process for data transfer to the nearest nodes based on the Received Signal Strength Indicator (RSSI) and data history. To validate this framework, we use differential-drive robotic agents and a monitoring agent to generate a topological map of an environment containing obstacles.
|
36
|
Towards Designing a Secure Exchange Platform for Diabetes Monitoring and Therapy. Stud Health Technol Inform 2018; 248:239-246. [PMID: 29726443] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
BACKGROUND Diabetes mellitus is one of the most prominent examples of a chronic condition that requires active patient self-management and a network of specialists. OBJECTIVES The aim of this study was to analyze the user and legal requirements and to develop a rough technology concept for a secure and patient-centered exchange platform. METHODS To this end, 14 experts representing different stakeholders were interviewed and took part in group discussions at three workshops, and the pertinent literature and legal texts were analyzed. RESULTS The user requirements embraced a comprehensive set of use cases and the demand for "one platform for all", which is underlined by the right to data portability under new regulations. In order to meet these requirements, a distributed ledger technology was proposed. CONCLUSION We will therefore focus on a patient-centered application that showcases self-management and exchange with health specialists.
|
37
|
Decentralized safety concept for closed-loop controlled intensive care. ACTA ACUST UNITED AC 2017; 62:213-223. [PMID: 28306515 DOI: 10.1515/bmt-2016-0087] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2016] [Accepted: 01/05/2017] [Indexed: 11/15/2022]
Abstract
This paper presents a decentralized safety concept for networked intensive care setups, for which a decentralized network of sensors and actuators is realized by embedded microcontroller nodes. It is evaluated for up to eleven medical devices in a setup for automated acute respiratory distress syndrome (ARDS) therapy. In this contribution we highlight blood pump supervision as an exemplary safety measure, which allows reliable bubble detection in an extracorporeal blood circulation. The approach is validated with data from animal experiments including 35 bubbles with sizes between 0.05 and 0.3 ml. All 18 bubbles with a size down to 0.15 ml are successfully detected. By using hidden Markov models (HMMs) as the statistical method, the number of necessary sensors can be reduced by two pressure sensors.
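The HMM-based supervision can be sketched as a minimal discrete-HMM likelihood test: score a pressure signal under a "bubble" model and a "no-bubble" model and flag whichever fits better. All model parameters and the observation alphabet are invented for illustration; the paper's models are trained on the animal-experiment data:

```python
# Minimal discrete-HMM classifier sketch for bubble detection in a pressure
# signal. States: 0 = normal flow, 1 = bubble passing. Observations:
# 0 = smooth pressure, 1 = transient pressure dip. All numbers are invented.
import math

def forward_loglike(obs, start, trans, emit):
    """Log-likelihood of an observation sequence under a discrete HMM
    (forward algorithm)."""
    states = range(len(start))
    alpha = [start[s] * emit[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in states) * emit[s][o]
                 for s in states]
    return math.log(sum(alpha))

start = [0.9, 0.1]
trans = [[0.95, 0.05], [0.30, 0.70]]
emit_bubble = [[0.8, 0.2], [0.1, 0.9]]    # bubble model: dips likely in state 1
emit_clean = [[0.99, 0.01], [0.5, 0.5]]   # no-bubble model: dips are rare

signal = [0, 0, 1, 1, 1, 0]
is_bubble = (forward_loglike(signal, start, trans, emit_bubble) >
             forward_loglike(signal, start, trans, emit_clean))
```

A run of pressure dips scores higher under the bubble model, so the supervision node can raise an alarm from a single pressure channel, which is how a statistical model can substitute for dedicated sensors.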
|
38
|
Towards an Open Infrastructure for Relating Scholarly Assets. Stud Health Technol Inform 2017; 235:491-495. [PMID: 28423841] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Discovery of useful relationships between scholarly assets on the web is challenging, both in terms of generating the right metadata around the assets and in connecting all relevant digital entities in a chain of provenance accessible to the whole community. This paper reports the development of a framework and tools enabling scholarly asset relationships to be expressed in a standard and open way, illustrated with use-cases of discovering new knowledge across cohort studies. The framework uses Research Objects for aggregation, distributed databases for storage, and distributed ledgers for provenance. Our proposal avoids management by a single central platform or organization, instead leveraging the use of existing resources and platforms across natural partnerships. Our proposed infrastructure will support a wide range of users from system administrators to researchers.
|
39
|
Abstract
Finding the global minimum energy conformation (GMEC) of a huge combinatorial search space is the key challenge in computational protein design (CPD) problems. Traditional algorithms lack a scalable and efficient distributed design scheme, preventing researchers from taking full advantage of current cloud infrastructures. We design cloud OSPREY (cOSPREY), an extension to OSPREY, a widely used protein design software package, that allows the original design framework to scale to commercial cloud infrastructures. We propose several novel designs that integrate both algorithm and system optimizations, such as GMEC-specific pruning, state search partitioning, asynchronous algorithm state sharing, and fault tolerance. We evaluate cOSPREY on three different cloud platforms using different technologies and show that it can solve a number of large-scale protein design problems that have not been possible with previous approaches.
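The GMEC objective itself is compact: a sum of one-body and pairwise rotamer energies, minimized over all assignments. Below is a brute-force toy sketch with invented energies; OSPREY's real search adds the pruning and partitioning described above:

```python
# Toy GMEC search: exhaustive enumeration over per-residue rotamer choices
# for a 3-residue, 2-rotamer design space. Energies are invented numbers.
from itertools import product

# One-body energies E1[residue][rotamer] and pairwise energies
# E2[(i, j)][(rot_i, rot_j)] between interacting residue pairs.
E1 = [[0.0, 1.5], [0.2, 0.1], [1.0, 0.0]]
E2 = {(0, 1): {(0, 0): 0.0, (0, 1): 0.5, (1, 0): 0.3, (1, 1): 0.0},
      (1, 2): {(0, 0): 0.4, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.2}}

def total_energy(conf):
    """Energy of one full rotamer assignment."""
    e = sum(E1[i][r] for i, r in enumerate(conf))
    for (i, j), table in E2.items():
        e += table[(conf[i], conf[j])]
    return e

def gmec(n_res=3, n_rot=2):
    """Enumerate all rotamer assignments; return the minimum-energy one."""
    best_conf, best_e = None, float("inf")
    for conf in product(range(n_rot), repeat=n_res):
        e = total_energy(conf)
        if e < best_e:
            best_conf, best_e = conf, e
    return best_conf, best_e

conf, energy = gmec()
```

The search space grows as rotamers^residues, which is why realistic designs need the distributed pruning and state-partitioning machinery that cOSPREY contributes rather than this exhaustive loop.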
|
40
|
Abstract
Simple computation can be performed using the interactions between single-stranded molecules of DNA. These interactions are typically toehold-mediated strand displacement reactions in a well-mixed solution. We demonstrate that a DNA circuit with tethered reactants is a distributed system and show how it can be described as a stochastic Petri net. The system can be verified by mapping the Petri net onto a continuous-time Markov chain, which can also be used to find an optimal design for the circuit. This theoretical machinery can be applied to create software that automatically designs a DNA circuit, linking an abstract propositional formula to a physical DNA computation system that is capable of evaluating it. We conclude by introducing example mechanisms that can implement such circuits experimentally and discuss their individual strengths and weaknesses.
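The Petri-net-to-CTMC mapping can be sketched for a toy two-reaction net: enumerate the reachable markings, then emit a generator matrix whose off-diagonal entries are transition rates and whose rows sum to zero. The net and rates below are invented, not a circuit from the paper:

```python
# Sketch: reachable markings of a tiny stochastic Petri net and the
# corresponding CTMC generator matrix. Reactions and rates are invented.

# transitions: (consumed places, produced places, rate)
transitions = [({"A": 1, "B": 1}, {"AB": 1}, 2.0),   # A + B -> AB
               ({"AB": 1}, {"OUT": 1}, 1.0)]         # AB -> OUT

def enabled(marking, consumed):
    return all(marking.get(p, 0) >= n for p, n in consumed.items())

def fire(marking, consumed, produced):
    m = dict(marking)
    for p, n in consumed.items():
        m[p] -= n
    for p, n in produced.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def reachable(initial):
    """Breadth-first enumeration of all markings reachable from `initial`."""
    seen, frontier = [initial], [initial]
    while frontier:
        m = frontier.pop()
        for consumed, produced, _ in transitions:
            if enabled(m, consumed):
                m2 = fire(m, consumed, produced)
                if m2 not in seen:
                    seen.append(m2)
                    frontier.append(m2)
    return seen

def generator(states):
    """Q[i][j] = rate of jumping i -> j; diagonal = -(row sum), as a CTMC requires."""
    n = len(states)
    Q = [[0.0] * n for _ in range(n)]
    for i, m in enumerate(states):
        for consumed, produced, rate in transitions:
            if enabled(m, consumed):
                Q[i][states.index(fire(m, consumed, produced))] += rate
        Q[i][i] = -sum(Q[i][j] for j in range(n) if j != i)
    return Q

states = reachable({"A": 1, "B": 1})
Q = generator(states)
```

The resulting chain has three states (inputs present, intermediate bound, output released); standard CTMC tools can then verify reachability of the accepting marking or optimize rates, which is the verification step the paper automates.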
|
41
|
Abstract
We propose a distributed model of nestmate recognition, analogous to the one used by the vertebrate immune system, in which colony response results from the diverse reactions of many ants. The model describes how individual behaviour produces colony response to non-nestmates. No single ant knows the odour identity of the colony. Instead, colony identity is defined collectively by all the ants in the colony. Each ant responds to the odour of other ants by reference to its own unique decision boundary, which is a result of its experience of encounters with other ants. Each ant thus recognizes a particular set of chemical profiles as being those of non-nestmates. This model predicts, as experimental results have shown, that the outcome of behavioural assays is likely to be variable, that it depends on the number of ants tested, that response to non-nestmates changes over time and that it changes in response to the experience of individual ants. A distributed system allows a colony to identify non-nestmates without requiring that all individuals have the same complete information, and it facilitates the tracking of changes in cuticular hydrocarbon profiles because only a subset of ants must react to produce an adequate colony response.
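The core of the model is easy to sketch: each ant holds its own odour template and decision boundary, and the colony response is simply the fraction of ants that reject an encountered profile. All numbers below are illustrative:

```python
# Sketch of distributed nestmate recognition: each ant compares an odour
# profile against its own template with its own threshold (its private
# decision boundary). Profiles are toy 2-D chemical vectors.
import math

def ant_rejects(profile, template, threshold):
    """One ant: reject if the odour differs from its template by more than
    its individual threshold."""
    return math.dist(profile, template) > threshold

def colony_response(profile, colony):
    """Fraction of ants that treat the profile as non-nestmate."""
    votes = [ant_rejects(profile, t, th) for t, th in colony]
    return sum(votes) / len(votes)

# Each ant: (its learned template of the colony odour, its threshold).
colony = [((1.0, 1.0), 0.5), ((1.1, 0.9), 0.3), ((0.9, 1.0), 0.8)]
nestmate = (1.0, 1.05)
intruder = (2.0, 0.2)
```

Because thresholds and templates differ per ant, intermediate profiles produce graded, variable responses that depend on which and how many ants are tested, reproducing the variability the model predicts for behavioural assays.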
|
42
|
A practical approach to achieve private medical record linkage in light of public resources. J Am Med Inform Assoc 2013; 20:285-92. [PMID: 22847304 PMCID: PMC3638181 DOI: 10.1136/amiajnl-2012-000917] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2012] [Accepted: 07/12/2012] [Indexed: 11/03/2022] Open
Abstract
OBJECTIVE Integration of patients' records across resources enhances analytics. To address privacy concerns, emerging strategies, such as Bloom filter encodings (BFEs), enable integration while obscuring identifiers. However, recent investigations demonstrate BFEs are, in theory, vulnerable to cryptanalysis when encoded identifiers are randomly selected from a public resource. This study investigates the extent to which cryptanalysis conditions hold for (1) real patient records and (2) a countermeasure that obscures the frequencies of the identifying values in encoded datasets. DESIGN First, to investigate the strength of cryptanalysis for real patient records, we build BFEs from identifiers in an electronic medical record system and apply cryptanalysis using identifiers in a publicly available voter registry. Second, to investigate the countermeasure under ideal cryptanalysis conditions, we compose BFEs from identifiers that are randomly selected from a public voter registry. MEASUREMENT We utilize precision (ie, rate of correct re-identified encodings) and computation efficiency (ie, time to complete cryptanalysis) to assess the performance of cryptanalysis in BFEs before and after application of the countermeasure. RESULTS Cryptanalysis can achieve high precision when the encoded identifiers are composed of a random sample of a public resource (ie, a voter registry). However, we also find that the attack is less efficient and may not be practical for more realistic scenarios. By contrast, the proposed countermeasure made cryptanalysis impractical in terms of precision and efficiency. CONCLUSIONS Performance of cryptanalysis against BFEs based on patient data is significantly lower than theoretical estimates. The proposed countermeasure makes BFEs resistant to known practical attacks.
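A minimal BFE built from character bigrams shows the linkage primitive under study: similar names yield overlapping bit patterns, which is what enables both private linkage and frequency-based cryptanalysis. Filter size, the double-hashing scheme, and the padding are simplified assumptions rather than any specific deployed encoding:

```python
# Minimal Bloom filter encoding (BFE) of a name from character bigrams,
# with Dice similarity for comparing two encodings. Parameters are toy values.
import hashlib

def bigrams(name):
    padded = f"_{name.lower()}_"
    return {padded[i:i + 2] for i in range(len(padded) - 1)}

def bfe(name, m=256, k=3):
    """Encode a name as an m-bit Bloom filter, k double-hashed positions per bigram."""
    bits = 0
    for gram in bigrams(name):
        h = hashlib.sha256(gram.encode()).digest()
        h1 = int.from_bytes(h[:8], "big")
        h2 = int.from_bytes(h[8:16], "big")
        for i in range(k):
            bits |= 1 << ((h1 + i * h2) % m)
    return bits

def dice(a, b):
    """Dice similarity of two encodings; 1.0 for identical bit patterns."""
    inter = bin(a & b).count("1")
    return 2 * inter / (bin(a).count("1") + bin(b).count("1"))
```

Because encodings of frequent names recur with recognizable bit patterns, an attacker holding a public name list (such as a voter registry) can try to match encoding frequencies to name frequencies, which is exactly the cryptanalysis whose practicality the study measures.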
|
43
|
Reducing patient re-identification risk for laboratory results within research datasets. J Am Med Inform Assoc 2013; 20:95-101. [PMID: 22822040 PMCID: PMC3555327 DOI: 10.1136/amiajnl-2012-001026] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2012] [Accepted: 07/02/2012] [Indexed: 01/08/2023] Open
Abstract
OBJECTIVE To try to lower patient re-identification risks for biomedical research databases containing laboratory test results while also minimizing changes in clinical data interpretation. MATERIALS AND METHODS In our threat model, an attacker obtains 5-7 laboratory results from one patient and uses them as a search key to discover the corresponding record in a de-identified biomedical research database. To test our models, the existing Vanderbilt TIME database of 8.5 million Safe Harbor de-identified laboratory results from 61 280 patients was used. The uniqueness of unaltered laboratory results in the dataset was examined, and then two data perturbation models were applied: simple random offsets and an expert-derived, clinical meaning-preserving model. A rank-based re-identification algorithm was used to mimic an attack. The re-identification risk and the retention of clinical meaning for each model's perturbed laboratory results were assessed. RESULTS Differences in re-identification rates between the algorithms were small despite substantial divergence in altered clinical meaning. The expert algorithm maintained the clinical meaning of laboratory results better (affecting up to 4% of test results) than simple perturbation (affecting up to 26%). DISCUSSION AND CONCLUSION With growing impetus for sharing clinical data for research, and in view of healthcare-related federal privacy regulation, methods to mitigate risks of re-identification are important. A practical, expert-derived perturbation algorithm that demonstrated potential utility was developed. Similar approaches might enable administrators to select data protection scheme parameters that meet their preferences in the trade-off between the protection of privacy and the retention of clinical meaning of shared data.
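The contrast between the two perturbation styles can be sketched as follows; the diagnostic cutoff and ranges are invented (loosely modelled on a glucose threshold), not the study's expert-derived rules:

```python
# Sketch: simple random offset vs. a "clinical meaning-preserving" rule that
# never moves a lab value across a decision threshold. Numbers are invented.
import random

GLUCOSE_CUTOFF = 126.0   # illustrative diagnostic threshold (mg/dL)

def simple_offset(value, rng, max_shift=10.0):
    """Naive perturbation: may flip a value's clinical interpretation."""
    return value + rng.uniform(-max_shift, max_shift)

def meaning_preserving(value, rng, max_shift=10.0):
    """Perturb, but clamp so the value stays on its side of the cutoff."""
    shifted = value + rng.uniform(-max_shift, max_shift)
    if value < GLUCOSE_CUTOFF:
        return min(shifted, GLUCOSE_CUTOFF - 0.1)
    return max(shifted, GLUCOSE_CUTOFF)

rng = random.Random(7)
originals = [120.0, 124.0, 130.0, 140.0]
safe = [meaning_preserving(v, rng) for v in originals]
flips = sum((v < GLUCOSE_CUTOFF) != (s < GLUCOSE_CUTOFF)
            for v, s in zip(originals, safe))
```

By construction the clamped rule never flips a value across the threshold, illustrating why the expert model altered clinical meaning far less than simple offsets at a comparable level of perturbation.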
|
44
|
The Hub Population Health System: distributed ad hoc queries and alerts. J Am Med Inform Assoc 2012; 19:e46-50. [PMID: 22071531 PMCID: PMC3392869 DOI: 10.1136/amiajnl-2011-000322] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2011] [Accepted: 10/16/2011] [Indexed: 11/04/2022] Open
Abstract
The Hub Population Health System enables the creation and distribution of queries for aggregate count information, clinical decision support alerts at the point-of-care for patients who meet specified conditions, and secure messages sent directly to provider electronic health record (EHR) inboxes. Using a metronidazole medication recall, the New York City Department of Health was able to determine the number of affected patients, send messages to their providers, and distribute an alert to participating practices. As of September 2011, the system is live in 400 practices and within a year will have over 532 practices with 2500 providers, representing over 2.5 million New Yorkers. The Hub can help public health experts to evaluate population health and quality improvement activities throughout the ambulatory care network. Multiple EHR vendors are building these features in partnership with the department's regional extension center in anticipation of new meaningful use requirements.
|
45
|
A multi-layered framework for disseminating knowledge for computer-based decision support. J Am Med Inform Assoc 2011; 18 Suppl 1:i132-9. [PMID: 22052898 PMCID: PMC3241169 DOI: 10.1136/amiajnl-2011-000334] [Citation(s) in RCA: 51] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2011] [Accepted: 09/27/2011] [Indexed: 11/03/2022] Open
Abstract
BACKGROUND There are several challenges in encoding guideline knowledge in a form that is portable to different clinical sites, including the heterogeneity of clinical decision support (CDS) tools, of patient data representations, and of workflows. METHODS We have developed a multi-layered knowledge representation framework for structuring guideline recommendations for implementation in a variety of CDS contexts. In this framework, guideline recommendations are increasingly structured through four layers, successively transforming a narrative text recommendation into input for a CDS system. We have used this framework to implement rules for a CDS service based on three guidelines. We also conducted a preliminary evaluation, where we asked CDS experts at four institutions to rate the implementability of six recommendations from the three guidelines. CONCLUSION The experience in using the framework and the preliminary evaluation indicate that this approach has promise in creating structured knowledge, to implement in CDS systems, that is usable across organizations.
|
46
|
The Yale cTAKES extensions for document classification: architecture and application. J Am Med Inform Assoc 2011; 18:614-20. [PMID: 21622934 PMCID: PMC3168305 DOI: 10.1136/amiajnl-2011-000093] [Citation(s) in RCA: 64] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2010] [Accepted: 04/22/2011] [Indexed: 11/04/2022] Open
Abstract
BACKGROUND Open-source clinical natural-language-processing (NLP) systems have lowered the barrier to the development of effective clinical document classification systems. Clinical natural-language-processing systems annotate the syntax and semantics of clinical text; however, feature extraction and representation for document classification pose technical challenges. METHODS The authors developed extensions to the clinical Text Analysis and Knowledge Extraction System (cTAKES) that simplify feature extraction, experimentation with various feature representations, and the development of both rule-based and machine-learning-based document classifiers. The authors describe and evaluate their system, the Yale cTAKES Extensions (YTEX), on the classification of radiology reports that contain findings suggestive of hepatic decompensation. RESULTS AND DISCUSSION The F1-score of the system for the retrieval of abdominal radiology reports was 96%, and was 79%, 91%, and 95% for the presence of liver masses, ascites, and varices, respectively. The authors released YTEX as open source, available at http://code.google.com/p/ytex.
|
47
|
Normalized names for clinical drugs: RxNorm at 6 years. J Am Med Inform Assoc 2011; 18:441-8. [PMID: 21515544 PMCID: PMC3128404 DOI: 10.1136/amiajnl-2011-000116] [Citation(s) in RCA: 253] [Impact Index Per Article: 19.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2011] [Accepted: 03/24/2011] [Indexed: 01/03/2023] Open
Abstract
OBJECTIVE In the 6 years since the National Library of Medicine began monthly releases of RxNorm, RxNorm has become a central resource for communicating about clinical drugs and supporting interoperation between drug vocabularies. MATERIALS AND METHODS Built on the idea of a normalized name for a medication at a given level of abstraction, RxNorm provides a set of names and relationships based on 11 different external source vocabularies. The standard model enables decision support to take place for a variety of uses at the appropriate level of abstraction. With the incorporation of National Drug File Reference Terminology (NDF-RT) from the Veterans Administration, even more sophisticated decision support has become possible. DISCUSSION While related products such as RxTerms, RxNav, MyMedicationList, and MyRxPad have been recognized as helpful for various uses, tasks such as identifying exactly what is and is not on the market remain a challenge.
|
48
|
A secure protocol for protecting the identity of providers when disclosing data for disease surveillance. J Am Med Inform Assoc 2011; 18:212-7. [PMID: 21486880 PMCID: PMC3078664 DOI: 10.1136/amiajnl-2011-000100] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2011] [Accepted: 02/03/2011] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Providers have been reluctant to disclose patient data for public-health purposes. Even if patient privacy is ensured, the desire to protect provider confidentiality has been an important driver of this reluctance. METHODS Six requirements for a surveillance protocol were defined that satisfy the confidentiality needs of providers and ensure utility to public health. The authors developed a secure multi-party computation protocol using the Paillier cryptosystem to allow the disclosure of stratified case counts and denominators to meet these requirements. The authors evaluated the protocol in a simulated environment on its computation performance and ability to detect disease outbreak clusters. RESULTS Theoretical and empirical assessments demonstrate that all requirements are met by the protocol. A system implementing the protocol scales linearly in terms of computation time as the number of providers is increased. The absolute time to perform the computations was 12.5 s for data from 3000 practices. This is acceptable performance, given that the reporting would normally be done at 24 h intervals. The accuracy of disease outbreak cluster detection was unchanged compared with a non-secure distributed surveillance protocol, with an F-score higher than 0.92 for outbreaks involving 500 or more cases. CONCLUSION The protocol and associated software provide a practical method for providers to disclose patient data for sentinel, syndromic or other indicator-based surveillance while protecting patient privacy and the identity of individual providers.
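The Paillier property that makes such a protocol possible is additive homomorphism: multiplying ciphertexts adds the underlying plaintexts, so stratified case counts can be aggregated without any party seeing an individual provider's contribution. A toy demonstration follows; the tiny fixed primes are for illustration only and are wildly insecure:

```python
# Toy Paillier demo: providers encrypt case counts, the ciphertexts are
# multiplied, and decryption yields the sum. Demo primes are insecure.
import math
import random

p, q = 17, 19                       # insecure demo primes
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # decryption constant

def encrypt(m, rng):
    while True:
        r = rng.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

rng = random.Random(0)
counts = [3, 5, 7]                   # per-provider case counts
ciphers = [encrypt(c, rng) for c in counts]
aggregate = 1
for c in ciphers:
    aggregate = (aggregate * c) % n2  # ciphertext product = plaintext sum
total = decrypt(aggregate)
```

Public health learns only `total`, never the per-provider counts, which is the confidentiality property the six protocol requirements demand.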
|
49
|
An Entropy Approach to Disclosure Risk Assessment: Lessons from Real Applications and Simulated Domains. DECISION SUPPORT SYSTEMS 2011; 51:10-20. [PMID: 21647242 PMCID: PMC3107517 DOI: 10.1016/j.dss.2010.11.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/30/2023]
Abstract
We live in an increasingly mobile world, which leads to the duplication of information across domains. Though organizations attempt to obscure the identities of their constituents when sharing information for worthwhile purposes, such as basic research, the uncoordinated nature of such an environment can lead to privacy vulnerabilities. For instance, disparate healthcare providers can collect information on the same patient. Federal policy requires that such providers share "de-identified" sensitive data, such as biomedical (e.g., clinical and genomic) records. At the same time, such providers can share identified information, devoid of sensitive biomedical data, for administrative functions. On a provider-by-provider basis, the biomedical and identified records appear unrelated; however, links can be established when multiple providers' databases are studied jointly. This problem, known as trail disclosure, is a generalized phenomenon and occurs because an individual's location access pattern can be matched across the shared databases. Due to technical and legal constraints, it is often difficult to coordinate between providers, and thus it is critical to assess the disclosure risk in distributed environments so that we can develop techniques to mitigate such risks. Research on privacy protection has so far focused on developing technologies to suppress or encrypt identifiers associated with sensitive information. There is a growing body of work on the formal assessment of the disclosure risk of database entries in publicly shared databases, but less attention has been paid to the distributed setting. In this research, we review the trail disclosure problem in several domains with known vulnerabilities and show that disclosure risk is influenced by the distribution of how people visit service providers. Based on empirical evidence, we propose an entropy metric for assessing such risk in shared databases prior to their release. This metric assesses risk by leveraging the statistical characteristics of a visit distribution, as opposed to person-level data. It is computationally efficient and superior to existing risk assessment methods, which rely on ad hoc assessments that are often computationally expensive and unreliable. We evaluate our approach on a range of location access patterns in simulated environments. Our results demonstrate that the approach is effective at estimating trail disclosure risks and that the amount of self-information contained in a distributed system is one of its main driving factors.
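A minimal version of such an entropy score over a visit distribution might look like this; the visit counts are invented:

```python
# Shannon entropy of the distribution of visits across providers. Intuition:
# evenly spread visits carry less identifying signal per trail than a
# distribution concentrated at one provider. Counts are illustrative.
import math

def shannon_entropy(visit_counts):
    """Entropy (bits) of the visit distribution across providers."""
    total = sum(visit_counts)
    probs = [c / total for c in visit_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)

uniform = shannon_entropy([25, 25, 25, 25])   # visits spread evenly
skewed = shannon_entropy([97, 1, 1, 1])       # most visits at one provider
```

The metric needs only aggregate counts, not person-level trails, which is what makes this style of assessment computationally cheap compared with re-running a linkage attack.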
|
50
|
Parallel Fuzzy Segmentation of Multiple Objects. INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2008; 18:336-344. [PMID: 19444333 PMCID: PMC2681298 DOI: 10.1002/ima.20170] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/27/2023]
Abstract
The usefulness of fuzzy segmentation algorithms based on fuzzy connectedness principles has been established in numerous publications. New technologies are capable of producing larger and larger datasets, and this causes the sequential implementations of fuzzy segmentation algorithms to be time-consuming. We have adapted a sequential fuzzy segmentation algorithm to multi-processor machines. We demonstrate the efficacy of such a distributed fuzzy segmentation algorithm by testing it with large datasets (of the order of 50 million points/voxels/items): a speed-up factor of approximately five over the sequential implementation seems to be the norm.
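The sequential core being parallelized is fuzzy connectedness: a point's connectedness to a seed is the strength of the best path, where a path's strength is its weakest affinity link. A 1-D Dijkstra-style sketch, with an invented affinity function, follows; the paper's contribution is distributing this propagation across processors:

```python
# Sketch of the fuzzy-connectedness core on a 1-D "image". Connectedness of
# each pixel to the seed = max over paths of the min affinity along the path.
import heapq

def affinity(a, b):
    """Toy affinity: high when neighbouring intensities are similar."""
    return 1.0 / (1.0 + abs(a - b))

def fuzzy_connectedness(image, seed):
    """Max-min path strength from the seed to every pixel (Dijkstra-style)."""
    n = len(image)
    conn = [0.0] * n
    conn[seed] = 1.0
    heap = [(-1.0, seed)]            # negate strengths for a max-first heap
    while heap:
        strength, i = heapq.heappop(heap)
        strength = -strength
        if strength < conn[i]:
            continue                 # stale entry
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                s = min(strength, affinity(image[i], image[j]))
                if s > conn[j]:
                    conn[j] = s
                    heapq.heappush(heap, (-s, j))
    return conn

image = [10, 10, 11, 50, 52, 51]     # two intensity regions
conn = fuzzy_connectedness(image, 0)
```

The sharp intensity jump between the two regions caps the path strength, so pixels beyond it have low connectedness to the seed; thresholding `conn` then yields the segmented object.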
|