907 results for Distributed network protocol


Relevance:

30.00%

Publisher:

Abstract:

Recent advancements in cloud computing have enabled the proliferation of distributed applications, which require management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing operating conditions and numbers of users, application performance might suffer, leading to Service Level Agreement (SLA) violations and inefficient use of hardware resources. We introduce a system for managing the complexity of scaling applications composed of multiple services, using mechanisms based on the fulfillment of SLAs. We show how service monitoring information can be used in conjunction with service level objectives, predictions, and correlations between performance indicators to optimize the allocation of services belonging to distributed applications. We validate our models using experiments and simulations involving a distributed enterprise information system. We show how discovering correlations between application performance indicators can serve as a basis for creating refined service level objectives, which can then be used for scaling the application and improving its overall performance under similar conditions.
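As a hedged illustration of the correlation-based refinement step described above, the sketch below derives a refined service level objective for an upstream indicator from monitoring samples. The indicator names, sample values, linear model and threshold are assumptions made for the sketch, not the paper's implementation.

```python
import numpy as np

def refined_slo(indicator, response_time_ms, sla_limit_ms):
    """Fit response_time ~ a*indicator + b and invert it to find the
    indicator level at which the SLA limit would be reached."""
    a, b = np.polyfit(indicator, response_time_ms, 1)
    if a <= 0:
        return None        # no positive correlation -> no useful refined SLO
    return (sla_limit_ms - b) / a

# Hypothetical monitoring samples: load on a backend service (requests/s)
# and the observed end-to-end response time of the distributed application.
rps = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
rt_ms = np.array([120.0, 150.0, 190.0, 240.0, 300.0])

threshold = refined_slo(rps, rt_ms, sla_limit_ms=250.0)
print(f"refined SLO: scale out before backend load exceeds ~{threshold:.0f} req/s")
```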

Relevance:

30.00%

Publisher:

Abstract:

Over the past several years, the topics of energy consumption and energy harvesting have gained significant importance as a means of improving the operation of wireless sensor and mesh networks. Energy-awareness of operation is especially relevant for application scenarios in the domain of environmental monitoring in hard-to-access areas. In this work we reflect upon our experiences with a real-world deployment of a wireless mesh network. In particular, we present a comprehensive study of energy measurements collected over several weeks during the summer and winter periods in a network deployed in the Swiss Alps. Energy performance is monitored and analysed for three system components, namely the mesh node, the battery and the solar panel module. Our findings cover a number of aspects of energy consumption, including the amount of load consumed by a mesh node, the amount of load harvested by a solar panel module, and the dependencies between the two. With our work we aim to shed some light on energy-aware network operation and to help both users and developers in the planning and deployment of new wireless (mesh) networks for environmental research.

Relevance:

30.00%

Publisher:

Abstract:

We investigate the problem of distributed detection of sensor failures in networks with a small number of defective sensors, whose measurements differ significantly from those of their neighbors. We build on the sparse nature of the binary sensor failure signals to propose a novel distributed detection algorithm based on gossip mechanisms and on Group Testing (GT), which has so far been used in centralized detection problems. The new distributed GT algorithm estimates the set of scattered defective sensors with a low-complexity distance decoder from a small number of linearly independent binary messages exchanged by the sensors. We first consider networks with one defective sensor and determine the minimal number of linearly independent messages needed for its detection with high probability. We then extend our study to the detection of multiple defective sensors by appropriately modifying the message exchange protocol and the decoding procedure. We show that, for small and medium-sized networks, the number of messages required for successful detection is actually smaller than the minimal number computed theoretically. Finally, simulations demonstrate that the proposed method outperforms methods based on random walks in terms of both detection performance and convergence rate.
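To make the group-testing and distance-decoding idea concrete, the following minimal sketch detects a single defective sensor. The deterministic test design and the problem sizes are illustrative assumptions, and the gossip-based message dissemination of the actual algorithm is abstracted away.

```python
import numpy as np

n_sensors, n_tests = 20, 5              # 2**5 >= 20 distinct test patterns
defective = 7                           # ground truth, unknown to the decoder

# Deterministic test design used here for clarity: sensor j participates in
# test t iff bit t of j is set, so every sensor has a distinct outcome pattern.
W = np.array([[(j >> t) & 1 for j in range(n_sensors)] for t in range(n_tests)])

x = np.zeros(n_sensors, dtype=int)
x[defective] = 1
outcomes = (W @ x > 0).astype(int)      # each test is a boolean OR of its sensors' flags

# Distance decoder: pick the single-defective hypothesis whose predicted
# outcomes are closest (in Hamming distance) to the observed ones.
dists = [int(np.abs(W[:, j] - outcomes).sum()) for j in range(n_sensors)]
print("estimated defective sensor:", int(np.argmin(dists)))    # -> 7
```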

Relevance:

30.00%

Publisher:

Abstract:

This paper considers a framework where data from correlated sources are transmitted with the help of network coding in ad hoc network topologies. The correlated data are encoded independently at sensors and network coding is employed in the intermediate nodes in order to improve the data delivery performance. In such settings, we focus on the problem of reconstructing the sources at decoder when perfect decoding is not possible due to losses or bandwidth variations. We show that the source data similarity can be used at decoder to permit decoding based on a novel and simple approximate decoding scheme. We analyze the influence of the network coding parameters and in particular the size of finite coding fields on the decoding performance. We further determine the optimal field size that maximizes the expected decoding performance as a trade-off between information loss incurred by limiting the resolution of the source data and the error probability in the reconstructed data. Moreover, we show that the performance of the approximate decoding improves when the accuracy of the source model increases even with simple approximate decoding techniques. We provide illustrative examples showing how the proposed algorithm can be deployed in sensor networks and distributed imaging applications.
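The field-size trade-off can be pictured with a toy cost model: a uniform-quantizer distortion term that shrinks with the field size, plus an error term that grows with the representable range. These functional forms and parameter values are assumptions made purely for illustration, not the paper's derivation.

```python
def expected_distortion(q, p_symbol_err=0.02, signal_range=1.0):
    quant_mse = (signal_range / q) ** 2 / 12.0      # uniform quantizer distortion
    # Toy error term: a decoding error corrupts the value by an amount that
    # grows with the representable range, hence with the field size.
    err_term = p_symbol_err * signal_range ** 2 * (1.0 - 1.0 / q)
    return quant_mse + err_term

candidates = [2 ** m for m in range(1, 9)]          # GF(2^m) field sizes 2..256
best_q = min(candidates, key=expected_distortion)
print("field size minimizing expected distortion:", best_q)    # -> 8 for these values
```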

Relevance:

30.00%

Publisher:

Abstract:

This paper is a summary of the main contributions of the PhD thesis published in [1]. The main research contributions of the thesis are driven by the question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains, with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows. First, a novel run-time adaptive MAC protocol is introduced, which allocates the power-hungry radio interface stepwise and on demand when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable and accurate software-based energy estimation, calculated at run-time on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies to adaptively allocate the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. The performance of all developed protocols is evaluated on a self-developed real-world WSN testbed; they achieve superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
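Software-based energy estimation of the kind mentioned in the second contribution is commonly realized by accounting the time a node's radio spends in each power state and multiplying by per-state current draw. The sketch below illustrates that general principle; the state names, current values and voltage are placeholders, not the thesis's estimator or measurements.

```python
# Placeholder per-state current draw (mA) and supply voltage (V); illustrative only.
STATE_CURRENT_MA = {"tx": 17.4, "rx": 19.7, "idle": 0.4, "sleep": 0.02}
SUPPLY_VOLTAGE = 3.0

class EnergyEstimator:
    def __init__(self):
        self.state_time_s = {s: 0.0 for s in STATE_CURRENT_MA}

    def account(self, state, duration_s):
        # Called by the node whenever the radio leaves a power state.
        self.state_time_s[state] += duration_s

    def consumed_mj(self):
        # E = sum over states of I[mA] * V[V] * t[s]  -> millijoules
        return sum(STATE_CURRENT_MA[s] * SUPPLY_VOLTAGE * t
                   for s, t in self.state_time_s.items())

est = EnergyEstimator()
est.account("rx", 0.8)      # e.g. 800 ms listening
est.account("tx", 0.05)     # 50 ms transmitting
est.account("sleep", 9.15)  # rest of a 10 s window asleep
print(f"estimated consumption: {est.consumed_mj():.1f} mJ")
```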

Relevance:

30.00%

Publisher:

Abstract:

The Internet of Things (IoT) is attracting considerable attention from universities, industry, citizens and governments for applications such as healthcare, environmental monitoring and smart buildings. IoT enables network connectivity between smart devices at all times, everywhere, and about everything. In this context, Wireless Sensor Networks (WSNs) play an important role in increasing the ubiquity of networks with smart devices that are low-cost and easy to deploy. However, sensor nodes are restricted in terms of energy, processing and memory. Additionally, low-power radios are very sensitive to noise, interference and multipath distortion. Against this background, this article proposes REL, a protocol based on Routing by Energy and Link quality for IoT applications. To increase reliability and energy efficiency, REL selects routes on the basis of a proposed end-to-end link quality estimator mechanism, residual energy and hop count. Furthermore, REL includes an event-driven mechanism to provide load balancing and avoid the premature energy depletion of nodes/networks. Performance evaluations were carried out using simulation and testbed experiments to show the impact and benefits of REL in small and large-scale networks. The results show that REL increases network lifetime and service availability, as well as the quality of service of IoT applications. It also provides an even distribution of scarce network resources and reduces the packet loss rate compared with well-known protocols.
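A hedged sketch of the kind of route ranking the abstract describes (end-to-end link quality, residual energy and hop count) is shown below. The scoring rule, weights and sample values are illustrative assumptions, not REL's exact decision logic.

```python
def route_score(link_qualities, residual_energies, w_quality=0.5,
                w_energy=0.3, w_hops=0.2, max_hops=10):
    """Higher is better. link_qualities: per-hop quality in [0, 1];
    residual_energies: per-node remaining energy in [0, 1]."""
    end_to_end_quality = min(link_qualities)    # a path is as good as its weakest link
    bottleneck_energy = min(residual_energies)  # avoid routes through depleted nodes
    hop_penalty = 1.0 - len(link_qualities) / max_hops
    return (w_quality * end_to_end_quality
            + w_energy * bottleneck_energy
            + w_hops * hop_penalty)

# Hypothetical candidate routes: (per-hop link qualities, per-node residual energies).
route_a = route_score([0.9, 0.8, 0.85], [0.6, 0.7, 0.5])
route_b = route_score([0.95, 0.4], [0.9, 0.9])
print("preferred route:", "A" if route_a > route_b else "B")
```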

Relevance:

30.00%

Publisher:

Abstract:

Wireless mobile sensor networks are enlarging the Internet of Things (IoT) portfolio with a huge number of multimedia services for smart cities. Safety and environmental monitoring multimedia applications will be part of Smart IoT systems, which aim to reduce emergency response time while also predicting hazardous events. In these mobile and dynamic (possibly disaster) scenarios, a predefined end-to-end path is not a reliable solution; opportunistic routing instead allows routing decisions to be made in a completely distributed manner, hop by hop, based on protocol-specific characteristics. This enables the transmission of video flows of a monitored area/object with Quality of Experience (QoE) support to users, headquarters or IoT platforms. However, existing approaches rely on a single metric for the candidate selection rule, such as link quality or geographic information, which causes a high packet loss rate and reduces the video perception from the human standpoint. This article proposes a cross-layer Link quality and Geographical-aware Opportunistic routing protocol (LinGO), designed for video dissemination in mobile multimedia IoT environments. LinGO improves routing decisions by using multiple metrics, including link quality, geographic location, and energy. The simulation results show the benefits of LinGO compared with well-known routing solutions for video transmission with QoE support in mobile scenarios.
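The multi-metric candidate selection idea can be sketched as a weighted ranking of neighbours by link quality, geographic progress toward the destination and residual energy. The weights, the combination rule and the sample coordinates below are assumptions for illustration, not LinGO's exact rule.

```python
import math

def geographic_progress(node, neighbour, destination):
    """Fraction of the remaining distance covered by forwarding to neighbour."""
    d = math.dist
    return max(0.0, (d(node, destination) - d(neighbour, destination)) / d(node, destination))

def candidate_score(link_quality, progress, residual_energy, weights=(0.4, 0.4, 0.2)):
    wl, wp, we = weights
    return wl * link_quality + wp * progress + we * residual_energy

# Hypothetical topology: node at the origin, destination 100 m away, two neighbours.
node, dest = (0.0, 0.0), (100.0, 0.0)
neighbours = {
    "n1": {"pos": (30.0, 10.0), "lq": 0.9, "energy": 0.8},
    "n2": {"pos": (60.0, 5.0),  "lq": 0.5, "energy": 0.9},
}
ranked = sorted(
    neighbours.items(),
    key=lambda kv: candidate_score(kv[1]["lq"],
                                   geographic_progress(node, kv[1]["pos"], dest),
                                   kv[1]["energy"]),
    reverse=True)
print("forwarding candidate order:", [name for name, _ in ranked])
```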

Relevance:

30.00%

Publisher:

Abstract:

Mobile ad-hoc networks (MANETs) and wireless sensor networks (WSNs) have been attracting increasing attention for decades due to their broad civilian and military applications. Basically, a MANET or WSN is a network of nodes connected by wireless communication links. Due to the limited transmission range of the radio, many pairs of nodes in MANETs or WSNs may not be able to communicate directly and hence need intermediate nodes to forward packets for them. Routing in such networks is an important issue, and it poses great challenges due to the dynamic nature of MANETs and WSNs. On the one hand, the open-air nature of wireless environments brings many difficulties when an efficient routing solution is required: the wireless channel is unreliable due to fading and interference, which makes it difficult to maintain a high-quality path from a source node to a destination node. Additionally, node mobility aggravates network dynamics, causing frequent topology changes and significant overhead for maintaining and recalculating paths. Furthermore, mobile devices and sensors are usually constrained in battery capacity and in computing and communication resources, which limits the functionality of routing protocols. On the other hand, the wireless medium possesses inherent, unique characteristics that can be exploited to enhance transmission reliability and routing performance. Opportunistic routing (OR) is one promising technique that takes advantage of the spatial diversity and broadcast nature of the wireless medium to improve packet forwarding reliability in multihop wireless communication. OR combats unreliable wireless links by involving multiple neighboring nodes (forwarding candidates) in the choice of packet forwarders. In opportunistic routing, a source node does not require an end-to-end path to transmit packets; the packet forwarding decision is made hop by hop in a fully distributed fashion.

Motivated by the deficiencies of existing opportunistic routing protocols in dynamic environments such as mobile ad-hoc networks or wireless sensor networks, this thesis proposes a novel context-aware adaptive opportunistic routing scheme. Our proposal selects packet forwarders by simultaneously exploiting multiple types of cross-layer context information of nodes and environments, and significantly outperforms routing protocols that rely solely on a single metric. The adaptivity feature of our proposal enables network nodes to adjust their behavior at run-time according to network conditions. To accommodate the strict energy constraints of WSNs, this thesis integrates an adaptive duty-cycling mechanism into opportunistic routing for wireless sensor nodes. Our approach dynamically adjusts the sleeping intervals of sensor nodes according to the monitored traffic load and the estimated energy consumption rate. Through the integration of duty cycling and opportunistic routing, our protocol is able to provide a satisfactory balance between good routing performance and energy efficiency for WSNs.
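The duty-cycle adaptation described at the end of the abstract can be pictured as a small control loop that shortens the sleep interval under high traffic and lengthens it when the energy drain exceeds a budget. The rule, constants and sample trace below are illustrative assumptions, not the thesis's algorithm.

```python
def next_sleep_interval(current_s, packets_per_min, drain_mw,
                        load_target=6.0, drain_budget_mw=2.0,
                        min_s=0.1, max_s=5.0):
    interval = current_s
    if packets_per_min > load_target:
        interval *= 0.5          # traffic is high: wake up more often
    elif drain_mw > drain_budget_mw:
        interval *= 1.5          # draining too fast: sleep longer
    return min(max_s, max(min_s, interval))

# Hypothetical sequence of (traffic load, estimated drain) observations.
interval = 1.0
for load, drain in [(12, 1.5), (12, 1.8), (3, 2.6), (2, 2.4), (2, 1.0)]:
    interval = next_sleep_interval(interval, load, drain)
    print(f"load={load}/min drain={drain}mW -> sleep {interval:.2f}s")
```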

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing has evolved to become an enabler for delivering access to large-scale distributed applications running on managed, network-connected computing systems. This makes it possible to host Distributed Enterprise Information Systems (dEISs) in cloud environments while enforcing strict performance and quality-of-service requirements, defined using Service Level Agreements (SLAs). SLAs define the performance boundaries of distributed applications and are enforced by a cloud management system (CMS) dynamically allocating the available computing resources to the cloud services. We present two novel VM-scaling algorithms focused on dEIS systems, which detect the most appropriate scaling conditions using performance models of distributed applications derived from constant-workload benchmarks, together with SLA-specified performance constraints. We simulate the VM-scaling algorithms in a cloud simulator and compare them against trace-based performance models of dEISs. We compare a total of three SLA-based VM-scaling algorithms (one using prediction mechanisms) based on a real-world application scenario involving a large, variable number of users. Our results show that it is beneficial to use autoregressive, predictive, SLA-driven scaling algorithms in cloud management systems for guaranteeing performance invariants of distributed cloud applications, as opposed to using only reactive SLA-based VM-scaling algorithms.
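As a hedged sketch of predictive SLA-driven scaling, the code below fits a simple autoregressive model to recent response-time samples, forecasts the next value, and scales out if the forecast would violate the SLA limit. The model order, window, thresholds and sample data are illustrative assumptions, not the paper's algorithms.

```python
import numpy as np

def ar1_forecast(samples):
    """One-step AR(1) forecast: fit y_t ~ a*y_{t-1} + b on the window."""
    x, y = np.array(samples[:-1]), np.array(samples[1:])
    a, b = np.polyfit(x, y, 1)
    return a * samples[-1] + b

def scaling_decision(response_times_ms, sla_limit_ms, current_vms):
    forecast = ar1_forecast(response_times_ms)
    if forecast > sla_limit_ms:
        return current_vms + 1, forecast        # proactive scale-out
    if forecast < 0.5 * sla_limit_ms and current_vms > 1:
        return current_vms - 1, forecast        # scale-in when over-provisioned
    return current_vms, forecast

history = [140, 155, 168, 185, 205, 228]        # hypothetical response times (ms)
vms, forecast = scaling_decision(history, sla_limit_ms=250, current_vms=4)
print(f"forecast ~{forecast:.0f} ms -> run {vms} VM(s)")
```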

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing enables the provisioning and distribution of highly scalable services in a reliable, on-demand and sustainable manner. However, the objectives of managing enterprise distributed applications in cloud environments under Service Level Agreement (SLA) constraints lead to challenges in maintaining optimal resource control. Furthermore, conflicting objectives in the management of cloud infrastructure and distributed applications might lead to violations of SLAs and inefficient use of hardware and software resources. This dissertation focuses on how SLAs can be used as an input to the cloud management system, increasing the efficiency of both resource allocation and infrastructure scaling. First, we present an extended SLA semantic model for modelling complex service dependencies in distributed applications and for enabling automated cloud infrastructure management operations. Second, we describe a multi-objective VM allocation algorithm for optimised resource allocation in infrastructure clouds. Third, we describe a method of discovering relations between the performance indicators of services belonging to distributed applications and then using these relations to build scaling rules that a cloud management system (CMS) can use for automated management of VMs. Fourth, we introduce two novel VM-scaling algorithms, which optimally scale systems composed of VMs based on given SLA performance constraints. All presented research works were implemented and tested using enterprise distributed applications.

Relevance:

30.00%

Publisher:

Abstract:

INTRODUCTION: Postoperative delirium is one of the most common complications of major surgery, affecting 10-70% of surgical patients 60 years and older. Delirium is an acute change in cognition that manifests as poor attention and illogical thinking and is associated with longer intensive care unit (ICU) and hospital stay, long-lasting cognitive deterioration and increased mortality. Ketamine has been used as an anaesthetic drug for over 50 years and has an established safety record. Recent research suggests that, in addition to preventing acute postoperative pain, a subanaesthetic dose of intraoperative ketamine could decrease the incidence of postoperative delirium as well as other neurological and psychiatric outcomes. However, these proposed benefits of ketamine have not been tested in a large clinical trial.

METHODS: The Prevention of Delirium and Complications Associated with Surgical Treatments (PODCAST) study is an international, multicentre, randomised controlled trial. 600 cardiac and major non-cardiac surgery patients will be randomised to receive ketamine (0.5 or 1 mg/kg) or placebo following anaesthetic induction and prior to surgical incision. For the primary outcome, blinded observers will assess delirium on the day of surgery (postoperative day 0) and twice daily from postoperative days 1-3 using the Confusion Assessment Method or the Confusion Assessment Method for the ICU. For the secondary outcomes, blinded observers will estimate pain using the Behavioral Pain Scale or the Behavioral Pain Scale for Non-Intubated Patients and patient self-report.

ETHICS AND DISSEMINATION: The PODCAST trial has been approved by the ethics boards of five participating institutions; approval is ongoing at other sites. Recruitment began in February 2014 and will continue until the end of 2016. Dissemination plans include presentations at scientific conferences, scientific publications, stakeholder engagement and popular media.

REGISTRATION DETAILS: The study is registered at clinicaltrials.gov, NCT01690988 (last updated March 2014). The PODCAST trial is being conducted under the auspices of the Neurological Outcomes Network for Surgery (NEURONS). TRIAL REGISTRATION NUMBER: NCT01690988 (last updated December 2013).

Relevance:

30.00%

Publisher:

Abstract:

In this work, we propose a novel network-coding-enabled NDN architecture for the delivery of scalable video. Our scheme utilizes network coding in order to address the problem that arises in the original NDN protocol, where optimal use of bandwidth and caching resources necessitates coordination of the forwarding decisions. To optimize the performance of the proposed network-coding-based NDN protocol and render it appropriate for the transmission of scalable video, we devise a novel rate allocation algorithm that decides on the optimal rates of Interest messages sent by clients and intermediate nodes. This algorithm guarantees that the achieved flow of Data objects will maximize the average quality of the video delivered to the client population. To support the handling of Interest messages and Data objects when intermediate nodes perform network coding, we modify the standard NDN protocol and introduce the use of Bloom filters, which efficiently store additional information about the Interest messages and Data objects. The proposed architecture is evaluated for the transmission of scalable video over PlanetLab topologies. The evaluation shows that the proposed scheme performs very close to the optimal performance.
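For readers unfamiliar with Bloom filters, the generic sketch below shows how a node could compactly record identifiers it has already seen (for instance, of Interest messages). The class, parameters and name strings are assumptions for illustration and not the paper's exact encoding.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=256, k_hashes=4):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0

    def _positions(self, item):
        # Derive k bit positions from salted SHA-256 digests of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # May return false positives, never false negatives.
        return all((self.bits >> p) & 1 for p in self._positions(item))

seen = BloomFilter()
seen.add("/video/layer1/gen3")             # hypothetical Interest name
print("/video/layer1/gen3" in seen)        # True
print("/video/layer2/gen3" in seen)        # almost certainly False
```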

Relevance:

30.00%

Publisher:

Abstract:

Advancements in cloud computing have enabled the proliferation of distributed applications, which require the management and control of multiple services. However, without an efficient mechanism for scaling services in response to changing workload conditions, such as the number of connected users, application performance might suffer, leading to violations of Service Level Agreements (SLAs) and possibly inefficient use of hardware resources. Combining dynamic application requirements with the increased use of virtualised computing resources creates a challenging resource management context for application and cloud-infrastructure owners. In such complex environments, business entities use SLAs as a means of specifying quantitative and qualitative requirements of services.

There are several challenges in running distributed enterprise applications in cloud environments, ranging from the instantiation of service VMs in the correct order using an adequate quantity of computing resources, to adapting the number of running services in response to varying external loads, such as the number of users. The application owner is interested in finding the optimal amount of computing and network resources for ensuring that the performance requirements of all their applications are met, and in appropriately scaling the distributed services so that application performance guarantees are maintained even under dynamic workload conditions. Similarly, the infrastructure provider is interested in optimally provisioning the virtual resources onto the available physical infrastructure so that operational costs are minimized while the performance of tenants' applications is maximized.

Motivated by the complexities associated with the management and scaling of distributed applications, while satisfying multiple objectives (related to both consumers and providers of cloud resources), this thesis proposes a cloud resource management platform able to dynamically provision and coordinate the various lifecycle actions on both virtual and physical cloud resources using semantically enriched SLAs. The system focuses on the dynamic sizing (scaling) of virtual infrastructures composed of virtual machine (VM)-bound application services. We describe several algorithms for adapting the number of VMs allocated to a distributed application in response to changing workload conditions, based on SLA-defined performance guarantees. We also present a framework for the dynamic composition of scaling rules for distributed services, which uses benchmark-generated application monitoring traces, and we show how these scaling rules can be combined and included in semantic SLAs for controlling the allocation of services. We further provide a detailed description of the multi-objective infrastructure resource allocation problem and various approaches to solving it, and we present a resource management system based on a genetic algorithm, which performs allocation of virtual resources while considering the optimization of multiple criteria. We show that our approach significantly outperforms reactive VM-scaling algorithms as well as heuristic-based VM-allocation approaches.
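The genetic-algorithm-based allocation can be pictured with a toy sketch: a chromosome assigns each VM to a host, and the fitness penalises overloaded hosts while rewarding consolidation onto fewer hosts. The sizes, operators and weights below are illustrative assumptions, not the dissertation's algorithm.

```python
import random

random.seed(1)
VM_CPU = [2, 4, 3, 1, 2, 5]          # hypothetical CPU demand per VM
HOST_CAP, N_HOSTS = 8, 4

def fitness(assign):
    load = [0] * N_HOSTS
    for vm, host in enumerate(assign):
        load[host] += VM_CPU[vm]
    overload = sum(max(0, l - HOST_CAP) for l in load)
    hosts_used = sum(1 for l in load if l > 0)
    return -(10 * overload + hosts_used)      # higher is better

def evolve(generations=200, pop_size=30, mutation=0.1):
    pop = [[random.randrange(N_HOSTS) for _ in VM_CPU] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(VM_CPU))
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < mutation:                 # random reassignment
                child[random.randrange(len(child))] = random.randrange(N_HOSTS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("VM -> host assignment:", best, "fitness:", fitness(best))
```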

Relevance:

30.00%

Publisher:

Abstract:

Content-Centric Networking (CCN) naturally supports multi-path communication, as it allows the simultaneous use of multiple interfaces (e.g. LTE and WiFi). When multiple sources and multiple clients are considered, the optimal set of distribution trees should be determined in order to optimally use all the available interfaces. This is not a trivial task, as it is a computationally intensive procedure that should be done centrally. The need for central coordination can be removed by employing network coding, which also offers improved resiliency to errors and large throughput gains. In this paper, we propose NetCodCCN, a protocol for integrating network coding in CCN. In comparison to previous works proposing to enable network coding in CCN, NetCodCCN permits Interest aggregation and Interest pipelining, which reduce data retrieval times. The experimental evaluation shows that the proposed protocol leads to significant improvements in content retrieval delay compared to the original CCN. Our results demonstrate that the use of network coding adds robustness to losses and enables more efficient use of the available network resources. The performance gains are verified for content retrieval in various network scenarios.
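To illustrate the basic coding operation an intermediate node can perform, the sketch below combines two equally sized Data segments with XOR (i.e., coding over GF(2)) so that a client already holding one segment recovers the other from a single coded transmission. This is a simplification for illustration; NetCodCCN itself is not restricted to this field, and the segment contents are hypothetical.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

segment_1 = b"DATA-SEGMENT-ONE"          # cached Data objects of equal size
segment_2 = b"DATA-SEGMENT-TWO"

coded = xor_bytes(segment_1, segment_2)  # the node forwards one coded object

# A client that already holds segment_1 (e.g. received over another
# interface) recovers segment_2 from the coded transmission.
recovered = xor_bytes(coded, segment_1)
assert recovered == segment_2
print(recovered.decode())
```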

Relevance:

30.00%

Publisher:

Abstract:

In the framework of the ACTRIS (Aerosols, Clouds, and Trace Gases Research Infrastructure Network) summer 2012 measurement campaign (8 June–17 July 2012), EARLINET organized and performed a controlled feasibility exercise to demonstrate its potential to perform operational, coordinated measurements and deliver products in near-real time. Eleven lidar stations participated in the exercise, which started on 9 July 2012 at 06:00 UT and ended 72 h later on 12 July at 06:00 UT. For the first time, the Single Calculus Chain (SCC) – the common calculus chain developed within EARLINET for the automatic evaluation of lidar data from raw signals up to the final products – was used. All stations sent 1 h measurements in real time to the SCC server in a predefined NetCDF file format. The pre-processing of the data was performed in real time by the SCC, while the optical processing was performed in near-real time after the exercise ended. 98% and 79% of the files sent to the SCC were successfully pre-processed and processed, respectively. These percentages are quite high considering that no cloud screening was performed on the lidar data. The paper draws present and future SCC users' attention to the most critical parameters of the SCC product configuration and their possible optimal values, as well as to the limitations inherent in the raw data. The continuous availability of SCC direct and derived products under heterogeneous conditions is used to demonstrate two potential applications of the EARLINET infrastructure: the monitoring of a Saharan dust intrusion event and the evaluation of two dust transport models. The efforts made to define the measurement protocol and to properly configure the SCC pave the way for applying this protocol to specific applications such as the monitoring of special events, atmospheric modelling, climate research, and calibration/validation activities of spaceborne observations.