266 results for 080503 Networking and Communications


Relevance: 100.00%

Abstract:

This chapter presents a comparative survey of recent key management (key distribution, discovery, establishment and update) solutions for wireless sensor networks. We consider both distributed and hierarchical sensor network architectures where unicast, multicast and broadcast types of communication take place. Probabilistic, deterministic and hybrid key management solutions are presented, and we define a set of metrics to quantify their security properties and resource usage, such as processing, storage and communication overheads. We provide a taxonomy of solutions and identify trade-offs in these schemes, concluding that there is no one-size-fits-all solution.
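One metric such a survey typically compares is the pairwise key-sharing probability of probabilistic (random pre-distribution) schemes. The sketch below is a minimal illustration rather than any particular scheme from the chapter: it computes that probability for an assumed key-pool size and key-chain size, showing the storage/connectivity trade-off.

```python
from math import comb

def share_probability(pool_size: int, chain_size: int) -> float:
    """Probability that two nodes share at least one key when each draws a
    random key-chain of `chain_size` keys from a pool of `pool_size` keys
    (random pre-distribution in the style of Eschenauer-Gligor)."""
    no_overlap = comb(pool_size - chain_size, chain_size) / comb(pool_size, chain_size)
    return 1.0 - no_overlap

# Example: storage vs. connectivity trade-off for a 10,000-key pool.
for chain in (50, 75, 100, 150):
    print(chain, round(share_probability(10_000, chain), 3))
```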

Relevance: 100.00%

Abstract:

Collaborative methods are promising tools for solving complex security tasks. In this context, the authors present the security overlay framework CIMD (Collaborative Intrusion and Malware Detection), which enables participants to state objectives and interests for joint intrusion detection and to find groups for the exchange of security-related data such as monitoring or detection results; the authors refer to these groups as detection groups. First, the authors present and discuss a tree-oriented taxonomy for the representation of nodes within the collaboration model. Second, they introduce and evaluate an algorithm for the formation of detection groups. After conducting a vulnerability analysis of the system, the authors demonstrate the validity of CIMD by examining two different scenarios inspired by sociology in which collaboration is advantageous compared to the non-collaborative approach. They evaluate the benefit of CIMD by simulation in a novel packet-level simulation environment called NeSSi (Network Security Simulator) and give a probabilistic analysis for the scenarios.
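As a rough illustration of detection-group formation (not the CIMD algorithm itself), the sketch below groups nodes whose positions in a tree-oriented taxonomy agree on their first levels, so they can exchange comparable monitoring or detection results. All node names and taxonomy labels are hypothetical.

```python
from collections import defaultdict

# Illustrative node descriptions; tuples loosely follow a tree-oriented
# taxonomy (platform -> monitored service -> data offered).
nodes = {
    "n1": ("linux", "http", "alerts"),
    "n2": ("linux", "http", "flow-stats"),
    "n3": ("windows", "smtp", "alerts"),
    "n4": ("linux", "http", "alerts"),
}

def form_detection_groups(nodes, depth=2):
    """Group nodes whose taxonomy paths agree on the first `depth` levels."""
    groups = defaultdict(list)
    for name, path in nodes.items():
        groups[path[:depth]].append(name)
    return dict(groups)

print(form_detection_groups(nodes))
# {('linux', 'http'): ['n1', 'n2', 'n4'], ('windows', 'smtp'): ['n3']}
```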

Relevance: 100.00%

Abstract:

Secure communication between a large number of sensor nodes randomly scattered over a hostile territory necessitates efficient key distribution schemes. However, due to the limited resources of sensor nodes, such schemes cannot be based on post-deployment computations. Instead, pairwise (symmetric) keys must be pre-distributed by assigning a list of keys (a key-chain) to each sensor node. If a pair of nodes does not share a common key after deployment, they must find a key-path composed of secured links. The objective is to minimize the key-chain size while (i) maximizing the pairwise key-sharing probability and resilience, and (ii) minimizing the average key-path length. This paper presents a deterministic key distribution scheme based on Expander Graphs. It shows how to map the parameters (e.g., degree, expansion and diameter) of a Ramanujan Expander Graph to the desired properties of a key distribution scheme for a physical network topology.
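To make the mapping concrete, the sketch below assigns one symmetric key per edge of a d-regular graph, so a node's key-chain is the set of keys on its incident edges: the key-chain size equals the degree, and the worst-case key-path length is bounded by the graph diameter. A simple circulant graph stands in for the Ramanujan expander here; all parameters are illustrative, not the paper's construction.

```python
def circulant_graph(n: int, offsets=(1, 2, 3)):
    """Simple d-regular stand-in for an expander: node i is adjacent to
    i +/- each offset (mod n)."""
    edges = set()
    for i in range(n):
        for o in offsets:
            edges.add(frozenset((i, (i + o) % n)))
    return edges

def assign_key_chains(n: int, edges):
    """One symmetric key per edge; a node's key-chain holds the keys of
    its incident edges, so |key-chain| = node degree."""
    key_id = {e: k for k, e in enumerate(sorted(edges, key=sorted))}
    return {i: {key_id[e] for e in edges if i in e} for i in range(n)}

edges = circulant_graph(12)
chains = assign_key_chains(12, edges)
print(len(chains[0]))           # key-chain size = degree (6 here)
print(chains[0] & chains[1])    # the single key shared by adjacent nodes
```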

Relevance: 100.00%

Abstract:

We consider a Cooperative Intrusion Detection System (CIDS), a distributed Artificial Immune System (AIS) based IDS in which nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm for the selection of detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, carries the danger of missing attacks. We present a scheme for the controlled and decentralized division of detector sets in which each IDS is assigned to a region of the feature space, and we investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, namely Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles and Ramanujan Expander Graphs, to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how the properties of these deterministic techniques help reduce the attack spread rate.
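The SIR model referred to above has a standard form that is easy to simulate. The sketch below, with purely illustrative parameters, integrates dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I, and shows how a smaller effective infection rate (which better detector coverage would aim to achieve) lowers the peak number of compromised nodes.

```python
def sir_peak(beta: float, gamma: float, n: int = 1000, i0: int = 1,
             dt: float = 0.1, steps: int = 2000) -> float:
    """Discrete-time SIR epidemic model; returns the peak number of
    infected (compromised) nodes over the simulated horizon."""
    s, i, r = float(n - i0), float(i0), 0.0
    peak = i
    for _ in range(steps):
        new_inf = beta * s * i / n * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

# A lower effective infection rate beta gives a lower infection peak.
for beta in (0.6, 0.4, 0.2):
    print(beta, round(sir_peak(beta, gamma=0.1), 1))
```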

Relevance: 100.00%

Abstract:

IEEE 802.11 based wireless local area networks (WLANs) are being increasingly deployed for soft real-time control applications. However, they do not provide quality-of-service (QoS) differentiation to meet the requirements of periodic real-time traffic flows, a distinguishing feature of real-time control systems. This problem becomes particularly evident when the network is congested. Addressing this problem, a medium access control (MAC) scheme, QoS-dif, is proposed in this paper to enable QoS differentiation in IEEE 802.11 networks for different types of periodic real-time traffic flows. It extends IEEE 802.11e Enhanced Distributed Channel Access (EDCA) by introducing a QoS differentiation method that deals with different types of periodic traffic having different QoS requirements in real-time control applications. The effectiveness of the proposed QoS-dif scheme is demonstrated through comparisons with the IEEE 802.11e EDCA mechanism.
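For context, the sketch below shows the baseline differentiation that EDCA already provides: each access category has its own AIFSN and contention-window bounds, so higher-priority traffic tends to draw shorter backoffs. The parameter values resemble commonly used defaults but are given here only for illustration; QoS-dif adds further differentiation for periodic flows on top of this.

```python
import random

# Typical EDCA access-category parameters (illustrative values).
EDCA = {
    "AC_VO": {"aifsn": 2, "cwmin": 3,  "cwmax": 7},
    "AC_VI": {"aifsn": 2, "cwmin": 7,  "cwmax": 15},
    "AC_BE": {"aifsn": 3, "cwmin": 15, "cwmax": 1023},
    "AC_BK": {"aifsn": 7, "cwmin": 15, "cwmax": 1023},
}

def backoff_slots(ac: str, retries: int = 0) -> int:
    """Draw a backoff (in slots) for an access category; the contention
    window doubles per retry up to cwmax (binary exponential backoff)."""
    p = EDCA[ac]
    cw = min((p["cwmin"] + 1) * (2 ** retries) - 1, p["cwmax"])
    return p["aifsn"] + random.randint(0, cw)

print(sum(backoff_slots("AC_VO") for _ in range(1000)) / 1000)  # small
print(sum(backoff_slots("AC_BE") for _ in range(1000)) / 1000)  # larger
```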

Relevance: 100.00%

Abstract:

The IEC 61850 family of standards for substation communication systems was released in the early 2000s and includes IEC 61850-8-1 and IEC 61850-9-2, which enable Ethernet to be used for process-level connections between transmission substation switchyards and control rooms. This paper presents an investigation of process bus protection performance, as the in-service behavior of multi-function process buses is largely unknown. An experimental approach was adopted that used a Real Time Digital Simulator and 'live' substation automation devices. The effect of sampling synchronization error and network traffic on transformer differential protection performance was assessed and compared to conventional hard-wired connections. Ethernet was used for all sampled value measurements, circuit breaker tripping, transformer tap-changer position reports and Precision Time Protocol synchronization of sampled value merging unit sampling. Test results showed that the protection relay under investigation operated correctly with process bus network traffic approaching 100% capacity. The protection system was not adversely affected by synchronizing errors significantly larger than the standards permit, suggesting these requirements may be overly conservative. This 'closed loop' approach, using substation automation hardware, validated the operation of protection relays under extreme conditions. Digital connections using a single shared Ethernet network outperformed conventional hard-wired solutions.

Relevance: 100.00%

Abstract:

We conducted a systematic review of the literature on telemedicine use in long-term care facilities (LTCFs) and assessed the quality of the published evidence. A database search identified 22 papers that met the inclusion criteria. The quality of the studies was assessed and, where they contained economic data, they were rated according to standard criteria. The clinical services provided by telemedicine included allied health (n = 5), dermatology (3), general practice (4), neurology (2), geriatrics (1), psychiatry (4) and multiple specialities (3). Most studies (17) employed real-time telemedicine using videoconferencing; the remaining five used store-and-forward telemedicine. The papers focused on economics (3), feasibility (9), stakeholder satisfaction (12), reliability (5) and service implementation (2). Overall, the quality of evidence for telemedicine in LTCFs was low. There was only one small randomised controlled trial (RCT). Most studies were observational and qualitative, focused on utilisation, and were mainly based on surveys and interviews of stakeholders. A few studies evaluated the costs associated with implementing telemedicine services in LTCFs. The present review shows that there is evidence for feasibility and stakeholder satisfaction in using telemedicine in LTCFs in a number of clinical specialities.

Relevance: 100.00%

Abstract:

Server consolidation using virtualization has become an important technique for improving the energy efficiency of data centers, and virtual machine placement is the key problem in server consolidation. In the past few years, many approaches to virtual machine placement have been proposed. However, existing approaches consider only the energy consumed by physical machines and neglect the energy consumed by the communication network in a data center. This network energy consumption is not trivial and should therefore be considered in virtual machine placement. In our preliminary research, we proposed a genetic algorithm for a new virtual machine placement problem that considers the energy consumption of both the physical machines and the communication network in a data center. Aiming to improve the performance and efficiency of that genetic algorithm, this paper presents a hybrid genetic algorithm for the energy-efficient virtual machine placement problem. Experimental results show that the hybrid genetic algorithm significantly outperforms the original genetic algorithm and that it is scalable.
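The combined objective can be pictured with a toy fitness function that a genetic algorithm (hybrid or otherwise) would minimize: a linear server power model plus a traffic-and-hop-proportional network term. Everything below (CPU demands, traffic matrix, power constants, topology) is a hypothetical instance for illustration, not data or the model from the paper.

```python
# Hypothetical instance: VM CPU demands and pairwise VM traffic.
vm_cpu = [0.3, 0.5, 0.2, 0.4]
traffic = {(0, 1): 5.0, (1, 2): 2.0, (2, 3): 4.0}     # Mb/s between VM pairs

P_IDLE, P_PEAK, PM_CAP = 100.0, 200.0, 1.0             # watts, CPU capacity
NET_ENERGY_PER_HOP = 0.5                                # watts per Mb/s-hop

def hops(pm_a: int, pm_b: int) -> int:
    """Toy topology: 0 hops on the same PM, 2 hops via an edge switch otherwise."""
    return 0 if pm_a == pm_b else 2

def energy(placement):
    """Total energy of a placement (placement[v] = PM hosting VM v):
    linear server power model plus traffic-proportional network energy."""
    load = {}
    for vm, pm in enumerate(placement):
        load[pm] = load.get(pm, 0.0) + vm_cpu[vm]
    server = sum(P_IDLE + (P_PEAK - P_IDLE) * min(u, PM_CAP) / PM_CAP
                 for u in load.values())
    network = sum(rate * hops(placement[a], placement[b]) * NET_ENERGY_PER_HOP
                  for (a, b), rate in traffic.items())
    return server + network

print(energy([0, 0, 1, 1]))   # co-locates chatty VM pairs 0-1 and 2-3
print(energy([0, 1, 0, 1]))   # splits them; higher network energy
```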

Relevance: 100.00%

Abstract:

Wireless networked control systems (WNCSs) have been increasingly deployed in industrial applications. As they require timely data packet transmissions, it is difficult to make efficient use of the limited channel resources, particularly in contention-based wireless networks within the layered network architecture. Aiming to keep WNCSs at the critical real-time traffic condition, at which they marginally meet the real-time requirements, a cross-layer design (CLD) approach is presented in this paper to adaptively adjust the control period, achieving improved channel utilization while still maintaining effective and timely packet transmissions. The effectiveness of the proposed approach is demonstrated through simulation studies.
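A minimal sketch of such an adaptation loop is shown below: the control period is lengthened when the observed loss or deadline-miss ratio exceeds a target, and shortened when there is headroom. The gains, bounds and target value are assumptions for illustration only, not the tuning used in the paper.

```python
def adapt_period(period_ms: float, loss_ratio: float,
                 target_loss: float = 0.01,
                 p_min: float = 10.0, p_max: float = 200.0) -> float:
    """Illustrative cross-layer rule: back off (fewer packets per second)
    when losses exceed the target, tighten when there is headroom, keeping
    the network near the critical real-time traffic condition."""
    if loss_ratio > target_loss:
        period_ms *= 1.2
    else:
        period_ms *= 0.95
    return min(max(period_ms, p_min), p_max)

period = 50.0
for loss in (0.0, 0.0, 0.05, 0.08, 0.0):
    period = adapt_period(period, loss)
    print(round(period, 1))
```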

Relevance: 100.00%

Abstract:

Precise clock synchronization is essential in emerging time-critical distributed control systems operating over computer networks, where the requirements mostly concern relative clock synchronization and high synchronization precision. Existing clock synchronization techniques such as the Network Time Protocol (NTP) and the IEEE 1588 standard can be difficult to apply to such systems because of the highly precise hardware clocks they require, the network congestion caused by frequent synchronization message transmissions, and their high overheads. In response, we present a Time Stamp Counter based precise Relative Clock Synchronization Protocol (TSC-RCSP) for distributed control applications operating over local-area networks (LANs). In our protocol, a software clock based on the TSC register, which counts CPU cycles, is adopted in the time clients and server. TSC-based clocks offer clients a precise, stable and low-cost clock synchronization solution. Experimental results show that clock precision on the order of 10 microseconds can be achieved in small-scale LAN systems. Such clock precision is much higher than that of a processor's Time-Of-Day clock, and is easily sufficient for most distributed real-time control applications over LANs.
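The offset estimation underlying such client-server synchronization can be sketched with the classic four-timestamp exchange. Below, time.perf_counter_ns stands in for a TSC-derived software clock, and the message exchange is simulated in-process; the actual TSC-RCSP message formats and filtering are not reproduced.

```python
import time

def tsc_clock_ns() -> int:
    """Stand-in for a TSC-derived software clock: a monotonic counter with
    sub-microsecond resolution (the real protocol scales CPU cycles to time)."""
    return time.perf_counter_ns()

def estimate_offset(t1: int, t2: int, t3: int, t4: int) -> float:
    """Two-way exchange: client sends at t1, server receives at t2 and
    replies at t3, client receives at t4. Assuming a symmetric path, the
    client-to-server clock offset is ((t2 - t1) + (t3 - t4)) / 2."""
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Simulated exchange: the "server" clock runs 1500 ns ahead of the client.
TRUE_OFFSET = 1500
t1 = tsc_clock_ns()
t2 = tsc_clock_ns() + TRUE_OFFSET        # server receive timestamp
t3 = tsc_clock_ns() + TRUE_OFFSET        # server transmit timestamp
t4 = tsc_clock_ns()
print(estimate_offset(t1, t2, t3, t4))   # close to 1500 ns
```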

Relevance: 100.00%

Abstract:

As one of the most widely used wireless network technologies, IEEE 802.11 wireless local area networks (WLANs) have found a dramatically increasing number of applications in soft real-time networked control systems (NCSs). To fulfill the real-time requirements in such NCSs, most of the wireless network bandwidth needs to be allocated to high-priority data for periodic measurements and control with deadline requirements. However, existing QoS-enabled 802.11 medium access control (MAC) protocols do not consider these deadline requirements explicitly, leading to unpredictable deadline performance of NCS networks. Consequently, the soft real-time requirements of the periodic traffic may not be satisfied, particularly under congested network conditions. This paper makes two main contributions to address this problem in wireless NCSs. Firstly, a deadline-constrained MAC protocol with QoS differentiation is presented for IEEE 802.11 soft real-time NCSs. It handles periodic traffic through two specific mechanisms: a contention-sensitive backoff mechanism and an intra-traffic-class QoS differentiation mechanism. Secondly, a theoretical model is established to describe the deadline-constrained MAC protocol and to evaluate its throughput, delay and packet-loss performance in wireless NCSs. Numerical studies are conducted to validate the accuracy of the theoretical model and to demonstrate the effectiveness of the new MAC protocol.
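One simple way to picture intra-class, deadline-aware differentiation (not the paper's exact backoff rules) is to let a packet's remaining slack shrink the contention window it draws from, so urgent periodic packets tend to win channel access earlier. All parameter values below are illustrative assumptions.

```python
import random

def deadline_backoff(slack_ms: float, period_ms: float,
                     cw_min: int = 7, cw_max: int = 63) -> int:
    """Illustrative deadline-aware backoff: the less slack a periodic packet
    has before its deadline, the smaller the contention window it uses."""
    urgency = max(0.0, min(1.0, slack_ms / period_ms))   # 0 means due now
    cw = int(cw_min + (cw_max - cw_min) * urgency)
    return random.randint(0, cw)

for slack in (40.0, 20.0, 5.0, 1.0):      # packets with a 50 ms period
    draws = [deadline_backoff(slack, 50.0) for _ in range(2000)]
    print(slack, round(sum(draws) / len(draws), 1))      # shrinks with slack
```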

Relevance: 100.00%

Abstract:

This paper addresses an output feedback control problem for a class of networked control systems (NCSs) with a stochastic communication protocol. Under the scenario in which only one sensor is allowed to obtain communication access at each transmission instant, a stochastic communication protocol is first defined, where the communication access is modelled by a discrete-time Markov chain with partly unknown transition probabilities. Secondly, using a network-based output feedback control strategy and a time-delay division method, the closed-loop system is modelled as a stochastic system with multiple time-varying delays, where the inherent characteristics of the network delay are exploited to improve the control performance. Then, based on this stochastic model, two sufficient conditions are derived to ensure the mean-square stability and stabilization of the system under consideration. Finally, two examples are given to show the effectiveness of the proposed method.
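To illustrate the protocol's access pattern, the sketch below simulates which of two sensors holds the channel at each transmission instant under a discrete-time Markov chain and reports the long-run access shares. The transition matrix is made up for illustration (the paper deliberately treats some transition probabilities as unknown, which is not modelled here).

```python
import random

# Illustrative transition matrix over two sensors: row i gives the
# probabilities that access moves from sensor i to each sensor next.
P = [[0.7, 0.3],
     [0.4, 0.6]]

def simulate_access(p, steps: int = 10_000, start: int = 0):
    """Exactly one sensor transmits per instant, chosen by a Markov chain;
    returns each sensor's empirical share of channel access."""
    counts = [0] * len(p)
    state = start
    for _ in range(steps):
        counts[state] += 1
        state = random.choices(range(len(p)), weights=p[state])[0]
    return [c / steps for c in counts]

print(simulate_access(P))   # long-run access shares, roughly [0.57, 0.43]
```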

Relevance: 100.00%

Abstract:

Generating discriminative input features is a key requirement for building highly accurate classifiers. The process of generating features from raw data is known as feature engineering, and it can take significant manual effort. In this paper we propose automated feature engineering to derive a suite of additional features from a given set of basic features, with the aim of both improving classifier accuracy through discriminative features and assisting data scientists through automation. Our implementation is specific to HTTP computer network traffic. To measure the effectiveness of our proposal, we compare the performance of a supervised machine learning classifier built with automated feature engineering against one using human-guided features. The classifier addresses a problem in computer network security, namely the detection of HTTP tunnels. We use Bro to process network traffic into base features and then apply automated feature engineering to calculate a larger set of derived features. The derived features are calculated without favour to any base feature and include entropy, length and N-grams for all string features, and counts and averages over time for all numeric features. Feature selection is then used to find the most relevant subset of these features. Testing showed that both classifiers achieved a detection rate above 99.93% at a false positive rate below 0.01%. For our datasets, we conclude that automated feature engineering can increase classifier development speed and reduce development difficulty by removing manual feature engineering, while maintaining classification accuracy.
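A minimal sketch of the string-feature derivation described above: length, character entropy and n-gram counts are computed from one base string feature (for example, an HTTP URI field produced by Bro). The feature names and the example value are hypothetical, and feature selection would subsequently prune the derived set.

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Shannon entropy of the character distribution (bits per character)."""
    if not s:
        return 0.0
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def derive_string_features(name: str, value: str, n: int = 2) -> dict:
    """Derive length, entropy and n-gram features from one base string feature."""
    grams = Counter(value[i:i + n] for i in range(len(value) - n + 1))
    return {
        f"{name}_len": len(value),
        f"{name}_entropy": round(shannon_entropy(value), 3),
        f"{name}_top_{n}grams": grams.most_common(3),
    }

print(derive_string_features("uri", "/index.php?id=aGVsbG8gd29ybGQ="))
```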

Relevance: 100.00%

Abstract:

Dispersing a data object into a set of data shares is a fundamental step in distributed communication and storage systems. In comparison to data replication, data dispersal with redundancy saves space and bandwidth. Moreover, dispersing a data object across distinct communication links or storage sites limits adversarial access to the whole data and tolerates the loss of some data shares. Existing data dispersal schemes are mostly based on various mathematical transformations of the data, which induce high computation overhead. This paper presents a novel data dispersal scheme in which each part of a data object is replicated, without encoding, into a subset of data shares according to combinatorial design theory. In particular, data parts are mapped to points and data shares to lines of a projective plane. Data parts are then distributed to data shares using the point-line incidence relations of the plane, so that certain subsets of data shares collectively possess all data parts. The presented scheme combines combinatorial design theory with an inseparability transformation to achieve secure data dispersal at reduced computation, communication and storage costs. Rigorous formal analysis and an experimental study demonstrate significant cost benefits of the presented scheme in comparison to existing methods.
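The point-line mapping can be illustrated with the smallest projective plane, the Fano plane (order 2: 7 points, 7 lines, 3 points per line). In the sketch below, share j simply replicates the parts lying on line j, and the three lines through one point already cover all parts. The inseparability transformation applied before dispersal is not shown, and the data values are made up.

```python
# Fano plane: 7 points, 7 lines, 3 points per line; any two lines meet
# in exactly one point and any two points lie on exactly one line.
FANO_LINES = [
    (0, 1, 2), (0, 3, 4), (0, 5, 6),
    (1, 3, 5), (1, 4, 6), (2, 3, 6), (2, 4, 5),
]

def disperse(parts):
    """Map data parts to points and shares to lines: share j replicates,
    without encoding, the three parts lying on line j."""
    assert len(parts) == 7
    return [[parts[p] for p in line] for line in FANO_LINES]

parts = [bytes([i]) * 4 for i in range(7)]   # 7 illustrative data parts
shares = disperse(parts)

# The three shares whose lines pass through point 0 cover all 7 parts.
covered = {p for j in (0, 1, 2) for p in FANO_LINES[j]}
print(sorted(covered))   # [0, 1, 2, 3, 4, 5, 6]
```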

Relevance: 100.00%

Abstract:

Mobile applications are being deployed on a massive scale in various mobile sensor grid database systems. Given the limited resources of mobile devices, processing the huge number of queries that mobile users issue against distributed sensor grid databases becomes a critical problem for such systems. While the fundamental semantic cache technique has been investigated for query optimization in sensor grid database systems, the problem remains difficult because existing methods do not consider more realistic multi-dimensional constraints. To solve this problem, a new semantic cache scheme is presented in this paper for location-dependent data queries in distributed sensor grid database systems. It considers multi-dimensional constraints or factors in a unified cost model architecture, determines the parameters of the cost model using the concept of Nash equilibrium from game theory, and makes semantic cache decisions from the established cost model. The scenarios of three factors, semantics, time and location, are investigated as special cases that improve on existing methods. Experiments are conducted to demonstrate the effectiveness of the semantic cache scheme presented in this paper for distributed sensor grid database systems.
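As a rough illustration of a unified cost model over the three factors, the sketch below scores cached segments for a location-dependent query using a weighted combination of semantic overlap, freshness and spatial proximity. In the paper the weights come from a Nash-equilibrium analysis; here they are fixed illustrative values, and all data structures are hypothetical.

```python
import math

def segment_score(query, segment, w_sem=0.5, w_time=0.3, w_loc=0.2):
    """Unified cost-model score of a cached segment for a location-dependent
    query: weighted semantic overlap, freshness and spatial proximity."""
    sem = len(query["attrs"] & segment["attrs"]) / max(len(query["attrs"]), 1)
    time_f = math.exp(-segment["age_s"] / 300.0)          # decays over ~5 min
    loc_f = 1.0 / (1.0 + math.dist(query["loc"], segment["loc"]))
    return w_sem * sem + w_time * time_f + w_loc * loc_f

query = {"attrs": {"temp", "humidity"}, "loc": (2.0, 3.0)}
cached = [
    {"attrs": {"temp"}, "age_s": 60, "loc": (2.5, 3.0)},
    {"attrs": {"temp", "humidity"}, "age_s": 600, "loc": (8.0, 9.0)},
]
best = max(cached, key=lambda s: segment_score(query, s))
print(best["attrs"])   # the fresher, nearer segment wins despite partial overlap
```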