886 results for Computer Networks
Abstract:
Stealthy attackers move patiently through computer networks, taking days, weeks or months to accomplish their objectives in order to avoid detection. As networks scale up in size and speed, monitoring for such attack attempts is increasingly challenging. This paper presents an efficient monitoring technique for stealthy attacks. It investigates the feasibility of the proposed method under a number of different test cases and examines how the design of the network affects detection. A methodological way of tracing anonymous stealthy activities to their approximate sources is also presented. Bayesian fusion along with traffic sampling is employed as a data reduction method. The proposed method is able to monitor stealthy activities using sampling rates of 10-20% without degrading the quality of detection.
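The abstract pairs traffic sampling with Bayesian fusion as a data reduction step but gives no further detail; the sketch below is a minimal, hypothetical illustration of that combination (the sampling rate, priors, and likelihoods are invented, not taken from the paper).

```python
import random

# Hypothetical sketch: uniform packet sampling followed by a naive Bayesian
# update of a per-source "stealthy attacker" belief. The priors, likelihoods
# and the 15% rate are illustrative only; the paper's actual fusion model
# is not specified in the abstract.
SAMPLE_RATE = 0.15                 # within the 10-20% range mentioned
P_PRIOR = 0.01                     # prior belief that a source is malicious
P_EVT_GIVEN_MAL = 0.30             # P(suspicious event | malicious source)
P_EVT_GIVEN_BEN = 0.02             # P(suspicious event | benign source)

def sample(packets):
    """Keep roughly SAMPLE_RATE of the traffic (data reduction step)."""
    return [p for p in packets if random.random() < SAMPLE_RATE]

def fuse(belief, suspicious):
    """One Bayesian update of the malicious-source belief."""
    like_mal = P_EVT_GIVEN_MAL if suspicious else 1 - P_EVT_GIVEN_MAL
    like_ben = P_EVT_GIVEN_BEN if suspicious else 1 - P_EVT_GIVEN_BEN
    num = like_mal * belief
    return num / (num + like_ben * (1 - belief))

def score_source(packets, is_suspicious):
    """Return the posterior belief that a source is malicious after sampling."""
    belief = P_PRIOR
    for pkt in sample(packets):
        belief = fuse(belief, is_suspicious(pkt))
    return belief
```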
Abstract:
The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating traffic metadata from real applications in the emulation system to reproduce realistic traffic conditions. On the other hand, the emulation system benefits from receiving continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
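The time-dilated synchronization mentioned above can be illustrated with a simple clock-scaling sketch; the dilation factor and helper names below are assumptions for illustration and do not reflect SVEET's actual implementation.

```python
# Hypothetical illustration of time dilation: with a time-dilation factor
# (TDF) of 10, every 10 s of wall-clock time on a real host corresponds to
# 1 s of virtual experiment time, giving the simulator ten times more
# wall-clock time to keep up with each virtual second. The factor and the
# helper names are assumptions, not SVEET's API.
TDF = 10.0

def to_virtual(real_elapsed: float) -> float:
    """Virtual seconds perceived by a dilated host after `real_elapsed` wall-clock seconds."""
    return real_elapsed / TDF

def to_real(virtual_elapsed: float) -> float:
    """Wall-clock budget available to simulate `virtual_elapsed` virtual seconds."""
    return virtual_elapsed * TDF
```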
Abstract:
This paper presents research that is being conducted by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) with the aim of investigating the use of wireless sensor networks for automated livestock monitoring and control. It is difficult to achieve practical and reliable cattle monitoring with current conventional technologies due to challenges such as the large grazing areas of cattle, long data-sampling periods, and constantly varying physical environments. Wireless sensor networks bring a new level of possibilities into this area with the potential for greatly increased spatial and temporal resolution of measurement data. CSIRO has created a wireless sensor platform for animal behaviour monitoring with which we are able to observe and collect information about animals without significantly interfering with them. Based on such monitoring information, we can successfully identify each animal's behaviour and activities.
Abstract:
Wireless network technologies, such as IEEE 802.11 based wireless local area networks (WLANs), have been adopted in wireless networked control systems (WNCS) for real-time applications. Distributed real-time control requires satisfaction of (soft) real-time performance from the underlying networks for delivery of real-time traffic. However, IEEE 802.11 networks are not designed for WNCS applications. They neither inherently provide quality-of-service (QoS) support, nor explicitly consider the characteristics of the real-time traffic on networked control systems (NCS), i.e., periodic round-trip traffic. Therefore, the adoption of 802.11 networks in real-time WNCSs causes challenging problems for network design and performance analysis. Theoretical methodologies are yet to be developed for computing the best achievable WNCS network performance under the constraints of real-time control requirements. Focusing on IEEE 802.11 distributed coordination function (DCF) based WNCSs, this paper analyses several important NCS network performance indices, such as throughput capacity, round trip time and packet loss ratio under the periodic round trip traffic pattern, a unique feature of typical NCSs. Considering periodic round trip traffic, an analytical model based on Markov chain theory is developed for deriving these performance indices under a critical real-time traffic condition, at which the real-time performance constraints are marginally satisfied. Case studies are also carried out to validate the theoretical development.
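The abstract does not detail its Markov-chain model for periodic round-trip traffic; as a generic illustration of this style of 802.11 DCF analysis, the sketch below solves the classic Bianchi saturation fixed point, which is a different, well-known model, with typical rather than paper-specific parameters.

```python
# Illustrative only: the classic Bianchi saturation model for 802.11 DCF,
# solved by damped fixed-point iteration. The paper derives its own Markov
# model for periodic round-trip NCS traffic; this sketch just shows the
# general flavour of such analyses. W and m are typical values, not taken
# from the paper.
def bianchi_fixed_point(n_stations: int, w: int = 32, m: int = 5,
                        iters: int = 500) -> tuple[float, float]:
    """Return (tau, p): per-slot transmission and conditional collision probabilities."""
    tau = 0.05  # initial guess
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n_stations - 1)
        tau_new = (2.0 * (1.0 - 2.0 * p)) / (
            (1.0 - 2.0 * p) * (w + 1) + p * w * (1.0 - (2.0 * p) ** m)
        )
        tau = 0.5 * tau + 0.5 * tau_new   # damping for robust convergence
    return tau, p

for n in (5, 10, 20):
    tau, p = bianchi_fixed_point(n)
    print(f"n={n:2d}  tau={tau:.4f}  collision p={p:.4f}")
```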
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A^3 √((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
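For readability, the bound described in the abstract can be written out as follows; constants and the log A, log m factors are suppressed, as in the abstract, and the notation for the training-set error estimate is ours rather than the paper's.

```latex
% Form of the bound as paraphrased from the abstract: a two-layer sigmoid
% network with per-unit weight magnitudes summing to at most A, input
% dimension n, and m training patterns.
\[
  \Pr\left[\text{misclassification}\right]
  \;\le\;
  \widehat{\mathrm{err}} \;+\; O\!\left(A^{3}\sqrt{\frac{\log n}{m}}\right),
\]
% where \widehat{\mathrm{err}} is the error estimate related to the squared
% error on the training set.
```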
Abstract:
This paper presents the capability of neural networks as a computational tool for solving constrained optimization problems arising in routing algorithms for present-day communication networks. The application of neural networks to the optimum routing problem in packet-switched computer networks, where the goal is to minimize the average delay in the communication, is addressed. The effectiveness of the neural network is shown by the results of simulation of a neural design to solve the shortest path problem. The neural network simulation model is shown to be usable in an optimum routing algorithm known as the flow deviation algorithm. It is also shown that the model enables the routing algorithm to be implemented in real time and to be adaptive to changes in link costs and network topology. (C) 2002 Elsevier Science Ltd. All rights reserved.
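The flow deviation algorithm mentioned above repeatedly routes traffic along shortest paths computed over marginal-delay link weights; the sketch below shows that inner step with a conventional Dijkstra solver and a made-up toy topology, purely as an illustration (the paper replaces this step with a neural model).

```python
import heapq

# Illustration only: the shortest-path step of the flow deviation algorithm,
# computed here with Dijkstra instead of the paper's neural model. Each link
# weight is its marginal M/M/1 delay d/df [f/(C-f)] = C/(C-f)^2, with made-up
# capacities C and current flows f.
def marginal_delay(capacity: float, flow: float) -> float:
    return capacity / (capacity - flow) ** 2

def dijkstra(links: dict, src: str, dst: str) -> list[str]:
    """links: {node: [(neighbor, weight), ...]} -> shortest path from src to dst."""
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in links.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Toy topology: the weight of each link is its marginal delay at the current flow.
links = {
    "A": [("B", marginal_delay(10, 4)), ("C", marginal_delay(8, 1))],
    "B": [("D", marginal_delay(10, 6))],
    "C": [("D", marginal_delay(8, 2))],
}
print(dijkstra(links, "A", "D"))   # ['A', 'C', 'D'] for these made-up values
```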
Abstract:
We address the problem of passive eavesdroppers in multi-hop wireless networks using the technique of friendly jamming. The network is assumed to employ Decode and Forward (DF) relaying. Assuming the availability of perfect channel state information (CSI) of legitimate nodes and eavesdroppers, we consider a scheduling and power allocation (PA) problem for a multiple-source multiple-sink scenario so that eavesdroppers are jammed, and source-destination throughput targets are met while minimizing the overall transmitted power. We propose activation sets (AS-es) for scheduling, and formulate an optimization problem for PA. Several methods for finding AS-es are discussed and compared. We present an approximate linear program for the original nonlinear, non-convex PA optimization problem, and argue that under certain conditions, both the formulations produce identical results. In the absence of eavesdroppers' CSI, we utilize the notion of Vulnerability Region (VR), and formulate an optimization problem with the objective of minimizing the VR. Our results show that the proposed solution can achieve power-efficient operation while defeating eavesdroppers and achieving desired source-destination throughputs simultaneously. (C) 2015 Elsevier B.V. All rights reserved.
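The abstract's joint scheduling and power allocation formulation is not spelled out; as a heavily simplified illustration of how an SINR-constrained minimum-power allocation can become a linear program, the toy below uses one source, one friendly jammer, one destination, and one eavesdropper with invented channel gains. This is not the paper's multi-source formulation or its approximate LP.

```python
from scipy.optimize import linprog

# Toy, made-up instance (not the paper's formulation): one source, one friendly
# jammer, one destination, one eavesdropper. After cross-multiplying the SINR
# ratios (denominators are positive), both constraints are linear in the powers,
# so the minimum-total-power allocation becomes a small LP.
N = 1.0                      # noise power
g_sd, g_jd = 2.0, 0.1        # source->destination, jammer->destination gains
g_se, g_je = 0.8, 1.5        # source->eavesdropper, jammer->eavesdropper gains
gamma_d = 3.0                # required SINR at the destination
gamma_e = 0.5                # maximum tolerable SINR at the eavesdropper

c = [1.0, 1.0]               # minimize p_s + p_j
A_ub = [
    [-g_sd,  gamma_d * g_jd],   # destination SINR >= gamma_d (sign-flipped to <=)
    [ g_se, -gamma_e * g_je],   # eavesdropper SINR <= gamma_e
]
b_ub = [-gamma_d * N, gamma_e * N]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x)                 # [p_s, p_j] if the instance is feasible
```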
Abstract:
This letter exposes a serious unfairness problem with IEEE 802.11 MAC based Mobile Ad-hoc Networks (MANETs) when operating TCP connections, and identifies the three common factors that contribute to this problem. The work initiated the development of a programmable wireless framework that was subsequently used in a spin-out company (TOM) and by the Telecoms Technology Testing Centre in Taiwan (Dr D. Chieng).
Abstract:
The paper describes the design and analysis of a packet scheduler intended to operate over wireless channels with spatially selective error bursts. A particularly innovative aspect in the design is the optimization of the scheduler algorithm to minimize the worst-case fairness index (WFI) for real-time IP traffic.
Abstract:
Key pre-distribution schemes have been proposed as a means to overcome Wireless Sensor Network constraints such as limited communication and processing power. Two sensor nodes can establish a secure link with some probability based on the information stored in their memories, though it is not always possible for two sensor nodes to set up a secure link. In this paper, we propose a new approach that elects trusted common nodes, called "Proxies", which reside on an existing secure path linking two sensor nodes. These proxy nodes are used to send the generated key, which is divided into parts (nuggets) according to the number of elected proxies. Our approach has been assessed against previously developed algorithms, and the results show that our algorithm discovers proxies more quickly and finds proxies that are closer to both end nodes, thus producing shorter path lengths. We have also assessed the impact of our algorithm on the average time to establish a secure link when the transmitter and receiver of the sensor nodes are "ON". The results show the superiority of our algorithm in this regard. Overall, the proposed algorithm is well suited for Wireless Sensor Networks.
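The abstract says the generated key is divided into parts ("nuggets") matching the number of elected proxies but does not specify the splitting scheme; a common choice for this kind of key transport is XOR-based secret splitting, sketched below (the proxy election and routing are not shown, and the paper may use a different scheme).

```python
import os
from functools import reduce

# Hypothetical sketch of XOR-based secret splitting for the key "nuggets":
# every nugget except the last is random, and the last is the XOR of the key
# with all random nuggets, so all nuggets are needed to rebuild the key.
# The abstract does not state that this is the scheme actually used.
def split_key(key: bytes, num_proxies: int) -> list[bytes]:
    nuggets = [os.urandom(len(key)) for _ in range(num_proxies - 1)]
    last = bytes(b ^ reduce(lambda x, y: x ^ y, rest)
                 for b, *rest in zip(key, *nuggets))
    return nuggets + [last]

def rebuild_key(nuggets: list[bytes]) -> bytes:
    return bytes(reduce(lambda x, y: x ^ y, group) for group in zip(*nuggets))

key = os.urandom(16)
parts = split_key(key, num_proxies=3)   # one nugget per elected proxy
assert rebuild_key(parts) == key
```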
Abstract:
Quality of Service (QoS) support in IEEE 802.11-based ad hoc networks relies on the networks' ability to estimate the available bandwidth on a given link. However, no mechanism has been standardized to accurately evaluate this resource, and this remains one of the main open research issues in the field. This paper proposes an available bandwidth estimation approach which achieves more accurate estimation than existing work. The proposed approach differentiates channel busy time caused by transmitting or receiving from that caused by carrier sensing, and thus improves the accuracy of estimating the overlap probability of two adjacent nodes' idle time. Simulation results confirm the improvement of this approach when compared with well-known bandwidth estimation methods in the literature.
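The estimation problem above builds on the overlap of two adjacent nodes' idle periods; a common simplification in this line of work, assuming the sender's and receiver's idle periods are independent, is sketched below with made-up numbers. It is not the paper's refined estimator, which corrects exactly this kind of assumption.

```python
# Simplified illustration (not the paper's estimator): if the sender and
# receiver of a link are idle for fractions t_s and t_r of a measurement
# window, and their idle periods are assumed independent, the fraction of
# time both are simultaneously idle is roughly t_s * t_r, which bounds the
# bandwidth available on the link. The paper improves on this overlap
# estimate by separating tx/rx busy time from carrier-sensing busy time.
def available_bandwidth(idle_frac_sender: float,
                        idle_frac_receiver: float,
                        channel_capacity_mbps: float) -> float:
    overlap = idle_frac_sender * idle_frac_receiver   # independence assumption
    return overlap * channel_capacity_mbps

# Made-up example: both nodes idle 60% of the time on an 11 Mb/s channel.
print(available_bandwidth(0.6, 0.6, 11.0))  # ~3.96 Mb/s upper estimate
```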
Abstract:
A combined antennas and propagation study has been undertaken with a view to directly improving link conditions for wireless body area networks. Using tissue-equivalent numerical and experimental phantoms representative of muscle tissue at 2.45 GHz, we show that the node-to-node |S21| path gain performance of a new wearable integrated antenna (WIA) is up to 9 dB better than that of a conventional compact printed-F antenna, both of which are suitable for integration with wireless node circuitry. Overall, the WIA performed extremely well, with a measured radiation efficiency of 38% and an impedance bandwidth of 24%. Further benefits were also obtained using spatial diversity, with the WIA providing up to 7.7 dB of diversity gain for maximal ratio combining. The results also show that correlation was lower for a multipath environment, leading to higher diversity gain. Furthermore, a diversity implementation with the new antenna gave up to 18 dB better performance in terms of mean power level, and there was a significant improvement in level crossing rates and average fade durations when moving from a single-branch to a two-branch diversity system.
Abstract:
A new configurable architecture is presented that offers multiple levels of video playback by accommodating variable levels of network utilization and bandwidth. By utilizing scalable MPEG-4 encoding at the network edge and using specific video delivery protocols, media streaming components are merged to fully optimize video playback for IPv6 networks, thus improving QoS. This is achieved by introducing "programmable network functionality" (PNF), which splits layered video transmission and distributes it evenly over the available bandwidth, reducing packet loss and delay caused by out-of-profile DiffServ classes. An FPGA design is presented that improves performance metrics such as link utilization and end-to-end delay and that, during congestion, improves on-time delivery of video frames by up to 80% compared to current "static" DiffServ.