886 results for Computer networks.
Abstract:
This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application. A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating dominating codes are more appropriate. This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved on the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs; these are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
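For readers unfamiliar with the terminology, a max-min linear program of the kind referred to above can be written generically as follows; this is an illustrative standard form, not necessarily the exact notation used in the thesis:

```latex
\begin{aligned}
\text{maximise}\quad & \omega \\
\text{subject to}\quad & \mathbf{c}_k^{\mathsf T}\mathbf{x} \;\ge\; \omega \qquad \text{for each objective } k, \\
& \mathbf{A}\mathbf{x} \;\le\; \mathbf{1}, \qquad \mathbf{x} \;\ge\; \mathbf{0},
\end{aligned}
```

with nonnegative coefficients. In the lifetime-maximisation application, the variables x describe data flows, the rows of A encode per-node battery budgets, and each c_k measures the utility delivered on behalf of one objective (e.g. one data source).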
Abstract:
The mobile phone has, as a device, taken the world by storm in the past decade; from only 136 million phones globally in 1996, it is now estimated that by the end of 2008 roughly half of the world's population will own a mobile phone. Over the years, the capabilities of the phones as well as the networks have increased tremendously, reaching the point where the devices are better called miniature computers rather than simply mobile phones. The mobile industry is currently undertaking several initiatives to develop new generations of mobile network technologies, technologies that to a large extent focus on offering ever-increasing data rates. This thesis seeks to answer the question of whether the future mobile networks in development and the future mobile services are in sync; taking a forward-looking timeframe of five to eight years into the future, will there be services that will need the high-performance new networks being planned? The question is especially pertinent in light of the slower-than-expected takeoff of 3G data services. Current and future mobile services are analyzed from two viewpoints: first, by looking at the gradual, evolutionary development of the services, and second, by seeking to identify potential revolutionary new mobile services. With information on both current and future mobile networks as well as services, a mapping of network capabilities to service requirements is performed to identify which services will work in which networks. Based on the analysis, it is far from certain whether the new mobile networks, especially those planned for deployment after HSPA, will be needed as soon as current roadmaps suggest. The true service-based demand for the "beyond HSPA" technologies may be many years into the future - or, indeed, may never materialize, owing to the increasing deployment of local area wireless broadband technologies.
Abstract:
This paper presents a flexible and integrated planning tool for active distribution networks to maximise the benefits of high levels of renewables, customer engagement, and new technology implementations. The tool has two main processing parts: “optimisation” and “forecast”. The “optimisation” part is an automated and integrated planning framework to optimise the net present value (NPV) of an investment strategy for electric distribution network augmentation over large areas and long planning horizons (e.g. 5 to 20 years), based on a modified particle swarm optimisation (MPSO). The “forecast” part is a flexible agent-based framework to produce load duration curves (LDCs) of load forecasts for different levels of customer engagement, energy storage controls, and electric vehicles (EVs). In addition, the “forecast” part connects a utility's existing databases to the proposed tool and outputs the load profiles and network plan in Google Earth. This integrated tool enables different divisions within a utility to analyse their programs and options in a single platform using comprehensive information.
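To make the “optimisation” part concrete, the sketch below shows a plain particle swarm optimisation loop maximising the NPV of a candidate augmentation plan. It is a minimal illustration only: the fitness function `npv_of_plan`, the parameter values, and the variable bounds are placeholder assumptions, not the modified PSO (MPSO) or the network studies described in the paper.

```python
import numpy as np

def npv_of_plan(plan, rate=0.06, horizon=20):
    """Placeholder NPV: discounted benefits minus upfront cost of a plan vector.
    A real planning tool would run a network study for each candidate plan."""
    cost = plan.sum()                            # upfront investment proxy
    yearly_benefit = 0.3 * np.sqrt(plan).sum()   # toy diminishing-returns benefit
    discount = sum(1.0 / (1 + rate) ** t for t in range(1, horizon + 1))
    return yearly_benefit * discount - cost

def pso_maximise(fitness, dim, n_particles=30, iters=200, lo=0.0, hi=10.0):
    """Basic (unmodified) particle swarm optimisation for a maximisation problem."""
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    gbest = pbest[pbest_f.argmax()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social weights
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()    # best plan found so far
    return gbest, pbest_f.max()

best_plan, best_npv = pso_maximise(npv_of_plan, dim=5)
print(best_plan, best_npv)
```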
Abstract:
An adaptive drug delivery design is presented in this paper using neural networks for effective treatment of infectious diseases. The generic mathematical model used describes the coupled evolution of the concentrations of pathogens, plasma cells, and antibodies, together with a numerical value that indicates the relative damage to an organ caused by the disease, all under the influence of external drugs. From a system-theoretic point of view, the external drugs can be interpreted as control inputs, which can be designed based on control-theoretic concepts. In this study, assuming a set of nominal parameters in the mathematical model, first a nonlinear controller (drug administration) is designed based on the principle of dynamic inversion. This nominal drug administration plan was found to be effective in curing "nominal model patients" (patients whose immunological dynamics conform exactly to the mathematical model used for the control design). However, it was found to be ineffective in general in curing "realistic model patients" (patients whose immunological dynamics may have off-nominal parameter values and possibly unwanted inputs). Hence, to make the drug delivery dosage design more effective for realistic model patients, a model-following adaptive control design is carried out next with the help of neural networks that are trained online. Simulation studies indicate that the adaptive controller proposed in this paper holds promise in killing the invading pathogens and healing the damaged organ even in the presence of parameter uncertainties and continued pathogen attack. Note that the computational requirements for computing the control are minimal, and all associated computations (including the training of neural networks) can be carried out online. However, the approach assumes that the required diagnosis can be carried out at a sufficiently fast rate so that all the states are available for control computation.
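The nominal design step can be summarised by the standard dynamic-inversion control law. For an affine-in-control model \(\dot{x} = f(x) + g(x)u\) (generic notation, not taken from the paper, and assuming the input matrix g(x) is invertible), the dosage u is chosen so that the tracking error decays with prescribed first-order dynamics:

```latex
u \;=\; \big[g(x)\big]^{-1}\Big(\dot{x}_d + K\,(x_d - x) - f(x)\Big),
```

where x_d is the desired (healthy) state trajectory and K > 0 sets the convergence rate; substituting u gives \(\dot{e} = -Ke\) for the error \(e = x_d - x\). The online neural networks then adapt this law to compensate for the mismatch between the nominal f, g and the realistic patient dynamics.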
Abstract:
We consider a single-hop data-gathering sensor network, consisting of a set of sensor nodes that transmit data periodically to a base-station. We are interested in maximizing the lifetime of this network. With our definition of network lifetime, and under the assumption that radio transmission energy forms the most significant portion of the total energy consumption at a sensor node, we attempt to enhance the network lifetime by reducing the transmission energy budget of the sensor nodes, exploiting three system-level opportunities. We pose the problem of maximizing lifetime as a max-min optimization problem subject to the constraints of successful data collection and a limited energy supply at each node. This turns out to be an extremely difficult optimization to solve. To reduce the complexity of this problem, we allow the sensor nodes and the base-station to interactively communicate with each other and employ instantaneous decoding at the base-station. The chief contribution of the paper is to show that the computational complexity of our problem is determined by the complex interplay of various system-level opportunities and challenges.
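As a rough illustration of the objective (our notation, not necessarily the paper's), the lifetime of such a network is governed by its weakest node:

```latex
T \;=\; \min_{i} \frac{E_i}{\bar{e}_i},
```

where E_i is the initial battery energy of node i and \bar{e}_i is the average transmission energy it spends per data-gathering round. Maximising T subject to successful data collection is the max-min problem referred to above, and the system-level opportunities exploited in the paper aim at reducing the \bar{e}_i terms.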
Abstract:
We propose a solution for deep packet inspection based on message-passing bipartite networks, addressing the speed and memory issues that limit current solutions. We report on a preliminary implementation and propose a parallel architecture.
Abstract:
We consider the incentive compatible broadcast (ICB) problem in ad hoc wireless networks with selfish nodes. We design a Bayesian incentive compatible broadcast (BIC-B) protocol to address this problem. VCG-mechanism-based schemes have been widely used in the literature to design dominant strategy incentive compatible (DSIC) protocols for ad hoc wireless networks. VCG-based mechanisms have two critical limitations: (i) the network is required to be bi-connected; (ii) the resulting protocol is not budget balanced. Our proposed BIC-B protocol overcomes these difficulties. We also prove the optimality of the proposed scheme.
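For context, the VCG mechanisms referred to above charge each node the externality it imposes on the others. With reported valuations v_j and a socially optimal outcome a* (the generic VCG/Clarke payment rule, not the BIC-B payment scheme itself), node i pays

```latex
p_i \;=\; \max_{a} \sum_{j \neq i} v_j(a) \;-\; \sum_{j \neq i} v_j(a^{*}).
```

Because these payments need not sum to zero, VCG-based broadcast protocols are generally not budget balanced, which is one of the two limitations the BIC-B protocol is designed to avoid.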
Abstract:
Biological systems exhibit remarkable adaptation, reliability, and robustness in various environments, even hostile ones. Most of them are controlled by their individuals in a distributed and self-organized way. These biological mechanisms provide useful inspiration for designing dynamic and adaptive routing schemes for wireless mobile sensor networks, in which the individual nodes should ideally operate without central control. This paper investigates crucial biologically inspired mechanisms, and the associated techniques for addressing routing in wireless sensor networks, including ant-based and genetic approaches. The principal contributions of this paper are as follows: we present a mathematical theory of biological computation in the context of sensor networks; we further present a generalized routing framework for sensor networks that draws on different modes of biological computation using ant-based and genetic approaches; finally, we give an overview of several emerging research directions within the new biologically inspired computational framework.
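As a concrete instance of the ant-based mechanisms surveyed, classical ant-colony routing (the standard ACO equations, quoted here for illustration and not specific to this paper) selects the next hop j from node i with probability proportional to pheromone strength and heuristic desirability, and then evaporates and reinforces the pheromone:

```latex
p_{ij} \;=\; \frac{\tau_{ij}^{\alpha}\,\eta_{ij}^{\beta}}
                  {\sum_{k \in \mathcal{N}_i} \tau_{ik}^{\alpha}\,\eta_{ik}^{\beta}},
\qquad
\tau_{ij} \;\leftarrow\; (1-\rho)\,\tau_{ij} \;+\; \Delta\tau_{ij},
```

where \eta_{ij} is a link-quality or energy heuristic, \rho is the evaporation rate, and \Delta\tau_{ij} is the reinforcement deposited by ants that successfully deliver data.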
Abstract:
Researchers are typically assessed from a researcher-centric perspective, by quantifying a researcher's contribution to the field; citation and publication counts are typical examples. We propose a student-centric measure to assess researchers on their mentoring abilities. Our approach quantifies the benefits bestowed by researchers upon their students by characterizing the publication dynamics of research advisor-student interactions in author collaboration networks. We show that our measures could help aspiring students identify research advisors with proven mentoring skills. Our measures also help stratify researchers who hold similar ranks under typical indices such as publication and citation counts, while being independent of those direct measures of influence.
Abstract:
A Delay Tolerant Network (DTN) is a dynamic, fragmented, and ephemeral network formed by a large number of highly mobile, autonomous nodes. This requires distributed and self-organised approaches to trust management. Revocation and replacement of security credentials under adversarial influence, while preserving trust in the entity, is still an open problem; existing methods are mostly limited to the detection and removal of malicious nodes. This paper makes use of the mobility property to provide a distributed, self-organising, and scalable revocation and replacement scheme. The proposed scheme effectively utilises the Leverage of Common Friends (LCF) trust system concepts to revoke compromised security credentials and replace them with new ones whilst preserving trust in them, so that the level of entity confidence already achieved is maintained. The security and performance of the proposed scheme are evaluated using an experimental data set, in comparison with other schemes based around the LCF concept. Our extensive experimental results show that the proposed scheme distributes replacement credentials up to 35% faster and spreads spoofed credentials of strongly collaborating adversaries up to 50% slower, without any significant increase in communication and storage overheads, when compared to other LCF-based schemes.
Abstract:
Public key authentication is the verification of the identity-public key binding, and is foundational to the security of any network. The contribution of this thesis is to provide public key authentication for a decentralised and resource-challenged network such as an autonomous Delay Tolerant Network (DTN). It has resulted in the development and evaluation of a combined co-localisation trust system and key distribution scheme, tested on a realistic, large geographic scale mobility model. The thesis also addresses the problem of unplanned key revocation and replacement without any central authority.
Abstract:
Renewable energy resources, in particular PV and battery storage, are increasingly becoming part of residential and agricultural premises as a means of managing electricity consumption. This thesis addresses the significant technical, financial, and planning challenges that these increases create for utilities, by offering techniques to examine the significance of various renewable resources in electricity network planning. The outcome of this research should assist utilities and customers in planning that is both adequate and financially effective.
Abstract:
Next-generation wireless systems employ an orthogonal frequency division multiplexing (OFDM) physical layer owing to the high data rate transmissions that are possible without an increase in bandwidth. While TCP performance has been extensively studied in its interaction with link-layer ARQ, little attention has been given to the interaction of TCP with the MAC layer. In this work, we explore cross-layer interactions in an OFDM-based wireless system, specifically focusing on channel-aware resource allocation strategies at the MAC layer and their impact on TCP congestion control. Both efficiency-oriented and fairness-oriented MAC resource allocation strategies were designed for evaluating TCP performance. The former schemes try to exploit channel diversity to maximize system throughput, while the latter schemes try to provide a fair resource allocation over a sufficiently long time duration. From a TCP goodput standpoint, we show that the class of MAC algorithms that incorporate a fairness metric and consider the backlog outperforms the channel-diversity-exploiting schemes.
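A minimal sketch of a fairness-oriented, channel-aware MAC allocation of the kind evaluated here is the classical proportional-fair rule, shown below; it is a standard scheduler used for illustration under assumed parameters, not necessarily one of the exact schemes in the paper. Each resource unit goes to the user with the largest ratio of instantaneous achievable rate to exponentially averaged throughput, which trades off channel diversity against long-run fairness.

```python
import numpy as np

def proportional_fair_schedule(inst_rates, avg_tput, beta=0.1):
    """Pick the user with the highest instantaneous-rate / average-throughput ratio,
    then update the exponential moving average of every user's throughput.

    inst_rates : array of achievable rates on this resource unit (one per user)
    avg_tput   : array of smoothed throughputs (one per user), updated in place
    """
    metric = inst_rates / np.maximum(avg_tput, 1e-9)   # avoid divide-by-zero
    user = int(metric.argmax())
    served = np.zeros_like(inst_rates)
    served[user] = inst_rates[user]                    # only the picked user is served
    avg_tput[:] = (1 - beta) * avg_tput + beta * served
    return user

# Toy run: 4 users with time-varying (crudely modelled fading) channel rates.
rng = np.random.default_rng(1)
avg = np.full(4, 1e-3)
for slot in range(10):
    rates = rng.rayleigh(scale=1.0, size=4)
    picked = proportional_fair_schedule(rates, avg)
    print(f"slot {slot}: user {picked} scheduled, avg throughputs {np.round(avg, 3)}")
```

An efficiency-only (max-throughput) counterpart would simply pick `inst_rates.argmax()` each slot; the averaging term is what introduces the fairness metric that the paper's results favour from a TCP goodput standpoint.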