25 results for Deployment of Federal Institutes
Abstract:
We study coverage in sensor networks having two types of nodes, sensor and backbone nodes. Each sensor is capable of transmitting information over relatively small distances. The backbone nodes collect information from the sensors. This information is processed and communicated over an ad-hoc network formed by the backbone nodes, which are capable of transmitting over much larger distances. We consider two modes of deployment of sensors, one a Poisson-Poisson cluster model and the other a dependently-thinned Poisson point process. We deduce limit laws for functionals of vacancy in both models using properties of association for random measures.
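The basic Boolean-model quantity behind these vacancy functionals can be estimated by simulation. The sketch below is only an illustration, not the paper's Poisson-Poisson cluster or thinned model: it estimates the expected vacant fraction of the unit square when sensors form a homogeneous Poisson process, each covering a disc of fixed radius, with the sampling window dilated by the radius to avoid edge effects.

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's method: count uniforms until their product drops below exp(-lam).
    threshold = math.exp(-lam)
    count, product = 0, 1.0
    while True:
        product *= rng.random()
        if product <= threshold:
            return count
        count += 1

def vacancy_fraction(intensity, radius, grid=40, seed=1):
    """Estimate the expected vacant fraction of the unit square under a
    Boolean model: sensors form a Poisson process of the given intensity,
    each covering a disc of the given radius.  Sensors are dropped in the
    dilated window [-radius, 1 + radius]^2 so boundary effects vanish."""
    rng = random.Random(seed)
    lo, hi = -radius, 1.0 + radius
    n = poisson_sample(rng, intensity * (hi - lo) ** 2)
    sensors = [(rng.uniform(lo, hi), rng.uniform(lo, hi)) for _ in range(n)]
    vacant = 0
    for i in range(grid):
        for j in range(grid):
            x, y = (i + 0.5) / grid, (j + 0.5) / grid
            if all((x - sx) ** 2 + (y - sy) ** 2 > radius ** 2
                   for sx, sy in sensors):
                vacant += 1
    return vacant / grid ** 2
```

For the plain Boolean model the exact value is exp(-intensity * pi * radius**2), which gives a quick sanity check on the estimator.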
Abstract:
In this thesis, we address the problem of multi-agent search. We formulate two deploy-and-search strategies based on the optimal deployment of agents in the search space so as to maximize the search effectiveness in a single step. We show that a variation of the centroidal Voronoi configuration is the optimal deployment. When the agents have sensors with different capabilities, the problem becomes heterogeneous in nature. We introduce a new concept, namely the generalized Voronoi partition, in order to formulate and solve the heterogeneous multi-agent search problem. We address several theoretical issues, such as the optimality of deployment and the convergence and spatial distributedness of the control law and the search strategies. Simulation experiments are carried out to compare the performance of the proposed strategies with that of a few simple search strategies.
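The centroidal Voronoi idea behind the first strategy can be sketched with Lloyd's algorithm on a discretized search space. This is a uniform-density toy version: the thesis weights the space by an uncertainty density and treats heterogeneous sensors separately, neither of which is modeled here.

```python
def lloyd_step(agents, points):
    """One Lloyd iteration: assign each point of the discretized search
    space to its nearest agent (a Voronoi partition), then move every
    agent to the centroid of its cell."""
    cells = [[] for _ in agents]
    for px, py in points:
        i = min(range(len(agents)),
                key=lambda k: (px - agents[k][0]) ** 2 + (py - agents[k][1]) ** 2)
        cells[i].append((px, py))
    new_agents = []
    for (ax, ay), cell in zip(agents, cells):
        if cell:
            new_agents.append((sum(p[0] for p in cell) / len(cell),
                               sum(p[1] for p in cell) / len(cell)))
        else:
            new_agents.append((ax, ay))  # empty cell: agent stays put
    return new_agents

def deploy(agents, points, iters=30):
    """Iterate Lloyd steps toward a centroidal Voronoi configuration."""
    for _ in range(iters):
        agents = lloyd_step(agents, points)
    return agents
```

Starting two agents close together on a grid over the unit square, repeated iterations spread them out until each sits at the centroid of its own Voronoi cell.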
Abstract:
The problem of intrusion detection and location identification in the presence of clutter is considered for a hexagonal sensor-node geometry. It is noted that in any practical application, for a given fixed intruder or clutter location, only a small number of neighboring sensor nodes will register a significant reading. Thus sensing may be regarded as a local phenomenon, and performance is strongly dependent on the local geometry of the sensor nodes. We focus on the case when the sensor nodes form a hexagonal lattice. The optimality of the hexagonal lattice with respect to density of packing and covering, and largeness of the kissing number, suggests that this is the best possible arrangement from a sensor network viewpoint. The results presented here are clearly relevant when the particular sensing application permits a deterministic placement of sensors. They also serve as a performance benchmark for the case of a random deployment of sensors. A novel feature of our analysis of the hexagonal sensor grid is a signal-space viewpoint which sheds light on achievable performance. Under this viewpoint, the problem of intruder detection is reduced to one of determining, in a distributed manner, the optimal decision boundary that separates the signal spaces SI and SC associated with the intruder and clutter respectively. Given the difficulty of implementing the optimal detector, we present a low-complexity distributed algorithm under which the spaces SI and SC are separated by a well-chosen hyperplane. The algorithm is designed to be efficient in terms of communication cost by minimizing the expected number of bits transmitted by a sensor.
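The hyperplane-separation step can be illustrated in a few lines: given sample signal vectors from the intruder and clutter classes, place the hyperplane perpendicular to the line joining the class means, through its midpoint. This midpoint rule is merely a stand-in for the paper's well-chosen hyperplane and ignores the communication-cost optimization entirely.

```python
def mean(vectors):
    """Component-wise mean of a list of equal-length signal vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def hyperplane(intruder_sigs, clutter_sigs):
    """Separating hyperplane w . x = b: normal along the line joining the
    class means, passing through their midpoint."""
    mi, mc = mean(intruder_sigs), mean(clutter_sigs)
    w = [a - b for a, b in zip(mi, mc)]             # normal vector
    mid = [(a + b) / 2 for a, b in zip(mi, mc)]     # point on the plane
    b = sum(wi * xi for wi, xi in zip(w, mid))      # offset
    return w, b

def is_intruder(reading, w, b):
    """Classify a reading by which side of the hyperplane it falls on."""
    return sum(wi * xi for wi, xi in zip(w, reading)) > b
```

Each sensor contributes one coordinate of the reading, so deciding the side of such a hyperplane needs only a weighted sum of local values, which is what makes a distributed, low-bit implementation plausible.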
Abstract:
Ad hoc networks are being used in applications ranging from disaster recovery to distributed collaborative entertainment. They have become one of the most attractive solutions for rapidly interconnecting large numbers of mobile personal devices, and the user community of these devices is demanding a variety of value-added multimedia entertainment services. Peer groups are growing in popularity, and one or more members of a group often need to send data to some or all of the other members. This increasing demand for group-oriented value-added services calls for efficient multicast over ad hoc networks, and access control mechanisms must be deployed to guarantee that unauthorized users cannot access the multicast content. In this paper, we present a topology-aware key management and distribution scheme for secure overlay multicast over MANETs that addresses node-mobility issues in multicast key management. We take an overlay approach to key distribution, with the objective of keeping the communication overhead of key management and distribution low, and we incorporate reliability into the key distribution scheme using explicit acknowledgments. Through simulations, we show that the proposed key management scheme has low communication overhead for rekeying and improves the reliability of key distribution.
Abstract:
In this paper, a generalisation of the Voronoi partition is used for the locational optimisation of facilities having different service capabilities and limited range or reach. The facilities can be stationary, such as base stations in a cellular network, hospitals, schools, etc., or mobile units, such as multiple unmanned aerial vehicles, automated guided vehicles, etc., carrying sensors, or mobile units carrying relief personnel and materials. An objective function for optimal deployment of the facilities is formulated, and its critical points are determined. The locally optimal deployment is shown to be a generalised centroidal Voronoi configuration in which the facilities are located at the centroids of the corresponding generalised Voronoi cells. The problem is formulated for more general mobile facilities, and formal results on the stability, convergence and spatial distribution of the proposed control laws responsible for the motion of the agents carrying facilities, under some constraints on the agents' speed and limits on the sensor range, are provided. The theoretical results are supported with illustrative simulation results.
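A minimal sketch of one such generalised Voronoi assignment is given below, using multiplicative weights and a hard range cap. The weights and reaches stand in for the heterogeneous service capabilities and limited range; the paper's actual generalisation and node functions may differ.

```python
import math

def gv_assign(points, facilities):
    """Generalised Voronoi assignment: each demand point goes to the
    facility with the smallest weighted distance dist/weight among
    facilities whose reach covers it; points no facility can reach get
    None.  Each facility is a tuple (x, y, weight, reach)."""
    assignment = []
    for px, py in points:
        best, best_score = None, float('inf')
        for idx, (fx, fy, w, reach) in enumerate(facilities):
            d = math.hypot(px - fx, py - fy)
            if d <= reach and d / w < best_score:
                best, best_score = idx, d / w
        assignment.append(best)
    return assignment
```

A higher weight enlarges a facility's cell (it "wins" points farther away), while the reach bound carves out the unserved region, mirroring the limited-range aspect of the formulation.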
Abstract:
The key requirements for enabling real-time remote healthcare services on a mobile platform, in the present-day heterogeneous wireless access network environment, are uninterrupted and continuous access to the patient's online vital medical data, the ability to monitor the physical condition of the patient through video streaming, and so on. For an application, this continuity has to be sufficiently transparent from both a performance perspective and a Quality of Experience (QoE) perspective. While mobility protocols (MIPv6, HIP, SCTP, DSMIP, PMIP, and SIP) strive to provide both, their limited availability, or outright absence, on provider networks and server-side infrastructure has impeded the adoption of mobility on end-user platforms. Add to this the cumbersome OS configuration procedures required to enable mobility-protocol support on end-user devices, and the user's enthusiasm for adding this support is lost. Given the lack of mobility implementations that meet the remote healthcare requirements above, we propose SeaMo+, which comprises a lightweight application-layer framework, termed the Virtual Real-time Multimedia Service (VRMS), for mobile devices, providing the mobile user with uninterrupted real-time access to multimedia information. VRMS is easy to configure, platform independent, and, unlike other existing schemes, requires no additional network infrastructure. We illustrate the working of SeaMo+ in two realistic remote patient monitoring application scenarios.
Abstract:
Accurately characterizing the time-varying interference caused to the primary users is essential in ensuring a successful deployment of cognitive radios (CR). We show that the aggregate interference at the primary receiver (PU-Rx) from multiple, randomly located cognitive users (CUs) is well modeled as a shifted lognormal random process, which is more accurate than the lognormal and Gaussian process models considered in the literature, even for a relatively dense deployment of CUs. It also compares favorably with the asymptotically exact stable and symmetric truncated stable distribution models, except at high CU densities. Our model accounts for the effect of imperfect spectrum sensing, which depends on path loss, shadowing, and small-scale fading of the link from the primary transmitter to the CU; the interweave and underlay modes of CR operation, which determine the transmit powers of the CUs; and time-correlated shadowing and fading of the links from the CUs to the PU-Rx. It leads to expressions for the probability distribution function, level crossing rate, and average exceedance duration. The impact of cooperative spectrum sensing is also characterized. We validate the model by applying it to redesign the primary exclusive zone to account for the time-varying nature of interference.
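The aggregate-interference statistic being modeled can be sampled directly. The toy simulator below uses illustrative choices only: an annular exclusion zone, power-law path loss, i.i.d. lognormal shadowing, and no sensing errors or time correlation (all of which the paper does treat). Its output is the kind of sample set a shifted lognormal would be fitted to.

```python
import math
import random

def poisson_sample(rng, lam):
    # Knuth's method, adequate for the small means used here.
    threshold = math.exp(-lam)
    count, product = 0, 1.0
    while True:
        product *= rng.random()
        if product <= threshold:
            return count
        count += 1

def interference_samples(density, r_min, r_max, alpha=4.0, shadow_db=8.0,
                         trials=2000, seed=3):
    """Monte Carlo samples of the aggregate interference at the primary
    receiver from cognitive users dropped as a Poisson process in the
    annulus r_min <= r <= r_max (r_min plays the role of the exclusive
    zone), with path-loss exponent alpha and lognormal shadowing of the
    given dB standard deviation."""
    rng = random.Random(seed)
    mean_count = density * math.pi * (r_max ** 2 - r_min ** 2)
    samples = []
    for _ in range(trials):
        total = 0.0
        for _ in range(poisson_sample(rng, mean_count)):
            # Uniform location in the annulus: invert the CDF of r^2.
            r = math.sqrt(rng.uniform(r_min ** 2, r_max ** 2))
            shadow = 10.0 ** (rng.gauss(0.0, shadow_db) / 10.0)
            total += shadow * r ** (-alpha)
        samples.append(total)
    return samples
```

Because the nearest admissible interferer sits at r_min, the aggregate has a hard influence from that boundary, which is intuitively why a shift parameter improves on a plain lognormal fit.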
Abstract:
Our work is motivated by impromptu (or ``as-you-go'') deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where the data sink, e.g., the control center, is located) and evolving randomly over a lattice in the positive quadrant. A person walks along the path deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate of the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance-threshold-based heuristic, usually assumed in the literature, is close to the optimal, provided that the threshold distance is carefully chosen. (C) 2014 Elsevier B.V. All rights reserved.
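The threshold structure of the optimal policy is easy to reproduce in a stripped-down version of the model. The sketch below is my simplification, not the paper's lattice path: a 1-D walk that ends with probability p at each step, a convex hop cost hop_cost(d), and a relay price rho, solved by plain value iteration (the paper's proposed algorithm is faster than this).

```python
def solve_placement(p, rho, hop_cost, max_r=60, sweeps=400):
    """Value iteration for the simplified as-you-go model.  V[r] is the
    expected cost-to-go when the deployer stands r steps past the last
    relay.  Moving one step, the path ends with probability p (final hop
    cost hop_cost(r + 1)); otherwise the deployer chooses between
    placing a relay (rho + hop_cost(r + 1), resetting to state 0) and
    walking on.  Returns V and the smallest distance at which placing
    becomes optimal -- the boundary of the placement set."""
    INF = float('inf')
    V = [0.0] * max_r
    for _ in range(sweeps):
        new_V = V[:]
        for r in range(max_r):
            place = rho + hop_cost(r + 1) + V[0]
            skip = V[r + 1] if r + 1 < max_r else INF  # cap forces placement
            new_V[r] = p * hop_cost(r + 1) + (1 - p) * min(place, skip)
        V = new_V
    threshold = next(r + 1 for r in range(max_r)
                     if rho + hop_cost(r + 1) + V[0]
                        <= (V[r + 1] if r + 1 < max_r else INF))
    return V, threshold
```

Since the continuation is discounted by (1 - p) at every move, the iteration is a contraction and converges; raising the relay price rho pushes the placement boundary farther out, as one would expect.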
Abstract:
Structural Health Monitoring (SHM) systems require the integration of non-destructive evaluation (NDE) technologies into structural design and operational processes. Modeling and simulation of complex NDE inspection processes are important aspects of the development and deployment of SHM technologies. Ray tracing techniques are vital simulation tools for visualizing the wave path inside a material. They also help in optimizing the location of transducers and their orientation with respect to the zone of interrogation, increasing the chances of detecting and identifying a flaw in that zone. While current state-of-the-art techniques such as ray tracing based on geometric principles enable such visualization, other information, such as signal losses due to the spherical or cylindrical shape of the wave front, is rarely taken into consideration. The problem becomes more complicated in the case of dispersive guided wave propagation and near-field defect scattering. We review the existing models and tools for performing ultrasonic NDE simulation in structural components. As an initial step, we develop a ray-tracing approach in which phase and spectral information are preserved. This enables one to study wave scattering beyond simple time-of-flight calculation of rays. Challenges in the theory and modelling of defects of various kinds are discussed. Additional considerations such as signal decay and the physics of scattering are reviewed, and the challenges involved in a realistic computational implementation are discussed. The potential application of this approach to SHM system design is highlighted: by applying it to complex structural components such as airframe structures, SHM is shown to provide additional value in the form of lighter weight and/or enhanced longevity, resulting from an extension of the damage-tolerance design principle without compromising safety and reliability.
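At its simplest, geometric ray tracing in a plate reduces to the image-source construction: a ray between two surface transducers that bounces n times off the back wall has the length of the straight line to the receiver's n-fold mirror image. The sketch below computes time of flight this way, assuming a single bulk-wave speed; it ignores exactly the dispersion and wave-front spreading effects the text flags as missing from purely geometric approaches.

```python
import math

def echo_time_of_flight(dx, thickness, speed, bounces=1):
    """Time of flight between two transducers on the same surface of a
    plate, separated laterally by dx, for a ray reflecting `bounces`
    times off the back wall.  By the image-source method the zigzag path
    has length sqrt(dx**2 + (2 * bounces * thickness)**2)."""
    path = math.hypot(dx, 2.0 * bounces * thickness)
    return path / speed
```

Sweeping dx and bounces over candidate transducer positions is a crude way to rank placements by arrival-time separation, which is the kind of optimization the text says ray tracing supports.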
Abstract:
Bioenergy deployment offers significant potential for climate change mitigation, but also carries considerable risks. In this review, we bring together the perspectives of the various communities involved in the research and regulation of bioenergy deployment in the context of climate change mitigation: land-use and energy experts, land-use and integrated assessment modelers, human geographers, ecosystem researchers, climate scientists, and two different strands of life-cycle assessment experts. We summarize technological options, outline the state-of-the-art knowledge on various climate effects, provide an update on estimates of the technical resource potential, and comprehensively identify sustainability effects. Cellulosic feedstocks, increased end-use efficiency, improved land carbon-stock management and residue use, and, when fully developed, bioenergy with carbon capture and storage (BECCS) appear to be the most promising options, depending on development costs, implementation, learning, and risk management. Combined heat and power, efficient biomass cookstoves, and small-scale power generation for rural areas can help promote energy access and sustainable development, along with reduced emissions. We estimate the sustainable technical potential as up to 100 EJ with high agreement, 100-300 EJ with medium agreement, and above 300 EJ with low agreement. Stabilization scenarios indicate that bioenergy may supply from 10 to 245 EJ yr(-1) of global primary energy by 2050. Models indicate that, if technological and governance preconditions are met, large-scale deployment (>200 EJ), together with BECCS, could help keep global warming below 2 degrees Celsius above preindustrial levels; but such high deployment of land-intensive bioenergy feedstocks could also lead to detrimental climate effects and negatively impact ecosystems, biodiversity, and livelihoods. The integration of bioenergy systems into agricultural and forest landscapes can improve land- and water-use efficiency and help address concerns about environmental impacts.
We conclude that the high variability in pathways, uncertainties in technological development, and ambiguity in political decisions make forecasts of deployment levels and climate effects very difficult. However, uncertainty about projections should not preclude pursuing beneficial bioenergy options.