100 results for "Placement of router nodes"


Relevance: 100.00%

Publisher:

Abstract:

This paper presents a method for the placement of Phasor Measurement Units (PMUs) that ensures the monitoring of vulnerable buses, which are identified through transient stability analysis of the overall system. Real-time monitoring of phase angles across different nodes indicates the proximity to instability, so this purpose is best served when the PMUs are placed at the most vulnerable buses. The issue is to identify the key buses where the PMUs should be placed when transient stability prediction under various disturbances is taken into account. An Integer Linear Programming technique with equality and inequality constraints is used to find the optimal placement set, with the key buses identified from transient stability analysis. Results on the IEEE 14-bus system are presented to illustrate the proposed approach.
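
A minimal sketch of the kind of ILP described above, assuming the usual rule that a PMU at a bus observes that bus and its neighbours; the toy topology, the key-bus set, and the use of the PuLP modelling package are illustrative choices, not details from the paper.

    # Illustrative PMU-placement ILP: minimise PMU count subject to observability
    # (inequality) constraints and forced placement at key buses (equality constraints).
    import pulp  # pip install pulp

    edges = [(1, 2), (1, 5), (2, 3), (2, 4), (2, 5), (3, 4), (4, 5)]  # toy topology
    buses = sorted({b for e in edges for b in e})
    neigh = {b: {b} for b in buses}
    for u, v in edges:
        neigh[u].add(v)
        neigh[v].add(u)
    key_buses = {4}  # buses flagged as vulnerable by a transient-stability study (assumed)

    prob = pulp.LpProblem("pmu_placement", pulp.LpMinimize)
    x = {b: pulp.LpVariable(f"x_{b}", cat="Binary") for b in buses}
    prob += pulp.lpSum(x.values())                       # objective: number of PMUs
    for b in buses:
        prob += pulp.lpSum(x[a] for a in neigh[b]) >= 1  # every bus must be observed
    for b in key_buses:
        prob += x[b] == 1                                # force PMUs at key buses
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print([b for b in buses if x[b].value() == 1])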

Relevance: 100.00%

Publisher:

Abstract:

In this study, a new reactive power loss index (RPLI) is proposed for the identification of weak buses in the system. This index is further used to determine the optimal locations for placing reactive compensation devices in the power system for additional voltage support. The new index is computed from the reactive power support and loss allocation algorithm, using the Y-bus method, for the system under the intact condition as well as under critical/severe network contingencies. A fuzzy logic approach is used to select the important and critical/severe line contingencies from the contingency list. The inherent characteristics of reactive power in system operation are properly addressed while determining the reactive power loss allocation to load buses. The proposed index is tested on a sample 10-bus equivalent system and a 72-bus practical equivalent system of the Indian southern region power grid. The weak buses identified by the proposed index are validated against those obtained from other existing methods in the literature to demonstrate its effectiveness. Simulation results show that the identification of weak buses from the new RPLI is completely non-iterative and thus requires minimal computational effort compared with other existing methods in the literature.

Relevance: 100.00%

Publisher:

Abstract:

The distribution of black leaf nodes at each level of a linear quadtree is of significant interest when estimating the time and space complexities of linear-quadtree-based algorithms. The maximum number of black nodes of a given level that can fit in a square grid of size 2^n × 2^n can readily be estimated from the ratio of areas. We show that the actual maximum number of nodes at a level is much less than the maximum obtained from the ratio of the areas. This is because the number of nodes possible at a level k, 0 ≤ k ≤ n − 1, must account for the sum of the areas occupied by the nodes actually present at levels k + 1, k + 2, …, n − 1.
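
For reference, the area-ratio estimate mentioned above (N_max is our notation): a black leaf node at level k covers a 2^k × 2^k block of the 2^n × 2^n grid, so the ratio of areas gives at most

    N_{\max}(k) \le \left(\frac{2^{n}}{2^{k}}\right)^{2} = 4^{\,n-k}

nodes at level k; the paper's point is that the attainable count is strictly smaller once the area occupied by the nodes actually present at levels k + 1, …, n − 1 is accounted for.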

Relevance: 100.00%

Publisher:

Abstract:

This paper addresses the problem of how to select the optimal number of sensors and how to determine their placement in a given monitored area for multimedia surveillance systems. We propose to solve this problem by deriving a novel performance metric, a probability measure of accomplishing the task as a function of the set of sensors and their placement. This measure is then used to find the optimal set. The same measure can be used to analyze the degradation in the system's performance when various sensors fail. We also build a surveillance system using the optimal set of sensors obtained from the proposed design methodology. Experimental results show the effectiveness of the proposed design methodology in selecting the optimal set of sensors and their placement.
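
A schematic sketch, under our own simplifying assumptions, of how such a measure can drive the search for the optimal set: task_probability() below is a stand-in stub, whereas in the paper it would be the task-accomplishment probability for a given sensor set and placement; sensor names and candidate slots are also illustrative.

    # Hedged sketch: exhaustive search over sensor subsets and candidate placements,
    # maximising a task-accomplishment probability supplied as a function.
    from itertools import combinations, permutations

    sensors = ["cam1", "cam2", "mic1"]          # available sensors (illustrative)
    slots = ["north", "south", "east"]          # candidate placements (illustrative)

    def task_probability(assignment):
        # Placeholder metric: reward more sensors, with diminishing returns per sensor.
        return 1.0 - 0.5 ** len(assignment)

    best = max(
        (dict(zip(subset, placement))
         for k in range(1, len(sensors) + 1)
         for subset in combinations(sensors, k)
         for placement in permutations(slots, k)),
        key=task_probability,
    )
    print(best, task_probability(best))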

Relevance: 100.00%

Publisher:

Abstract:

There is a lot of pressure on developed and second-world countries to produce low-emission power, and distributed generation (DG) is found to be one of the most viable ways to achieve this. DG generally makes use of renewable energy sources such as wind, micro turbines, and photovoltaics, which produce power with minimal greenhouse gas emissions. When installing a DG unit, it is important to define its size and optimal location so as to minimize network expansion and line losses. In this paper, a methodology to locate the optimal site for a DG installation, with the objective of minimizing net transmission losses, is presented. The methodology is based on the concept of relative electrical distance (RED) between the DG and the load points. This approach helps identify new DG location(s) without the need to conduct repeated power flows. To validate this methodology, case studies are carried out on a 20-node, 66 kV system that is part of Karnataka Transco, and the results are presented.

Relevance: 100.00%

Publisher:

Abstract:

We use information-theoretic achievable rate formulas for the multi-relay channel to study the problem of optimal placement of relay nodes along the straight line joining a source node and a destination node. The achievable rate formulas that we utilize are for full-duplex radios at the relays and decode-and-forward relaying. For the single-relay case, with individual power constraints at the source node and the relay node, we provide explicit formulas for the optimal relay location and the optimal power allocation to the source-relay channel, for the exponential and the power-law path-loss channel models. For the multiple-relay case, we consider exponential path loss and a total power constraint over the source and the relays, and derive an optimization problem whose solution provides the optimal relay locations. Numerical results suggest that at low attenuation the relays are mostly clustered close to the source in order to be able to cooperate among themselves, whereas at high attenuation they are uniformly placed and work as repeaters. We also prove that a constant rate, independent of the attenuation in the network, can be achieved by placing a large enough number of relay nodes uniformly between the source and the destination, under the exponential path-loss model with a total power constraint.
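
A small numerical sketch, under simplifying assumptions, of the single-relay version of this placement problem: full-duplex decode-and-forward with the simple rate bound min{rate into the relay, rate delivered when the destination combines the direct and relay receptions}, no coherent combining, exponential path-loss gain, unit noise power, and illustrative power and attenuation values. The paper's exact achievable-rate expressions and closed-form optima are not reproduced here.

    # Hedged sketch: grid search for the rate-optimal position d of one full-duplex
    # decode-and-forward relay on the line [0, D], with path-loss gain exp(-rho*x).
    import math

    def capacity(snr):
        return math.log2(1.0 + snr)

    def df_rate(d, D=1.0, P_s=1.0, P_r=1.0, rho=2.0):
        gain = lambda x: math.exp(-rho * x)
        r_decode = capacity(P_s * gain(d))                       # relay must decode
        r_deliver = capacity(P_s * gain(D) + P_r * gain(D - d))  # destination combines
        return min(r_decode, r_deliver)

    best_d = max((i / 1000.0 for i in range(1001)), key=df_rate)
    print(f"best relay position ~ {best_d:.3f}, rate ~ {df_rate(best_d):.3f} bit/use")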

Relevance: 100.00%

Publisher:

Abstract:

Central to network tomography is the problem of identifiability, the ability to identify internal network characteristics uniquely from end-to-end measurements. This problem is often underconstrained even when internal network characteristics such as link delays are modeled as additive constants. While it is known that the network topology can play a role in determining the extent of identifiability, a fundamental understanding of how to quantify it for a given network has been lacking. In this paper, we consider the problem of identifying additive link metrics in an arbitrary undirected network using measurement nodes and establishing paths/cycles between them. For a given placement of measurement nodes, we define and derive the "link rank" of the network: the maximum number of linearly independent cycles/paths that may be established between the measurement nodes. We achieve this in linear time. The link rank helps quantify the exact extent of identifiability in a network. We also develop a quadratic-time algorithm to compute a set of cycles/paths that achieves the maximum rank.
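
A brute-force illustration, on assumed toy inputs, of what the link rank measures: stack the link-coefficient vectors of the measurement paths/cycles and take the matrix rank. The paper computes this in linear time; numpy's generic rank computation is used here only for illustration.

    # Hedged sketch: each measurement path/cycle induces a vector of link coefficients
    # for an additive metric; the link rank is the rank of the stacked vectors.
    import numpy as np

    links = [("a", "b"), ("b", "c"), ("c", "d"), ("a", "d"), ("b", "d")]  # toy graph
    idx = {frozenset(l): i for i, l in enumerate(links)}

    def path_vector(path):
        """Coefficient vector: +1 for every link the path traverses."""
        v = np.zeros(len(links))
        for u, w in zip(path, path[1:]):
            v[idx[frozenset((u, w))]] += 1
        return v

    paths = [["a", "b", "c", "d"], ["a", "d"], ["a", "b", "d"], ["a", "d", "c", "b", "a"]]
    M = np.vstack([path_vector(p) for p in paths])
    print("link rank =", np.linalg.matrix_rank(M))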

Relevance: 100.00%

Publisher:

Abstract:

The problem of intrusion detection and location identification in the presence of clutter is considered for a hexagonal sensor-node geometry. It is noted that in any practical application, for a given fixed intruder or clutter location, only a small number of neighboring sensor nodes will register a significant reading. Thus sensing may be regarded as a local phenomenon, and performance is strongly dependent on the local geometry of the sensor nodes. We focus on the case when the sensor nodes form a hexagonal lattice. The optimality of the hexagonal lattice with respect to density of packing and covering, and the largeness of its kissing number, suggests that this is the best possible arrangement from a sensor network viewpoint. The results presented here are clearly relevant when the particular sensing application permits a deterministic placement of sensors. The results also serve as a performance benchmark for the case of a random deployment of sensors. A novel feature of our analysis of the hexagonal sensor grid is a signal-space viewpoint which sheds light on achievable performance. Under this viewpoint, the problem of intruder detection is reduced to one of determining, in a distributed manner, the optimal decision boundary that separates the signal spaces S_I and S_C associated with the intruder and clutter, respectively. Given the difficulty of implementing the optimal detector, we present a low-complexity distributed algorithm under which the surfaces S_I and S_C are separated by a well-chosen hyperplane. The algorithm is designed to be efficient in terms of communication cost by minimizing the expected number of bits transmitted by a sensor.
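
A minimal sketch of a hyperplane decision rule of the kind described above, with placeholder weights and threshold; the paper derives the separating hyperplane from the signal-space geometry of the hexagonal grid and limits the bits each sensor transmits, neither of which is modeled here.

    # Hedged sketch: linear (hyperplane) fusion rule over local sensor readings.
    # Weights and threshold are placeholders, not the paper's signal-space-derived values.
    def classify(readings, weights, threshold):
        score = sum(w * r for w, r in zip(weights, readings))
        return "intruder" if score > threshold else "clutter"

    readings = [0.9, 0.7, 0.1, 0.0, 0.0, 0.0]   # six neighbouring cells (illustrative)
    print(classify(readings, weights=[1.0] * 6, threshold=1.2))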

Relevance: 100.00%

Publisher:

Abstract:

The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown and only its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
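
A small Monte Carlo sketch of the scheme and its failure mode, with an illustrative inverse-linear metric-to-timer mapping (the paper's optimal mapping is discrete and is not reproduced here): metrics are drawn i.i.d. uniform on [0, 1], and a selection succeeds only if the second-earliest timer expires at least Δ after the earliest.

    # Hedged simulation of timer-based selection: success iff no other node transmits
    # within delta of the best node's transmission. Mapping and parameters are illustrative.
    import random

    def success_prob(n_nodes, T_max=1.0, delta=0.05, trials=20000, seed=1):
        rng = random.Random(seed)
        wins = 0
        for _ in range(trials):
            metrics = [rng.random() for _ in range(n_nodes)]
            timers = sorted(T_max * (1.0 - m) for m in metrics)  # best metric -> earliest timer
            if n_nodes == 1 or timers[1] - timers[0] >= delta:
                wins += 1
        return wins / trials

    print(success_prob(5))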

Relevance: 100.00%

Publisher:

Abstract:

In this paper, we study the problem of designing a multi-hop wireless network for interconnecting sensors (hereafter called source nodes) to a Base Station (BS) by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop count bound for paths from the sources to the BS. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-hard. For this problem, we propose a polynomial time approximation algorithm based on iteratively constructing shortest path trees and heuristically pruning away the relay nodes used until the hop count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average-case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem, as a function of the number of source nodes and the hop count bound. To the best of our knowledge, this average-case analysis is the first of its kind in the relay placement literature. Since the design is based on a light traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met.
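
A rough sketch of the shortest-path-tree-plus-pruning idea, under assumed inputs: BFS hop counts from the BS over the sources and the currently active relays, and a greedy pass that drops a relay whenever every source still meets the hop bound without it. The removal order, topology, and hop bound are illustrative, not the paper's.

    # Hedged sketch: BFS hop counts from the BS, then greedy pruning of relays
    # while every source still reaches the BS within the hop bound.
    from collections import deque

    def hops_from(bs, nodes, edges):
        adj = {v: set() for v in nodes}
        for u, v in edges:
            if u in adj and v in adj:
                adj[u].add(v)
                adj[v].add(u)
        dist, q = {bs: 0}, deque([bs])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist

    def prune_relays(bs, sources, relays, edges, hop_bound):
        kept = set(relays)
        def feasible(active):
            d = hops_from(bs, {bs, *sources, *active}, edges)
            return all(s in d and d[s] <= hop_bound for s in sources)
        assert feasible(kept), "initial relay set must satisfy the hop bound"
        for r in sorted(relays):                  # heuristic removal order
            if feasible(kept - {r}):
                kept.discard(r)
        return kept

    edges = [("bs", "r1"), ("r1", "s1"), ("bs", "r2"), ("r2", "s1"), ("r2", "s2")]
    print(prune_relays("bs", {"s1", "s2"}, {"r1", "r2"}, edges, hop_bound=2))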

Relevance: 100.00%

Publisher:

Abstract:

We are given a set of sensors at given locations, a set of potential locations for placing base stations (BSs, or sinks), and another set of potential locations for placing wireless relay nodes. There is a cost for placing a BS and a cost for placing a relay. The problem we consider is to select a set of BS locations, a set of relay locations, and an association of sensor nodes with the selected BS locations, so that the number of hops in the path from each sensor to its BS is bounded by h_max, and among all such feasible networks, the cost of the selected network is minimum. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-hard, and hard even to approximate within a constant factor. For this problem, we propose a polynomial time approximation algorithm (SmartSelect) based on a relay placement algorithm proposed in our earlier work, along with a modification of the greedy algorithm for weighted set cover. We have analyzed the worst-case approximation guarantee for this algorithm. We have also proposed a polynomial time heuristic to improve upon the solution provided by SmartSelect. Our numerical results demonstrate that the algorithms provide good quality solutions using very little computation time in various randomly generated network scenarios.
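
A compact sketch of the plain greedy weighted-set-cover step that SmartSelect builds on, with illustrative coverage sets and costs: repeatedly pick the BS location with the smallest cost per newly covered sensor. The paper's modification, the coupling with relay placement, and the hop-count constraint are not reproduced here.

    # Hedged sketch: classic greedy weighted set cover over candidate BS locations.
    def greedy_weighted_set_cover(universe, cover_sets, costs):
        uncovered, chosen = set(universe), []
        while uncovered:
            best = min((s for s in cover_sets if cover_sets[s] & uncovered),
                       key=lambda s: costs[s] / len(cover_sets[s] & uncovered))
            chosen.append(best)
            uncovered -= cover_sets[best]
        return chosen

    sensors = {1, 2, 3, 4, 5}
    coverage = {"bs_a": {1, 2, 3}, "bs_b": {3, 4}, "bs_c": {4, 5}, "bs_d": {1, 2, 3, 4, 5}}
    cost = {"bs_a": 2.0, "bs_b": 1.0, "bs_c": 1.5, "bs_d": 5.0}
    print(greedy_weighted_set_cover(sensors, coverage, cost))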

Relevance: 100.00%

Publisher:

Abstract:

Turbulent mixed convection flow and heat transfer in a shallow enclosure, with and without partitions and with a series of block-like heat-generating components, is studied numerically for a range of Reynolds and Grashof numbers using a time-dependent formulation. The flow and temperature distributions are taken to be two-dimensional. Regions with the same velocity and temperature distributions can be identified by assuming repeated placement of the blocks and of the fluid entry and exit openings at regular distances, neglecting end-wall effects. One half of such a module is chosen as the computational domain, taking into account the symmetry about the vertical centreline. The mixed convection inlet velocity is treated as the sum of forced and natural convection components, with the individual components delineated based on the pressure drop across the enclosure. The Reynolds number is based on the forced convection velocity. Turbulence computations are performed using the standard k–ε model and the Launder–Sharma low-Reynolds-number k–ε model. The results show that higher Reynolds numbers tend to create a recirculation region of increasing strength in the core region and that the effect of buoyancy becomes insignificant beyond a Reynolds number of typically 5×10^5. The Euler number in turbulent flows is higher by about 30 per cent than that in the laminar regime. The dimensionless inlet velocity in pure natural convection varies as Gr^(1/3). Results are also presented for a number of quantities of interest, such as the flow and temperature distributions, Nusselt number, pressure drop, and the maximum dimensionless temperature in the block, along with correlations.
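
Stated compactly (the symbols V_in, V_forced, V_natural are our notation, not the paper's), the inlet-velocity decomposition and the reported natural-convection scaling are

    V_{\mathrm{in}} = V_{\mathrm{forced}} + V_{\mathrm{natural}}, \qquad V^{*}_{\mathrm{natural}} \propto \mathrm{Gr}^{1/3},

with the two components delineated from the pressure drop across the enclosure, as described above.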

Relevance: 100.00%

Publisher:

Abstract:

This article addresses the problem of how to select the optimal combination of sensors and how to determine their optimal placement in a surveillance region in order to meet the given performance requirements at minimal cost for a multimedia surveillance system. We propose to solve this problem by obtaining a performance vector, with its elements representing the performances of subtasks, for a given input combination of sensors and their placement. We then show that the optimal sensor selection problem can be converted into an Integer Linear Programming (ILP) problem by using a linear model for computing the optimal performance vector corresponding to a sensor combination. The optimal performance vector for a sensor combination refers to the performance vector corresponding to the optimal placement of that sensor combination. To demonstrate the utility of our technique, we design and build a surveillance system consisting of PTZ (Pan-Tilt-Zoom) cameras and active motion sensors for capturing faces. Finally, we show experimentally that the optimal placement of sensors based on the design maximizes the system performance.
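
In generic form (symbols are ours; the paper's linear performance model is not reproduced), the resulting selection problem has the shape of a cost-minimising ILP with linear performance constraints:

    \min_{x \in \{0,1\}^{n}} \sum_{j=1}^{n} c_{j} x_{j} \quad \text{subject to} \quad \sum_{j=1}^{n} p_{ij} x_{j} \ge P_{i}^{\mathrm{req}} \quad \forall i,

where x_j indicates whether candidate sensor option j is selected, c_j is its cost, p_ij its contribution to subtask i, and P_i^req the required performance for subtask i.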

Relevance: 100.00%

Publisher:

Abstract:

The remarkable geological and evolutionary history of peninsular India has generated much interest in the patterns and processes that might have shaped the current distributions of its endemic biota. In this regard, the Out-of-India hypothesis, which proposes that rafting peninsular India carried Gondwanan forms to Asia after the break-up of the Gondwana supercontinent, has gained prominence. Here we review molecular studies undertaken on a range of taxa of supposedly Gondwanan origin to better understand the Out-of-India scenario. This re-evaluation of published molecular studies indicates that there is mounting evidence supporting an Out-of-India scenario for various Asian taxa. Nevertheless, in many studies the evidence is inconclusive owing to a lack of information on the ages of the relevant nodes. Studies also indicate that not all Gondwanan forms of peninsular India dispersed out of India. Many of these ancient lineages are confined to peninsular India and are therefore relict Gondwanan lineages. Additionally, for some taxa an Into-India rather than an Out-of-India scenario better explains their current distribution. To identify the Out-of-India component of the Asian biota, it is imperative that we understand the complex biogeographical history of India. To this end, we propose three oversimplified yet explicit phylogenetic predictions. These predictions can be tested through the use of molecular phylogenetic tools in conjunction with palaeontological and geological data.