63 results for Placement
Abstract:
There is growing pressure on developed and developing countries to produce low-emission power, and distributed generation (DG) is found to be one of the most viable ways to achieve this. DG generally makes use of renewable energy sources such as wind, micro turbines, and photovoltaics, which produce power with minimal greenhouse gas emissions. When installing a DG unit, it is important to define its size and optimal location so as to enable minimum network expansion and line losses. In this paper, a methodology to locate the optimal site for a DG installation, with the objective of minimizing the net transmission losses, is presented. The methodology is based on the concept of relative electrical distance (RED) between the DG and the load points. This approach helps identify new DG location(s) without the necessity of conducting repeated power flows. To validate this methodology, case studies are carried out on a 20-node, 66 kV system, a part of Karnataka Transco, and the results are presented.
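As a rough illustration of the RED idea, the sketch below uses the formulation common in the relative-electrical-distance literature, F_LG = -Y_LL⁻¹ Y_LG, with RED = |1 - F_LG|; a small entry means a load bus is electrically close to a candidate DG bus. The toy admittance numbers are invented, not the paper's 20-node system, and the exact variant used in the paper may differ.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= f * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):  # back substitution
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def relative_electrical_distance(Y_LL, Y_LG):
    """RED[i][j] = |1 - F_LG[i][j]| with F_LG = -inv(Y_LL) @ Y_LG."""
    n, m = len(Y_LL), len(Y_LG[0])
    red = [[0.0] * m for _ in range(n)]
    for j in range(m):
        # column j of F_LG: solve Y_LL * x = -Y_LG[:, j]
        col = solve(Y_LL, [-Y_LG[i][j] for i in range(n)])
        for i in range(n):
            red[i][j] = abs(1.0 - col[i])
    return red

def best_dg_site(Y_LL, Y_LG):
    """Pick the candidate DG column whose summed RED over all loads is least."""
    red = relative_electrical_distance(Y_LL, Y_LG)
    sums = [sum(red[i][j] for i in range(len(red))) for j in range(len(red[0]))]
    return sums.index(min(sums))
```

Selecting the minimum-RED column is what lets the method rank candidate sites without re-running a power flow for each one.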
Abstract:
We study the question of determining locations of base stations (BSs) that may belong to the same or to competing service providers. We take into account the impact of these decisions on the behavior of intelligent mobile terminals that can connect to the base station that offers the best utility. The signal-to-interference-plus-noise ratio (SINR) is used as the quantity that determines the association. We first study the SINR association game: we determine the cells corresponding to each base station, i.e., the locations at which mobile terminals prefer to connect to a given base station rather than to others. We make some surprising observations: 1) displacing a base station a little in one direction may result in a displacement of the boundary of the corresponding cell in the opposite direction; 2) a cell corresponding to a BS may be the union of disconnected subcells. We then study the hierarchical equilibrium in the combined BS location and mobile association problem: we determine where to locate the BSs so as to maximize the revenues obtained at the induced SINR mobile association game. We consider the cases of a single frequency band and of two frequency bands of operation. Finally, we also consider hierarchical equilibria in two-frequency systems with successive interference cancellation.
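The SINR association rule underlying the game can be illustrated in a few lines. This is a toy sketch with a simple power-law path loss r^(-alpha) and invented parameters; the paper's propagation model and game-theoretic analysis are not reproduced here.

```python
def sinr(x, bs_positions, serving, power=1.0, alpha=3.0, noise=1e-3):
    """SINR at 1-D location x when served by BS index `serving`.

    Gain from each BS is power * r^-alpha (r floored to avoid a
    singularity at the BS itself); all other BSs are interference.
    """
    gains = [power * max(abs(x - b), 0.01) ** (-alpha) for b in bs_positions]
    interference = sum(g for i, g in enumerate(gains) if i != serving)
    return gains[serving] / (noise + interference)

def associate(x, bs_positions):
    """Index of the BS offering the best SINR at x (the association rule)."""
    return max(range(len(bs_positions)),
               key=lambda i: sinr(x, bs_positions, i))
```

Sweeping `associate` over a grid of locations traces out the SINR cells; with more BSs or frequency reuse, the disconnected-subcell effect noted in the abstract can appear.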
Abstract:
We use information-theoretic achievable rate formulas for the multi-relay channel to study the problem of optimally placing relay nodes along the straight line joining a source node and a destination node. The achievable rate formulas that we utilize are for full-duplex radios at the relays and decode-and-forward relaying. For the single-relay case, with individual power constraints at the source node and the relay node, we provide explicit formulas for the optimal relay location and the optimal power allocation to the source-relay channel, for the exponential and the power-law path-loss channel models. For the multiple-relay case, we consider exponential path loss and a total power constraint over the source and the relays, and derive an optimization problem whose solution provides the optimal relay locations. Numerical results suggest that at low attenuation the relays are mostly clustered close to the source in order to be able to cooperate among themselves, whereas at high attenuation they are uniformly placed and work as repeaters. We also prove that a constant rate, independent of the attenuation in the network, can be achieved by placing a large enough number of relay nodes uniformly between the source and the destination, under the exponential path-loss model with a total power constraint.
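For the single-relay case, the flavor of the placement problem can be sketched by brute force over relay positions on the unit interval under exponential path loss. The sketch assumes the textbook full-duplex decode-and-forward rate min(C(Ps·g_sr), C(Ps·g_sd + Pr·g_rd)); the paper derives closed-form answers that this grid search only approximates, and the parameters are illustrative.

```python
import math

def df_rate(y, ps, pr, rho=2.0):
    """Decode-and-forward rate with the relay at position y in [0, 1].

    g(d) = exp(-rho*d) is the exponential path-loss gain; the rate is the
    min of the source->relay link and the combined link to the destination.
    """
    C = lambda s: math.log2(1.0 + s)
    g = lambda d: math.exp(-rho * d)
    return min(C(ps * g(y)), C(ps * g(1.0) + pr * g(1.0 - y)))

def best_relay_position(ps=1.0, pr=1.0, rho=2.0, steps=1000):
    """Grid search for the rate-maximizing relay position."""
    ys = [i / steps for i in range(steps + 1)]
    return max(ys, key=lambda y: df_rate(y, ps, pr, rho))
```

With equal powers and rho = 2 the optimum sits strictly between source and destination, where the two terms of the min balance, consistent with the clustering-versus-repeater behavior the abstract describes.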
Abstract:
Our work is motivated by impromptu (or "as-you-go") deployment of wireless relay nodes along a path, a need that arises in many situations. In this paper, the path is modeled as starting at the origin (where there is the data sink, e.g., the control center) and evolving randomly over a lattice in the positive quadrant. A person walks along the path, deploying relay nodes as he goes. At each step, the path can, randomly, either continue in the same direction, take a turn, or come to an end, at which point a data source (e.g., a sensor) has to be placed that will send packets to the data sink. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate at the source is very low, and simple link-by-link scheduling, we consider the problem of sequential relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set and show that this set is characterized by a boundary (with respect to the position of the last placed relay) beyond which it is optimal to place the next relay. Next, based on a simpler one-step-look-ahead characterization of the optimal policy, we propose an algorithm which is proved to converge to the optimal placement set in a finite number of steps and which is faster than value iteration. We show by simulations that the distance-threshold-based heuristic, usually assumed in the literature, is close to optimal, provided that the threshold distance is carefully chosen.
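The distance-threshold heuristic mentioned at the end can be sketched and tuned as follows. The cost model is illustrative (a convex hop cost d² and a per-relay price of 4 on a deterministic path); the paper's MDP, random path evolution, and exact cost structure are not reproduced.

```python
def deployment_cost(path_len, thr, hop_cost=lambda d: d * d, relay_price=4.0):
    """Walk a path of `path_len` unit steps, placing a relay every time the
    distance since the last placed node reaches `thr`; return total cost
    (sum of convex hop costs plus a price per relay placed)."""
    hops, since_last, relays = [], 0, 0
    for _ in range(path_len):          # walk one step at a time
        since_last += 1
        if since_last == thr:          # threshold reached: place a relay
            hops.append(since_last)
            relays += 1
            since_last = 0
    hops.append(since_last)            # final hop to the source at the end
    return sum(hop_cost(d) for d in hops) + relay_price * relays

def best_threshold(path_len, max_thr=20):
    """Grid search over thresholds -- the 'carefully chosen' part."""
    return min(range(1, max_thr + 1),
               key=lambda t: deployment_cost(path_len, t))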
Abstract:
In this paper, we study the problem of designing a multi-hop wireless network for interconnecting sensors (hereafter called source nodes) to a base station (BS) by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop count bound for paths from the sources to the BS. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard. For this problem, we propose a polynomial time approximation algorithm based on iteratively constructing shortest path trees and heuristically pruning away relay nodes until the hop count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average-case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem, as a function of the number of source nodes and the hop count bound. To the best of our knowledge, this average-case analysis is the first of its kind in the relay placement literature. Since the design is based on a light traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met.
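The construct-then-prune idea can be sketched with BFS hop counts: start from the full potential-relay graph and greedily drop relays while every source still reaches the BS within the hop bound. The adjacency format, the assumption that sources may also forward, and the pruning order are simplifying choices of this sketch, not the paper's algorithm.

```python
from collections import deque

def hops_to_bs(adj, nodes, bs):
    """BFS hop counts from bs, traversing only through `nodes`."""
    dist = {bs: 0}
    q = deque([bs])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in nodes and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def prune_relays(adj, sources, relays, bs, hmax):
    """Greedily remove relays while all sources stay within hmax hops."""
    chosen = set(relays)

    def feasible(rel):
        dist = hops_to_bs(adj, rel | set(sources), bs)
        return all(s in dist and dist[s] <= hmax for s in sources)

    assert feasible(chosen), "even the full relay set violates the hop bound"
    for r in sorted(relays):           # fixed order; the paper is smarter here
        if feasible(chosen - {r}):
            chosen.discard(r)
    return chosen
```

Each feasibility check is one BFS, so the whole prune is polynomial, matching the abstract's complexity claim in spirit.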
Abstract:
We are given a set of sensors at given locations, a set of potential locations for placing base stations (BSs, or sinks), and another set of potential locations for placing wireless relay nodes. There is a cost for placing a BS and a cost for placing a relay. The problem we consider is to select a set of BS locations, a set of relay locations, and an association of sensor nodes with the selected BS locations, so that the number of hops in the path from each sensor to its BS is bounded by h_max, and, among all such feasible networks, the cost of the selected network is minimum. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard, and is hard even to approximate within a constant factor. For this problem, we propose a polynomial time approximation algorithm (SmartSelect) based on a relay placement algorithm proposed in our earlier work, along with a modification of the greedy algorithm for weighted set cover. We have analyzed the worst-case approximation guarantee for this algorithm. We have also proposed a polynomial time heuristic to improve upon the solution provided by SmartSelect. Our numerical results demonstrate that the algorithms provide good quality solutions using very little computation time in various randomly generated network scenarios.
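The greedy rule that SmartSelect adapts is the classic weighted set-cover heuristic: repeatedly pick the set with the best cost per newly covered element. Shown here in its plain form (not the paper's modified version); it assumes the given sets can cover the universe.

```python
def greedy_set_cover(universe, sets, costs):
    """Greedy weighted set cover.

    sets:  dict name -> frozenset of covered elements
    costs: dict name -> placement cost
    Returns (names picked in order, total cost).
    """
    uncovered, picked, total = set(universe), [], 0.0
    while uncovered:
        # most cost-effective set: lowest cost per newly covered element
        name = min(
            (n for n in sets if sets[n] & uncovered),
            key=lambda n: costs[n] / len(sets[n] & uncovered),
        )
        picked.append(name)
        total += costs[name]
        uncovered -= sets[name]
    return picked, total
```

In the BS-selection setting, each "set" would be the group of sensors a candidate BS can serve within the hop bound, with the BS-plus-relay cost as its weight; that mapping is this sketch's reading of the abstract, not a quoted detail.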
Abstract:
Turbulent mixed convection flow and heat transfer in a shallow enclosure, with and without partitions, and with a series of block-like heat-generating components is studied numerically for a range of Reynolds and Grashof numbers with a time-dependent formulation. The flow and temperature distributions are taken to be two-dimensional. Regions with the same velocity and temperature distributions can be identified by assuming repeated placement of the blocks and of the fluid entry and exit openings at regular distances, neglecting end wall effects. One half of such a module is chosen as the computational domain, taking into account the symmetry about the vertical centreline. The mixed convection inlet velocity is treated as the sum of forced and natural convection components, with the individual components delineated based on the pressure drop across the enclosure. The Reynolds number is based on the forced convection velocity. Turbulence computations are performed using the standard k–ε model and the Launder–Sharma low-Reynolds-number k–ε model. The results show that higher Reynolds numbers tend to create a recirculation region of increasing strength in the core region and that the effect of buoyancy becomes insignificant beyond a Reynolds number of typically 5×10^5. The Euler number in turbulent flows is higher by about 30 per cent than that in the laminar regime. The dimensionless inlet velocity in pure natural convection varies as Gr^(1/3). Results are also presented for a number of quantities of interest, such as the flow and temperature distributions, Nusselt number, pressure drop, and the maximum dimensionless temperature in the block, along with correlations.
Abstract:
This article addresses the problem of how to select the optimal combination of sensors and how to determine their optimal placement in a surveillance region in order to meet the given performance requirements at minimal cost for a multimedia surveillance system. We propose to solve this problem by obtaining a performance vector, with its elements representing the performances of subtasks, for a given input combination of sensors and their placement. Then we show that the optimal sensor selection problem can be converted into an Integer Linear Programming (ILP) problem by using a linear model for computing the optimal performance vector corresponding to a sensor combination; the optimal performance vector of a sensor combination refers to the performance vector corresponding to the optimal placement of that combination. To demonstrate the utility of our technique, we design and build a surveillance system consisting of PTZ (pan-tilt-zoom) cameras and active motion sensors for capturing faces. Finally, we show experimentally that optimal placement of sensors based on the design maximizes the system performance.
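For a handful of candidate sensors, the selection that an ILP solver would return can be checked by brute force over subsets — a useful oracle when prototyping before reaching for a solver. The additive cost/performance model and all numbers below are made up for illustration; the paper's performance vector is multi-dimensional.

```python
from itertools import combinations

def best_combination(sensors, budget):
    """sensors: dict name -> (cost, performance).

    Maximize summed performance subject to summed cost <= budget,
    assuming performance is additive across sensors (a simplification).
    """
    names = list(sensors)
    best, best_perf = (), float("-inf")
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(sensors[n][0] for n in combo)
            perf = sum(sensors[n][1] for n in combo)
            if cost <= budget and perf > best_perf:
                best, best_perf = combo, perf
    return set(best), best_perf
```

The exhaustive search is exponential in the number of candidates, which is exactly why the article reduces the problem to an ILP for realistic instance sizes.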
Abstract:
Hyperbranched polyethers having poly(ethylene glycol) (PEG) segments at their molecular periphery were prepared by a simple procedure wherein an AB2-type monomer was melt-polycondensed with an A-type monomer, namely heptaethylene glycol monomethyl ether. The presence of a large number of PEG units at the termini imparted a lower critical solution temperature (LCST) to these copolymers, above which they precipitated out of aqueous solution. In an effort to understand the effect of various molecular structural parameters on the LCST, the length of the hydrophobic spacer segment within the hyperbranched core and the extent of PEGylation were varied. Additionally, linear analogues that incorporate pendant PEG segments were also prepared, and comparison of their LCST with that of the hyperbranched analogue clearly revealed that the hyperbranched topology leads to a substantial increase in the LCST, highlighting the importance of the peripheral placement of the PEG units.
Abstract:
A series of new chiral palladium-bisphosphinite complexes have been prepared from readily available, naturally occurring chiral alcohols. The complexes were used to efficiently carry out catalytic allylic alkylation of 1,3-diphenylpropene-2-yl acetate with dimethyl malonate. The complexes based on derivatives of ascorbic acid carry out enantioselective alkylations, one of which showed an ee as high as 97%. Based on the structural characterization, it can be surmised that strategic placement of phenyl groups is key to higher enantioselectivities.
Abstract:
The goal of this study is multi-mode structural vibration control in the composite fin-tip of an aircraft. A structural model of the composite fin-tip with surface-bonded piezoelectric actuators is developed using the finite element method. The finite element model is updated experimentally to reflect the natural frequencies and mode shapes accurately. A model order reduction technique is employed to reduce the finite element structural matrices before developing the controller. A particle-swarm-based evolutionary optimization technique is used for optimal placement of piezoelectric patch actuators and accelerometer sensors to suppress vibration. H∞-based active vibration controllers are designed directly in the discrete domain and implemented using dSpace® (DS-1005) electronic signal processing boards. Significant vibration suppression in the multiple bending modes of interest is experimentally demonstrated for sinusoidal and band-limited white noise forcing functions.
Abstract:
Thermodynamic analysis of carbohydrate binding by Artocarpus integrifolia (jackfruit) agglutinin (jacalin) shows that, among monosaccharides, Me alpha GalNAc (methyl-alpha-N-acetylgalactosamine) is the strongest binding ligand. Despite its strong affinity for Me alpha GalNAc and Me alpha Gal, the lectin binds very poorly when Gal and GalNAc are in alpha-linkage with other sugars, such as in A- and B-blood-group trisaccharides, Gal alpha 1-3Gal and Gal alpha 1-4Gal. These binding properties are explained by considering the thermodynamic parameters in conjunction with the minimum energy conformations of these sugars. It binds to Gal beta 1-3GalNAc alpha Me with 2800-fold stronger affinity over Gal beta 1-3GalNAc beta Me. It does not bind to asialo-GM1 (monosialoganglioside) oligosaccharide. Moreover, it binds to Gal beta 1-3GalNAc alpha Ser, the authentic T (Thomsen-Friedenreich)-antigen, with about 2.5-fold greater affinity compared with Gal beta 1-3GalNAc. Asialoglycophorin A was found to be about 169,333-fold stronger as an inhibitor than Gal beta 1-3GalNAc. The present study thus reveals the exquisite specificity of A. integrifolia lectin for the T-antigen. The appreciable binding of the disaccharides Glc beta 1-3GalNAc and GlcNAc beta 1-3Gal, and the very poor binding of beta-linked disaccharides which, instead of Gal and GalNAc, contain other sugars at the reducing end, underscore the important contribution made by Gal and GalNAc at the reducing end for recognition by the lectin. The ligand-structure-dependent alterations of the c.d. spectrum in the tertiary structural region of the protein allow the placement of various sugar units in the combining region of the lectin. These studies suggest that the primary subsite (subsite A) can accommodate only Gal or GalNAc or alpha-linked Gal or GalNAc, whereas the secondary subsite (subsite B) can associate either with GalNAc beta Me or Gal beta Me.
Considering these factors, a likely arrangement for the various disaccharides in the binding site of the lectin is proposed. Its exquisite specificity for the authentic T-antigen, Gal beta 1-3GalNAc alpha Ser, together with its virtual non-binding to A- and B-blood-group antigens, Gal beta 1-3GalNAc beta Me, and asialo-GM1, should make A. integrifolia lectin a valuable probe for monitoring the expression of T-antigen on cell surfaces.
Abstract:
A design methodology for a wave-absorbing active material system is reported. The design enforces equivalence between an assumed material model having wave-absorbing behavior and a set of target feedback controllers for an array of microelectromechanical transducers which are an integral part of the active material system. The proposed methodology is applicable to problems involving the control of acoustic waves in passive-active material systems with complex constitutive behavior at different length scales. A one-dimensional, stress-relaxation-type constitutive model involving a viscous damping mechanism is considered, which shows asymmetric wave dispersion characteristics about the half-line. The acoustic power flow and asymptotic stability of such a material system are studied. A single-sensor, non-collocated linear feedback control system in a one-dimensional finite waveguide, which is a representative volume element of an active material system, is considered. Equivalence between the exact dynamic equilibria of these two systems is imposed. This results in the solution space of the design variables, namely the equivalent damping coefficient, the wavelength(s) to be controlled, and the location of the sensor. The characteristics of the controller transfer functions and their pole-placement problem are studied.
Abstract:
The study of the electrochemical reduction of the Cu(II)-EDTA system by the phase-sensitive a.c. impedance method at a dropping mercury electrode reveals several interesting features. The complex plane polarograms exhibit a loop-like shape, in contrast to classical zinc ion reduction, where a crest-like shape is found. Furthermore, the relative placement of the peaks of the in-phase and quadrature components, and the relative placement of the portions before and after the peaks of the complex plane polarograms, differ from those of zinc ion reduction. The complex plane plots suggest that the electrochemical reduction of Cu-EDTA is charge-transfer controlled.
Abstract:
Standard-cell design methodology is an important technique in semicustom VLSI design. It lends itself to easy automation of the crucial layout step, and many algorithms have been proposed in the recent literature for the efficient placement of standard cells. While many studies have identified the Kernighan-Lin bipartitioning method as superior to most others, it must be admitted that the behavior of the method is erratic and strongly dependent on the initial partition. This paper proposes a novel algorithm for overcoming some of the deficiencies of the Kernighan-Lin method. The approach is based on an analogy between the placement problem and neural networks, and, by using some of the organizing principles of such networks, an attempt is made to improve the behavior of the bipartitioning scheme. The results have been encouraging, and the approach seems promising for other NP-complete problems in circuit layout.
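For reference, the step whose initial-partition sensitivity the paper targets is a Kernighan-Lin improvement pass. Below is a deliberately simplified single pass on an unweighted graph: swap the best node pair, lock it, repeat, then keep the best prefix of swaps. Real KL uses incremental gain updates rather than recomputing the cut, so this is an illustration, not the classical implementation.

```python
def cut_size(edges, part_a):
    """Number of edges crossing the bipartition (part_a vs. the rest)."""
    return sum(1 for u, v in edges if (u in part_a) != (v in part_a))

def kl_pass(nodes, edges, part_a):
    """One Kernighan-Lin-style pass; returns a (possibly) improved side A."""
    a = set(part_a)
    best_a, best_cut = set(a), cut_size(edges, a)
    locked = set()
    for _ in range(min(len(a), len(nodes) - len(a))):
        candidates = [(u, v)
                      for u in a - locked
                      for v in set(nodes) - a - locked]
        if not candidates:
            break
        # pick the unlocked swap giving the smallest resulting cut
        u, v = min(candidates,
                   key=lambda p: cut_size(edges, (a - {p[0]}) | {p[1]}))
        a = (a - {u}) | {v}
        locked |= {u, v}               # KL locks swapped nodes for the pass
        c = cut_size(edges, a)
        if c < best_cut:               # remember the best prefix of swaps
            best_a, best_cut = set(a), c
    return best_a
```

Starting from a bad partition of two triangles joined by a single edge, one pass recovers the natural min-cut split; how reliably that happens across random initial partitions is exactly the erratic behavior the paper's neural-network analogy tries to tame.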