Abstract:
In this paper, a method of tracking the peak power in a wind energy conversion system (WECS) is proposed, which is independent of the turbine parameters and air density. The algorithm searches for the peak power by varying the speed in the desired direction. The generator is operated in the speed-control mode, with the speed reference dynamically modified in accordance with the magnitude and direction of change of the active power. The peak power points on the $P$-$\omega$ curve correspond to $dP/d\omega = 0$, and this fact is exploited in the optimum-point search algorithm. The generator considered is a wound-rotor induction machine whose stator is connected directly to the grid and whose rotor is fed through back-to-back pulse-width-modulation (PWM) converters. Stator-flux-oriented vector control is applied to control the active and reactive current loops independently. The turbine characteristics are generated by a dc motor fed from a commercial dc drive. All of the control loops are executed by a single-chip digital signal processor (DSP) controller, the TMS320F240. Experimental results show that the performance of the control algorithm compares well with that of the conventional torque control method.
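The search logic above amounts to a perturb-and-observe hill climb on the $P$-$\omega$ curve: perturb the speed reference, watch the change in active power, and reverse direction when power falls. The minimal sketch below illustrates that mechanism; the toy power model, step size, and all names in it are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the hill-climbing peak-power search (assumed toy model).

def wecs_power(omega, wind_speed=8.0):
    """Toy turbine P-omega curve with a single peak (assumed stand-in
    for the measured electrical power)."""
    omega_opt = 1.2 * wind_speed          # assumed optimal operating point
    return max(0.0, 1000.0 - 40.0 * (omega - omega_opt) ** 2)

def track_peak(omega0, step=0.1, iters=200):
    omega, p_prev = omega0, wecs_power(omega0)
    direction = +1.0
    for _ in range(iters):
        omega += direction * step         # perturb the speed reference
        p = wecs_power(omega)
        if p < p_prev:                    # power fell: dP/d(omega) changed sign
            direction = -direction        # reverse the search direction
        p_prev = p
    return omega                          # settles near dP/d(omega) = 0

print(track_peak(omega0=5.0))             # converges near the assumed peak at 9.6
```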
Abstract:
Context-sensitive points-to analysis is critical for several program optimizations. However, as the number of contexts grows exponentially, the storage requirements of the analysis increase tremendously for large programs, making the analysis non-scalable. We propose a scalable, flow-insensitive, context-sensitive, inclusion-based points-to analysis that uses a specially designed multi-dimensional Bloom filter to store the points-to information. Two key observations motivate our proposal: (i) points-to information (between pointer and object and between pointer and pointer) is sparse, and (ii) moving from an exact to an approximate representation of points-to information only reduces precision without affecting the correctness of the (may-points-to) analysis. By using an approximate representation, a multi-dimensional Bloom filter can significantly reduce the memory requirements with a probabilistic bound on the loss in precision. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs reveals that, with an average storage requirement of 4 MB, our approach achieves almost the same precision (98.6%) as the exact implementation. By increasing the average memory to 27 MB, it achieves precision up to 99.7% for these benchmarks. Using Mod/Ref analysis as the client, we find that the client analysis is rarely affected even when there is some loss of precision in the points-to representation. The NoModRef percentage is within 2% of the exact analysis while requiring 4 MB (maximum 15 MB) of memory and less than 4 minutes on average for the points-to analysis. Another major advantage of our technique is that it allows precision to be traded off against the memory usage of the analysis.
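As a rough illustration of the approximate representation, here is a minimal flat Bloom-filter sketch for may-points-to facts. The paper's multi-dimensional filter is more structured than this; the hash scheme, sizes, and class names here are assumptions for illustration only.

```python
# Flat Bloom-filter sketch of an approximate may-points-to store.
import hashlib

class PointsToBloom:
    def __init__(self, m=8192, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _hashes(self, ptr, obj):
        # k independent hash indices for the (pointer, object) pair
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{ptr}->{obj}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, ptr, obj):
        for idx in self._hashes(ptr, obj):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def may_point_to(self, ptr, obj):
        # False positives are possible (lost precision); false negatives
        # are not, so a may-points-to client remains sound.
        return all(self.bits[idx // 8] >> (idx % 8) & 1
                   for idx in self._hashes(ptr, obj))

pt = PointsToBloom()
pt.add("p", "x")
print(pt.may_point_to("p", "x"), pt.may_point_to("p", "y"))  # True False (w.h.p.)
```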
Abstract:
The method of stress characteristics has been employed to compute the end-bearing capacity of driven piles. The dependency of the soil's internal friction angle on the stress level has been incorporated to achieve more realistic predictions of the end-bearing capacity of piles. The validity of the superposition assumption in the bearing capacity equation based on soil plasticity concepts, when applied to deep foundations, has been examined. Fourteen pile case histories were compiled, with cone penetration tests (CPT) performed in the vicinity of the different pile locations. The end-bearing capacity of the piles was computed using different methods, namely static analysis, the effective stress approach, direct CPT, and the proposed approach. A comparison between the predictions made by the different methods and the measured records shows that the stress-level-based method of stress characteristics agrees more closely with the experimental data. Finally, the end-bearing capacity of driven piles in sand was expressed as a general expression with a new coefficient that accounts for the various effects contributing to the bearing capacity. The influence of the soil's nonassociative flow rule has also been included to achieve more realistic results.
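For reference, the superposition principle examined here is the one underlying the classical bearing-capacity expression $q_{ult} = c N_c + q N_q + \frac{1}{2}\gamma B N_\gamma$, where $c$ is the cohesion, $q$ the overburden stress at the bearing level, $\gamma$ the unit weight, $B$ the foundation width, and $N_c$, $N_q$, $N_\gamma$ the bearing capacity factors (functions of the friction angle $\phi$). This is the standard textbook form quoted for context, not the paper's modified expression.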
Abstract:
Regenerating codes are a class of distributed storage codes that allow for efficient repair of failed nodes, as compared to traditional erasure codes. An [n, k, d] regenerating code permits the data to be recovered by connecting to any k of the n nodes in the network, while requiring that a failed node be repaired by connecting to any d nodes. The amount of data downloaded for repair is typically much smaller than the size of the source data. Previous constructions of exact-regenerating codes have been confined to the case n = d + 1. In this paper, we present optimal, explicit constructions of (a) Minimum Bandwidth Regenerating (MBR) codes for all values of [n, k, d] and (b) Minimum Storage Regenerating (MSR) codes for all [n, k, d >= 2k - 2], using a new product-matrix framework. The product-matrix framework is also shown to significantly simplify system operation. To the best of our knowledge, these are the first constructions of exact-regenerating codes that allow the number n of nodes in the network to be chosen independently of the other parameters. The paper also contains a simpler description, in the product-matrix framework, of a previously constructed MSR code with [n = d + 1, k, d >= 2k - 1].
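A small numerical sketch of the product-matrix MBR idea is given below, using real arithmetic for readability (actual codes operate over a finite field). The symmetric message-matrix layout follows the MBR construction; the Vandermonde choice of encoding matrix and the parameters are illustrative assumptions. Because the message matrix M is symmetric, the d scalars sent by the helpers suffice to rebuild the failed node's row exactly.

```python
# Sketch of product-matrix MBR encoding and node repair (reals, not a field).
import numpy as np

n, k, d = 5, 3, 4                      # any d rows of Psi must be invertible

# Symmetric d x d message matrix M = [[S, T], [T^T, 0]] holding
# B = k*d - k*(k-1)//2 data symbols.
rng = np.random.default_rng(0)
S = rng.integers(0, 10, (k, k)).astype(float)
S = (S + S.T) / 2                      # symmetric k x k block
T = rng.integers(0, 10, (k, d - k)).astype(float)
M = np.block([[S, T], [T.T, np.zeros((d - k, d - k))]])

# Vandermonde encoding matrix: row i is psi_i^T; node i stores psi_i^T M.
Psi = np.vander(np.arange(1, n + 1), d, increasing=True).astype(float)
stored = Psi @ M                       # row i = contents of node i

# Repair node f: each of d helpers h sends the single scalar psi_h^T M psi_f.
f = 2
helpers = [h for h in range(n) if h != f][:d]
received = np.array([stored[h] @ Psi[f] for h in helpers])
Psi_rep = Psi[helpers]                 # d x d, invertible by construction
M_psi_f = np.linalg.solve(Psi_rep, received)   # = M psi_f
print(np.allclose(M_psi_f, stored[f]))         # True: M symmetric => psi_f^T M
```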
Abstract:
In many cases, a mobile user has the option of connecting to one of several IEEE 802.11 access points (APs), each using an independent channel. User throughput at each AP is determined by the number of other users as well as the frame size and physical rate being used. We consider the scenario where users can multihome, i.e., split their traffic amongst all the available APs, based on the throughput they obtain and the price charged. Thus, they are involved in a non-cooperative game with each other. We convert the problem into a fluid model and show that under a pricing scheme, which we call the cost price mechanism, the total system throughput is maximized, i.e., the system suffers no loss of efficiency due to selfish dynamics. We also study the case where the Internet Service Provider (ISP) can charge prices greater than those of the cost price mechanism. We show that even in this case multihoming outperforms unihoming, both in terms of throughput and in terms of profit to the ISP.
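The toy computation below contrasts the two attachment modes under a deliberately simplified capacity-sharing model (fixed per-AP saturation rates shared equally among attached users, an assumption and not the paper's fluid model): with unihoming, some APs can sit idle, while splitting traffic lets the user population keep every AP busy.

```python
# Toy unihoming vs. multihoming throughput comparison (assumed model).
from itertools import product

caps = [6.0, 4.0, 2.0]        # assumed per-AP saturation rates (Mb/s)
n_users = 2

# Unihoming: each user attaches to exactly one AP; an occupied AP's
# saturation rate is shared among its users, so aggregate throughput is
# the sum of the rates of the occupied APs.
best_uni = max(sum(caps[a] for a in set(assign))
               for assign in product(range(len(caps)), repeat=n_users))

# Multihoming: users split traffic across all APs, keeping each one busy.
multi = sum(caps)
print(f"best unihoming: {best_uni} Mb/s, multihoming: {multi} Mb/s")  # 10.0 vs 12.0
```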
Abstract:
The standard quantum search algorithm lacks a feature, enjoyed by many classical algorithms, of having a fixed point, i.e., monotonic convergence towards the solution. Here we present two variations of the quantum search algorithm which get around this limitation. The first replaces the selective inversions in the algorithm by selective phase shifts of $\frac{\pi}{3}$. The second controls the selective inversion operations using two ancilla qubits, and irreversible measurement operations on the ancilla qubits drive the starting state towards the target state. Using $q$ oracle queries, these variations reduce the probability of finding a non-target state from $\epsilon$ to $\epsilon^{2q+1}$, which is asymptotically optimal. Similar ideas can lead to robust quantum algorithms and provide conceptually new schemes for error correction.
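The first variation can be checked numerically in a two-dimensional toy model: with selective phase shifts of $\frac{\pi}{3}$, one round of $U R_s U^{\dagger} R_t U$ takes the failure probability from $\epsilon$ to $\epsilon^{3}$ (the $q = 1$ case of $\epsilon^{2q+1}$). The sketch below assumes a real overlap $\sqrt{1-\epsilon}$ between $U|s\rangle$ and the target for simplicity.

```python
# Numerical check of the pi/3 fixed-point identity in a 2-D toy model.
import numpy as np

def phase_shift(v, phi):
    """R_v(phi) = I - (1 - e^{i phi}) |v><v|  (selective phase shift)."""
    return np.eye(2, dtype=complex) - (1 - np.exp(1j * phi)) * np.outer(v, v.conj())

eps = 0.1
s = np.array([1.0, 0.0], dtype=complex)          # start state
t = np.array([1.0, 0.0], dtype=complex)          # target state
theta = np.arcsin(np.sqrt(eps))
U = np.array([[np.cos(theta), -np.sin(theta)],   # |<t|U|s>|^2 = 1 - eps
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

phi = np.pi / 3
V = U @ phase_shift(s, phi) @ U.conj().T @ phase_shift(t, phi) @ U
fail = 1 - abs(t.conj() @ V @ s) ** 2
print(fail, eps ** 3)                            # both ~1e-3
```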
Abstract:
We propose a Low Noise Amplifier (LNA) architecture for a power-scalable receiver front end (FE) for Zigbee. The motivation for a power-scalable receiver is to enable minimum-power operation while meeting the performance needed at run time. We use simple models to find empirical relations between the available signal and interference levels and the required Noise Figure (NF) and third-order input intercept point (IIP3). The architecture has two independent digital knobs to control the NF and IIP3. An acceptable input match under adaptation is achieved by using an active-inductor configuration for the source-degeneration inductor of the LNA. The low-IF receiver front end (LNA with I and Q mixers) was fabricated in a 130 nm RF CMOS process and tested.
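For context, relations of this kind typically take standard link-budget forms (textbook identities, not necessarily the paper's exact models): the noise budget $NF \le P_{sig} + 174 - 10\log_{10}B - SNR_{req}$ (in dB, with the thermal noise floor at $-174$ dBm/Hz and bandwidth $B$ in Hz), and, since the input-referred third-order intermodulation product of two interferers at $P_{int}$ is $P_{IM3} = 3P_{int} - 2\,IIP3$, the linearity budget $IIP3 \ge \frac{3P_{int} - (P_{sig} - SNR_{req})}{2}$, with all powers in dBm.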
Abstract:
A new approach to the machine representation and analysis of three-dimensional objects is presented. The representation, based on the notion of the "skeleton" of an object, leads to a scheme for comparing two given object views for shape relations. The objects are composed of long, thin, rectangular prisms joined at their ends. The input picture to the program is the digitized line drawing portraying the three-dimensional object. To compare two object views, two types of characteristic vertices, called "cardinal points" and "end-cardinal points," occurring consistently at the bends and open ends of the object, are detected. The skeletons are then obtained as connected paths passing through these points. The shape relationships between the objects are then obtained from the matching characteristics of their skeletons. The method explores the possibility of a more detailed and finer analysis, leading to the detection of features such as symmetry and asymmetry and other shape properties of an object.
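A loose sketch of the cardinal-point idea on an ordered polyline follows. The paper works from digitized line drawings, so taking the vertex sequence as given, and the simple label-sequence matching rule, are simplifying assumptions rather than the paper's procedure.

```python
# Sketch: keep the open ends and the direction-change vertices of a path.
def cardinal_points(path):
    """Label the open ends ('end-cardinal') and the bends ('cardinal')
    of an ordered polyline, forming a crude skeleton."""
    pts = [("end-cardinal", path[0])]
    for prev, cur, nxt in zip(path, path[1:], path[2:]):
        d1 = (cur[0] - prev[0], cur[1] - prev[1])
        d2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        if d1[0] * d2[1] - d1[1] * d2[0] != 0:   # direction change => bend
            pts.append(("cardinal", cur))
    pts.append(("end-cardinal", path[-1]))
    return pts

# two L-shaped paths match: same sequence of bends and ends
a = cardinal_points([(0, 0), (1, 0), (2, 0), (2, 1)])
b = cardinal_points([(5, 5), (6, 5), (7, 5), (7, 6)])
print([k for k, _ in a] == [k for k, _ in b])    # True
```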
Abstract:
A torque control scheme, based on a direct torque control (DTC) algorithm using a 12-sided polygonal voltage space vector, is proposed for variable-speed control of an open-end winding induction motor drive. The conventional DTC scheme uses the stator flux vector for sector identification and then selects the switching vector to control the stator flux and torque. The proposed DTC scheme, however, selects switching vectors based on the sector information of the estimated fundamental stator voltage vector and its position relative to the stator flux vector. The fundamental stator voltage estimation is based on the steady-state model of the induction motor, and the synchronous frequency of operation is derived from the computed stator flux using a low-pass-filter technique. The proposed DTC scheme uses the exact positions of the fundamental stator voltage vector and the stator flux vector to select the optimal switching vector for fast torque control with small variation of the stator flux within the hysteresis band. The present DTC scheme allows full-load torque control, with fast transient response, down to very low speeds of operation, with reduced switching-frequency variation. Extensive experimental results are presented to show the fast torque control over the operating speed range from zero to rated speed.
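For orientation, the skeleton below shows the generic hysteresis-plus-sector mechanism that DTC vector selection is built on. The sector count of 12 mirrors the polygonal structure, but the lookup rule is a placeholder assumption, not the paper's voltage-vector-based selection logic.

```python
# Generic skeleton of hysteresis-based DTC vector selection (illustrative).
import math

def sector(angle, n_sectors=12):
    """Index of the sector containing `angle` (rad), for n_sectors slices."""
    return int((angle % (2 * math.pi)) / (2 * math.pi / n_sectors))

def select_vector(torque_err, flux_err, flux_angle, t_band=0.1, f_band=0.02):
    """Pick a switching-vector index from hysteresis comparators + sector."""
    dT = 1 if torque_err > t_band else (-1 if torque_err < -t_band else 0)
    dF = 1 if flux_err > f_band else -1
    s = sector(flux_angle)
    # hypothetical lookup: advance by 1..2 sectors to raise torque/flux,
    # retreat to lower them (placeholder rule, not the paper's table)
    return (s + dT * (2 if dF > 0 else 1)) % 12

print(select_vector(torque_err=0.5, flux_err=-0.05, flux_angle=1.0))
```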
Abstract:
Relations are developed for the growth and consumption rates of a finite-thickness layer as an end member and of the product phases in the interdiffusion zone. We have used two different methodologies, a diffusion-based approach and a physico-chemical approach, to derive the same relations. We show that the diffusion-based approach is rather straightforward; however, the physico-chemical approach is much more versatile. It was found that the position of the marker plane becomes ill-defined in the second stage of the interdiffusion process in a pure-A thin-layer/B couple, where two phases grow simultaneously.
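For reference, diffusion-controlled layer growth in the standard semi-infinite setting follows the parabolic law $\Delta x^{2} = 2 k_p t$, with $\Delta x$ the layer thickness and $k_p$ the parabolic growth constant. That textbook relation is quoted for context only; the relations developed here concern the departure from this picture once the end-member layer is finite and is itself consumed.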
Abstract:
A topology for voltage-space-phasor generation equivalent to a five-level inverter for an open-end winding induction motor is presented. The open-end winding induction motor is fed from both ends by two three-level inverters. The three-level inverters are realised by cascading two two-level inverters. This inverter scheme does not experience neutral-point fluctuations. Of the two three-level inverters, only one switches at any instant in the lower speed ranges. In the multilevel carrier-based SPWM used for the proposed drive, a progressive discrete DC bias, dependent on the speed range, is added to the reference wave to reduce the number of inverter switchings. The drive is implemented and tested with a 1 HP open-end winding induction motor, and experimental results are presented.
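The carrier-based mechanism can be pictured with the minimal sketch below: adding a DC bias keeps the reference inside one carrier band, so only the lower pair of levels toggles, echoing the single-inverter switching in the lower speed ranges. The bias value, carrier layout, and modulation depth are illustrative assumptions, not the paper's schedule.

```python
# Minimal level-shifted carrier SPWM with a DC-biased reference (assumed).
import math

def spwm_level(t, m=0.4, f_ref=50.0, f_carrier=2000.0, bias=0.5):
    """Compare a DC-biased sinusoidal reference against two stacked
    triangular carriers (output levels 0..2)."""
    ref = bias + m * math.sin(2 * math.pi * f_ref * t)    # biased reference
    tri = abs((2 * f_carrier * t) % 2 - 1)                # 0..1 triangle
    carriers = [tri, 1 + tri]                # carriers spanning [0,1], [1,2]
    return sum(ref > c for c in carriers)    # level 0, 1 or 2

# with bias=0.5 and m=0.4 the reference stays in [0.1, 0.9]: only the
# lower carrier band is crossed, so the output toggles between 0 and 1
print([spwm_level(n / 20000) for n in range(40)])
```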
Abstract:
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case d = n - 1. This code has a particularly simple graphical description and, most interestingly, has the ability to carry out exact repair without any need to perform arithmetic operations. We term this ability to perform repair through mere transfer of data "repair by transfer." The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios, which we term "helper node pooling," and show that it is the necessity to satisfy such scenarios that overconstrains the system.
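One well-known realization consistent with the graphical description for d = n - 1 places coded symbols on the edges of the complete graph K_n, with each node storing the symbols on its incident edges; repair then needs no arithmetic, since each surviving node simply transfers the one symbol it shares with the failed node. The sketch below uses labeled placeholder symbols and omits the MDS encoding of the source data onto the edges, which is an assumed simplification.

```python
# Sketch of repair-by-transfer placement on the complete graph K_n.
from itertools import combinations

n = 5
edges = list(combinations(range(n), 2))            # n(n-1)/2 coded symbols
store = {v: {e for e in edges if v in e} for v in range(n)}

failed = 3
# each surviving node transfers the single symbol it shares with `failed`
received = {next(iter(store[h] & store[failed])) for h in range(n) if h != failed}
print(received == store[failed])                   # True: exact repair, no arithmetic
```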