898 results for Maximum Power Point Tracking algorithms


Relevance: 30.00%

Abstract:

Objective. - The aim of this study was to propose a new method that allows the estimation of critical power (CP) from non-exhaustive tests using ratings of perceived exertion (RPE). Methods. - Twenty-two subjects underwent two practice trials for ergometer and Borg 15-point scale familiarization and for adaptation to severe exhaustive exercise. Thereafter, four exercise bouts were performed on different days for the estimation of CP and anaerobic work capacity (AWC) by the linear work-time equation, and CP(15), CP(17), AWC(15) and AWC(17) were estimated using the work and time to attainment of RPE 15 and RPE 17 on the Borg 15-point scale. Results. - CP, CP(15) and CP(17) (170-177 W) were not significantly different (P>0.05). However, AWC, AWC(15) and AWC(17) were all different from each other. The correlations of CP(15) and CP(17) with CP were strong (R=0.871 and 0.911, respectively), but AWC(15) and AWC(17) were not significantly correlated with AWC. Conclusion. - Submaximal RPE responses can be used for the estimation of CP from non-exhaustive exercise protocols. (C) 2009 Elsevier Masson SAS. All rights reserved.
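
As an illustration of the linear work-time model behind the CP and AWC estimates, the sketch below fits W = AWC + CP·t by least squares; the bout data and values are invented for the example, not taken from the study.

```python
import numpy as np

# Hypothetical total work W (J) and time to exhaustion t (s) from four bouts.
t = np.array([150.0, 240.0, 420.0, 600.0])   # time limits (s)
W = np.array([52e3, 68e3, 99e3, 131e3])      # total work (J)

# Linear work-time model: W = AWC + CP * t
# Least-squares fit: the slope is CP (in W), the intercept is AWC (in J).
A = np.vstack([t, np.ones_like(t)]).T
(CP, AWC), *_ = np.linalg.lstsq(A, W, rcond=None)
print(f"CP  ~ {CP:.1f} W")
print(f"AWC ~ {AWC / 1000:.1f} kJ")
```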

Relevance: 30.00%

Abstract:

This paper presents case studies of power systems using Sensitivity Analysis (SA) guided by Optimal Power Flow (OPF) problems in different operating scenarios. The case studies start from a known optimal solution obtained by the OPF. This optimal solution is called the base case, and from it new operating points may be evaluated by SA when perturbations occur in the system. The SA is based on Fiacco's theorem and has the advantage of not being an iterative process. To demonstrate the good performance of the proposed technique, tests were carried out on the IEEE 14-, 118- and 300-bus systems. (C) 2010 Elsevier Ltd. All rights reserved.
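
A minimal sketch of the Fiacco-style sensitivity idea on a toy equality-constrained quadratic program standing in for the OPF base case; all matrices and numbers are invented, and a real OPF would involve a nonlinear KKT system with active inequality constraints.

```python
import numpy as np

# Toy problem standing in for the OPF base case:
#   minimize 0.5 x'Qx - p'x   subject to   A x = b
Q = np.array([[4.0, 1.0], [1.0, 3.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
p = np.array([1.0, 2.0])

# Solve the KKT system for the base-case optimum (x*, lambda*).
K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([p, b]))
x_star, lam_star = sol[:2], sol[2:]

# Fiacco-style sensitivity: differentiate the KKT conditions with respect
# to the parameter p and solve the same KKT matrix for [dx/dp; dlam/dp].
rhs_sens = np.vstack([np.eye(2), np.zeros((1, 2))])
dx_dp = np.linalg.solve(K, rhs_sens)[:2, :]

# New operating point after a small perturbation, without re-optimizing
# (first order in general; exact for this quadratic toy problem).
dp = np.array([0.1, -0.05])
x_perturbed = x_star + dx_dp @ dp
print(x_star, x_perturbed)
```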

Relevance: 30.00%

Abstract:

The main purpose of this paper is to present the architecture of an automated system that allows real-time (online) monitoring and tracking of faults and electromagnetic transients observed in primary power distribution networks. Through the interconnection of this automated system with the utility operation center, it will be possible to provide an efficient tool to assist decision-making by the Operation Center. In short, the aim is to provide all the tools necessary to identify, almost instantaneously, the occurrence of faults and transient disturbances in the primary power distribution system, as well as to determine their respective origin and probable location. The compiled results from the application of this automated system show that the developed techniques provide accurate results, identifying and locating several occurrences of faults observed in the distribution system.

Relevance: 30.00%

Abstract:

This paper presents a new approach to the transmission loss allocation problem in a deregulated system. This approach belongs to the set of incremental methods and treats all the constraints of the network, i.e. control, state and functional constraints. The approach is based on the perturbation of the optimum theorem. From a given optimal operating point obtained by the optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other methods on the well-known IEEE 14-bus transmission network. Another test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses and compared with the technique currently used by the Brazilian Control Center. (c) 2007 Elsevier Ltd. All rights reserved.
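
In general terms, an incremental allocation of this kind linearizes the total losses around the optimal base case and distributes them in proportion to each injection's sensitivity (notation ours, not necessarily the paper's):

\[
\Delta P_L \approx \sum_i \left.\frac{\partial P_L}{\partial P_i}\right|_{x^*} \Delta P_i,
\qquad
K_i = \left.\frac{\partial P_L}{\partial P_i}\right|_{x^*},
\qquad
L_i = \frac{K_i P_i}{\sum_j K_j P_j}\, P_L,
\]

where \(P_L\) is the total transmission loss, \(P_i\) the injection of generator or load \(i\), \(K_i\) its incremental loss coefficient evaluated at the base-case optimum \(x^*\), and \(L_i\) the loss share allocated to it (normalized so that the shares sum to \(P_L\)).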

Relevance: 30.00%

Abstract:

This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, the AME approach tries to insert or to remove a single feature at each iteration. The aim is to reach convergence faster without affecting the performance of the generated models. Preliminary experiments showed gains in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several lines of research are proposed as future work.
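
A toy sketch of the insert-or-remove-one-feature idea, using logistic regression as a stand-in for the MaxEnt model; the synthetic data, scoring function and iteration budget are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic presence/absence data with 8 candidate environmental features.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

def score(feature_idx):
    """Cross-validated score of a MaxEnt-like model (logistic regression here)."""
    if not feature_idx:
        return 0.0
    model = LogisticRegression(max_iter=1000)
    return cross_val_score(model, X[:, sorted(feature_idx)], y, cv=3).mean()

# Adaptive feature set: at each iteration try to insert or remove one feature,
# keeping the move only if it does not hurt the model.
active, best = set(), 0.0
for _ in range(20):
    moves = [active | {f} for f in range(X.shape[1]) if f not in active]
    moves += [active - {f} for f in active]
    cand = max(moves, key=score)
    if score(cand) >= best:
        active, best = cand, score(cand)
    else:
        break
print(sorted(active), round(best, 3))
```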

Relevance: 30.00%

Abstract:

This research presents the development and implementation, in a computational routine, of algorithms for fault location in multiterminal transmission lines. These algorithms are part of a fault-location system capable of correctly identifying the fault point from voltage and current phasor quantities, computed from measurements of voltage and current signals taken by intelligent electronic devices located at the transmission-line terminals. The algorithms have access to the electrical parameters of the transmission lines and to information about the transformer loading and connection type. This paper also presents the development of phase-component models for the power-system elements used by the fault-location algorithms.
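
The paper addresses multiterminal lines, but the core idea can be illustrated with the standard two-terminal case using synchronized phasors (our notation; shunt capacitance neglected): the fault-point voltage computed from either end must coincide,

\[
V_S - d\,z\,I_S = V_R - (\ell - d)\,z\,I_R
\quad\Longrightarrow\quad
d = \frac{V_S - V_R + \ell\, z\, I_R}{z\,(I_S + I_R)},
\]

where \(V_S, I_S\) and \(V_R, I_R\) are the terminal voltage and current phasors, \(z\) the series impedance per unit length, \(\ell\) the line length and \(d\) the distance from terminal S to the fault.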

Relevance: 30.00%

Abstract:

This paper presents a computational implementation of an evolutionary algorithm (EA) to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, using the Monte Carlo simulation method. Power quality costs are modeled into the mathematical problem formulation and added to the cost of network losses. For the proposed EA encoding, a decimal representation is used. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are analyzed here. Several selection procedures are examined, namely tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation when exchanging genetic material. The topologies of the initial population are randomly produced, with radial configurations generated through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
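
A small sketch of how one random radial individual of the initial population could be generated with a randomized Kruskal pass over the candidate branches (union-find to avoid loops); the example network and function names are illustrative assumptions, not the paper's code.

```python
import random

def find(parent, u):
    """Union-find root with path compression."""
    while parent[u] != u:
        parent[u] = parent[parent[u]]
        u = parent[u]
    return u

def random_radial_config(n_buses, branches, seed=None):
    """Randomized Kruskal: visit branches in random order and keep those
    that do not close a loop, until n_buses - 1 branches form a radial tree."""
    rng = random.Random(seed)
    order = branches[:]
    rng.shuffle(order)
    parent = list(range(n_buses))
    tree = []
    for u, v in order:
        ru, rv = find(parent, u), find(parent, v)
        if ru != rv:                    # branch connects two separate islands
            parent[ru] = rv
            tree.append((u, v))
        if len(tree) == n_buses - 1:
            break
    return tree

# Example: a 6-bus meshed network; each call yields one radial individual.
branches = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4), (2, 5)]
print(random_radial_config(6, branches, seed=1))
```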

Relevance: 30.00%

Abstract:

This paper reports on the design of digital control for wind turbines and its relation to the quality of the power fed into the Brazilian grid when a 192 MW wind farm equipped with doubly fed induction generators is connected to it. PWM converters are deployed as vector-controlled, current-regulated voltage sources for the generator rotors, providing independent control of the active and reactive power of those generators. Both speed-control and active-power-control strategies are analyzed in the search for maximum efficiency in converting wind kinetic energy into electric power and for enhanced quality of the delivered power. (C) 2009 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Although the Hertz theory is not applicable in the analysis of the indentation of elastic-plastic materials, it is common practice to incorporate the concept of indenter/specimen combined modulus to consider indenter deformation. The appropriateness of using the reduced modulus to incorporate the effect of indenter deformation in the analysis of indentation with spherical indenters was assessed. The analysis, based on finite element simulations, considered four values of the ratio of the indented material elastic modulus to that of the diamond indenter, E/E(i) (0, 0.04, 0.19, 0.39), four values of the ratio of the elastic reduced modulus to the initial yield strength, E(r)/Y (0, 10, 20, 100), and two values of the ratio of the indenter radius to maximum total displacement, R/delta(max) (3, 10). Indenter deformation effects are better accounted for by the reduced modulus if the indented material behaves entirely elastically. In this case, identical load-displacement (P - delta) curves are obtained with rigid and elastic spherical indenters for the same elastic reduced modulus. Changes in the ratio E/E(i), from 0 to 0.39, resulted in variations lower than 5% for the load dimensionless functions, lower than 3% in the contact area, A(c), and lower than 5% in the ratio H/E(r). However, deformations of the elastic indenter made the actual radius of contact change, even in the indentation of elastic materials. Even though the load dimensionless functions showed only a slight increase with the ratio E/E(i), the hardening coefficient and the yield strength could be slightly overestimated when algorithms based on rigid indenters are used. For the unloading curves, the ratio delta(e)/delta(max), where delta(e) is the point corresponding to zero load of a straight line with slope S from the point (P(max), delta(max)), varied less than 5% with the ratio E/E(i). Similarly, the relationship between the reduced modulus and the unloading indentation curve, expressed by Sneddon's equation, did not reveal the necessity of a correction with the ratio E/E(i). The parameter of the indentation curve most affected by indenter deformation was the ratio between the residual indentation depth after complete unloading and the maximum indenter displacement, delta(r)/delta(max) (up to 26%), but this variation did not significantly decrease the capability to estimate hardness and elastic modulus based on the ratio of the residual indentation depth to the maximum indentation depth, h(r)/h(max). In general, the results confirm the convenience of using the reduced modulus in spherical instrumented indentation tests.
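
For reference, the standard relations behind the reduced modulus and the unloading stiffness referred to in the abstract (notation ours):

\[
\frac{1}{E_r} = \frac{1-\nu^2}{E} + \frac{1-\nu_i^2}{E_i},
\qquad
S = \left.\frac{dP}{d\delta}\right|_{\delta_{\max}} = 2\,E_r\,a,
\]

where \(E, \nu\) are the elastic modulus and Poisson ratio of the indented material, \(E_i, \nu_i\) those of the indenter, \(S\) the initial unloading stiffness and \(a\) the contact radius; the second relation is the Sneddon equation mentioned in the text.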

Relevance: 30.00%

Abstract:

Twelve samples with different grain sizes were prepared by normal grain growth and by primary recrystallization, and the hysteresis dissipated energy was measured by a quasi-static method. Results showed a linear relation between hysteresis energy loss and the inverse of grain size, here called Mager's law, for maximum inductions from 0.6 to 1.5 T, and a Steinmetz power-law relation between hysteresis loss and maximum induction for all samples. The combined effect is better described by a Mager's law in which the coefficients follow the Steinmetz law.
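
One compact way to write the combined dependence described above (our notation; the fitting coefficients are placeholders, not the authors' values):

\[
W_h(B_{\max}, g) \approx \left(a + \frac{b}{g}\right) B_{\max}^{\,n},
\]

where \(W_h\) is the hysteresis loss per cycle, \(g\) the grain size, \(B_{\max}\) the maximum induction, \(n\) the Steinmetz exponent and \(a, b\) fitting coefficients; at fixed \(B_{\max}\) this reduces to a linear dependence on \(1/g\) (Mager's law), and at fixed \(g\) to the Steinmetz power law.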

Relevance: 30.00%

Abstract:

This paper deals with the problem of tracking target sets using a model predictive control (MPC) law. Some MPC applications require a control strategy in which some system outputs are controlled within specified ranges or zones (zone control), while other variables - possibly including input variables - are steered to fixed targets or set-points. In real applications, this problem is often handled by including and excluding an appropriate penalization of the output errors in the control cost function. In this way, throughout the continuous operation of the process, the control system keeps switching from one controller to another, and even if a stabilizing control law is developed for each of the control configurations, switching among stable controllers does not necessarily produce a stable closed-loop system. From a theoretical point of view, the control objective of this kind of problem can be seen as a target set (in the output space) instead of a target point, since inside the zones there is no preference between one point and another. In this work, a stable MPC formulation for constrained linear systems, with several practical properties, is developed for this scenario. The concept of the distance from a point to a set is exploited to propose an additional cost term, which ensures both recursive feasibility and local optimality. The performance of the proposed strategy is illustrated by simulation of an ill-conditioned distillation column. (C) 2010 Elsevier Ltd. All rights reserved.
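
A minimal sketch of the point-to-set distance idea for box-shaped output zones, assuming a quadratic penalty outside the zone; this is only the cost term, not the paper's full MPC formulation with its feasibility and stability guarantees.

```python
import numpy as np

def zone_distance_cost(y_pred, y_min, y_max, weight):
    """Weighted squared distance from predicted outputs to a box target set.
    Zero whenever an output lies inside its zone, quadratic outside it."""
    y_proj = np.clip(y_pred, y_min, y_max)   # projection of y onto the box
    dist = y_pred - y_proj
    return float(dist @ np.diag(weight) @ dist)

# Example: two outputs, the first inside its zone, the second above it.
y_pred = np.array([0.8, 2.3])
print(zone_distance_cost(y_pred,
                         y_min=np.array([0.5, 1.0]),
                         y_max=np.array([1.0, 2.0]),
                         weight=np.array([1.0, 10.0])))  # -> 0.9 (= 10 * 0.3**2)
```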

Relevance: 30.00%

Abstract:

We propose a robust and low-complexity scheme to estimate and track the carrier frequency of signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieving the frequency content by a fast Fourier transform (FFT) search, instead of only inspecting the angle of a particular root of the prediction-error filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we show through extensive simulations that adaptive linear prediction methods yield a robust and competitive frequency-tracking technique.
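
A small sketch of the FFT-search step for a single linear predictor (not the full ALP bank with convex combiner): the frequency estimate is read off the deepest notch of the prediction-error filter's response on a dense FFT grid, rather than from the angle of one of its roots. The predictor length, grid size and example tone are illustrative assumptions.

```python
import numpy as np

def fft_search_frequency(w, n_fft=4096, fs=1.0):
    """Given linear-predictor coefficients w (x[n] ~ sum_k w[k] x[n-1-k]),
    locate the dominant tone by an FFT grid search over the
    prediction-error filter A(z) = 1 - sum_k w[k] z^-(k+1)."""
    a = np.concatenate(([1.0], -np.asarray(w)))   # error-filter coefficients
    A = np.fft.rfft(a, n_fft)                     # A(e^{jw}) on a dense grid
    k = np.argmin(np.abs(A))                      # deepest notch = strongest tone
    return k * fs / n_fft

# Example: a 2-tap predictor tuned to a tone at 0.1 * fs.
f0, fs = 0.1, 1.0
w = np.array([2 * np.cos(2 * np.pi * f0 / fs), -1.0])  # ideal predictor for a sinusoid
print(fft_search_frequency(w, fs=fs))                   # ~ 0.1
```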

Relevance: 30.00%

Abstract:

As is well known, Hessian-based adaptive filters (such as the recursive least-squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments in which each family presents better performance. Given this, we propose the use of a convex combination of algorithms from different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error of convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
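
As a rough sketch of the convex-combination mechanism in a supervised setting (system identification with invented data), the code below mixes an LMS and an RLS filter through a sigmoid-parameterized weight adapted by stochastic gradient; the paper's analysis also covers blind algorithms (CMA, SWA) and derives the steady-state excess MSE under a random-walk model, which this toy example does not attempt.

```python
import numpy as np

def convex_combo_lms_rls(x, d, M=4, mu=0.01, lam=0.99, mu_a=100.0):
    """Convex combination of an LMS and an RLS filter of length M.
    The mixing factor eta = sigmoid(a) is adapted so that the combined
    output follows whichever component is currently tracking better."""
    w_lms, w_rls = np.zeros(M), np.zeros(M)
    P = 1e3 * np.eye(M)                 # RLS inverse correlation matrix
    a = 0.0                             # combination parameter, eta = sigmoid(a)
    y = np.zeros(len(d))
    for n in range(M, len(d)):
        u = x[n - M + 1:n + 1][::-1]    # regressor [x[n], x[n-1], ..., x[n-M+1]]
        y1, y2 = w_lms @ u, w_rls @ u
        eta = 1.0 / (1.0 + np.exp(-a))
        y[n] = eta * y1 + (1.0 - eta) * y2
        e = d[n] - y[n]
        # Each component adapts with its own a-priori error.
        w_lms += mu * (d[n] - y1) * u
        k = P @ u / (lam + u @ P @ u)
        w_rls += k * (d[n] - y2)
        P = (P - np.outer(k, u @ P)) / lam
        # Stochastic-gradient step on a to reduce the combined squared error.
        a += mu_a * e * (y1 - y2) * eta * (1.0 - eta)
        a = np.clip(a, -4.0, 4.0)       # keep eta away from hard 0/1
    return y

# Example: supervised identification of a 4-tap channel from noisy data.
rng = np.random.default_rng(0)
x = rng.normal(size=2000)
h = np.array([0.5, -0.4, 0.3, 0.1])
d = np.convolve(x, h, mode="full")[:len(x)] + 0.01 * rng.normal(size=len(x))
y = convex_combo_lms_rls(x, d)
print("steady-state MSE ~", np.mean((d[500:] - y[500:]) ** 2))
```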

Relevance: 30.00%

Abstract:

This work proposes the use of evolutionary computation to jointly solve the maximum-likelihood multiuser channel estimation (MuChE) and detection problems in direct-sequence code division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is demonstrated by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA MuChE and multi-user detection (MuD) show that the proposed genetic-algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum-likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was (jointly) analyzed in terms of the number of operations necessary to reach convergence and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.

Relevance: 30.00%

Abstract:

Market-based transmission expansion planning gives investors information on the most cost-efficient places to invest and brings benefits to those who invest in the grid. However, both market issues and power-system adequacy problems are the system planners' concern. In this paper, a hybrid probabilistic criterion, the Expected Economical Loss (EEL), is proposed as an index to evaluate the system's overall expected economic losses during operation in a competitive market. It reflects both the investors' and the planner's points of view and further improves on the traditional reliability cost. By applying the EEL, system planners can obtain a clear idea of the transmission network's bottleneck and of the amount of losses arising from this weak point. Consequently, it enables planners to assess the worth of providing reliable services. The EEL also contains valuable information to guide investors' decisions. This index can truly reflect the random behavior of power systems and the uncertainties of the electricity market. The performance of the EEL index is enhanced by applying a Normalized Coefficient of Probability (NCP), so it can be used in large real power systems. A numerical example is carried out on the IEEE Reliability Test System (RTS), showing how the EEL can predict the current system bottleneck under future operational conditions and how to use the EEL as one of the planning objectives to determine future optimal plans. A well-known simulation method, Monte Carlo simulation, is employed to capture the probabilistic characteristics of the electricity market, and Genetic Algorithms (GAs) are used as a multi-objective optimization tool.
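
A toy Monte Carlo sketch of an expected-economic-loss style index: sample component availability and system load, price any unserved energy, and average over the samples. All names, probabilities and prices are invented, and the paper's EEL additionally incorporates electricity-market behavior and the NCP weighting.

```python
import numpy as np

def expected_economic_loss(n_samples, line_capacity, outage_prob,
                           load_mean, load_std, price, seed=0):
    """Monte Carlo sketch of an EEL-style index: sample line availability and
    load, compute the cost of any energy not supplied, and average."""
    rng = np.random.default_rng(seed)
    losses = np.empty(n_samples)
    for i in range(n_samples):
        # Random availability of each line and a random system load (MW).
        available = rng.random(len(line_capacity)) > outage_prob
        capacity = float(np.sum(line_capacity * available))
        load = max(0.0, rng.normal(load_mean, load_std))
        curtailed = max(0.0, load - capacity)   # energy not supplied (MWh, 1 h step)
        losses[i] = curtailed * price            # economic loss ($)
    return losses.mean()

# Hypothetical three-line corridor feeding a load centre.
print(expected_economic_loss(
    n_samples=20000,
    line_capacity=np.array([400.0, 300.0, 300.0]),   # MW
    outage_prob=np.array([0.02, 0.03, 0.03]),
    load_mean=800.0, load_std=100.0,                 # MW
    price=500.0))                                    # $/MWh of unserved energy
```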