929 results for Maximum Power Point Tracking
Abstract:
In this paper, a modeling technique for small-signal stability assessment of unbalanced power systems is presented. Since power distribution systems are inherently unbalanced, owing to the characteristics of their lines and loads, and the penetration of distributed generation into these systems is steadily increasing, such a tool is needed to ensure their secure and reliable operation. The main contribution of this paper is the development of a phasor-based model for the study of dynamic phenomena in unbalanced power systems. Using an assumption on the net torque of the generator, it is possible to precisely define an equilibrium point for the phasor model of the system, thus enabling its linearization around this point and, consequently, its eigenvalue/eigenvector analysis for small-signal stability assessment. The proposed technique was compared with the dynamic behavior observed in ATP simulations, and the results show that, for the generator and controller models used, the proposed modeling approach is adequate and yields reliable and precise results.
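The linearize-then-inspect-eigenvalues workflow described in the abstract can be sketched in miniature. The single-machine swing model, its parameter values, and the equilibrium condition below are illustrative assumptions, not the paper's unbalanced phasor model:

```python
import cmath
import math

# Classical swing-equation machine: M*dw/dt = Pm - Pmax*sin(d) - D*w
# (all parameter values are arbitrary, chosen only for illustration)
M, D, Pm, Pmax = 0.1, 0.05, 0.8, 1.0

# Equilibrium point: w0 = 0 and Pm = Pmax*sin(d0)
d0 = math.asin(Pm / Pmax)

# Jacobian of [dd/dt, dw/dt] = [w, (Pm - Pmax*sin(d) - D*w)/M] at (d0, 0)
a11, a12 = 0.0, 1.0
a21, a22 = -Pmax * math.cos(d0) / M, -D / M

# Eigenvalues of the 2x2 Jacobian from its characteristic polynomial
tr, det = a11 + a22, a11 * a22 - a12 * a21
disc = cmath.sqrt(tr * tr - 4.0 * det)
eigs = [(tr + disc) / 2.0, (tr - disc) / 2.0]

# Small-signal stable if every eigenvalue has a negative real part
stable = all(lam.real < 0 for lam in eigs)
print(eigs, stable)
```

For this parameter set the eigenvalues form a damped complex pair, so the equilibrium is small-signal stable; in the paper's setting the same test is applied to the much larger Jacobian of the unbalanced phasor model.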
Abstract:
This paper describes a CMOS implementation of a linear voltage regulator (LVR) used to power implanted physiological signal systems, as is the case of a wireless blood pressure biosensor. The topology is based on the classical structure of a linear low-dropout regulator. The circuit is powered from an RF link, thus characterizing a passive radio frequency identification (RFID) tag. The LVR was designed to meet important requirements such as low power consumption and small silicon area, without the need for any external discrete components. Low-power operation is an essential condition to avoid a high-energy RF link, minimizing the transmitted power and therefore the thermal effects on the patient's tissues. The project was implemented in a 0.35-µm CMOS process, and the prototypes were tested to validate the overall performance. The LVR output is regulated at 1 V and supplies a maximum load current of 0.5 mA at 37 °C. The load regulation is 13 mV/mA, the line regulation is 39 mV/V, and the total power consumption is 1.2 mW.
Abstract:
In the present work, we report experimental results for the He stopping power of Al2O3 films obtained using both transmission and Rutherford backscattering techniques. We performed measurements over a wide energy range, from 60 to 3000 keV, covering the region of the stopping power maximum. The results of this work are compared with previously published data, showing good agreement in the high-energy range but evidencing discrepancies in the low-energy region. The existing theories follow the same tendency: good theoretical-experimental agreement at higher energies, but they fail to reproduce previous and present results in the low-energy regime. On the other hand, it is interesting to note that the semi-empirical SRIM code reproduces the present data quite well. (C) 2012 Elsevier B.V. All rights reserved.
Abstract:
Ferreira, SLA, Panissa, VLG, Miarka, B, and Franchini, E. Postactivation potentiation: effect of various recovery intervals on bench press power performance. J Strength Cond Res 26(3): 739-744, 2012. Postactivation potentiation (PAP) is a strategy used to improve performance in power activities. The aim of this study was to determine whether power during the bench press exercise is increased when preceded by 1 repetition maximum (1RM) in the same exercise, and which time interval optimizes the PAP response. For this, 11 healthy male subjects (age, 25 +/- 4 years; height, 178 +/- 6 cm; body mass, 74 +/- 8 kg; bench press 1RM, 76 +/- 19 kg) underwent 6 sessions. Two control sessions were conducted to determine both bench press 1RM and power (6 repetitions at 50% 1RM). The 4 experimental sessions consisted of a 1RM exercise followed by power sets with different recovery intervals (1, 3, 5, and 7 minutes), performed on different days in random order. Power values were measured with Peak Power equipment (Cefise, Nova Odessa, Sao Paulo, Brazil). The conditions were compared using an analysis of variance with repeated measures, followed by a Tukey test. The significance level was set at p < 0.05. There was a significant increase in PAP in concentric contractions after 7 minutes of recovery compared with the control and 1-minute recovery conditions (p < 0.05). Our results indicate that 7 minutes of recovery generated an increase in PAP in the bench press and that such a strategy could be an interesting alternative for enhancing performance in tasks aimed at increasing upper-body power.
Abstract:
We present a comprehensive experimental and theoretical investigation of the thermodynamic properties specific heat, magnetization, and thermal expansion in the vicinity of the field-induced quantum critical point (QCP) around the lower critical field H_c1 ≈ 2 T in NiCl2·4SC(NH2)2. A T^(3/2) behavior in the specific heat and magnetization is observed at very low temperatures at H = H_c1, which is consistent with the universality class of Bose-Einstein condensation of magnons. The temperature dependence of the thermal expansion coefficient at H_c1 shows minor deviations from the expected T^(1/2) behavior. Our experimental study is complemented by analytical calculations and quantum Monte Carlo simulations, which nicely reproduce the measured quantities. We analyze the thermal and magnetic Grüneisen parameters, which are ideal quantities for identifying QCPs. Both parameters diverge at H_c1 with the expected T^(-1) power law. By using the Ehrenfest relations at the second-order phase transition, we are able to estimate the pressure dependences of the characteristic temperature and field scales.
Abstract:
This study performed an exploratory analysis of the anthropometric and morphological muscle variables related to one-repetition maximum (1RM) performance, and analyzed the capacity of these variables to predict force production. Fifty active males underwent the experimental procedures: vastus lateralis muscle biopsy, quadriceps magnetic resonance imaging, body mass assessment and a 1RM test in the leg-press exercise. K-means cluster analysis was performed on body mass, the sum of the left and right quadriceps muscle cross-sectional areas (ΣCSA), the percentage of type II fibers and 1RM performance. The number of clusters was defined a priori, and the clusters were labeled the high strength performance (HSP1RM) group and the low strength performance (LSP1RM) group. Stepwise multiple regressions were performed with body mass, ΣCSA, percentage of type II fibers and cluster as predictor variables and 1RM performance as the response variable. The cluster means +/- SD were 292.8 +/- 52.1 kg, 84.7 +/- 17.9 kg, 19249.7 +/- 1645.5 mm(2) and 50.8 +/- 7.2% for the HSP1RM, and 254.0 +/- 51.1 kg, 69.2 +/- 8.1 kg, 15483.1 +/- 1104.8 mm(2) and 51.7 +/- 6.2% for the LSP1RM, in 1RM, body mass, ΣCSA and muscle fiber type II percentage, respectively. The most important variable in the cluster division was the ΣCSA. In addition, ΣCSA and muscle fiber type II percentage explained the variance in 1RM performance (adjusted R^2 = 0.35, p = 0.0001) for all participants and for the LSP1RM (adjusted R^2 = 0.25, p = 0.002). For the HSP1RM, only the ΣCSA entered the model and showed the highest capacity to explain the variance in 1RM performance (adjusted R^2 = 0.38, p = 0.01). In conclusion, muscle CSA was the most relevant variable for predicting force production in individuals with no strength-training background.
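The kind of multiple regression used to explain 1RM variance can be sketched as follows. The data below are synthetic, and the generating coefficients and noise level are assumptions chosen only to exercise the fit; the study's actual data and stepwise procedure are not reproduced:

```python
import random

random.seed(0)

# Hypothetical synthetic data mimicking the study's predictors:
# quadriceps CSA (mm^2) and type II fiber percentage -> 1RM (kg)
n = 50
csa = [random.gauss(17000.0, 2500.0) for _ in range(n)]
fib2 = [random.gauss(51.0, 7.0) for _ in range(n)]
# assumed "true" relation, used only to generate the toy responses
one_rm = [0.013 * c + 0.8 * f + random.gauss(0.0, 15.0)
          for c, f in zip(csa, fib2)]

# Ordinary least squares via the 3x3 normal equations (intercept + 2 slopes)
X = [[1.0, c, f] for c, f in zip(csa, fib2)]
A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(3)]
     for r in range(3)]
b = [sum(X[i][r] * one_rm[i] for i in range(n)) for r in range(3)]

# Gaussian elimination with partial pivoting, then back substitution
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, 3):
        m = A[r][col] / A[col][col]
        for c in range(col, 3):
            A[r][c] -= m * A[col][c]
        b[r] -= m * b[col]
beta = [0.0, 0.0, 0.0]
for r in (2, 1, 0):
    beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, 3))) / A[r][r]

print(beta)  # [intercept, CSA slope, fiber-% slope]
```

With enough data the fitted slopes recover the assumed generating coefficients, which is the sense in which CSA "explains" 1RM variance in such a model.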
Abstract:
In many applications of lifetime data analysis, it is important to perform inferences about the change-point of the hazard function. The change-point may be a maximum for unimodal hazard functions or a minimum for bathtub-shaped hazard functions, and is usually of great interest in medical or industrial applications. For lifetime distributions where this change-point can be calculated analytically, its maximum likelihood estimator is easily obtained from the invariance properties of maximum likelihood estimators, and confidence intervals follow from their asymptotic normality. Considering the exponentiated Weibull distribution for the lifetime data, the hazard function can take different forms: constant, increasing, unimodal, decreasing or bathtub-shaped. This model gives great flexibility of fit, but there is no analytic expression for the change-point of its hazard function. We therefore use Markov Chain Monte Carlo methods to obtain posterior summaries for the change-point of the hazard function under the exponentiated Weibull distribution.
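For fixed parameter values the change-point itself is easy to obtain numerically; this is the quantity whose posterior the MCMC approach would summarize at each sampled parameter draw. A sketch, with an assumed parameterization F(t) = (1 - exp(-(t/s)^k))^a and arbitrary parameter values in the regime usually quoted as giving a unimodal hazard (k < 1 and a*k > 1):

```python
import math

def ew_hazard(t, a, k, s):
    """Hazard of the exponentiated Weibull with CDF F(t) = (1 - exp(-(t/s)**k))**a."""
    z = (t / s) ** k
    G = 1.0 - math.exp(-z)            # baseline Weibull CDF
    F = G ** a
    # density f(t) = a * G**(a-1) * weibull_pdf(t)
    f = a * (k / s) * (t / s) ** (k - 1) * math.exp(-z) * G ** (a - 1)
    return f / (1.0 - F)

# assumed parameter values: k < 1, a*k > 1 -> maximum-type change-point
a, k, s = 4.0, 0.5, 1.0

# crude grid search for the change-point (arg max of the hazard)
grid = [i / 1000.0 for i in range(1, 20000)]
t_star = max(grid, key=lambda t: ew_hazard(t, a, k, s))
print(t_star, ew_hazard(t_star, a, k, s))
```

In the Bayesian setting, running this search once per posterior draw of (a, k, s) yields a sample from the posterior of the change-point, from which summaries such as credible intervals follow directly.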
Abstract:
The aim of solving the Optimal Power Flow problem is to determine the optimal state of an electric power transmission system, that is, the voltage magnitudes, phase angles, and transformer tap ratios that optimize the performance of a given system while satisfying its physical and operating constraints. The Optimal Power Flow problem is modeled as a large-scale mixed-discrete nonlinear programming problem. This paper proposes a method for handling the discrete variables of the problem based on a penalty function. With the inclusion of the penalty function in the objective function, a sequence of nonlinear programming problems with only continuous variables is obtained, and the solutions of these problems converge to a solution of the mixed problem. The resulting nonlinear programming problems are solved by a Primal-Dual Logarithmic-Barrier Method. Numerical tests using the IEEE 14-, 30-, 118- and 300-bus test systems indicate that the method is efficient. (C) 2012 Elsevier B.V. All rights reserved.
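The penalty idea can be illustrated on a single tap variable. The sinusoidal penalty, the tap grid, the continuation schedule, and the plain gradient-descent "continuous solver" below are all assumptions for illustration; the paper's actual penalty function and its Primal-Dual Logarithmic-Barrier solver are not reproduced here:

```python
import math

# Toy continuous objective: deviation of a transformer tap from 1.03
f = lambda x: (x - 1.03) ** 2
df = lambda x: 2.0 * (x - 1.03)

# A sinusoidal penalty that vanishes exactly on the discrete tap grid
x0, step = 1.0, 0.0125           # assumed grid: 1.0, 1.0125, 1.025, ...
dp = lambda x: (math.pi / step) * math.sin(2.0 * math.pi * (x - x0) / step)

x = 1.03
for alpha in [0.0, 1e-4, 1e-3, 1e-2, 1e-1]:   # increasing penalty weight
    # solve each continuous subproblem by plain gradient descent
    for _ in range(20000):
        x -= 1e-4 * (df(x) + alpha * dp(x))
print(x)   # converges to the nearest discrete tap, 1.025
```

As the penalty weight grows, the minimizer of each continuous subproblem is pushed onto the discrete grid, which is the convergence behavior the abstract describes for the mixed-discrete problem.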
Abstract:
Context. Convergent point (CP) search methods are important tools for studying the kinematic properties of open clusters and young associations whose members share the same spatial motion. Aims. We present a new CP search strategy based on proper motion data. We test the new algorithm on synthetic data and compare it with previous versions of the CP search method. As an illustration and validation of the new method we also present an application to the Hyades open cluster and a comparison with independent results. Methods. The new algorithm rests on the idea of representing the stellar proper motions by great circles over the celestial sphere and visualizing their intersections as the CP of the moving group. The new strategy combines a maximum-likelihood analysis for simultaneously determining the CP and selecting the most likely group members and a minimization procedure that returns a refined CP position and its uncertainties. The method allows one to correct for internal motions within the group and takes into account that the stars in the group lie at different distances. Results. Based on Monte Carlo simulations, we find that the new CP search method in many cases returns a more precise solution than its previous versions. The new method is able to find and eliminate more field stars in the sample and is not biased towards distant stars. The CP solution for the Hyades open cluster is in excellent agreement with previous determinations.
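The great-circle picture behind the CP method can be sketched with plain vector algebra. The two synthetic "stars" and the convergent-point direction below are arbitrary assumptions; the maximum-likelihood member selection and the refinement step of the actual algorithm are omitted:

```python
import math

def norm(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Assumed true convergent point (unit vector on the celestial sphere)
cp_true = norm((0.5, 0.3, 0.81))

def motion_direction(p, cp):
    """Unit tangent at position p pointing along the great circle towards cp."""
    t = tuple(c - dot(cp, p) * pi for c, pi in zip(cp, p))
    return norm(t)

# Two synthetic cluster members: positions and proper-motion directions
p1, p2 = norm((1.0, 0.1, 0.0)), norm((0.2, 1.0, 0.3))
m1, m2 = motion_direction(p1, cp_true), motion_direction(p2, cp_true)

# Each (position, motion) pair defines a great circle with pole p x m;
# the two circles intersect at +/- the cross product of the poles.
n1, n2 = cross(p1, m1), cross(p2, m2)
cp_est = norm(cross(n1, n2))
if dot(cp_est, m1) < 0:          # resolve the antipodal ambiguity:
    cp_est = tuple(-x for x in cp_est)   # the CP lies ahead of the motion
print(cp_est)
```

With noise-free directions the two circles intersect exactly at the assumed CP; with real proper motions each star contributes a noisy circle, which is why the full method wraps this geometry in a maximum-likelihood fit.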
Abstract:
We derive asymptotic expansions for the nonnull distribution functions of the likelihood ratio, Wald, score and gradient test statistics in the class of dispersion models, under a sequence of Pitman alternatives. The asymptotic distributions of these statistics are obtained for testing a subset of regression parameters and for testing the precision parameter. Based on these nonnull asymptotic expansions, the power of all four tests, which are equivalent to first order, is compared. Furthermore, to compare the finite-sample performance of these tests in this class of models, Monte Carlo simulations are presented. An empirical application to a real data set is considered for illustrative purposes. (C) 2012 Elsevier B.V. All rights reserved.
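As a toy version of such a power comparison, the Monte Carlo sketch below contrasts the empirical powers of the Wald and likelihood ratio tests for the rate of an exponential model rather than a dispersion model; the sample size, alternative, and replication count are arbitrary assumptions:

```python
import math
import random

random.seed(1)
crit = 3.841                       # chi-square(1) 95% critical value
lam0, lam1, n, reps = 1.0, 1.5, 30, 2000

rej_wald = rej_lr = 0
for _ in range(reps):
    x = [random.expovariate(lam1) for _ in range(n)]   # data under H1
    s = sum(x)
    mle = n / s                                        # MLE of the rate
    # Wald statistic, using the estimated information n / mle**2
    wald = (mle - lam0) ** 2 * n / mle ** 2
    # Likelihood ratio statistic: 2 * (loglik(mle) - loglik(lam0))
    lr = 2.0 * (n * math.log(mle / lam0) - (mle - lam0) * s)
    rej_wald += wald > crit
    rej_lr += lr > crit

power_wald, power_lr = rej_wald / reps, rej_lr / reps
print(power_wald, power_lr)        # empirical powers under lam1 = 1.5
```

Although the two statistics agree to first order, their finite-sample rejection rates differ slightly, which is exactly the kind of discrepancy the paper's simulations quantify in the dispersion-model class.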
Abstract:
OBJECTIVE: To define and compare the number and types of occlusal contacts in maximum intercuspation. METHODS: The study consisted of clinical and photographic analysis of occlusal contacts in maximum intercuspation. Twenty-six Caucasian Brazilian subjects were selected before orthodontic treatment, 20 males and 6 females, with ages ranging between 12 and 18 years. The subjects were diagnosed and grouped as follows: 13 with Angle Class I malocclusion and 13 with Angle Class II Division 1 malocclusion. After analysis, the occlusal contacts were classified according to the established criteria as: tripodism, bipodism, monopodism (respectively, three, two or one contact point with the slope of the fossa); cusp to a marginal ridge; cusp to two marginal ridges; cusp tip to opposite inclined plane; surface to surface; and edge to edge. RESULTS: The mean number of occlusal contacts per subject was 43.38 in Class I malocclusion and 44.38 in Class II Division 1 malocclusion; this difference was not statistically significant (p>0.05). CONCLUSIONS: A variety of factors influence the number of occlusal contacts between a Class I and a Class II Division 1 malocclusion. There is no standardization of occlusal contact type according to the studied malocclusions. A proper selection of occlusal contact types, such as cusp to fossa or cusp to marginal ridge, and their location on the teeth should be defined individually according to the demands of each case. Adequate occlusal contact leads to a correct distribution of forces, promoting periodontal health.
Abstract:
Diffusion is a common phenomenon in nature and is generally associated with a system trying to reach a local or a global equilibrium state as a result of highly irregular individual particle motion. It is therefore of fundamental importance in physics, chemistry and biology. Particle tracking in complex fluids can reveal important characteristics of their properties. In living cells, we coat the microbead with a peptide (RGD) that binds to integrin receptors at the plasma membrane, which connect to the cytoskeleton (CSK). This procedure is based on the hypothesis that the microsphere can move only if the structure to which it is attached moves as well. The observed trajectory of the microbeads is then a probe of the CSK, which is governed by several factors, including thermal diffusion, pressure gradients, and molecular motors. The possibility of separating the trajectories into passive and active diffusion may give information about the viscoelasticity of the cell structure and molecular motor activity. We could also analyze the motion via the generalized Stokes-Einstein relation, avoiding the use of any active techniques. Usually a 12 to 16 frames per second (FPS) system is used to track the microbeads in cells for about 5 minutes. Several factors impose this frame-rate limitation: camera-computer communication, lighting, and the computer speed available for online analysis, among others. Here we used a high-quality camera and our own software, developed in C++ under Linux, to reach a high frame rate. Measurements were conducted with samples using 10× and 20× objectives. We acquired image sequences at different intervals, all with 2 µs exposure: 4-5 ms between frames at maximum speed, and at frame rates of 14, 25, 50 and 100 FPS. Our preliminary results highlight the difference between passive and active diffusion, since passive diffusion is represented by a Gaussian distribution of the displacements of the center of mass of individual beads between consecutive frames, whereas the active process, or anomalous diffusion, shows up as long tails in the distribution of displacements.
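The Gaussian-versus-long-tail distinction drawn above can be checked with a simple statistic. The sketch below uses synthetic step data (a Gaussian for passive diffusion and an assumed two-component mixture as a stand-in for motor-driven jumps) and the excess kurtosis, which vanishes for a Gaussian and is positive for heavy-tailed distributions:

```python
import random

random.seed(42)
N = 100000

# Passive (thermal) diffusion: Gaussian step distribution
passive = [random.gauss(0.0, 1.0) for _ in range(N)]
# Toy "active" process: occasional large motor-driven jumps -> long tails
active = [random.gauss(0.0, 1.0) if random.random() < 0.95
          else random.gauss(0.0, 6.0) for _ in range(N)]

def excess_kurtosis(xs):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

k_passive = excess_kurtosis(passive)
k_active = excess_kurtosis(active)
print(k_passive, k_active)
```

Applied to measured frame-to-frame bead displacements, a near-zero excess kurtosis is consistent with purely passive diffusion, while a large positive value signals the long-tailed, active component.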
Abstract:
The research activity carried out during the PhD course in Electrical Engineering belongs to the field of electric and electronic measurements. The main subject of the present thesis is a distributed measurement system to be installed in Medium Voltage power networks, as well as the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the increasing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines issued. The quality of the voltage provided by utilities and influenced by customers at the various points of a network emerged as an issue only in recent years, in particular as a consequence of the liberalization of the energy market. Traditionally, the quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured in power systems. Nowadays, the number and duration of interruptions are the “quality indicators” commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context, it should be noted that although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can be used to improve system reliability too. Given the vast scenario of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The data set is finally used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and have to be seen before the intervention of protection equipment. An important conclusion is that the method can improve the reliability of the monitored network, since knowledge of the location of a fault allows the energy manager to reduce as much as possible both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study and activity is structured as follows: chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. Then the state of the art concerning methods to detect and locate faults in distribution networks is presented. Finally, attention is paid to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way the performance of the location procedure is tested first under ideal and then under realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. Then the global measurement system is characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to replace the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study was carried out to provide an alternative to the transducer used that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
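The fault-location principle, timestamping the arrival of the fault-generated transient at synchronized remote stations, can be reduced to a two-terminal toy example. The line length, propagation speed, and fault position below are arbitrary assumptions, and the full method's multi-station geometry and timing uncertainty are not modeled:

```python
# Toy two-terminal traveling-wave fault location on a line of length L.
v = 2.0e8          # assumed propagation speed of the transient, m/s
L = 30_000.0       # assumed line length, m
x_true = 12_400.0  # assumed fault position measured from terminal A, m

# Synthesized, synchronized arrival timestamps at the two terminals
t0 = 1.0e-3                      # unknown fault inception time, s
t_a = t0 + x_true / v            # transient arrival time at terminal A
t_b = t0 + (L - x_true) / v      # transient arrival time at terminal B

# The arrival-time difference cancels the unknown inception time:
# t_a - t_b = (2x - L) / v  =>  x = (L + v*(t_a - t_b)) / 2
x_est = (L + v * (t_a - t_b)) / 2.0
print(x_est)
```

The estimate depends only on the time difference, which is why synchronized timestamping at the remote stations (and its combined uncertainty, treated in chapter 5) is the critical metrological quantity.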
Abstract:
Recent progress in microelectronics and wireless communications has enabled the development of low-cost, low-power, multifunctional sensors, which has allowed the birth of a new type of network named wireless sensor networks (WSNs). The main features of such networks are: the nodes can be positioned randomly over a given field with high density; each node operates both as a sensor (for the collection of environmental data) and as a transceiver (for the transmission of information towards the data retrieval point); and the nodes have limited energy resources. The use of wireless communications and the small size of the nodes make this type of network suitable for a large number of applications. For example, sensor nodes can be used to monitor a high-risk region, such as the area near a volcano; in a hospital they could be used to monitor the physical conditions of patients. For each of these possible application scenarios, it is necessary to guarantee a trade-off between energy consumption and communication reliability. The thesis investigates the use of WSNs in two possible scenarios and for each of them proposes a solution to the related problems that respects this trade-off. The first scenario considers a network with a high number of nodes deployed over a given geographical area without detailed planning that have to transmit data towards a coordinator node, named the sink, assumed to be located onboard an unmanned aerial vehicle (UAV). This is a practical example of reachback communication, characterized by a high density of nodes that have to transmit data reliably and efficiently towards a far receiver. Each node transmits a common shared message directly to the receiver onboard the UAV whenever it receives a broadcast message (triggered, for example, by the vehicle). We assume that the communication channels between the local nodes and the receiver are subject to fading and noise.
The receiver onboard the UAV must be able to fuse the weak and noisy signals coherently to receive the data reliably. A cooperative diversity concept is proposed as an effective solution to the reachback problem. In particular, a spread spectrum (SS) transmission scheme is considered in conjunction with a fusion center that can exploit cooperative diversity without requiring stringent synchronization between nodes. The idea consists of the simultaneous transmission of the common message by the nodes and Rake reception at the fusion center. The proposed solution is mainly motivated by two goals: the necessity of having simple nodes (to this aim, the computational complexity is moved to the receiver onboard the UAV) and the importance of guaranteeing high energy efficiency of the network, thus increasing the network lifetime. The proposed scheme is analyzed in order to better understand the effectiveness of the approach. The performance metrics considered are both the theoretical limit on the maximum amount of data that can be collected by the receiver and the error probability with a given modulation scheme. Since we deal with a WSN, both of these performance metrics are evaluated taking the energy efficiency of the network into consideration. The second scenario considers the use of a chain network for the detection of fires by using nodes that have the double function of sensors and routers. The first function is the monitoring of a temperature parameter that allows a local binary decision of target (fire) absent/present to be taken. The second considers that each node receives the decision made by the previous node of the chain, compares it with that derived from its own observation of the phenomenon, and transmits the final result to the next node. The chain ends at the sink node, which transmits the received decision to the user.
In this network the goals are to limit the throughput on each sensor-to-sensor link and to minimize the probability of error at the last stage of the chain. This is a typical distributed detection scenario. To obtain good performance it is necessary to define fusion rules by which each node summarizes the local observations and the decisions of the previous nodes to produce the decision that is transmitted to the next node. WSNs have also been studied from a practical point of view, describing both the main characteristics of the IEEE 802.15.4 standard and two commercial WSN platforms. Using a commercial WSN platform, an agricultural application was realized and tested in a six-month on-field experimentation.
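A minimal sketch of serial decision fusion along such a chain follows. The Gaussian observation model, the thresholds, and the trust-the-previous-node rule are assumptions chosen only to illustrate the idea; they are not the thesis's actual fusion rules:

```python
import random

random.seed(7)

# Chain of sensors: each fuses its noisy reading with the previous decision
NODES, SIGNAL, SIGMA = 8, 2.0, 1.0
TAU_ALONE, TAU_CONFIRM = 1.2, 0.6   # assumed per-node decision thresholds

def chain_decision(target_present):
    """Propagate a 1-bit decision down the chain, node by node."""
    decision = 0
    for _ in range(NODES):
        y = (SIGNAL if target_present else 0.0) + random.gauss(0.0, SIGMA)
        # a node that received a "present" decision uses a lower threshold
        tau = TAU_CONFIRM if decision else TAU_ALONE
        decision = 1 if y > tau else 0
    return decision

# Monte Carlo estimate of the error probabilities at the last stage
reps = 20000
p_miss = sum(1 - chain_decision(True) for _ in range(reps)) / reps
p_false = sum(chain_decision(False) for _ in range(reps)) / reps
print(p_miss, p_false)
```

Even this crude rule shows the key trade-off: only one bit crosses each sensor-to-sensor link, yet the incoming decision biases the local test enough to keep both the miss and false-alarm rates at the sink well below those of an isolated node.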
Abstract:
In the context of a “testing laboratory”, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. In every area concerned with noise measurement many standards are available, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard on the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the evaluation of uncertainties is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult because the results are affected by systematic errors and standard deviations that depend on the number of microphones placed on the surface, their spatial positions and the complexity of the sound field.
A statistical approach could give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that affect this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
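For reference, the core of the ISO 3744 computation can be written in a few lines: the surface sound pressure levels are energy-averaged and a surface-area term is added (the standard's background-noise and environmental corrections are omitted here). The hemispherical surface and the two microphone arrays with their readings below are hypothetical:

```python
import math

def sound_power_level(spl_db, surface_m2):
    """Lw = energy-averaged Lp over the surface + 10*log10(S / 1 m^2)."""
    lp_avg = 10.0 * math.log10(sum(10.0 ** (lp / 10.0) for lp in spl_db)
                               / len(spl_db))
    return lp_avg + 10.0 * math.log10(surface_m2 / 1.0)

# Hypothetical hemispherical measurement surface, radius 2 m: S = 2*pi*r^2
S = 2.0 * math.pi * 2.0 ** 2

# The same (assumed) source sampled by two different microphone arrays
array_10 = [78.1, 77.6, 78.4, 77.9, 78.2, 77.8, 78.0, 78.3, 77.7, 78.0]
array_5 = [78.3, 77.5, 78.1, 78.4, 77.7]

lw_10 = sound_power_level(array_10, S)
lw_5 = sound_power_level(array_5, S)
print(lw_10, lw_5, lw_10 - lw_5)
```

The residual difference between the two array results is precisely the array-dependent systematic error discussed above: it is a property of the number and placement of the microphones and of the sound field, not of the source, which motivates the statistical comparison proposed in the thesis.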