10 results for Power quality indices

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance:

100.00%

Publisher:

Abstract:

The research activity carried out during the PhD course in Electrical Engineering belongs to the branch of electric and electronic measurements. The main subject of this thesis is a distributed measurement system to be installed in Medium Voltage power networks, together with the method developed to analyze the data acquired by the measurement system itself and to monitor power quality. Chapter 2 illustrates the growing interest in power quality in electrical systems, reporting the international research activity on the problem and the relevant standards and guidelines that have been issued. The quality of the voltage provided by utilities, and influenced by customers at the various points of a network, has emerged as a concern only in recent years, in particular as a consequence of energy market liberalization. Traditionally, the quality of the delivered energy has been associated mostly with its continuity, so reliability was the main characteristic to be ensured for power systems. Nowadays, the number and duration of interruptions are the "quality indicators" commonly perceived by most customers; for this reason, a short section is also dedicated to network reliability and its regulation. In this context it should be noted that, although the measurement system developed during the research activity belongs to the field of power quality evaluation systems, the information registered in real time by its remote stations can also be used to improve system reliability. Given the vast scenario of power-quality-degrading phenomena that can occur in distribution networks, the study has been focused on electromagnetic transients affecting line voltages.
The outcome of this study has been the design and realization of a distributed measurement system that continuously monitors the phase signals at different points of a network, detects the occurrence of transients superposed on the fundamental steady-state component, and registers the time of occurrence of such events. The resulting data set is used to locate the source of the transient disturbance propagating along the network lines. Most of the oscillatory transients affecting line voltages are due to faults occurring at any point of the distribution system and must be detected before protection equipment intervenes. An important conclusion is that the method can improve the reliability of the monitored network, since knowing the location of a fault allows the energy manager to minimize both the area of the network to be disconnected for protection purposes and the time spent by technical staff to recover from the abnormal condition and/or the damage. The part of the thesis presenting the results of this study is structured as follows: Chapter 3 deals with the propagation of electromagnetic transients in power systems, defining the characteristics and causes of the phenomena and briefly reporting the theory and approaches used to study transient propagation. The state of the art concerning methods to detect and locate faults in distribution networks is then presented. Finally, attention turns to the particular technique adopted for this purpose in the thesis, and to the methods developed on the basis of that approach. Chapter 4 reports the configuration of the distribution networks on which the fault location method has been applied by means of simulations, as well as the results obtained case by case. In this way, the performance of the location procedure is tested first in ideal and then in realistic operating conditions.
Chapter 5 presents the measurement system designed to implement the transient detection and fault location method. The hardware belonging to the measurement chain of every acquisition channel in the remote stations is described. The global measurement system is then characterized by considering the non-ideal aspects of each device that can contribute to the final combined uncertainty on the estimated position of the fault in the network under test. Finally, this parameter is computed according to the Guide to the Expression of Uncertainty in Measurement, by means of a numerical procedure. The last chapter describes a device designed and realized during the PhD activity to substitute the commercial capacitive voltage divider belonging to the conditioning block of the measurement chain. This study aimed at providing an alternative to the transducer in use that could offer equivalent performance at lower cost. In this way, the economic impact of the investment associated with the whole measurement system would be significantly reduced, making the application of the method much more feasible.
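The combined uncertainty evaluation mentioned above lends itself to a Monte Carlo sketch in the spirit of the GUM's numerical procedure. The two-ended traveling-wave location formula, the nominal values, and the input uncertainties below are illustrative assumptions, not figures from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Assumed two-ended traveling-wave location formula: fault distance
# from terminal A on a line of length L is d = (L + v * (tA - tB)) / 2,
# where tA, tB are the transient arrival times at the two remote
# stations and v is the propagation velocity. All numbers below are
# invented for the example.
L = 10_000.0   # line length [m]
v = 2.9e8      # propagation velocity [m/s]
dt = -5.0e-6   # measured arrival-time difference tA - tB [s]

# Draw the input quantities from assumed distributions and propagate
# them numerically (GUM Supplement 1 style Monte Carlo).
v_s = rng.normal(v, 0.01 * v, N)    # 1% velocity uncertainty
dt_s = rng.normal(dt, 50e-9, N)     # 50 ns timing uncertainty
L_s = rng.normal(L, 1.0, N)         # 1 m length uncertainty

d = (L_s + v_s * dt_s) / 2.0        # propagated distance samples

print(f"estimated fault distance: {d.mean():.1f} m")
print(f"combined standard uncertainty: {d.std(ddof=1):.1f} m")
```

The sample standard deviation of the propagated distances plays the role of the combined standard uncertainty on the fault position.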

Relevance:

90.00%

Publisher:

Abstract:

Power electronic converters are extensively adopted to solve timely issues, such as power quality improvement in industrial plants, energy management in hybrid electrical systems, and control of electrical generators for renewables. Besides being nonlinear, these systems are typically characterized by hard constraints on the control inputs and, sometimes, on the state variables. In this respect, control laws able to handle input saturation are crucial to formally characterize the stability and performance properties of such systems. From a practical viewpoint, proper saturation management makes it possible to extend the transient and steady-state operating ranges of the systems, improving their reliability and availability. The main topic of this thesis concerns saturated control methodologies, based on modern approaches, applied to power electronic and electromechanical systems. The pursued objective is to provide formal results under any saturation scenario, overcoming the drawbacks of the classic solutions commonly applied to cope with saturation of power converters, and enhancing performance. For this purpose, two main approaches are exploited and extended to power electronic applications: modern anti-windup strategies, which provide formal results and systematic design rules for the anti-windup compensator devoted to handling control saturation, and "one-step" saturated feedback design techniques, which rely on a suitable characterization of the saturation nonlinearity and on less conservative extensions of standard absolute stability theory results. The first part of the thesis presents and develops a novel general anti-windup scheme, which is then specifically applied to a class of power converters adopted for power quality enhancement in industrial plants.
In the second part, a polytopic differential inclusion representation of the saturation nonlinearity is presented and extended to deal with a class of multiple-input power converters used to manage hybrid electrical energy sources. The third part regards the design of adaptive observers for robust estimation of the parameters required for high-performance control of power systems.
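As background to the anti-windup discussion, a minimal sketch of the classic back-calculation scheme may help fix ideas; note that the thesis develops more general, model-based compensators, and all gains and limits below are arbitrary illustrative choices:

```python
def pi_antiwindup(e_seq, kp, ki, kaw, u_min, u_max, dt):
    """Discrete PI controller with back-calculation anti-windup.

    Classic textbook scheme, for illustration only. The gain kaw
    feeds the saturation excess (u - u_unsat) back into the
    integrator, so the integral state stops winding up while the
    actuator is at its limit.
    """
    integ, out = 0.0, []
    for e in e_seq:
        u_unsat = kp * e + ki * integ
        u = min(max(u_unsat, u_min), u_max)        # input saturation
        integ += (e + kaw * (u - u_unsat)) * dt    # back-calculation term
        out.append(u)
    return out

# A large sustained error drives the output into saturation, but the
# back-calculation term keeps the integrator bounded.
u = pi_antiwindup([10.0] * 50, kp=1.0, ki=1.0, kaw=1.0,
                  u_min=-1.0, u_max=1.0, dt=0.1)
print(min(u), max(u))
```

Without the `kaw` term the integral state would grow without bound during saturation, producing large overshoot once the error changes sign; that is exactly the drawback the anti-windup compensator removes.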

Relevance:

80.00%

Publisher:

Abstract:

Flicker is a power quality phenomenon referring to the cyclic instability of light intensity caused by supply voltage fluctuation, which in turn can be caused by disturbances introduced during power generation, transmission, or distribution. The standard EN 61000-4-15, recently adopted by the IEEE as IEEE Standard 1453, relies on the analysis of the supply voltage, which is processed according to a suitable model of the lamp – human eye – brain chain. As for the lamp, an incandescent 60 W, 230 V, 50 Hz source is assumed. The human eye – brain model is represented by the so-called flicker curve, determined several years ago by statistically analyzing the results of tests in which people were subjected to flicker with different combinations of magnitude and frequency. This standard approach to flicker evaluation has essentially two limitations. First, the provided index of annoyance, Pst, can be related to actual tiredness of the human visual system only if such an incandescent lamp is used. Moreover, the implemented response to flicker is "subjective", given that it relies on people's answers about their feelings. In the last 15 years, many scientific contributions have tackled these issues by investigating the possibility of developing a novel model of the eye-brain response to flicker and of overcoming the strict dependence of the standard on the kind of light source. In this context, this thesis presents an important contribution towards a new flickermeter. An improved visual system model using a physiological parameter, namely the mean value of the pupil diameter, is presented, thus allowing a more "objective" representation of the response to flicker. The system used both to generate flicker and to measure the pupil diameter is illustrated, along with the results of several experiments performed on volunteers.
The intent has been to demonstrate that the measurement of this geometrical parameter can give reliable information about the response of the human visual system to light flicker.
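To illustrate the kind of signal processing a flickermeter starts from, the sketch below demodulates the relative voltage fluctuation of an amplitude-modulated mains waveform via half-cycle RMS. The lamp – eye – brain weighting filters and the statistical classifier that produce Pst in EN 61000-4-15 are deliberately omitted, and all parameters are invented for the example:

```python
import numpy as np

# A 50 Hz carrier is amplitude-modulated at a typical flicker
# frequency fm with relative depth m (= delta-V / V), then the
# fluctuation is recovered from the half-cycle RMS envelope.
fs, f0, fm, m = 10_000, 50.0, 8.8, 0.01  # sampling, mains, modulation, depth
t = np.arange(100_000) / fs              # 10 s of signal
v = (1.0 + 0.5 * m * np.cos(2 * np.pi * fm * t)) * np.sin(2 * np.pi * f0 * t)

half = int(fs / (2 * f0))                # samples per half mains cycle
rms = np.sqrt(np.mean(v[: len(v) // half * half].reshape(-1, half) ** 2, axis=1))
dv_over_v = (rms.max() - rms.min()) / rms.mean()
print(f"recovered relative fluctuation: {100 * dv_over_v:.2f} %")
```

The recovered fluctuation tracks the imposed modulation depth; a full flickermeter would weight this envelope by the flicker curve before computing the annoyance index.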

Relevance:

80.00%

Publisher:

Abstract:

The present work investigates qualitative aspects of products that fall outside the classic view of Italian food production, with the exception of the apricot, a fruit that has nevertheless been little studied by the methods considered here. The development of computer systems and of advanced software dedicated to the statistical processing of data has permitted the application of advanced technologies to the analysis of niche products as well. Near-infrared spectroscopic analysis has been applied in the chemical industry for over twenty years and was subsequently applied in the food industry with great success for non-destructive in-line and off-line analysis. The work presented below ranges from the use of spectroscopy for the determination of some rheological indices of ice cream to the characterization of the main quality indices of apricots and fresh dates, and to the determination of the production areas of pistachio. Alongside the spectroscopy, different methods of multivariate analysis are illustrated, either for spectra interpretation or for the construction of qualitative estimation models. The thesis is divided into four separate studies covering as many products. Each study is introduced by its own premise and ends with its own bibliography. These studies are preceded by a general discussion on the state of the art and the basics of NIR spectroscopy.
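As a flavour of the multivariate methods mentioned, the sketch below runs a principal component analysis, a standard first step in spectra interpretation, on synthetic two-component "spectra"; the data are fabricated for illustration and are not measured NIR spectra:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "NIR spectra": 40 samples x 200 wavelengths, built from
# two latent Gaussian absorption bands plus noise.
wl = np.linspace(1100, 2500, 200)  # wavelengths [nm]

def band(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

conc = rng.uniform(0, 1, (40, 2))  # latent "chemical" concentrations
X = conc @ np.vstack([band(1450, 60), band(1940, 80)])
X += 0.01 * rng.standard_normal(X.shape)

# PCA via SVD of the mean-centred spectra.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by first two PCs:", explained[:2].round(3))
```

Because the data are generated from two latent components, the first two principal components capture nearly all of the variance, which is exactly the kind of dimensionality reduction exploited before building qualitative estimation models.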

Relevance:

40.00%

Publisher:

Abstract:

In this dissertation, some novel indices for the vulnerability and robustness assessment of power grids are presented. Such indices are mainly defined from the structure of transmission power grids, with the aim of blackout (BO) prevention and mitigation. Numerical experiments showing how they could be used, alone or in coordination with pre-existing indices, to reduce the effects of BOs are discussed. These indices are introduced within three different subjects. The first subject looks into the economic aspects of grid operation and their effects on BO propagation. Basically, the simulations support the conclusion that the determination to operate the grid in the most profitable way can increase the size or frequency of BOs; conversely, some uneconomical ways of supplying energy are shown to be less affected by BO phenomena. In the second subject, new topological indices are devised to address the question of which buses are best suited for placing distributed generation. The combined use of two indices is shown to be a promising alternative for extracting the grid's significant features regarding robustness against BOs and distributed generation. For this purpose, a new index based on outage shift factors is used along with a previously defined electric centrality index. The third subject concerns the static robustness analysis of electric networks from a purely structural point of view. A pair of existing topological indices, namely the degree index and the clustering coefficient, are combined to show how degradation of the network structure can be accelerated. Blackout simulations were carried out using the DC power flow method and models of transmission networks from the USA and Europe.
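The degree index and the clustering coefficient used in the third subject can be computed directly from a bus connection list; a minimal sketch on an invented five-bus example (real studies use full transmission-network models):

```python
from collections import defaultdict

# Invented bus connection list for illustration.
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

def clustering(n):
    """Fraction of a bus's neighbour pairs that are also connected."""
    nb = adj[n]
    k = len(nb)
    if k < 2:
        return 0.0
    links = sum(1 for u in nb for v in nb if u < v and v in adj[u])
    return 2 * links / (k * (k - 1))

for n in sorted(adj):
    print(n, "degree:", len(adj[n]), "clustering:", round(clustering(n), 2))
```

Combining the two indices flags buses that are both highly connected and poorly "backed up" by redundant neighbour links, which is the structural signature exploited in the robustness analysis.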

Relevance:

30.00%

Publisher:

Abstract:

In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. In every area concerned with noise measurement, many standards are available, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard on the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent, and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the evaluation of uncertainties is the most important factor to deal with. Sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult, because the results are affected by systematic errors and standard deviations that depend on the number of microphones on the surface, their spatial positions, and the complexity of the sound field.
A statistical approach can give an overview of the differences between the sound power levels evaluated with different microphone arrays and an evaluation of the errors that afflict this kind of measurement. In contrast with the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing the results obtained from different microphone arrays.
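The basic ISO 3744-style computation, energy-averaging the microphone sound pressure levels and adding the measurement-surface term, can be sketched as follows; the levels and the geometry are invented for the example:

```python
import math

# Sound pressure levels [dB] at N = 10 microphone positions on a
# hemispherical measurement surface (illustrative values).
lp = [72.1, 71.5, 73.0, 72.4, 71.8, 72.9, 72.2, 71.6, 72.7, 72.0]
r = 2.0                                 # hemisphere radius [m]
S = 2 * math.pi * r**2                  # measurement surface area [m^2]

# Energy (not arithmetic) average of the levels, then the surface
# term with reference area S0 = 1 m^2.
lp_avg = 10 * math.log10(sum(10 ** (l / 10) for l in lp) / len(lp))
lw = lp_avg + 10 * math.log10(S / 1.0)
print(f"surface-average Lp = {lp_avg:.1f} dB, Lw = {lw:.1f} dB")
```

The energy average is what makes the result sensitive to how the microphones sample the sound field: two arrays with different positions generally yield slightly different surface averages, which is precisely the comparison problem the thesis addresses.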

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents several data processing and compression techniques capable of addressing the strict requirements of wireless sensor networks (WSNs). After a general overview of sensor networks, the energy problem is introduced, dividing the different energy reduction approaches according to the subsystem they try to optimize. To manage the complexity brought by these techniques, a quick overview of the most common middlewares for WSNs is given, describing in detail SPINE2, a framework for data processing in the node environment. The focus then shifts to in-network aggregation techniques, used to reduce the data sent by the network nodes in order to prolong the network lifetime as much as possible. Among the several techniques, the most promising approach is Compressive Sensing (CS). To investigate this technique, a practical implementation of the algorithm is compared against a simpler aggregation scheme, deriving a mixed algorithm able to successfully reduce the power consumption. The analysis then moves from compression implemented on single nodes to CS for signal ensembles, trying to exploit the correlations among sensors and nodes to improve compression and reconstruction quality. The two main techniques for signal ensembles, Distributed CS (DCS) and Kronecker CS (KCS), are introduced and compared against a common set of data gathered in real deployments. The best trade-off between reconstruction quality and power consumption is then investigated. The use of CS is also addressed when the signal of interest is sampled at a sub-Nyquist rate, evaluating the reconstruction performance. Finally, group sparsity CS (GS-CS) is compared to another well-known technique for the reconstruction of signals from a highly sub-sampled version. These two frameworks are again compared against a real data set, and an insightful analysis of the trade-off between reconstruction quality and lifetime is given.
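The core CS recovery step can be sketched with orthogonal matching pursuit, one common reconstruction algorithm; the thesis compares several schemes, and the sizes and Gaussian sensing matrix here are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# A k-sparse signal of length n is recovered from m < n random
# projections, mimicking the sensor-node compression step.
n, m, k = 128, 40, 4
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # sensing matrix
y = Phi @ x                                     # compressed measurements

# Orthogonal matching pursuit: greedily pick the column most
# correlated with the residual, then re-fit by least squares on the
# selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(n)
x_hat[support] = coef
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```

The energy saving comes from transmitting the m = 40 projections instead of the n = 128 samples; reconstruction happens at the sink, where power is not constrained.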

Relevance:

30.00%

Publisher:

Abstract:

The present dissertation aims to explore, theoretically and experimentally, the problems and the potential advantages of different types of power converters for "Smart Grid" applications, with particular emphasis on multilevel architectures, which are attracting rising interest even from industry. The models of the main multilevel architectures (Diode-Clamped and Cascaded) are presented, and the modulation strategies best suited to operation as a network interface are identified. In particular, the close correlation between the PWM (Pulse Width Modulation) approach and the SVM (Space Vector Modulation) approach is highlighted. An innovative multilevel topology called the MMC (Modular Multilevel Converter) is investigated, and the single-phase, three-phase and "back-to-back" configurations are analyzed. Specific control techniques that can appropriately manage the charge level of the numerous capacitors and handle the power flow in a flexible way are defined and experimentally validated. Another converter attracting interest in the "Power Conditioning Systems" field is the Matrix Converter. In this architecture too, the output voltage is multilevel. It offers a high-quality input current and a bidirectional power flow, and it makes it possible to control the input power factor (i.e. to participate in active and reactive power regulation). The implemented control system, which allows fast data acquisition for diagnostic purposes, is described and experimentally verified.
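One common multilevel modulation scheme, level-shifted (phase-disposition) carrier PWM for a three-level leg, can be sketched as follows; the parameters are illustrative, and this is not necessarily the exact strategy validated in the thesis:

```python
import numpy as np

# The sinusoidal reference is compared against two stacked triangular
# carriers, producing the output levels -1, 0, +1 (in per unit of the
# DC-link half voltage). All parameters are invented for the example.
fs, f0, fc, ma = 100_000, 50.0, 2_000.0, 0.9  # sample, mains, carrier, mod. index
t = np.arange(2000) / fs                       # one fundamental period
ref = ma * np.sin(2 * np.pi * f0 * t)

tri = 2 * np.abs((fc * t) % 1 - 0.5) * 2 - 1   # triangle wave in [-1, 1]
carrier_hi = (tri + 1) / 2                     # upper band, [0, 1]
carrier_lo = (tri - 1) / 2                     # lower band, [-1, 0]

# Three-level switching function.
level = np.where(ref > carrier_hi, 1, 0) + np.where(ref < carrier_lo, -1, 0)
print("levels used:", sorted(set(level.tolist())))
```

The stacked-carrier comparison generalizes directly to more levels by adding more carrier bands, which is why level-shifted PWM scales naturally to the diode-clamped and MMC topologies.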

Relevance:

30.00%

Publisher:

Abstract:

The soil carries out a wide range of functions, and it is important to study the effects of land use on soil quality in order to promote more sustainable practices. Three field trials were considered in order to assess soil quality and functionality after human alteration, and to determine the power of soil enzymatic activities, biochemical indices, and a mathematical model in the evaluation of soil status. The first field was characterized by conventional and organic management, in which tillage effects were also tested. The second was characterized by conventional, organic, and agro-ecological management. Finally, the third was a beech forest, where the effects of N deposition on soil organic carbon sequestration were tested. The results highlight that both enzyme activities and biochemical indices can be valid parameters for soil quality evaluation. Conventional management and plowing negatively affected soil quality and functionality, with intensive tillage leading to a decline in microbial biomass and activity. Both organic and agro-ecological management proved to be good practices for the maintenance of soil functionality, with better microbial activity and metabolic efficiency; this also positively affected the soil organic carbon content. At the eutrophic forest, enzyme activities and biochemical indices responded positively to the treatments, but one year of experimentation was not enough to observe a variation in soil organic carbon content. Mathematical models and biochemical indicators proved to be valid tools for assessing soil quality; nonetheless, it would be better to include the microbial component in the mathematical model and to consider more than one index if the aim of the work is to evaluate the overall soil quality and functionality.
In conclusion, the forest site is the richest in terms of organic carbon, microbial biomass, and activity, while the organic and agro-ecological managements appear to be the more sustainable options, although without taking the yield into consideration.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to explore the possible influence of the food matrix on food quality attributes. Using nuclear magnetic resonance techniques, the matrix-dependent properties of different foods were studied, and some useful indices were defined to classify food products based on the behaviour of the matrix in response to processing. Correlations were found between fish freshness indices, assessed by certain geometric parameters linked to the morphology of the animal, i.e. a macroscopic structure, and the degradation of the product structure. The same foodomics approach was also applied to explore the protective effect of modified atmospheres on the stability of fish fillets, which are typically susceptible to oxidation of the polyunsaturated fatty acids incorporated in the meat matrix. Here, freshness is assessed by evaluating the time-dependent change in the fish metabolome, providing an established freshness index, and its relationship to lipid oxidation. In vitro digestion studies, focusing on food products with different matrices, alone and in combination with other meal components (e.g. seasoning), were conducted to investigate possible interactions between enzymes and food, modulated by the matrix structure, which influence digestibility. The interaction between water and the gelatinous matrix of the food, consisting of a network of protein gels incorporating fat droplets, was also studied by means of nuclear magnetic relaxometry, in order to create a prediction tool for the correct classification of authentic and counterfeit food products protected by a quality label. This is one of the first applications of an NMR method focusing on the supramolecular structure of the matrix, rather than on the chemical composition, to assess food authenticity.
The effect of innovative processing technologies, such as PEF applied to fruit products, has been assessed by magnetic resonance imaging, exploiting the information associated with the rehydration kinetics of the modified food structure.
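A toy relaxometry step, estimating a transverse relaxation time T2 from a simulated mono-exponential decay, gives a feel for the kind of parameter such methods extract from the water-matrix interaction; real food-matrix data are multi-exponential and require inversion techniques, so this log-linear fit is only a sketch with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated CPMG-style echo decay with 1% multiplicative noise.
t = np.linspace(0.001, 0.5, 100)  # echo times [s]
T2_true, A = 0.08, 1.0
signal = A * np.exp(-t / T2_true) * (1 + 0.01 * rng.standard_normal(t.size))

# Log-linear least-squares fit: log(signal) = log(A) - t / T2.
slope, intercept = np.polyfit(t, np.log(signal), 1)
T2_est = -1.0 / slope
print(f"estimated T2 = {1000 * T2_est:.1f} ms")
```

In relaxometry-based authentication, distributions of such relaxation times (rather than a single T2) act as fingerprints of the supramolecular matrix structure.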