54 results for Constraint based modelling
Abstract:
Machines can often be divided into subsystems: control and regulation systems, force-producing actuators, and force-transmitting mechanisms. The individual subsystems have been simulated with computer-aided methods for several decades, but coupling the subsystems together is a more recent development. In mechanism modelling, for example, the force produced by an actuator is often described as a constant or as a prescribed function of time. Correspondingly, in actuator analysis the load transmitted by the mechanism to the actuator is described as a constant force or as a time-dependent duty-cycle load. When the subsystems are separated from each other in this way, the interactions between them can be examined only very inaccurately, and accounting for the effect of a subsystem on the behaviour of the whole system is difficult. Numerical modelling methods particularly suited to computers have been developed for the dynamics of mechanisms. Most of these methods are based on the Lagrangian approach, which allows the model to be formulated in freely chosen coordinate variables. To enable a numerical solution, the system of differential-algebraic equations produced by the method has to be manipulated, for example by differentiating the constraint equations twice. In the original numerical solution of the method, all generalized coordinates describing the mechanism are integrated at every time step. In methods derived from this basic approach, either the independent generalized coordinates are integrated and the dependent coordinates are solved from the constraint equations, or the size of the equation system is reduced, for example by using different rotational coordinates in the velocity and acceleration analyses than in the position analysis. Most integration methods were originally intended for solving ordinary differential equations (ODE), so the algebraic constraint equations describing the joints that are appended to the system may cause problems. Correcting the violations of the joint constraints, i.e. stabilization, is crucial for the success of mechanism dynamics simulation and for the correctness of the results. The principle of virtual work used in deriving the modelling methods assumes that the constraint forces do no work, i.e. that no displacement against the constraints occurs. Especially in longer analyses of complex systems, the joint constraints are not satisfied exactly. The energy balance of the system is then violated and virtual energy accumulates in the system, which breaks the principle of virtual work, and the results are therefore no longer valid. This report examines different types of modelling and solution methods and compares their performance in the numerical solution of simple mechanisms. The methods are evaluated in terms of solution efficiency, satisfaction of the joint constraints, and preservation of the energy balance.
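For reference, the constrained equations of motion and the constraint stabilization discussed above can be sketched in their standard textbook form; these are generic formulations consistent with the abstract, not equations taken from the report itself:

M(q)\,\ddot{q} + \Phi_q^T(q,t)\,\lambda = Q(q,\dot{q},t), \qquad \Phi(q,t) = 0.

Differentiating the constraint equations twice gives the acceleration-level form \Phi_q \ddot{q} = \gamma(q,\dot{q},t), and a Baumgarte-type stabilization replaces it with

\Phi_q \ddot{q} = \gamma - 2\alpha\,\dot{\Phi} - \beta^2\,\Phi,

where \alpha, \beta > 0 are stabilization parameters chosen so that the constraint violation \Phi and its rate \dot{\Phi} are driven back toward zero during the integration.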
Abstract:
This master's thesis belongs to the field of telecommunication network planning research and, at its core, addresses network modelling. Telecommunication network planning is a complex and demanding problem that involves intricate and time-consuming tasks. This thesis introduces a "multilayer network model" whose purpose is to help network planners cope with the complexity of the problems and to reduce the time spent on network planning. The multilayer network model is based on generic objects that are common to all telecommunication networks. This makes the model applicable to arbitrary networks, regardless of network-specific properties or the technologies used to implement the network. The model defines a precise terminology and uses three concepts: plane separation, layering, and partitioning. These concepts are described in detail in this work. The internal structure and functionality of the multilayer network model are defined using the Unified Modelling Language (UML) notation. This work presents the use case, package and class diagrams of the model. The thesis also presents the results obtained by comparing the multilayer network model with other network models. The results show that the multilayer network model has advantages over the other models.
Abstract:
Despite the high degree of automation in the turning industry, a few key problems prevent the complete automation of turning. One of these problems is tool wear. This work focuses on implementing an automatic machine-vision system for measuring tool wear, in particular flank wear. The wear measurement system removes the need for manual measurement and minimizes the time spent on measuring tool wear. In addition to the measurement, the modelling and prediction of wear are studied. The automatic measurement system was placed inside a lathe and was successfully integrated with external systems. The experiments showed that the measurement system is capable of measuring tool wear in the real operating environment of the system. The measurement system can also tolerate the disturbances that are common to machine-vision systems. Tool wear modelling was studied with several different methods, including neural networks and support vector regression. The experiments showed that the studied models were able to predict the degree of tool wear from the machining time used. The best results were obtained with neural networks using Bayesian regularization.
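As an illustration of one of the regression approaches mentioned above, the sketch below fits a support vector regression model to wear-versus-machining-time data with scikit-learn. The data points, kernel and hyperparameters are hypothetical assumptions for illustration, not values used in the thesis.

```python
# Illustrative sketch: predicting flank wear from machining time with SVR.
# The measurements and hyperparameters below are hypothetical, not from the thesis.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical data: cumulative machining time (min) vs. flank wear (mm)
time_min = np.array([[2], [5], [10], [15], [20], [25], [30], [35]])
wear_mm = np.array([0.02, 0.05, 0.09, 0.12, 0.14, 0.17, 0.21, 0.26])

# Scale the input and fit an RBF-kernel support vector regressor
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(time_min, wear_mm)

# Predict the wear level after 40 minutes of machining
print(model.predict([[40]]))
```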
Abstract:
Rosin is a natural product from pine forests and it is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca- and Ca/Mg-resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear increase in solution viscosity during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line monitoring of the resinate process. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation step, causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and the solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Because the reaction temperature is lower than in the traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
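A minimal sketch of the multivariate calibration step described above, using partial least squares regression to relate spectra to acid value. The synthetic spectra, number of latent components and data shapes are illustrative assumptions only and merely stand in for real MIR/NIR absorbance data.

```python
# Illustrative PLS calibration sketch: predicting acid value from spectra.
# The synthetic data below only stand in for real MIR/NIR absorbance spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 60, 400
X = rng.normal(size=(n_samples, n_wavenumbers))   # stand-in absorbance spectra
# Hypothetical "acid value" depending on a few spectral channels plus noise
y = 3.0 * X[:, 50] - 1.5 * X[:, 200] + rng.normal(scale=0.1, size=n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=5)
pls.fit(X_train, y_train)
print("R^2 on held-out spectra:", pls.score(X_test, y_test))
```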
Abstract:
Fine powders of minerals are commonly used in the paper and paint industries and for ceramics. Research into utilizing different waste materials in these applications is environmentally important. In this work, the ultrafine grinding of two waste gypsum materials, namely FGD (Flue Gas Desulphurisation) gypsum and phosphogypsum from a phosphoric acid plant, with an attrition bead mill and with a jet mill has been studied. The objective of this research was to test the suitability of the attrition bead mill and of the jet mill for producing gypsum powders with a particle size of a few microns. The grinding conditions were optimised by studying the influences of different operational grinding parameters on the grinding rate and on the energy consumption of the process, in order to achieve a product fineness such as that required in the paper industry with as low an energy consumption as possible. Based on the experimental results, the most influential parameters in attrition grinding were found to be the bead size, the stirrer type and the stirring speed. The best conditions for the attrition grinding process, judged by product fineness and the specific energy consumption of grinding, are to grind the material with small grinding beads and a high rotational speed of the stirrer. In addition, with a suitable grinding additive a finer product is achieved at a lower energy consumption. In jet mill grinding the most influential parameters were the feed rate, the volumetric flow rate of the grinding air, and the height of the internal classification tube. The optimised condition for the jet mill is to grind with a small feed rate and a large volumetric flow rate of grinding air when the internal classification tube is set low. A finer product at a higher production rate was achieved with the attrition bead mill than with the jet mill; thus attrition grinding is better suited to the ultrafine grinding of gypsum than jet grinding. Finally, the suitability of the population balance model for simulating the grinding processes was studied with different S, B and C functions. A new S function for modelling an attrition mill and a new C function for modelling a jet mill were developed. The suitability of the selected models with the developed grinding functions was tested by fitting the particle size distributions of the grinding products and then comparing the fitted size distributions with the measured particle sizes. According to the simulation results, the models are suitable for the estimation and simulation of the studied grinding processes.
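For reference, batch grinding population balance models of the kind referred to above are commonly written with a selection function S_i and a breakage distribution function b_{ij}; the form below is the standard size-discretized equation, not the specific S, B and C functions developed in the work:

\frac{dm_i(t)}{dt} = -S_i\, m_i(t) + \sum_{j=1}^{i-1} b_{ij}\, S_j\, m_j(t),

where m_i is the mass fraction in size class i, S_i the specific breakage rate of that class, and b_{ij} the fraction of breakage products from class j that reports to class i.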
Abstract:
Synchronous motors are used mainly in large drives, for example in ship propulsion systems and in the rolling mills of steel factories, because of their high efficiency, high overload capacity and good performance in the field weakening range. This, however, requires an extremely good torque control system. A fast torque response and good torque accuracy are basic requirements for such a drive. For large-power, high dynamic performance drives, the commonly known principle of field oriented vector control has hitherto been used almost exclusively, but nowadays it is not the only way to implement such a drive. A new control method, Direct Torque Control (DTC), has also emerged. The performance of a high quality torque control such as DTC in dynamically demanding industrial applications is based mainly on an accurate estimate of the space vectors of the various flux linkages. Industrial motor control systems today are real-time applications with restricted calculation capacity. At the same time, the control system requires a simple, quickly calculable and reasonably accurate motor model. In this work a method to handle these problems in a Direct Torque Controlled (DTC) salient pole synchronous motor drive is proposed. A motor model which combines the induction law based "voltage model" and the motor inductance parameter based "current model" is presented. The voltage model operates as the main model and is calculated at a very fast sampling rate (for example 40 kHz). The stator flux linkage calculated by integrating the stator voltages is corrected using the stator flux linkage computed from the current model. The current model acts as a supervisor whose only task is to prevent the motor stator flux linkage from drifting erroneously over longer time intervals. At very low speeds the role of the current model is emphasised, but the voltage model nevertheless always remains the main model. At higher speeds the function of the current model correction is to act as a stabiliser of the control system. The current model contains a set of inductance parameters which must be known. The validity of the current model in steady state is not self-evident; it depends on the accuracy of the saturated values of the inductances. A parameter measurement procedure for the motor model, in which the supply inverter is used as the measurement signal generator, is presented. This so-called identification run can be performed prior to delivery or during drive commissioning. A method for deriving the inductance models used to represent the saturation effects is proposed. The performance of an electrically excited synchronous motor supplied with the DTC inverter is proven with experimental results. It is shown that good static accuracy of the DTC torque controller can be obtained for an electrically excited synchronous motor. The dynamic response is fast and a new operating point is reached without oscillation. The operation is stable throughout the speed range. Modelling of the magnetising inductance saturation is essential, and cross saturation has to be considered as well; its effect is very significant. A DTC inverter can be used as measuring equipment, and the parameters needed for the motor model can be determined by the inverter itself. The main advantage is that the parameters are then measured under similar magnetic operating conditions, so no disagreement between the parameters will exist. The inductance models generated are adequate to meet the requirements of dynamically demanding drives.
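The voltage-model/current-model combination described above can be summarized with the standard stator flux linkage relations; the correction scheme itself is specific to the work, and the equations below are only the textbook building blocks (damper winding currents neglected for brevity):

voltage model: \psi_{s,u} = \int \left( u_s - R_s\, i_s \right) dt,

current model (rotor d,q coordinates of a salient pole machine): \psi_d = L_d i_d + L_{md}\, i_f, \qquad \psi_q = L_q i_q.

The fast, integration-based estimate \psi_{s,u} drives the instantaneous torque control, while a slow correction toward the current-model estimate removes the drift caused by integration offsets and parameter errors.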
Abstract:
The present thesis is focused on minimizing the experimental effort needed for the prediction of pollutant propagation in rivers by means of mathematical modelling and knowledge re-use. The mathematical modelling is based on the well-known advection-dispersion equation, while the knowledge re-use approach employs the methods of case based reasoning, graphical analysis and text mining. The contribution of the thesis to the pollutant transport research field consists of: (1) analytical and numerical models for pollutant transport prediction; (2) two novel techniques which enable the use of variable parameters along rivers in analytical models; (3) models for the estimation of the characteristic pollutant transport parameters (velocity, dispersion coefficient and nutrient transformation rates) as functions of water flow, channel characteristics and/or seasonality; (4) a graphical analysis method for the identification of pollution sources along rivers; (5) a case based reasoning tool for the identification of crucial information related to pollutant transport modelling; and (6) the application of a software tool for the re-use of information in pollutant transport modelling research. These support tools are applicable both in water quality research and in practice, as they can be involved in multiple activities. The models are capable of predicting pollutant propagation along rivers in cases of both ordinary pollution and accidents. They can also be applied to other, similar rivers for modelling pollutant transport where little experimental concentration data is available, because the parameter estimation models developed in the present thesis enable the calculation of the characteristic transport parameters as functions of river hydraulic parameters and/or seasonality. The similarity between rivers is assessed using case based reasoning tools, and additional necessary information can be identified using the software for information re-use. Such systems provide support for users and open up possibilities for new modelling methods, monitoring facilities and better river water quality management tools. They are also useful for estimating the environmental impact of possible technological changes and can be applied in the pre-design stage and/or in the practical operation of processes.
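The advection-dispersion equation referred to above, extended with a first-order transformation term, has the familiar one-dimensional form (shown here only to fix notation; the thesis-specific models with variable parameters build on it):

\frac{\partial C}{\partial t} + u\,\frac{\partial C}{\partial x} = D\,\frac{\partial^2 C}{\partial x^2} - kC,

where C is the pollutant concentration, u the flow velocity, D the longitudinal dispersion coefficient and k a first-order transformation rate.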
Abstract:
This work presents models and methods that have been used in producing forecasts of population growth. The work is intended to emphasize the reliability bounds of the model forecasts. The Leslie model and various versions of logistic population models are presented, with references to the literature and to several studies. A great deal of relevant methodology has been developed in the biological sciences. The Leslie modelling approach makes use of current trends in mortality, fertility, migration and emigration. The model treats the population divided into age groups, and it is given as a recursive system. Another group of models is based on straightforward extrapolation of census data: trajectories of a simple exponential growth function and of logistic models are used to produce the forecast. The work presents the basics of Leslie-type modelling and of the logistic models, including multi-parameter logistic functions. The latter model is also analysed from the model reliability point of view. A Bayesian approach and the MCMC method are used to create error bounds for the model predictions.
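A minimal sketch of a Leslie-type projection of the kind described above, where the population is split into age groups and projected forward as a recursive system; the fertility and survival values below are invented purely for illustration.

```python
# Illustrative Leslie matrix projection (three age groups).
# Fertility and survival values are hypothetical, not from any census data.
import numpy as np

fertility = [0.0, 1.2, 0.8]    # offspring per individual in each age group
survival = [0.9, 0.7]          # survival from group i to group i+1

L = np.zeros((3, 3))
L[0, :] = fertility            # first row: reproduction
L[1, 0] = survival[0]          # sub-diagonal: ageing/survival
L[2, 1] = survival[1]

population = np.array([1000.0, 800.0, 500.0])
for step in range(5):          # project five time steps forward
    population = L @ population
print(population)
```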
Abstract:
Calcium oxide looping is a carbon dioxide sequestration technique that utilizes the partially reversible reaction between limestone and carbon dioxide in two interconnected fluidised beds, the carbonator and the calciner. Flue gases from a combustor are fed into the carbonator, where calcium oxide reacts with the carbon dioxide in the gases at a temperature of 650 °C. The calcium oxide is converted into calcium carbonate, which is circulated to the regenerative calciner, where the calcium carbonate is converted back into calcium oxide and a stream of pure carbon dioxide at a higher temperature of 950 °C. Calcium oxide looping has been shown to have a low impact on the overall process efficiency and could easily be retrofitted into existing power plants. This master's thesis was carried out as part of the EU-funded project CaOling, as a contribution to the Lappeenranta University of Technology deliverable on reactor modelling and scale-up tools. The thesis concentrates on creating the first model frame and on identifying the physically relevant phenomena governing the process.
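The partially reversible reaction that the process exploits can be summarized as

CaO(s) + CO_2(g) \rightleftharpoons CaCO_3(s),

driven forward (carbonation, exothermic) in the carbonator at about 650 °C and in reverse (calcination, endothermic) in the calciner at about 950 °C, as stated above.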
Abstract:
Crystallization is a purification method used to obtain a crystalline product of a certain crystal size. It is one of the oldest industrial unit processes and is commonly used in modern industry because of its good purification capability from rather impure solutions with reasonably low energy consumption. However, the process is extremely challenging to model and control, because it involves inhomogeneous mixing and many simultaneous phenomena, such as nucleation, crystal growth and agglomeration. All of these phenomena depend on supersaturation, i.e. the difference between the actual liquid phase concentration and the solubility. Homogeneous mass and heat transfer in the crystallizer would greatly simplify the modelling and control of crystallization processes; such conditions are, however, not the reality, especially in industrial-scale processes. Consequently, the hydrodynamics of crystallizers, i.e. the combination of mixing, feed and product removal flows, and recycling of the suspension, needs to be thoroughly investigated. An understanding of hydrodynamics is important in crystallization, especially in larger-scale equipment where uniform flow conditions are difficult to attain. It is also important to understand the different size scales of mixing: micro-, meso- and macromixing. Fast processes, such as nucleation and chemical reactions, typically depend strongly on micro- and mesomixing, but macromixing, which equalizes the concentrations of all species within the entire crystallizer, cannot be disregarded. This study investigates the influence of hydrodynamics on crystallization processes. Modelling of crystallizers with the mixed suspension mixed product removal (MSMPR) theory (ideal mixing), computational fluid dynamics (CFD) and a compartmental multiblock model is compared, and the importance of proper verification of the CFD and multiblock models is demonstrated. In addition, the influence of different hydrodynamic conditions on the control of a reactive crystallization process is studied. Finally, the effect of extreme local supersaturation is studied using power ultrasound to initiate nucleation. The present work shows that mixing and chemical feeding conditions clearly affect induction time and cluster formation, nucleation, growth kinetics and agglomeration. Consequently, the properties of crystalline end products, e.g. crystal size and crystal habit, can be influenced by the management of mixing and feeding conditions. Impurities may have varying impacts on crystallization processes. As an example, manganese ions were shown to replace magnesium ions in the crystal lattice of magnesium sulphate heptahydrate, increasing the crystal growth rate significantly, whereas sodium ions showed no interaction at all. Modelling of continuous crystallization based on the MSMPR theory showed that the model is feasible in a small laboratory-scale crystallizer, whereas in larger pilot- and industrial-scale crystallizers hydrodynamic effects should be taken into account. For that reason, CFD and multiblock modelling are shown to be effective tools for modelling crystallization with inhomogeneous mixing. The present work also shows that the selection of the measurement point, or points in the case of multiprobe systems, is crucial when process analytical technology (PAT) is used to control larger-scale crystallization. The thesis concludes by describing how control of local supersaturation by highly localized ultrasound was successfully applied to induce nucleation and to control polymorphism in the reactive crystallization of L-glutamic acid.
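For reference, the ideal-mixing MSMPR theory mentioned above predicts, for size-independent growth and negligible agglomeration and breakage, the steady-state crystal population density (a textbook result, stated here only to make the modelling comparison concrete):

n(L) = n^0 \exp\!\left(-\frac{L}{G\tau}\right),

where n^0 is the population density of nuclei, G the crystal growth rate and \tau the mean residence time; supersaturation enters through its effect on the nucleation rate B^0 = n^0 G and on G.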
Abstract:
The results shown in this thesis are based on selected publications from the 2000s. The work was carried out in several national and EC-funded public research projects and in close cooperation with industrial partners. The main objective of the thesis was to study and quantify the most important phenomena of circulating fluidized bed combustors by developing and applying proper experimental and modelling methods using laboratory-scale equipment. An understanding of these phenomena plays an essential role in developing the combustion and emission performance, availability and controls of CFB boilers. Experimental procedures for studying fuel combustion behaviour under CFB conditions are presented in the thesis. Steady state and dynamic measurements under well controlled conditions were carried out to produce the data needed for the development of high efficiency, utility scale CFB technology. The importance of combustion control and furnace dynamics is emphasized when CFB boilers are scaled up with a once-through steam cycle. Qualitative information on fuel combustion characteristics was obtained directly by comparing flue gas oxygen responses during impulse change experiments with the fuel feed. A one-dimensional, time-dependent model was developed to analyse the measurement data. Emission formation was studied in combination with fuel combustion behaviour. Correlations were developed for NO, N2O, CO and char loading as functions of temperature and oxygen concentration in the bed area. An online method to characterize char loading under CFB conditions was developed and validated with pilot-scale CFB tests. Finally, a new method to control the air and fuel feeds in CFB combustion was introduced. The method is based on models and on an analysis of the fluctuation of the flue gas oxygen concentration. The effect of high oxygen concentrations on fuel combustion behaviour was also studied to evaluate the potential of CFB boilers for applying oxygen-firing technology to CCS. In future studies it will be necessary to go through the whole scale-up chain, from laboratory phenomena devices through pilot-scale test rigs to large-scale commercial boilers, in order to validate the applicability and scalability of the results. This thesis covers the chain between the laboratory-scale phenomena test rig (bench scale) and the CFB process test rig (pilot scale). CFB technology has been scaled up successfully from industrial scale to utility scale during the last decade. The work presented in the thesis has, for its part, supported this development by producing new detailed information on combustion under CFB conditions.
Abstract:
Over recent years, the development of mobile working machines has concentrated on reducing emissions, owing to tightening regulations and the need to improve energy utilization and reduce power losses. This study focuses on energy utilization and regeneration in an electro-hydraulic forklift, a lifting equipment application. The study starts with the modelling and simulation of the hydraulic forklift. Energy regeneration from the potential energy of the load was studied. A flow-based electric motor speed control was also proposed in this thesis as an alternative to the throttle control method or variable displacement pump control. Topics for further development in the future are discussed. Finally, a summary and conclusions are presented.
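As a rough illustration of the energy figures involved in potential-energy regeneration of the kind studied above, the sketch below estimates the recoverable energy when a load is lowered; the mass, height and efficiency values are hypothetical assumptions, not values from the thesis.

```python
# Illustrative estimate of regenerable energy when lowering a forklift load.
# All numerical values are hypothetical, not taken from the thesis.
g = 9.81            # gravitational acceleration, m/s^2
mass = 1500.0       # load mass, kg
height = 3.0        # lowering height, m
efficiency = 0.6    # assumed overall hydraulic + electrical recovery efficiency

potential_energy = mass * g * height          # J, available potential energy
recovered_energy = efficiency * potential_energy

print(f"Available: {potential_energy / 1000:.1f} kJ, "
      f"recovered: {recovered_energy / 1000:.1f} kJ")
```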
Abstract:
This work is devoted to the analysis of signal variation in the Cross-Direction and Machine-Direction measurements from a paper web. The data in our possession come from a real paper machine. The goal of the work is to reconstruct the basis weight structure of the paper and to predict its behaviour into the future. The resulting synthetic data are needed for the simulation of the paper web. The main idea used for describing the basis weight variation in the Cross-Direction is the Empirical Orthogonal Functions (EOF) algorithm, which is closely related to the Principal Component Analysis (PCA) method. Forecasting the signal in time is based on time-series analysis. The two principal mathematical procedures used in the work are Autoregressive Moving Average (ARMA) modelling and the Ornstein–Uhlenbeck (OU) process.
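A minimal sketch of two of the procedures named above: extracting EOFs of cross-direction profiles by singular value decomposition and fitting an ARMA model to a machine-direction signal. The data are synthetic and the model orders are assumptions; the sketch is not the thesis implementation.

```python
# Illustrative EOF extraction (via SVD) and ARMA fit on synthetic web data.
# The profile matrix and the ARMA orders are assumptions for illustration only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
profiles = rng.normal(size=(200, 50))   # 200 scans x 50 cross-direction positions

# EOF / PCA: centre the data and take the SVD; rows of vt are the EOF patterns,
# and u * s gives the corresponding time-varying amplitudes.
centred = profiles - profiles.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
leading_eof = vt[0]
amplitude = u[:, 0] * s[0]

# Machine-direction forecasting: fit an ARMA(2,1) model to the leading amplitude
arma = ARIMA(amplitude, order=(2, 0, 1)).fit()
print(arma.forecast(steps=10))
```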
Abstract:
This thesis presents a three-dimensional, semi-empirical, steady state model for simulating the combustion, gasification and formation of emissions in circulating fluidized bed (CFB) processes. In a large-scale CFB furnace, the local feeding of fuel, air and other input materials, as well as the limited mixing rate of the different reactants, produce inhomogeneous process conditions. To simulate the real conditions, the furnace should be modelled three-dimensionally or the three-dimensional effects should be taken into account. The only methods available for simulating large CFB furnaces three-dimensionally are semi-empirical models, which apply a relatively coarse calculation mesh and a combination of fundamental conservation equations, theoretical models and empirical correlations. The number of such models is extremely small. The main objective of this work was to achieve a model which can be applied to calculating industrial-scale CFB boilers and which can simulate all the essential sub-phenomena: fluid dynamics, reactions, the attrition of particles and heat transfer. The core of the work was to develop the model frame and the required sub-models for determining the combustion and sorbent reactions. The objective was reached, and the developed model was successfully used for studying various industrial-scale CFB boilers combusting different types of fuel. The model for sorbent reactions, which includes the main reactions of calcitic limestones, was applied to studying the possible new phenomena occurring in oxygen-fired combustion. The presented combustion and sorbent models and principles can be utilized in other model approaches as well, including other empirical and semi-empirical approaches and CFD-based simulations. The main achievement is the overall model frame, which can be utilized for the further development and testing of new sub-models and theories, and for concentrating the knowledge gathered from experimental work carried out on bench-scale, pilot-scale and industrial-scale apparatus and from the computational work performed with other modelling methods.
Abstract:
The objective of this master's thesis is to investigate the loss behaviour of a three-level ANPC inverter and to compare it with a conventional NPC inverter. Both inverters are controlled with a mature space vector modulation (SVM) strategy. To make the comparison, sufficiently accurate and detailed NPC and ANPC simulation models had to be obtained; the same SVM control model is used for both the NPC and the ANPC inverter models. The principles of the control algorithms and the structure and description of the models are explained. The power loss calculation model is based on practical calculation approaches with certain assumptions. The comparison between the NPC and ANPC topologies is presented on the basis of the results obtained for each semiconductor device, their switching and conduction losses, and the efficiency of the inverters. The alternative switching states of the ANPC topology allow the losses to be distributed among the switches more evenly than in the NPC inverter. The losses of a switching device naturally depend on its position in the topology. Distributing the losses among the components in the ANPC topology reduces the stress on certain switches, so the losses are spread more equally among the semiconductors; the overall efficiency of the two inverters is, however, the same. As a new contribution to earlier studies, models of the SVM control and of the NPC and ANPC inverters have been built, so this thesis can be used in further, more complex modelling of full-power converters for modern multi-megawatt wind energy conversion systems.
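Practical loss calculation approaches of the kind mentioned above typically combine per-device conduction and switching loss terms. The sketch below shows such a calculation for a single IGBT with hypothetical datasheet-style values; the actual loss model and parameters used in the thesis may differ.

```python
# Illustrative per-device loss estimate for an IGBT: conduction + switching.
# Datasheet-style parameters below are hypothetical, not from the thesis.

# Conduction: threshold voltage plus on-state resistance model
v_ce0 = 0.9        # V, on-state threshold voltage
r_ce = 2.0e-3      # ohm, on-state slope resistance
i_avg = 120.0      # A, average device current over a fundamental period
i_rms = 180.0      # A, RMS device current over a fundamental period
p_cond = v_ce0 * i_avg + r_ce * i_rms**2

# Switching: energies scaled linearly from the datasheet reference point
f_sw = 2500.0                # Hz, switching frequency seen by this device
e_on, e_off = 45e-3, 60e-3   # J, turn-on/turn-off energies at reference point
i_ref, v_ref = 450.0, 600.0  # A, V, datasheet reference conditions
i_sw, v_dc = 150.0, 540.0    # A, V, actual switched current and DC-link voltage
p_sw = f_sw * (e_on + e_off) * (i_sw / i_ref) * (v_dc / v_ref)

print(f"Conduction: {p_cond:.0f} W, switching: {p_sw:.0f} W, "
      f"total: {p_cond + p_sw:.0f} W")
```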