71 results for NONLINEAR OPTIMIZATION
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is the minimum current control. In the DTC the stator flux linkage reference is usually kept constant. Achieving the minimum current requires the control of the reference. An on-line method to perform the minimization of the current by controlling the stator flux linkage reference is presented.
Also, the control of the reference above the base speed is considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
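The initial rotor angle estimation described above fits inductance measurements taken in several directions to a model that is nonlinear in the rotor angle. The sketch below illustrates the idea under an assumed standard saliency model L(θ) = L0 + L2·cos(2(θ − θr)); the model form, the grid-scan solver and all numbers are illustrative assumptions, not the thesis's actual procedure.

```python
import math

def fit_rotor_angle(angles, inductances):
    """Fit L(theta) = L0 + L2*cos(2*(theta - theta_r)) to measurements.

    For a fixed candidate theta_r the model is linear in (L0, L2), so
    those are solved in closed form and theta_r is found by scanning a
    grid over [0, pi) -- a simple separable nonlinear least squares.
    """
    n = len(angles)
    best = (float("inf"), None, None, None)
    for k in range(1000):
        theta_r = math.pi * k / 1000.0
        x = [math.cos(2.0 * (a - theta_r)) for a in angles]
        sx = sum(x)
        sxx = sum(v * v for v in x)
        sy = sum(inductances)
        sxy = sum(v * y for v, y in zip(x, inductances))
        det = n * sxx - sx * sx
        if abs(det) < 1e-12:
            continue
        l2 = (n * sxy - sx * sy) / det
        if l2 < 0.0:           # enforce a positive saliency amplitude to
            continue           # remove the 90-degree sign ambiguity
        l0 = (sy - l2 * sx) / n
        sse = sum((y - l0 - l2 * v) ** 2 for v, y in zip(x, inductances))
        if sse < best[0]:
            best = (sse, theta_r, l0, l2)
    return best[1], best[2], best[3]

# Synthetic inductance measurements from a machine with theta_r = 0.7 rad
angles = [i * 2.0 * math.pi / 12 for i in range(12)]
meas = [5.0 + 1.0 * math.cos(2.0 * (a - 0.7)) for a in angles]
theta_r, l0, l2 = fit_rotor_angle(angles, meas)
```

Because the inductance varies with 2θ, the angle is recovered only modulo 180 electrical degrees; resolving the magnet polarity requires additional information.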
Abstract:
This dissertation is based on four articles dealing with the modeling of ozonation. The literature part of the thesis considers some models for hydrodynamics in bubble column simulation. A literature review of methods for obtaining mass transfer coefficients is presented. The methods presented are general models and can be applied to any gas-liquid system. Ozonation reaction models and methods for obtaining stoichiometric coefficients and reaction rate coefficients for ozonation reactions are discussed in the final section of the literature part. In the first article, ozone gas-liquid mass transfer into water in a bubble column was investigated for different pH values. A more general method for the estimation of the mass transfer coefficient and Henry's coefficient was developed from the Beltrán method. The ozone volumetric mass transfer coefficient and the Henry's coefficient were determined simultaneously by parameter estimation using a nonlinear optimization method. A minor dependence of the Henry's law constant on pH was detected in the pH range 4-9. In the second article, a new method using the axial dispersion model for the estimation of ozone self-decomposition kinetics in a semi-batch bubble column reactor was developed. The reaction rate coefficients for literature equations of ozone decomposition and the gas phase dispersion coefficient were estimated and compared with literature data. In the pH range 7-10, reaction orders of 1.12 with respect to ozone and 0.51 with respect to the hydroxyl ion were obtained, which is in good agreement with the literature. The model parameters were determined by parameter estimation using a nonlinear optimization method. Sensitivity analysis was conducted using the objective function method to obtain information about the reliability and identifiability of the estimated parameters.
In the third article, the reaction rate coefficients and the stoichiometric coefficients in the reaction of ozone with the model component p-nitrophenol were estimated at low pH using nonlinear optimization. A novel method for the estimation of multireaction model parameters in ozonation was developed. In this method the concentration of unknown intermediate compounds is represented as a residual COD (chemical oxygen demand), calculated from the measured COD and the theoretical COD of the known species. The decomposition rate of p-nitrophenol on the pathway producing hydroquinone was found to be about two times faster than on the pathway producing 4-nitrocatechol. In the fourth article, the reaction kinetics of p-nitrophenol ozonation was studied in a bubble column at pH 2. Using the new reaction kinetic model presented in the previous article, the reaction kinetic parameters - rate coefficients and stoichiometric coefficients - as well as the mass transfer coefficient were estimated by nonlinear estimation. The decomposition rate of p-nitrophenol was found to be equal on the pathway producing hydroquinone and on the pathway producing 4-nitrocatechol. Comparison of the rate coefficients with the case at initial pH 5 indicates that the p-nitrophenol degradation producing 4-nitrocatechol is more selective towards molecular ozone than the reaction producing hydroquinone. The identifiability and reliability of the estimated parameters were analyzed with the Markov chain Monte Carlo (MCMC) method. © All rights reserved. No part of the publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.
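The simultaneous estimation of a mass transfer coefficient and a saturation concentration by nonlinear least squares, as in the first article, can be sketched as follows. The simple first-order absorption model c(t) = c_sat·(1 − exp(−kLa·t)) and all numerical values are illustrative assumptions, not the article's actual model, which also involves Henry's coefficient and pH-dependent decomposition.

```python
import math

def estimate_kla(times, conc, kla_grid):
    """Least-squares fit of c(t) = c_sat * (1 - exp(-kla * t)).

    For each candidate kla the optimal c_sat has a closed form, so both
    parameters are estimated simultaneously by a 1-D grid scan -- a
    crude but transparent nonlinear least-squares procedure.
    """
    best = (float("inf"), None, None)
    for kla in kla_grid:
        f = [1.0 - math.exp(-kla * t) for t in times]
        denom = sum(v * v for v in f)
        if denom < 1e-12:
            continue
        c_sat = sum(v * c for v, c in zip(f, conc)) / denom
        sse = sum((c - c_sat * v) ** 2 for v, c in zip(f, conc))
        if sse < best[0]:
            best = (sse, kla, c_sat)
    return best[1], best[2]

# Synthetic dissolved-ozone data: kla = 0.05 1/s, c_sat = 8 mg/L
times = [5.0 * i for i in range(1, 13)]
conc = [8.0 * (1.0 - math.exp(-0.05 * t)) for t in times]
grid = [0.001 + 0.0001 * k for k in range(2000)]
kla, c_sat = estimate_kla(times, conc, grid)
```

In practice a gradient-based solver (e.g. Levenberg-Marquardt) would replace the grid scan, but the separable structure - one parameter linear, one nonlinear - is the same.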
Abstract:
Over the past couple of decades, considerably stronger steel grades have been developed, but their use has not become widespread at nearly the same pace. Besides the higher price, one significant reason for this is that designers often lack sufficient knowledge of the situations in which using a higher-strength steel grade yields a significant benefit. The situation is not helped by the fact that the standards in use offer no guidance at all for the use and dimensioning of the strongest steels, those with yield strengths above 700 MPa. This work aims to provide the designer with guidelines and rules of thumb for selecting a suitable strength class and profile, and for the use of higher-strength steel grades in general. Using a higher-strength steel grade allows the structure being designed to be lightened, yielding considerable weight savings. Often, however, stability criteria become the problem, since the local buckling resistance of steel depends strongly on its strength class: the stronger the steel, the more easily it buckles. Combined with the fact that an optimized structure made of stronger steel is in any case smaller and lighter, the deflection of a structure dimensioned for load-bearing capacity grows very quickly beyond the allowed limits as one moves to higher strength classes. The work therefore searches for ways to find a suitable compromise between strength and stiffness. Since shaping and cross-section have a major effect on both deflection and stability, different cross-section alternatives are examined and an optimal cross-section for a beam in bending is sought with the help of a mathematical optimization model. Once the different cross-section alternatives have been treated and optimized with respect to bending, the cross-sections are also studied under other load cases. Because of the considerable computational burden, Matlab is used for the optimization itself and Femap for studying the other load cases and verifying the results.
Abstract:
Environmental issues, including global warming, have been serious challenges recognized worldwide, and they have become particularly important for iron and steel manufacturers during the last decades. Many sites have been shut down in developed countries due to environmental regulation and pollution prevention, while a large number of production plants have been established in developing countries, which has changed the economy of this business. Sustainable development is a concept which today affects economic growth, environmental protection and social progress in setting up the basis for future ecosystems. A sustainable approach may attempt to preserve natural resources, recycle and reuse materials, prevent pollution, enhance yield and increase profitability. To achieve these objectives, numerous alternatives should be examined in sustainable process design. Conventional engineering work cannot address all of these alternatives effectively and efficiently to find an optimal processing route. A systematic framework is needed as a tool to guide designers to make decisions based on an overall concept of the system, identifying the key bottlenecks and opportunities that lead to an optimal design and operation of the system. Since the 1980s, researchers have made great efforts to develop tools for what today is referred to as Process Integration. Advanced mathematics has been used in simulation models to evaluate the various available alternatives considering physical, economic and environmental constraints. Improvements in feed material and operation, a competitive energy market, environmental restrictions and the role of Nordic steelworks as energy suppliers (electricity and district heat) provide strong motivation for integration among industries toward more sustainable operation, which could increase the overall energy efficiency and decrease environmental impacts.
In this study, a model is developed in several steps for primary steelmaking, with the Finnish steel sector as a reference, to evaluate future operation concepts of a steelmaking site with regard to sustainability. The research started with a study on the potential for increasing energy efficiency and reducing carbon dioxide emissions through the integration of steelworks with chemical plants, utilizing the off-gases available in the system as chemical products. These off-gases from the blast furnace, basic oxygen furnace and coke oven mainly consist of carbon monoxide, carbon dioxide, hydrogen, nitrogen and partially methane (in coke oven gas); they have a relatively low heating value but are currently used as fuel within these industries. A nonlinear optimization technique is used to assess integration with a methanol plant under novel blast furnace technologies and the (partial) substitution of coal with other reducing agents and fuels such as heavy oil, natural gas and biomass. The technical aspects of integration and its effect on blast furnace operation, regardless of the capital expenditure of new operational units, are studied to evaluate the feasibility of the idea behind the research. Later, the concept of a polygeneration system was added and a superstructure was generated with alternative routes for off-gas pretreatment and further utilization in a polygeneration system producing electricity, district heat and methanol. (Vacuum) pressure swing adsorption, membrane technology and chemical absorption for gas separation; partial oxidation, carbon dioxide and steam methane reforming for methane gasification; and gas and liquid phase methanol synthesis are the main alternative process units considered in the superstructure.
Due to the high degree of integration in process synthesis and the optimization techniques involved, equation-oriented modeling is chosen as an effective alternative to the previous sequential modeling strategy for analysing the suggested superstructure. A mixed-integer nonlinear programming model is developed to study the behavior of the integrated system under different economic and environmental scenarios. Net present value and specific carbon dioxide emissions are used to compare the economic and environmental aspects of the integrated system, respectively, for different fuel systems, alternative blast furnace reductants, the implementation of new blast furnace technologies, and carbon dioxide emission penalties. Sensitivity analysis, carbon distribution and the effect of external seasonal energy demand are investigated with different optimization techniques. This tool can provide useful information concerning techno-environmental and economic aspects for decision-making and estimate the optimal operational conditions of current and future primary steelmaking under alternative scenarios. The results of the work demonstrate that it is possible to develop steelmaking towards more sustainable operation in the future.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space.
The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
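Finding maxima of a Gaussian kernel density estimate, mentioned above in the context of visual object tracking, can be illustrated with the classical mean-shift fixed-point iteration; this is a generic one-dimensional sketch with made-up data, not the trust region Newton method the thesis actually develops.

```python
import math

def mean_shift_mode(data, start, bandwidth, iters=200):
    """Ascend to a local maximum of a 1-D Gaussian kernel density.

    Each step moves the point to the kernel-weighted mean of the data,
    a fixed-point iteration that converges to a mode of the density.
    """
    x = start
    for _ in range(iters):
        w = [math.exp(-((x - d) ** 2) / (2.0 * bandwidth ** 2)) for d in data]
        x_new = sum(wi * di for wi, di in zip(w, data)) / sum(w)
        if abs(x_new - x) < 1e-12:
            break
        x = x_new
    return x

# Two well-separated clusters -> two density modes
data = [-0.2, 0.0, 0.2, 4.8, 5.0, 5.2]
mode_low = mean_shift_mode(data, start=0.5, bandwidth=0.5)
mode_high = mean_shift_mode(data, start=4.5, bandwidth=0.5)
```

Ridge projection generalizes this idea: instead of ascending the full gradient to a point where it vanishes, one ascends only within the subspace of the smallest Hessian eigenvectors, which requires the second-order machinery the thesis develops.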
Abstract:
In any decision making under uncertainty, the goal is mostly to minimize the expected cost. The minimization of cost under uncertainty is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many models in practice contain complex and varying parameters that cannot easily be taken into account using the usual deterministic methods of optimization. Thus, it is very important to look for other methods that can be used to get insight into such models. The MCMC method is one of the practical methods that can be used for the optimization of stochastic models under uncertainty. This method is based on simulation, which provides a general methodology that can be applied to nonlinear and non-Gaussian state models. The MCMC method is very important for practical applications because it is a unified estimation procedure which simultaneously estimates both parameters and state variables. MCMC computes the distribution of the state variables and parameters given the data measurements. The MCMC method is also faster in terms of computing time when compared to other optimization methods. This thesis discusses the use of Markov chain Monte Carlo (MCMC) methods for the optimization of stochastic models under uncertainty. The thesis begins with a short discussion of Bayesian inference, MCMC and stochastic optimization methods. Then an example is given of how MCMC can be applied to maximize production at minimum cost in a chemical reaction process. It is observed that this method performs well in optimizing the given cost function with very high certainty.
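The chemical reaction example is specific to the thesis, but the general pattern - drawing MCMC samples of an uncertain parameter and then choosing the decision that minimizes the Monte Carlo estimate of the expected cost - can be sketched as below. The stand-in target density, the cost function and all numbers are hypothetical illustrations.

```python
import math
import random

random.seed(0)

def log_target(k):
    """Stand-in log-posterior for an uncertain rate parameter, k ~ N(2, 0.1^2)."""
    return -((k - 2.0) ** 2) / (2.0 * 0.1 ** 2)

# Random-walk Metropolis sampling of the uncertain parameter
samples, k = [], 2.0
for _ in range(5000):
    prop = k + random.gauss(0.0, 0.1)
    if math.log(random.random()) < log_target(prop) - log_target(k):
        k = prop
    samples.append(k)
samples = samples[1000:]          # discard burn-in

def expected_cost(u):
    """Monte Carlo estimate of E[(u - k)^2] + 0.1*u over the posterior samples."""
    return sum((u - s) ** 2 for s in samples) / len(samples) + 0.1 * u

# Choose the decision u minimizing the estimated expected cost
u_star = min((0.01 * i for i in range(401)), key=expected_cost)
```

For this quadratic cost the analytic optimum is u* = E[k] − 0.05, so the grid search should settle near 1.95; the value of the MCMC approach is that the same recipe works when neither the posterior nor the expected cost has a closed form.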
Abstract:
Summary: Optimization of the centrifugal separation of α-lactalbumin and β-lactoglobulin
Abstract:
One of the most central tasks in the statistical analysis of mathematical models is the estimation of the models' unknown parameters. This Master's thesis is concerned with the distributions of the unknown parameters and with numerical methods suitable for constructing them, especially in cases where the model is nonlinear with respect to the parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently grown in popularity, mainly because of increased computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify the validity of the methods. Of the recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theory part. The models describe chemical reactions and are expressed as ordinary differential equation systems. The models were collected from chemists at Lappeenranta University of Technology and Åbo Akademi University, Turku.
Abstract:
The aim of this Master's thesis was to improve the brightness development of the recycled pulp produced in the deinking process at Stora Enso Sachsen and to study the factors affecting it. The literature part of the thesis covered the repulping of recovered paper and the flotation deinking process, as well as the properties of recovered paper and its use as a raw material in the paper industry. The experimental part focused on optimizing the dosage of modified sodium silicate and on its effects under laboratory and process conditions, as well as on investigating the influence of the summer effect in repulping and in the different stages of flotation. In the laboratory study of sodium silicate it was found that the highest brightness with the relatively smallest laboratory flotation losses was achieved with the highest studied sodium silicate dosage, 1.1 %. A high sodium silicate dosage combined with a high hydrogen peroxide dosage, 0.5 %, and a high total alkalinity, 0.33 %, led to the highest pulp brightness and the smallest losses. Based on the laboratory study, trial runs with the modified sodium silicate were carried out in the process. With a sodium silicate dosage of about 1 %, a better pH buffering capacity, a smaller amount of calcium carbonate in the primary stages of flotation and a slightly higher pulp brightness were observed compared with the standard sodium silicate previously used in the process. In the summer effect study it was found that the summer effect has the greatest influence on the primary stage of pre-flotation, since the share of fibres in the primary stage is considerably larger than in the secondary stages. The difference between summer and winter in the maximum brightnesses achieved in laboratory flotations of pre-flotation primary stage pulps was about 1.5 %ISO. The summer effect was not found to have a great influence on the secondary stages of flotation.
Abstract:
Pumping is estimated to offer considerable potential for energy savings, both technically and economically. Globally, pumping consumes nearly 22 % of the energy demand of electric motors. In certain industrial sectors, even more than 50 % of the electrical energy used by motors may go to pumping. In wastewater pumping, pump operation is typically based on on-off control, so that when the pump is on it runs at full power. In many cases the pumps are also oversized. Together these factors lead to increased energy consumption. The theory part of the thesis presents the basics of wastewater management and wastewater treatment, as well as the main components of a pumping system: the pump, the piping, the motor and the frequency converter. The empirical part presents a calculation tool developed during the work, which can be used to assess the energy-saving potential of wastewater pumping systems. With the tool it is possible to calculate the energy-saving potential when the pump output is controlled by adjusting the rotational speed with a frequency converter instead of on-off control. The tool reports the optimal pump rotational speed and the specific energy consumption. Using the tool, three municipal wastewater pumping stations were studied. Laboratory tests were also carried out to simulate the tool and to assess the energy-saving potential. The studies show that wastewater pumping offers considerable potential for energy savings by reducing the pump's rotational speed. When the geodetic head is small, up to 50 % of the energy can be saved, and in the long term the savings can be significant. The results also confirm the need to optimize the operation of wastewater pumping systems.
Abstract:
In this thesis, the cleaning of ceramic filter media was studied. Mechanisms of fouling and dissolution of iron compounds, as well as methods for cleaning ceramic membranes fouled by iron deposits, were studied in the literature part. Cleaning agents and different methods were examined more closely in the experimental part of the thesis. Pyrite is found in geologic strata. It is oxidized to form ferrous ions Fe(II) and ferric ions Fe(III). Fe(III) is further oxidized in hydrolysis to form ferric hydroxide. Hematite and goethite, for instance, are naturally occurring iron oxides and hydroxides. In contact with filter media, they can cause severe fouling which common cleaning techniques are not competent enough to remove. Mechanisms for the dissolution of iron oxides include the ligand-promoted pathway and the proton-promoted pathway. The dissolution can also be reductive or non-reductive. The most efficient mechanism is the ligand-promoted reductive mechanism, which comprises two stages: the induction period and the autocatalytic dissolution. Reducing agents (such as hydroquinone and hydroxylamine hydrochloride), chelating agents (such as EDTA) and organic acids are used for the removal of iron compounds. Oxalic acid is the most effective known cleaning agent for iron deposits. Since formulations are often more effective than organic acids, reducing agents or chelating agents alone, the citrate-bicarbonate-dithionite system, among others, is well studied in the literature. The cleaning is also enhanced with ultrasound and backpulsing. In the experimental part, oxalic acid and nitric acid were studied alone and in combinations. Citric acid and ascorbic acid, among other chemicals, were also tested. Soaking experiments, experiments with ultrasound and experiments on alternative methods for applying the cleaning solution to the filter samples were carried out. Permeability and ISO brightness measurements were performed to examine the influence of the cleaning methods on the samples.
Inductively coupled plasma optical emission spectroscopy (ICP-OES) analysis of the solutions was carried out to determine the dissolved metals.
Abstract:
The study focuses on international diversification from the perspective of a Finnish investor. The second objective of the study is to examine whether new covariance matrix estimators improve the optimization process of the minimum-variance portfolio. In addition to the ordinary sample covariance matrix, two shrinkage estimators and a flexible multivariate GARCH(1,1) model are used in the optimization. The data consist of Dow Jones industry indices and the OMX-H portfolio index. The international diversification strategy is implemented using an industry approach, and the portfolio is optimized using twelve components. The data cover the years 1996-2005, i.e. 120 monthly observations. The performance of the constructed portfolios is measured with the Sharpe ratio. According to the results, there is no statistically significant difference between the risk-adjusted returns of internationally diversified investments and the domestic portfolio. Nor does the use of the new covariance matrix estimators provide statistically significant added value compared with portfolio optimization based on the sample covariance matrix.
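A minimum-variance optimization of the kind described can be sketched for two assets, where the weights have a closed form; the covariance numbers and the simple shrinkage toward an average-variance diagonal target are illustrative stand-ins, not the actual estimators studied (Ledoit-Wolf-type shrinkage and multivariate GARCH), and the real twelve-component problem requires a numerical solver.

```python
def min_variance_weights(var1, var2, cov12):
    """Closed-form minimum-variance weights for two assets.

    Minimizes w'Sw subject to w1 + w2 = 1:
    w1 = (var2 - cov12) / (var1 + var2 - 2*cov12)
    """
    w1 = (var2 - cov12) / (var1 + var2 - 2.0 * cov12)
    return w1, 1.0 - w1

def shrink(var1, var2, cov12, delta):
    """Shrink the sample covariance toward the average-variance diagonal target."""
    avg = 0.5 * (var1 + var2)
    return ((1 - delta) * var1 + delta * avg,
            (1 - delta) * var2 + delta * avg,
            (1 - delta) * cov12)

# Illustrative monthly return (co)variances
v1, v2, c12 = 0.04, 0.09, 0.012
w_sample = min_variance_weights(v1, v2, c12)
w_shrunk = min_variance_weights(*shrink(v1, v2, c12, delta=0.3))
```

Note how shrinkage pulls the weights toward the equal-weight portfolio: damping estimation noise in the covariance matrix is exactly the motivation for the shrinkage estimators tested in the study.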
Abstract:
An alternative to the Pareto-dominance relation is proposed. The new relation is based on ranking a set of solutions according to each separate objective and on an aggregation function that calculates a scalar fitness value for each solution. The relation is called ranking-dominance, and it tries to tackle the curse of dimensionality commonly observed in evolutionary multi-objective optimization. Ranking-dominance can be used to sort a set of solutions even for a large number of objectives, when the Pareto-dominance relation can no longer distinguish solutions from one another. This permits the search to advance even with a large number of objectives. It is also shown that ranking-dominance does not violate Pareto-dominance. Results indicate that selection based on ranking-dominance is able to advance the search towards the Pareto front in some cases where selection based on Pareto-dominance stagnates. However, in some cases it is also possible that the search does not proceed in the direction of the Pareto front, because the ranking-dominance relation permits the deterioration of individual objectives. Results also show that when the number of objectives increases, selection based on just Pareto-dominance without diversity maintenance is able to advance the search better than with diversity maintenance. Therefore, diversity maintenance aggravates the curse of dimensionality.
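The ranking-dominance relation described above can be sketched as follows. All objectives are assumed to be minimized; the per-objective ranking and the sum aggregation follow the description, while the concrete rank definition (count of strictly better solutions) and the example solutions are illustrative choices.

```python
def pareto_dominates(a, b):
    """True if a is at least as good in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def rank_sum(sol, population):
    """Aggregate fitness: over all objectives, sum the solution's rank,
    where the rank is the number of population members strictly better."""
    m = len(sol)
    return sum(sum(1 for p in population if p[i] < sol[i]) for i in range(m))

def ranking_dominates(a, b, population):
    """a ranking-dominates b when its aggregated rank is strictly smaller."""
    return rank_sum(a, population) < rank_sum(b, population)

# Two mutually Pareto-non-dominated solutions that ranking-dominance can still order
pop = [(1.0, 5.0, 5.0), (2.0, 2.0, 2.0)]
a, b = pop
```

Here neither solution Pareto-dominates the other, yet the second has the smaller rank sum and is preferred; because ranks are population-relative, selection by this relation can trade a worse value in one objective for better ranks elsewhere, which is the deterioration effect the abstract points out.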