954 results for Robust estimation
Abstract:
This thesis presents a set of methods and models for estimating iron and slag flows in the blast furnace hearth and taphole. The main focus was on predicting taphole flow patterns and estimating the effects of various taphole conditions on the drainage behavior of the blast furnace hearth. All models were based on a general understanding of the typical tap cycle of an industrial blast furnace, and some were evaluated on short-term process data from the reference furnace. A computational fluid dynamics (CFD) model was built and applied to simulate the complicated hearth flows and thus to predict the regions of the hearth exposed to erosion under various operating conditions. Key boundary variables of the CFD model were provided by a simplified drainage model based on first principles. By examining the evolution of liquid outflow rates measured at the furnace studied, the drainage model was improved to include the effects of taphole diameter and length. The estimated slag delays showed good agreement with the observed ones. The liquid flows in the taphole were further studied using two different models, and the results of both indicated that separated flow of iron and slag in the taphole is more likely when the liquid outflow rates are comparable during tapping. The drainage process was simulated with an integrated model based on an overall balance analysis: the high in-furnace overpressure can compensate for the resistances induced by the liquid flows in the hearth and through the taphole. Finally, a multiphase CFD model including interfacial forces between immiscible liquids was developed, and both the actual iron-slag system and a laboratory-scale water-oil system were simulated. The model was demonstrated to be a useful tool for simulating hearth flows and for gaining understanding of the complex phenomena in blast furnace drainage.
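The first-principles flavor of such a drainage model can be illustrated with a back-of-the-envelope orifice-flow relation (a generic sketch with invented numbers, not the thesis model): the taphole outflow rate driven by the in-furnace overpressure scales as Q = Cd · A · sqrt(2 ΔP / ρ), which already captures the taphole-diameter effect studied above.

```python
import math

# Generic orifice-type taphole outflow sketch; Cd, diameter, overpressure and
# liquid density below are illustrative values, not thesis data.
def taphole_flow(d_taphole, dp, rho, cd=0.7):
    area = math.pi * (d_taphole / 2) ** 2          # taphole cross-section, m^2
    return cd * area * math.sqrt(2 * dp / rho)     # volumetric flow, m^3/s

# Hypothetical values: 70 mm taphole, 1.2 bar overpressure, liquid iron.
q_iron = taphole_flow(0.070, 1.2e5, 7000.0)
```

A larger taphole diameter or higher overpressure increases the outflow rate, in line with the diameter effect the improved drainage model accounts for.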
Abstract:
Design of flight control laws, verification of performance predictions, and implementation of flight simulations are tasks that require a mathematical model of the aircraft dynamics. These dynamical models are characterized by coefficients (aerodynamic derivatives) whose values must be determined from flight tests. This work outlines the use of the Extended Kalman Filter (EKF) for obtaining the aerodynamic derivatives of an aircraft. The EKF shows several advantages over the more traditional least-squares (LS) method. Among these, the most important are: there are no restrictions on linearity or on the form in which the parameters appear in the mathematical model describing the system, and the parameters are not required to be time invariant. The EKF uses the statistical properties of the process and observation noise to produce estimates based on the mean square error of the estimates themselves, whereas LS minimizes a cost function based on the plant output behavior. Results for the estimation of some longitudinal aerodynamic derivatives from simulated data are presented.
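The mechanics of EKF parameter estimation can be sketched by state augmentation (a hedged miniature, not the aircraft model: a scalar system x' = -a·x with unknown decay rate `a` stands in for an aerodynamic-derivative problem; all values are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
dt, a_true, steps = 0.02, 1.5, 200
R = 1e-6                       # measurement noise variance
Q = np.diag([1e-8, 1e-5])      # process noise; small drift keeps `a` adaptable

# Simulate noisy measurements of the true system.
x, zs = 1.0, []
for _ in range(steps):
    x += dt * (-a_true * x)
    zs.append(x + rng.normal(0, np.sqrt(R)))

# EKF on the augmented state s = [x, a]; the parameter guess starts far off.
s = np.array([1.0, 0.5])
P = np.diag([1e-4, 1.0])
H = np.array([[1.0, 0.0]])
for z in zs:
    F = np.array([[1.0 - dt * s[1], -dt * s[0]],   # Jacobian of the transition
                  [0.0, 1.0]])
    s = np.array([s[0] + dt * (-s[1] * s[0]), s[1]])
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                            # innovation variance
    K = P @ H.T / S                                # Kalman gain
    s = s + (K * (z - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

a_hat = s[1]
```

Note how the linearity restriction of LS disappears: the parameter enters the transition nonlinearly (through the product a·x) and is simply linearized in the Jacobian at each step.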
Abstract:
This paper deals with the use of the conjugate gradient method of function estimation for the simultaneous identification of two unknown boundary heat fluxes in parallel plate channels. The fluid flow is assumed to be laminar and hydrodynamically developed. Temperature measurements taken inside the channel are used in the inverse analysis. The accuracy of the present solution approach is examined using simulated measurements containing random errors, for strict test cases in which the unknown functions involve discontinuities and sharp corners. Three different types of inverse problems are addressed, involving the estimation of: (i) spatially dependent heat fluxes; (ii) time-dependent heat fluxes; and (iii) time- and spatially dependent heat fluxes.
Abstract:
In this work, we present the solution of a class of linear inverse heat conduction problems for the estimation of unknown heat source terms, with no prior information on the functional forms of the timewise and spatial dependence of the source strength, using the conjugate gradient method with an adjoint problem. After describing the mathematical formulation of a general direct problem and the procedure for the solution of the inverse problem, we show applications to three transient heat transfer problems: a one-dimensional cylindrical problem, a two-dimensional cylindrical problem, and a one-dimensional problem with two plates.
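The conjugate gradient approach to such linear inverse problems can be sketched on a discretized toy problem (operator and data invented, not the thesis formulation): the unknown source is found by minimizing the squared data misfit with conjugate gradient iterations on the normal equations, with early stopping acting as regularization.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Hypothetical smoothing forward operator A (row-normalized Gaussian kernel),
# standing in for a discretized linear heat-conduction forward problem.
x = np.linspace(0, 1, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005)
A /= A.sum(axis=1, keepdims=True)

q_true = np.where((x > 0.3) & (x < 0.6), 1.0, 0.0)   # step-like source term
T = A @ q_true + rng.normal(0, 1e-3, n)              # noisy "measurements"

# Conjugate gradient on the normal equations: minimize ||A q - T||^2.
q = np.zeros(n)
r = A.T @ (T - A @ q)     # negative gradient (normal-equation residual)
d = r.copy()
for _ in range(30):       # early stopping limits noise amplification
    Ad = A @ d
    alpha = (r @ r) / (Ad @ Ad)
    q += alpha * d
    r_new = r - alpha * (A.T @ Ad)
    beta = (r_new @ r_new) / (r @ r)
    d = r_new + beta * d
    r = r_new

err = np.linalg.norm(q - q_true) / np.linalg.norm(q_true)
```

The step-like `q_true` mirrors the strict test cases with discontinuities used in these inverse analyses.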
Abstract:
Wind power is a low-carbon form of energy production that reduces society's dependence on fossil fuels. Finland has adopted wind energy production into its climate change mitigation policy, which has led to changes in legislation and guidelines, the allocation of regional wind power areas, and the establishment of a feed-in tariff. Wind power production has indeed accelerated in Finland after two decades of relatively slow growth (for instance, from 2010 to 2011 wind energy production increased by 64%), but there is still a long way to the national goal of 6 TWh by 2020. This thesis introduces a GIS-based decision-support methodology for the preliminary identification of suitable areas for wind energy production, including estimation of their level of risk. The goal of this study was to define the least risky places for wind energy development within the Kemiönsaari municipality in Southwest Finland. Spatial multicriteria decision analysis (SMCDA) has been used for searching for suitable wind power areas, along with many other location-allocation problems. SMCDA scrutinizes complex, ill-structured decision problems in a GIS environment using constraints and evaluation criteria, which are aggregated using weighted linear combination (WLC). Weights for the evaluation criteria were acquired using the analytic hierarchy process (AHP) with nine expert interviews. Subsequently, feasible alternatives were ranked in order to provide a recommendation, and finally a sensitivity analysis was conducted to determine the robustness of the recommendation. The first study aim was to scrutinize the suitability and necessity of the existing data for this SMCDA study. Most of the available data sets were of sufficient resolution and quality. Input data necessity was evaluated qualitatively for each data set based on, e.g., constraint coverage and attribute weights. Attribute quality was estimated mainly qualitatively in terms of attribute comprehensiveness, operationality, measurability, completeness, decomposability, minimality and redundancy. The most significant quality issue was redundancy, as interdependencies are not tolerated by WLC and AHP does not include measures to detect them. The third aim was to define the least risky areas for wind power development within the study area. The two highest-ranking areas were Nordanå-Lövböle and Påvalsby, followed by Helgeboda, Degerdal, Pungböle, Björkboda, and Östanå-Labböle. The fourth aim was to assess the reliability of the recommendation; the top two areas proved robust, whereas the others were more sensitive.
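The AHP-plus-WLC machinery can be sketched compactly (criteria names, comparison values and scores below are invented, not those of the thesis): AHP weights come from the principal eigenvector of a pairwise comparison matrix, and WLC aggregates standardized criterion scores with those weights.

```python
import numpy as np

# Hypothetical 3-criterion pairwise comparison matrix (Saaty's 1-9 scale),
# e.g. wind speed vs grid distance vs noise impact.
C = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

# Principal right eigenvector of C gives the AHP priority weights.
vals, vecs = np.linalg.eig(C)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Consistency ratio: CI = (lambda_max - n)/(n - 1), RI = 0.58 for n = 3.
lam = np.max(np.real(vals))
CR = (lam - 3) / 2 / 0.58          # should be < 0.1 for acceptable consistency

# WLC: aggregate standardized criterion scores (rows = candidate areas).
scores = np.array([[0.9, 0.6, 0.8],
                   [0.5, 0.9, 0.4]])
suitability = scores @ w
```

The WLC step also shows why criterion redundancy is damaging: a duplicated criterion is effectively counted twice in the weighted sum, and the consistency ratio above does not detect it.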
Abstract:
The papermaking industry has been continuously developing intelligent solutions to characterize its raw materials, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Thanks to much-improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study focuses on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and the formation of a fiber suspension that is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee the product quality. In order to evaluate the properties of fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows computation of fiber length and curl index, correlating well with the ground-truth values. The bubble detection method, the second contribution, was developed to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately, and the solution enabled just-in-time process termination, whereas the accurate estimation of bubble size categories remained challenging. As the third contribution, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground-truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. Together, these four contributions assist in developing integrated, factory-level vision-based process control.
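One fiber property mentioned above, the curl index, admits a compact sketch (a common definition, contour length over end-to-end distance minus one, is assumed here; the thesis framework itself extracts fibers from microscopic images):

```python
import numpy as np

def curl_index(points):
    """Curl index of a fiber centerline given as a polyline of (x, y) points."""
    pts = np.asarray(points, dtype=float)
    seg = np.diff(pts, axis=0)
    contour = np.sqrt((seg ** 2).sum(axis=1)).sum()   # summed segment lengths
    chord = np.linalg.norm(pts[-1] - pts[0])          # end-to-end distance
    return contour / chord - 1.0

straight = [(0, 0), (1, 0), (2, 0)]   # straight fiber: curl index 0
bent = [(0, 0), (1, 1), (2, 0)]       # bent fiber: curl index > 0
```

In an image-based pipeline, `points` would come from skeletonizing a segmented fiber; the polyline inputs here are purely illustrative.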
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested, and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered, and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also used in a chemical state estimation study to retrieve stratospheric temperature profiles. The main result of this dissertation is the formulation of likelihood calculations via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis. In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters related, e.g., to the model error covariance matrix.
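The likelihood-via-Kalman-filter idea can be sketched on a toy scalar AR(1) model (a hypothetical miniature; the dissertation targets large chaotic systems): filter innovations yield the marginal likelihood of the data for a given parameter, which a Metropolis sampler then explores.

```python
import numpy as np

rng = np.random.default_rng(2)
a_true, q, r, n = 0.8, 0.1, 0.1, 500      # AR coefficient, noises, data length
x, zs = 0.0, []
for _ in range(n):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    zs.append(x + rng.normal(0, np.sqrt(r)))

def kf_loglik(a):
    # Kalman filter innovations give the marginal log-likelihood of the data.
    m, P, ll = 0.0, 1.0, 0.0
    for z in zs:
        m, P = a * m, a * a * P + q          # predict
        S = P + r                            # innovation variance
        ll += -0.5 * (np.log(2 * np.pi * S) + (z - m) ** 2 / S)
        K = P / S
        m, P = m + K * (z - m), (1 - K) * P  # update
    return ll

# Random-walk Metropolis over the parameter, using the filter likelihood.
a, ll, chain = 0.5, kf_loglik(0.5), []
for _ in range(2000):
    prop = a + rng.normal(0, 0.05)
    ll_prop = kf_loglik(prop)
    if np.log(rng.uniform()) < ll_prop - ll:  # flat prior assumed
        a, ll = prop, ll_prop
    chain.append(a)
a_post = np.mean(chain[500:])
```

In the dissertation's setting the scalar filter is replaced by large-scale (ensemble/variational) Kalman filtering, but the innovation-based likelihood plays the same role.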
Abstract:
The recent emergence of low-cost RGB-D sensors has brought new opportunities for robotics by providing affordable devices that deliver synchronized images with both color and depth information. In this thesis, recent work on pose estimation utilizing RGB-D sensors is reviewed, and a pose recognition system for rigid objects using RGB-D data is implemented. The implementation uses half-edge primitives extracted from the RGB-D images for pose estimation. The system is based on the probabilistic object representation framework by Detry et al., which utilizes Nonparametric Belief Propagation for pose inference. Experiments are performed on household objects to evaluate the performance and robustness of the system.
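As a small illustration of what RGB-D data provides (a generic pinhole back-projection, not part of the thesis implementation; the intrinsics and depth values are typical invented numbers):

```python
import numpy as np

# Back-project a depth image into a 3-D point cloud with pinhole intrinsics.
fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5   # hypothetical VGA intrinsics
depth = np.full((480, 640), 1.5)              # fake depth image, metres

v, u = np.indices(depth.shape)                # pixel row/column grids
z = depth
x = (u - cx) / fx * z
y = (v - cy) / fy * z
cloud = np.stack([x, y, z], axis=-1).reshape(-1, 3)   # one 3-D point per pixel
```

Primitives such as the half-edges used in the thesis are extracted from exactly this kind of registered color-plus-depth data.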
Abstract:
After introducing the no-cloning theorem and the most common forms of approximate quantum cloning, universal quantum cloning is considered in detail. Its connections with the universal NOT gate, quantum cryptography and state estimation are presented and briefly discussed. The state estimation connection is used to show that the amount of extractable classical information and the total Bloch vector length are conserved in universal quantum cloning. The 1 → 2 qubit cloner is also shown to obey a complementarity relation between local and nonlocal information. These are interpreted as a consequence of the conservation of total information in cloning. Finally, the performance of the 1 → M cloning network discovered by Bužek, Hillery and Knight is studied in the presence of decoherence using the approach of Barenco et al., where random phase fluctuations are attached to 2-qubit gates. The expression for the average fidelity is calculated for three cases and is found to depend on the optimal fidelity and the average of the phase fluctuations in a specific way; this is conjectured to be the form of the average fidelity in the general case. While the cloning network is found to be rather robust, it is nevertheless argued that the scalability of the quantum network implementation is poor, by studying the effect of decoherence during the preparation of the initial state of the cloning machine in the 1 → 2 case and observing that the loss in average fidelity can be large. This affirms the result of Maruyama and Knight, who reached the same conclusion in a slightly different manner.
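For context, the optimal fidelity referred to above is the Gisin-Massar universal cloning fidelity, quoted here for reference:

```latex
F_{N\to M} = \frac{M(N+1)+N}{M(N+2)}, \qquad
F_{1\to 2} = \frac{5}{6}, \qquad
F_{1\to M} = \frac{2}{3} + \frac{1}{3M} \xrightarrow{\,M\to\infty\,} \frac{2}{3},
```

where the M → ∞ limit coincides with the optimal state-estimation fidelity for a single qubit, which is the state estimation connection mentioned above.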
Abstract:
More discussion is required on how, and which types of, biomass should be used to achieve a significant reduction in the carbon load released into the atmosphere in the short term. The energy sector is one of the largest greenhouse gas (GHG) emitters, and thus its role in climate change mitigation is important. Replacing fossil fuels with biomass has been a simple way to reduce carbon emissions because the carbon bonded to biomass is considered carbon neutral. With this in mind, this thesis has the following objectives: (1) to study the significance of the different GHG emission sources related to energy production from peat and biomass, (2) to explore opportunities to develop more climate-friendly biomass energy options, and (3) to discuss the importance of the biogenic emissions of biomass systems. The discussion on biogenic carbon and other GHG emissions comprises four case studies, of which two consider peat utilization, one forest biomass, and one cultivated biomasses. Various biomass types (peat, pine logs and forest residues, palm oil, rapeseed oil and jatropha oil) are used as examples to demonstrate the importance of biogenic carbon to life-cycle GHG emissions. The biogenic carbon emissions of biomass are defined as the difference in the carbon stock between the utilization and non-utilization scenarios of the biomass. Forestry-drained peatlands were studied by using the high emission values of the peatland types in question to discuss the emission reduction potential of the peatlands. The results are presented in terms of global warming potential (GWP) values. Based on the results, the climate impact of peat production can be reduced by selecting high-emission-level peatlands for peat production. The comparison of two different types of forest biomass in integrated ethanol production at a pulp mill shows that the type of forest biomass affects the biogenic carbon emissions of biofuel production. The assessment of cultivated biomasses demonstrates that several choices made in the production chain significantly affect the GHG emissions of biofuels. The emissions caused by a biofuel can exceed the emissions from fossil-based fuels in the short term if biomass is in part consumed in the process itself and does not end up in the final product. Including biogenic carbon and other land-use carbon emissions in the carbon footprint calculations of biofuels reveals the importance of the time frame and of the efficiency with which the biomass carbon content is utilized. As regards the climate impact of biomass energy use, the key issue is the net impact on carbon stocks (in the organic matter of soils and biomass) compared to the impact of the replaced energy source. Promoting renewable biomass regardless of biogenic GHG emissions can increase GHG emissions in the short term, and possibly also in the long term.
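The stock-difference definition of biogenic carbon emissions can be made concrete with a toy calculation (all numbers invented) that also shows why the assessment time frame matters:

```python
# Carbon stocks (t C/ha) under the two scenarios, at two time horizons:
# without harvest the stock keeps growing; with harvest it recovers slowly.
stock_no_use = {20: 100.0, 100: 120.0}   # biomass left in place
stock_use = {20: 40.0, 100: 110.0}       # harvested, then regrowth

# Biogenic emission = stock difference between non-use and use scenarios.
biogenic = {h: stock_no_use[h] - stock_use[h] for h in (20, 100)}
for h, e in biogenic.items():
    print(f"{h}-year horizon: biogenic emission {e:.0f} t C/ha")
```

In this invented example the short-term biogenic emission is large while the long-term one is small, mirroring the abstract's point that short-term impacts can exceed those of the replaced fossil fuel.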
Abstract:
Growing concerns about toxicity and the development of resistance against synthetic herbicides have prompted the search for alternative weed management approaches. Allelopathy has gained sufficient support and potential for sustainable weed management. Aqueous extracts of six plant species (sunflower, rice, mulberry, maize, brassica and sorghum) in different combinations, alone or mixed with a 75%-reduced dose of herbicides, were evaluated for two consecutive years under field conditions. A weedy check and S-metolachlor with atrazine (pre-emergence) and atrazine alone (post-emergence) at recommended rates were included for comparison. Weed dynamics, maize growth indices and yield estimation were determined following standard procedures. All aqueous plant extract combinations suppressed weed growth and biomass. Moreover, the suppressive effect was more pronounced when the aqueous plant extracts were supplemented with reduced doses of herbicides. The brassica-sunflower-sorghum combination suppressed weeds by 74-80, 78-70 and 65-68% during the two years of study, which was similar to S-metolachlor with a half dose of atrazine and to a full dose of atrazine alone. Crop growth rate and dry matter accumulation attained peak values of 32.68 and 1,502 g m-2 d-1 for the brassica-sunflower-sorghum combination at 60 and 75 days after sowing, respectively. Curve-fitting regression for growth and yield traits indicated a strong positive correlation with grain yield and a negative correlation with weed dry biomass under allelopathic weed management in the maize crop.
Abstract:
In this thesis, the main point of interest is the robust control of a DC/DC converter. The use of reactive components in power conversion gives rise to dynamical effects in DC/DC converters, and these dynamical effects mandate the use of active control. Active control uses measurements from the converter to correct errors present in the converter's output. The controller needs to be able to perform in the presence of varying component values, different kinds of load disturbances, and measurement noise. Such a feature in control design is referred to as robustness. This thesis also contains a survey of the general properties of DC/DC converters and their effects on control design. A linear robust control design method is studied, and a robust controller is then designed and applied to the current control of a phase-shifted full-bridge converter. The experimental results are shown to match simulations.
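The idea of active control correcting output errors under load disturbances can be illustrated with a toy loop (a plain PI controller on a crude first-order converter model, not the robust design method of the thesis; all gains and model values are invented):

```python
import numpy as np

dt, tau, gain = 1e-4, 1e-3, 10.0   # time step, plant time constant, plant gain
kp, ki = 0.5, 400.0                # PI gains, hand-tuned for this toy model
ref = 2.0                          # current reference, amperes

i, integ, hist = 0.0, 0.0, []
for k in range(3000):
    dist = 0.5 if k > 1500 else 0.0          # load disturbance at mid-run
    err = ref - i
    integ += err * dt
    u = kp * err + ki * integ                # PI control law
    # Euler step of the first-order plant: tau * di/dt = -i + gain*u - dist
    i += dt * (-i + gain * u - dist) / tau
    hist.append(i)
final = np.mean(hist[-200:])
```

The integral term drives the steady-state error back to zero after the disturbance; a robust design additionally guarantees such behavior across component-value variations, which this toy loop does not address.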
A simple model for the estimation of congenital malformation frequency in racially mixed populations
Abstract:
A simple model is proposed, using the method of maximum likelihood, to estimate malformation frequencies in racial groups based on data obtained from hospital services. The model uses the proportions of racial admixture and the observed malformation frequency. It was applied to two defects, postaxial polydactyly and cleft lip, whose frequencies are recognized to be heterogeneous among racial groups. The frequencies estimated in each racial group were those expected for these malformations, which demonstrates the applicability of the method.
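A toy numerical sketch of the approach (invented data, two groups; not the paper's actual model or data): the stratum-level malformation frequency is written as an admixture-weighted mixture of group frequencies, and the group frequencies are recovered by binomial maximum likelihood.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
f1_true, f2_true = 0.002, 0.010               # hypothetical group frequencies
p = np.array([0.9, 0.7, 0.5, 0.3, 0.1])      # admixture proportion per stratum
n = np.full(5, 200_000)                       # births observed per stratum
k = rng.binomial(n, p * f1_true + (1 - p) * f2_true)   # malformation counts

def nll(theta):
    # Binomial negative log-likelihood under f = p*f1 + (1-p)*f2.
    f1, f2 = theta
    f = p * f1 + (1 - p) * f2
    return -(k * np.log(f) + (n - k) * np.log(1 - f)).sum()

res = minimize(nll, x0=[0.005, 0.005], bounds=[(1e-6, 0.5)] * 2)
f1_hat, f2_hat = res.x
```

The estimates recover the group-specific frequencies even though no stratum is racially homogeneous, which is the point of using the admixture proportions.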
Abstract:
The use of the limiting dilution assay (LDA) for assessing the frequency of responders in a cell population is a method extensively used by immunologists. A series of studies addressing the statistical method of choice in an LDA have been published. However, none of these studies has addressed the question of how many wells should be employed in a given assay. The objective of this study was to demonstrate how a researcher can predict the number of wells that should be employed in order to obtain results with a given accuracy, and therefore to help in choosing an experimental design that fulfills one's expectations. We present the rationale underlying the computation of the expected relative error, based on simple binomial distributions. A series of simulated in machina experiments were performed to test the validity of the a priori computation of expected errors, confirming the predictions. A step-by-step procedure for the relative error estimation is given. We also discuss the constraints under which an LDA must be performed.
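The dependence of the relative error on the number of wells can be sketched with a simulation under the standard single-hit Poisson model (all numbers invented; the paper's own computation is analytic, via binomial distributions):

```python
import numpy as np

rng = np.random.default_rng(4)
f_true = 1 / 500          # true responder frequency
dose = 300                # cells plated per well

def rel_error(n_wells, n_sim=2000):
    # P(well is negative) under the single-hit Poisson model.
    p_neg = np.exp(-f_true * dose)
    neg = rng.binomial(n_wells, p_neg, size=n_sim)   # simulated assays
    neg = np.clip(neg, 1, n_wells - 1)               # avoid log(0) edge cases
    f_hat = -np.log(neg / n_wells) / dose            # frequency estimate
    return np.std(f_hat) / f_true                    # relative error

few, many = rel_error(24), rel_error(96)
```

Quadrupling the number of wells roughly halves the relative error, which is the kind of a priori trade-off the paper's procedure lets a researcher compute before running the assay.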
Abstract:
The aim of this work is to apply approximate Bayesian computation in combination with Markov chain Monte Carlo (MCMC) methods in order to estimate the parameters of tuberculosis transmission. The methods are applied to San Francisco data, and the results are compared with the outcomes of previous works. Moreover, a methodological idea aimed at reducing computational time is described. Although this approach is shown to work appropriately, further analysis is needed to understand and test its behaviour in different cases. Suggestions for its further enhancement are described in the corresponding chapter.
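A minimal ABC-MCMC sketch on a toy model (a Poisson rate, not the tuberculosis transmission model; a flat prior and an approximately symmetric random-walk proposal are assumed, so the Metropolis ratio reduces to the distance check):

```python
import numpy as np

rng = np.random.default_rng(5)
obs = rng.poisson(4.0, 200)          # "observed" data, true rate 4.0
s_obs = obs.mean()                   # summary statistic
eps = 0.2                            # ABC tolerance

lam, chain = 3.0, []
for _ in range(5000):
    prop = abs(lam + rng.normal(0, 0.5))     # random-walk proposal, kept > 0
    sim = rng.poisson(prop, 200)             # simulate from the model
    if abs(sim.mean() - s_obs) < eps:        # ABC acceptance criterion
        lam = prop                           # likelihood never evaluated
    chain.append(lam)
lam_post = np.mean(chain[1000:])
```

The likelihood is never evaluated, only simulated from, which is what makes the approach attractive for transmission models whose likelihoods are intractable; the cost is the repeated simulation, motivating the computational-time ideas mentioned above.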