8 results for Least Square Method
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Short-term forecasting of electricity consumption has been studied for a long time, and the liberalization of the Nordic electricity markets has affected how such forecasts are made. The work began with a review of the relevant literature. The behaviour of electricity consumption at different times was examined, and the usefulness of temperature statistics for consumption forecasting was assessed. Consumption forecasts were produced hourly, with a forecast horizon of one week. The availability and quality of consumption and temperature data in the Nord Pool market area were investigated, since the properties of the input data affect hourly consumption forecasting. Two modelling approaches were developed for forecasting electricity consumption: a regression model and an autoregressive model with exogenous input (ARX). The model parameters were estimated with the least squares method. The results show that consumption and temperature data must be checked afterwards, because the quality of real-time input data is poor. Temperature affects consumption in winter but can be ignored during the summer season. The regression model is more stable than the ARX model, and the error term of the regression model can be modelled with a time series model.
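The core estimation step described above can be sketched in a few lines. This is an illustrative toy example on synthetic data, not the thesis's actual model or data: an ARX-style regression for hourly load with lagged consumption and temperature as inputs, fitted by least squares.

```python
import numpy as np

# Minimal sketch: fit y_t = a1*y_{t-1} + a2*y_{t-24} + b*T_t + c by least
# squares, where y is hourly consumption and T is temperature (synthetic).
rng = np.random.default_rng(0)
hours = np.arange(24 * 14)                        # two weeks of hourly data
temp = -5 + 3 * np.sin(2 * np.pi * hours / 24)    # synthetic temperature
load = (100 + 10 * np.sin(2 * np.pi * hours / 24)
        - 0.8 * temp + rng.normal(0, 1, hours.size))

# Regressor matrix: lag-1 load, lag-24 load (same hour yesterday),
# exogenous temperature, and a constant term.
y = load[24:]
X = np.column_stack([load[23:-1],
                     load[:-24],
                     temp[24:],
                     np.ones(y.size)])

theta, *_ = np.linalg.lstsq(X, y, rcond=None)     # least squares estimate
pred = X @ theta                                  # in-sample fit
```

With real data the fitted model would be applied recursively over the one-week forecast horizon, feeding predictions back in as the lagged inputs.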
Abstract:
Rosin is a natural product from pine forests and it is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids, and especially Ca- and Ca/Mg-resinates find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear increase in solution viscosity during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of a critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In kinetic studies, two main reaction steps were observed during the syntheses: first, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C.
Rosin oil is formed during the decarboxylation reaction step causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses in a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Due to a lower reaction temperature than in traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
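The multivariate-calibration idea behind the PLS monitoring models can be sketched as follows. This is a generic NIPALS PLS1 toy on synthetic "spectra", not the thesis's calibration data or software; the band shapes, concentrations, and the acid-value scaling are invented for illustration.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """NIPALS PLS1: return coefficients b so that X @ b approximates y.
    X and y are assumed mean-centered."""
    Xr, yr = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)        # weight vector
        t = Xr @ w                    # scores
        tt = t @ t
        p = Xr.T @ t / tt             # X loadings
        qk = (yr @ t) / tt            # y loading
        Xr = Xr - np.outer(t, p)      # deflate X and y
        yr = yr - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# Synthetic calibration set: spectra mixing two overlapping bands whose
# ratio tracks the analyte concentration.
rng = np.random.default_rng(0)
wl = np.linspace(0.0, 1.0, 60)
band1 = np.exp(-((wl - 0.3) / 0.05) ** 2)
band2 = np.exp(-((wl - 0.6) / 0.05) ** 2)
conc = np.linspace(0.1, 1.0, 25)
X = (np.outer(conc, band1) + np.outer(1 - conc, band2)
     + rng.normal(0, 0.005, (25, 60)))
y = 100 * conc                        # e.g. an acid value proxy

Xc, yc = X - X.mean(axis=0), y - y.mean()
b = pls1_fit(Xc, yc, n_comp=2)
pred = Xc @ b + y.mean()
```

In a process-monitoring setting, the fitted coefficient vector would be applied to each new MIR or NIR spectrum to track acid value or viscosity on-line.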
Abstract:
Numerous studies of stock markets have for many years reported observations of temporal regularities in stock prices that cannot be explained by market fundamentals. These so-called calendar anomalies typically occur at temporal turning points, such as the turn of the year, month, or week. Breaks in trading routines, such as public holidays, have also been observed to cause anomalies. The aim of this study was to examine whether the calendar anomalies observed in stock markets also appear in the Nordic electricity market. The anomalies studied were the day-of-the-week, month-of-the-year, turn-of-the-month, and holiday anomalies. In addition, the behaviour of returns around option expiration dates was examined. Instead of individual products, the analysis was carried out on yearly products constructed from seasonal and quarterly products. The tests used the least squares method, taking into account the effects of heteroskedasticity, autocorrelation, and multicollinearity. In addition to the calendar variables alone, the tests were run with regression models that included the spot price, the price of emission allowances, and/or precipitation forecasts as additional explanatory variables. The study period covered the years 1998-2006.
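The basic calendar-anomaly test can be sketched as an OLS regression of returns on weekday dummies. This is an illustrative example on synthetic returns with a built-in Friday effect, not the thesis data; it also omits the heteroskedasticity and autocorrelation corrections the study applies.

```python
import numpy as np

# Day-of-week anomaly test sketch: regress returns on weekday dummies,
# with Monday as the baseline category.
rng = np.random.default_rng(1)
n = 1000
days = np.arange(n) % 5                              # trading days Mon..Fri
returns = rng.normal(0, 1, n) + 0.5 * (days == 4)    # synthetic Friday effect

# Design matrix: intercept plus Tue..Fri indicator columns.
X = np.column_stack([np.ones(n)]
                    + [(days == d).astype(float) for d in range(1, 5)])
beta, *_ = np.linalg.lstsq(X, returns, rcond=None)
# beta[4] estimates the Friday return premium relative to Monday.
```

A significant dummy coefficient (judged against robust standard errors in the actual study) would indicate a calendar anomaly for that weekday.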
Abstract:
In this thesis, different parameters influencing the critical flux in protein ultrafiltration, as well as membrane fouling, were studied. Short reviews of proteins, cross-flow ultrafiltration, flux decline and critical flux, and the basic theory of partial least squares (PLS) analysis are given at the beginning. The experiments were mainly performed using dilute solutions of globular proteins, commercial polymeric membranes and laboratory-scale apparatuses. Fouling was studied by flux, streaming potential and FTIR-ATR measurements. The critical flux was evaluated by different kinds of stepwise procedures and by both constant-pressure and constant-flux methods. The critical flux was affected by the transmembrane pressure, flow velocity, protein concentration, membrane hydrophobicity, and protein and membrane charges. Generally, the lowest critical fluxes were obtained at the isoelectric point of the protein and the highest in the presence of electrostatic repulsion between the membrane surface and the protein molecules. In the laminar flow regime the critical flux increased with flow velocity, but not above this regime. An increase in concentration decreased the critical flux. Hydrophobic membranes showed fouling in all charge conditions and, furthermore, especially at the beginning of the experiment, even at very low transmembrane pressures. Fouling of these membranes was thought to be due to protein adsorption by hydrophobic interactions. The hydrophilic membranes used suffered more from reversible fouling and concentration polarisation than from irreversible fouling; they became fouled at higher transmembrane pressures because of pore blocking. In this thesis some new aspects of critical flux are presented that are important for the ultrafiltration and fractionation of proteins.
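The pressure-stepping evaluation of critical flux can be sketched numerically. The numbers below are synthetic, not the thesis measurements: below the critical flux, flux grows linearly with transmembrane pressure (TMP); fouling makes it fall below that line, and the critical flux is taken as the last value on the linear part.

```python
import numpy as np

# Stepwise constant-pressure sketch: TMP is increased in steps and the
# steady flux at each step is compared with the linear (subcritical) trend.
tmp = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0])  # bar
flux = np.where(tmp < 1.05, 50.0 * tmp,            # linear, subcritical
                50.0 + 20.0 * (tmp - 1.0))         # fouling-limited branch

slope = flux[0] / tmp[0]               # slope estimated from the first step
deviation = slope * tmp - flux         # shortfall from the linear prediction
critical_idx = int(np.argmax(deviation > 0.5))     # first clear departure
critical_flux = float(flux[critical_idx - 1])      # last subcritical flux
```

In a constant-flux experiment the same idea applies with the roles reversed: the flux is stepped and the critical flux is the last step at which the TMP stays stable over time.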
Abstract:
The present world energy production relies heavily on the combustion of solid fuels such as coal, peat, biomass, and municipal solid waste, whereas the share of renewable fuels is anticipated to increase in the future to mitigate climate change. In Finland, peat and wood are widely used for energy production. In any case, the combustion of solid fuels generates several types of thermal conversion residues, such as bottom ash, fly ash, and boiler slag. The predominant residue type is determined by the incineration technology applied, while its composition depends primarily on the composition of the fuels combusted. Extensive research has been conducted on the technical suitability of ash for multiple recycling methods. Most attention has been paid to the recycling of coal combustion residues, as coal is the primary solid fuel consumed globally. The recycling methods for coal residues include utilization in the cement industry, in concrete manufacturing, and in mine backfilling, to name a few. Biomass combustion residues have also been studied to some extent, with forest fertilization, road construction, and road stabilization being the predominant utilization options. Lastly, residues from municipal solid waste incineration have attracted more attention recently, following the growing number of waste incineration plants globally. The recycling methods for waste incineration residues are the most limited owing to their hazardous nature and varying composition, and include, among others, landfill construction, road construction, and mine backfilling. In this study, the environmental and economic aspects of multiple recycling options for thermal conversion residues generated within a case-study area were examined. The case-study area was South-East Finland. The environmental analysis was performed using an internationally recognized methodology, life cycle assessment. The economic assessment was conducted using a widely applied methodology, cost-benefit analysis.
Finally, the results of the analyses were combined to enable easier comparison of the recycling methods. The recycling methods included the use of ash in forest fertilization, road construction, road stabilization, and landfill construction. Ash landfilling was set as the baseline scenario. Quantitative data on the amounts of ash generated and its composition were obtained from companies, their environmental reports, technical reports, and other previously published literature. Overall, the amount of ash in the case-study area was 101 700 t. However, only 58 400 t of fly ash and 35 100 t of bottom ash and boiler slag were included in the study, owing to a lack of data on the leaching of heavy metals in some cases. The recycling methods were modelled according to previously published scientific studies. Overall, the results of the study indicated that ash utilization for the fertilization and neutralization of 17 600 ha of forest was the most economically beneficial method, increasing the net present value by 58% compared with ash landfilling. Regarding the environmental impact, the use of ash in the construction of 11 km of roads was the most attractive method, decreasing the environmental impact by 13% compared with ash landfilling. The least preferred method was the use of ash for landfill construction, since it enabled only an 11% increase in net present value while inducing an additional 1% of negative impact on the environment. Therefore, the following recycling route was proposed in the study. Where possible and legally acceptable, recycle fly and bottom ash for forest fertilization, which has the strictest requirements of all the studied methods. If the quality of fly ash is not suitable for forest fertilization, it should be utilized, first, in paved road construction and, second, in road stabilization. Bottom ash not suitable for forest fertilization, as well as boiler slag, should be used in landfill construction.
Landfilling should only be practiced when recycling by any of the methods is not possible due to legal requirements, or when there is not enough demand in the market. The current demand for ash and possible future changes were assessed in the study. Currently, the area of forest fertilized in the case-study area is only 451 ha, whereas about 17 600 ha of forest could be fertilized with the ash generated in the region. Given that the average forest fertilization figures in Finland are higher and that the area treated with fellings is about 40 000 ha, the amount of ash utilized in forest fertilization could be increased. Regarding road construction, no new projects launched by the Centre for Economic Development, Transport and the Environment in the case-study area were identified. A potential application can be found in the construction of private roads; however, no centralized data on such projects are available. The use of ash in the stabilization of forest roads is not expected to increase in the future, given the current downward trend in the length of forest roads built. Finally, the use of ash in landfill construction is not a promising option due to the decreasing number of landfills in operation in Finland.
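The economic side of the comparison rests on net present value. The following sketch shows the NPV mechanics with invented placeholder cash flows, discount rate, and horizon; none of these figures come from the thesis.

```python
# Cost-benefit sketch: compare recycling scenarios by the net present
# value (NPV) of their yearly net cash flows.
def npv(cash_flows, rate):
    """Discount yearly net cash flows; year 0 is the first element."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical scenarios over a 10-year horizon at a 5% discount rate:
landfilling = npv([-100] + [20] * 10, 0.05)        # baseline scenario
fertilization = npv([-120] + [35] * 10, 0.05)      # higher cost, higher benefit
relative_gain = (fertilization - landfilling) / abs(landfilling)
```

In the study, the same comparison is made per recycling route and then set against the life-cycle environmental results, so that a route can be ranked on both axes at once.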
Abstract:
The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints.
Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By adopting the developed solution method to solve the dyadic equations in direct polynomial form with two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved. The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design.
Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combining the two-precision-point formulation with optimisation (using mathematical programming techniques, or optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
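The exact-synthesis formulation above reduces to solving n (generally nonlinear) equations in n variables. The thesis attacks this with algebraic methods and polynomial continuation; as a much simpler stand-in, the sketch below solves a toy two-equation system with plain Newton iteration and a finite-difference Jacobian. The system itself is invented for illustration and is not a dyadic synthesis equation.

```python
import numpy as np

def newton(f, x0, tol=1e-10, max_iter=50):
    """Newton iteration for n equations in n unknowns,
    with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        h = 1e-7
        # Column i of J approximates the partial derivatives w.r.t. x_i.
        J = np.column_stack([(f(x + h * e) - fx) / h
                             for e in np.eye(x.size)])
        x = x - np.linalg.solve(J, fx)
    return x

# Toy system: intersection of a circle and a line (2 equations, 2 unknowns).
f = lambda v: np.array([v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]])
root = newton(f, [2.0, 0.5])          # converges toward (1, 1)
```

Unlike continuation or algebraic-geometry methods, a single Newton run finds only one solution per starting point, which is exactly the limitation the thesis's approach is designed to overcome.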
Abstract:
This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. This is done by considering an Ebola outbreak in the Democratic Republic of Congo, former Zaïre, in 1995 as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over the other existing methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in a deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate the parameters using the LSQ and MCMC methods. We sample the parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the chain sampled from the posterior, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using an extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to consider the impact of modelling errors. Moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added into the model.
The results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function are different. (2) The model error covariance matrix differs from zero, i.e. considerable stochasticity is introduced into the Ebola model. This accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances. Consequently, the model predictions show larger uncertainties, in accordance with the assumption of an incomplete model.
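The deterministic part of the pipeline can be sketched end to end. This toy runs on synthetic data, not the 1995 outbreak data: an SEIR model integrated with Euler steps, a Gaussian sum-of-squared-residuals likelihood, and a random-walk Metropolis sampler for the transmission rate beta. The values of sigma, gamma, the population size, and the observation noise are invented assumptions.

```python
import numpy as np

def seir_daily_I(beta, sigma=0.2, gamma=0.1, N=1000.0, I0=1.0,
                 days=100, steps_per_day=10):
    """Euler-integrate an SEIR model; return daily infectious counts I."""
    dt = 1.0 / steps_per_day
    S, E, I, R = N - I0, 0.0, I0, 0.0
    daily = []
    for step in range(days * steps_per_day):
        if step % steps_per_day == 0:
            daily.append(I)
        inf = beta * S * I / N
        S, E, I, R = (S - dt * inf,
                      E + dt * (inf - sigma * E),
                      I + dt * (sigma * E - gamma * I),
                      R + dt * gamma * I)
    return np.array(daily)

rng = np.random.default_rng(2)
true_beta = 0.3
data = seir_daily_I(true_beta) + rng.normal(0.0, 2.0, 100)  # noisy prevalence

def log_post(beta, noise_sd=2.0):     # flat prior on beta > 0
    if beta <= 0:
        return -np.inf
    resid = seir_daily_I(beta) - data
    return -0.5 * np.sum(resid ** 2) / noise_sd ** 2

# Random-walk Metropolis sampler over beta
beta, lp = 0.5, log_post(0.5)
chain = []
for _ in range(2000):
    prop = beta + rng.normal(0.0, 0.02)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    chain.append(beta)
posterior_mean = float(np.mean(chain[500:]))
```

The thesis samples several parameters jointly and, in the SDE case, replaces the residual-based likelihood with the EKF filtered likelihood; the Metropolis accept/reject step stays the same.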