12 results for Divergence time estimation
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Cloud computing is the next stage in the development of the information-oriented society in the field of information technology. Great attention is paid to cloud computing in general, but the lack of scientific attention to its individual components means that not all aspects are well examined. This thesis is an attempt to consider Platform as a Service (a technology for providing a development environment through the Internet) from several angles. Technical characteristics, costs, time, estimation of effectiveness, risks, applicable strategies, the migration process, advantages and disadvantages, and the future of the approach are examined to build an overall picture of cloud platforms. A literature study was used to examine Platform as a Service, the characteristics of existing cloud platforms were explored, and a model of a typical software development company was developed to create a scenario of migration to cloud technologies. The research showed that, despite their virtues in reducing costs and time, cloud platforms face some significant obstacles to adoption. Privacy, security and insufficient legislation prevent the concept from becoming widespread.
Abstract:
Gas-liquid mass transfer is an important issue in the design and operation of many chemical unit operations. Despite its importance, the evaluation of gas-liquid mass transfer is not straightforward due to the complex nature of the phenomena involved. In this thesis, gas-liquid mass transfer was evaluated in three different gas-liquid reactors in the traditional way, by measuring the volumetric mass transfer coefficient (kLa). The studied reactors were a bubble column with a T-junction two-phase nozzle for gas dispersion, an industrial-scale bubble column reactor for the oxidation of tetrahydroanthrahydroquinone, and a concurrent downflow structured bed. The main drawback of this approach is that the obtained correlations give only the average volumetric mass transfer coefficient, which depends on average conditions. Moreover, the obtained correlations are valid only for the studied geometry and for the chemical system used in the measurements. In principle, a more fundamental approach is to estimate the interfacial area available for mass transfer from bubble size distributions obtained by solving population balance equations. This approach has been used in this thesis by developing a population balance model for a bubble column together with phenomenological models for bubble breakage and coalescence. The parameters of the bubble breakage rate and coalescence rate models were estimated by comparing measured and calculated bubble sizes. The coalescence models always have at least one experimental parameter, because bubble coalescence depends on the liquid composition in a way that is difficult to evaluate from known physical properties. The coalescence properties of some model solutions were evaluated by measuring the time that a bubble rests at the free gas-liquid interface before coalescing (the so-called persistence time or rest time). The measured persistence times range from 10 ms up to 15 s, depending on the solution.
Coalescence was never found to be instantaneous: the bubble oscillates up and down at the interface at least a couple of times before coalescence takes place. The measured persistence times were compared to coalescence times obtained by parameter fitting using measured bubble size distributions in a bubble column and a bubble column population balance model. For short persistence times, the persistence and coalescence times are in good agreement. For longer persistence times, however, the persistence times are at least an order of magnitude longer than the corresponding coalescence times from parameter fitting. This discrepancy may be attributed to uncertainties in the estimation of energy dissipation rates, collision rates and mechanisms, and the contact times of the bubbles.
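As an aside, the standard dynamic gassing-in analysis behind a kLa measurement can be sketched in a few lines. This is a generic textbook illustration, not the correlations or population balance models of the thesis: it assumes the ideal first-order response c(t) = c_sat·(1 − exp(−kLa·t)) and recovers kLa by log-linearization (the function and variable names are our own).

```python
import numpy as np

def estimate_kla(t, c, c_sat):
    """Estimate the volumetric mass transfer coefficient kLa [1/s]
    from a dynamic gassing-in experiment, assuming the response
    c(t) = c_sat * (1 - exp(-kLa * t))."""
    # Linearize: ln(1 - c/c_sat) = -kLa * t, then fit the slope.
    y = np.log(1.0 - np.asarray(c, dtype=float) / c_sat)
    slope = np.polyfit(t, y, 1)[0]
    return -slope

# Synthetic dissolved-gas data with kLa = 0.05 1/s, c_sat = 8 mg/L
t = np.linspace(0.0, 60.0, 30)
c = 8.0 * (1.0 - np.exp(-0.05 * t))
print(estimate_kla(t, c, 8.0))
```

On noisy measurements the same fit works, but points close to saturation (where 1 − c/c_sat approaches zero) should be discarded before taking the logarithm.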
Abstract:
Identification of the order of an Autoregressive Moving Average (ARMA) model by the usual graphical method is subjective. Hence, there is a need to develop a technique that identifies the order without graphical investigation of the series autocorrelations. To avoid subjectivity, this thesis focuses on determining the order of the ARMA model using Reversible Jump Markov Chain Monte Carlo (RJMCMC). RJMCMC selects the model from a set of candidates based on goodness of fit, the standard deviations of the errors and the frequency of accepted data. Together with a deep analysis of the classical Box-Jenkins modeling methodology, the integration with MCMC algorithms is examined through parameter estimation and model fitting of ARMA models. This helps to verify how well the MCMC algorithms can treat ARMA models, by comparing the results with the graphical method. The MCMC approach produced better results than the classical time series approach.
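The RJMCMC machinery is too involved for a short example, but the core idea — choosing the order from the data rather than by eyeballing autocorrelation plots — can be illustrated with a plain Yule-Walker fit scored by an information criterion (BIC here). This NumPy sketch is our own illustration of automatic order selection, not the thesis's algorithm, and all names are hypothetical.

```python
import numpy as np

def yule_walker(x, p):
    """Fit AR(p) coefficients by solving the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([x[: n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - phi @ r[1:]          # innovation variance
    return phi, sigma2

def select_order(x, max_p=6):
    """Pick the AR order by BIC instead of inspecting the ACF/PACF."""
    n = len(x)
    bic = [n * np.log(yule_walker(x, p)[1]) + p * np.log(n)
           for p in range(1, max_p + 1)]
    return int(np.argmin(bic)) + 1

# Simulate an AR(2) process: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t
rng = np.random.default_rng(0)
e = rng.standard_normal(2000)
x = np.zeros(2000)
for t in range(2, 2000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]
print(select_order(x))
```

RJMCMC replaces the fixed criterion with trans-dimensional Metropolis-Hastings jumps between model orders, so the output is a posterior distribution over orders rather than a single point estimate.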
Abstract:
This study investigates futures market efficiency and optimal hedge ratio estimation. First, cointegration between spot and futures prices is studied using the Johansen method with two different model specifications. If the prices are found to be cointegrated, restrictions on the cointegrating vector and adjustment coefficients are imposed to test for unbiasedness, weak exogeneity and the prediction hypothesis. Second, optimal hedge ratios are estimated using static OLS and the time-varying DVEC and CCC models. In-sample and out-of-sample results for one, two and five periods ahead are reported. The futures used in the thesis are the RTS index, the EUR/RUB exchange rate and Brent oil, traded on the Futures and Options on RTS (FORTS) exchange. For the in-sample period, data were acquired from the start of trading of each futures contract (the RTS index from August 2005, the EUR/RUB exchange rate from March 2009 and Brent oil from October 2008) until the end of May 2011. The out-of-sample period covers the start of June 2011 until the end of December 2011. Our results indicate that all three spot-futures pairs are cointegrated. We found the RTS index futures to be an unbiased predictor of the spot price, mixed evidence for the exchange rate, and no support for unbiasedness of the Brent oil futures. Weak exogeneity results for all pairs indicated that the spot price leads the price discovery process. The prediction hypothesis, i.e. unbiasedness and weak exogeneity of futures, was rejected for all asset pairs. Variance reduction results varied between assets, in-sample in the range of 40-85 percent and out-of-sample in the range of 40-96 percent. Differences between the models were small, except for Brent oil, for which OLS clearly dominated. Out-of-sample results indicated exceptionally high variance reduction for the RTS index, approximately 95 percent.
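The static OLS hedge ratio mentioned above has a closed form — the slope of spot returns regressed on futures returns — and the variance reduction criterion is equally compact. Below is a minimal NumPy sketch on synthetic returns; the data and names are illustrative, not the FORTS series used in the thesis.

```python
import numpy as np

def ols_hedge_ratio(spot_ret, fut_ret):
    """Static OLS hedge ratio: slope of spot returns on futures returns,
    i.e. Cov(spot, futures) / Var(futures)."""
    cov = np.cov(spot_ret, fut_ret)
    return cov[0, 1] / cov[1, 1]

def variance_reduction(spot_ret, fut_ret, h):
    """Fraction of spot return variance removed by shorting h futures."""
    hedged = np.asarray(spot_ret) - h * np.asarray(fut_ret)
    return 1.0 - np.var(hedged) / np.var(spot_ret)

rng = np.random.default_rng(1)
f = rng.standard_normal(1000)                    # futures returns
s = 0.9 * f + 0.3 * rng.standard_normal(1000)    # correlated spot returns
h = ols_hedge_ratio(s, f)
print(h, variance_reduction(s, f, h))
```

The time-varying DVEC and CCC estimators replace the unconditional covariance with a conditional one updated each period, so the hedge ratio becomes a time series instead of a constant.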
Abstract:
Bone strain plays a major role as the activation signal for the bone (re)modeling process, which is vital for keeping bones healthy. Maintaining high bone mineral density reduces the chance of fracture in the event of an accident. Numerous studies have shown that bones can be strengthened with physical exercise, and several hypotheses have asserted that dynamic exercise has a stronger osteogenic (bone-producing) effect than static exercise. These previous studies are based on short-term empirical research, which provides the motivation for justifying the experimental results with a solid mathematical background. The computer simulation techniques utilized in this work allow non-invasive bone strain estimation during physical activity at any bone site within the human skeleton. All models presented in the study are three-dimensional and actuated by muscle models to replicate real conditions accurately. The objective of this work is to determine and present loading-induced bone strain values resulting from physical activity. It includes a comparison of the strain resulting from four different gym exercises (knee flexion, knee extension, leg press and squat) and walking with the results reported for walking and jogging obtained from in-vivo measurements described in the literature. The objective is realized primarily by carrying out flexible multibody dynamics computer simulations. The dissertation combines the knowledge of finite element analysis and multibody simulation with experimental data and information available in the medical literature. Measured subject-specific motion data were coupled with forward dynamics simulation to provide natural skeletal movement. Bone geometries were defined using a reverse engineering approach based on medical imaging techniques; both computed tomography and magnetic resonance imaging were utilized to explore modeling differences.
The predicted tibia bone strains during walking show good agreement with in-vivo studies found in the literature. Strain measurements were not available for the gym exercises, so those strain results could not be validated; however, the values seem reasonable when compared to the available walking and running in-vivo strain measurements. The results can be used in the design of exercise equipment aimed at strengthening the bones as well as the muscles during a workout. Clinical applications in post-fracture recovery exercise programs could also be targeted. In addition, the methodology introduced in this study can be applied to investigate the effect of weightlessness on astronauts, who often suffer bone loss after long periods spent in space.
Abstract:
State-of-the-art predictions of atmospheric states rely on large-scale numerical models of chaotic systems. This dissertation studies numerical methods for state and parameter estimation in such systems. The motivation comes from weather and climate models, and a methodological perspective is adopted. The dissertation comprises three parts: state estimation, parameter estimation, and chemical data assimilation with real atmospheric satellite data. In the state estimation part, a new filtering technique based on a combination of ensemble and variational Kalman filtering approaches is presented, tested and discussed. This new filter is developed for large-scale Kalman filtering applications. In the parameter estimation part, three different techniques for parameter estimation in chaotic systems are considered. The methods are studied using the parameterized Lorenz 95 system, which is a benchmark model for data assimilation. In addition, a dilemma related to the uniqueness of weather and climate model closure parameters is discussed. In the data-oriented part, data from the Global Ozone Monitoring by Occultation of Stars (GOMOS) satellite instrument are considered and an alternative algorithm to retrieve atmospheric parameters from the measurements is presented. The validation study presents the first global comparisons between two unique satellite-borne datasets of vertical profiles of nitrogen trioxide (NO3), retrieved using the GOMOS and Stratospheric Aerosol and Gas Experiment III (SAGE III) satellite instruments. The GOMOS NO3 observations are also considered in a chemical state estimation study in order to retrieve stratospheric temperature profiles. The main result of this dissertation is the formulation of likelihood calculations via Kalman filtering outputs. The concept has previously been used together with stochastic differential equations and in time series analysis.
In this work, the concept is applied to chaotic dynamical systems and used together with Markov chain Monte Carlo (MCMC) methods for statistical analysis. In particular, this methodology is advocated for use in numerical weather prediction (NWP) and climate model applications. In addition, the concept is shown to be useful in estimating filter-specific parameters, such as the parameters of the model error covariance matrix.
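The idea of evaluating a likelihood through Kalman filter outputs can be made concrete for the simplest possible case. The sketch below — a scalar linear-Gaussian model, nothing like the large-scale NWP setting of the dissertation, and with names of our own choosing — accumulates the log-likelihood from the filter's innovations, the prediction-error decomposition that an MCMC sampler could then use as its target.

```python
import numpy as np

def kalman_loglik(y, a, q, r, m0=0.0, p0=1.0):
    """Log-likelihood of the scalar linear-Gaussian state-space model
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)
        y_t = x_t + v_t,          v_t ~ N(0, r)
    via the prediction-error decomposition of the Kalman filter."""
    m, p, ll = m0, p0, 0.0
    for yt in y:
        # Predict
        m, p = a * m, a * a * p + q
        # Innovation and its variance
        v, s = yt - m, p + r
        ll += -0.5 * (np.log(2 * np.pi * s) + v * v / s)
        # Update
        k = p / s
        m, p = m + k * v, (1 - k) * p
    return ll

# Simulate data with a = 0.8, q = 1.0, r = 0.25
rng = np.random.default_rng(2)
x, ys = 0.0, []
for _ in range(500):
    x = 0.8 * x + rng.normal(scale=1.0)
    ys.append(x + rng.normal(scale=0.5))
# The likelihood should favor the true dynamics parameter a = 0.8
print(kalman_loglik(ys, 0.8, 1.0, 0.25) > kalman_loglik(ys, 0.0, 1.0, 0.25))
```

An MCMC chain over (a, q, r) calling this function at each step turns the filter into a full posterior sampler — conceptually the construction the dissertation advocates, here only in miniature.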
Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied; these avoid feature extraction and matching completely. The cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because the measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, the 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful in conditions where the Lambertian assumption holds and the 3D points have constant color regardless of viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated into real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction and in a live augmented reality application.
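As a toy illustration of the "raw pixels as measurements" idea, the sketch below recovers an integer image translation by directly minimizing a sum of squared pixel differences — a one-parameter stand-in for the 6-DoF photometric pose estimation described above. The names and data are our own, and real direct methods additionally handle perspective projection, lens distortion and sub-pixel warps.

```python
import numpy as np

def photometric_cost(ref, cur, shift):
    """Sum of squared raw-pixel differences for an integer x-shift:
    the sensor-space error that direct methods minimize."""
    a = ref[:, shift:]                      # reference image, cropped
    b = cur[:, : cur.shape[1] - shift]      # current frame, cropped
    d = a - b
    return float(np.sum(d * d))

rng = np.random.default_rng(3)
ref = rng.random((32, 48))
cur = np.roll(ref, -5, axis=1)   # "camera" moved: image shifted by 5 px
best = min(range(10), key=lambda s: photometric_cost(ref, cur, s))
print(best)   # the shift with the lowest photometric cost
```

In the full problem the single `shift` parameter becomes a rotation-translation pair, the crop becomes a projective warp, and the minimization is done with a non-linear least squares solver rather than brute-force search.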
Abstract:
More discussion is required on how, and which types of, biomass should be used to achieve a significant reduction in the carbon load released into the atmosphere in the short term. The energy sector is one of the largest greenhouse gas (GHG) emitters, and thus its role in climate change mitigation is important. Replacing fossil fuels with biomass has been a simple way to reduce carbon emissions because the carbon bound in biomass is considered carbon neutral. With this in mind, this thesis has the following objectives: (1) to study the significance of the different GHG emission sources related to energy production from peat and biomass, (2) to explore opportunities to develop more climate-friendly biomass energy options, and (3) to discuss the importance of the biogenic emissions of biomass systems. The discussion on biogenic carbon and other GHG emissions comprises four case studies, of which two consider peat utilization, one forest biomass and one cultivated biomasses. Various biomass types (peat, pine logs and forest residues, palm oil, rapeseed oil and jatropha oil) are used as examples to demonstrate the importance of biogenic carbon to life cycle GHG emissions. The biogenic carbon emissions of biomass are defined as the difference in the carbon stock between the utilization and non-utilization scenarios of the biomass. Forestry-drained peatlands were studied using the high emission values of the peatland types in question to discuss the emission reduction potential of the peatlands. The results are presented in terms of global warming potential (GWP) values. Based on the results, the climate impact of peat production can be reduced by selecting high-emission-level peatlands for peat production. The comparison of the two different types of forest biomass in integrated ethanol production in a pulp mill shows that the type of forest biomass affects the biogenic carbon emissions of biofuel production.
The assessment of cultivated biomasses demonstrates that several selections made in the production chain significantly affect the GHG emissions of biofuels. The emissions caused by biofuel can exceed the emissions from fossil-based fuels in the short term if biomass is in part consumed in the process itself and does not end up in the final product. Including biogenic carbon and other land use carbon emissions into the carbon footprint calculations of biofuel reveals the importance of the time frame and of the efficiency of biomass carbon content utilization. As regards the climate impact of biomass energy use, the net impact on carbon stocks (in organic matter of soils and biomass), compared to the impact of the replaced energy source, is the key issue. Promoting renewable biomass regardless of biogenic GHG emissions can increase GHG emissions in the short term and also possibly in the long term.
Abstract:
The aim of this work is to apply approximate Bayesian computation in combination with Markov chain Monte Carlo methods in order to estimate the parameters of tuberculosis transmission. The methods are applied to San Francisco data and the results are compared with the outcomes of previous works. Moreover, a methodological idea aimed at reducing computational time is also described. Although this approach is shown to work appropriately, further analysis is needed to understand and test its behaviour in different cases. Some suggestions for its further enhancement are described in the corresponding chapter.
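A stripped-down ABC example may help fix ideas. The thesis combines ABC with MCMC; the sketch below uses the simpler rejection variant on a toy Poisson model — the model, tolerance and names are illustrative, not the tuberculosis transmission model itself. The recipe is: draw parameters from the prior, simulate data, and keep the draws whose summary statistic lands within a tolerance of the observed one.

```python
import numpy as np

def abc_rejection(obs_summary, simulate, prior_draw, eps, n_draws=20000, rng=None):
    """Approximate Bayesian computation by rejection: keep prior draws
    whose simulated summary statistic is within eps of the data."""
    rng = rng or np.random.default_rng()
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - obs_summary) < eps:
            accepted.append(theta)
    return np.array(accepted)

rng = np.random.default_rng(4)
# Toy model: observed summary is the mean of Poisson counts with rate theta
true_theta = 3.0
obs = rng.poisson(true_theta, 50).mean()
post = abc_rejection(
    obs,
    simulate=lambda th, r: r.poisson(th, 50).mean(),
    prior_draw=lambda r: r.uniform(0.0, 10.0),
    eps=0.2,
    rng=rng,
)
print(len(post), post.mean())
```

ABC-MCMC replaces the independent prior draws with a Metropolis-Hastings random walk, which concentrates the simulations in the high-posterior region and is what makes the approach feasible for expensive epidemic simulators.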
Abstract:
Time series analysis can be categorized into three different approaches: the classical, Box-Jenkins and state space approaches. The classical approach forms the basis of the analysis; the Box-Jenkins approach is an improvement on the classical approach and deals with stationary time series. The state space approach allows time-varying factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above approaches. The Kalman filter and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out by repeated estimation runs integrated with MCMC, inspecting how well the different estimation methods can identify the optimal model parameters. Identification is performed in both a probabilistic and a general sense, and the results are compared in order to study and represent identifiability in a more informative way.
Abstract:
Since its discovery, chaos has been a very interesting and challenging topic of research, and many great minds have spent their entire lives trying to give rules to it. Nowadays, thanks to the research of the last century and the advent of computers, it is possible to predict chaotic phenomena of nature for a certain limited amount of time. The aim of this study is to present a recently developed method for the parameter estimation of chaotic dynamical system models via the correlation integral likelihood, to give some hints for a more optimized use of it, and to outline a possible industrial application. The main part of the study concerns two chaotic attractors whose general behaviour differs, in order to capture possible differences in the results. In the various simulations performed, the initial conditions were varied in a quite exhaustive way. The results show that, under certain conditions, the method works very well in all the cases. In particular, the most important aspect turned out to be creating the training set and the empirical likelihood carefully, since a lack of information in this part of the procedure leads to low-quality results.
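For reference, the basic building block of the method — the correlation integral — is straightforward to compute. The NumPy sketch below is our own toy illustration on a random point cloud, not the attractor data of the study: C(r) is the fraction of point pairs closer than radius r, and the vector of C(r) values over a range of radii is the summary statistic from which the empirical likelihood is built.

```python
import numpy as np

def correlation_sum(points, r):
    """Correlation integral C(r): fraction of distinct point pairs
    whose Euclidean distance is below r."""
    pts = np.asarray(points, dtype=float)
    # Pairwise distance matrix (fine for small clouds)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    n = len(pts)
    return (np.sum(d < r) - n) / (n * (n - 1))   # exclude self-pairs

rng = np.random.default_rng(5)
cloud = rng.random((200, 2))   # points on the unit square
print(correlation_sum(cloud, 0.1), correlation_sum(cloud, 0.5))
```

For an attractor, the points are states sampled along a long trajectory, and C(r) evaluated on a grid of radii gives a feature vector whose sampling distribution defines the likelihood used in the parameter estimation.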