889 results for Computational effort


Relevance:

60.00%

Publisher:

Abstract:

Leakage detection is an important issue in many chemical sensing applications. Leakage detection by thresholds suffers from important drawbacks when sensors have serious drifts or are affected by cross-sensitivities. Here we present an adaptive method based on a Dynamic Principal Component Analysis that models the relationships between the sensors in the array. Under normal conditions a certain variance distribution characterizes the sensor signals; however, in the presence of a new source of variance the PCA decomposition changes drastically. To prevent the influence of sensor drifts, the model is adaptive and is calculated recursively with minimum computational effort. The behavior of this technique is studied with synthetic signals and with real signals arising from oil vapor leakages in an air compressor. The results clearly demonstrate the efficiency of the proposed method.
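
As a minimal illustration of the idea (not the paper's exact recursive algorithm, and omitting the time-lagged embedding that makes the PCA "dynamic"), the sketch below updates the covariance with an exponential forgetting factor, so slow drifts are absorbed into the model while a sudden new variance source inflates the residual statistic and raises an alarm; all thresholds and data are hypothetical.

```python
# Illustrative adaptive PCA leak detector (not the paper's exact
# algorithm): recursive mean/covariance with forgetting factor `lam`,
# alarm when the residual (SPE) grows against the modelled noise floor.
import numpy as np

def detect(streams, n_pc=1, lam=0.99, threshold=10.0):
    """streams: (T, n_sensors) array; returns time indices of alarms."""
    T, n = streams.shape
    mean = streams[0].astype(float).copy()
    cov = np.eye(n)
    alarms = []
    for t in range(1, T):
        x = streams[t]
        mean = lam * mean + (1 - lam) * x             # recursive mean
        d = x - mean
        cov = lam * cov + (1 - lam) * np.outer(d, d)  # recursive covariance
        w, V = np.linalg.eigh(cov)                    # eigenvalues ascending
        P = V[:, -n_pc:]                              # principal subspace
        r = d - P @ (P.T @ d)                         # residual off the model
        spe = float(r @ r)                            # squared prediction error
        noise = float(w[:-n_pc].sum()) + 1e-12        # variance outside the model
        if spe / noise > threshold:
            alarms.append(t)
    return alarms

# correlated "normal" sensors, then a new variance source on sensor 2
rng = np.random.default_rng(0)
common = rng.normal(size=(500, 1))
X = common @ np.array([[1.0, 0.8, 0.6, 0.4]]) + 0.1 * rng.normal(size=(500, 4))
X[300:, 2] += rng.normal(size=200)                    # the simulated "leak"
print(detect(X)[:5])                                  # first alarms soon after t = 300
```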

Relevance:

60.00%

Publisher:

Abstract:

Preface

The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared to be particularly interesting: the Continuous Empirical Characteristic Function estimator (ECF) based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research.

This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class.

Hence, the next question is what jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data.

The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any other ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason for that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has this ability.

Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, due to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations on the computer power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
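
To make the estimator's principle concrete, here is a minimal sketch of a continuous ECF fit, with a plain Gaussian characteristic function standing in for the thesis's closed-form joint unconditional characteristic function; the weighting, grid, and synthetic data are illustrative choices, not the thesis's specification.

```python
# Sketch of a continuous ECF estimator: minimize a weighted L2 distance
# between the empirical characteristic function of the data and the
# model CF. A Gaussian CF stands in here for the closed-form joint
# unconditional CF of the SV jump-diffusion model derived in the thesis.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
returns = rng.normal(0.05, 0.2, size=5000)    # synthetic "log-returns"

grid = np.linspace(-10.0, 10.0, 201)          # integration grid over u
du = grid[1] - grid[0]
ecf = np.exp(1j * np.outer(grid, returns)).mean(axis=1)  # empirical CF, computed once
weight = np.exp(-grid ** 2)                   # weighting keeps the integral finite

def distance(theta):
    mu, sigma = theta
    mcf = np.exp(1j * grid * mu - 0.5 * (sigma * grid) ** 2)  # model CF
    return float(np.sum(np.abs(ecf - mcf) ** 2 * weight) * du)

fit = minimize(distance, x0=[0.0, 0.3], method="Nelder-Mead")
print(fit.x)                                  # approximately (0.05, 0.2)
```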

Relevance:

60.00%

Publisher:

Abstract:

The computer simulation of reaction dynamics has nowadays reached a remarkable degree of accuracy. Triatomic elementary reactions can be rigorously studied in great detail on a routine basis using a considerable variety of quantum dynamics computational tools available to the scientific community. In our contribution we compare the performance of two quantum scattering codes in the computation of reaction cross sections of a triatomic benchmark reaction, the gas-phase reaction Ne + H₂⁺ → NeH⁺ + H. The computational codes are selected as representative of time-dependent (Real Wave Packet [ ]) and time-independent (ABC [ ]) methodologies. The main conclusion to be drawn from our study is that the two strategies are, to a great extent, not competing but complementary. While time-dependent calculations have the advantage in the energy range that can be covered in a single simulation, time-independent approaches offer much more detailed information from each single-energy calculation. Further details, such as the calculation of reactivity at very low collision energies or the computational effort required to account for the Coriolis couplings, are analyzed in this paper.
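
The trade-off between the two methodologies can be illustrated with a toy one-dimensional time-dependent calculation. The sketch below is a generic split-operator propagation on a model barrier, not the Chebyshev-based Real Wave Packet scheme or the ABC code: a single propagation of a broad wave packet samples a whole band of collision energies at once, whereas a time-independent calculation resolves one energy per solve.

```python
# Toy 1D split-operator wave packet on a model barrier (illustrative
# only; the Real Wave Packet and ABC codes treat the full triatomic
# problem with far more sophisticated machinery).
import numpy as np

hbar = m = 1.0
x = np.linspace(-50.0, 50.0, 2048)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=dx)
V = 0.5 / np.cosh(x) ** 2                       # stand-in potential barrier
dt, steps = 0.05, 400

x0, k0, w = -25.0, 2.0, 2.0                     # packet: position, momentum, width
psi = np.exp(-((x - x0) / (2.0 * w)) ** 2 + 1j * k0 * x)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

expV = np.exp(-0.5j * V * dt / hbar)            # half-step potential propagator
expT = np.exp(-0.5j * hbar * k ** 2 * dt / m)   # full kinetic step in k-space

for _ in range(steps):                          # symmetric split-operator scheme
    psi = expV * psi
    psi = np.fft.ifft(expT * np.fft.fft(psi))
    psi = expV * psi

transmitted = np.sum(np.abs(psi[x > 5.0]) ** 2) * dx
print(f"transmitted fraction: {transmitted:.3f}")   # mostly above-barrier here
```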

Relevance:

60.00%

Publisher:

Abstract:

In this paper, a new two-dimensional shear deformable beam element based on the absolute nodal coordinate formulation is proposed. The nonlinear elastic forces of the beam element are obtained using a continuum mechanics approach, without employing a local element coordinate system. In this study, linear polynomials are used to interpolate both the transverse and longitudinal components of the displacement. This differs from other beam elements based on the absolute nodal coordinate formulation, in which cubic polynomials are used in the longitudinal direction. The defects that accompany the phenomenon known as shear locking are avoided by adopting selective integration within the numerical integration procedure. The proposed element is verified using several numerical examples, and the results are compared to analytical solutions and to the results for an existing shear deformable beam element. It is shown that, by using the proposed element, accurate linear and nonlinear static deformations, as well as realistic dynamic behavior, can be achieved with smaller computational effort than by using existing shear deformable two-dimensional beam elements.
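
The remedy for locking can be illustrated on a simpler relative of the proposed element. The sketch below uses a classical two-node Timoshenko beam, not the ANCF element itself: the bending term is integrated exactly while the shear term is evaluated at a single Gauss point, which removes the spurious shear stiffness that would otherwise lock a thin beam.

```python
# Selective reduced integration on a classical 2-node Timoshenko beam
# (a simpler relative of the proposed ANCF element): exact integration
# for bending, a single Gauss point for shear to avoid shear locking.
import numpy as np

def timoshenko_element(L, EI, kGA):
    # element dofs: [w1, theta1, w2, theta2]
    Kb = (EI / L) * np.array([[0, 0, 0, 0],
                              [0, 1, 0, -1],
                              [0, 0, 0, 0],
                              [0, -1, 0, 1]], float)  # curvature = theta'
    # shear strain gamma = w' - theta, sampled at the midpoint Gauss
    # point, where the linear shape functions give theta = (th1 + th2)/2
    Bs = np.array([-1.0 / L, -0.5, 1.0 / L, -0.5])
    Ks = kGA * L * np.outer(Bs, Bs)                   # one-point quadrature
    return Kb + Ks

# thin cantilever with a transverse tip load, assembled from n elements
n, L, EI, kGA, P = 20, 1.0, 1.0, 1.0e4, 1.0
Le = L / n
K = np.zeros((2 * (n + 1), 2 * (n + 1)))
for e in range(n):
    idx = np.arange(2 * e, 2 * e + 4)
    K[np.ix_(idx, idx)] += timoshenko_element(Le, EI, kGA)
f = np.zeros(2 * (n + 1))
f[-2] = P                                             # tip load on w of last node
u = np.linalg.solve(K[2:, 2:], f[2:])                 # clamp w, theta at node 0
print(u[-2], P * L**3 / (3 * EI) + P * L / kGA)       # FE tip vs analytic, close
```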


Relevance:

60.00%

Publisher:

Abstract:

We present an algorithm for the computation of reducible invariant tori of discrete dynamical systems that is suitable for tori of dimension larger than one. It is based on a quadratically convergent scheme that approximates, at the same time, the Fourier series of the torus, its Floquet transformation, and its Floquet matrix. The Floquet matrix describes the linearization of the dynamics around the torus and, hence, its linear stability. The algorithm presents a high degree of parallelism, and the computational effort grows linearly with the number of Fourier modes needed to represent the solution. For these reasons it is a very good option for computing quasi-periodic solutions with several basic frequencies. The paper includes some examples (flows) to show the efficiency of the method on a parallel computer. In these flows we compute invariant tori of dimension up to 5 by taking suitable sections.

Relevance:

60.00%

Publisher:

Abstract:

Methane combustion was studied using the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science, since the computational effort can be notably reduced. In the inversion procedure studied here, rate constants are obtained from [CO] concentration data. However, when the inherent experimental errors in the chemical concentrations are considered, an ill-conditioned inverse problem must be solved, for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen due to its numerical stability and robustness. The proposed methodology was compared against Simplex and Levenberg-Marquardt, the methods most commonly used for such optimization problems.
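
A toy version of the inverse problem looks as follows, with a single first-order step standing in for the two-step Westbrook and Dryer mechanism and Levenberg-Marquardt, one of the comparison methods, doing the fitting; the paper's recurrent-network approach targets precisely the cases where such fits become ill-conditioned under measurement noise.

```python
# Toy inverse problem: recover a rate constant from noisy [CO] data.
# A single first-order step stands in for the Westbrook-Dryer mechanism;
# Levenberg-Marquardt (one of the compared methods) does the fitting.
import numpy as np
from scipy.optimize import least_squares

def co_profile(k, t, ch4_0=1.0):
    # [CO](t) for a pseudo-first-order CH4 -> CO step with rate k
    return ch4_0 * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 5.0, 50)
rng = np.random.default_rng(1)
data = co_profile(0.8, t) + rng.normal(0.0, 0.02, t.size)  # noisy "measurements"

fit = least_squares(lambda k: co_profile(k[0], t) - data, x0=[0.1], method="lm")
print(fit.x)  # close to the true value 0.8
```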

Relevance:

60.00%

Publisher:

Abstract:

The objective of this work is to demonstrate the efficient use of Principal Components Analysis (PCA) as a method to pre-process the original multivariate data, that is, to rewrite the data as a new matrix of principal components sorted by their accumulated variance. An Artificial Neural Network (ANN) with the backpropagation algorithm is trained using this pre-processed data set, which represents 90.02% of the accumulated variance of the original data, as input. The training goal is to model dissolved oxygen using information from other physical and chemical parameters. The water samples used in the experiments were gathered from the Paraíba do Sul River in São Paulo State, Brazil. The smallest Mean Square Error (MSE) is used to compare the results of the different architectures and choose the best one. This method allowed a reduction of more than 20% in the input data, which directly shortened the time and computational effort of the ANN training.
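
A minimal sketch of this pipeline, with random stand-in data in place of the river measurements, might look as follows; scikit-learn's PCA accepts a variance fraction directly, mirroring the 90% criterion.

```python
# Sketch of the pipeline: PCA keeps the leading components (here chosen
# to explain 90% of the variance, mirroring the paper's 90.02%) and the
# reduced matrix feeds a small backpropagation network. The data are
# random stand-ins for the river water-quality measurements.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 4))                 # a few underlying factors
X = latent @ rng.normal(size=(4, 10)) + 0.1 * rng.normal(size=(300, 10))
y = latent[:, 0] + 0.1 * rng.normal(size=300)      # "dissolved oxygen"

pca = PCA(n_components=0.90)                       # fewest PCs covering 90% variance
Xr = pca.fit_transform(X)
print(X.shape[1], "->", Xr.shape[1], "inputs")     # the dimensionality reduction

X_tr, X_te, y_tr, y_te = train_test_split(Xr, y, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
ann.fit(X_tr, y_tr)
mse = np.mean((ann.predict(X_te) - y_te) ** 2)     # MSE used to rank architectures
print("test MSE:", mse)
```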

Relevance:

60.00%

Publisher:

Abstract:

Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already a standard method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research looks back at the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts to form a new approach for efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after a dynamic simulation. Numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
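
The core of the proposed simplification can be sketched in a few lines: once the modal stress shapes of a detail have been precomputed (e.g. from a sub-model solve), recovering the stress history during or after the simulation reduces to a small matrix product. All shapes and histories below are made-up placeholders.

```python
# Sketch of modal stress recovery: with the body's deformation expressed
# in modal coordinates q(t), the stress at a hot spot of a detail is a
# precomputed modal stress matrix times q(t) -- cheap enough to evaluate
# during the simulation. All shapes and histories are placeholders.
import numpy as np

n_modes, n_steps = 6, 1000
rng = np.random.default_rng(0)

# modal stress components at the detail (e.g. from a sub-model FE solve,
# one column per retained mode): [s_xx, s_yy, s_xy] in MPa per unit q
S_modal = rng.normal(size=(3, n_modes))

# modal coordinate history delivered by the multibody simulation
t = np.linspace(0.0, 10.0, n_steps)
q = np.sin(np.outer(rng.uniform(1.0, 5.0, n_modes), t))  # (n_modes, n_steps)

sigma = S_modal @ q     # (3, n_steps) stress history at the detail
print(sigma.shape)      # ready for rainflow counting / fatigue assessment
```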

Relevance:

60.00%

Publisher:

Abstract:

This work presents the implementation and comparison of three different techniques of three-dimensional computer vision, as follows:

• Stereo vision - correlation between two 2D images;
• Sensor fusion - use of different sensors: a 2D camera and a 1D ultrasound sensor;
• Structured light.

The computer vision techniques presented here took the following characteristics into consideration:

• Computational effort (elapsed time to obtain the 3D information);
• Influence of environmental conditions (noise due to non-uniform lighting, excessive lighting and shadows);
• The cost of the infrastructure for each technique;
• Analysis of uncertainties, precision and accuracy.

Matlab, version 5.1, was chosen for the implementation of the three techniques due to the simplicity of its commands, programming and debugging. Besides, this software is well known and widely used by the academic community, allowing the results of this work to be obtained and verified. Examples of three-dimensional vision applied to robotic assembly ("pick-and-place") tasks are presented.
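
As an illustration of the first technique, a bare-bones stereo correlation might look as follows (sketched in Python rather than the Matlab 5.1 used in the work; window size and disparity range are arbitrary): for each pixel of the left image, a window is slid along the same row of the right image, and the shift with the highest normalized cross-correlation is taken as the disparity.

```python
# Bare-bones stereo correlation: for each left-image pixel, slide a
# window along the same row of the right image and keep the shift with
# the highest normalized cross-correlation. Depth is inversely
# proportional to this disparity. Parameters are arbitrary choices.
import numpy as np

def disparity_map(left, right, window=5, max_disp=16):
    h, w = left.shape
    r = window // 2
    disp = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            pn = (patch - patch.mean()) / (patch.std() + 1e-9)
            best_score, best_d = -np.inf, 0
            for d in range(max_disp):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cn = (cand - cand.mean()) / (cand.std() + 1e-9)
                score = float((pn * cn).sum())     # normalized cross-correlation
                if score > best_score:
                    best_score, best_d = score, d
            disp[y, x] = best_d
    return disp

# synthetic pair: the right view is the left view shifted by 4 pixels
rng = np.random.default_rng(0)
left = rng.random((40, 60))
right = np.roll(left, -4, axis=1)
print(np.median(disparity_map(left, right)[10:-10, 25:-12]))  # about 4.0
```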

Relevance:

60.00%

Publisher:

Abstract:

In the present paper we concentrate on solving sequences of nonsymmetric linear systems with block structure arising from compressible flow problems. We attempt to improve the solution process by sharing part of the computational effort throughout the sequence. This is achieved by applying a cheap updating technique for preconditioners, which we adapted for our applications. Tested on three benchmark compressible flow problems, the strategy speeds up the entire computation, with the acceleration being particularly pronounced in phases of unsteady behavior.
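
Stripped of the paper's specific update formula, the baseline idea can be sketched with SciPy: factor an incomplete-LU preconditioner once, reuse it for the following systems of the sequence, and rebuild it only when the GMRES iteration count degrades. The freeze-and-refactor heuristic below is a placeholder for the authors' cheaper updating technique.

```python
# Freeze-and-refactor sketch of sharing preconditioner work across a
# sequence of systems (a placeholder for the paper's cheaper updating
# technique): one ILU factorization is reused until GMRES slows down.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def solve_sequence(matrices, rhss, max_growth=3.0):
    ilu = base_iters = None
    solutions = []
    for A, b in zip(matrices, rhss):
        if ilu is None:
            ilu = spla.spilu(A.tocsc())           # expensive setup, shared
        M = spla.LinearOperator(A.shape, matvec=ilu.solve)
        iters = 0
        def count(_):
            nonlocal iters
            iters += 1
        x, info = spla.gmres(A, b, M=M, callback=count)
        if base_iters is None:
            base_iters = max(iters, 1)
        elif iters > max_growth * base_iters:     # preconditioner went stale
            ilu = base_iters = None               # refactorize for the next system
        solutions.append(x)
    return solutions

# a slowly drifting convection-diffusion-like sequence of systems
n = 200
A0 = sp.diags([-1.0, 2.2, -1.2], [-1, 0, 1], shape=(n, n), format="csr")
seq = [A0 + 0.01 * j * sp.eye(n) for j in range(5)]
xs = solve_sequence(seq, [np.ones(n)] * 5)
print(len(xs), xs[0][:3])
```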

Relevance:

60.00%

Publisher:

Abstract:

Using the independent particle model as our basis, we present a scheme to reduce the complexity and computational effort of calculating inclusive probabilities in many-electron collision systems. As an example we present an application to K-K charge transfer in collisions of 2.6 MeV Ne⁹⁺ on Ne. We are able to give impact-parameter-dependent probabilities for many-particle states which could lead to KLL Auger electrons after the collision, and we compare them with experimental values.

Relevance:

60.00%

Publisher:

Abstract:

An automatic nonlinear predictive model-construction algorithm is introduced, based on forward regression and the predicted-residual-sums-of-squares (PRESS) statistic. The proposed algorithm rests on the fundamental concept of evaluating a model's generalisation capability through cross-validation. This is achieved by using the PRESS statistic as a cost function to optimise the model structure. In particular, the algorithm is developed with the aim of computational efficiency, so that the effort that would usually be extensive in computing the PRESS statistic is reduced or minimised. The computation of PRESS is simplified by avoiding a matrix inversion through the orthogonalisation procedure inherent in forward regression, and is reduced further by the introduction of a forward-recursive formula. Owing to the properties of the PRESS statistic, the proposed algorithm achieves a fully automated procedure without resorting to any other validation data set for iterative model evaluation. Numerical examples are used to demonstrate the efficacy of the algorithm.
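
For linear-in-the-parameters models, the identity behind cheap PRESS computation is that the leave-one-out residual equals eᵢ/(1 − hᵢᵢ), with hᵢᵢ the leverage from the hat matrix; the sketch below applies it directly. The paper goes further, embedding a recursive form of this inside forward orthogonal regression so that no matrix inversion is needed at all.

```python
# PRESS without n separate refits: for a linear-in-the-parameters model
# the leave-one-out residual is e_i / (1 - h_ii), with h_ii the leverage
# from the hat matrix H = X (X'X)^{-1} X'.
import numpy as np

def press(X, y):
    H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix (solve, no explicit inverse)
    e = y - H @ y                           # ordinary residuals
    h = np.diag(H)                          # leverages
    return np.sum((e / (1.0 - h)) ** 2)     # sum of squared LOO residuals

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = X @ np.array([1.0, -2.0, 0.0, 0.5]) + 0.1 * rng.normal(size=100)
print(press(X, y))   # the cross-validation cost used to rank candidate models
```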

Relevance:

60.00%

Publisher:

Abstract:

Methods are developed for predicting rate coefficients for reactions of initiators of tropospheric oxidation with unsaturated compounds that are abundant in the atmosphere; prognostic tools of this kind are essential for atmospheric chemists and modellers. To pursue the aim of exploring such tools, the kinetics of the reactions of NO₃, OH and O₃ with a series of alkenes are examined for correlations relating the logarithms of the rate coefficients to the energies of the highest occupied molecular orbitals (HOMOs) of the alkenes. A comparison of the values predicted by the correlations with experimental data (where the latter exist) allowed us to assess the reliability of our method. We used a series of theoretical methods to calculate the HOMO energies, and found that higher computational effort improves the agreement of the predicted rate coefficients with experimental values, especially for reactions of NO₃ with alkenes that possess vinylic halogen substituents. As a consequence, it is expedient to suggest new correlations to replace those presented by us and others that were based on the lower level of theory. We propose the following correlations for the reactions of NO₃, OH and O₃ with alkenes: ln(k(NO₃)/cm³ molecule⁻¹ s⁻¹) = 6.40(E_HOMO/eV) + 31.69, ln(k(OH)/cm³ molecule⁻¹ s⁻¹) = 1.21(E_HOMO/eV) − 12.34 and ln(k(O₃)/cm³ molecule⁻¹ s⁻¹) = 3.28(E_HOMO/eV) − 6.78. These new correlations have been developed using the larger experimental data sets now available, and the impact of the extended data on the quality of the correlations is examined in the paper. Atmospheric lifetimes have been calculated from both experimental and estimated rate coefficients to provide an overview of removal efficiencies for different classes of alkenes with respect to oxidative processes initiated by NO₃, OH and O₃. A figure is presented to show the spatial scales over which alkenes may survive transport in competition with attack by NO₃, OH and O₃. Removal by NO₃ or OH is always more important than removal by O₃, and reactions with NO₃ dominate for scales up to a few hundred metres.
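
Applied directly, the correlations above convert a computed HOMO energy into rate coefficients, and lifetimes follow as τ = 1/(k[oxidant]). In the sketch below, the HOMO energy and the oxidant concentrations are illustrative assumed values, not numbers from the paper.

```python
# The abstract's correlations, applied: estimate k for each oxidant from
# an alkene's HOMO energy, then lifetimes tau = 1/(k [oxidant]). The
# HOMO energy and oxidant concentrations are assumed illustrative values.
import numpy as np

def rate_coefficients(e_homo_eV):
    k_no3 = np.exp(6.40 * e_homo_eV + 31.69)   # cm3 molecule-1 s-1
    k_oh = np.exp(1.21 * e_homo_eV - 12.34)
    k_o3 = np.exp(3.28 * e_homo_eV - 6.78)
    return k_no3, k_oh, k_o3

e_homo = -9.5                                  # eV, illustrative alkene value
conc = {"NO3": 5e8, "OH": 1e6, "O3": 7e11}     # molecule cm-3, assumed typical
for name, k in zip(conc, rate_coefficients(e_homo)):
    tau = 1.0 / (k * conc[name])
    print(f"{name}: k = {k:.2e} cm3 molecule-1 s-1, lifetime = {tau:.1e} s")
```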

Relevance:

60.00%

Publisher:

Abstract:

The paper describes a self-tuning adaptive PID controller suitable for use in the control of robotic manipulators. The scheme employs a simple recursive estimator which reduces the computational effort to an acceptable level for many applications in robotics.
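
A generic recursive least-squares (RLS) estimator of the kind used in self-tuning control shows why the computational effort stays low: each new sample updates the parameter estimate through a rank-one correction, with no matrix inversion inside the loop. The plant model and the closing retuning remark are placeholders, not the paper's specific scheme.

```python
# Generic recursive least-squares (RLS) estimator of the kind used in
# self-tuning control: every sample updates the plant parameter estimate
# with a rank-one correction, so nothing is inverted during operation.
import numpy as np

class RLS:
    def __init__(self, n, lam=0.98):
        self.theta = np.zeros(n)          # parameter estimate
        self.P = 1e3 * np.eye(n)          # covariance of the estimate
        self.lam = lam                    # forgetting factor

    def update(self, phi, y):
        Pphi = self.P @ phi
        gain = Pphi / (self.lam + phi @ Pphi)
        self.theta = self.theta + gain * (y - phi @ self.theta)
        self.P = (self.P - np.outer(gain, Pphi)) / self.lam
        return self.theta

# identify y[t] = a*y[t-1] + b*u[t-1] for a joint's discretized dynamics
rng = np.random.default_rng(0)
a_true, b_true = 0.9, 0.5
est, y, u = RLS(2), 0.0, rng.normal(size=500)
for t in range(1, 500):
    y_new = a_true * y + b_true * u[t - 1] + 0.01 * rng.normal()
    a, b = est.update(np.array([y, u[t - 1]]), y_new)
    y = y_new
print(a, b)   # near (0.9, 0.5); the PID gains can then be retuned from (a, b)
```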