972 results for Laplace-Metropolis estimator
Abstract:
Seismic wave-field numerical modeling and wave-equation-based seismic migration imaging have become useful and indeed indispensable tools for imaging complex geological objects. An important task in numerical modeling is the approximation of the matrix exponential in wave-field extrapolation. For matrix exponentials of small size, the square-root operator inside the exponential can be approximated with various splitting algorithms. Splitting algorithms are usually applied over the order or the dimension of the one-way wave equation to reduce the complexity of the problem. In this paper, we derive an approximate equation for 2-D Helmholtz operator inversion using a multi-way splitting operation. Analysis of the Gauss integral and of the optimized partial-fraction coefficients shows that splitting algorithms can accumulate dispersion in steep-dip imaging. A high-order symplectic Padé approximation can mitigate this problem; however, approximating the square-root operator in the exponential by a splitting algorithm cannot eliminate dispersion during one-way wave-field migration imaging. We therefore attempt an exact treatment through an eigenfunction expansion of the matrix. The Fast Fourier Transform (FFT) method is selected for its low computational cost. An eighth-order splitting of the Laplace matrix is performed to obtain an assemblage of small matrices via the FFT method. With the introduction of Lie group and symplectic methods into seismic wave-field extrapolation, accurate approximation of the matrix exponential based on Lie group and symplectic methods has become an active research area. To solve the matrix exponential approximation problem, the Second-kind Coordinates (SKC) method and the Generalized Polar Decomposition (GPD) method from Lie group theory are natural choices. The SKC method uses a generalized Strang-splitting algorithm, while the GPD method uses polar-type and symmetric polar-type splitting algorithms. Compared with the Padé approximation, both methods require less computation, and both preserve the Lie group structure. We consider the SKC and GPD methods promising and attractive for research and practice.
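As a minimal illustration of the two approximation routes discussed above (a Padé-based matrix exponential versus an exact eigenfunction expansion), the following Python sketch compares them on a 1-D discrete Laplacian. The matrix, its size, and the step dt are assumptions for illustration, not quantities taken from the paper.

```python
import numpy as np
from scipy.linalg import expm, eigh

# A 1-D discrete Laplacian stands in for the Helmholtz-type operator; the
# matrix, its size, and the step dt are illustrative assumptions.
n = 64
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
dt = 0.1

# Pade route: scipy's expm uses a scaling-and-squaring Pade scheme.
E_pade = expm(dt * L)

# Eigenfunction-expansion route: L = V diag(w) V^T for symmetric L, so
# exp(dt*L) = V diag(exp(dt*w)) V^T. For this Laplacian the eigenbasis is
# the discrete sine basis, which is why FFT-type methods are applicable.
w, V = eigh(L)
E_eig = (V * np.exp(dt * w)) @ V.T

print(np.max(np.abs(E_pade - E_eig)))  # the two routes agree to rounding
```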
Abstract:
In this paper, we have presented a combined preconditioner derived from k = ±(-1)^(1/2) circulant extensions of real symmetric positive-definite Toeplitz matrices, proved its efficiency and stability, and shown that the combined preconditioner makes error analysis straightforward and removes the boundary effect. The paper has also presented methods for the direct and inverse computation of real Toeplitz systems of equations and discussed the corresponding problems, in particular replacing the Toeplitz matrices with the combined preconditioners in the analysis. The spectral analysis and the boundary effect are also discussed. Finally, as an application in geophysics, the paper discusses the square root of a real matrix that arises from the Laplace algorithm.
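The sketch below shows the general circulant-preconditioner idea behind this line of work: a Strang-type circulant approximation of a symmetric positive-definite Toeplitz matrix, applied inside preconditioned conjugate gradients via the FFT. The Toeplitz system is invented for illustration, and this is a generic Strang preconditioner, not the paper's combined preconditioner.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

# Illustrative SPD Toeplitz system; this first column is an assumption.
n = 128
col = 1.0 / (1.0 + np.arange(n)) ** 2
col[0] = 2.0
T = toeplitz(col)
b = np.ones(n)

# Strang-type circulant preconditioner: copy the central diagonals of T
# into a circulant matrix, which the FFT diagonalizes.
c = col.copy()
c[n // 2 + 1:] = col[1:n // 2][::-1]   # wrap the diagonals around
lam = np.fft.fft(c).real               # eigenvalues of the circulant

def apply_Cinv(x):
    # Solve C y = x in O(n log n) via FFT diagonalization.
    return np.fft.ifft(np.fft.fft(x) / lam).real

M = LinearOperator((n, n), matvec=apply_Cinv)
x, info = cg(T, b, M=M)
print(info, np.linalg.norm(T @ x - b))  # 0 means CG converged
```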
Abstract:
The study of 3D visualization technology for engineering geology and its application to engineering is a cross-disciplinary subject spanning geosciences, computer science, software, and information technology. As an important part of the secondary theme of the National Basic Research Program of China (973 Program) entitled Study of Multi-Scale Structure and Occurrence Environment of Complicated Geological Engineering Mass (No. 2002CB412701), this dissertation studies key problems of 3D geological modeling, the integrated application of multi-format geological data, effective modeling methods for complex, approximately layered geological masses, and applications of 3D virtual-reality information management technology. The main research findings are listed below:
1. An integrated application method for multi-format geological data is proposed, which solves the integrated use of drill holes, engineering-geology plan drawings, sectional drawings, cutting drawings, and exploratory trench sketches. Its application provides as much fundamental data as possible for 3D geological modeling.
2. A 3D surface construction method combining Laplace interpolation points with the original points is proposed, eliminating the deformation of the 3D model and the crossing error between the upper and lower surfaces of the model that result from a lack of data when constructing a laminated stratum.
3. A 3D modeling method for approximately layered geological masses is proposed, which solves the problems that general modeling methods based on sections, or on points and faces, encounter when constructing terrain and concordant strata.
4. The 3D geological model of dam site VII of the Xiangjiaba hydropower station has been constructed. Applications of the 3D geological model to the automatic plotting of sectional drawings and to the conversion of numerical analysis models are also discussed.
5. A 3D virtual-reality information integration platform is developed, whose most important characteristic is that it is a software platform providing 3D virtual-reality fly-through and multi-format data management simultaneously. The platform can therefore load different 3D models to satisfy different engineering demands.
6. The relics of Aigong Cave of the Longyou Stone Caves are digitally restored, and the reinforcement plans for caves 1# and 2# in Phoenix Hill are also presented. This intuitive presentation provides decision makers and designers with a very good working environment.
7. The basic framework and specific functions of a 3D geological information system are proposed.
The main research findings of the dissertation have been successfully applied to several important engineering projects, including the Xiangjiaba hydropower station, a military airport, and the Longyou Stone Caves.
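A minimal sketch of Laplace (harmonic) interpolation, the idea behind the surface construction method in finding 2 above: known elevations are held fixed while all remaining grid nodes relax to the discrete Laplace equation. The grid size, iteration count, boundary handling, and sample elevations are all assumptions made for illustration.

```python
import numpy as np

# Known elevations at scattered grid cells are held fixed; every other node
# relaxes to the discrete Laplace equation (each node the mean of its four
# neighbours). np.roll gives periodic boundaries to keep the sketch short.
n = 50
z = np.zeros((n, n))
known = np.zeros((n, n), dtype=bool)
rng = np.random.default_rng(0)
for _ in range(30):
    i, j = rng.integers(1, n - 1, size=2)
    z[i, j] = rng.uniform(100.0, 200.0)   # hypothetical borehole elevation
    known[i, j] = True

for _ in range(2000):                      # Jacobi relaxation
    z_new = 0.25 * (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                    np.roll(z, 1, 1) + np.roll(z, -1, 1))
    z_new[known] = z[known]                # interpolation points stay fixed
    z = z_new

print(z.min(), z.max())
```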
Abstract:
Weighting is the process of quantifying the relative importance of different items, and this paper examines how estimators distribute weights across items. Using a designed psychological experiment, we profile the weights assigned to differently ranked items under three weighting methods, with five topics as experimental materials. We controlled factors such as familiarity with the topic and the number of items. We then used curve estimation to identify the exact profiles and ANOVA to test for topic effects. The curve-estimation results show that the weight profiles of differently ranked items differ across the three weighting methods: for point allocation (PA) the weighting profile follows a logarithmic curve, while for DR it is linear. The ANOVA results show topic effects on the weighting profiles; however, when PA is applied to the more important items, the topics appear to have little effect, and the methods have relatively more effect than the topics. The results also indicate that, for the PA method, there is no significant difference between the weight profiles under fixed-sum and open-sum conditions. These results contribute to basic research in management science and social science.
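A toy version of the curve-estimation step, assuming invented mean weights by rank: it fits a logarithmic profile (the shape reported for PA) and a linear profile (the shape reported for DR) and compares their squared errors. The numbers below are not the experiment's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical mean weights by rank for seven items (invented numbers).
ranks = np.arange(1, 8)
weights = np.array([0.32, 0.20, 0.15, 0.11, 0.09, 0.07, 0.06])

def log_profile(r, a, b):
    return a - b * np.log(r)            # logarithmic decay, as for PA

def lin_profile(r, a, b):
    return a - b * r                    # linear decay, as for DR

for f in (log_profile, lin_profile):
    p, _ = curve_fit(f, ranks, weights)
    sse = np.sum((weights - f(ranks, *p)) ** 2)
    print(f.__name__, p, sse)           # lower SSE = better-fitting shape
```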
Abstract:
Nonlinear multivariate statistical techniques on fast computers offer the potential to capture more of the dynamics of the high-dimensional, noisy systems underlying financial markets than traditional models, while making fewer restrictive assumptions. This thesis presents a collection of practical techniques to address important estimation and confidence issues for Radial Basis Function networks arising from such a data-driven approach, including efficient methods for parameter estimation and pruning, a pointwise prediction error estimator, and a methodology for controlling the "data mining" problem. Novel applications in the finance area are described, including customized, adaptive option pricing and stock price prediction.
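A minimal Radial Basis Function network sketch: Gaussian bases with fixed centers and a least-squares linear readout. The centers, width, and toy sine target are all assumptions; the thesis's estimation, pruning, and confidence methods are not reproduced here.

```python
import numpy as np

# Toy regression problem: noisy sine wave (invented stand-in data).
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(x[:, 0]) + 0.1 * rng.standard_normal(200)

centers = np.linspace(-3, 3, 10).reshape(-1, 1)  # fixed RBF centers
width = 0.8                                      # assumed kernel width

def design(x):
    d2 = (x - centers.T) ** 2           # squared distance to each center
    return np.exp(-d2 / (2 * width ** 2))

Phi = design(x)                         # n_samples x n_centers basis matrix
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear readout by least squares
y_hat = design(x) @ w
print(np.mean((y - y_hat) ** 2))        # training mean squared error
```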
Abstract:
Q. Shen and R. Jensen, 'Approximation-based feature selection and application for algae population estimation,' Applied Intelligence, vol. 28, no. 2, pp. 167-181, 2008. Sponsorship: EPSRC RONO: EP/E058388/1
Abstract:
In this paper, we study the efficacy of genetic algorithms in the context of combinatorial optimization. In particular, we isolate the effects of cross-over, treated as the central component of genetic search. We show that for problems of nontrivial size and difficulty, the contribution of cross-over search is marginal, both synergistically when run in conjunction with mutation and selection, and when run with selection alone, the reference point being the search procedure consisting of just mutation and selection. The latter can be viewed as another manifestation of the Metropolis process. Considering the high computational cost of maintaining a population to facilitate cross-over search, its marginal benefit renders genetic search inferior to its singleton-population counterpart, the Metropolis process, and by extension, simulated annealing. This is further compounded by the fact that many problems arising in practice may inherently require a large number of state transitions for a near-optimal solution to be found, making genetic search infeasible given the high cost of computing a single iteration in the enlarged state space.
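A minimal sketch of the mutation-plus-selection baseline the paper refers to: a Metropolis process with an annealing schedule on a toy bit-string objective. The problem, temperature schedule, and iteration budget are invented for illustration.

```python
import numpy as np

# Toy combinatorial objective: Hamming distance to a hidden target string.
rng = np.random.default_rng(0)
n = 60
target = rng.integers(0, 2, n)

def cost(s):
    return int(np.sum(s != target))

s = rng.integers(0, 2, n)               # singleton "population"
T = 2.0
for step in range(20000):
    cand = s.copy()
    cand[rng.integers(n)] ^= 1          # single-bit mutation
    delta = cost(cand) - cost(s)
    if delta <= 0 or rng.random() < np.exp(-delta / T):
        s = cand                        # Metropolis acceptance rule
    T = max(0.01, T * 0.9997)           # geometric annealing schedule
print(cost(s))                          # near 0 on this easy instance
```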
Abstract:
The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. We ask a fundamental question: what is the basic power of TCP to predict network state, including wireless error conditions? The goal is to improve or readily exploit this predictive power to enable TCP (or variants) to perform well in generalized network settings. To that end, we use Maximum Likelihood Ratio tests to evaluate TCP as a detector/estimator. We quantify how well network state can be estimated, given network responses such as distributions of packet delays or TCP throughput that are conditioned on the type of packet loss. Using our model-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient detector can be built; that distributions of network loads can provide effective means for estimating packet loss type; and that packet delay is a better signal of network state than short-term throughput. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and by penalties on incorrect estimation.
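A bare-bones likelihood-ratio detector in the spirit described above: classify a loss as congestion-induced or wireless from an observed delay, given two conditional delay distributions. The Gaussian densities, their parameters, and the unit threshold are assumptions, not the paper's fitted models.

```python
import numpy as np
from scipy.stats import norm

# Assumed conditional delay distributions (ms) for the two loss types.
congestion = norm(loc=120.0, scale=15.0)   # delays when the path is congested
wireless = norm(loc=80.0, scale=10.0)      # delays under wireless errors

def classify(delay, threshold=1.0):
    # Likelihood ratio test: compare the two conditional densities.
    lr = congestion.pdf(delay) / wireless.pdf(delay)
    return "congestion" if lr > threshold else "wireless"

for d in (75.0, 95.0, 130.0):
    print(d, classify(d))
```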
Abstract:
A mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. A novel extended Kalman filter formulation is used in estimating the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities; e.g., running, walking, roller blading, and cycling. Experiments with synthetic data sets are used to evaluate stability of the trajectory estimator with respect to noise.
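A bare-bones extended Kalman filter sketch in the spirit of the mid-level stage described above: a constant-velocity 3D state observed through a perspective projection (u, v) = (X/Z, Y/Z), which fixes the trajectory only up to scale. The noise levels, initial state, and simulated track are assumptions; this is not the paper's formulation.

```python
import numpy as np

dt = 1.0
F = np.eye(6); F[:3, 3:] = dt * np.eye(3)        # constant-velocity model
Q = 1e-4 * np.eye(6)                             # assumed process noise
R = 1e-4 * np.eye(2)                             # assumed measurement noise

def h(x):                                        # perspective projection
    X, Y, Z = x[:3]
    return np.array([X / Z, Y / Z])

def H_jac(x):                                    # Jacobian of h at x
    X, Y, Z = x[:3]
    H = np.zeros((2, 6))
    H[0, 0] = 1 / Z; H[0, 2] = -X / Z**2
    H[1, 1] = 1 / Z; H[1, 2] = -Y / Z**2
    return H

x = np.array([0.1, 0.1, 5.0, 0.02, 0.0, 0.05])   # initial state guess
P = np.eye(6)
rng = np.random.default_rng(0)
truth = np.array([0.0, 0.2, 4.0, 0.03, 0.01, 0.04])
for _ in range(50):
    truth = F @ truth
    z = h(truth) + 0.01 * rng.standard_normal(2)
    x = F @ x; P = F @ P @ F.T + Q               # EKF predict
    H = H_jac(x)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - h(x))                       # EKF update
    P = (np.eye(6) - K @ H) @ P
# Positions agree with the truth only up to an overall scale factor.
print(x[:3], truth[:3])
```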
Abstract:
A combined 2D, 3D approach is presented that allows for robust tracking of moving people and recognition of actions. It is assumed that the system observes multiple moving objects via a single, uncalibrated video camera. Low-level features are often insufficient for detection, segmentation, and tracking of non-rigid moving objects. Therefore, an improved mechanism is proposed that integrates low-level (image processing), mid-level (recursive 3D trajectory estimation), and high-level (action recognition) processes. A novel extended Kalman filter formulation is used in estimating the relative 3D motion trajectories up to a scale factor. The recursive estimation process provides a prediction and error measure that is exploited in higher-level stages of action recognition. Conversely, higher-level mechanisms provide feedback that allows the system to reliably segment and maintain the tracking of moving objects before, during, and after occlusion. The 3D trajectory, occlusion, and segmentation information are utilized in extracting stabilized views of the moving object that are then used as input to action recognition modules. Trajectory-guided recognition (TGR) is proposed as a new and efficient method for adaptive classification of action. The TGR approach is demonstrated using "motion history images" that are then recognized via a mixture-of-Gaussians classifier. The system was tested in recognizing various dynamic human outdoor activities: running, walking, roller blading, and cycling. Experiments with real and synthetic data sets are used to evaluate stability of the trajectory estimator with respect to noise.
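A toy illustration of the "motion history images" used for recognition in both papers above: each pixel stores a decaying timestamp of the most recent motion, so newer movement appears brighter. The frame data, motion threshold, and decay constant tau are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 10.0                                       # assumed history length
h, w = 32, 32
mhi = np.zeros((h, w))
prev = rng.random((h, w))                        # fake first frame
for t in range(1, 20):
    frame = np.roll(prev, 1, axis=1)             # fake horizontal motion
    motion = np.abs(frame - prev) > 0.3          # thresholded frame difference
    # Moving pixels reset to tau; all others decay by one step.
    mhi = np.where(motion, tau, np.maximum(mhi - 1.0, 0.0))
    prev = frame
print(mhi.max(), (mhi > 0).mean())               # brightness and coverage
```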
Abstract:
For two multinormal populations with equal covariance matrices, the likelihood ratio discriminant function, an alternative allocation rule to the sample linear discriminant function when n1 ≠ n2, is studied analytically. Under the assumption of a known covariance matrix, its distribution is derived and the expectations of its actual and apparent error rates are evaluated and compared with those of the sample linear discriminant function. This comparison indicates that the likelihood ratio allocation rule is robust to unequal sample sizes. The quadratic discriminant function is studied, its distribution reviewed, and the evaluation of its probabilities of misclassification discussed. For known covariance matrices, the distribution of the sample quadratic discriminant function is derived. When the known covariance matrices are proportional, exact expressions for the expectations of its actual and apparent error rates are obtained and evaluated. The effectiveness of the sample linear discriminant function for this case is also considered. Estimation of the true log-odds for two multinormal populations with equal or unequal covariance matrices is studied. The estimative, Bayesian predictive, and kernel methods are compared by evaluating their biases and mean square errors, and some algebraic expressions for these quantities are derived. With equal covariance matrices the predictive method is preferable; the source of this superiority is investigated by considering its performance at various levels of fixed true log-odds. It is also shown that the predictive method is sensitive to n1 ≠ n2. For unequal but proportional covariance matrices the unbiased estimative method is preferred. Product normal kernel density estimates are used to give a kernel estimator of the true log-odds, and the effect of correlation in the variables with product kernels is considered. With equal covariance matrices the kernel and parametric estimators are compared by simulation. For moderately correlated variables and large dimensions the product kernel method is a good estimator of the true log-odds.
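A small sketch of the two classical allocation rules compared above, written as estimated log-odds with equal priors assumed: the sample linear discriminant with a pooled covariance, and the quadratic discriminant with separate covariances. The simulated data and the deliberately unequal sample sizes n1 != n2 are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2, p = 40, 20, 3                            # deliberately n1 != n2
X1 = rng.multivariate_normal(np.zeros(p), np.eye(p), n1)
X2 = rng.multivariate_normal(np.ones(p), 2 * np.eye(p), n2)

m1, m2 = X1.mean(0), X2.mean(0)
S1, S2 = np.cov(X1.T), np.cov(X2.T)
Sp = ((n1 - 1) * S1 + (n2 - 1) * S2) / (n1 + n2 - 2)  # pooled covariance

def lda_log_odds(x):
    # Sample linear discriminant (equal covariances, equal priors).
    w = np.linalg.solve(Sp, m1 - m2)
    return w @ (x - 0.5 * (m1 + m2))

def qda_log_odds(x):
    # Sample quadratic discriminant (separate covariances, equal priors).
    def q(x, m, S):
        d = x - m
        return -0.5 * (d @ np.linalg.solve(S, d) + np.log(np.linalg.det(S)))
    return q(x, m1, S1) - q(x, m2, S2)

x = np.array([0.2, 0.1, 0.3])
print(lda_log_odds(x), qda_log_odds(x))
```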
Abstract:
The class of all Exponential-Polynomial-Trigonometric (EPT) functions is classical and equal to the Euler-d'Alembert class of solutions of linear differential equations with constant coefficients. The class of non-negative EPT functions defined on [0, ∞) was discussed in Hanzon and Holland (2010), of which EPT probability density functions are an important subclass. EPT functions can be represented as c e^(Ax) b, where A is a square matrix, b a column vector, and c a row vector; the triple (A, b, c) is the minimal realization of the EPT function. The minimal triple is unique only up to a basis transformation. Here the class of 2-EPT probability density functions on R is defined and shown to be closed under a variety of operations. The class is also generalised to include mixtures with a point mass at zero. This class coincides with the class of probability density functions with rational characteristic functions. It is illustrated that the Variance Gamma density is a 2-EPT density under a parameter restriction. A discrete 2-EPT process is a process whose increments are stochastically independent 2-EPT random variables. It is shown that the distributions of the minimum and maximum of such a process are EPT densities mixed with a point mass at zero. The Laplace transforms of these distributions correspond to the discrete-time Wiener-Hopf factors of the discrete-time 2-EPT process. A distribution of daily log-returns, observed over the period 1931-2011 from a prominent US index, is approximated with a 2-EPT density function. Without the non-negativity condition, it is illustrated how this problem is transformed into a discrete-time rational approximation problem; the rational approximation software RARL2 is used to carry out this approximation, and the non-negativity constraint is then imposed via a convex optimisation procedure after the unconstrained approximation. Sufficient and necessary conditions are derived to characterise infinitely divisible EPT and 2-EPT functions. Infinitely divisible 2-EPT density functions generate 2-EPT Lévy processes, and an asset's log returns can be modelled as a 2-EPT Lévy process. Closed-form pricing formulae are then derived for European Options with specific times to maturity, and formulae for discretely monitored Lookback Options and 2-Period Bermudan Options are also provided. Certain Greeks, including Delta and Gamma, of these options are computed analytically. MATLAB scripts are provided for calculations involving 2-EPT functions, and numerical option pricing examples illustrate the effectiveness of the 2-EPT approach to financial modelling.
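A minimal check of the (A, b, c) representation f(x) = c e^(Ax) b, in Python rather than the thesis's MATLAB: the 1x1 triple below realizes the exponential density lambda*exp(-lambda*x), a trivially simple EPT density chosen for illustration.

```python
import numpy as np
from scipy.linalg import expm

# Minimal realization of the exponential density with rate lambda:
# A = [-lambda], b = [1], c = [lambda], so c exp(Ax) b = lambda e^{-lambda x}.
lam = 2.0
A = np.array([[-lam]])
b = np.array([[1.0]])
c = np.array([[lam]])

def ept(x):
    return (c @ expm(A * x) @ b).item()

xs = np.linspace(0.0, 3.0, 7)
print([round(ept(x), 4) for x in xs])
print([round(lam * np.exp(-lam * x), 4) for x in xs])  # agreement check
```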
Abstract:
A popular way to account for unobserved heterogeneity is to assume that the data are drawn from a finite mixture distribution. A barrier to using finite mixture models is that parameters that could previously be estimated in stages must now be estimated jointly: using mixture distributions destroys any additive separability of the log-likelihood function. We show, however, that an extension of the EM algorithm reintroduces additive separability, thus allowing one to estimate parameters sequentially during each maximization step. In establishing this result, we develop a broad class of estimators for mixture models. Returning to the likelihood problem, we show that, relative to full information maximum likelihood, our sequential estimator can generate large computational savings with little loss of efficiency.
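A plain EM sketch for a two-component Gaussian mixture, illustrating the point above: conditional on the E-step responsibilities, the weighted log-likelihood separates additively across components, so each component's parameters can be updated on its own in the M-step. The data and starting values are invented; this is textbook EM, not the paper's extended algorithm.

```python
import numpy as np
from scipy.stats import norm

# Simulated data from a two-component Gaussian mixture.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1.5, 200)])

pi, mu, sd = 0.5, np.array([-1.0, 1.0]), np.array([1.0, 1.0])
for _ in range(100):
    # E-step: posterior responsibilities for component 0.
    d0 = pi * norm.pdf(x, mu[0], sd[0])
    d1 = (1 - pi) * norm.pdf(x, mu[1], sd[1])
    r = d0 / (d0 + d1)
    # M-step: the weighted likelihood is additively separable, so each
    # component's parameters are updated sequentially and independently.
    for k, w in enumerate((r, 1 - r)):
        mu[k] = np.sum(w * x) / np.sum(w)
        sd[k] = np.sqrt(np.sum(w * (x - mu[k]) ** 2) / np.sum(w))
    pi = r.mean()
print(pi, mu, sd)
```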
Abstract:
We exploit the distributional information contained in high-frequency intraday data in constructing a simple conditional moment estimator for stochastic volatility diffusions. The estimator is based on the analytical solutions of the first two conditional moments for the latent integrated volatility, the realization of which is effectively approximated by the sum of the squared high-frequency increments of the process. Our simulation evidence indicates that the resulting GMM estimator is highly reliable and accurate. Our empirical implementation based on high-frequency five-minute foreign exchange returns suggests the presence of multiple latent stochastic volatility factors and possible jumps.
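A sketch of the basic building block above: realized volatility as the sum of squared high-frequency (five-minute) returns, which approximates the latent integrated volatility entering the moment conditions. The returns are simulated placeholders, not the paper's FX data.

```python
import numpy as np

rng = np.random.default_rng(0)
per_day = 288                                   # five-minute returns per day
days = 20
sigma = 0.1 / np.sqrt(per_day)                  # assumed constant spot vol
r = sigma * rng.standard_normal((days, per_day))

# Realized variance: sum of squared intraday increments, one value per day.
realized_var = np.sum(r ** 2, axis=1)
print(realized_var.mean(), realized_var.std())
```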
Abstract:
Empirical modeling of high-frequency currency market data reveals substantial evidence for nonnormality, stochastic volatility, and other nonlinearities. This paper investigates whether an equilibrium monetary model can account for nonlinearities in weekly data. The model incorporates time-nonseparable preferences and a transaction cost technology. Simulated sample paths are generated using Marcet's parameterized expectations procedure. The paper also develops a new method for estimation of structural economic models. The method forces the model to match (under a GMM criterion) the score function of a nonparametric estimate of the conditional density of observed data. The estimation uses weekly U.S.-German currency market data, 1975-90.
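A toy version of the score-matching estimation idea described above, with a simple Gaussian auxiliary density standing in for the nonparametric conditional density: fit the auxiliary model to observed data, then choose structural parameters so that the auxiliary score, averaged over data simulated from the model, is driven to zero under a GMM criterion. All data, the Student-t "structural model", and the parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
obs = rng.standard_t(df=5, size=800) * 0.02     # placeholder "observed" data

mu_hat, var_hat = obs.mean(), obs.var()         # auxiliary (Gaussian) MLE

def score(y):
    # Gaussian score wrt (mu, var), evaluated at the auxiliary MLE; by
    # construction it averages to zero over the observed data.
    s_mu = (y - mu_hat) / var_hat
    s_var = -0.5 / var_hat + 0.5 * (y - mu_hat) ** 2 / var_hat ** 2
    return np.stack([s_mu, s_var])

def simulate(theta, n=5000):
    r = np.random.default_rng(1)                # common random numbers
    df = max(theta[1], 2.1)                     # keep the df parameter valid
    return r.standard_t(df=df, size=n) * theta[0]  # hypothetical model

def criterion(theta):
    g = score(simulate(theta)).mean(axis=1)     # average score on simulations
    return g @ g                                # GMM quadratic form

res = minimize(criterion, x0=[0.03, 8.0], method="Nelder-Mead")
print(res.x)
```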