953 results for Interval discrete log problem
Abstract:
We obtain the exact analytical expression, up to a quadrature, for the mean exit time, T(x,v), of a free inertial process driven by Gaussian white noise out of a region (0,L) in space. We obtain a completely explicit expression for T(x,0) and discuss the dependence of T(x,v) on the size L of the region. We develop a new method that may be used to solve other exit-time problems.
Exact solution to the exit-time problem for an undamped free particle driven by Gaussian white noise
Abstract:
In a recent paper [Phys. Rev. Lett. 75, 189 (1995)] we have presented the exact analytical expression for the mean exit time, T(x,v), of a free inertial process driven by Gaussian white noise out of a region (0,L) in space. In this paper we give a detailed account of the method employed and present results on asymptotic properties and averages of T(x,v).
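For readers unfamiliar with the setting, the standard starting point for such exit-time problems is the backward Kolmogorov equation for the mean exit time. A minimal sketch, assuming the free inertial dynamics dx/dt = v, dv/dt = xi(t) with white-noise intensity D (a notational convention that may differ from the one used in the papers above), is:

\[
  v\,\frac{\partial T(x,v)}{\partial x} + \frac{D}{2}\,\frac{\partial^{2} T(x,v)}{\partial v^{2}} = -1,
  \qquad
  T(0,v)=0 \ \ (v<0), \quad T(L,v)=0 \ \ (v>0),
\]

where the boundary conditions express that the particle can only leave through x = 0 with negative velocity and through x = L with positive velocity.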
Abstract:
Preface

The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The difficulty comes from the variance process, which is not observable. There are several estimation methodologies that deal with latent variables, and one appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research.

This thesis consists of three parts, each written as an independent, self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one.

The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, and of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class.

Hence, the next question is which jump process to use to model returns of the S&P500. Within the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, at least three distributions of the jump size are currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if the S&P500 index is to be modelled by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data.

The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, there is no way to be sure that our parameter estimates coincide with the true parameters of the models. The conclusion of the second chapter provides one more reason to carry out that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter shows that our estimator indeed has this ability.

Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question immediately arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, due to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As it turns out, the preference for one or the other depends on the model to be estimated, so the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
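To make the idea behind characteristic-function estimation concrete, here is a rough one-dimensional sketch of an ECF-type estimator: it minimizes a weighted integrated distance between the empirical characteristic function of observed returns and a model characteristic function. This is a simplified illustration, not the thesis's joint bi-dimensional procedure, and model_cf below is a hypothetical stand-in (a Gaussian characteristic function) for the closed-form unconditional characteristic function derived in the first chapter.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

def empirical_cf(u, returns):
    # Empirical characteristic function of the observed returns at frequency u.
    return np.mean(np.exp(1j * u * returns))

def model_cf(u, theta):
    # Hypothetical stand-in for the model's closed-form unconditional
    # characteristic function; here a Gaussian CF, purely for illustration.
    mu, sigma2 = theta
    return np.exp(1j * u * mu - 0.5 * sigma2 * u**2)

def ecf_objective(theta, returns, decay=1.0):
    # Integrated weighted squared distance between empirical and model CFs;
    # the exponential weight keeps the integral finite.
    def integrand(u):
        diff = empirical_cf(u, returns) - model_cf(u, theta)
        return (diff.real**2 + diff.imag**2) * np.exp(-decay * u**2)
    value, _ = quad(integrand, -10.0, 10.0, limit=200)
    return value

# Usage: fit the illustrative model to a placeholder simulated return series.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=2000)   # placeholder data, not real prices
result = minimize(ecf_objective, x0=[0.0, 0.0001], args=(returns,),
                  method="Nelder-Mead")
print(result.x)

In the thesis's setting the scalar frequency u would be replaced by a frequency vector matched to the joint (bi- or three-dimensional) unconditional characteristic function, which is exactly where the computational burden discussed above comes from.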
Abstract:
In the Hamiltonian formulation of predictive relativistic systems, the canonical coordinates cannot be the physical positions. The relation between them is given by the individuality differential equations. However, due to the arbitrariness in the choice of Cauchy data, there is a wide family of solutions to these equations. In general, those solutions do not satisfy the condition of constancy of the moduli of the velocities, and therefore the world lines have to be reparametrized in terms of the proper time. We derive here a condition on the Cauchy data for the individuality equations which ensures the constancy of the velocity moduli and makes the reparametrization unnecessary.
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOCs). Given a multiclass problem, the ECOC technique designs a code word for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code word. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions, and nonlinear classifiers also fail to handle some types of decision surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when class overlap or the distribution of the training objects conceals the decision boundaries for the base classifier. The results are even more significant when the training set is sufficiently large.
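As a rough sketch of the basic ECOC machinery described above (a hand-picked code matrix with Hamming-distance decoding; the subclass splitting and problem-dependent design proposed in the paper are not reproduced here, and the toy data are hypothetical):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative code matrix for 3 classes and 3 binary problems (entries +1/-1);
# row i is the code word of class i, column j defines binary problem j.
M = np.array([[+1, +1, -1],
              [+1, -1, +1],
              [-1, +1, +1]])

def ecoc_fit(X, y, code_matrix):
    # Train one binary base classifier per column of the code matrix.
    classifiers = []
    for j in range(code_matrix.shape[1]):
        binary_labels = code_matrix[y, j]          # +1/-1 label of each sample
        classifiers.append(LogisticRegression(max_iter=1000).fit(X, binary_labels))
    return classifiers

def ecoc_predict(X, classifiers, code_matrix):
    # Collect the binary predictions and assign each sample to the class
    # whose code word is closest in Hamming distance.
    preds = np.column_stack([clf.predict(X) for clf in classifiers])
    distances = np.array([[np.sum(p != code) for code in code_matrix] for p in preds])
    return distances.argmin(axis=1)

# Usage with toy, well-separated Gaussian clusters (placeholder data).
rng = np.random.default_rng(0)
centers = [(0.0, 0.0), (3.0, 0.0), (1.5, 3.0)]
X = np.vstack([rng.normal(c, 0.4, size=(30, 2)) for c in centers])
y = np.repeat([0, 1, 2], 30)
classifiers = ecoc_fit(X, y, M)
print((ecoc_predict(X, classifiers, M) == y).mean())

The paper's point is precisely that when a column's two groups of classes are not separable by the chosen base classifier, splitting classes into subclasses and redesigning the code can recover the lost performance.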
Abstract:
The pion spectrum for charged and neutral pions is investigated in pure neutron matter by letting the pions interact with a neutron Fermi sea in a self-consistent scheme that simultaneously renormalizes the mesons, considered the source of the interaction, and the nucleons. The possibility of obtaining different kinds of pion condensates is investigated, with the result that they cannot be reached even for values of the spin-spin correlation parameter g' far below the commonly accepted range.
Abstract:
Background: Publications from the International Breast Screening Network (IBSN) have shown that varying definitions create hurdles for the comparison of screening performance. Interval breast cancer rates are particularly affected. Objective: To test whether variations in the definition of the interval cancer rate (ICR) affect international comparisons of ICR, specifically a comparison of ICR in Norway and North Carolina (NC). Methods: An interval cancer (IC) was defined as a cancer diagnosed following a negative screening mammogram within a defined follow-up period. ICR was calculated for women aged 50-69 at subsequent screening in Norway and NC during the period 1996-2002. ICR was defined using three different denominators (negative screens, negative final assessments and all screens) and three different numerators (DCIS, invasive cancer and all cancers). ICR was then calculated with two methods: 1) the number of ICs divided by the number of screens, and 2) the number of ICs divided by the number of women-years at risk of IC. Results: There were no differences in ICR depending on the definition used. In the 1-12 month follow-up period, ICRs (based on the number of screens) were 0.53, 0.54, and 0.54 for Norway, and 1.20, 1.25, and 1.17 for NC, for negative screens, negative final assessments and all screens, respectively. The same trend was seen for the 13-24 and 1-24 month follow-up periods. Using women-years for the analysis did not change the trend. ICR was higher in NC than in Norway under all definitions and in all follow-up periods, regardless of the calculation method. Conclusion: The ICR within or between Norway and NC did not differ by the definition used. ICRs were higher in NC than in Norway. There are many potential explanations for the difference.
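As a small worked illustration of the two calculation methods named in the Methods section, the sketch below computes an ICR with both denominators. The counts are hypothetical placeholders (not the Norway or NC data), and the per-1000 scaling is an assumption for readability, since the abstract does not state the reporting unit.

# Interval cancer rate (ICR), illustrated with hypothetical counts.
interval_cancers = 40          # ICs after a negative screen (placeholder)
negative_screens = 75_000      # denominator option: negative screens (placeholder)
women_years_at_risk = 140_000  # denominator option: women-years at risk (placeholder)

# Method 1: ICs per 1000 screens (scaling assumed for illustration).
icr_per_1000_screens = 1000 * interval_cancers / negative_screens

# Method 2: ICs per 1000 women-years at risk of interval cancer.
icr_per_1000_women_years = 1000 * interval_cancers / women_years_at_risk

print(round(icr_per_1000_screens, 2), round(icr_per_1000_women_years, 2))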