995 results for Exponential models
Abstract:
Populations of phase oscillators interacting globally through a general coupling function f(x) have been considered. We analyze the conditions required to ensure the existence of a Lyapunov functional, giving closed expressions for it in terms of a generating function. We have also proposed a family of exactly solvable models with singular couplings, showing that it is possible to map the synchronization phenomenon onto other physical problems. In particular, the stationary solutions of the least singular coupling considered, f(x) = sgn(x), have been found analytically in terms of elliptic functions. This last case is one of the few nontrivial models for synchronization dynamics that can be solved analytically.
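As background for the model described above, the following is a minimal simulation sketch of globally coupled phase oscillators with the singular coupling f(x) = sgn(x), assuming the standard mean-field form dθ_i/dt = ω_i + (K/N) Σ_j f(θ_j − θ_i); the coupling strength, system size, and Gaussian frequency distribution are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate(N=200, K=1.0, T=50.0, dt=0.01, seed=0):
    """Euler integration of globally coupled phase oscillators with the
    singular coupling f(x) = sgn(x), acting on phase differences wrapped
    to the principal interval (-pi, pi]."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, N)   # initial phases
    omega = rng.normal(0.0, 0.1, N)         # natural frequencies (assumed Gaussian)
    for _ in range(int(T / dt)):
        diff = theta[None, :] - theta[:, None]        # theta_j - theta_i
        diff = (diff + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi]
        theta += dt * (omega + K * np.sign(diff).sum(axis=1) / N)
    r = np.abs(np.exp(1j * theta).mean())   # Kuramoto order parameter
    return theta, r

theta, r = simulate()
print(f"order parameter r = {r:.3f}")   # r near 1 indicates synchronization
```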
Abstract:
A general method to find, in a systematic way, efficient Monte Carlo cluster dynamics among the vast class of dynamics introduced by Kandel et al. [Phys. Rev. Lett. 65, 941 (1990)] is proposed. The method is successfully applied to a class of frustrated two-dimensional Ising systems. In the case of the fully frustrated model, we also find the intriguing result that critical clusters consist of self-avoiding walks at the theta point.
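The Kandel et al. framework generalizes Swendsen-Wang-type bond freezing; the sketch below shows only its simplest special case, a plain Swendsen-Wang update for the unfrustrated 2D ferromagnetic Ising model, as a reference point. The frustrated-model dynamics the paper constructs require different freezing and deleting probabilities.

```python
import numpy as np

def swendsen_wang_step(spins, beta, J=1.0, rng=None):
    """One Swendsen-Wang update for the 2D ferromagnetic Ising model:
    freeze each satisfied nearest-neighbour bond with probability
    p = 1 - exp(-2*beta*J), then flip each cluster with probability 1/2."""
    rng = rng or np.random.default_rng()
    L = spins.shape[0]
    s = spins.ravel()                   # flat view of the spin lattice
    parent = np.arange(L * L)           # union-find forest over sites

    def find(i):                        # root lookup with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    p = 1.0 - np.exp(-2.0 * beta * J)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            # bonds to the right and down, with periodic boundaries
            for j in (((x + 1) % L) * L + y, x * L + (y + 1) % L):
                if s[i] == s[j] and rng.random() < p:
                    ri, rj = find(i), find(j)
                    if ri != rj:
                        parent[ri] = rj
    roots = np.array([find(i) for i in range(L * L)])
    flip = rng.random(L * L) < 0.5      # one fair coin per cluster root
    s[flip[roots]] *= -1
    return spins

spins = np.random.default_rng(1).choice([-1, 1], size=(16, 16))
spins = swendsen_wang_step(spins, beta=0.44)   # near the critical point
```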
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as past claims. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled, and the variability in claim severity between accident years. Large changes in these processes generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which, firstly, identifies and quantifies these two influences and, secondly, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model captures the variability of the settlement speed and of the development of claim severity as a function of two parameters of the above distributions: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested on selected simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance. These data exhibit different developments and specificities. The thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma distribution, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expected future payments and therefore high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma distribution, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation where claims are reported rapidly and few further claims are expected. The extreme case arises when all claims are reported at the same time, leading to expected future payments equal either to zero or to the aggregate amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
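Since the thesis benchmarks its stochastic models against the Chain-Ladder method, the following is a minimal sketch of the standard deterministic Chain-Ladder computation on an invented cumulative run-off triangle; the figures are purely illustrative and are not taken from the thesis data.

```python
import numpy as np

# Cumulative run-off triangle (rows: accident years, columns: development
# years); NaN marks cells not yet observed. Figures are invented.
C = np.array([
    [100., 180., 220., 240.],
    [110., 200., 250., np.nan],
    [120., 210., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])
n = C.shape[0]
latest = np.array([C[i, n - 1 - i] for i in range(n)])  # last observed diagonal

# Volume-weighted development factors f_k = sum_i C[i,k+1] / sum_i C[i,k],
# taken over the rows where both columns are observed.
factors = []
for k in range(n - 1):
    rows = ~np.isnan(C[:, k + 1])
    factors.append(C[rows, k + 1].sum() / C[rows, k].sum())

# Project the unobserved lower triangle column by column.
for k in range(n - 1):
    fill = np.isnan(C[:, k + 1])
    C[fill, k + 1] = C[fill, k] * factors[k]

reserves = C[:, -1] - latest   # reserve = projected ultimate - latest observed
print(np.round(factors, 3), np.round(reserves, 1))
```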
Abstract:
A general scheme for devising efficient cluster dynamics proposed in a previous paper [Phys. Rev. Lett. 72, 1541 (1994)] is extensively discussed. In particular, the strong connection between equilibrium properties of clusters and dynamic properties, such as the correlation time of the magnetization, is emphasized. The general scheme is applied to a number of frustrated spin models and the results are discussed.
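The correlation time of the magnetization mentioned above is commonly estimated from a simulated time series via the integrated autocorrelation time; the estimator below is a standard textbook construction with a simple truncation rule, not the paper's own procedure.

```python
import numpy as np

def integrated_autocorr_time(m, window=None):
    """Estimate the integrated autocorrelation time
    tau = 1/2 + sum_{t>=1} rho(t) of a stationary series m,
    truncating the sum at the first negative rho estimate."""
    m = np.asarray(m, dtype=float) - np.mean(m)
    n = len(m)
    window = window or n // 10
    var = np.dot(m, m) / n
    rho = np.array([np.dot(m[:n - t], m[t:]) / ((n - t) * var)
                    for t in range(1, window)])
    neg = np.where(rho < 0)[0]
    if neg.size:
        rho = rho[:neg[0]]          # cut off the noisy tail
    return 0.5 + rho.sum()

# sanity check on an AR(1) series with rho(t) = 0.9**t
rng = np.random.default_rng(0)
m = np.empty(100_000)
m[0] = 0.0
for t in range(1, len(m)):
    m[t] = 0.9 * m[t - 1] + rng.standard_normal()
print(integrated_autocorr_time(m, window=500))  # close to 0.5 + 0.9/0.1 = 9.5
```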
Abstract:
The invaded cluster (IC) dynamics introduced by Machta et al. [Phys. Rev. Lett. 75, 2792 (1995)] is extended to the fully frustrated Ising model on a square lattice. The properties of the dynamics, which exhibits numerical evidence of self-organized criticality, are studied. The fluctuations in the IC dynamics are shown to be intrinsic to the algorithm, and the fluctuation-dissipation theorem no longer holds. The relaxation time is found to be very short and shows no critical size dependence.
Abstract:
Extreme-time techniques, generally applied to nonequilibrium statistical mechanical processes, are also useful for a better understanding of financial markets. We present a detailed study of the mean first-passage time for the volatility of return time series. The empirical results extracted from daily data of major indices seem to follow the same law regardless of the index, thus suggesting a universal pattern. The empirical mean first-passage time to a certain level L differs markedly from that of the Wiener process, with a dissimilar behavior depending on whether L is higher or lower than the average volatility. All of this indicates a more complex dynamics in which a reverting force drives volatility toward its mean value. We thus present the mean first-passage time expressions for the most common stochastic volatility models, whose approach is comparable to the random diffusion description. We discuss asymptotic approximations of these models and confront them with the empirical results, finding good agreement with the exponential Ornstein-Uhlenbeck model.
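To make the reverting-force picture concrete, here is a minimal Monte Carlo sketch that estimates the mean first-passage time of a linear Ornstein-Uhlenbeck process to a level L; the dynamics and all parameter values are illustrative stand-ins, not the paper's fitted volatility models.

```python
import numpy as np

def mfpt_ou(L, m=1.0, alpha=0.5, k=0.3, dt=1e-2, n_paths=5000,
            n_steps=20_000, seed=0):
    """Monte Carlo mean first-passage time to level L for an
    Ornstein-Uhlenbeck process dv = -alpha*(v - m)*dt + k*dW,
    started at its mean value m."""
    rng = np.random.default_rng(seed)
    v = np.full(n_paths, float(m))
    hit = np.full(n_paths, np.nan)       # first-passage time per path
    up = L > m                           # barrier above or below the mean
    for step in range(1, n_steps + 1):
        alive = np.isnan(hit)
        if not alive.any():
            break
        dW = rng.standard_normal(alive.sum())
        v[alive] += -alpha * (v[alive] - m) * dt + k * np.sqrt(dt) * dW
        crossed = alive.copy()
        crossed[alive] = (v[alive] >= L) if up else (v[alive] <= L)
        hit[crossed] = step * dt
    return np.nanmean(hit)               # ignores the few paths that never hit

# MFPT grows steeply with barrier distance under mean reversion
print(mfpt_ou(L=1.3), mfpt_ou(L=1.6))
```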
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for the characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps; thus, it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance process. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling the returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model the returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained: in the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models from asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, two-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; the computational effort can thus be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter, in addition to what was discussed above, compares the performance of estimators with two- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, two-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of parameters of stochastic volatility jump-diffusion models.
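The principle behind the Continuous ECF estimator, minimizing a weighted distance between the model characteristic function and the empirical one, can be illustrated in one dimension. The toy below fits a plain Gaussian, whereas the thesis works with the joint unconditional characteristic function of affine stochastic volatility jump-diffusion models; the weight function, grid, and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def ecf(u, x):
    """Empirical characteristic function (1/n) * sum_j exp(i*u*x_j)."""
    return np.exp(1j * np.outer(u, x)).mean(axis=1)

def cf_normal(u, mu, sigma):
    """Model characteristic function of N(mu, sigma^2)."""
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

def objective(params, u, phi_hat, w):
    """Weighted L2 distance between model and empirical CFs on a grid."""
    mu, sigma = params
    return np.sum(w * np.abs(phi_hat - cf_normal(u, mu, sigma)) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.05, 0.2, size=5000)      # stand-in "returns"
u = np.linspace(-10, 10, 81)              # grid approximating the CF integral
w = np.exp(-u ** 2 / 10)                  # exponential weight, a common choice
phi_hat = ecf(u, x)

res = minimize(objective, x0=[0.0, 0.1], args=(u, phi_hat, w),
               method="Nelder-Mead")
print(res.x)                              # should be close to (0.05, 0.2)
```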
Abstract:
Glioblastoma multiforme (GBM) is the most common and lethal of all gliomas. The current standard of care includes surgery followed by concomitant radiation and chemotherapy with the DNA alkylating agent temozolomide (TMZ). O⁶-methylguanine-DNA methyltransferase (MGMT) repairs the most cytotoxic of lesions generated by TMZ, O⁶-methylguanine. Methylation of the MGMT promoter in GBM correlates with increased therapeutic sensitivity to alkylating agent therapy. However, several aspects of TMZ sensitivity are not explained by MGMT promoter methylation. Here, we investigated our hypothesis that the base excision repair enzyme alkylpurine-DNA-N-glycosylase (APNG), which repairs the cytotoxic lesions N³-methyladenine and N⁷-methylguanine, may contribute to TMZ resistance. Silencing of APNG in established and primary TMZ-resistant GBM cell lines endogenously expressing MGMT and APNG attenuated repair of TMZ-induced DNA damage and enhanced apoptosis. Reintroducing expression of APNG in TMZ-sensitive GBM lines conferred resistance to TMZ in vitro and in orthotopic xenograft mouse models. In addition, resistance was enhanced with coexpression of MGMT. Evaluation of APNG protein levels in several clinical datasets demonstrated that in patients, high nuclear APNG expression correlated with poorer overall survival compared with patients lacking APNG expression. Loss of APNG expression in a subset of patients was also associated with increased APNG promoter methylation. Collectively, our data demonstrate that APNG contributes to TMZ resistance in GBM and may be useful in the diagnosis and treatment of the disease.
Abstract:
Due to the existence of free software and pedagogical guides, the use of data envelopment analysis (DEA) has been further democratized in recent years. Nowadays, it is quite usual for practitioners and decision makers with little or no knowledge of operational research to run their own efficiency analyses themselves. Within DEA, several alternative models allow for an environment adjustment. Five alternative models, each easily accessible to and usable by practitioners and decision makers, are applied to the empirical case of the 90 primary schools of the State of Geneva, Switzerland. As the State of Geneva practices an upstream positive-discrimination policy towards disadvantaged schools, this empirical case is particularly appropriate for an environment adjustment. The majority of the alternative DEA models deliver divergent results. This is a matter of concern for applied researchers and a matter of confusion for practitioners and decision makers. From a political standpoint, these diverging results could lead to potentially opposite decisions.
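As a baseline for the DEA variants compared above, here is a minimal sketch of the standard input-oriented CCR model solved as a linear program; the environment-adjusted models discussed in the paper add further stages or variables. The school figures are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m inputs x n units), Y: (s outputs x n units).
    Solves  min theta  s.t.  X @ lam <= theta * X[:, j0],
                             Y @ lam >= Y[:, j0],  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]            # decision vector: [theta, lam]
    A_in = np.c_[-X[:, [j0]], X]           # X @ lam - theta * x0 <= 0
    A_out = np.c_[np.zeros((s, 1)), -Y]    # -(Y @ lam) <= -y0
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, j0]])
    return res.fun                         # efficiency score in (0, 1]

# toy data: 2 inputs, 1 output, 4 schools (figures are invented)
X = np.array([[20., 30., 40., 20.],
              [5., 4., 6., 8.]])
Y = np.array([[60., 70., 80., 50.]])
print([round(dea_ccr_input(X, Y, j), 3) for j in range(4)])
```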
Liming in Agricultural Production Models with and Without the Adoption of Crop-Livestock Integration
Abstract:
Perennial forage crops used in crop-livestock integration (CLI) are able to accumulate large amounts of straw on the soil surface in a no-tillage system (NTS). In addition, they can potentially produce large amounts of soluble organic compounds that help improve the efficiency of liming in the subsurface, which favors root growth, thus reducing the risks of yield loss during dry spells and the harmful effects of “overliming”. The aim of this study was to test the effects of liming on two models of agricultural production, with and without crop-livestock integration, over two years. An experiment was conducted in a very clayey Latossolo Vermelho (Oxisol) located in an agricultural area under NTS in Bandeirantes, PR, Brazil. Liming was performed to increase base saturation (V) to 65, 75, and 90 %, while one plot per block was kept without lime (control). A randomized block design was adopted, arranged in split plots with four plots per block and four replications. The soil properties evaluated were pH in CaCl2, soil organic matter (SOM), Ca, Mg, K, Al, and P. The effects of liming were observed at greater depth and over a long period through the mobilization of ions in the soil, leading to a reduction in SOM and Al concentration and an increase in pH and in the levels of Ca and Mg. In the first crop year, the adoption of CLI led to an increase in the levels of K and Mg and a reduction in SOM; in the second crop year, however, SOM declined more slowly than in the first, the level of K increased, and that of P decreased. The effects of liming at depth and the improvement in the root environment under the treatments were observed only partially in the changes of the chemical properties studied.
Abstract:
We study numerically the out-of-equilibrium dynamics of the hypercubic cell spin glass in high dimensions. We obtain evidence of aging effects qualitatively similar both to experiments and to simulations of low-dimensional models. This suggests that the Sherrington-Kirkpatrick model, as well as other finite-connectivity mean-field lattices, can be used to study these effects analytically.