959 results for poisson zèbre
Resumo:
[spa] In a compound Poisson model, we define a threshold proportional reinsurance strategy: a retention level k1 is applied whenever the reserves are below a given threshold b, and a retention level k2 otherwise. We obtain the integro-differential equation for the Gerber-Shiu function, defined in Gerber-Shiu (1998), in this model, which allows us to derive expressions for the ruin probability and for the Laplace transform of the time of ruin for different distributions of the individual claim amount. Finally, we present some numerical results.
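The ruin probability characterised by this integro-differential equation can also be approximated by direct simulation. Below is a minimal Monte Carlo sketch of the threshold strategy with hypothetical parameters (exponential claim sizes, reinsurance priced without loading, and the retention level held fixed between claim instants); it is an illustration, not the Gerber-Shiu solution itself:

```python
import random

def ruin_probability(u=10.0, b=15.0, k1=0.7, k2=0.9, c=1.5,
                     lam=1.0, mean_claim=1.0, horizon=200.0,
                     n_paths=2000, seed=42):
    """Monte Carlo estimate of the finite-horizon ruin probability under
    threshold proportional reinsurance: the insurer retains a fraction
    k1 of premiums and claims while the surplus is below b, and k2
    otherwise (all parameter values are hypothetical)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        surplus, t = u, 0.0
        while True:
            dt = rng.expovariate(lam)      # time to the next claim
            if t + dt > horizon:
                break
            t += dt
            k = k1 if surplus < b else k2  # retention level in force
            surplus += k * c * dt          # retained premium income
            surplus -= k * rng.expovariate(1.0 / mean_claim)  # retained claim
            if surplus < 0.0:
                ruined += 1
                break
    return ruined / n_paths
```

Since the surplus only decreases at claim instants, checking for ruin after each claim is sufficient.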
Resumo:
Systematic trends in the properties of a linear split-gate heterojunction are studied by solving iteratively the Poisson and Schrödinger equations for different gate potentials and temperatures. A two-dimensional approximation is presented that is much simpler in the numerical implementation and that accurately reproduces all significant trends. In deriving this approximation, we provide a rigorous and quantitative basis for the formulation of models that assume a two-dimensional character for the electron gas at the junction.
Resumo:
An efficient method is developed for an iterative solution of the Poisson and Schrödinger equations, which allows systematic studies of the properties of the electron gas in linear deep-etched quantum wires. A much simpler two-dimensional (2D) approximation is developed that accurately reproduces the results of the 3D calculations. A 2D Thomas-Fermi approximation is then derived, and shown to give a good account of average properties. Further, we prove that an analytic form due to Shikin et al. is a good approximation to the electron density given by the self-consistent methods.
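The self-consistent loop can be sketched in a dimensionless one-dimensional toy model: a 2D Thomas-Fermi density responds to the potential, the Poisson equation gives the induced potential, and the two are iterated with linear mixing until convergence. The gate potential, Fermi level and coupling constant below are illustrative assumptions, not the actual geometry of the paper:

```python
def solve_poisson_1d(density, alpha, h):
    """Solve V'' = -alpha*density with V=0 at both ends (Thomas algorithm)."""
    m = len(density) - 2                      # interior points
    sub, diag, sup = [1.0] * m, [-2.0] * m, [1.0] * m
    rhs = [-alpha * density[i + 1] * h * h for i in range(m)]
    for i in range(1, m):                     # forward elimination
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    v = [0.0] * m
    v[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):            # back substitution
        v[i] = (rhs[i] - sup[i] * v[i + 1]) / diag[i]
    return [0.0] + v + [0.0]

def self_consistent_tf(n_pts=101, e_fermi=1.0, alpha=0.05,
                       mix=0.5, tol=1e-10, max_iter=500):
    """Iterate a Thomas-Fermi density and the Poisson equation to
    self-consistency (dimensionless toy model of a gated channel)."""
    h = 1.0 / (n_pts - 1)
    v_ext = [4.0 * (i * h - 0.5) ** 2 for i in range(n_pts)]  # gate potential
    v = v_ext[:]
    err = None
    for _ in range(max_iter):
        # 2D Thomas-Fermi density: n ~ max(0, E_F - V)
        density = [max(0.0, e_fermi - vi) for vi in v]
        v_h = solve_poisson_1d(density, alpha, h)
        v_new = [(1 - mix) * v[i] + mix * (v_ext[i] + v_h[i])
                 for i in range(n_pts)]
        err = max(abs(v_new[i] - v[i]) for i in range(n_pts))
        v = v_new
        if err < tol:
            break
    return density, err
```

With a weak coupling constant the mixed iteration is a contraction, so the loop converges in a few dozen steps; stronger coupling generally requires smaller mixing.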
Resumo:
Recent measurements of electron escape from a nonequilibrium charged quantum dot are interpreted within a two-dimensional (2D) separable model. The confining potential is derived from 3D self-consistent Poisson-Thomas-Fermi calculations. It is found that the sequence of decay lifetimes provides a sensitive test of the confining potential and its dependence on electron occupation.
Resumo:
Swiss death certification data over the period 1951-1984 for total cancer mortality and 30 major cancer sites in the population aged 25 to 74 years were analysed using a log-linear Poisson model with arbitrary constraints on the parameters to isolate the effects of birth cohort, calendar period of death and age. The overall pattern of total cancer mortality in males was stable for period values and showed some moderate decreases in cohort values restricted to the generations born after 1930. Cancer mortality trends were more favourable in females, with steady, though moderate, declines in both cohort and period values. According to the estimates from the model, the worst affected generation for male lung cancer was that born around 1910, and a flattening of trends or some moderate decline was observed for more recent cohorts, although this decline was considerably more limited than in other European countries. There were decreases in cohort and period values for stomach, intestine and oesophageal cancer in both sexes and (cervix) uteri in females. Increases were observed in both cohort and period trends for pancreas and liver in males and for several other neoplasms, including prostate, brain, leukaemias and lymphomas, restricted, however, for the latter sites, to the earlier cohorts and hence partly attributable to improved diagnosis and certification in the elderly. Although age values for lung cancer in females were around 10 times lower than in males, upward trends in female lung cancer cohort values were observed in successive cohorts and for period values from the late 1960s onwards. Therefore, future trends in female lung cancer mortality should continue to be monitored. The application of these age/period/cohort models thus provides a summary guide for the reading and interpretation of cancer mortality trends, although it cannot replace careful inspection of single age-specific rates.
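A log-linear Poisson model of this kind can be fitted by Newton-Raphson on the Poisson log-likelihood. The sketch below fits a single trend term, E[y] = exp(a + b*x), rather than the full age/period/cohort decomposition with identifiability constraints; the data in the test are synthetic:

```python
import math

def fit_poisson_loglinear(xs, ys, iters=50):
    """Newton-Raphson fit of the log-linear Poisson model
    E[y] = exp(a + b*x), maximising the Poisson log-likelihood."""
    a = math.log(max(sum(ys) / len(ys), 1e-9))  # start at the mean rate
    b = 0.0
    for _ in range(iters):
        mus = [math.exp(a + b * x) for x in xs]
        ga = sum(y - m for y, m in zip(ys, mus))               # score, intercept
        gb = sum((y - m) * x for x, y, m in zip(xs, ys, mus))  # score, slope
        haa = sum(mus)                                         # Fisher information
        hab = sum(m * x for x, m in zip(xs, mus))
        hbb = sum(m * x * x for x, m in zip(xs, mus))
        det = haa * hbb - hab * hab
        a += (hbb * ga - hab * gb) / det                       # Newton update
        b += (haa * gb - hab * ga) / det
    return a, b
```

In a full age/period/cohort analysis the covariates are factor levels linked by cohort = period - age, which is why additional constraints on the parameters are needed for identifiability.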
Resumo:
Abstract Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which the claims are settled, and the variability in the severity of the claims across accident years. Large changes in these processes will generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which first identifies and quantifies these two influences, and then determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines the conjugate Dirichlet-Multinomial family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model makes it possible to express the speed variability in the reporting process and the development of the claims severity as a function of two parameters of the distributions mentioned above: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and maximum likelihood. The results were tested using chosen simulation data and then using real data originating from three lines of business: Property/Casualty, General Liability, and Accident Insurance.
These data include different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method is appropriate for the claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expectations for the future payments, resulting in high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few claims remain expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expectations for the future payments of either zero or the aggregated amount of the ultimate paid claims. For this latter case, the Chain-Ladder method is not recommended.
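The Chain-Ladder method discussed above can be sketched on a cumulative run-off triangle; the triangle used in the test is invented for illustration, and the stochastic PDM/NBDM models themselves are not reproduced here:

```python
def chain_ladder(triangle):
    """Project a cumulative run-off triangle to ultimates with the
    classical Chain-Ladder development factors. `triangle` is a list of
    rows (accident years); shorter rows are less developed."""
    n = len(triangle[0])
    # volume-weighted development factors f_j = sum C[i][j+1] / sum C[i][j]
    factors = []
    for j in range(n - 1):
        num = sum(row[j + 1] for row in triangle if len(row) > j + 1)
        den = sum(row[j] for row in triangle if len(row) > j + 1)
        factors.append(num / den)
    ultimates, reserves = [], []
    for row in triangle:
        u = row[-1]
        for j in range(len(row) - 1, n - 1):
            u *= factors[j]               # develop to ultimate
        ultimates.append(u)
        reserves.append(u - row[-1])      # outstanding reserve
    return factors, ultimates, reserves
```

The implicit assumption is exactly the positive correlation discussed above: high cumulated payments to date are projected into proportionally high future payments.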
Resumo:
Preface The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each one is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equations are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: what jump process should be used to model returns of the S&P500? In the framework of affine jump-diffusion models, the decision about the jump process boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
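The compound Poisson jump component described here is straightforward to simulate. The sketch below draws jump times at a constant intensity and supports the three jump-size laws mentioned; the intensities and scales are illustrative assumptions, not estimates from the thesis:

```python
import random

def compound_poisson_jumps(horizon, intensity, jump_sampler, rng):
    """Simulate the jump times and sizes of a compound Poisson process
    with constant intensity over [0, horizon]."""
    t, jumps = 0.0, []
    while True:
        t += rng.expovariate(intensity)   # exponential inter-arrival times
        if t > horizon:
            return jumps
        jumps.append((t, jump_sampler(rng)))

# the three jump-size laws considered for asset log-prices
def normal_jump(rng):
    return rng.gauss(0.0, 0.02)

def exponential_jump(rng):
    return rng.expovariate(1.0 / 0.02)

def double_exponential_jump(rng):
    return rng.expovariate(1.0 / 0.02) * rng.choice([-1.0, 1.0])
```

A state-dependent intensity, the other option mentioned above, would replace the constant `intensity` with a function of the current variance evaluated between jumps.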
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward, due to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
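The principle of a characteristic-function estimator can be illustrated on a toy model: match the empirical characteristic function of a sample to a model characteristic function over a set of frequencies. Here a normal distribution stands in for the affine stochastic volatility jump-diffusion models of the thesis, and the grid search, parameter ranges and frequencies are illustrative assumptions:

```python
import cmath
import random

def normal_cf(t, mu, sigma):
    """Characteristic function of N(mu, sigma^2)."""
    return cmath.exp(1j * t * mu - 0.5 * (sigma * t) ** 2)

def ecf_fit(sample, ts):
    """Grid-search CF estimator: minimise the summed squared modulus of
    the distance between empirical and model characteristic functions."""
    n = len(sample)
    # empirical characteristic function at each frequency
    phis = [sum(cmath.exp(1j * t * x) for x in sample) / n for t in ts]
    best = None
    for mu10 in range(0, 21):            # mu grid: 0.0, 0.1, ..., 2.0
        for s10 in range(5, 16):         # sigma grid: 0.5, 0.6, ..., 1.5
            mu, sigma = mu10 / 10.0, s10 / 10.0
            loss = sum(abs(p - normal_cf(t, mu, sigma)) ** 2
                       for t, p in zip(ts, phis))
            if best is None or loss < best[0]:
                best = (loss, mu, sigma)
    return best[1], best[2]
```

The Continuous ECF estimator of the thesis replaces the finite frequency set with a weighted integral over the frequency domain and the grid search with numerical optimisation, but the matching principle is the same.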
Resumo:
We assessed whether fasting modifies the prognostic value of these measurements for the risk of myocardial infarction (MI). Analyses used mixed-effects models and Poisson regression. After confounders were controlled for, fasting triglyceride levels were, on average, 0.122 mmol/L lower than nonfasting levels. Each 2-fold increase in the latest triglyceride level was associated with a 38% increase in MI risk (relative rate, 1.38; 95% confidence interval, 1.26-1.51); fasting status did not modify this association. Our results suggest that it may not be necessary to restrict analyses to fasting measurements when considering MI risk.
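A rate ratio per 2-fold increase corresponds to a regression coefficient on a base-2 log-transformed covariate. The abstract reports only the per-doubling rate ratio, so the coding below is an assumption used to illustrate the conversion:

```python
import math

# A relative rate of 1.38 per doubling corresponds to a coefficient
# beta on log2(triglycerides) with exp(beta) = 1.38.
rr_per_doubling = 1.38
beta = math.log(rr_per_doubling)       # ~0.322 per log2 unit
# Equivalently, the coefficient per natural-log unit of triglycerides:
beta_natural = beta / math.log(2)      # ~0.465
# A 4-fold increase multiplies the rate by rr squared:
rr_fourfold = rr_per_doubling ** 2     # ~1.90
```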
Resumo:
In alcohol epidemiology surveys, there is a tradition of measuring alcohol-related consequences using respondents' attribution of alcohol as the cause. The authors aimed to compare the prevalence and frequency of self-attributed consequences to consequences without self-attribution using alcohol-attributable fractions (AAF). In 2007, a total of 7,174 Swiss school students aged 13-16 years reported the numbers of 6 alcohol-related adverse consequences (e.g., fights, injuries) they had incurred in the past 12 months. Consequences were measured with and without attribution of alcohol as the cause. The alcohol-use measures were frequency and volume of drinking in the past 12 months and number of risky single-occasion (≥5 drinks) drinking episodes in the past 30 days. Attributable fractions were derived from logistic (≥1 incident) and Poisson (number of incidents) regression analyses. Although relative risk estimates were higher when alcohol-attributed consequences were compared with nonattributed consequences, the use of AAFs resulted in more alcohol-related consequences (10,422 self-attributed consequences vs. 24,520 nonattributed consequences determined by means of AAFs). The likelihood of underreporting was higher among drinkers with intermediate frequencies than among either rare drinkers or frequent drinkers. Therefore, the extent of alcohol-related adverse consequences among adolescents may be underestimated when using self-attributed consequences, because of differential attribution processes, especially among infrequent drinkers.
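Attributable fractions of this kind are commonly computed with Levin's formula; the abstract does not spell out the exact form used, so the sketch below shows the standard version with hypothetical prevalence and relative-risk values:

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's population attributable fraction:
    AAF = p*(RR - 1) / (p*(RR - 1) + 1)."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (excess + 1.0)

def attributable_count(total_incidents, prevalence, relative_risk):
    """Number of incidents attributable to the exposure."""
    return total_incidents * attributable_fraction(prevalence, relative_risk)
```

For example, with a 30% exposure prevalence and a relative risk of 3.0, the AAF is 0.375, so 375 of 1,000 incidents would be attributed to the exposure.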
Resumo:
A method is proposed for the estimation of absolute binding free energy of interaction between proteins and ligands. Conformational sampling of the protein-ligand complex is performed by molecular dynamics (MD) in vacuo and the solvent effect is calculated a posteriori by solving the Poisson or the Poisson-Boltzmann equation for selected frames of the trajectory. The binding free energy is written as a linear combination of the buried surface upon complexation, SASbur, the electrostatic interaction energy between the ligand and the protein, Eelec, and the difference of the solvation free energies of the complex and the isolated ligand and protein, deltaGsolv. The method uses the buried surface upon complexation to account for the non-polar contribution to the binding free energy because it is less sensitive to the details of the structure than the van der Waals interaction energy. The parameters of the method are developed for a training set of 16 HIV-1 protease-inhibitor complexes of known 3D structure. A correlation coefficient of 0.91 was obtained with an unsigned mean error of 0.8 kcal/mol. When applied to a set of 25 HIV-1 protease-inhibitor complexes of unknown 3D structures, the method provides a satisfactory correlation between the calculated binding free energy and the experimental pIC50 without reparametrization.
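A linear model of this form can be calibrated by ordinary least squares on a training set. The sketch below fits hypothetical coefficients to an invented toy data set; it illustrates the form of the model only, not the actual HIV-1 protease parametrisation:

```python
def solve_linear(A, y):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_binding_model(rows, dg):
    """Least-squares coefficients (a, b, c, d) for
    dG_bind ~ a*SASbur + b*Eelec + c*dGsolv + d, via normal equations."""
    X = [[sas, el, so, 1.0] for sas, el, so in rows]
    AtA = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(4)]
           for i in range(4)]
    Aty = [sum(X[k][i] * dg[k] for k in range(len(X))) for i in range(4)]
    return solve_linear(AtA, Aty)
```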
Resumo:
Using data reported in articles published in Brazilian journals and papers presented at national conferences, the applications of Lotka's Law to the Brazilian literature in 10 different fields were replicated. The inverse power model was used with the least-squares and maximum-likelihood methods. Of the 10 national literatures analysed, only the literatures of medicine, steelmaking, jackfruit (jaca) and library science fitted the generalized inverse power model by the least-squares method. However, only two literatures (veterinary medicine and the letters of the Arquivo Privado de Getúlio Vargas) did not fit the model when the maximum-likelihood method was used. For both of these literatures, different models were tried. The veterinary literature fitted the negative binomial distribution, and the letters of the Arquivo Privado de Getúlio Vargas were better fitted by the Generalized Inverse Gaussian-Poisson distribution.
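The inverse power model of Lotka's Law, y = C * x**(-n), can be fitted by least squares on a log-log scale, one of the two methods mentioned above (the maximum-likelihood variant differs); the data in the test are an invented exact power law:

```python
import math

def fit_inverse_power(author_counts):
    """Least-squares fit of Lotka's inverse power model y = C * x**(-n),
    where x is the number of papers and y the number of authors with
    x papers, via linear regression on the log-log scale."""
    pts = [(math.log(x), math.log(y)) for x, y in author_counts]
    m = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    slope = (m * sxy - sx * sy) / (m * sxx - sx ** 2)
    intercept = (sy - slope * sx) / m
    return math.exp(intercept), -slope   # C, n
```

Lotka's original formulation corresponds to n close to 2, i.e. the number of authors with x papers falls off roughly as 1/x².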