119 results for Poisson Process
at Université de Lausanne, Switzerland
Abstract:
In the framework of the classical compound Poisson process in collective risk theory, we study a modification of the horizontal dividend barrier strategy by introducing random observation times at which dividends can be paid and ruin can be observed. This model contains both the continuous-time and the discrete-time risk model as limiting cases and represents a certain type of bridge between them which still enables the explicit calculation of moments of total discounted dividend payments until ruin. Numerical illustrations for several sets of parameters are given, and the effect of random observation times on the performance of the dividend strategy is studied.
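As a rough illustration of this model, the sketch below simulates the surplus process under a horizontal barrier b when dividends and ruin can only be observed at epochs of an independent Poisson process, and estimates the expected discounted dividends by Monte Carlo. The exponential claim sizes and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def discounted_dividends(u=10.0, b=15.0, c=2.0, lam=1.0, mean_claim=1.0,
                         gamma=0.5, delta=0.03, horizon=200.0):
    """One path: surplus starts at u and grows at premium rate c; claims
    arrive at Poisson rate lam with exponential sizes; dividends and ruin
    are only observed at epochs of an independent Poisson(gamma) process."""
    x, t, total = u, 0.0, 0.0
    while t < horizon:
        t_claim = rng.exponential(1.0 / lam)    # time to next claim
        t_obs = rng.exponential(1.0 / gamma)    # time to next observation
        dt = min(t_claim, t_obs)
        x += c * dt
        t += dt
        if t_claim < t_obs:
            x -= rng.exponential(mean_claim)    # claim occurs
        else:
            if x < 0:
                return total                    # ruin is observed
            if x > b:                           # pay the excess over b
                total += np.exp(-delta * t) * (x - b)
                x = b
    return total

est = np.mean([discounted_dividends() for _ in range(2000)])
print(f"estimated expected discounted dividends: {est:.3f}")
```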
Abstract:
In many fields, the spatial clustering of sampled data points has important consequences. Several indices have therefore been proposed to assess the level of clustering affecting datasets (e.g. the Morisita index, Ripley's K-function and Rényi's generalized entropy). The classical Morisita index measures how many times more likely it is to select two measurement points from the same quadrat (the data set being covered by a regular grid of varying cell size) than it would be for a random distribution generated from a Poisson process. The multipoint version (k-Morisita) takes into account k points, with k >= 2. The present research deals with a new development of the k-Morisita index for (1) monitoring-network characterization and (2) detection of patterns in monitored phenomena. From a theoretical perspective, a connection between the k-Morisita index and multifractality has also been found and highlighted on a mathematical multifractal set.
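For concreteness, here is a minimal sketch of the classical (two-point) Morisita index computed over a regular grid of quadrats; the function name and the two test patterns are invented for illustration. The k-point generalization replaces n_i(n_i - 1) with the falling factorial n_i(n_i - 1)...(n_i - k + 1).

```python
import numpy as np

def morisita_index(points, n_cells):
    """Classical Morisita index: points is an (N, 2) array in [0, 1)^2,
    n_cells the number of grid cells per axis (Q = n_cells**2 quadrats)."""
    idx = np.floor(points * n_cells).astype(int)        # quadrat of each point
    flat = idx[:, 0] * n_cells + idx[:, 1]
    counts = np.bincount(flat, minlength=n_cells ** 2)  # n_i per quadrat
    N, Q = len(points), n_cells ** 2
    return Q * np.sum(counts * (counts - 1)) / (N * (N - 1))

rng = np.random.default_rng(1)
uniform = rng.random((500, 2))                          # Poisson-like pattern
clustered = np.clip(0.5 + 0.05 * rng.standard_normal((500, 2)), 0, 0.999)
print(morisita_index(uniform, 10))      # close to 1 for a random pattern
print(morisita_index(clustered, 10))    # much larger than 1 when clustered
```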
Abstract:
Preface

The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task: the problem comes from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function, which, in contrast to the other methods, requires neither discretization nor simulation of the process. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research.

This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one.

The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and the variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class.

Hence, the next question is: what jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data.

The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any other ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter proves that our estimator indeed has this ability.

Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, because of limitations in the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
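To make the estimation idea concrete, the following sketch shows a continuous ECF estimator in its simplest form: the parameters minimize a weighted integrated squared distance between the empirical characteristic function of the data and the model characteristic function. A Gaussian model CF is used here purely as a stand-in; in the thesis the model CF is the joint unconditional characteristic function of an affine stochastic volatility jump-diffusion, for which the closed-form expression is derived.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
data = rng.normal(0.05, 0.2, size=5000)      # stand-in for observed log-returns

u = np.linspace(-20.0, 20.0, 401)            # grid for the CF argument
w = np.exp(-u ** 2 / 2)                      # weight keeps the integral finite
emp_cf = np.exp(1j * np.outer(u, data)).mean(axis=1)   # empirical CF

def objective(theta):
    mu, sigma = theta
    model_cf = np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)  # Gaussian CF
    return np.trapz(w * np.abs(emp_cf - model_cf) ** 2, u)

fit = minimize(objective, x0=[0.0, 0.5], method="Nelder-Mead")
print(fit.x)    # should be close to the true (0.05, 0.2)
```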
Abstract:
In this thesis, I develop analytical models to value supply chain investments under demand uncertainty. The thesis includes three self-contained papers. In the first paper, we investigate the value of lead-time reduction under the risk of sudden and abnormal changes in demand forecasts. We first consider the risk of a complete and permanent loss of demand. We then provide a more general jump-diffusion model, in which we add a compound Poisson process to a constant-volatility demand process to explore the impact of sudden changes in demand forecasts on the value of lead-time reduction. We use an Edgeworth series expansion to divide the lead-time cost into the part arising from constant instantaneous volatility and the part arising from the risk of jumps. We show that the value of lead-time reduction increases substantially with the intensity and/or the magnitude of jumps. In the second paper, we analyze the value of quantity flexibility in the presence of supply-chain disintermediation problems. We use the multiplicative martingale model and the "contracts as reference points" theory to capture both the positive and the negative effects of quantity flexibility for the downstream level in a supply chain. We show that lead-time reduction reduces both supply-chain disintermediation problems and supply-demand mismatches. We furthermore analyze the impact of the supplier's cost structure on the profitability of quantity-flexibility contracts. When the supplier's initial investment cost is relatively low, supply-chain disintermediation risk becomes less important, and hence the contract becomes more profitable for the retailer. We also find that supply-chain efficiency increases substantially with the supplier's ability to disintermediate the chain when the initial investment cost is relatively high. In the third paper, we investigate the value of dual sourcing for products with heavy-tailed demand distributions. We apply extreme-value theory to analyze the effects of the tail heaviness of the demand distribution on the optimal dual-sourcing strategy. We find that these effects depend on the characteristics of the demand and profit parameters. When both the profit margin of the product and the cost differential between the suppliers are relatively high, it is optimal to buffer the mismatch risk by increasing both the inventory level and the responsive capacity as demand uncertainty increases. In that case, however, both the optimal inventory level and the optimal responsive capacity decrease as the tail of demand becomes heavier. When the profit margin of the product is relatively high and the cost differential between the suppliers is relatively low, it is optimal to buffer the mismatch risk by increasing the responsive capacity and reducing the inventory level as demand uncertainty increases. In that case, however, it is optimal to buffer with more inventory and less capacity as the tail of demand becomes heavier. We also show that the optimal responsive capacity is higher for products with heavier tails when the fill rate is extremely high.
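As a rough sketch of the demand process described in the first paper, the code below simulates a constant-volatility diffusion for log-demand with an added compound Poisson jump component. The normal jump sizes and all parameter values are illustrative assumptions; the point is that the left tail of the terminal distribution fattens as the jump intensity or magnitude grows.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_forecast(d0=100.0, sigma=0.2, lam=0.5, jump_mu=-0.3,
                      jump_sd=0.1, T=1.0, n_steps=250):
    """One path of the demand forecast: log-demand follows a Brownian
    motion with volatility sigma plus compound Poisson jumps."""
    dt = T / n_steps
    log_d = np.log(d0)
    for _ in range(n_steps):
        log_d += -0.5 * sigma**2 * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        n_jumps = rng.poisson(lam * dt)              # usually 0, rarely 1
        if n_jumps:
            log_d += rng.normal(jump_mu, jump_sd, n_jumps).sum()
    return np.exp(log_d)

paths = np.array([simulate_forecast() for _ in range(2000)])
print(f"mean {paths.mean():.1f}, 5th percentile {np.percentile(paths, 5):.1f}")
```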
Abstract:
The paper is motivated by the valuation problem of guaranteed minimum death benefits in various equity-linked products. At the time of death, a benefit payment is due. It may depend not only on the price of a stock or stock fund at that time, but also on prior prices. The problem is to calculate the expected discounted value of the benefit payment. Because the distribution of the time of death can be approximated by a combination of exponential distributions, it suffices to solve the problem for an exponentially distributed time of death. The stock price process is assumed to be the exponential of a Brownian motion plus an independent compound Poisson process whose upward and downward jumps are modeled by combinations (or mixtures) of exponential distributions. Results for exponential stopping of a Lévy process are used to derive a series of closed-form formulas for call, put, lookback, and barrier options, dynamic fund protection, and dynamic withdrawal benefit with guarantee. We also discuss how barrier options can be used to model lapses and surrenders.
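A small Monte Carlo sketch of this valuation problem is given below: the benefit is a call payoff paid at an exponentially distributed time of death, and the log-price is a Brownian motion plus an independent compound Poisson process with two-sided exponential jumps. All parameter values are illustrative, and the paper's closed-form formulas would replace this simulation in practice.

```python
import numpy as np

rng = np.random.default_rng(4)

def gmdb_call(s0=100.0, k=100.0, mu=0.03, sigma=0.2, lam=0.3,
              p_up=0.4, eta_up=10.0, eta_dn=5.0, death_rate=0.05,
              delta=0.04, n_paths=20_000):
    """Estimate E[e^(-delta*tau) * max(S_tau - k, 0)], tau ~ Exp(death_rate),
    log S a Brownian motion plus a compound Poisson process."""
    tau = rng.exponential(1.0 / death_rate, n_paths)       # time of death
    diffusion = mu * tau + sigma * np.sqrt(tau) * rng.standard_normal(n_paths)
    n_jumps = rng.poisson(lam * tau)                       # jump counts up to tau
    jumps = np.zeros(n_paths)
    for i in np.nonzero(n_jumps)[0]:
        up = rng.random(n_jumps[i]) < p_up                 # up- vs down-jump
        sizes = np.where(up, rng.exponential(1.0 / eta_up, n_jumps[i]),
                         -rng.exponential(1.0 / eta_dn, n_jumps[i]))
        jumps[i] = sizes.sum()
    s_tau = s0 * np.exp(diffusion + jumps)
    return np.mean(np.exp(-delta * tau) * np.maximum(s_tau - k, 0.0))

print(f"estimated GMDB call value: {gmdb_call():.3f}")
```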
Abstract:
We characterize the value function of maximizing the total discounted utility of dividend payments for a compound Poisson insurance risk model when strictly positive transaction costs are included, leading to an impulse control problem. We illustrate that well-known simple strategies can be optimal in the case of exponential claim amounts. Finally, we develop a numerical procedure to deal with general claim amount distributions.
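To fix ideas, here is a minimal simulation of one simple impulse strategy of the kind studied: whenever the surplus reaches an upper level b, it is paid down to a lower level a, with a fixed transaction cost K deducted from each payment. Linear utility, the two levels, and all parameter values are illustrative assumptions; the paper characterizes when such strategies are actually optimal.

```python
import numpy as np

rng = np.random.default_rng(5)

def discounted_payout(u=5.0, a=5.0, b=12.0, K=0.5, c=2.0, lam=1.0,
                      mean_claim=1.5, delta=0.03, horizon=500.0):
    """One path under an (a, b) impulse strategy: when the surplus hits b,
    pay it down to a, net of the fixed transaction cost K."""
    x, t, total = u, 0.0, 0.0
    while t < horizon:                        # e^(-delta*t) is negligible by then
        t_hit = max(b - x, 0.0) / c           # premiums lift the surplus to b
        t_claim = rng.exponential(1.0 / lam)  # time to the next claim
        if t_hit <= t_claim:
            t += t_hit
            total += np.exp(-delta * t) * (b - a - K)   # lump-sum dividend
            x = a
        else:
            t += t_claim
            x += c * t_claim - rng.exponential(mean_claim)
            if x < 0:
                break                          # ruin
    return total

print(np.mean([discounted_payout() for _ in range(2000)]))
```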
Abstract:
Current explanatory models for binge eating in binge eating disorder (BED) mostly rely on models for bulimia nervosa (BN), although research indicates different antecedents for binge eating in BED. This study investigates antecedents and maintaining factors in terms of positive mood, negative mood and tension in a sample of 22 women with BED using ecological momentary assessment over a 1-week period. Values for negative mood were higher, and those for positive mood lower, during binge days compared with non-binge days. During binge days, negative mood and tension both strongly and significantly increased, and positive mood strongly and significantly decreased, at the first binge episode, followed by a slight though significant and longer-lasting decrease (negative mood, tension) or increase (positive mood) during a 4-h observation period following binge eating. Binge eating in BED seems to be triggered by an immediate breakdown of emotion regulation. There are no indications of an accumulation of negative mood triggering binge eating followed by immediate reinforcing mechanisms in terms of a substantial and stable improvement of mood, as observed in BN. These differences warrant a further specification of etiological models and could serve as a basis for developing new treatment approaches for BED.
Abstract:
BACKGROUND: Multiple interventions were made to optimize the medication process in our intensive care unit (ICU): (1) transcriptions from the medical order form to the administration plan were eliminated by merging both into a single document; (2) the new form was built in a logical sequence and was highly structured to promote completeness and standardization of information; (3) frequently used drug names, approved units, and fixed routes were pre-printed; (4) physicians and nurses were trained in the correct use of the new form. This study aimed to evaluate the impact of these interventions on clinically significant types of medication errors. METHODS: Eight types of medication errors were measured by a prospective chart review before and after the interventions in the ICU of a public tertiary care hospital. We used an interrupted time-series design to control for secular trends. RESULTS: Over 85 days, 9298 lines of drug prescription and/or administration to 294 patients, corresponding to 754 patient-days, were collected and analysed for the three series before and the three series following the intervention. The global error rate decreased from 4.95% to 2.14% (-56.8%, P < 0.001). CONCLUSIONS: The safety of the medication process in our ICU was improved by simple and inexpensive interventions. In addition to the optimization of the prescription-writing process, the documentation of intravenous preparation, and the scheduling of administration, the elimination of transcription combined with the training of users contributed to reducing errors and carries considerable potential to increase safety.
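A hedged sketch of the kind of interrupted time-series analysis described is shown below: error counts per series are modeled with a Poisson regression that includes a secular time trend and a post-intervention level change, with the number of prescription lines as exposure. The data are invented, with magnitudes only loosely matched to the reported rates.

```python
import numpy as np
import statsmodels.api as sm

periods = np.arange(6)                    # three series before, three after
post = (periods >= 3).astype(int)         # 1 after the interventions
lines = np.array([1500, 1600, 1550, 1580, 1520, 1548])  # prescription lines
errors = np.array([76, 79, 75, 35, 32, 33])             # error counts

# Poisson regression with a time trend, a level change at the
# intervention, and log(lines) as the exposure offset.
X = sm.add_constant(np.column_stack([periods, post]))
res = sm.GLM(errors, X, family=sm.families.Poisson(),
             offset=np.log(lines)).fit()
print("post-intervention rate ratio:", np.exp(res.params[2]))
```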
Abstract:
The level of information provided by ink evidence to the criminal and civil justice system is limited. The limitations arise from the weakness of the interpretative framework currently used, as proposed in ASTM standards 1422-05 and 1789-04 on ink analysis. It is proposed to use the likelihood ratio from Bayes' theorem to interpret ink evidence. Unfortunately, when considering the analytical practices defined in the ASTM standards on ink analysis, it appears that current ink analytical practices do not allow for the level of reproducibility and accuracy required by a probabilistic framework. Such a framework relies on the evaluation of the statistics of ink characteristics using an ink reference database and on the objective measurement of similarities between ink samples. A complete research programme was designed to (a) develop a standard methodology for analysing ink samples in a more reproducible way, (b) compare ink samples automatically and objectively, and (c) evaluate the proposed methodology in a forensic context. This report focuses on the first of the three stages. A calibration process, based on a standard dye ladder, is proposed to improve the reproducibility of ink analysis by HPTLC when inks are analysed at different times and/or by different examiners. The impact of this process on the variability between repetitive analyses of ink samples under various conditions is studied. The results show significant improvements in the reproducibility of ink analysis compared to traditional calibration methods.
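The following is a speculative sketch of the calibration idea, not the paper's actual procedure: measured retention values of an ink sample are mapped onto a reference scale via the measured positions of a standard dye ladder run on the same plate, so that analyses from different plates, times, or examiners become comparable. All values are invented.

```python
import numpy as np

ladder_nominal = np.array([0.10, 0.30, 0.50, 0.70, 0.90])   # reference Rf
ladder_measured = np.array([0.12, 0.33, 0.52, 0.69, 0.88])  # on this plate

def calibrate(rf_sample):
    """Map measured Rf values onto the reference scale by interpolation
    against the dye ladder measured on the same plate."""
    return np.interp(rf_sample, ladder_measured, ladder_nominal)

print(calibrate(np.array([0.25, 0.55])))
```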
Abstract:
Immunocompetent microglia play an important role in the pathogenesis of Alzheimer's disease (AD). Antimicroglial antibodies in the cerebrospinal fluid (CSF) in clinically diagnosed AD patients have been previously recorded. Here, we report the results of the analysis of the CSF from 38 autopsy cases: 7 with definite AD; 14 with mild and 10 with moderate Alzheimer's type pathology; and 7 controls. Antimicroglial antibodies were identified in 70% of patients with definite AD, in 80% of patients with moderate and in 28% of patients with mild Alzheimer's type pathology. CSF antimicroglial antibodies were not observed in any of the control cases. The results show that CSF antimicroglial antibodies are present in the majority of patients with definite AD and also in cases with moderate Alzheimer's type changes. They may also indicate dysregulation of microglial function. Together with previous observations, these findings indicate that compromised immune defense mechanisms play an important role in the pathogenesis of AD.
Abstract:
For several years, all five medical faculties of Switzerland have been engaged in a reform of their training curricula, for two reasons: first, according to a new federal act issued in 2006 by the administration of the confederation, faculties needed to meet international standards in terms of content and pedagogic approaches; second, all Swiss universities, and thus all medical faculties, had to adapt the structure of their curricula to the framework and principles that govern the Bologna process. This process is the result of the Bologna Declaration of June 1999, which proposes and requires a series of reforms to make European higher education more compatible and comparable, more competitive, and more attractive for European students. The present paper reviews some of the results achieved in the field, focusing on several issues such as the shortage of physicians and primary care practitioners; the importance of public health, community medicine and medical humanities; and the implementation of new training approaches, including e-learning and simulation. In the future, faculties should work on several specific challenges: students' mobility; the improvement of students' autonomy and critical thinking as well as their generic and specific skills; and, finally, how to improve the attractiveness of the academic career for physicians of both sexes.
Abstract:
Neutrophils are recruited to the site of parasite inoculation within a few hours of infection with the protozoan parasite Leishmania major. In C57BL/6 mice, which are resistant to infection, neutrophils are cleared from the site of s.c. infection within 3 days, whereas they persist for at least 10 days in susceptible BALB/c mice. In the present study, we investigated the role of macrophages (MΦ) in regulating neutrophil number. Inflammatory cells were recruited by i.p. injection of either 2% starch or L. major promastigotes. Neutrophils were isolated and cultured in the presence of increasing numbers of MΦ. The extent of neutrophil apoptosis correlated positively with the number of MΦ added. This process was strictly dependent on TNF, because MΦ from TNF-deficient mice failed to induce neutrophil apoptosis. Assays using MΦ derived from membrane-TNF knock-in mice, or cultures in Transwell chambers, revealed that contact with MΦ was necessary to induce neutrophil apoptosis, a process requiring expression of membrane TNF. L. major was shown to exacerbate MΦ-induced apoptosis of neutrophils, but BALB/c MΦ were not as potent as C57BL/6 MΦ in this induction. Our results emphasize the importance of MΦ-induced neutrophil apoptosis, and of membrane TNF, in the early control of inflammation.