123 results for Continuous Markov processes
at Université de Lausanne, Switzerland
Abstract:
The metasomatism observed in the oceanic and continental lithosphere is generally interpreted to represent a continuous differentiation process forming anhydrous and hydrous veins plus a cryptic enrichment in the surrounding peridotite. In order to constrain the mechanisms of vein formation and potentially clarify the nature and origin of the initial metasomatic agent, we performed a series of high-pressure experiments simulating the liquid line of descent of a basanitic magma differentiating within continental or mature oceanic lithosphere. This series of experiments was conducted in an end-loaded piston cylinder apparatus starting from an initial hydrous ne-normative basanite at 1.5 GPa and temperatures varying between 1,250 and 980°C. A near-pure fractional crystallization process was achieved in a stepwise manner, using 30°C temperature steps and starting compositions corresponding to the liquid (glass) composition of the previous, higher-temperature step. Liquids evolve progressively from basanite to peralkaline, aluminum-rich compositions without significant SiO2 variation. The resulting cumulates are characterized by an anhydrous clinopyroxene + olivine assemblage at high temperature (1,250-1,160°C), while at lower temperature (1,130-980°C) hydrous cumulates dominated by amphibole, with minor clinopyroxene, spinel, ilmenite, titanomagnetite and apatite, are formed. This new data set supports the interpretation that anhydrous and hydrous metasomatic veins could be produced during continuous differentiation processes of primary, hydrous alkaline magmas at high pressure. However, the comparison between the cumulates generated by fractional crystallization of an initial ne-normative liquid and those derived from hy-normative initial compositions (hawaiite or picrobasalt) indicates that, for all hydrous liquids, the phases formed upon differentiation are mostly similar, even though the proportions of hydrous versus anhydrous minerals can vary significantly. This suggests that the formation of amphibole-bearing metasomatic veins observed in the lithospheric mantle could be linked to the differentiation of initial liquids ranging from ne-normative to hy-normative in composition. The present study does not resolve the question of whether the metasomatism observed in the lithospheric mantle is a precursor or a consequence of alkaline magmatism; however, it confirms that the percolation and differentiation of a liquid produced by a low degree of partial melting of a source similar to, or slightly more enriched than, depleted MORB mantle could generate hydrous metasomatic veins, interpreted as a potential source for alkaline magmatism by various authors.
Abstract:
In a weighted spatial network, as specified by an exchange matrix, the variances of the spatial values are inversely proportional to the size of the regions. Spatial values are no longer exchangeable under independence, thus weakening the rationale for ordinary permutation and bootstrap tests of spatial autocorrelation. We propose an alternative permutation test for spatial autocorrelation, based upon exchangeable spatial modes, constructed as linear orthogonal combinations of spatial values. The coefficients are obtained as the eigenvectors of the standardised exchange matrix appearing in spectral clustering, and generalise to the weighted case the concept of spatial filtering for connectivity matrices. Also, two proposals aimed at transforming an accessibility matrix into an exchange matrix with a priori fixed margins are presented. Two examples (inter-regional migratory flows and binary adjacency networks) illustrate the formalism, rooted in the theory of spectral decomposition for reversible Markov chains.
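To make the modes permutation test concrete, here is a minimal sketch of the idea on a toy symmetric exchange matrix: the spatial modes are taken as eigenvectors of the standardised exchange matrix, and the mode scores, being exchangeable under independence, are permuted instead of the raw regional values. The toy data, variable names and the Moran-like test statistic below are illustrative assumptions, not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy symmetric exchange matrix E over n regions, normalised so its entries sum to 1;
# the row sums f play the role of regional weights (margins).
n = 30
A = rng.random((n, n))
E = (A + A.T) / 2
E /= E.sum()
f = E.sum(axis=1)

# Standardised exchange matrix, as in spectral clustering: D^{-1/2} E D^{-1/2}, D = diag(f).
D_inv_sqrt = np.diag(1.0 / np.sqrt(f))
eigvals, U = np.linalg.eigh(D_inv_sqrt @ E @ D_inv_sqrt)   # columns of U = spatial modes

x = rng.normal(size=n)                      # toy spatial values (one per region)
x_tilde = np.sqrt(f) * (x - np.sum(f * x))  # weighted centring, orthogonal to the trivial mode
scores = U.T @ x_tilde                      # mode scores

# Moran-like autocorrelation statistic expressed through the modes.
def stat(s):
    return np.sum(eigvals * s**2) / np.sum(s**2)

obs = stat(scores)

# Modes permutation test: under independence the mode scores are exchangeable,
# so we permute them rather than the raw regional values.
perm = np.array([stat(rng.permutation(scores)) for _ in range(2000)])
p_value = np.mean(perm >= obs)
print(f"observed statistic: {obs:.3f}   permutation p-value: {p_value:.3f}")
```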
Abstract:
The present paper studies the probability of ruin of an insurer, if excess of loss reinsurance with reinstatements is applied. In the setting of the classical Cramér-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process. (C) 2011 Elsevier Inc. All rights reserved.
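For orientation, a minimal sketch of the stochastic-simulation alternative mentioned at the end of the abstract: Monte Carlo estimation of the finite-time ruin probability in a plain Cramér-Lundberg model, without the excess-of-loss layer and reinstatements treated in the paper. All parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def finite_time_ruin_probability(u, T, lam=1.0, c=1.3, claim_mean=1.0, n_paths=20000):
    """Monte Carlo estimate of P(ruin before time T | initial surplus u)."""
    ruined = 0
    for _ in range(n_paths):
        t, surplus = 0.0, u
        while True:
            dt = rng.exponential(1.0 / lam)        # waiting time to the next claim
            t += dt
            if t > T:                              # horizon reached without ruin
                break
            surplus += c * dt                      # premiums earned since the last claim
            surplus -= rng.exponential(claim_mean) # claim payment
            if surplus < 0:                        # ruin
                ruined += 1
                break
    return ruined / n_paths

print(finite_time_ruin_probability(u=5.0, T=10.0))
```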
Abstract:
We reconsider a formula for arbitrary moments of expected discounted dividend payments in a spectrally negative Lévy risk model that was obtained in Renaud and Zhou (2007, [4]) and in Kyprianou and Palmowski (2007, [3]) and extend the result to stationary Markov processes that are skip-free upwards.
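As a hedged illustration of the quantity involved, the sketch below estimates the first two moments of discounted dividend payments by Monte Carlo for a compound-Poisson (spectrally negative Lévy) surplus process reflected at a constant dividend barrier. It is a numerical cross-check under invented parameters, not the analytical formula of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(2)

def discounted_dividends(u=2.0, b=4.0, c=1.5, lam=1.0, claim_mean=1.0,
                         delta=0.03, t_max=100.0):
    """One simulated path of total discounted dividends under a constant barrier b."""
    x, t, d = u, 0.0, 0.0
    while t < t_max:
        t_next = min(t + rng.exponential(1.0 / lam), t_max)   # next claim (or horizon)
        t_hit = t + max(b - x, 0.0) / c                        # time the barrier is reached
        if t_hit < t_next:
            # while sitting at the barrier, premiums are paid out as dividends at rate c
            d += c / delta * (np.exp(-delta * t_hit) - np.exp(-delta * t_next))
            x = b
        else:
            x += c * (t_next - t)                              # surplus grows below the barrier
        t = t_next
        if t >= t_max:
            break
        x -= rng.exponential(claim_mean)                       # claim payment
        if x < 0.0:                                            # ruin stops the dividend stream
            break
    return d

paths = np.array([discounted_dividends() for _ in range(5000)])
print(f"E[D] ~ {paths.mean():.3f},  E[D^2] ~ {(paths**2).mean():.3f}")
```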
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use became wider and addressed different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end with describing some developments that are currently underway or should take place in the foreseeable future. (C) 2012 Elsevier B.V. All rights reserved.
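A strongly simplified, one-dimensional sketch of the type of calculation described (not PECUBE itself): an explicit finite-difference solution of the heat production-diffusion-advection equation for a crustal column exhumed at a constant rate, followed by a naive closure-temperature age prediction for a surface sample. All numerical values are illustrative assumptions.

```python
import numpy as np

# Thermal and kinematic parameters (illustrative assumptions).
kappa = 25.0          # thermal diffusivity [km^2/Myr]
H = 5.0               # radiogenic heat production expressed as a heating rate [°C/Myr]
v = 0.5               # exhumation rate [km/Myr]
L, nz = 30.0, 151     # depth of the model column [km] and number of nodes
T_surf, T_base = 0.0, 600.0
dz = L / (nz - 1)
dt = 0.4 * dz**2 / kappa                        # explicit stability limit

z = np.linspace(0.0, L, nz)
T = T_surf + (T_base - T_surf) * z / L          # initial linear geotherm

t, t_total = 0.0, 50.0
while t < t_total:
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dz**2
    adv = v * (T[2:] - T[:-2]) / (2 * dz)       # upward advection of hot rock (exhumation)
    T[1:-1] += dt * (kappa * lap + adv + H)
    T[0], T[-1] = T_surf, T_base                # fixed surface and basal temperatures
    t += dt

# Track the particle that ends at the surface today: at time 'a' before present it sat
# at depth v*a. Its age is the time since it cooled through a nominal closure temperature
# (using the final temperature field as a steady-state approximation).
Tc = 110.0                                      # assumed closure temperature [°C]
ages = np.arange(0.0, t_total, 0.1)             # [Myr before present]
temp_history = np.interp(v * ages, z, T)
age = ages[np.searchsorted(temp_history, Tc)]
print(f"predicted closure age ~ {age:.1f} Myr for an exhumation rate of {v} km/Myr")
```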
Abstract:
Evolutionary processes acting at the expanding margins of a species' range are still poorly understood. Genetic drift is considered prevalent in marginal populations, and the maintenance of genetic diversity during recolonization might seem puzzling. To investigate such processes, a fine-scale investigation of 219 individuals was performed within a population of Biscutella laevigata (Brassicaceae), located at the leading edge of its range. The survey used amplified fragment length polymorphisms (AFLPs). As commonly reported across the whole species distribution range, individual density and genetic diversity decreased along the local axis of recolonization of this expanding population, highlighting the enduring effect of the historical colonization on present-day diversity. The self-incompatibility system of the plant may have prevented local inbreeding in newly found patches and sustained genetic diversity by ensuring gene flow from established populations. Within the more continuously populated region, spatial analysis of genetic structure revealed restricted gene flow among individuals. The distribution of genotypes formed a mosaic of relatively homogeneous patches within the continuous population. This pattern could be explained by a history of expansion by long-distance dispersal followed by fine-scale diffusion (that is, a stratified dispersal combination). The secondary contact among expanding patches apparently led to admixture among differentiated genotypes where they met (that is, a reshuffling effect). This type of dynamics could explain the maintenance of genetic diversity during recolonization.
Abstract:
Preface The starting point for this work and eventually the subject of the whole thesis was the question: how to estimate parameters of the affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with estimation problems of latent variables. One appeared to be particularly interesting. It proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for the stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each one is written as an independent and self-contained article. At the same time, questions that are answered by the second and third parts of this work arise naturally from the issues investigated and results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for the stochastic volatility models with jumps both in the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function for the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for the stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps both in the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which has been chosen as a general representative of the stock asset class. Hence, the next question is: what jump process to use to model returns of the S&P500. The decision about the jump process in the framework of the affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size which are currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to do that kind of test. Thus, the third part of this thesis concentrates on the estimation of parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter proves that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises naturally: whether the computational effort can be reduced without affecting the efficiency of the estimator, or whether the efficiency of the estimator can be improved without dramatically increasing the computational burden. The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function which is used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure is. In practice, however, this relationship is not so straightforward due to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated. Thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. The improvement of the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations on computing power and the optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of parameters of the stochastic volatility jump-diffusion models.
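As a deliberately simplified illustration of characteristic-function-based estimation, the sketch below fits the two parameters of a Gaussian return model by minimising a weighted integrated squared distance between the empirical and the model characteristic functions. The thesis works with the joint unconditional characteristic function of stochastic volatility jump-diffusion models; this toy version only shows the mechanics, and all choices (grid, weight function, simulated data) are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
returns = rng.normal(loc=0.0005, scale=0.01, size=5000)    # simulated "returns"

u_grid = np.linspace(-400.0, 400.0, 201)                    # argument grid of the CF
du = u_grid[1] - u_grid[0]
weights = np.exp(-(0.01 * u_grid)**2)                        # damping weight w(u)

def empirical_cf(u, x):
    return np.mean(np.exp(1j * np.outer(u, x)), axis=1)

def model_cf(u, mu, sigma):
    return np.exp(1j * u * mu - 0.5 * (sigma * u)**2)        # Gaussian characteristic function

phi_hat = empirical_cf(u_grid, returns)

def objective(theta):
    mu, log_sigma = theta
    diff = phi_hat - model_cf(u_grid, mu, np.exp(log_sigma))
    return np.sum(weights * np.abs(diff)**2) * du            # integrated squared CF distance

res = minimize(objective, x0=[0.0, np.log(0.02)], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
print(f"mu_hat = {mu_hat:.5f}, sigma_hat = {sigma_hat:.5f}")
```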
Abstract:
Flood effectiveness observations imply that two families of processes describe the formation of debris flow volume. One is related to the rainfall-erosion relationship and can be seen as a gradual process, and one is related to additional geological/geotechnical events, hereafter named extraordinary events. In order to discuss the hypothesis of the coexistence of two modes of volume formation, several methodologies are applied. Firstly, classical approaches consisting of relating volume to catchment characteristics are considered. These approaches raise questions about the quality of the data rather than providing answers concerning the controlling processes. Secondly, we consider statistical approaches (cumulative number of events distribution and cluster analysis), and these suggest the possibility of having two distinct families of processes. However, the quantitative evaluation of the threshold differs from the one that could be obtained from the first approach, but they all agree on the coexistence of two families of events. Thirdly, a conceptual model is built exploring how and why debris flow volume in alpine catchments changes with time. Depending on the initial condition (sediment production), the model shows that large debris flows (i.e. with important volume) are observed in the beginning period, before a steady state is reached. During this second period, debris flow volumes such as those observed in the beginning period are not observed again. Integrating the results of the three approaches, two case studies are presented showing: (1) the possibility to observe in a catchment large volumes that will never happen again due to a drastic decrease in sediment availability, supporting its difference from gradual erosion processes; (2) that following a rejuvenation of the sediment storage (by a rock avalanche) the magnitude-frequency relationship of a torrent can be differentiated into two phases, an initial one with large and frequent debris flows and a later one with less intense and less frequent debris flows, supporting the results of the conceptual model. Although the results obtained cannot identify a clear threshold between the two families of processes, they show that some debris flows can be seen as pulses of sediment differing from those expected from gradual erosion.
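To illustrate the conceptual model qualitatively, the sketch below simulates a supply-limited sediment reservoir: storage is recharged at a constant rate and partially evacuated by randomly triggered debris flows, so early events are large when the initial storage is high and volumes then decline towards a steady state. The reservoir structure and all numbers are assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(4)

recharge = 500.0          # sediment produced by weathering each year [m^3/yr] (assumed)
trigger_prob = 0.3        # yearly probability of a triggering rainstorm (assumed)
evac_fraction = 0.4       # fraction of the stored sediment mobilised per event (assumed)
storage = 100000.0        # initial storage, e.g. after a rock avalanche [m^3]

volumes = []
for year in range(200):
    storage += recharge                       # gradual recharge of the sediment stock
    if rng.random() < trigger_prob:           # a rainstorm triggers a debris flow
        v = evac_fraction * storage
        storage -= v
        volumes.append((year, v))

print("first events [m^3]:", [round(v) for _, v in volumes[:3]])
print("late events  [m^3]:", [round(v) for _, v in volumes[-3:]])
```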
Abstract:
Motivation: Hormone pathway interactions are crucial in shaping plant development, such as synergism between the auxin and brassinosteroid pathways in cell elongation. Both hormone pathways have been characterized in detail, revealing several feedback loops. The complexity of this network, combined with a shortage of kinetic data, renders its quantitative analysis virtually impossible at present. Results: As a first step towards overcoming these obstacles, we analyzed the network using a Boolean logic approach to build models of auxin and brassinosteroid signaling, and their interaction. To compare these discrete dynamic models across conditions, we transformed them into qualitative continuous systems, which predict network component states more accurately and can accommodate kinetic data as they become available. To this end, we developed an extension for the SQUAD software, allowing semi-quantitative analysis of network states. Contrasting the developmental output depending on cell type-specific modulators enabled us to identify a most parsimonious model, which explains initially paradoxical mutant phenotypes and revealed a novel physiological feature.
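A minimal sketch of turning a small Boolean model into a qualitative continuous system, in the spirit of the SQUAD approach (but not its actual equations): Boolean AND/OR/NOT are replaced by min/max/1-x and each node relaxes towards a sigmoid of its regulatory input. The three-node toy network and node names below are purely illustrative, not the auxin/brassinosteroid model of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

h = 10.0                                    # steepness of the qualitative switch

def sig(w):
    return 1.0 / (1.0 + np.exp(-h * (w - 0.5)))

def rhs(t, x):
    auxin, br, growth = x                   # hypothetical node activities in [0, 1]
    w_auxin = 0.8                           # constant external inputs (assumed)
    w_br = 0.6
    w_growth = min(auxin, br)               # Boolean "auxin AND br" as a fuzzy min
    return [sig(w_auxin) - auxin,           # each node relaxes towards its target level
            sig(w_br) - br,
            sig(w_growth) - growth]

sol = solve_ivp(rhs, (0.0, 20.0), [0.1, 0.1, 0.0])
print("steady state (auxin, br, growth):", np.round(sol.y[:, -1], 3))
```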
Abstract:
PURPOSE: To present the long-term follow-up of 10 adolescents and young adults with documented cognitive and behavioral regression as children due to nonlesional focal, mainly frontal, epilepsy with continuous spike-waves during slow wave sleep (CSWS). METHODS: Past medical and electroencephalography (EEG) data were reviewed and neuropsychological tests exploring main cognitive functions were administered. KEY FINDINGS: After a mean duration of follow-up of 15.6 years (range, 8-23 years), none of the 10 patients had recovered fully, but four regained borderline to normal intelligence and were almost independent. Patients with prolonged global intellectual regression had the worst outcome, whereas those with more specific and short-lived deficits recovered best. The marked behavioral disorders resolved in all but one patient. Executive functions were neither severely nor homogenously affected. Three patients with a frontal syndrome during the active phase (AP) disclosed only mild residual executive and social cognition deficits. The main cognitive gains occurred shortly after the AP, but qualitative improvements continued to occur. Long-term outcome correlated best with duration of CSWS. SIGNIFICANCE: Our findings emphasize that cognitive recovery after cessation of CSWS depends on the severity and duration of the initial regression. None of our patients had major executive and social cognition deficits with preserved intelligence, as reported in adults with early destructive lesions of the frontal lobes. Early recognition of epilepsy with CSWS and rapid introduction of effective therapy are crucial for the best possible outcome.
Abstract:
Among the largest resources for biological sequence data is the large amount of expressed sequence tags (ESTs) available in public and proprietary databases. ESTs provide information on transcripts but for technical reasons they often contain sequencing errors. Therefore, when analyzing EST sequences computationally, such errors must be taken into account. Earlier attempts to model error-prone coding regions have shown good performance in detecting and predicting these while correcting sequencing errors using codon usage frequencies. In the research presented here, we improve the detection of translation start and stop sites by integrating a more complex mRNA model with codon usage bias based error correction into one hidden Markov model (HMM), thus generalizing this error correction approach to more complex HMMs. We show that our method maintains the performance in detecting coding sequences.
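For readers unfamiliar with the machinery, here is a minimal sketch of a two-state HMM (coding vs. non-coding) decoded with the Viterbi algorithm. The model in the paper couples a full mRNA structure with codon-usage-based error correction; the states, transition and emission probabilities below are invented for illustration.

```python
import numpy as np

states = ["noncoding", "coding"]
nucs = {"A": 0, "C": 1, "G": 2, "T": 3}

log_start = np.log([0.7, 0.3])
log_trans = np.log([[0.95, 0.05],
                    [0.10, 0.90]])
log_emit = np.log([[0.25, 0.25, 0.25, 0.25],     # noncoding: uniform base composition
                   [0.20, 0.30, 0.30, 0.20]])    # coding: GC-rich (toy bias)

def viterbi(seq):
    """Most probable state path for a nucleotide sequence under the toy HMM."""
    obs = [nucs[c] for c in seq]
    n, k = len(obs), len(states)
    dp = np.full((n, k), -np.inf)                # best log-probability ending in each state
    back = np.zeros((n, k), dtype=int)           # backpointers for path reconstruction
    dp[0] = log_start + log_emit[:, obs[0]]
    for t in range(1, n):
        for j in range(k):
            scores = dp[t - 1] + log_trans[:, j]
            back[t, j] = np.argmax(scores)
            dp[t, j] = scores[back[t, j]] + log_emit[j, obs[t]]
    path = [int(np.argmax(dp[-1]))]
    for t in range(n - 1, 0, -1):                # backtrack from the last position
        path.append(back[t, path[-1]])
    return [states[i] for i in reversed(path)]

print(viterbi("ATATATGCGCGGCGCATAT"))
```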
Abstract:
INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs in hospitalization, rehabilitation, and institutionalization. The incidence rate sharply increases after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD of all women, and one measuring BMD only of those having at least one risk factor, were compared with the reference strategy "no screening". Cost-effectiveness ratios were measured as cost per year gained without hip fracture. Most probabilities were based on data observed in the EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost-effective than "screen women at risk". For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratio of these two strategies compared with the reference was 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on cost-effectiveness ratios of the usual screening strategies for osteoporosis.
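A schematic sketch of a Markov cohort model of the type used in such analyses: three states (fracture-free, hip fracture, dead), a 10-year horizon, and a screening strategy that lowers the annual fracture probability at an extra per-woman cost. All probabilities and costs are invented for illustration and are not the EPIDOS/SEMOF/OFELY-based estimates of the paper.

```python
def run_strategy(p_fracture, screening_cost, years=10, cohort=1000.0,
                 p_death=0.03, fracture_cost=8000.0):
    """Return (total cost, hip-fracture-free person-years) for the cohort."""
    well, fractured, dead = cohort, 0.0, 0.0
    cost = screening_cost * cohort
    fracture_free_years = 0.0
    for _ in range(years):
        die_well = p_death * well
        die_frac = p_death * fractured
        new_frac = p_fracture * (well - die_well)      # fractures among survivors this cycle
        fracture_free_years += well - die_well - new_frac
        cost += new_frac * fracture_cost
        well -= die_well + new_frac
        fractured += new_frac - die_frac
        dead += die_well + die_frac
    return cost, fracture_free_years

cost_ref, eff_ref = run_strategy(p_fracture=0.020, screening_cost=0.0)     # "no screening"
cost_scr, eff_scr = run_strategy(p_fracture=0.017, screening_cost=300.0)   # "screen all"
icer = (cost_scr - cost_ref) / (eff_scr - eff_ref)
print(f"incremental cost per hip-fracture-free year gained: {icer:.0f} (toy currency units)")
```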