963 results for continuous-time models
Abstract:
This paper derives the HJB (Hamilton-Jacobi-Bellman) equation for sophisticated agents in a finite-horizon dynamic optimization problem with non-constant discounting in a continuous setting, using a dynamic programming approach. A simple example illustrates the applicability of this HJB equation by suggesting a method for constructing the subgame perfect equilibrium solution to the problem. Conditions for observational equivalence with an associated problem with constant discounting are analyzed. Special attention is paid to the case of free terminal time. Strotz's model (a cake-eating problem for a nonrenewable resource with non-constant discounting) is revisited.
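For orientation, the classical finite-horizon HJB equation under constant discounting, which the paper generalizes to non-constant discount functions (a sketch in our own notation, not the paper's symbols: V is the value function, u the instantaneous utility, f the state dynamics, S the salvage value, and rho the constant discount rate):

$$ \rho V(t,x) - \frac{\partial V}{\partial t}(t,x) = \max_{c}\Big\{ u(x,c,t) + \frac{\partial V}{\partial x}(t,x)\, f(x,c,t) \Big\}, \qquad V(T,x) = S(x). $$

Under a non-constant discount function, time consistency fails, and the sophisticated agent's equilibrium must be characterized by a modified equation of this type, which is what the paper derives.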
Abstract:
In Neo-Darwinism, variation and natural selection are the two evolutionary mechanisms that propel biological evolution. Our previous article presented a histogram model [1] consisting of populations of individuals whose numbers change under the influence of variation and/or fitness, the total population remaining constant. Individuals are classified into bins, and the content of each bin is calculated generation after generation by an Excel spreadsheet. Here, we apply the histogram model to a stable population with fitness F(1)=1.00 in which one or two fitter mutants emerge. In the first scenario, a single mutant with fitness greater than 1.00 emerged in the population. The simulations ended when the original population was reduced to a single individual. The histogram model was validated by excellent agreement between its predictions and those of a classical continuous function (Eqn. 1) that predicts the number of generations needed for a favorable mutation to spread throughout a population. But in contrast to Eqn. 1, our histogram model is adaptable to more complex scenarios, as demonstrated here. In the second and third scenarios, the original population was present at time zero together with two mutants whose fitness values were higher than that of the original population and distinct from each other. In the fourth scenario, the large original population was present at time zero together with one fitter mutant. After a number of generations, when the mutant offspring had multiplied, a second mutant was introduced whose fitness was even greater. The histogram model also allows Shannon entropy (SE) to be monitored continuously as the information content of the total population decreases or increases. The results of these simulations illustrate, in a graphically didactic manner, the influence of natural selection, operating through relative fitness, on the emergence and dominance of a fitter mutant.
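A minimal sketch of the selection-only dynamics described above, assuming illustrative fitness values and population sizes (this is not the paper's Excel implementation; bin labels and numbers are hypothetical):

```python
import math

def simulate(bins, fitness, generations):
    """Selection-only histogram model: each generation, every bin grows in
    proportion to its relative fitness, then counts are renormalized so the
    total population stays constant."""
    total = sum(bins.values())
    history = []
    for g in range(generations):
        # Fitness-weighted reproduction.
        raw = {k: n * fitness[k] for k, n in bins.items()}
        scale = total / sum(raw.values())
        bins = {k: v * scale for k, v in raw.items()}
        # Shannon entropy of the population's composition, in bits.
        probs = [v / total for v in bins.values() if v > 0]
        se = -sum(p * math.log2(p) for p in probs)
        history.append((g, dict(bins), se))
    return history

# Scenario 1 (illustrative numbers): one fitter mutant in a stable population.
bins = {"original": 999.0, "mutant": 1.0}
fitness = {"original": 1.00, "mutant": 1.05}
for g, state, se in simulate(bins, fitness, 300)[::50]:
    print(f"gen {g:3d}  mutant = {state['mutant']:8.1f}  SE = {se:.4f}")
```

The entropy trace rises while the two subpopulations coexist and falls back toward zero as the fitter mutant takes over, mirroring the SE monitoring described in the abstract.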
Abstract:
Wastewater application to soil is an alternative for fertilization and water reuse. However, particular care must be taken with this practice, since successive wastewater applications can cause soil salinization. Time-domain reflectometry (TDR) allows the simultaneous and continuous monitoring of both soil water content and apparent electrical conductivity, and thus the indirect measurement of the electrical conductivity of the soil solution. This study aimed to evaluate the suitability of TDR for the indirect determination of the electrical conductivity of the saturated soil extract (ECse) by using an empirical equation for the TDR apparatus Trase 6050X1. Disturbed soil samples saturated with swine wastewater were used, at proportions of 0, 0.45, 0.90, 1.80, 2.70, and 3.60 m³ m⁻³. The probes were equipped with three handmade 0.20 m long rods. The fit of the empirical model associating the TDR-measured electrical conductivity (EC_TDR) with ECse was excellent, indicating this approach as suitable for determining the electrical conductivity of the soil solution.
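As a sketch of how such an empirical calibration can be fitted (hypothetical data pairs and a power-law form chosen purely for illustration; the paper's equation and measurements are not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical calibration pairs (EC_TDR, ECse) in dS/m.
ec_tdr = np.array([0.3, 0.8, 1.5, 2.4, 3.1, 3.9])
ec_se  = np.array([0.5, 1.4, 2.6, 4.1, 5.3, 6.6])

def model(x, a, b):
    # Power-law form ECse = a * EC_TDR**b, one common choice for such
    # calibrations (an assumption, not necessarily the paper's model).
    return a * np.power(x, b)

params, _ = curve_fit(model, ec_tdr, ec_se)
print("a = %.3f, b = %.3f" % tuple(params))
```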
Abstract:
The freeze-out of particles across a three-dimensional space-time hypersurface in fluid dynamical models is discussed. The calculation of the final momentum distribution of emitted particles is described for freeze-out surfaces with both spacelike and timelike normals, taking into account conservation laws across the freeze-out discontinuity.
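The standard starting point for such calculations is the Cooper-Frye prescription, sketched here in our notation (the paper's treatment of conservation laws and of spacelike versus timelike normals goes beyond this formula):

$$ E\,\frac{dN}{d^{3}p} \;=\; \int_{\sigma} f(x,p)\, p^{\mu}\, d\sigma_{\mu}, $$

where f(x,p) is the phase-space distribution on the freeze-out hypersurface σ and dσ_μ its normal element; regions of the surface where p^μ dσ_μ < 0 are the well-known source of difficulties that a consistent treatment must address.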
Abstract:
BACKGROUND: Intimal hyperplasia (IH) is a vascular remodeling process which often leads to failure of arterial bypass or hemodialysis access. Experimental and clinical work has provided insight into IH development; however, further studies under precisely controlled conditions are required to improve therapeutic strategies to inhibit IH development. Ex vivo perfusion of human vessel segments under standardized hemodynamic conditions may provide an adequate experimental approach for this purpose. Therefore, chronically perfused venous segments were studied and compared to traditional static culture procedures with regard to functional and histomorphologic characteristics as well as gene expression. MATERIALS AND METHODS: Static vein culture allowing high tissue viability was performed as previously described. The ex vivo vein support system (EVVSS) consists of an incubator with a perfusion chamber and a pump, and allows vessel perfusion under continuous flow while maintaining controlled hemodynamic conditions. Each human saphenous vein was divided into two parts, one cultured in a Pyrex dish and the other perfused in the EVVSS for 14 days. Vasomotion, histomorphometry, and the expression of CD31, Factor VIII, MIB-1, alpha-actin, and PAI-1 were determined before and after 14 days under either experimental condition. RESULTS: Human venous segments cultured under traditional or perfused conditions exhibited similar IH after 14 days, as shown by histomorphometry. Smooth-muscle cells (SMCs) were preserved after chronic perfusion. Although the integrity of both endothelial and smooth-muscle cells appears to be maintained in both culture conditions, as confirmed by CD31, Factor VIII, and alpha-actin expression, a few smooth-muscle cells in the media stained positive for Factor VIII. The cell-proliferation marker MIB-1 was also detected in the two settings, and PAI-1 mRNA expression and activity increased significantly after 14 days of culture and perfusion. CONCLUSION: This study demonstrates the feasibility of chronically perfusing human vessels under sterile conditions with preservation of cellular integrity and vascular contractility. To gain insight into the mechanisms leading to IH, it will now be possible to study vascular remodeling not only under static conditions but also in a hemodynamic environment mimicking as closely as possible the flow conditions encountered in reconstructive vascular surgery.
Abstract:
We show that time-dependent couplings may lead to nontrivial scaling properties of the surface fluctuations in the asymptotic regime of nonequilibrium kinetic roughening models. Three typical situations are studied. In the case of a crossover between two different rough regimes, the time-dependent coupling may result in anomalous scaling for scales above the crossover length. In a different setting, for a crossover from a rough to either a flat or a damping regime, the time-dependent crossover length may conspire to produce a rough surface, even though the most relevant term tends to flatten the surface. In addition, our analysis sheds light on an existing debate in the problem of spontaneous imbibition, where time-dependent couplings arise naturally in theoretical models and experiments.
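For reference, the standard Family-Vicsek form against which anomalous scaling is defined (our notation, not the paper's): the interface width W on scale L obeys

$$ W(L,t) = L^{\alpha}\, f\!\left(t/L^{z}\right), \qquad f(u) \sim u^{\beta} \ \text{for } u \ll 1, \quad f(u) \to \text{const for } u \gg 1, \quad \beta = \alpha/z. $$

Anomalous scaling means this single-exponent collapse fails, with local and global roughness exponents differing; time-dependent couplings can push the asymptotic regime into that situation.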
Abstract:
The intensity correlation functions C(t) for the colored-gain-noise model of dye lasers are analyzed and compared with those for the loss-noise model. For correlation times τ larger than the deterministic relaxation time t_d, we show with the use of the adiabatic approximation that C(t) values coincide for both models. For small correlation times we use a method that provides explicit expressions for non-Markovian correlation functions, approximating short- and long-time behaviors simultaneously. Comparison with numerical simulations shows excellent agreement in both the short- and long-time regimes. It is found that, as the correlation time of the noise increases, the differences between the gain- and loss-noise models tend to disappear. The decay of C(t) for both models can be described by a time scale that approaches the deterministic relaxation time. However, in contrast with the loss-noise model, a secondary time scale remains at large times for the gain-noise model, which could allow one to distinguish between the two models.
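In such colored-noise laser models the gain (or loss) fluctuations are typically taken to be Ornstein-Uhlenbeck processes; a sketch of the standard correlator, in our notation:

$$ \langle \xi(t) \rangle = 0, \qquad \langle \xi(t)\,\xi(t') \rangle = \frac{D}{\tau}\, e^{-|t-t'|/\tau}, $$

where D is the noise intensity and τ the correlation time; the white-noise limit is recovered as τ → 0.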
Abstract:
Soil properties have an enormous impact on the economic and environmental aspects of agricultural production. Quantitative relationships between soil properties and the factors that influence their variability are the basis of digital soil mapping. The predictive models of soil properties evaluated in this work are statistical (multiple linear regression, MLR) and geostatistical (ordinary kriging and co-kriging). The study was conducted in the municipality of Bom Jardim, RJ, using a soil database with 208 sampling points. Predictive models were evaluated for the sand, silt, and clay fractions, pH in water, and organic carbon at six depths, according to the specifications of the global digital soil mapping consortium (GlobalSoilMap). Continuous covariates and categorical predictors were used, and their contributions to the models assessed. Only the environmental covariates elevation, aspect, stream power index (SPI), soil wetness index (SWI), normalized difference vegetation index (NDVI), and the b3/b2 band ratio were significantly correlated with soil properties. The predictive models had a mean coefficient of determination of 0.21. The best results were obtained with the geostatistical predictive models, where the highest coefficient of determination, 0.43, was obtained for the sand fraction at 60 to 100 cm depth. The use of a sparse data set of soil properties for digital mapping can explain only part of the spatial variation of these properties. The results may be related to the sampling density and to the quantity and quality of the environmental covariates and predictive models used.
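A minimal sketch of the MLR step with a cross-validated coefficient of determination, using synthetic stand-ins for the six significant covariates (elevation, aspect, SPI, SWI, NDVI, b3/b2); neither the data nor the coefficients below are the study's:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 208  # number of sampling points, as in the study

# Hypothetical covariate matrix: elevation, aspect, SPI, SWI, NDVI, b3/b2.
X = rng.normal(size=(n, 6))
# Hypothetical target, e.g. sand content at one depth interval.
y = X @ np.array([0.5, 0.1, 0.2, 0.3, 0.4, 0.1]) + rng.normal(scale=1.0, size=n)

mlr = LinearRegression()
r2 = cross_val_score(mlr, X, y, cv=5, scoring="r2")
print("mean cross-validated R^2:", r2.mean())
```

The geostatistical alternative (ordinary kriging or co-kriging, e.g. via a library such as pykrige) would instead model the spatial autocorrelation of the residuals through a fitted variogram.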
Abstract:
Estimating the time since discharge of a spent cartridge or a firearm can be useful in criminal situations involving firearms. The analysis of volatile gunshot residue remaining after shooting, using solid-phase microextraction (SPME) followed by gas chromatography (GC), has been proposed to meet this objective. However, current interpretative models suffer from several conceptual drawbacks which render them inadequate for assessing the evidential value of a given measurement. This paper aims to fill this gap by proposing a logical approach based on the assessment of likelihood ratios. A probabilistic model was developed and applied to a hypothetical scenario in which alternative hypotheses about the discharge time of a spent cartridge found at a crime scene were put forward. To estimate the parameters required to implement this solution, a non-linear regression model was proposed and applied to real published data. The proposed approach proved to be a valuable method for interpreting aging-related data.
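A toy illustration of the likelihood-ratio idea, assuming a hypothetical exponential-decay aging curve and Gaussian measurement error (the decay form, the parameters, and the measurement are all invented for illustration; the paper fits its own non-linear regression to published data):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical aging model: residue amount decays exponentially with time
# since discharge. Parameters are illustrative, not the paper's estimates.
def expected_amount(t_hours, a=10.0, k=0.05):
    return a * np.exp(-k * t_hours)

sigma = 0.8       # assumed measurement standard deviation
measured = 4.2    # hypothetical SPME-GC measurement

# Competing propositions about the discharge time of the cartridge.
t_h1, t_h2 = 12.0, 48.0   # "fired ~12 h ago" vs. "fired ~48 h ago"

lr = norm.pdf(measured, expected_amount(t_h1), sigma) / \
     norm.pdf(measured, expected_amount(t_h2), sigma)
print(f"likelihood ratio in favour of H1: {lr:.2f}")
```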
Abstract:
Traditionally, the common reserving methods used by non-life actuaries are based on the assumption that future claims will behave in the same way as they did in the past. There are two main sources of variability in the claims development process: the variability of the speed with which claims are settled, and the variability in claims severity between accident years. Large changes in these processes will generate distortions in the estimation of the claims reserves. The main objective of this thesis is to provide an indicator which, first, identifies and quantifies these two influences and, second, determines which model is adequate for a specific situation. Two stochastic models were analysed and the predictive distributions of the future claims were obtained. The main advantage of stochastic models is that they provide measures of the variability of the reserve estimates. The first model (PDM) combines a Dirichlet-Multinomial conjugate family with the Poisson distribution. The second model (NBDM) improves on the first by combining two conjugate families: Poisson-Gamma (for the distribution of the ultimate amounts) and Dirichlet-Multinomial (for the distribution of the incremental claims payments). It was found that the second model captures the variability in the speed of the reporting process and in the development of claims severity as a function of two of the above-mentioned distributions' parameters: the shape parameter of the Gamma distribution and the Dirichlet parameter. Depending on the relation between them, we can decide on the adequacy of the claims reserve estimation method. The parameters were estimated by the method of moments and by maximum likelihood. The results were tested first on simulated data and then on real data from three lines of business: Property/Casualty, General Liability, and Accident Insurance, which include different developments and specificities. The outcome of the thesis shows that when the Dirichlet parameter is greater than the shape parameter of the Gamma, the model exhibits positive correlation between past and future claims payments, which suggests the Chain-Ladder method as appropriate for claims reserve estimation. In terms of claims reserves, if the cumulated payments are high, the positive correlation implies high expectations for the future payments, resulting in high claims reserve estimates. Negative correlation appears when the Dirichlet parameter is lower than the shape parameter of the Gamma, meaning low expected future payments for the same high observed cumulated payments. This corresponds to the situation in which claims are reported rapidly and few claims remain expected subsequently. The extreme case arises when all claims are reported at the same time, leading to expected future payments of zero or equal to the aggregated amount of the ultimate paid claims. In this latter case, the Chain-Ladder method is not recommended.
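For context, a minimal sketch of the deterministic Chain-Ladder projection that the stochastic models are benchmarked against (the run-off triangle values are invented for illustration):

```python
import numpy as np

# Hypothetical cumulative claims run-off triangle (rows: accident years,
# columns: development years); NaN marks the unobserved future.
tri = np.array([
    [100.0, 180.0, 220.0, 240.0],
    [110.0, 200.0, 245.0, np.nan],
    [120.0, 215.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

n = tri.shape[1]
# Volume-weighted development factors f_j = sum C_{i,j+1} / sum C_{i,j}.
factors = []
for j in range(n - 1):
    mask = ~np.isnan(tri[:, j + 1])
    factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())

# Project each accident year to ultimate by applying the remaining factors.
for i in range(tri.shape[0]):
    for j in range(n - 1):
        if np.isnan(tri[i, j + 1]):
            tri[i, j + 1] = tri[i, j] * factors[j]

print("development factors:", np.round(factors, 3))
print("ultimate claims:", np.round(tri[:, -1], 1))
```

The reserve for each accident year is the projected ultimate minus the payments observed so far; the stochastic PDM/NBDM models add a predictive distribution around such point estimates.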
Abstract:
A general scheme for devising efficient cluster dynamics, proposed in a previous paper [Phys. Rev. Lett. 72, 1541 (1994)], is discussed extensively. In particular, the strong connection between the equilibrium properties of clusters and dynamic properties, such as the correlation time of the magnetization, is emphasized. The general scheme is applied to a number of frustrated spin models and the results are discussed.
Abstract:
The invaded cluster (IC) dynamics introduced by Machta et al. [Phys. Rev. Lett. 75, 2792 (1995)] is extended to the fully frustrated Ising model on a square lattice. The properties of the dynamics, which exhibits numerical evidence of self-organized criticality, are studied. The fluctuations in the IC dynamics are shown to be intrinsic to the algorithm, and the fluctuation-dissipation theorem no longer holds. The relaxation time is found to be very short and does not exhibit critical size dependence.
Abstract:
We present an exact solution for the order parameters that characterize the stationary behavior of a population of Kuramoto phase oscillators under random external fields [Y. Kuramoto, in International Symposium on Mathematical Problems in Theoretical Physics, Lecture Notes in Physics, Vol. 39 (Springer, Berlin, 1975), p. 420]. From these results it is possible to generate the phase diagram of models with an arbitrary distribution of random frequencies and random fields.
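For orientation, one common form of the mean-field Kuramoto model with random fields, in our notation (the paper's precise field coupling may differ):

$$ \dot{\theta}_i = \omega_i + h_i \sin(\xi_i - \theta_i) + \frac{K}{N}\sum_{j=1}^{N} \sin(\theta_j - \theta_i), \qquad r\,e^{i\psi} = \frac{1}{N}\sum_{j=1}^{N} e^{i\theta_j}, $$

where the natural frequencies ω_i and the field amplitudes and phases (h_i, ξ_i) are drawn at random, and (r, ψ) are the order parameters whose stationary values the exact solution characterizes.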
Abstract:
Preface: The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem comes from the variance process, which is non-observable. There are several estimation methodologies that deal with the estimation of latent variables. One appeared to be particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process: the Continuous Empirical Characteristic Function estimator (ECF), based on the unconditional characteristic function. However, the procedure had been derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is: which jump process should be used to model returns of the S&P500? The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential, and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on estimating the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter proves that our estimator indeed has the ability to do so. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward, owing to the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can in some cases be reduced without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of estimators with bi- and three-dimensional unconditional characteristic functions on simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, owing to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for estimating the parameters of stochastic volatility jump-diffusion models.
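The estimator discussed throughout has the standard continuous empirical characteristic function form; a sketch in our notation (the weight function and the stacking of observations are generic choices, not necessarily those of the thesis):

$$ \hat{\theta} = \arg\min_{\theta} \int_{\mathbb{R}^{d}} \big|\, \hat{\varphi}_n(u) - \varphi(u;\theta) \,\big|^{2}\, w(u)\, du, \qquad \hat{\varphi}_n(u) = \frac{1}{n}\sum_{t=1}^{n} \exp\!\big(i\, u^{\top} X_t\big), $$

where X_t stacks d consecutive observations (d = 2 gives the joint, bi-dimensional version), φ(u; θ) is the model's unconditional characteristic function, and w(u) is an integrable weight function.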