107 results for Stochastic neurodynamics
Abstract:
The objective of this paper is to correct and improve the results obtained by Van der Ploeg (1984a, 1984b) and used in the theoretical literature on feedback stochastic optimal control with constant exogenous risk aversion (see Jacobson, 1973; Karp, 1987; and Whittle, 1981, 1989, 1990, among others) or in the classic setting of risk-neutral decision-makers (see Chow, 1973, 1976a, 1976b, 1977, 1978, 1981, 1993). More realistic and attractive, this new approach is set in the context of a time-varying endogenous risk aversion that is under the control of the decision-maker. It has strong qualitative implications for the agent's optimal policy over the entire planning horizon.
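For orientation, the constant risk-aversion benchmark in this literature (Whittle's exponential-of-cost criterion) can be written as follows; this is a standard statement in our own notation, not necessarily the paper's:

$$\gamma_\theta(u) \;=\; -\frac{2}{\theta}\,\log \mathbb{E}\!\left[\exp\!\left(-\frac{\theta}{2}\,C\right)\right], \qquad C \;=\; \sum_{t=0}^{T} c(x_t, u_t),$$

where the sign of $\theta$ encodes the attitude toward risk (aversion or preference, depending on the sign convention adopted) and $\theta \to 0$ recovers the risk-neutral criterion $\mathbb{E}[C]$. The extension described in this abstract replaces the constant $\theta$ with a time-varying $\theta_t$ chosen endogenously by the decision-maker.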
Abstract:
This paper analyzes the persistence of shocks that affect the real exchange rates of a panel of seventeen developed OECD countries during the post-Bretton Woods era. The adoption of a panel data framework allows us to distinguish two different sources of shocks, i.e. idiosyncratic and common shocks, each of which may have different persistence patterns on the real exchange rates. We first investigate the stochastic properties of the panel data set using panel stationarity tests that simultaneously consider both the presence of cross-section dependence and multiple structural breaks, features that have not received much attention in previous persistence analyses. Empirical results indicate that real exchange rates are non-stationary when the analysis does not account for structural breaks, although this conclusion is reversed when they are modelled. Consequently, misspecification errors due to the non-consideration of structural breaks lead to upward-biased measures of shock persistence. The persistence measures for the idiosyncratic and common shocks estimated in this paper always turn out to be less than one year.
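As a reference point for the persistence measures mentioned above, a common scalar summary is the half-life of a shock implied by an estimated (sum of) autoregressive coefficient(s) $\hat\rho$; this is a standard formula, not necessarily the exact measure computed in the paper:

$$\text{half-life} \;=\; \frac{\ln(1/2)}{\ln(\hat\rho)},$$

so that, with quarterly data, $\hat\rho \approx 0.84$ implies a half-life of about four quarters ($\ln 0.5 / \ln 0.84 \approx 4$), consistent with the finding of persistence below one year once breaks are modelled.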
Abstract:
Report for the scientific sojourn at Simon Fraser University, Canada, from July to September 2007. General context: landscape change in recent years is having significant impacts on biodiversity in many Mediterranean areas. Land abandonment, urbanisation and especially fire are profoundly transforming large areas in the Western Mediterranean basin, and we know little about how these changes influence species distribution, and in particular how species will respond to further change in a context of global change, including climate. General objectives: integrate landscape and population dynamics models in a platform that captures species distribution responses to landscape changes and assesses the impact on species distribution of different scenarios of further change. Specific objective 1: develop a landscape dynamic model capturing fire and forest succession dynamics in Catalonia, linked to a stochastic landscape occupancy model (SLOM) (or spatially explicit population model, SEPM) for the Ortolan bunting, a species strongly linked to fire-related habitat in the region. Predictions from the Ortolan bunting occupancy or spatially explicit population model (SEPM) should be evaluated using data from the DINDIS database, which tracks bird colonisation of recently burnt large areas (>50 ha). Through a number of different SEPM scenarios with different values for a number of parameters, we should be able to assess different hypotheses about the factors driving bird colonisation of newly burnt patches. These factors are mainly landscape context (i.e. difficulty of reaching the patch and potential presence of coloniser sources), dispersal constraints, type of regenerating vegetation after fire, and species characteristics (niche breadth, etc.).
Abstract:
We analyse the Heston stochastic volatility model under an inversion of the spot. The result is that, under the appropriate measure changes, the resulting process is again a Heston-type process whose parameters can be explicitly determined from those of the original process. This behaviour can be interpreted as a measure of sanity of the Heston model, but it does not seem to be a general feature of stochastic volatility processes.
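For reference, the Heston dynamics under consideration are of the standard form (our notation):

$$dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)}, \qquad dv_t = \kappa(\theta - v_t)\,dt + \xi\sqrt{v_t}\,dW_t^{(2)}, \qquad d\langle W^{(1)}, W^{(2)}\rangle_t = \rho\,dt.$$

Under the measure change associated with taking the spot as numéraire, the inverted process $X_t = 1/S_t$ is again of Heston type; one standard statement of the resulting parameter map (given here as an illustration under the usual FX-symmetry conventions, not necessarily the paper's notation) is $\tilde\kappa = \kappa - \rho\xi$, $\tilde\theta = \kappa\theta/(\kappa - \rho\xi)$, $\tilde\xi = \xi$, $\tilde\rho = -\rho$, with the variance process unchanged.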
Abstract:
This paper uses sequential stochastic dominance procedures to compare the joint distribution of health and income across space and time. It is, to our knowledge, the first application of methods for comparing multidimensional distributions of income and health using procedures that are robust to aggregation techniques. The paper's approach is more general than comparisons of health gradients and does not require the estimation of health-equivalent incomes. We illustrate the approach by contrasting Canada and the US using comparable data. Canada dominates the US over the lower bidimensional welfare distribution of health and income, though not generally in terms of the uni-dimensional distribution of health or income. The paper also finds that welfare for both Canadians and Americans has not unambiguously improved during the last decade over the joint distribution of income and health, in spite of the fact that the uni-dimensional distributions of income have clearly improved during that period.
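A minimal sketch of the kind of bidimensional comparison involved, using first-order dominance of joint empirical CDFs on a grid; the samples, grid and variable names are hypothetical, and the paper's sequential dominance procedures are considerably more refined:

```python
import numpy as np

def joint_ecdf(health, income, h_grid, y_grid):
    """Joint empirical CDF F(h, y) = P(H <= h, I <= y) on a grid."""
    H = health[:, None, None] <= h_grid[None, :, None]
    Y = income[:, None, None] <= y_grid[None, None, :]
    return (H & Y).mean(axis=0)

rng = np.random.default_rng(0)
# Hypothetical samples standing in for the Canadian and US microdata.
h_ca, y_ca = rng.normal(0.2, 1, 5000), rng.lognormal(0.1, 0.5, 5000)
h_us, y_us = rng.normal(0.0, 1, 5000), rng.lognormal(0.0, 0.6, 5000)

h_grid = np.linspace(-2, 2, 25)
y_grid = np.quantile(np.concatenate([y_ca, y_us]), np.linspace(0.05, 0.95, 25))

F_ca = joint_ecdf(h_ca, y_ca, h_grid, y_grid)
F_us = joint_ecdf(h_us, y_us, h_grid, y_grid)

# First-order dominance of CA over US requires F_ca <= F_us everywhere:
# less mass in the deprived lower-left region at every threshold pair.
print("CA dominates US at", (F_ca <= F_us).mean() * 100, "% of grid points")
```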
Abstract:
This paper develops a methodology to estimate entire population distributions from bin-aggregated sample data. We do this through the estimation of the parameters of mixtures of distributions that allow for maximal parametric flexibility. The statistical approach we develop enables comparisons of the full distributions of height data from potential army conscripts across France's 88 departments for most of the nineteenth century. These comparisons are made by testing for differences of means and for stochastic dominance. Corrections for possible measurement errors are also devised by taking advantage of the richness of the data sets. Our methodology is of interest to researchers working on historical as well as contemporary bin-aggregated or histogram-type data, with which much work is still done since much of the publicly available information comes in that form, often owing to political sensitivity and/or confidentiality concerns.
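A minimal sketch of estimating a parametric mixture from bin-aggregated counts by maximising the multinomial likelihood implied by the bin probabilities; the bins, counts and the two-component normal mixture are illustrative assumptions, not the paper's specification:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical height bins (cm) and counts standing in for one department.
edges = np.array([150, 155, 160, 165, 170, 175, 180, 190], dtype=float)
counts = np.array([120, 480, 910, 1050, 620, 210, 40], dtype=float)

def bin_probs(params):
    """Bin probabilities under a two-component normal mixture."""
    w, m1, s1, m2, s2 = params
    cdf = w * norm.cdf(edges, m1, s1) + (1 - w) * norm.cdf(edges, m2, s2)
    p = np.diff(cdf)
    return np.clip(p / p.sum(), 1e-12, None)  # renormalise to the observed range

def neg_loglik(params):
    return -np.sum(counts * np.log(bin_probs(params)))

res = minimize(neg_loglik, x0=[0.5, 160, 5, 170, 5],
               bounds=[(0.01, 0.99), (140, 200), (1, 20), (140, 200), (1, 20)])
print(res.x)  # mixture weight, means and s.d.'s recovered from binned data
```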
Abstract:
Traffic forecasts provide essential input for the appraisal of transport investment projects. However, according to recent empirical evidence, long-term predictions are subject to high levels of uncertainty. This paper quantifies uncertainty in traffic forecasts for the tolled motorway network in Spain. Uncertainty is quantified in the form of a confidence interval for the traffic forecast that includes both model uncertainty and input uncertainty. We apply a stochastic simulation process based on bootstrapping techniques. Furthermore, the paper proposes a new methodology to account for capacity constraints in long-term traffic forecasts. Specifically, we suggest a dynamic model in which the speed of adjustment is related to the ratio between the actual traffic flow and the maximum capacity of the motorway. This methodology is applied to a specific public policy: the removal of the toll on a motorway section before the concession expires.
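A minimal sketch combining the two ingredients described above: a capacity-damped growth model and a simulation-based forecast interval. The functional form, parameter values and the simple parametric resampling stand in for the paper's bootstrap of model and input uncertainty:

```python
import numpy as np

def forecast_traffic(T0, growth, capacity, years, rng, sigma):
    """Traffic path in which the speed of adjustment falls as flow nears capacity."""
    path = [T0]
    for _ in range(years):
        g = growth * (1 - path[-1] / capacity)   # adjustment slows near capacity
        path.append(path[-1] * (1 + g + rng.normal(0, sigma)))
    return np.array(path)

# Simulate many shocked paths to get a forecast interval (hypothetical inputs).
rng = np.random.default_rng(1)
sims = np.array([forecast_traffic(30_000, 0.04, 80_000, 20, rng, 0.02)
                 for _ in range(2_000)])
lo, hi = np.percentile(sims[:, -1], [2.5, 97.5])
print(f"year-20 traffic 95% interval: [{lo:,.0f}, {hi:,.0f}] vehicles/day")
```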
Abstract:
Minimal models for the explanation of decision-making in computational neuroscience are based on the analysis of the evolution of the average firing rates of two interacting neuron populations. While these models typically lead to multi-stable scenarios for the underlying dynamical systems, noise is an important feature of the model, accounting for finite-size effects and the robustness of decisions. These stochastic dynamical systems can be analyzed by carefully studying their associated Fokker-Planck partial differential equation. In particular, we discuss existence, positivity and uniqueness of the solution of the stationary equation, as well as of the time-evolving problem. Moreover, we prove convergence of the solution to the stationary state, which represents the probability distribution of finding the neuron families in each of the decision states characterized by their average firing rates. Finally, we propose a numerical scheme for simulations of the Fokker-Planck equation which are in agreement with those obtained recently by a moment method applied to the stochastic differential system. Our approach leads to a more detailed analytical and numerical study of this decision-making model in computational neuroscience.
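For orientation, the equations studied are of the following generic form (our notation, assuming a Wilson-Cowan type drift with sigmoid response $\phi$; the paper's exact coefficients may differ): the probability density $p(\nu,t)$ of the two populations' firing rates $\nu = (\nu_1,\nu_2)$ solves

$$\partial_t p \;+\; \nabla\cdot\big(F(\nu)\,p\big) \;-\; \frac{\beta^2}{2}\,\Delta p \;=\; 0, \qquad F_i(\nu) \;=\; -\nu_i + \phi\Big(\lambda_i + \sum_j w_{ij}\,\nu_j\Big),$$

posed on a bounded domain of admissible firing rates with no-flux boundary conditions, so that total probability is conserved. The stationary solution concentrates its mass near the stable equilibria of the noiseless system, i.e. the decision states.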
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent, and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
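A minimal sketch of the estimator's mechanics on a toy latent-variable model: conditional moments computed by Nadaraya-Watson smoothing over one long simulation, plugged into a method-of-moments criterion. All model and tuning choices here (AR(1) latent state, Gaussian kernel, bandwidth, grid search) are illustrative assumptions:

```python
import numpy as np

def simulate_model(theta, n, rng):
    """Toy latent-variable model: AR(1) latent state, observed with noise."""
    rho, sigma = theta
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t-1] + sigma * rng.normal()
    return x + 0.5 * rng.normal(size=n)          # only y = x + noise is observed

def kernel_conditional_mean(y_next, y_lag, at_points, h=0.3):
    """Nadaraya-Watson estimate of E[y_t | y_{t-1} = c], evaluated at at_points."""
    u = (y_lag[None, :] - at_points[:, None]) / h
    w = np.exp(-0.5 * u**2)
    return (w * y_next[None, :]).sum(axis=1) / w.sum(axis=1)

def smm_criterion(theta, y_obs, seed=3):
    y_sim = simulate_model(theta, 10_000, np.random.default_rng(seed))
    m_hat = kernel_conditional_mean(y_sim[1:], y_sim[:-1], y_obs[:-1])
    g = y_obs[1:] - m_hat                         # conditional moment residuals
    return g.mean()**2                            # identity-weighted criterion

rng = np.random.default_rng(2)
y_obs = simulate_model((0.8, 1.0), 300, rng)
grid = np.linspace(0.5, 0.95, 10)
best = min(grid, key=lambda r: smm_criterion((r, 1.0), y_obs))
print("grid-search estimate of rho:", round(best, 2))
```

Reusing the same simulation seed for every trial parameter keeps the criterion smooth in the parameter (common random numbers), which matters when a proper optimiser replaces the grid search.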
Abstract:
In this study we analyze multinationality (domestic-based firms vs. multinationals) and foreignness (foreign vs. domestic firms) effects in the returns of R&D to productivity. We follow a two-step strategy. In the first step, we consistently estimate firms' productivity by GMM and numerically compute the sample distribution of the R&D returns. In the second step, we use stochastic dominance techniques to make inferences on the multinationality and foreignness effects. Results for a panel of UK manufacturing firms suggest that multinationality and foreignness effects operate in opposite ways: whilst the multinationality effect enhances R&D returns, the foreignness effect diminishes them.
Abstract:
This paper studies repeated games where the timing of the repetitions of the stage game is neither known nor controlled by the players. We call this feature random monitoring. Kawamori (2004) shows that perfect random monitoring is always better than the canonical case. Surprisingly, when the monitoring is public, the result is less clear-cut and does not generalize in a straightforward way, unless the public signals are sufficiently informative about players' actions and/or players are patient enough. In addition to a discount effect, which tends consistently to favor the provision of incentives, we find an information effect associated with the time uncertainty in the distribution of public signals. Whether payoff improvements are possible or not depends crucially on the direction and strength of these effects. JEL: C73, D82, D86. KEYWORDS: Repeated Games, Frequent Monitoring, Random Public Monitoring, Moral Hazard, Stochastic Processes.
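The discount effect mentioned above can be made precise with a standard observation (our notation): if the random time $\tau$ between repetitions has a given mean, continuation payoffs are discounted by the effective factor

$$\hat\delta \;=\; \mathbb{E}\big[\delta^{\tau}\big] \;\ge\; \delta^{\mathbb{E}[\tau]},$$

by Jensen's inequality, since $\delta^x$ is convex in $x$. Time uncertainty therefore raises the effective discount factor relative to deterministic repetition at the same mean frequency, which is why the discount effect tends to favor the provision of incentives; the information effect, by contrast, works through what the time uncertainty does to the distribution of the public signals.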
Abstract:
In this paper, we present a stochastic model for disability insurance contracts. The model is based on a discrete-time non-homogeneous semi-Markov process (DTNHSMP) to which the backward recurrence time process is added. This permits a more exhaustive study of disability evolution and a more efficient approach to the duration problem. The use of semi-Markov reward processes makes it possible to derive equations for the prospective and retrospective mathematical reserves. The model is applied to a sample of contracts drawn at random from a mutual insurance company.
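For orientation, the evolution equation of a DTNHSMP is standardly written as follows (Janssen-Manca style notation, stated here for reference; the paper's backward-recurrence-time extension adds the time already spent in the current state as a further index):

$$\phi_{ij}(s,t) \;=\; \delta_{ij}\big(1 - H_i(s,t)\big) \;+\; \sum_{k}\,\sum_{\vartheta=s+1}^{t} b_{ik}(s,\vartheta)\,\phi_{kj}(\vartheta,t),$$

where $b_{ik}(s,\vartheta)$ is the probability of entering state $i$ at time $s$ and making the next jump to state $k$ at time $\vartheta$, and $H_i(s,t) = \sum_k \sum_{\vartheta \le t} b_{ik}(s,\vartheta)$ is the probability of having left state $i$ by time $t$. Attaching rewards (premiums, benefits) to states and transitions then yields the prospective and retrospective reserve equations.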
Abstract:
There are two principal chemical concepts that are important for studying the natural environment. The first one is thermodynamics, which describes whether a system is at equilibrium or can spontaneously change by chemical reactions. The second main concept is how fast chemical reactions (kinetics, or the rate of chemical change) take place whenever they start. In this work we examine a natural system in which both thermodynamic and kinetic factors are important in determining the abundance of NH4+, NO2- and NO3- in superficial waters. Samples were collected in the Arno Basin (Tuscany, Italy), a system in which natural and anthropic effects both contribute to strongly modifying the chemical composition of the water. Thermodynamic modelling based on the reduction-oxidation reactions involving the sequence NH4+ → NO2- → NO3- under equilibrium conditions has made it possible to determine the Eh redox potential values able to characterise the state of each sample and, consequently, of the fluid environment from which it was drawn. Just as pH expresses the concentration of H+ in solution, the redox potential is used to express the tendency of an environment to receive or supply electrons. In this context, oxic environments, such as those of river systems, are said to have a high redox potential because O2 is available as an electron acceptor.

Principles of thermodynamics and chemical kinetics yield a model that often does not completely describe the reality of natural systems. Chemical reactions may indeed fail to achieve equilibrium because the products escape from the site of the reaction, or because the reactions involved in the transformation are very slow, so that non-equilibrium conditions persist for long periods. Moreover, reaction rates can be sensitive to poorly understood catalytic effects or to surface effects, while variables such as concentration (a large number of chemical species can coexist and interact concurrently), temperature and pressure can have large gradients in natural systems. Taking this into account, data from 91 water samples have been modelled using statistical methodologies for compositional data. The application of log-contrast analysis has made it possible to obtain statistical parameters to be correlated with the calculated Eh values. In this way, natural conditions in which chemical equilibrium is hypothesised, as well as underlying fast reactions, are compared with those described by a stochastic approach.
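For reference, the thermodynamic link between the measured activities and the redox potential is the Nernst equation (standard chemistry, written here in our notation for a generic reduction couple $\text{ox} + n e^- \rightarrow \text{red}$):

$$Eh \;=\; E^{0} \;+\; \frac{RT}{nF}\,\ln\frac{\{\text{ox}\}}{\{\text{red}\}},$$

where $E^0$ is the standard potential of the couple, $n$ the number of electrons transferred, $F$ the Faraday constant, and braces denote activities. Applying it to the NH4+/NO2- and NO2-/NO3- couples under the equilibrium hypothesis yields the Eh value characterising each sample.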
Abstract:
In networks with small buffers, such as networks based on optical packet switching (OPS), the convolution approach (CA) is regarded as one of the most accurate methods for connection admission control. Admission control and resource management have been addressed in other works oriented to bursty traffic and ATM. This paper focuses on heterogeneous traffic in OPS-based networks. For heterogeneous traffic and bufferless networks, the enhanced convolution approach (ECA) is a good solution. However, both methods (CA and ECA) present a high computational cost for a large number of connections. Two new mechanisms (UMCA and ISCA) based on the Monte Carlo method are proposed to overcome this drawback. Simulation results show that our proposals achieve a lower computational cost than the enhanced convolution approach, with a small stochastic error in the probability estimation.
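A minimal sketch of the Monte Carlo idea underlying such estimators, for a bufferless link carrying a heterogeneous mix of on-off connections; the traffic mix and capacity are hypothetical, and the paper's UMCA and ISCA mechanisms are more elaborate:

```python
import numpy as np

# Hypothetical heterogeneous connection mix: (peak rate in Mb/s, activity prob.)
connections = [(10, 0.3)] * 40 + [(2.5, 0.6)] * 120
capacity = 400  # link capacity, Mb/s (no buffer: excess traffic is lost)

def mc_overflow_probability(connections, capacity, n_samples=200_000, seed=0):
    """Monte Carlo estimate of P(aggregate instantaneous rate > capacity)."""
    rng = np.random.default_rng(seed)
    rates = np.array([r for r, _ in connections])
    probs = np.array([p for _, p in connections])
    active = rng.random((n_samples, len(connections))) < probs
    load = active.astype(float) @ rates
    return (load > capacity).mean()

p_hat = mc_overflow_probability(connections, capacity)
print(f"estimated overflow probability: {p_hat:.2e}")
# The convolution approach computes this distribution exactly by convolving the
# per-connection rate distributions, at a cost that grows with the number of
# connections; sampling trades that cost for a small stochastic error.
```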
Abstract:
Low concentrations of elements in geochemical analyses have the peculiarity of being compositional data and, for a given level of significance, are likely to be beyond the capabilities of laboratories to distinguish between minute concentrations and complete absence, thus preventing laboratories from reporting extremely low concentrations of the analyte. Instead, what is reported is the detection limit, which is the minimum concentration that conclusively differentiates between presence and absence of the element. A spatially distributed exhaustive sample is employed in this study to generate unbiased sub-samples, which are further censored to observe the effect that different detection limits and sample sizes have on the inference of population distributions starting from geochemical analyses having specimens below the detection limit (nondetects). The isometric logratio transformation is used to convert the compositional data in the simplex to samples in real space, thus allowing the practitioner to properly borrow from the large body of statistical techniques valid only in real space. The bootstrap method is used to numerically investigate the reliability of inferring several distributional parameters employing different forms of imputation for the censored data. The case study illustrates that, in general, best results are obtained when imputations are made using the distribution best fitting the readings above the detection limit, and exposes the problems of other more widely used practices. When the sample is spatially correlated, it is necessary to combine the bootstrap with stochastic simulation.
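A minimal sketch of the workflow described: impute nondetects from the distribution fitted to the readings above the detection limit, move to real space with an isometric logratio (ilr) transformation, and bootstrap a distributional parameter. The lognormal choice, the two-part composition and all values are illustrative assumptions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Hypothetical concentrations (ppm) of one analyte; values under the detection
# limit are censored (nondetects), encoded here as NaN.
true_vals = rng.lognormal(mean=1.0, sigma=0.8, size=300)
dl = 2.0
observed = np.where(true_vals >= dl, true_vals, np.nan)

def impute_nondetects(x, dl, rng):
    """Impute censored values from the lognormal fitted to readings above DL."""
    above = x[~np.isnan(x)]
    shape, loc, scale = stats.lognorm.fit(above, floc=0)
    # Draw from the fitted distribution truncated to (0, dl).
    q_dl = stats.lognorm.cdf(dl, shape, loc, scale)
    u = rng.uniform(0, q_dl, size=np.isnan(x).sum())
    out = x.copy()
    out[np.isnan(x)] = stats.lognorm.ppf(u, shape, loc, scale)
    return out

def ilr_two_part(ppm, total=1e6):
    """ilr coordinate of the 2-part composition (analyte, remainder)."""
    return np.log(ppm / (total - ppm)) / np.sqrt(2)

boot = []
for _ in range(500):
    sample = rng.choice(observed, size=observed.size, replace=True)
    z = ilr_two_part(impute_nondetects(sample, dl, rng))
    boot.append(z.mean())
print("95% bootstrap CI for the ilr mean:", np.percentile(boot, [2.5, 97.5]))
```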