22 results for integro-difference model
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
We introduce a set of sequential integro-difference equations to analyze the dynamics of two interacting species. Firstly, we derive the speed of the fronts when a species invades a space previously occupied by a second species, and check its validity by means of numerical random-walk simulations. As an example, we consider the Neolithic transition: the predictions of the model are consistent with the archaeological data for the front speed, provided that the interaction parameter is low enough. Secondly, an equation for the coexistence time between the invasive and the invaded populations is obtained for the first time. It agrees well with the simulations, is consistent with observations of the Neolithic transition, and makes it possible to estimate the value of the interaction parameter between the incoming and the indigenous populations.
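For intuition, here is a minimal numerical sketch of the front-speed calculation for a single-species integro-difference equation (the paper couples two such equations for the interacting populations). The Gaussian dispersal kernel, Beverton-Holt growth, and all parameter values are illustrative assumptions, not the paper's; the measured speed is compared against the standard linear-spreading prediction σ√(2 ln R).

```python
import numpy as np

# Single-species integro-difference iteration n_{t+1} = K * f(n_t):
# convolution with a Gaussian kernel, Beverton-Holt growth.
L, N = 400.0, 8192                    # half-width of domain, grid points
x = np.linspace(-L, L, N, endpoint=False)
dx = x[1] - x[0]
R, sigma = 2.0, 1.0                   # net reproductive rate, dispersal std dev

kernel = np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
k_hat = np.fft.fft(np.fft.ifftshift(kernel)) * dx   # kernel transform for fast convolution

def growth(n):                        # Beverton-Holt map, carrying capacity 1
    return R * n / (1 + (R - 1) * n)

n = np.where(np.abs(x) < 5, 1.0, 0.0) # initially occupied patch
front = []
for t in range(60):
    n = np.clip(np.real(np.fft.ifft(k_hat * np.fft.fft(growth(n)))), 0, None)
    front.append(x[n > 0.5].max())    # rightmost point above half capacity

speed = np.polyfit(np.arange(20, 60), front[20:], 1)[0]   # fit after transients
print(f"measured speed {speed:.3f}, linear prediction {sigma * np.sqrt(2 * np.log(R)):.3f}")
```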
Abstract:
We construct estimates of educational attainment for a sample of OECD countries using previously unexploited sources. We follow a heuristic approach to obtain plausible time profiles for attainment levels by removing sharp breaks in the data that seem to reflect changes in classification criteria. We then construct indicators of the information content of our series and a number of previously available data sets and examine their performance in several growth specifications. We find a clear positive correlation between data quality and the size and significance of human capital coefficients in growth regressions. Using an extension of the classical errors-in-variables model, we construct a set of meta-estimates of the coefficient of years of schooling in an aggregate Cobb-Douglas production function. Our results suggest that, after correcting for measurement error bias, the value of this parameter is well above 0.50.
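As background for the measurement-error correction, a toy sketch of classical errors-in-variables attenuation (not the paper's meta-estimation procedure): when schooling is observed with noise, OLS shrinks the coefficient toward zero by the reliability ratio, and dividing by that ratio undoes the bias. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 100_000, 0.6                          # true schooling coefficient
s_true = rng.normal(10, 2, n)                   # true years of schooling
s_obs = s_true + rng.normal(0, 1, n)            # schooling measured with error
y = beta * s_true + rng.normal(0, 1, n)         # outcome (log output, say)

b_ols = np.polyfit(s_obs, y, 1)[0]              # attenuated: ~beta * reliability
reliability = s_true.var() / s_obs.var()        # signal share of measured variance
print(f"OLS {b_ols:.3f}, corrected {b_ols / reliability:.3f} (true {beta})")
# In practice the reliability ratio must itself be estimated, e.g. from
# repeated measurements or multiple data sets, which is where indicators of
# data quality come in.
```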
Abstract:
We provide robust examples of symmetric two-player coordination games in normal form that reveal that equilibrium selection by the evolutionary model of Young (1993) is essentially different from equilibrium selection by the evolutionary model of Kandori, Mailath and Rob (1993).
Abstract:
Departures from pure self-interest in economic experiments have recently inspired models of "social preferences". We conduct experiments on simple two-person and three-person games with binary choices that test these theories more directly than the array of games conventionally considered. Our experiments show strong support for the prevalence of "quasi-maximin" preferences: People sacrifice to increase the payoffs for all recipients, but especially for the lowest-payoff recipients. People are also motivated by reciprocity: While people are reluctant to sacrifice to reciprocate good or bad behavior beyond what they would sacrifice for neutral parties, they withdraw willingness to sacrifice to achieve a fair outcome when others are themselves unwilling to sacrifice. Some participants are averse to getting different payoffs than others, but based on our experiments and reinterpretation of previous experiments we argue that behavior that has been presented as "difference aversion" in recent papers is actually a combination of reciprocal and quasi-maximin motivations. We formulate a model in which each player is willing to sacrifice to implement the quasi-maximin allocation, but only toward those players also believed to be pursuing it, and may sacrifice to punish unfair players.
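A sketch of one common way to write down quasi-maximin preferences (the functional form and the weights gamma and delta below are illustrative assumptions, not the paper's estimates): a player's utility mixes her own payoff with a social criterion that weights the minimum payoff against the sum of payoffs.

```python
# Utility of player i over a payoff vector, mixing own payoff with a
# quasi-maximin social criterion.
def quasi_maximin_utility(payoffs, i, gamma=0.3, delta=0.5):
    social = delta * min(payoffs) + (1 - delta) * sum(payoffs)
    return (1 - gamma) * payoffs[i] + gamma * social

# Player 0 compares keeping more for herself with an equal split:
for alloc in [(800, 200), (500, 500)]:
    print(alloc, quasi_maximin_utility(alloc, 0))
# With gamma = 0.3 the selfish split still wins; for these numbers a
# gamma above 2/3 makes the equal split preferred.
```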
Abstract:
This paper presents several applications to interest rate risk management based on a two-factor continuous-time model of the term structure of interest rates previously presented in Moreno (1996). This model assumes that default-free discount bond prices are determined by the time to maturity and two factors, the long-term interest rate and the spread (the difference between the long-term rate and the short-term (instantaneous) riskless rate). Several new measures of "generalized duration" are presented and applied in different situations in order to manage market risk and yield curve risk. By means of these measures, we are able to compute the hedging ratios that allow us to immunize a bond portfolio by means of options on bonds. Focusing on the hedging problem, it is shown that these new measures allow us to immunize a bond portfolio against changes (parallel and/or in the slope) in the yield curve. Finally, a proposal for overcoming the limitations of conventional duration by means of these new measures is presented and illustrated numerically.
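For reference, the textbook one-factor version of the kind of hedging computation the paper generalizes: choose the position in a hedging instrument so that duration-dollars net to zero. The paper's two-factor "generalized durations" would replace the single duration numbers; the figures below are made up.

```python
# Number of units of a hedging instrument that zeroes the portfolio's
# duration-dollars (first-order immunization against parallel shifts).
def hedge_ratio(port_value, port_duration, hedge_price, hedge_duration):
    return -(port_duration * port_value) / (hedge_duration * hedge_price)

n = hedge_ratio(port_value=1_000_000, port_duration=6.2,
                hedge_price=98.5, hedge_duration=8.0)
print(f"position: {n:,.0f} units (negative = short)")
```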
Abstract:
A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
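The resulting computation is simple enough to state in a few lines. Here is a sketch of the scaled difference statistic assembled from quantities standard SEM software reports (unscaled statistics, their SB-scaled versions, and degrees of freedom); the numerical inputs in the example are invented.

```python
# Scaled difference chi-square from per-model outputs: unscaled statistics
# T0, T1, scaled statistics Tbar0, Tbar1, and degrees of freedom d0 > d1.
def scaled_diff_chi2(T0, Tbar0, d0, T1, Tbar1, d1):
    c0, c1 = T0 / Tbar0, T1 / Tbar1           # per-model scaling corrections
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)      # scaling correction for the difference
    return (T0 - T1) / cd                     # refer to chi-square with d0 - d1 df

# Invented inputs, for illustration only:
print(scaled_diff_chi2(T0=120.5, Tbar0=95.2, d0=40, T1=98.7, Tbar1=80.1, d1=36))
```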
Abstract:
This paper presents a two-factor model of the term structure of interest rates. We assume that default-free discount bond prices are determined by the time to maturity and two factors, the long-term interest rate and the spread (the difference between the long-term rate and the short-term (instantaneous) riskless rate). Assuming that both factors follow a joint Ornstein-Uhlenbeck process, a general bond pricing equation is derived. We obtain a closed-form expression for bond prices and examine its implications for the term structure of interest rates. We also derive a closed-form solution for interest rate derivative prices. This expression is applied to price European options on discount bonds and more complex types of options. Finally, empirical evidence of the model's performance is presented.
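A quick simulation sketch of the assumed state dynamics, a joint Ornstein-Uhlenbeck pair for the long rate and the spread, using a simple Euler scheme; the mean-reversion speeds, long-run means, volatilities, and correlation below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1 / 252, 5.0
n = int(T / dt)
kL, mL, sL = 0.3, 0.06, 0.01      # speed, mean, vol of the long rate
ks, ms, ss = 1.2, 0.01, 0.008     # speed, mean, vol of the spread
rho = -0.4                        # correlation between the two shocks

L = np.empty(n); s = np.empty(n)
L[0], s[0] = 0.05, 0.005
for t in range(n - 1):
    z1, z2 = rng.standard_normal(2)
    zs = rho * z1 + np.sqrt(1 - rho**2) * z2          # correlated shock
    L[t + 1] = L[t] + kL * (mL - L[t]) * dt + sL * np.sqrt(dt) * z1
    s[t + 1] = s[t] + ks * (ms - s[t]) * dt + ss * np.sqrt(dt) * zs

r = L - s                          # short rate = long rate minus spread
print(f"final long rate {L[-1]:.4f}, spread {s[-1]:.4f}, short rate {r[-1]:.4f}")
```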
Abstract:
This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity, in terms of the parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application quantifies the error made when the costs of inflation are computed with deterministic models. It turns out that this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.
Abstract:
We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
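A sketch of the label-flipping computation of the (one-sided) maximal discrepancy, here with empirical risk minimization over 1-D decision stumps on synthetic data; the stump class and all parameters are assumptions for illustration. Flipping the labels on half the sample turns the discrepancy maximization into an ordinary ERM problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
x = rng.normal(size=n)
y = (x + 0.5 * rng.normal(size=n) > 0).astype(int)   # noisy threshold labels

y_flip = y.copy()
y_flip[n // 2:] ^= 1                                  # flip labels on second half

def stump_erm(x, y):
    """Minimum error rate over 1-D threshold stumps (both orientations)."""
    best = 1.0
    for thr in x:
        for sign in (0, 1):
            pred = (x > thr).astype(int) ^ sign
            best = min(best, float(np.mean(pred != y)))
    return best

# If e* is the minimized error on the half-flipped sample (equal halves),
# then max over stumps of [err_half2 - err_half1] equals 1 - 2 e*.
print(f"maximal discrepancy estimate: {1 - 2 * stump_erm(x, y_flip):.3f}")
```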
Abstract:
A polarizable quantum mechanics and molecular mechanics model has been extended to account for the difference between the macroscopic electric field and the actual electric field felt by the solute molecule. This enables the calculation of effective microscopic properties which can be related to macroscopic susceptibilities directly comparable with experimental results. By separating the discrete local field into two distinct contributions, we define two different microscopic properties, the so-called solute and effective properties. The solute properties account for the pure solvent effects, i.e., effects present even when the macroscopic electric field is zero, and the effective properties account for both the pure solvent effects and the effect from the induced dipoles in the solvent due to the macroscopic electric field. We present results for the linear and nonlinear polarizabilities of water and acetonitrile both in the gas phase and in the liquid phase. For all the properties we find that the pure solvent effect increases the properties, whereas the induced electric field decreases the properties. Furthermore, we present results for the refractive index, third-harmonic generation (THG), and electric field induced second-harmonic generation (EFISH) for liquid water and acetonitrile. We find in general good agreement between the calculated and experimental results for the refractive index and the THG susceptibility. For the EFISH susceptibility, however, the difference between experiment and theory is larger, since the orientational effect arising from the static electric field is not accurately described.
Abstract:
We investigate the hypothesis that the atmosphere is constrained to maximize its entropy production by using a one-dimensional (1-D) vertical model. We prescribe the lapse rate in the convective layer as that of the standard troposphere. The assumption that convection sustains a critical lapse rate was absent in previous studies, which focused on the vertical distribution of climatic variables, since such a convective adjustment reduces the degrees of freedom of the system and may prevent the application of the maximum entropy production (MEP) principle. This is not the case in the radiative–convective model (RCM) developed here, since we accept a discontinuity of temperatures at the surface similar to that adopted in many RCMs. For current conditions, the MEP state gives a difference between the ground temperature and the air temperature at the surface of ≈10 K. In comparison, conventional RCMs obtain a discontinuity of only ≈2 K. However, the surface boundary layer velocity in the MEP state appears reasonable (≈3 m s⁻¹). Moreover, although the convective flux at the surface in MEP states is almost uniform in optically thick atmospheres, it reaches a maximum value for an optical thickness similar to current conditions. This additional result may support the maximum convection hypothesis suggested by Paltridge (1978).
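To illustrate the selection principle (a one-layer caricature, not the paper's full radiative–convective model): given a fixed absorbed solar flux and near-surface air temperature, the surface energy balance determines the ground temperature for each candidate convective flux, and MEP picks the flux that maximizes the entropy produced by the ground-to-air heat transfer. All numbers are illustrative assumptions.

```python
import numpy as np

SIGMA = 5.67e-8                     # Stefan-Boltzmann constant, W m^-2 K^-4
S, Ta = 240.0, 240.0                # absorbed solar flux (W m^-2), air temp (K)

F = np.linspace(0.0, 50.0, 2001)    # candidate surface convective fluxes
Tg = ((S - F) / SIGMA) ** 0.25      # ground temp from energy balance S = SIGMA*Tg^4 + F
ep = F * (1 / Ta - 1 / Tg)          # entropy production of ground-to-air transfer

i = int(np.argmax(ep))
print(f"MEP flux {F[i]:.1f} W m^-2, ground-air temperature jump {Tg[i] - Ta:.1f} K")
```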
Abstract:
In this paper we analyze the time of ruin in a risk process with the interclaim times being Erlang(n) distributed and a constant dividend barrier. We obtain an integro-differential equation for the Laplace transform of the time of ruin. Explicit solutions for the moments of the time of ruin are presented when the individual claim amounts have a distribution with rational Laplace transform. Finally, some numerical results and a comparison with the classical risk model, with interclaim times following an exponential distribution, are given.
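A Monte Carlo sketch of the quantity studied analytically: the time of ruin under Erlang(n) interclaim times and a constant dividend barrier (premium income is paid out as dividends while the surplus sits at the barrier). Exponential claims and every parameter value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def ruin_time(u0=2.0, b=5.0, c=1.5, n=2, lam=2.0, claim_mean=1.0, t_max=50_000.0):
    """One path; returns the ruin time (np.inf if no ruin before t_max)."""
    u, t = u0, 0.0
    while t < t_max:
        w = rng.gamma(n, 1 / lam)         # Erlang(n) interclaim time, rate lam
        u = min(u + c * w, b)             # premiums accrue, capped at the barrier
        t += w
        u -= rng.exponential(claim_mean)  # claim arrives
        if u < 0:
            return t
    return np.inf

times = np.array([ruin_time() for _ in range(2000)])
ruined = np.isfinite(times)
print(f"ruined: {ruined.mean():.3f}, mean time of ruin: {times[ruined].mean():.1f}")
```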
Abstract:
We study the static properties of the Little model with asymmetric couplings. We show that the thermodynamics of this model coincides with that of the Sherrington-Kirkpatrick model, and we compute the main finite-size corrections to the free-energy difference between these two models and to some relevant order parameters. Our results agree with numerical simulations. Numerical results are presented for the symmetric Little model, which show that the same conclusions also hold in this case.