951 results for Finite Difference Model


Relevance: 30.00%

Publisher:

Abstract:

We provide robust examples of symmetric two-player coordination games in normal form that reveal that equilibrium selection by the evolutionary model of Young (1993) is essentially different from equilibrium selection by the evolutionary model of Kandori, Mailath and Rob (1993).


Departures from pure self interest in economic experiments have recently inspired models of "social preferences". We conduct experiments on simple two-person and three-person games with binary choices that test these theories more directly than the array of games conventionally considered. Our experiments show strong support for the prevalence of "quasi-maximin" preferences: People sacrifice to increase the payoffs for all recipients, but especially for the lowest-payoff recipients. People are also motivated by reciprocity: While people are reluctant to sacrifice to reciprocate good or bad behavior beyond what they would sacrifice for neutral parties, they withdraw willingness to sacrifice to achieve a fair outcome when others are themselves unwilling to sacrifice. Some participants are averse to getting different payoffs than others, but based on our experiments and reinterpretation of previous experiments we argue that behavior that has been presented as "difference aversion" in recent papers is actually a combination of reciprocal and quasi-maximin motivations. We formulate a model in which each player is willing to sacrifice to allocate the quasi-maximin allocation only to those players also believed to be pursuing the quasi-maximin allocation, and may sacrifice to punish unfair players.


Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
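The final selection step described above admits a compact sketch. The substantive parts of the method, the empirical-cover construction and the complexity estimates themselves, are assumed given here; function and argument names are illustrative, not from the paper:

```python
def select_model(emp_risks, complexities):
    """Complexity-penalized selection: given, for each model class, the
    empirical risk of its candidate rule and the empirically estimated
    class complexity, return the index of the class minimizing their sum."""
    scores = [r + c for r, c in zip(emp_risks, complexities)]
    return scores.index(min(scores))
```

For instance, with candidate risks [0.30, 0.20, 0.10] and estimated complexities [0.01, 0.05, 0.30], the middle class is selected: small classes underfit, while large classes pay a heavy complexity penalty.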


This paper presents several applications to interest rate risk management based on a two-factor continuous-time model of the term structure of interest rates previously presented in Moreno (1996). This model assumes that default-free discount bond prices are determined by the time to maturity and two factors: the long-term interest rate and the spread (the difference between the long-term rate and the short-term (instantaneous) riskless rate). Several new measures of "generalized duration" are presented and applied in different situations in order to manage market risk and yield curve risk. By means of these measures, we are able to compute the hedging ratios that allow us to immunize a bond portfolio by means of options on bonds. Focusing on the hedging problem, it is shown that these new measures allow us to immunize a bond portfolio against changes (parallel and/or in the slope) in the yield curve. Finally, a proposal for overcoming the limitations of conventional duration by means of these new measures is presented and illustrated numerically.


A family of scaling corrections aimed to improve the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
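The computation the abstract promises can be sketched as follows, under the assumption that each model's scaling correction factor is recovered as the ratio of its unscaled to its SB-scaled goodness-of-fit statistic; function and argument names are illustrative:

```python
def scaled_difference_chisq(T0, T1, Ts0, Ts1, d0, d1):
    """Scaled difference chi-square for nested models: M0 (d0 df) nested
    in M1 (d1 df). T0, T1 are the unscaled goodness-of-fit statistics;
    Ts0, Ts1 the SB-scaled ones. Returns the scaled statistic and its df."""
    c0 = T0 / Ts0                          # scaling correction for M0
    c1 = T1 / Ts1                          # scaling correction for M1
    cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # correction for the difference
    return (T0 - T1) / cd, d0 - d1
```

Note that the naive difference Ts0 - Ts1 of the two scaled statistics is exactly what the abstract warns does not yield the correct scaled difference test.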


This paper presents a two-factor model of the term structure of interest rates. We assume that default-free discount bond prices are determined by the time to maturity and two factors: the long-term interest rate and the spread (the difference between the long-term rate and the short-term (instantaneous) riskless rate). Assuming that both factors follow a joint Ornstein-Uhlenbeck process, a general bond pricing equation is derived. We obtain a closed-form expression for bond prices and examine its implications for the term structure of interest rates. We also derive a closed-form solution for interest rate derivative prices. This expression is applied to price European options on discount bonds and more complex types of options. Finally, empirical evidence of the model's performance is presented.
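The abstract does not reproduce the closed-form expression, but the pricing setup can be illustrated by Monte Carlo: the long rate L and the spread s = L - r follow a joint Ornstein-Uhlenbeck process, and the discount bond price is the expected value of the discounted payoff. All parameter values below are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bond_price(L0=0.06, s0=0.01, T=5.0, n_steps=250, n_paths=10000,
               kL=0.2, thL=0.06, sigL=0.01,   # OU dynamics of the long rate
               ks=0.5, ths=0.01, sigs=0.01,   # OU dynamics of the spread
               rho=-0.3):                     # correlation of the two shocks
    """Euler Monte Carlo price of a default-free discount bond; the
    short rate is r = L - s, discounted along each simulated path."""
    dt = T / n_steps
    L = np.full(n_paths, L0)
    s = np.full(n_paths, s0)
    integral = np.zeros(n_paths)              # accumulates \int_0^T r dt
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n_paths)
        integral += (L - s) * dt
        L = L + kL * (thL - L) * dt + sigL * np.sqrt(dt) * z1
        s = s + ks * (ths - s) * dt + sigs * np.sqrt(dt) * z2
    return float(np.mean(np.exp(-integral)))
```

With these parameters the short rate starts at 5% and mean-reverts nearby, so the 5-year price lands in the neighborhood of exp(-0.25).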


This paper presents a general equilibrium model of money demand where the velocity of money changes in response to endogenous fluctuations in the interest rate. The parameter space can be divided into two subsets: one where velocity is constant and equal to one, as in cash-in-advance models, and another where velocity fluctuates, as in Baumol (1952). Despite its simplicity in terms of parameters to calibrate, the model performs surprisingly well. In particular, it approximates the variability of money velocity observed in the U.S. for the post-war period. The model is then used to analyze the welfare costs of inflation under uncertainty. This application calculates the errors derived from computing the costs of inflation with deterministic models. It turns out that the size of this difference is small, at least for the levels of uncertainty estimated for the U.S. economy.


We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function, and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
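For a finite, explicitly enumerated set of candidate classifiers, the maximal-discrepancy quantity can be computed directly (for a full hypothesis class, the abstract's point is that the maximization reduces to empirical risk minimization with half the labels flipped). This sketch assumes the candidates' 0/1 predictions on the training set are already tabulated:

```python
import numpy as np

def maximal_discrepancy(preds, y):
    """Maximal difference, over the rows of `preds` (one candidate
    classifier per row), between the error on the first half of the
    training data and the error on the second half."""
    preds, y = np.asarray(preds), np.asarray(y)
    n = len(y) // 2
    err1 = (preds[:, :n] != y[:n]).mean(axis=1)        # first-half errors
    err2 = (preds[:, n:2*n] != y[n:2*n]).mean(axis=1)  # second-half errors
    return float((err1 - err2).max())
```

A classifier that fits the second half well but the first half poorly drives the discrepancy up, which is exactly the capacity signal the penalty exploits.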



A polarizable quantum mechanics and molecular mechanics model has been extended to account for the difference between the macroscopic electric field and the actual electric field felt by the solute molecule. This enables the calculation of effective microscopic properties which can be related to macroscopic susceptibilities directly comparable with experimental results. By separating the discrete local field into two distinct contributions we define two different microscopic properties, the so-called solute and effective properties. The solute properties account for the pure solvent effects, i.e., effects even when the macroscopic electric field is zero, and the effective properties account for both the pure solvent effects and the effect from the induced dipoles in the solvent due to the macroscopic electric field. We present results for the linear and nonlinear polarizabilities of water and acetonitrile both in the gas phase and in the liquid phase. For all the properties we find that the pure solvent effect increases the properties whereas the induced electric field decreases the properties. Furthermore, we present results for the refractive index, third-harmonic generation (THG), and electric field induced second-harmonic generation (EFISH) for liquid water and acetonitrile. We find in general good agreement between the calculated and experimental results for the refractive index and the THG susceptibility. For the EFISH susceptibility, however, the difference between experiment and theory is larger since the orientational effect arising from the static electric field is not accurately described.


The vulnerability of subpopulations of retinal neurons delineated by their content of cytoskeletal or calcium-binding proteins was evaluated in the retinas of cynomolgus monkeys in which glaucoma was produced with an argon laser. We quantitatively compared the number of neurons containing either neurofilament (NF) protein, parvalbumin, calbindin or calretinin immunoreactivity in central and peripheral portions of the nasal and temporal quadrants of the retina from glaucomatous and fellow non-glaucomatous eyes. There was no significant difference between the proportion of amacrine, horizontal and bipolar cells labeled with antibodies to the calcium-binding proteins comparing the two eyes. NF triplet immunoreactivity was present in a subpopulation of retinal ganglion cells, many of which, but not all, likely correspond to large ganglion cells that subserve the magnocellular visual pathway. Loss of NF protein-containing retinal ganglion cells was widespread throughout the central (59-77% loss) and peripheral (96-97%) nasal and temporal quadrants and was associated with the loss of NF-immunoreactive optic nerve fibers in the glaucomatous eyes. Comparison of counts of NF-immunoreactive neurons with total cell loss evaluated by Nissl staining indicated that NF protein-immunoreactive cells represent a large proportion of the cells that degenerate in the glaucomatous eyes, particularly in the peripheral regions of the retina. Such data may be useful in determining the cellular basis for sensitivity to this pathologic process and may also be helpful in the design of diagnostic tests that may be sensitive to the loss of the subset of NF-immunoreactive ganglion cells.


We investigate the hypothesis that the atmosphere is constrained to maximize its entropy production by using a one-dimensional (1-D) vertical model. We prescribe the lapse rate in the convective layer as that of the standard troposphere. The assumption that convection sustains a critical lapse rate was absent in previous studies, which focused on the vertical distribution of climatic variables, since such a convective adjustment reduces the degrees of freedom of the system and may prevent the application of the maximum entropy production (MEP) principle. This is not the case in the radiative–convective model (RCM) developed here, since we accept a discontinuity of temperatures at the surface similar to that adopted in many RCMs. For current conditions, the MEP state gives a difference between the ground temperature and the air temperature at the surface ≈10 K. In comparison, conventional RCMs obtain a discontinuity ≈2 K only. However, the surface boundary layer velocity in the MEP state appears reasonable (≈3 m s⁻¹). Moreover, although the convective flux at the surface in MEP states is almost uniform in optically thick atmospheres, it reaches a maximum value for an optical thickness similar to current conditions. This additional result may support the maximum convection hypothesis suggested by Paltridge (1978).


An epidemic model is formulated by a reaction-diffusion system where the spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
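A minimal one-dimensional, conservative finite-volume update for such a system might look as follows; the reaction terms, the (here linear) diffusion coefficients, and the sign convention for the cross-diffusion flux are illustrative placeholders, not the paper's nonlinear constitutive assumptions or its adaptive multiresolution scheme:

```python
import numpy as np

def step(S, I, dx, dt, dS=1.0, dI=0.1, dSI=0.5, beta=2.0, gamma=1.0):
    """One explicit step for susceptibles S and infected I on a 1-D grid.
    The flux of S carries a cross-diffusion term driven by gradients of I,
    and zero-flux boundaries keep the scheme conservative."""
    flux = lambda u: np.diff(u) / dx                 # face-centred gradients
    FS = dS * flux(S) + dSI * flux(I)                # self- + cross-diffusion
    FI = dI * flux(I)
    div = lambda F: np.diff(F, prepend=0.0, append=0.0) / dx  # zero-flux b.c.
    infection = beta * S * I                         # local dynamics
    return (S + dt * (div(FS) - infection),
            I + dt * (div(FI) + infection - gamma * I))
```

With beta = gamma = 0 the update conserves the total mass of both species to machine precision, which is the point of the conservative discretization the abstract contrasts with the non-conservative one.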


Structural, thermobarometric, and geochronological data place limits on the age and tectonic displacement along the Zanskar shear zone, a major north-dipping synorogenic extensional structure separating the high-grade metamorphic sequence of the High Himalayan Crystalline Sequence from the overlying low-grade sedimentary rocks of the Tethyan Himalaya. A complete Barrovian metamorphic succession, from kyanite to biotite zone mineral assemblages, occurs within the 1-km-thick Zanskar shear zone. Thermobarometric data indicate a difference in equilibration depths of 12 +/- 3 km between the lower kyanite zone and the garnet zone, which is interpreted as a minimum estimate for the finite vertical displacement accommodated by the Zanskar shear zone. For the present-day dip of the structure (20 degrees), a simple geometrical model shows that a net slip of 35 +/- 9 km is required to restore these samples to the same structural level. Because the kyanite to garnet zone rocks represent only part of the Zanskar shear zone, and because its original dip may have been less than the present-day dip, these estimates for the finite displacement represent minimum values. Field relations and petrographic data suggest that migmatization and associated leucogranite intrusion in the footwall of the Zanskar shear zone occurred as a continuous process starting at the Barrovian metamorphic peak and lasting throughout the subsequent extension-induced exhumation. Geochronological dating of various leucogranitic plutons and dikes in the Zanskar shear zone footwall indicates that the main ductile shearing along the structure ended by 19.8 Ma and that extension most likely initiated shortly before 22.2 Ma.
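The geometrical estimate quoted above is a one-line trigonometric calculation: moving a sample along a plane dipping 20 degrees by a net slip d changes its depth by d sin(20°), so the slip needed to account for a 12 +/- 3 km depth difference is:

```python
import math

def net_slip(depth_diff_km, dip_deg):
    """Minimum net slip on a planar shear zone needed to produce a given
    difference in equilibration depth: slip = depth difference / sin(dip)."""
    return depth_diff_km / math.sin(math.radians(dip_deg))

print(round(net_slip(12.0, 20.0), 1))   # central value: 35.1 km
print(round(net_slip(9.0, 20.0), 1))    # lower bound:   26.3 km
print(round(net_slip(15.0, 20.0), 1))   # upper bound:   43.9 km
```

This recovers the abstract's 35 +/- 9 km; a shallower original dip would only increase these figures, consistent with their reading as minimum values.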


We are interested in the development, implementation and testing of an orthotropic model for cardiac contraction based on an active strain decomposition. Our model addresses the coupling of a transversely isotropic mechanical description at the cell level, with an orthotropic constitutive law for incompressible tissue at the macroscopic level. The main differences with the active stress model are addressed in detail, and a finite element discretization using Taylor-Hood and MINI elements is proposed and illustrated with numerical examples.