971 results for Monotone Approximation


Relevance:

10.00%

Publisher:

Abstract:

With this work I have sought to carry out an approximate study of the music of the Romantic period whose subject matter revolves around the world of the night. I have done so through the most representative piano composers of the era, with a brief sketch of symphonic music and the performing arts. I have aimed to show how each composer reflected his personality through a concept as abstract and Romantic as the world of the night.

Relevance:

10.00%

Publisher:

Abstract:

We introduce a variation of the proof for weak approximations that is suitable for studying the densities of stochastic processes which are evaluations of the flow generated by a stochastic differential equation on a random variable that may be anticipating. Our main assumption is that the process and the initial random variable have to be smooth in the Malliavin sense. Furthermore, if the inverse of the Malliavin covariance matrix associated with the process under consideration is sufficiently integrable, then approximations for densities and distributions can also be achieved. We apply these ideas to the case of stochastic differential equations with boundary conditions and the composition of two diffusions.
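
For orientation, the nondegeneracy condition alluded to above is usually stated in terms of the Malliavin covariance matrix (this is standard Malliavin-calculus background, not a formula taken from the paper): for a smooth random vector $F = (F^1,\ldots,F^m)$ with Malliavin derivative $DF$,

$$\gamma_F = \big(\langle DF^i, DF^j\rangle_H\big)_{1\le i,j\le m}, \qquad E\big[(\det\gamma_F)^{-p}\big] < \infty \quad \text{for all } p\ge 1,$$

and it is this kind of integrability of $\gamma_F^{-1}$ under which densities exist and can be approximated.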

Relevance:

10.00%

Publisher:

Abstract:

In this paper we address the importance of distributive effects in the social valuation of QALYs. We propose a social welfare function that generalises the functions traditionally used in the health economics literature. The novelty is that, depending on the individual health gains, our function can represent either preferences for concentrating or preferences for spreading the total gain, or both together, an issue which has not been addressed until now. Based on an experiment, we observe that this generalisation provides a suitable approximation to the sampled social preferences.
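
As a purely illustrative example of a family with this kind of flexibility (it is not the functional form proposed in the paper), a single-parameter CES aggregation of individual health gains $q_1,\ldots,q_n \ge 0$,

$$W(q_1,\ldots,q_n) = \Big(\tfrac{1}{n}\textstyle\sum_{i=1}^{n} q_i^{\,r}\Big)^{1/r},$$

represents a preference for concentrating a given total gain when $r>1$ and for spreading it when $0<r<1$; capturing both attitudes at once, as the paper's generalisation does, requires going beyond a single fixed $r$.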

Relevance:

10.00%

Publisher:

Abstract:

We present a new general concentration-of-measure inequality and illustrate its power by applications in random combinatorics. The results find direct applications in some problems of learning theory.
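
For orientation, a classical inequality of this type is the bounded differences (McDiarmid) inequality, quoted here as background rather than as the new inequality of the paper: if $Z = f(X_1,\ldots,X_n)$ for independent $X_i$, and changing any single coordinate changes the value of $f$ by at most $c_i$, then

$$P\big(|Z - E Z| \ge t\big) \le 2\exp\!\Big(-\frac{2t^2}{\sum_{i=1}^n c_i^2}\Big).$$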

Relevance:

10.00%

Publisher:

Abstract:

The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite-difference-type approximation scheme is used on a coarse grid of low-discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
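
In schematic form (a sketch in our own notation, not the paper's exact algorithm), one backward step of such a scheme evaluates the dynamic programming recursion only on a coarse set of low-discrepancy points $\{x_k\}$,

$$\tilde V_t(x_k) = \sup_{a}\Big\{ f(x_k,a)\,\Delta t + E\big[\hat V_{t+\Delta t}\big(X^{x_k,a}_{t+\Delta t}\big)\big]\Big\},$$

and then recovers an approximation $\hat V_t$ at arbitrary intermediate points by regressing the values $\tilde V_t(x_k)$ on a set of basis functions of the state.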

Relevance:

10.00%

Publisher:

Abstract:

This paper applies the theoretical literature on nonparametric bounds on treatment effects to the estimation of how limited English proficiency (LEP) affects wages and employment opportunities for Hispanic workers in the United States. I analyze the identifying power of several weak assumptions on treatment response and selection, and stress the interactions between LEP and education, occupation and immigration status. I show that the combination of two weak but credible assumptions provides informative upper bounds on the returns to language skills for certain subgroups of the population. Adding age at arrival as a monotone instrumental variable also provides informative lower bounds.
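
For readers unfamiliar with the approach, the monotone instrumental variable idea is stated here in its generic Manski-Pepper form (not as the paper's specific estimates): if mean potential outcomes vary weakly monotonically with the instrument $v$ (here, age at arrival), then for each value $u$,

$$\sup_{u_1\le u} \mathrm{LB}(t,u_1) \;\le\; E\big[y(t)\mid v=u\big] \;\le\; \inf_{u_2\ge u} \mathrm{UB}(t,u_2),$$

where $\mathrm{LB}(t,\cdot)$ and $\mathrm{UB}(t,\cdot)$ are the bounds implied by the other maintained assumptions within each instrument subpopulation.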

Relevance:

10.00%

Publisher:

Abstract:

That individuals contribute in social dilemma interactions even when contributing is costly is a well-established observation in the experimental literature. Since a contributor is always strictly worse off than a non-contributor, the question arises whether an intrinsic motivation to contribute can survive in an evolutionary setting. Using recent results on the deterministic approximation of stochastic evolutionary dynamics, we give conditions for equilibria with a positive number of contributors to be selected in the long run.
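
The deterministic approximation results invoked here are typically of the mean-dynamic type, sketched below in generic form rather than in the paper's specific setting: if $X^N_t$ denotes the vector of population shares in a population of size $N$ and its expected increments satisfy $E[\Delta X^N \mid X^N = x] \approx \tfrac{1}{N}V(x)$, then over any finite time horizon $X^N$ stays close, with probability approaching one as $N$ grows, to the solution of the mean dynamic

$$\dot x = V(x), \qquad x(0) = X^N_0,$$

so that long-run selection among equilibria can be studied through the rest points of this deterministic system.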

Relevance:

10.00%

Publisher:

Abstract:

This work addresses the applicability of the transient electromagnetic method under the arid and semiarid environmental conditions of Santiago Island, Cape Verde. Some seashore areas of this island show increasing salt contamination of the groundwater. The main objective of the present work is to relate this water-quality condition to parameters derived from the transient sounding data. In this context, transient soundings were acquired from 2005 through 2009 in several selected valleys near the sea, at an average rate of one field campaign per year. The first phase of the work consisted of understanding the details, problems, and applicability of the geophysical method, since the equipment acquired was the first to be permanently available to the Portuguese geosciences community; this phase also included field tests. In the second phase, the transient sounding curves were interpreted using previously developed and published 1-D inversion methods, as well as quasi-2-D and quasi-3-D inversion algorithms where applicable. The 2-D and 3-D approximation results are satisfactory and promising, although a higher spatial density of soundings would certainly allow better results. In the third phase, these results were compared with the available lithologic, hydrologic, and hydrochemical data in the context of Santiago Island's setting. The analysis of the merged data showed that two distinct origins are possible for the observed inland groundwater salinity: shallow seashore mixing with contemporary seawater, and mixing with a deep, older salty layer through upward groundwater flow. Relations between electrical resistivity and the distribution of salt water content were found for the surveyed areas. Under these environmental conditions the transient electromagnetic method proved to be a reliable and powerful technique: groundwater quality can be assessed beyond the few, unevenly distributed monitoring points available in each watershed.
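
The resistivity-salinity link mentioned above is commonly rationalized through Archie's law, quoted here as general petrophysical background rather than as the specific relation fitted in this study:

$$\rho = a\,\phi^{-m}\,S_w^{-n}\,\rho_w,$$

where $\rho$ is the bulk resistivity, $\rho_w$ the pore-water resistivity (which decreases as salinity increases), $\phi$ the porosity, $S_w$ the water saturation, and $a$, $m$, $n$ empirical constants; lower bulk resistivities therefore flag more saline groundwater.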

Relevance:

10.00%

Publisher:

Abstract:

Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
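
In schematic form (our own notation, intended only as a sketch of the procedure described above): with the sample split into halves $D_1$ and $D_2$, one builds from $D_1$ an empirical cover $\hat{\cal F}_k \subset {\cal F}_k$ of each class, selects within each cover the candidate $\hat f_k = \arg\min_{f \in \hat{\cal F}_k} \hat R_{D_2}(f)$, and finally chooses

$$\hat f = \hat f_{\hat k}, \qquad \hat k = \arg\min_k \Big\{\hat R_{D_2}(\hat f_k) + \hat C_k\Big\},$$

where $\hat R_{D_2}$ denotes empirical risk on the second half and $\hat C_k$ is a complexity penalty driven by the size of the empirical cover (for instance of order $\sqrt{\log |\hat{\cal F}_k| / |D_2|}$).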

Relevance:

10.00%

Publisher:

Abstract:

We lay out a small open economy version of the Calvo sticky price model, and show how the equilibrium dynamics can be reduced to a simple representation in domestic inflation and the output gap. We use the resulting framework to analyze the macroeconomic implications of three alternative rule-based policy regimes for the small open economy: domestic inflation and CPI-based Taylor rules, and an exchange rate peg. We show that a key difference among these regimes lies in the relative amount of exchange rate volatility that they entail. We also discuss a special case for which domestic inflation targeting constitutes the optimal policy, and where a simple second order approximation to the utility of the representative consumer can be derived and used to evaluate the welfare losses associated with the suboptimal rules.
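
The reduced form referred to above is reproduced here from the standard New Keynesian small open economy literature, in generic notation rather than the paper's own: a Phillips curve in domestic inflation and a dynamic IS equation in the output gap,

$$\pi_{H,t} = \beta\,E_t\pi_{H,t+1} + \kappa\,x_t, \qquad x_t = E_t x_{t+1} - \tfrac{1}{\sigma_\alpha}\big(i_t - E_t\pi_{H,t+1} - r^n_t\big),$$

closed by one of the policy rules under study, e.g. a domestic inflation Taylor rule $i_t = \rho + \phi_\pi \pi_{H,t}$, a CPI-based rule $i_t = \rho + \phi_\pi \pi_t$, or an exchange rate peg.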

Relevance:

10.00%

Publisher:

Abstract:

A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
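
The scaled difference statistic commonly associated with this line of work (stated here in generic form; readers should consult the paper for the precise conditions) is computed from quantities already reported by standard software. With $d_0$, $d_1$ the degrees of freedom and $\hat c_i = T_i/\bar T_i$ the scaling factors recovered from the scaled statistics $\bar T_0$, $\bar T_1$,

$$\hat c_d = \frac{d_0\,\hat c_0 - d_1\,\hat c_1}{d_0 - d_1}, \qquad \bar T_d = \frac{T_0 - T_1}{\hat c_d},$$

and $\bar T_d$ is referred to a chi-square distribution with $d_0 - d_1$ degrees of freedom.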

Relevance:

10.00%

Publisher:

Abstract:

We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
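
A representative identity behind Monte Carlo density estimators of this kind (quoted as general Malliavin-calculus background, not as the paper's exact estimator) expresses the density of a smooth, nondegenerate real-valued functional $F$ on Wiener space as an expectation,

$$p_F(x) = E\Big[\mathbf{1}_{\{F>x\}}\,\delta\Big(\tfrac{DF}{\|DF\|_H^2}\Big)\Big],$$

where $D$ is the Malliavin derivative and $\delta$ the Skorohod integral; replacing the Malliavin weight by alternative weights with the same expectation is the usual lever for variance reduction.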

Relevance:

10.00%

Publisher:

Abstract:

I study monotonicity and uniqueness of the equilibrium strategies in a two-person first price auction with affiliated signals. I show that when the game is symmetric there is a unique Nash equilibrium that satisfies a regularity condition requiring that the equilibrium strategies be {\sl piecewise monotone}. Moreover, when the signals are discrete-valued, the equilibrium is unique. The central part of the proof consists of showing that at any regular equilibrium the bidders' strategies must be monotone increasing within the support of winning bids. The monotonicity result derived in this paper provides the missing link for the analysis of uniqueness in two-person first price auctions. Importantly, this result extends to asymmetric auctions.
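
For context, the symmetric benchmark with affiliated signals (the textbook Milgrom-Weber characterization, not the result established in the paper) has the equilibrium bid function solve the first-order condition

$$b'(x) = \big(v(x,x) - b(x)\big)\,\frac{f_{Y\mid X}(x\mid x)}{F_{Y\mid X}(x\mid x)},$$

where $Y$ is the rival's signal, $v(x,y)$ the expected value conditional on $(X,Y)=(x,y)$, and $f_{Y\mid X}$, $F_{Y\mid X}$ the conditional density and distribution; it is against this benchmark that the monotonicity and uniqueness questions above are posed.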

Relevance:

10.00%

Publisher:

Abstract:

The paper defines concepts of real wealth and saving which take into account the intertemporal index number problem that results from changing interest rates. Unlike conventional measures of real wealth, which are based on the market value of assets and ignore the index number problem, the new measure correctly reflects the changes in the welfare of households over time. An empirically operational approximation to the theoretical measure is provided and applied to US data. A major empirical finding is that US real financial wealth increased strongly in the 1980s, much more than is revealed by the market value of assets.

Relevance:

10.00%

Publisher:

Abstract:

In a closed economy context there is broad agreement that price inflation stabilization is one of the objectives of monetary policy. Moving to an open economy context gives rise to the coexistence of two measures of inflation: domestic inflation (DI) and consumer price inflation (CPI). Which of the two measures should be the target variable? This is the question addressed in this paper. In particular, I use a small open economy model to show that once sticky wages indexed to past CPI inflation are introduced, a completely inward-looking monetary policy is no longer optimal. I first derive a loss function from a second-order approximation of the utility function, and then compute the fully optimal monetary policy under commitment. I then use the optimal monetary policy as a benchmark against which to compare the performance of different monetary policy rules. The main result is that once a positive degree of indexation is introduced in the model, the best-performing rule (among the Taylor-type rules considered) is the one targeting wage inflation and CPI inflation. Moreover, this rule delivers results very close to those obtained under the fully optimal monetary policy with commitment.
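
As an illustration of the object being derived (written in generic form; the exact arguments and weights depend on the model, including the degree of wage indexation), such a second-order approximation typically yields a quadratic loss of the form

$$L = E_0 \sum_{t=0}^{\infty} \beta^{t}\Big(\lambda_\pi\,\pi_{H,t}^2 + \lambda_w\,\pi_{w,t}^2 + \lambda_x\,x_t^2\Big),$$

penalizing domestic price inflation, wage inflation, and the output gap; the ranking of the simple rules is then obtained by evaluating this loss under each rule.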