980 results for Hausdorff Approximation
Abstract:
The paper develops a method to solve higher-dimensional stochastic control problems in continuous time. A finite-difference-type approximation scheme is used on a coarse grid of low-discrepancy points, while the value function at intermediate points is obtained by regression. The stability properties of the method are discussed, and applications are given to test problems of up to 10 dimensions. Accurate solutions to these problems can be obtained on a personal computer.
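As a rough illustration of the idea described above (not the paper's scheme), the sketch below performs one backward dynamic-programming step for a toy problem: values are computed on a Halton low-discrepancy point set and extended to intermediate states by least-squares regression on quadratic features. The cost, dynamics, feature map and all parameters are invented for the example.

    # Hedged sketch: one Bellman-type update on a coarse quasi-random grid,
    # with the continuation value obtained by regression at intermediate points.
    import numpy as np

    def halton(n, d):
        """First n points of the d-dimensional Halton sequence."""
        primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29][:d]
        pts = np.empty((n, d))
        for j, b in enumerate(primes):
            for i in range(n):
                f, x, k = 1.0, 0.0, i + 1
                while k > 0:
                    f /= b
                    x += f * (k % b)
                    k //= b
                pts[i, j] = x
        return pts

    def quad_features(x):
        """Constant, linear and squared terms -- a small regression basis."""
        return np.hstack([np.ones((x.shape[0], 1)), x, x**2])

    d, n = 4, 512
    grid = halton(n, d)                       # coarse low-discrepancy grid on [0,1]^d
    v_next = np.sum(grid**2, axis=1)          # toy terminal values at the grid points

    # regression surrogate of the value function at intermediate states
    beta, *_ = np.linalg.lstsq(quad_features(grid), v_next, rcond=None)
    value_hat = lambda x: quad_features(x) @ beta

    # one backward step: at each grid point, choose the control (a shift) that
    # minimises running cost plus the regressed continuation value
    controls = np.linspace(-0.1, 0.1, 21)
    v_new = np.array([
        min(0.5 * a**2 + value_hat(np.clip(x + a, 0, 1)[None, :])[0] for a in controls)
        for x in grid
    ])
    print(v_new[:5])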
Abstract:
That individuals contribute in social dilemma interactions even when contributing is costly is a well-established observation in the experimental literature. Since a contributor is always strictly worse off than a non-contributor, the question arises whether an intrinsic motivation to contribute can survive in an evolutionary setting. Using recent results on the deterministic approximation of stochastic evolutionary dynamics, we give conditions for equilibria with a positive number of contributors to be selected in the long run.
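A minimal illustration of the kind of approximation invoked here (not the paper's model): a pairwise-imitation process in a linear public-goods game is simulated for a finite population and compared with its deterministic mean dynamic. The game, the revision protocol and all parameter values are assumptions made for the example.

    # Hedged illustration: stochastic imitation dynamic vs. its mean-field approximation.
    import numpy as np

    rng = np.random.default_rng(0)
    N, r, c, T = 1000, 1.6, 1.0, 20000   # population size, return factor, cost, steps

    def payoffs(x):
        """Average payoffs to contributors/defectors when a fraction x contributes."""
        pool = r * c * x                 # per-capita share of the public good
        return pool - c, pool            # contributor pays the cost c, defector does not

    # stochastic process: one imitation opportunity per step
    x = 0.5
    for t in range(T):
        pi_c, pi_d = payoffs(x)
        if rng.random() < x * (1 - x):   # a contributor meets a defector
            # switch with probability proportional to the payoff difference
            if rng.random() < max(pi_d - pi_c, 0.0) / c:
                x -= 1.0 / N             # a contributor becomes a defector
            elif rng.random() < max(pi_c - pi_d, 0.0) / c:
                x += 1.0 / N

    # deterministic approximation: Euler steps of the mean dynamic
    y, dt = 0.5, 1.0 / N
    for t in range(T):
        pi_c, pi_d = payoffs(y)
        y += dt * y * (1 - y) * (pi_c - pi_d) / c

    print("simulated final share:", round(x, 3), "deterministic final share:", round(y, 3))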
Abstract:
This work addresses the applicability of the transient electromagnetic (TEM) method under the arid and semi-arid environmental conditions of Santiago Island, Cape Verde. Some seashore areas of this island show increasing salt contamination of the groundwater. The main objective of the present work is to relate this water-quality condition with parameters derived from the transient sounding data. In this context, transient soundings were acquired from 2005 through 2009, at several chosen valleys near the sea, at a mean rate of one field campaign per year. The first phase of this work was the understanding of the geophysical method's details, problems and applicability, as the chosen and acquired equipment was the first to be permanently available to the Portuguese geosciences community. This first phase also included field tests. In the second phase, the transient sounding curves were interpreted by applying previously developed and published 1-D inversion methods, as well as quasi-2-D and quasi-3-D inversion algorithms where their application was feasible. The 2-D and 3-D approximation results are satisfactory and promising, although a higher spatial density of soundings would certainly allow for better results. In phase three, these results were compared against the available lithologic, hydrologic and hydrochemical data, in the context of Santiago Island's settings. The analysis of these merged data showed that two distinct origins for the observed inland groundwater salinity are possible: shallow mixing with contemporary seawater near the shore, and mixing with a deep, older salty layer carried by upflowing groundwater. Relations between the electrical resistivity and the distribution of the salt water content were found for the surveyed areas. Under these environmental conditions, the transient electromagnetic method proved to be a reliable and powerful technique. Groundwater quality can thus be assessed beyond the few available watershed points, which have an uneven distribution.
Abstract:
Given $n$ independent replicates of a jointly distributed pair $(X,Y)\in {\cal R}^d \times {\cal R}$, we wish to select from a fixed sequence of model classes ${\cal F}_1, {\cal F}_2, \ldots$ a deterministic prediction rule $f: {\cal R}^d \to {\cal R}$ whose risk is small. We investigate the possibility of empirically assessing the {\em complexity} of each model class, that is, the actual difficulty of the estimation problem within each class. The estimated complexities are in turn used to define an adaptive model selection procedure, which is based on complexity penalized empirical risk. The available data are divided into two parts. The first is used to form an empirical cover of each model class, and the second is used to select a candidate rule from each cover based on empirical risk. The covering radii are determined empirically to optimize a tight upper bound on the estimation error. An estimate is chosen from the list of candidates in order to minimize the sum of class complexity and empirical risk. A distinguishing feature of the approach is that the complexity of each model class is assessed empirically, based on the size of its empirical cover. Finite sample performance bounds are established for the estimates, and these bounds are applied to several non-parametric estimation problems. The estimates are shown to achieve a favorable tradeoff between approximation and estimation error, and to perform as well as if the distribution-dependent complexities of the model classes were known beforehand. In addition, it is shown that the estimate can be consistent, and even possess near optimal rates of convergence, when each model class has an infinite VC or pseudo dimension. For regression estimation with squared loss we modify our estimate to achieve a faster rate of convergence.
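The following sketch mimics the structure of such a procedure on synthetic data, with heavy simplifications: polynomial classes stand in for the model classes, a perturbed least-squares fit stands in for an empirical cover, and the penalty is an ad-hoc function of the cover size. None of it reproduces the paper's actual construction or bounds.

    # Hedged sketch: data split, per-class "covers", and complexity-penalised selection.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 400
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + 0.3 * rng.standard_normal(n)      # synthetic regression data

    x1, y1 = x[: n // 2], y[: n // 2]                      # half for building covers
    x2, y2 = x[n // 2 :], y[n // 2 :]                      # half for empirical risk

    def risk(coef, xs, ys):
        return np.mean((np.polyval(coef, xs) - ys) ** 2)

    best = None
    for deg in range(0, 8):                                # model classes F_1, F_2, ...
        # "cover": the least-squares fit on half 1 plus small perturbations of it
        centre = np.polyfit(x1, y1, deg)
        cover = [centre + 0.05 * rng.standard_normal(centre.shape) for _ in range(25)]
        cover.append(centre)
        # candidate from this class: empirical-risk minimiser over the cover on half 2
        cand = min(cover, key=lambda c: risk(c, x2, y2))
        penalty = np.log(len(cover)) * (deg + 1) / len(x2) # ad-hoc complexity term
        score = risk(cand, x2, y2) + penalty
        if best is None or score < best[0]:
            best = (score, deg, cand)

    print("selected degree:", best[1])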
Abstract:
We lay out a small open economy version of the Calvo sticky price model, and show how the equilibrium dynamics can be reduced to a simple representation in domestic inflation and the output gap. We use the resulting framework to analyze the macroeconomic implications of three alternative rule-based policy regimes for the small open economy: domestic inflation and CPI-based Taylor rules, and an exchange rate peg. We show that a key difference among these regimes lies in the relative amount of exchange rate volatility that they entail. We also discuss a special case for which domestic inflation targeting constitutes the optimal policy, and where a simple second order approximation to the utility of the representative consumer can be derived and used to evaluate the welfare losses associated with the suboptimal rules.
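For orientation, the canonical two-equation representation used in this strand of the literature (notation schematic, not taken from the paper) couples a Phillips curve in domestic inflation with a dynamic IS relation in the output gap:

\[
\pi_{H,t} = \beta\, E_t \pi_{H,t+1} + \kappa_\alpha\, x_t,
\qquad
x_t = E_t x_{t+1} - \frac{1}{\sigma_\alpha}\bigl(i_t - E_t \pi_{H,t+1} - \bar r_t\bigr),
\]

where $\pi_{H,t}$ is domestic inflation, $x_t$ the output gap, $i_t$ the nominal interest rate, $\bar r_t$ the natural rate, and the coefficients $\kappa_\alpha$ and $\sigma_\alpha$ depend on the degree of openness.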
Abstract:
A family of scaling corrections aimed at improving the chi-square approximation of goodness-of-fit test statistics in small samples, large models, and nonnormal data was proposed in Satorra and Bentler (1994). For structural equation models, Satorra-Bentler's (SB) scaling corrections are available in standard computer software. Often, however, the interest is not in the overall fit of a model, but in a test of the restrictions that a null model, say ${\cal M}_0$, implies on a less restricted one, ${\cal M}_1$. If $T_0$ and $T_1$ denote the goodness-of-fit test statistics associated with ${\cal M}_0$ and ${\cal M}_1$, respectively, then typically the difference $T_d = T_0 - T_1$ is used as a chi-square test statistic with degrees of freedom equal to the difference in the number of independent parameters estimated under the models ${\cal M}_0$ and ${\cal M}_1$. As in the case of the goodness-of-fit test, it is of interest to scale the statistic $T_d$ in order to improve its chi-square approximation in realistic, i.e., nonasymptotic and nonnormal, applications. In a recent paper, Satorra (1999) shows that the difference between two Satorra-Bentler scaled test statistics for overall model fit does not yield the correct SB scaled difference test statistic. Satorra developed an expression that permits scaling the difference test statistic, but his formula has some practical limitations, since it requires heavy computations that are not available in standard computer software. The purpose of the present paper is to provide an easy way to compute the scaled difference chi-square statistic from the scaled goodness-of-fit test statistics of models ${\cal M}_0$ and ${\cal M}_1$. A Monte Carlo study is provided to illustrate the performance of the competing statistics.
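The scaled difference statistic, as it is commonly stated, can be computed directly from the scaled and unscaled fit statistics of the two nested models: each model's scaling constant is recovered from the ratio of its unscaled to its scaled statistic, and the difference is rescaled accordingly. The helper below follows that recipe; the numbers in the example are made up.

    # Hedged helper for the scaled chi-square difference test of nested models.
    def scaled_difference(T0, Tbar0, d0, T1, Tbar1, d1):
        """Scaled difference statistic and its df for M0 (restricted) vs M1."""
        c0 = T0 / Tbar0                  # scaling correction of M0
        c1 = T1 / Tbar1                  # scaling correction of M1
        cd = (d0 * c0 - d1 * c1) / (d0 - d1)
        return (T0 - T1) / cd, d0 - d1

    # illustrative (made-up) unscaled and scaled statistics with their df
    Td, df = scaled_difference(T0=120.5, Tbar0=98.2, d0=40, T1=95.3, Tbar1=80.1, d1=35)
    print(round(Td, 2), "on", df, "df")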
Abstract:
We develop a general error analysis framework for the Monte Carlo simulation of densities for functionals in Wiener space. We also study variance reduction methods with the help of Malliavin derivatives. For this, we give some general heuristic principles which are applied to diffusion processes. A comparison with kernel density estimates is made.
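As a point of reference for the baseline mentioned in the last sentence (the Malliavin-based estimator itself is not sketched here), the snippet below simulates the terminal value of a simple one-dimensional diffusion by Euler-Maruyama and estimates its density with a Gaussian kernel. The diffusion coefficients, bandwidth rule and evaluation grid are arbitrary choices for the example.

    # Hedged illustration: Monte Carlo + kernel density estimate for a diffusion functional.
    import numpy as np

    rng = np.random.default_rng(2)
    n_paths, n_steps, T = 20000, 200, 1.0
    dt = T / n_steps

    # toy diffusion dX = 0.05 X dt + 0.2 X dW, X_0 = 1, simulated by Euler-Maruyama
    x = np.ones(n_paths)
    for _ in range(n_steps):
        x += 0.05 * x * dt + 0.2 * x * np.sqrt(dt) * rng.standard_normal(n_paths)

    def kde(samples, grid, h):
        """Gaussian kernel density estimate evaluated on a grid."""
        z = (grid[:, None] - samples[None, :]) / h
        return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

    grid = np.linspace(0.4, 2.0, 9)
    h = 1.06 * x.std() * n_paths ** (-1 / 5)      # Silverman-type bandwidth
    print(np.round(kde(x, grid, h), 3))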
Abstract:
The paper defines concepts of real wealth and saving which take into account the intertemporal index number problem that results from changing interest rates. Unlike conventional measures of real wealth, which are based on the market value of assets and ignore the index number problem, the new measure correctly reflects the changes in the welfare of households over time. An empirically operational approximation to the theoretical measure is provided and applied to US data. A major empirical finding is that US real financial wealth increased strongly in the 1980s, much more than is revealed by the market value of assets.
Abstract:
In a closed economy context there is common agreement that price inflation stabilization is one of the objectives of monetary policy. Moving to an open economy context gives rise to the coexistence of two measures of inflation: domestic inflation (DI) and consumer price inflation (CPI). Which of the two measures should be the target variable? This is the question addressed in this paper. In particular, I use a small open economy model to show that once sticky wages indexed to past CPI inflation are introduced, a completely inward-looking monetary policy is no longer optimal. I first derive a loss function from a second-order approximation of the utility function and then compute the fully optimal monetary policy under commitment. I then use the optimal monetary policy as a benchmark to compare the performance of different monetary policy rules. The main result is that once a positive degree of indexation is introduced in the model, the best-performing rule (among the Taylor-type rules considered) is the one targeting wage inflation and CPI inflation. Moreover, this rule delivers results very close to those obtained under the fully optimal monetary policy with commitment.
Abstract:
The approximants to regular continued fractions constitute `best approximations' to the numbers they converge to, in two ways known as approximations of the first and of the second kind. This property of continued fractions provides a solution to Gosper's problem of the batting average: if the batting average of a baseball player is 0.334, what is the minimum number of times he has been at bat? In this paper, we tackle, in a sense, the inverse question: given a rational number P/Q, what is the set of all numbers for which P/Q is a `best approximation' of one or the other kind? We prove that in both cases these `Optimality Sets' are intervals, and we give a precise description of their endpoints.
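The batting-average question quoted above can also be answered by a brute-force search for the smallest denominator whose rounded value reproduces the printed average; continued-fraction theory locates the same fraction as the simplest one in the rounding interval. The small search below assumes the usual round-to-three-decimals convention.

    # Brute-force companion to Gosper's batting-average question.
    import math

    def min_at_bats(avg=0.334, digits=3):
        """Smallest q such that some integer p gives p/q rounding to the given average."""
        half = 0.5 * 10 ** (-digits)
        q = 1
        while True:
            # only the integers nearest to avg*q can fall inside the rounding interval
            for p in (math.floor(avg * q), math.ceil(avg * q)):
                if avg - half <= p / q < avg + half:
                    return p, q
            q += 1

    print(min_at_bats())   # (hits, at-bats) with the fewest at-bats consistent with .334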
Abstract:
This article aims to offer a contribution to the reflection on the integration of Cape Verde into CEDEAO (the Economic Community of West African States) and on Cape Verde's rapprochement with the RUP (Outermost Regions) and, subsequently, with the European Union, drawing on the reality of a Nation that was formed, and continues to consolidate itself, out of the cooperation of peoples and cultures originating from the two geographic spaces mentioned. The discussion is due not only to the archipelago's privileged geographic position, but is also linked to cultural, political, economic, commercial and security considerations, with special emphasis on the structuring of the Nation-State in Cape Verde.
Abstract:
This article seeks to understand the regional integration of the Cape Verde archipelago in light of the contemporary debate on the Nation. The result of the crossing and convergence of peoples and cultures from two geographic spaces (Africa and Europe), the Cape Verdean Nation asserts and consolidates itself at the margin of the debates envisaged and outlined by different generations of national and foreign intellectuals, politicians and academics. These seek to understand and analyse Cape Verde's integration into the Economic Community of West African States (CEDEAO), its rapprochement with the Outermost Regions (RUP) and the Special Partnership with the European Union (EU). The analysis presented in this article, besides highlighting the archipelago's privileged geographic position, also emphasises cultural, political, economic, commercial and security aspects relevant to the construction of the Nation in Cape Verde.
Abstract:
The landslide of Rosiana is considered the largest slope movement amongst those known in historical times in Gran Canaria, Canary Islands. It has been activated at least 4 times in the last century, and in the movement of 1956, when about 3×10^6 m3 of material were involved, 250 people had to be evacuated and many buildings were destroyed. The present geological hazard has led to specific studies of the phenomenon which, once characterised, can be used as a guide for the scientific and technical works that are to be carried out in this or similar areas. This paper aims to increase the knowledge about the unstable mass of Rosiana by using geophysical techniques based on the seismic refraction method. The geophysical measurements have been interpreted with the aid of the available geomorphologic data, thus obtaining a first approximation to the geometry of the slope movements.
Abstract:
The manuscript we present here is a most interesting documentary treasure for the study of women's poetic practices in the early modern period in the Catalan-speaking territories, not only because of the number of compositions recovered from a single author (a total of 53 spiritual poems, uncatalogued and unknown until now), but also because it is one of the few autograph manuscripts by women accessible for research. The lack of studies devoted exclusively to women's poetry of the early modern period in the Catalan context obliges us to begin the work with an introductory first part devoted to questions concerning women's poetic practices in the 16th to 18th centuries, focusing on some authors from the conventual sphere, to which the manuscript under study belongs. In the second part of the work, we focus on the analysis and study of the manuscript itself. Thus, as a first approximation, we describe the contents of the notebook, written entirely in Spanish, which gathers compositions of a spiritual and devotional nature, and we outline the biographical data of the author, the Dominican nun Sor Eulària Teixidor. Drawing on the interesting studies that have appeared in recent years on female conventual literature, we attempt to link this manuscript to the varied monastic production written by numerous nuns of the period under the direction of the confessor.
Abstract:
This work proposes novel network analysis techniques for multivariate time series. We define the network of a multivariate time series as a graph where vertices denote the components of the process and edges denote nonzero long-run partial correlations. We then introduce a two-step LASSO procedure, called NETS, to estimate high-dimensional sparse long-run partial correlation networks. This approach is based on a VAR approximation of the process and allows the long-run linkages to be decomposed into the contributions of the dynamic and contemporaneous dependence relations of the system. The large-sample properties of the estimator are analysed, and we establish conditions for consistent selection and estimation of the nonzero long-run partial correlations. The methodology is illustrated with an application to a panel of U.S. blue chips.
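A heavily simplified sketch in the same spirit as a two-step LASSO network estimator (it is not the NETS procedure): scikit-learn's Lasso is used first to fit a sparse VAR(1) equation by equation, and then nodewise on the VAR residuals to flag contemporaneous dependence. The data-generating process, penalty levels and symmetrisation rule are all arbitrary choices for the example.

    # Hedged two-step LASSO sketch: sparse VAR, then a nodewise residual network.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)
    T, p = 300, 10
    A = 0.3 * np.eye(p)                         # simple diagonal VAR(1) for synthetic data
    eps = rng.standard_normal((T, p))
    Y = np.zeros((T, p))
    for t in range(1, T):
        Y[t] = Y[t - 1] @ A.T + eps[t]

    Xlag, Ycur = Y[:-1], Y[1:]

    # step 1: sparse VAR coefficients, one LASSO regression per equation
    A_hat = np.vstack([Lasso(alpha=0.05).fit(Xlag, Ycur[:, i]).coef_ for i in range(p)])
    resid = Ycur - Xlag @ A_hat.T

    # step 2: nodewise LASSO on residuals -> sparse contemporaneous dependence graph
    adj = np.zeros((p, p), dtype=bool)
    for i in range(p):
        others = np.delete(np.arange(p), i)
        coef = Lasso(alpha=0.05).fit(resid[:, others], resid[:, i]).coef_
        adj[i, others] = coef != 0
    adj = adj | adj.T                            # symmetrise with an "or" rule

    print("estimated number of edges:", int(np.triu(adj, 1).sum()))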