17 results for "Lipschitzian bounds"
in the Repositório Digital da Fundação Getúlio Vargas - FGV
Abstract:
Bounds on the distribution function of the sum of two random variables with known marginal distributions, obtained by Makarov (1981), can be used to bound the cumulative distribution function (c.d.f.) of individual treatment effects. Identification of the distribution of individual treatment effects is important for policy purposes if we are interested in functionals of that distribution, such as the proportion of individuals who gain from the treatment and the expected gain from the treatment for these individuals. Makarov bounds on the c.d.f. of the individual treatment effect distribution are pointwise sharp, i.e. they cannot be improved at any single point of the distribution. We show that the Makarov bounds are not uniformly sharp. Specifically, we show that the Makarov bounds on the region that contains the c.d.f. of the treatment effect distribution at two (or more) points can be improved, and we derive the smallest set for the c.d.f. of the treatment effect distribution at two (or more) points. An implication is that the Makarov bounds on a functional of the c.d.f. of the individual treatment effect distribution are not best possible.
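As an illustration of how the pointwise Makarov bounds are computed, here is a minimal sketch for the treatment effect Δ = Y1 − Y0: the bounds at a point δ are sup_y max(F1(y) − F0(y − δ), 0) and 1 + inf_y min(F1(y) − F0(y − δ), 0). The normal marginals, the grid, and the function names are assumptions for this example, not the paper's construction:

```python
import math
import numpy as np

def norm_cdf(x, mu=0.0, sigma=1.0):
    """Normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def makarov_bounds(F1, F0, delta, grid):
    """Pointwise Makarov bounds on P(Y1 - Y0 <= delta) from the marginals alone."""
    diff = np.array([F1(y) - F0(y - delta) for y in grid])
    lower = max(diff.max(), 0.0)        # sup_y max(F1(y) - F0(y - delta), 0)
    upper = min(1.0 + diff.min(), 1.0)  # 1 + inf_y min(F1(y) - F0(y - delta), 0)
    return lower, upper

# Illustration: Y1 ~ N(1, 1), Y0 ~ N(0, 1), so the bounds at delta = 1 are [0, 1]
grid = np.linspace(-10.0, 10.0, 2001)
F1 = lambda y: norm_cdf(y, mu=1.0)
F0 = lambda y: norm_cdf(y, mu=0.0)
lb, ub = makarov_bounds(F1, F0, 1.0, grid)
lb3, ub3 = makarov_bounds(F1, F0, 3.0, grid)  # lower bound is positive here
```

At δ = 1 the two marginals give no information (the bounds are the trivial [0, 1]), while at δ = 3 the lower bound rises to roughly Φ(1) − Φ(−1) ≈ 0.68, illustrating why single-point sharpness says nothing about sharpness over several points jointly.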
Abstract:
This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.
Abstract:
When estimating policy parameters, also known as treatment effects, the assignment-to-treatment mechanism almost always causes endogeneity and thus biases many of these policy parameter estimates. Additionally, heterogeneity in program impacts is more likely the norm than the exception for most social programs. In situations where these issues are present, Marginal Treatment Effect (MTE) estimation uses an instrument to avoid assignment bias and simultaneously to account for effects that are heterogeneous across individuals. Although this parameter is point identified in the literature, the assumptions required for identification may be strong. Given that, we use weaker assumptions in order to partially identify the MTE, i.e. to establish a methodology for estimating bounds on the MTE, implementing it computationally and showing results from Monte Carlo simulations. The partial identification we perform requires the MTE to be a monotone function of the propensity score, which is a reasonable assumption in several economic applications, and the simulation results show that it is possible to obtain informative bounds even in restricted cases where point identification is lost. Additionally, in situations where the estimated bounds are not informative and traditional point identification is lost, we suggest a more generic method to point-estimate the MTE using the Moore-Penrose pseudo-inverse matrix, achieving better results than traditional methods.
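The pseudo-inverse idea can be sketched on a toy discretization: if observed moments are integrals of the MTE up to a few propensity-score values (local-IV logic), but the propensity score takes only a handful of values, the linear system is underdetermined, and `np.linalg.pinv` returns its minimum-norm least-squares solution. The grid, the made-up MTE, and the noiseless moments below are assumptions for illustration, not the authors' procedure:

```python
import numpy as np

# Stylized setting: MTE(u) on a grid of u in (0, 1); observed moments are
# integrals of the MTE up to a few propensity-score values.
K = 50
u = (np.arange(K) + 0.5) / K
mte_true = 1.0 - u                  # a made-up monotone MTE, for illustration

A = np.tril(np.ones((K, K))) / K    # row j approximates the integral up to u_j
rows = np.arange(4, K, 5)           # the propensity score takes only 10 values
b = A[rows] @ mte_true              # observed (noiseless) moments

# Underdetermined system: the pseudo-inverse gives the minimum-norm solution
mte_hat = np.linalg.pinv(A[rows]) @ b
```

Because the system is consistent, the recovered curve reproduces the observed moments exactly and, among all solutions, has the smallest Euclidean norm; that regularizing property is what makes the pseudo-inverse a natural fallback when point identification fails.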
Abstract:
We discuss a general approach to building non-asymptotic confidence bounds for stochastic optimization problems. Our principal contribution is the observation that a Sample Average Approximation of a problem supplies upper and lower bounds for the optimal value of the problem which are essentially better than the quality of the corresponding optimal solutions. At the same time, such bounds are more reliable than “standard” confidence bounds obtained through the asymptotic approach. We also discuss bounding the optimal value of MinMax Stochastic Optimization and stochastically constrained problems. We conclude with a small simulation study illustrating the numerical behavior of the proposed bounds.
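The flavor of such sample-average bounds can be sketched on the toy problem min_x E[(x − ξ)²] with ξ standard normal, whose optimal value is 1: averaging SAA optimal values over replications bounds the true value from below in expectation, while evaluating any candidate solution on a fresh sample bounds it from above. The replication counts and the 1.96 multiplier are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def saa_value_and_sol(sample):
    # SAA of min_x E[(x - xi)^2]; the optimizer is the sample mean
    x_hat = sample.mean()
    return ((sample - x_hat) ** 2).mean(), x_hat

M, N = 200, 100
vals, sols = [], []
for _ in range(M):
    v, x = saa_value_and_sol(rng.normal(size=N))
    vals.append(v)
    sols.append(x)

# Statistical lower bound: E[SAA optimal value] <= true optimal value (= 1 here)
lower = np.mean(vals) - 1.96 * np.std(vals, ddof=1) / np.sqrt(M)

# Statistical upper bound: evaluate one candidate solution on a fresh sample
x_cand = sols[0]
fresh = rng.normal(size=20_000)
costs = (x_cand - fresh) ** 2
upper = costs.mean() + 1.96 * costs.std(ddof=1) / np.sqrt(fresh.size)
```

With high probability the true optimal value 1 lies in [lower, upper], and neither endpoint relies on asymptotic distribution theory for the optimizer itself.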
Abstract:
We provide a review of the stochastic discount factor bounds usually applied to diagnose asset pricing models. In particular, we discuss the bounds used to analyze the disaster model of Barro (2006), since the stochastic discount factor bounds applied to study the performance of disaster models usually follow that approach. We first present the entropy bounds that diagnose the disaster model under analysis, namely the methods of Almeida and Garcia (2012, 2016) and Ghosh et al. (2016). We then discuss how their results for the disaster model relate to each other, and also present the findings of similar methodologies that provide different evidence on the performance of the framework developed by Barro (2006).
Abstract:
For strictly quasi-concave differentiable utility functions, demand is shown to be differentiable almost everywhere if marginal utilities are pointwise Lipschitzian. For concave utility functions, demand is differentiable almost everywhere in the case of differentiable additively separable utility or in the case of quasi-linear utility.
Abstract:
We prove the existence of a competitive equilibrium for exchange economies with a measure space of agents and for which the commodity space is ℓp, 1 < p < +∞. A vector x = (x_n) in ℓp may be interpreted as a security which promises to deliver x_n units of numeraire at state (or date) n. Under assumptions imposing uniform bounds on marginal rates of substitution, positive results on core-Walras equivalence were established in Rustichini-Yannelis [21] and Podczeck [20]. In this paper we prove that under similar assumptions on marginal rates of substitution, the set of competitive equilibria (and thus the core) is non-empty.
Abstract:
We study an economy where there are two types of assets. Consumers’ promises are the primitive defaultable assets secured by collateral chosen by the consumers themselves. The purchase of these personalized assets by financial intermediaries is financed by selling back derivatives to consumers. We show that nonarbitrage prices of primitive assets are strict submartingales, whereas nonarbitrage prices of derivatives are supermartingales. Next we establish existence of equilibrium, without imposing bounds on short sales. The nonconvexity of the budget set is overcome by considering a continuum of agents.
Abstract:
This work analyzes the main graduate and research centers in economics located in São Paulo and Rio de Janeiro, based on a survey of documents, programs, regulations, and publications of their leading figures. We also draw on interviews with these figures to understand how decision-making processes were seen from "inside" the institution. The interviewee's life history lets us enter the world of emotions, at the limits of the historical actor's rationality. By breaking with simplistic schematism, we can uncover the relations between the individual and the historical network. Memory, with its lapses, distortions, and inversions, becomes an element of analysis for explaining the present through an understanding of the past as seen by those who lived through the events.
Abstract:
We assess the effectiveness of the national minimum wage policy in the formal and informal segments of the Brazilian labor market. Our technique consists of mapping corner solutions produced by the minimum wage policy, which are then used as a targeting mechanism in simulating upper bounds on the effects of minimum wage increases on poverty measures in Brazil. We highlight two "informal effects" of the minimum wage: i) the high percentage of workers without a formal labor contract earning exactly one minimum wage, which amplifies the poverty-alleviating effects of the minimum wage; and ii) the observation of pay arrangements that use the minimum wage as a numeraire, in particular in the formal sector.
Abstract:
This article is motivated by the prominence of one-sided (S,s) rules in the literature and by the unrealistically strict conditions necessary for their optimality. It aims to assess whether one-sided pricing rules could be an adequate individual rule for macroeconomic models, despite their suboptimality. It seeks to answer two questions. First, since agents are not fully rational, is it plausible that they use such a non-optimal rule? Second, even if agents adopt optimal rules, is the economist committing a serious mistake by assuming that agents use one-sided (S,s) rules? Using parameters based on real-economy data, we found that, since the additional cost involved in adopting the simpler rule is relatively small, it is plausible that one-sided rules are used in practice. We also found that suboptimal one-sided rules and optimal two-sided rules are similar in practice, since one of the bounds is not reached very often. We concluded that the macroeconomic effects when one-sided rules are suboptimal are similar to the results obtained under optimal two-sided rules, when the two rules are close to each other. However, this is true only when one-sided rules are used in a context where they are not optimal.
Abstract:
We study the implications of the absence of arbitrage in a two-period economy where default is allowed and assets are secured by collateral chosen by the borrowers. We show that nonarbitrage sale prices of assets are submartingales, whereas nonarbitrage purchase prices of the derivatives (secured by the pool of collateral) are supermartingales. We use these nonarbitrage conditions to establish the existence of equilibrium, without imposing bounds on short sales. The nonconvexity of the budget set is overcome by considering a continuum of agents. Our results are particularly relevant for the collateralized mortgage obligation (CMO) markets.
Abstract:
We study the asset pricing implications of an endowment economy when agents can default on contracts that would leave them otherwise worse off. We specialize and extend the environment studied by Kocherlakota (1995) and Kehoe and Levine (1993) to make it comparable to standard studies of asset pricing. We completely characterize efficient allocations for several special cases. We introduce a competitive equilibrium with complete markets and with endogenous solvency constraints. These solvency constraints are such as to prevent default, at the cost of reduced risk sharing. We show a version of the classical welfare theorems for this equilibrium definition. We characterize the pricing kernel and compare it with the one for economies without participation constraints: interest rates are lower and risk premia can be bigger depending on the covariance of the idiosyncratic and aggregate shocks. Quantitative examples show that for reasonable parameter values the relevant marginal rates of substitution fall within the Hansen-Jagannathan bounds.
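The Hansen-Jagannathan bound referenced here says that any stochastic discount factor m pricing an excess return Re must satisfy sigma(m)/E[m] >= |E[Re]|/sigma(Re). A minimal numerical sketch, with made-up round-number moments rather than the paper's calibration:

```python
# Hansen-Jagannathan bound: sigma(m) / E[m] >= |E[Re]| / sigma(Re)
mean_excess = 0.06        # assumed mean excess return (illustrative)
vol_excess = 0.16         # assumed excess-return volatility (illustrative)
sharpe = abs(mean_excess) / vol_excess

Em = 0.97                 # assumed E[m], consistent with a ~3% risk-free rate
required_sigma_m = Em * sharpe   # minimum SDF volatility the data demand

sigma_m_model = 0.40      # a model's SDF volatility (illustrative)
inside_bounds = sigma_m_model >= required_sigma_m
```

Here the data demand an SDF volatility of about 0.36, so a model generating sigma(m) = 0.40 falls inside the bounds; checking a model's marginal rates of substitution against this region is exactly the diagnostic the abstract describes.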
Abstract:
Indexing is a passive investment strategy in which the investor weights his portfolio to match the performance of a broad-based index. Since several studies showed that indexed portfolios have consistently outperformed active management strategies over the last decades, an increasing number of investors has become interested in indexed portfolios lately. Brazilian financial institutions do not offer indexed portfolios to their clients at this point in time. In this work we propose the use of indexed portfolios to track the performance of two of the most important Brazilian stock indexes: the IBOVESPA and the FGV100. We test the tracking performance of our model by a historical simulation. We apply several statistical tests to the data to verify how many stocks should be used to control the portfolio tracking error within user-specified bounds.
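Checking whether a candidate indexed portfolio keeps its tracking error within a user-specified bound can be sketched as follows; the simulated returns, the equal weights, and the 5% annualized bound are assumptions for illustration, not the paper's data or model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily returns: 10 stocks, and an index close to their average
T, n = 250, 10
stock_r = rng.normal(0.0005, 0.02, size=(T, n))
index_r = stock_r.mean(axis=1) + rng.normal(0.0, 0.001, size=T)

w = np.full(n, 1.0 / n)          # equal-weight tracking portfolio
port_r = stock_r @ w             # portfolio daily returns
active = port_r - index_r        # active (tracking) return

tracking_error = active.std(ddof=1) * np.sqrt(250)  # annualized
within_bound = tracking_error <= 0.05               # user-specified 5% bound
```

In practice the portfolio holds only a subset of the index constituents, so the exercise is repeated for increasing numbers of stocks until the tracking error stays inside the bound.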
Abstract:
This paper presents semiparametric estimators for treatment effects parameters when selection to treatment is based on observable characteristics. The parameters of interest in this paper are those that capture summarized distributional effects of the treatment. In particular, the focus is on the impact of the treatment calculated by differences in inequality measures of the potential outcomes of receiving and not receiving the treatment. These differences are called here inequality treatment effects. The estimation procedure involves a first non-parametric step in which the probability of receiving treatment given covariates, the propensity score, is estimated. Using the reweighting method to estimate parameters of the marginal distribution of potential outcomes, in the second step weighted sample versions of inequality measures are computed. Calculations of semiparametric efficiency bounds for inequality treatment effects parameters are presented. Root-N consistency, asymptotic normality, and the achievement of the semiparametric efficiency bound are shown for the semiparametric estimators proposed. A Monte Carlo exercise is performed to investigate the behavior in finite samples of the estimator derived in the paper.
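The reweighting (second) step can be sketched as follows, with variance standing in for the inequality measure; here the propensity score is taken as known, whereas the paper estimates it nonparametrically in the first step, and the data-generating process is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
X = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-X))     # propensity score (known in this toy example)
D = rng.random(n) < p            # treatment indicator
Y1 = 1.0 + X + rng.normal(scale=2.0, size=n)   # more dispersed treated outcome
Y0 = X + rng.normal(scale=1.0, size=n)
Y = np.where(D, Y1, Y0)          # only one potential outcome is observed

def weighted_var(y, w):
    m = np.average(y, weights=w)
    return np.average((y - m) ** 2, weights=w)

w1 = D / p                       # reweights the treated to the full population
w0 = (1 - D) / (1 - p)           # reweights the untreated likewise

# Inequality treatment effect, with variance as the inequality measure:
# population values are Var(Y1) = 5 and Var(Y0) = 2, so the effect is about 3
ite = weighted_var(Y, w1) - weighted_var(Y, w0)
```

The same weights support any inequality measure of the marginal distributions (Gini, quantile ratios, etc.); variance is used here only to keep the sketch short and checkable.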