10 results for Inference Mechanism

in the Repositório Digital da Fundação Getúlio Vargas (FGV)


Relevance:

20.00%

Publisher:

Abstract:

In this paper we propose applying the equilibrium notions from the recent literature on robust mechanism design with endogenous information acquisition to a risk-sharing problem between two agents. Through this example we are able to motivate the use of this equilibrium notion, as well as to discuss the effects of introducing an information-dependent participation constraint. The simplicity of the model allows us to characterize the possibility of implementing the Pareto-efficient allocation in terms of the cost of information acquisition. Moreover, we show that the precision of the information can have a negative effect on the implementation of the efficient allocation. Finally, we give two specific examples of situations to which this model applies.

Relevance:

20.00%

Publisher:

Abstract:

A model is presented in which banks accept deposits of fiat money and intermediate capital. Although theories about the coexistence of money and credit are inherently difficult, the model offers a simple explanation for the dual role of financial institutions: banks are well monitored and can credibly allow fiat-money withdrawals to those who need them, thus qualifying to become safe brokers of idle capital. The model shares some features with those of Diamond and Dybvig (1983) and Kiyotaki and Wright (1989).

Relevance:

20.00%

Publisher:

Abstract:

We study semiparametric two-step estimators which have the same structure as parametric doubly robust estimators in their second step. The key difference is that we do not impose any parametric restriction on the nuisance functions that are estimated in a first stage, but retain a fully nonparametric model instead. We call these estimators semiparametric doubly robust estimators (SDREs), and show that they possess superior theoretical and practical properties compared to generic semiparametric two-step estimators. In particular, our estimators have substantially smaller first-order bias, allow for a wider range of nonparametric first-stage estimates, rate-optimal choices of smoothing parameters and data-driven estimates thereof, and their stochastic behavior can be well-approximated by classical first-order asymptotics. SDREs exist for a wide range of parameters of interest, particularly in semiparametric missing data and causal inference models. We illustrate our method with a simulation exercise.
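The doubly robust structure the abstract builds on can be sketched in a toy missing-data simulation (our own illustration, not the authors' estimator; the Nadaraya-Watson first stage, the bandwidth, and the trimming at 0.05 are all illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Toy missing-data design: Y is observed only when D = 1, and
# missingness depends on the covariate X (missing at random).
x = rng.uniform(-1.0, 1.0, n)
p = 1.0 / (1.0 + np.exp(-x))            # true propensity P(D = 1 | X)
d = rng.binomial(1, p)
y = 2.0 * x + rng.normal(0.0, 1.0, n)   # E[Y] = 0 by construction

def nw(x0, xs, ys, h=0.2):
    """Nadaraya-Watson kernel regression of ys on xs, evaluated at x0."""
    w = np.exp(-0.5 * ((x0[:, None] - xs[None, :]) / h) ** 2)
    return w @ ys / w.sum(axis=1)

# Fully nonparametric first stage: propensity e(x) and regression m(x),
# the latter fitted on the observed subsample only.
e_hat = np.clip(nw(x, x, d.astype(float)), 0.05, 0.95)
m_hat = nw(x, x[d == 1], y[d == 1])

# Doubly robust (AIPW) second step for E[Y]: consistent if either the
# propensity or the outcome regression is estimated well.
psi = m_hat + d * (y - m_hat) / e_hat   # y enters only where d == 1
mu_hat = psi.mean()
print(mu_hat)
```

The second step has exactly the parametric doubly robust form; the paper's point is what happens when the first stage is left fully nonparametric, as with the kernel smoother above.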

Relevance:

20.00%

Publisher:

Abstract:

The present article initiates a systematic study of the behavior of a strictly increasing, C², utility function u(a), seen as a function of agents' types, a, when the set of types, A, is a compact, convex subset of ℝᵐ. When A is an m-dimensional rectangle, it shows that there is a diffeomorphism H of A such that the function U = u ∘ H is strictly increasing, C², and strictly convex. Moreover, when A is a strictly convex level set of a nowhere singular function, there exists a change of coordinates H such that B = H⁻¹(A) is a strictly convex set and U = u ∘ H : B → ℝ is a strictly convex function, as long as a characteristic number of u is smaller than a characteristic number of A. Therefore, a utility function can be assumed convex in agents' types without loss of generality in a wide variety of economic environments.
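A one-dimensional illustration of the rectangle case may help fix ideas (this toy example is ours, not the article's): a strictly concave utility becomes strictly convex after a change of coordinates.

```latex
% m = 1, with a strictly concave utility on A = [1, 4]:
u(a) = \sqrt{a}, \qquad A = [1, 4] \subset \mathbb{R}.
% Take the diffeomorphism H : B \to A given by
H(b) = e^{2b}, \qquad B = H^{-1}(A) = [0, \ln 2].
% Then
U(b) = (u \circ H)(b) = \sqrt{e^{2b}} = e^{b},
% which is strictly increasing and strictly convex on B,
% even though u itself is strictly concave on A.
```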

Relevance:

20.00%

Publisher:

Abstract:

Historically, payment systems and capital intermediation interact. Friedman (1959), and many observers of bank instabilities, have advocated separating depositary from credit institutions. Today, his proposal confronts an ever-increasing provision of inside money and a shortage of monetary models of bank intermediation. In this paper, we evaluate the proposal from a new angle, with a model in which isolating a safe payments system from commercial intermediation undermines information complementarities in banking activities. Some features of the environment resemble the models in Diamond and Dybvig (1983) and Kiyotaki and Wright (1989).

Relevance:

20.00%

Publisher:

Abstract:

In Brazil, the recent reformulation of the National High School Exam (ENEM) and the creation of the Unified Selection System (SISU), a centralized admission mechanism that allocates students to institutions, brought about relevant changes in higher education. In this article, we investigate the effects of the introduction of SISU on the migration and dropout of incoming students, using data from the Higher Education Census. To do so, we exploit the temporal variation in institutions' adoption of SISU and find that adopting SISU is associated with an increase in incoming students' mobility across municipalities and across states of 3.8 percentage points (p.p.) and 1.6 p.p., respectively. We also find an increase in dropout of 4.5 p.p. Our results indicate that migration costs and strategic behavior are important determinants of student dropout.

Relevance:

20.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that, when there is variation in the number of observations per group, inference methods designed to work with few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and propose a modification of the test statistic that provides a better heteroskedasticity correction in our simulations.
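The mechanism behind the heteroskedasticity, that group means computed from more observations have lower variance, can be seen in a minimal simulation (our own sketch; the group sizes and number of replications are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each cell of the group x time aggregate regression is a group mean
# over n_g individuals, so its error variance is sigma^2 / n_g even
# when individual-level errors are homoskedastic.  Unequal group
# sizes therefore make the aggregate errors heteroskedastic.
sigma, reps = 1.0, 2000
n_small, n_large = 25, 2500

means_small = rng.normal(0.0, sigma, (reps, n_small)).mean(axis=1)
means_large = rng.normal(0.0, sigma, (reps, n_large)).mean(axis=1)

var_small = means_small.var()   # close to sigma^2 / 25   = 0.04
var_large = means_large.var()   # close to sigma^2 / 2500 = 0.0004
ratio = var_small / var_large   # close to n_large / n_small = 100
```

A test that treats all group-level errors as identically distributed will thus understate the variance of small treated groups and overstate that of large ones, which is the over-/under-rejection pattern the paper documents.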

Relevance:

20.00%

Publisher:

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that, when there is variation in the number of observations per group, inference methods designed to work with few treated groups tend to over-reject the null hypothesis when the treated groups are small relative to the control groups, and to under-reject it when they are large. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group × time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing when there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed; instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups. We also propose a permutation test for the synthetic control estimator that provides a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).