850 results for optimal instruments
Abstract:
This paper considers the general problem of Feasible Generalized Least Squares Instrumental Variables (FGLS-IV) estimation using optimal instruments. First we summarize the sufficient conditions for the FGLS-IV estimator to be asymptotically equivalent to an optimal GLS-IV estimator. Then we specialize to stationary dynamic systems with stationary VAR errors, and use the sufficient conditions to derive new moment conditions for these models. These moment conditions produce useful IVs from the lagged endogenous variables, despite the correlation between errors and endogenous variables. This use of the information contained in the lagged endogenous variables expands the class of IV estimators under consideration and thereby potentially improves both the asymptotic and the small-sample efficiency of the optimal IV estimator in the class. Some Monte Carlo experiments compare the new methods with those of Hatanaka [1976]. For the DGP used in the Monte Carlo experiments, asymptotic efficiency is strictly improved by the new IVs, and experimental small-sample efficiency is improved as well.
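For reference, a generic efficient linear IV estimator of this type can be written in standard textbook notation (not necessarily the paper's exact formulation): for the model $y = X\beta + u$ with instrument matrix $Z$ satisfying $\mathrm{E}[Z'u] = 0$ and error covariance $\Omega$,
$$\hat\beta \;=\; \bigl(X'Z\,(Z'\Omega Z)^{-1}Z'X\bigr)^{-1} X'Z\,(Z'\Omega Z)^{-1}Z'y .$$
"Optimal instruments" then refers to choosing the columns of $Z$ (here augmented with lagged endogenous variables) so that the asymptotic variance of $\hat\beta$ is minimized within the class considered.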
Abstract:
In this paper, we use identification-robust methods to assess the empirical adequacy of a New Keynesian Phillips Curve (NKPC) equation. We focus on the Gali and Gertler (1999) specification, on both U.S. and Canadian data. Two variants of the model are studied: one based on a rational-expectations assumption, and a modification of the latter that uses survey data on inflation expectations. The results based on these two specifications exhibit sharp differences concerning: (i) identification difficulties, (ii) backward-looking behavior, and (iii) the frequency of price adjustments. Overall, we find some support for the hybrid NKPC for the U.S., whereas the model is not suited to Canada. Our findings underscore the need to employ identification-robust inference methods in the estimation of expectations-based dynamic macroeconomic relations.
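For reference, the hybrid specification studied in Gali and Gertler (1999) is usually written (standard notation; the exact parameterization in the paper may differ) as
$$\pi_t \;=\; \lambda\, mc_t \;+\; \gamma_f\, \mathrm{E}_t[\pi_{t+1}] \;+\; \gamma_b\, \pi_{t-1},$$
where $\pi_t$ is inflation and $mc_t$ is real marginal cost; the backward-looking weight $\gamma_b$ and the implied frequency of price adjustment are the objects whose identification is at issue in the abstract above.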
Abstract:
This paper examines the optimal design of climate change policies in a context where governments want to encourage the private sector to undertake significant immediate investment in developing cleaner technologies, but the carbon taxes and other environmental policies that could in principle stimulate such investment will be imposed over a very long future. The conventional claim by environmental economists is that environmental policies alone are sufficient to induce firms to undertake optimal investment. However, this argument requires governments to be able to commit to these future taxes, and it is far from clear that governments have this degree of commitment. We assume instead that governments cannot commit, and so both they and the private sector have to contemplate the possibility of there being governments in power in the future that give different (relative) weights to the environment. We show that this lack of commitment has a significant asymmetric effect. Compared to the situation where governments can commit, it increases the incentive of the current government to have the investment undertaken, but reduces the incentive of the private sector to invest. Consequently, governments may need to use additional policy instruments, such as R&D subsidies, to stimulate the required investment.
Abstract:
The choice of a research path in attacking scientific and technological problems is a significant component of firms’ R&D strategy. One of the findings of the patent races literature is that, in a competitive market setting, firms’ noncooperative choices of research projects display an excessive degree of correlation, as compared to the socially optimal level. The paper revisits this question in a context in which firms have access to trade secrets, in addition to patents, to assert intellectual property rights (IPR) over their discoveries. We find that the availability of multiple IPR protection instruments can move the paths chosen by firms engaged in an R&D race toward the social optimum.
Abstract:
The economic literature on crime and punishment focuses on the trade-off between the probability and the severity of punishment, and suggests that detection probability and fines are substitutes. In this paper it is shown that, in the presence of substantial underdeterrence caused by costly detection and punishment, these instruments may become complements. When offenders are poor, the deterrent value of monetary sanctions is low, so the government invests little in detection. If offenders are rich, however, the deterrent value of monetary sanctions is high, so it is more profitable to prosecute them.
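As an illustrative sketch in standard Becker-style notation (not taken from the paper): an offender with wealth $w$ and private gain $g$ is deterred only if $g \le p \cdot \min(f, w)$, where $p$ is the detection probability and $f$ the fine. When $w$ is small, the effective sanction is capped at $w$, so raising $f$ adds no deterrence and costly detection is barely worth funding; when $w$ is large, each successful prosecution carries a high sanction value, making additional investment in detection more profitable, which is the complementarity described above.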
Abstract:
This paper provides, from a theoretical and quantitative point of view, an explanation of why taxes on capital returns are high (around 35%) by analyzing the optimal fiscal policy in an economy with intergenerational redistribution. For this purpose, the government is modeled explicitly and can choose (and commit to) an optimal tax policy in order to maximize society's welfare. In an infinitely lived economy with heterogeneous agents, the long-run optimal capital tax is zero. If heterogeneity is due to the existence of overlapping generations, this result in general no longer holds. I provide sufficient conditions for zero capital and labor taxes, and show that a general class of preferences, commonly used in the macro and public finance literature, violates these conditions. For a version of the model calibrated to the US economy, the main results are: first, if the government is restricted to a set of instruments, the observed fiscal policy cannot be dismissed as suboptimal, and capital taxes are positive and quantitatively relevant. Second, if the government can use age-specific taxes for each generation, then the optimal age profile of capital taxes implies subsidizing the asset returns of younger generations and taxing at higher rates the asset returns of older ones.
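For context, the zero-tax benchmark the abstract refers to can be sketched in standard notation (not the paper's exact model). With an infinitely lived household, the Euler equation under a linear capital income tax $\tau_{k}$ is
$$u'(c_t) \;=\; \beta\, u'(c_{t+1})\,\bigl(1 + (1-\tau_{k,t+1})\, r_{t+1}\bigr),$$
and the Chamley-Judd argument shows that a permanently positive $\tau_k$ compounds into an ever-growing intertemporal wedge, so the Ramsey planner sets $\tau_k = 0$ in the long run; with overlapping generations this logic no longer applies, which is the paper's point of departure.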
Abstract:
Health literacy is defined as "the degree to which individuals have the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions." Low health literacy mainly affects certain at-risk populations, limiting access to care, interaction with caregivers, and self-management. Although screening instruments exist, their routine use is not recommended; the interventions advocated in practice consist instead in reducing barriers to patient-caregiver communication. The aim is thus to take into account not only the population's health literacy but also the communication skills of a health system that is becoming increasingly complex.
Abstract:
Motivated by the Chinese experience, we analyze a semi-open economy where the central bank has access to international capital markets but the private sector does not. This enables the central bank to choose an interest rate different from the international rate. We examine the optimal policy of the central bank by modelling it as a Ramsey planner who can choose the level of domestic public debt and of international reserves. The central bank can improve the savings opportunities of credit-constrained consumers, modelled as in Woodford (1990). We find that in a steady state it is optimal for the central bank to replicate the open economy, i.e., to issue debt financed by the accumulation of reserves so that the domestic interest rate equals the foreign rate. When the economy is in transition, however, a rapidly growing economy has higher welfare without capital mobility, and the optimal interest rate differs from the international rate. We argue that the domestic interest rate should be temporarily above the international rate. We also find that capital controls can still help reach the first best when the planner has more fiscal instruments.
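A minimal sketch of the mechanism, under assumed notation not taken from the paper: the central bank issues domestic bonds $B_t$ paying the domestic rate $r_t$ in order to purchase foreign reserves $R_t$ earning the world rate $r^*$, so its flow profit is $r^* R_t - r_t B_t$. By scaling $B_t$ it can supply the saving instruments that credit-constrained households demand; the steady-state result above corresponds to expanding $B_t$ until $r_t = r^*$, replicating the open economy, while during a fast-growth transition the planner keeps $r_t$ temporarily above $r^*$.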
Abstract:
This paper considers an alternative perspective on China's exchange rate policy. It studies a semi-open economy where the private sector has no access to international capital markets but the central bank has full access. Moreover, it assumes limited financial development, generating a large demand for saving instruments from the private sector. The paper analyzes the optimal exchange rate policy by modeling the central bank as a Ramsey planner. Its main result is that, in a growth acceleration episode, it is optimal to have an initial real depreciation of the currency combined with an accumulation of reserves, which is consistent with the Chinese experience. This depreciation is followed by an appreciation in the long run. The paper also shows that the optimal exchange rate path is close to the one that would result in an economy with full capital mobility and no central bank intervention.
Abstract:
In this paper we investigate the optimal choice of prices and/or exams by universities in the presence of credit constraints. We first compare the optimal behavior of a public, welfare-maximizing monopoly and a private, profit-maximizing monopoly. Then we model competition between a public and a private institution and investigate the new role of exams/prices in this environment. We find that, under certain circumstances, the public university may have an interest in raising tuition fees above minimum levels if it cares about global welfare. This will be the case provided that (i) the private institution has higher quality and uses only prices to select applicants, or (ii) the private institution has lower quality and also uses exams to select students. When this is the case, there are efficiency grounds for raising public prices.
Abstract:
Optimization of quantum measurement processes plays a pivotal role in carrying out better, more accurate or less disruptive measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measuring processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair where one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted quantum device sets, covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are listed, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published anywhere before, we study the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices, namely boundariness, which measures how 'close' to the algebraic boundary of the device set a quantum apparatus is, and the robustness of incompatibility, which quantifies the level of incompatibility for a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible. Boundariness is further associated with minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
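For concreteness, one common way to formalize the robustness of incompatibility (standard in the convex-analysis literature; the thesis' exact definition may differ) is
$$R(A,B) \;=\; \inf\Bigl\{\, t \ge 0 : \exists\ \text{devices } N_A, N_B \ \text{such that}\ \tfrac{1}{1+t}(A + t N_A)\ \text{and}\ \tfrac{1}{1+t}(B + t N_B)\ \text{are compatible} \Bigr\},$$
i.e., the least relative amount of noise that has to be mixed into the pair $(A,B)$ before a joint device exists.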
Abstract:
This dissertation deals with the problem of making inference when there is weak identification in instrumental variables regression models. More specifically, we are interested in one-sided hypothesis testing for the coefficient of the endogenous variable when the instruments are weak. The focus is on conditional tests based on the likelihood ratio, score, and Wald statistics. Theoretical and numerical work shows that the conditional t-test based on the two-stage least squares (2SLS) estimator performs well even when the instruments are weakly correlated with the endogenous variable. The conditional approach achieves correct size uniformly, and when the population F-statistic is as small as two, its power is near the power envelopes for similar and non-similar tests. This finding is surprising considering the poor performance of two-sided conditional t-tests found in Andrews, Moreira and Stock (2007). Given this counterintuitive result, we propose novel two-sided t-tests which are approximately unbiased and can perform as well as the conditional likelihood ratio (CLR) test of Moreira (2003).
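As a minimal illustration of the building block behind these tests, the following Python sketch computes the 2SLS estimate and its conventional t-statistic for a single endogenous regressor with no exogenous controls (an assumed setup for illustration; the conditioning step that makes the t-test robust to weak instruments is not shown):

import numpy as np

def tsls_t(y, x, Z):
    # First-stage fitted values: projection of the endogenous regressor x on the instruments Z.
    x_hat = Z @ np.linalg.solve(Z.T @ Z, Z.T @ x)
    # 2SLS point estimate for y = x*beta + u.
    beta = (x_hat @ y) / (x_hat @ x)
    u = y - x * beta
    # Conventional homoskedastic 2SLS standard error.
    sigma2 = u @ u / (len(y) - 1)
    se = np.sqrt(sigma2 / (x_hat @ x))
    return beta, beta / se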
Abstract:
This paper considers two-sided tests for the parameter of an endogenous variable in an instrumental variable (IV) model with heteroskedastic and autocorrelated errors. We develop the finite-sample theory of weighted-average power (WAP) tests with normal errors and a known long-run variance. We introduce two weights which are invariant to orthogonal transformations of the instruments; e.g., changing the order in which the instruments appear. While tests using the MM1 weight can be severely biased, optimal tests based on the MM2 weight are naturally two-sided when errors are homoskedastic. We propose two boundary conditions that yield two-sided tests whether errors are homoskedastic or not. The locally unbiased (LU) condition is related to the power around the null hypothesis and is a weaker requirement than unbiasedness. The strongly unbiased (SU) condition is more restrictive than LU, but the associated WAP tests are easier to implement. Several tests are SU in finite samples or asymptotically, including tests robust to weak IV (such as the Anderson-Rubin, score, conditional quasi-likelihood ratio, and I. Andrews' (2015) PI-CLC tests) and two-sided tests which are optimal when the sample size is large and instruments are strong. We refer to the WAP-SU tests based on our weights as MM1-SU and MM2-SU tests. Dropping the restrictive assumptions of normality and known variance, the theory is shown to remain valid at the cost of asymptotic approximations. The MM2-SU test is optimal under the strong IV asymptotics, and outperforms other existing tests under the weak IV asymptotics.
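For reference, the weak-IV-robust Anderson-Rubin statistic mentioned above takes, in its standard homoskedastic form (the heteroskedasticity- and autocorrelation-robust versions relevant to the paper differ), the form
$$AR(\beta_0) \;=\; \frac{(y - Y\beta_0)'\, P_Z\, (y - Y\beta_0)/k}{(y - Y\beta_0)'\, M_Z\, (y - Y\beta_0)/(n-k)},$$
where $P_Z = Z(Z'Z)^{-1}Z'$, $M_Z = I - P_Z$, and $k$ is the number of instruments; under the null it is compared with $F_{k,\,n-k}$ critical values, and it is among the weak-IV-robust tests for which the strongly unbiased (SU) property is verified.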