871 results for Friedman rule, optimal taxation, open economy.


Relevance:

30.00%

Publisher:

Abstract:

In this paper a competitive general equilibrium model is used to investigate the welfare and long-run allocation impacts of privatization. There are two types of capital in this model economy, one private and the other initially public ("infrastructure"), and a positive externality due to the latter is assumed. A benevolent government can improve upon the decentralized allocation by internalizing the externality, but it introduces distortions in the economy through the financing of its investments. It is shown that even making the best case for public action - maximization of individuals' welfare, no operational inefficiency and free supply of infrastructure services to society - privatization is welfare improving for a large set of economies. Hence, arguments against privatization based solely on under-investment are incorrect, as this may be the optimal action once the financing of public investment is taken into account. When operational inefficiency is introduced in the public sector, the gains from privatization are much higher and positive for most reasonable combinations of parameters.
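
A minimal way to formalize the infrastructure externality described above is a production technology in which public capital raises the productivity of every private firm (an illustrative sketch, not necessarily the paper's exact specification):

y_i = A k_i^{\alpha} l_i^{1-\alpha} G^{\gamma},   0 < \alpha < 1,  \gamma > 0,

where k_i and l_i are firm i's private capital and labour and G is the stock of infrastructure. Each firm takes G as given, so the social return to infrastructure exceeds the private return; financing G through distortionary taxes is what creates the trade-off between internalizing the externality and distorting private decisions.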

Relevance:

30.00%

Publisher:

Abstract:

This work empirically evaluates the Taylor rule for the US and Brazil using the Kalman filter and Markov-switching regimes. We show that the parameters of the rule change significantly with variations in both output and output-gap proxies, considering hidden variables and states. Such conclusions naturally call for robust optimal monetary rules. We also show that Brazil and the US have very contrasting parameters, first because Brazil presents a time-varying intercept, and second because of the rigidity of the parameters of the Brazilian Taylor rule, regardless of the output-gap proxy, data frequency or sample. Finally, we show that the long-run inflation parameter of the US Taylor rule is less than one in many periods, contrasting strongly with Orphanides (forthcoming) and Clarida, Galí and Gertler (2000); the same happens with Brazilian monthly data.
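
For reference, a time-varying Taylor rule of the kind typically estimated with the Kalman filter can be written as follows (an illustrative specification; the exact equation in the paper may differ):

i_t = \rho i_{t-1} + (1 - \rho)(\alpha_t + \beta_t \pi_t + \gamma_t \tilde{y}_t) + \varepsilon_t,

where i_t is the policy rate, \pi_t is inflation, \tilde{y}_t is the output-gap proxy and the coefficients \alpha_t, \beta_t, \gamma_t evolve as latent states (e.g. random walks). A long-run inflation response \beta_t < 1 violates the Taylor principle, which is the point of contrast with Orphanides and with Clarida, Galí and Gertler (2000).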

Relevance:

30.00%

Publisher:

Abstract:

Based on three versions of a small macroeconomic model for Brazil, this paper presents empirical evidence on the effects of parameter uncertainty on monetary policy rules and on the robustness of optimal and simple rules over different model specifications. By comparing the optimal policy rule under parameter uncertainty with the rule calculated under purely additive uncertainty, we find that parameter uncertainty should make policymakers react less aggressively to the economy's state variables, as suggested by Brainard's "conservatism principle", although this effect seems to be relatively small. We then informally investigate each rule's robustness by analyzing the performance of policy rules derived from each model under each one of the alternative models. We find that optimal rules derived from each model perform very poorly under alternative models, whereas a simple Taylor rule is relatively robust. We also find that even within a specific model, the Taylor rule may perform better than the optimal rule under particularly unfavorable realizations from the policymaker's loss distribution function.
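
Brainard's conservatism principle mentioned above can be illustrated with a textbook one-period example (a sketch under standard assumptions, not the paper's model). Suppose the policymaker minimizes E[(y - y^*)^2] subject to y = (b + \epsilon) r + u, where the policy multiplier b is uncertain with variance \sigma_b^2 and E[\epsilon] = 0. The optimal instrument setting is

r^* = \frac{b (y^* - u)}{b^2 + \sigma_b^2},

which is attenuated relative to the certainty-equivalent response (y^* - u)/b: the larger the multiplier uncertainty \sigma_b^2, the less aggressively policy reacts to the state.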

Relevance:

30.00%

Publisher:

Abstract:

This thesis consists of three articles: "Tax Filing Choices for the Household", "Optimal Tax for the Household: Collective and Unitary Approaches" and "Vertical Differentiation and Heterogeneous Firms".

Relevance:

30.00%

Publisher:

Abstract:

This work aims to analyze the interaction and the effects of administered prices in the economy, through a DSGE model and the derivation of optimal monetary policies. The model used is a standard New Keynesian DSGE model of a closed economy with two sectors of firms. In the first sector, with free prices, there is a continuum of firms; in the second sector, with administered prices, there is a single firm. In addition, the model has positive trend inflation in the steady state. The results suggest that price movements in either sector will impact both sectors, for two reasons. First, price dispersion lowers productivity: since price dispersion reflects a change in one sector's price relative to the general price level, a price movement in one sector that is not followed by the other changes their relative weights, affecting productivity in both sectors. Second, the path followed by the administered-price sector enters future inflation expectations, which firms in the free sector use to set their optimal price. When this path leads to an expectation of higher inflation, firms in the free sector choose a higher mark-up to accommodate it, leading to higher trend inflation when there is imperfect competition in the free sector. Finally, the analysis of optimal policies proved inconclusive: it indicates that the adjustment scheme for administered prices influences the definition of the optimal monetary policy, but a quantitative study is needed to determine the size of the impact.
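
The price-dispersion channel described above can be summarized with the standard New Keynesian aggregation result (an illustrative formulation; the thesis's two-sector notation may differ):

Y_t = \frac{A_t N_t}{\Delta_t},   \Delta_t = \int_0^1 \left( \frac{P_t(i)}{P_t} \right)^{-\varepsilon} di \ge 1,

so a movement in administered prices that is not matched by free prices raises the dispersion term \Delta_t and lowers measured productivity, while the expected path of administered prices enters the free-sector firms' optimal reset price through expected future inflation.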

Relevance:

30.00%

Publisher:

Abstract:

My dissertation focuses on dynamic aspects of coordination processes, such as the reversibility of early actions, the option to delay decisions, and learning about the environment from the observation of other people's actions. This study proposes the use of tractable dynamic global games in which players privately and passively learn about their actions' true payoffs and are able to adjust early investment decisions to the arrival of new information. These games are used to investigate the consequences of the presence of liquidity shocks for the performance of a Tobin tax as a policy intended to foster coordination success (chapter 1) and the adequacy of a Tobin tax as a means of reducing an economy's vulnerability to sudden stops (chapter 2). The dissertation then analyzes players' incentives to acquire costly information in a sequential decision setting (chapter 3).

In chapter 1, a continuum of foreign agents decide whether or not to enter an investment project. A fraction λ of them is hit by liquidity restrictions in a second period and is forced to withdraw the early investment, or is precluded from investing in the interim period, depending on the actions chosen in the first period. Players not affected by the liquidity shock are able to revise early decisions. Coordination success is increasing in aggregate investment and decreasing in the aggregate volume of capital exit. Without liquidity shocks, aggregate investment is (in a pivotal contingency) invariant to frictions such as a tax on short-term capital. In this case, a Tobin tax always increases the incidence of success. In the presence of liquidity shocks, this invariance result no longer holds in equilibrium: a Tobin tax becomes harmful to aggregate investment, which may reduce the incidence of success if the economy does not benefit enough from avoiding capital reversals. It is shown that the Tobin tax that maximizes the ex-ante probability of successfully coordinated investment is decreasing in the liquidity shock.

Chapter 2 studies the effects of a Tobin tax in the same setting as the global game model of chapter 1, except that the liquidity shock is stochastic, i.e., there is also aggregate uncertainty about the extent of the liquidity restrictions. It identifies conditions under which, in the unique equilibrium of the model with a low probability of liquidity shocks but large dry-ups, a Tobin tax is welfare improving, helping agents to coordinate on the good outcome. The model provides a rationale for a Tobin tax in economies that are prone to sudden stops. The optimal Tobin tax tends to be larger when capital reversals are more harmful and when the fraction of agents hit by liquidity shocks is smaller.

Chapter 3 focuses on information acquisition in a sequential decision game with payoff complementarity and an information externality. When information is cheap relative to players' incentive to coordinate actions, only the first player chooses to process information; the second player learns about the true payoff distribution from observing the first player's decision and follows her action. Miscoordination requires that both players privately process information, which tends to happen when information is expensive and the prior knowledge about the distribution of payoffs has a large variance.
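
A stylized way to see how the Tobin tax enters this class of coordination problems is the following regime-change payoff structure (a generic global-game illustration, not the dissertation's exact model):

success  \iff  \ell \ge 1 - \theta,        payoff to early exit = r - \tau,

where \theta is the fundamental observed with private noise, \ell is the mass of capital that stays invested, and \tau is the tax on short-term capital reversals. A higher \tau makes early exit less attractive, which supports coordination, but when liquidity shocks force a fraction \lambda of agents to exit regardless, the tax is also a cost borne by them; this tension is consistent with the finding above that the success-maximizing tax is decreasing in the liquidity shock.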

Relevance:

30.00%

Publisher:

Abstract:

Objective: The aim of this study was to evaluate a simple mnemonic rule (the RB-RB/LB-LB rule) for recording intra-oral radiographs with optimal projection for the control of dental implants.

Methods: 30 third-year dental students received a short lesson in the RB-RB/LB-LB mnemonic rule. The rule is as follows: if right blur then raise beam (RB-RB), i.e. if the implant threads are blurred on the right side of the implant, the X-ray beam direction must be raised towards the ceiling to obtain sharp threads on both implant sides; if left blur then lower beam (LB-LB), i.e. if the implant threads are blurred on the left side of the implant, the X-ray beam direction must be lowered towards the floor to obtain sharp threads on both implant sides. Intra-oral radiographs of four screw-type implants placed with different inclinations in a Frasaco upper or lower jaw dental model (Frasaco GmbH, Tettnang, Germany) were recorded. The students were unaware of the inclination of the implants and were instructed to re-expose each implant, implementing the mnemonic rule, until an image of the implant with acceptable quality (subjectively judged by the instructor) was obtained. Subsequently, each radiograph was blindly assessed with respect to the sharpness of the implant threads and assigned to one of four quality categories: (1) perfect, (2) not perfect but clinically acceptable, (3) not acceptable and (4) hopeless.

Results: For all implants, from one non-perfect exposure to the following, a higher score was obtained in 64% of the cases, 28% received the same score and 8% obtained a lower score. Only a small variation was observed among exposures of implants with different inclinations. On average, two exposures per implant (range: one to eight exposures) were needed to obtain a clinically acceptable image.

Conclusion: The RB-RB/LB-LB mnemonic rule for recording intra-oral radiographs of dental implants with a correct projection was easy to implement by inexperienced examiners. Dentomaxillofacial Radiology (2012) 41, 298-304. doi: 10.1259/dmfr/20861598
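
The decision logic of the mnemonic rule is simple enough to express directly in code; the sketch below is a hypothetical illustration of that logic (function and parameter names are invented, not part of the study):

# Hypothetical illustration of the RB-RB/LB-LB rule: given which side of the
# implant shows blurred threads, return the corrective change in vertical
# X-ray beam angulation for the next exposure.
def adjust_beam(blur_side: str) -> str:
    if blur_side == "right":
        # Right Blur -> Raise Beam (aim the beam more towards the ceiling)
        return "raise beam"
    if blur_side == "left":
        # Left Blur -> Lower Beam (aim the beam more towards the floor)
        return "lower beam"
    # Threads sharp on both sides: the projection is acceptable.
    return "keep current projection"

if __name__ == "__main__":
    for side in ("right", "left", "none"):
        print(side, "->", adjust_beam(side))

In the study the rule is applied iteratively: each implant is re-exposed, adjusting the beam per the rule, until the threads appear sharp on both sides.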

Relevance:

30.00%

Publisher:

Abstract:

This work concerns the application of optimal control theory to Dengue epidemics. The dynamics of this insect-borne disease is modelled as a set of non-linear ordinary differential equations including the effect of educational campaigns organized to motivate the population to break the reproduction cycle of the mosquitoes by avoiding the accumulation of still water in open-air containers. The cost functional reflects a compromise between actual financial spending (on insecticides and educational campaigns) and population health (which can be objectively measured in terms of, for instance, treatment costs and loss of productivity). The optimal control problem is solved numerically using a multiple shooting method. However, the optimal control policy is difficult for the health authorities to implement because it is not practical to adjust the investment rate continuously in time. Therefore, a suboptimal control policy is computed taking as the admissible set only those controls which are piecewise constant. The performance achieved by the optimal and sub-optimal control policies is compared with the case of control using only insecticides when the Breteau index is greater than or equal to 5 and the case of no control. The results show that the sub-optimal policy yields a substantial reduction in cost, in terms of the proposed functional, and is only slightly inferior to the optimal control policy. Copyright (C) 2001 John Wiley & Sons, Ltd.
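
As an illustration of the sub-optimal, piecewise-constant policies discussed above, the sketch below simulates a toy host-vector model and evaluates a cost functional for a few constant control levels. Everything here (the model, parameter values and function names) is hypothetical and only stands in for the paper's actual formulation:

# Toy host-vector dengue model with a piecewise-constant control u(t) in [0, 1]
# (e.g. insecticide/educational effort raising vector mortality). All parameter
# values are hypothetical; the cost J integrates control spending and infections.
import numpy as np
from scipy.integrate import solve_ivp

N_H = 1000.0                        # human population (constant)
beta_h, beta_v = 0.4, 0.3           # transmission rates
gamma, mu_v, lam = 0.1, 0.05, 25.0  # human recovery, vector mortality, vector recruitment
A_COST, B_COST = 1.0, 50.0          # weights on control effort and infections

def rhs(t, x, u_of_t):
    S_h, I_h, S_v, I_v = x
    u = u_of_t(t)
    dS_h = -beta_h * S_h * I_v / N_H
    dI_h = beta_h * S_h * I_v / N_H - gamma * I_h
    dS_v = lam - beta_v * S_v * I_h / N_H - (mu_v + u) * S_v
    dI_v = beta_v * S_v * I_h / N_H - (mu_v + u) * I_v
    return [dS_h, dI_h, dS_v, dI_v]

def cost(u_levels, T=120.0, x0=(990.0, 10.0, 450.0, 50.0)):
    """Cost of a piecewise-constant policy: u_levels[k] applies on the
    k-th equal sub-interval of [0, T]."""
    edges = np.linspace(0.0, T, len(u_levels) + 1)
    u_of_t = lambda t: u_levels[min(int(np.searchsorted(edges, t, side="right")) - 1,
                                    len(u_levels) - 1)]
    sol = solve_ivp(rhs, (0.0, T), x0, args=(u_of_t,), dense_output=True, max_step=0.5)
    ts = np.linspace(0.0, T, 600)
    I_h = sol.sol(ts)[1]
    u_vals = np.array([u_of_t(t) for t in ts])
    return np.trapz(A_COST * u_vals**2 + B_COST * I_h, ts)  # J = int (A u^2 + B I_h) dt

print("no control     :", cost([0.0, 0.0, 0.0]))
print("constant effort:", cost([0.3, 0.3, 0.3]))
print("front-loaded   :", cost([0.5, 0.2, 0.1]))

The paper's multiple-shooting solution of the continuous-time problem goes well beyond this coarse grid of constant levels; the sketch only shows how a piecewise-constant policy can be evaluated against a cost functional of this general form.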

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a mathematical model and a methodology to solve the transmission network expansion planning problem with security constraints in a fully competitive market, assuming that all generation programming plans present in the system operation are known. The methodology lets us find an optimal transmission network expansion plan that allows the power system to operate adequately under each of the generation programming plans specified in the fully competitive market case, including a single-contingency situation with generation rescheduling using the (n-1) security criterion. In this context, centralized expansion planning with security constraints and expansion planning in a fully competitive market are subsets of the proposal presented in this paper. The model is solved using a genetic algorithm designed to efficiently handle reliable expansion planning in a fully competitive market. The results obtained for several systems known from the literature show the excellent performance of the proposed methodology.
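
A bare-bones genetic algorithm of the kind referenced above can be sketched as follows. This is a generic illustration with an invented placeholder fitness function, not the paper's encoding or evaluation; a real implementation would score each candidate plan by investment cost plus penalties from power-flow analyses of every generation plan and (n-1) contingency:

# Generic GA skeleton for expansion planning: a candidate plan is a bit vector
# over candidate circuits; fitness() below is only a placeholder.
import random

N_CANDIDATES = 12                        # number of candidate circuits (hypothetical)
POP_SIZE, GENERATIONS, MUT_RATE = 40, 100, 0.02

def fitness(plan):
    # Placeholder: stands in for investment cost plus overload/load-shedding penalties.
    investment = sum(plan)
    penalty = 0 if sum(plan) >= 4 else 100 * (4 - sum(plan))
    return investment + penalty

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randint(1, N_CANDIDATES - 1)
    return a[:cut] + b[cut:]

def mutate(plan):
    return [bit ^ 1 if random.random() < MUT_RATE else bit for bit in plan]

pop = [[random.randint(0, 1) for _ in range(N_CANDIDATES)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP_SIZE)]
best = min(pop, key=fitness)
print("best plan:", best, "fitness:", fitness(best))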

Relevance:

30.00%

Publisher:

Abstract:

In DNA microarray experiments, the gene fragments that are spotted on the slides are usually obtained by the synthesis of specific oligonucleotides that are able to amplify genes through PCR. Shotgun library sequences are an alternative to the synthesis of primers for the study of each gene in the genome. The possibility of putting thousands of gene sequences onto a single slide allows shotgun clones to be used for microarray analysis without a completely sequenced genome. We developed the OC Identifier tool (optimal clone identifier for genomic shotgun libraries) for the identification of unique genes in shotgun libraries based on a partially sequenced genome; this allows the simultaneous use of clones in projects such as transcriptome and phylogeny studies, comparative genomic hybridization and genome assembly. The OC Identifier tool combines comparative genome analysis, biological databases and query languages for relational databases, and provides bioinformatics tools to identify clones that contain unique genes as an alternative to primer synthesis. The OC Identifier allows the analysis of clones during the sequencing phase, making it possible to select genes of interest for the construction of a DNA microarray. ©FUNPEC-RP.

Relevance:

30.00%

Publisher:

Abstract:

Includes bibliography

Relevance:

30.00%

Publisher:

Abstract:

Creating rules for clone selection in different projects is a hard task when traditional implementations are used to control all the processes of the system. The use of an algebraic language is an alternative approach for managing the whole system flow in a flexible way. To increase versatility and consistency in defining the rules for optimal clone selection, this paper presents the software OCI 2, which uses process algebra to describe the flow behaviour of the system. OCI 2, controlled by this algebraic approach, was applied to elaborate rules for selecting clones containing unique genes in the partial genome of the bacterium Bradyrhizobium elkanii Semia 587 and in the whole genome of the bacterium Xanthomonas axonopodis pv. citri. Copyright© (2009) by the International Society for Research in Science and Technology.
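
As a rough illustration of composing selection rules with process-algebra-style operators, the sketch below defines a sequential-composition operator over simple filter processes. It is only a conceptual toy in Python (all names are invented); OCI 2 itself applies a proper process algebra to control the system flow:

# Toy illustration: clone-selection steps as processes composed sequentially
# (P ; Q). Each process maps a list of clone identifiers to a filtered list.
class Process:
    def __init__(self, name, action):
        self.name = name
        self.action = action

    def run(self, clones):
        return self.action(clones)

    def then(self, other):
        # Sequential composition: run self, feed its output into other.
        return Process("(" + self.name + " ; " + other.name + ")",
                       lambda clones: other.run(self.run(clones)))

# Hypothetical atomic processes.
remove_duplicates = Process("remove_duplicates", lambda cs: list(dict.fromkeys(cs)))
keep_unique_genes = Process("keep_unique_genes",
                            lambda cs: [c for c in cs if c.startswith("unique_")])

flow = remove_duplicates.then(keep_unique_genes)
print(flow.name, "->", flow.run(["unique_A", "unique_A", "geneB", "unique_C"]))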