992 results for flat rate
Abstract:
Item 1038-A, 1038-B (microfiche).
Abstract:
We analyze the dynamic behavior and the welfare properties of the equilibrium path of a growth model where both habits and consumption externalities affect the utility of consumers. We discuss the effects of flat-rate income taxes and characterize the optimal income taxation policy. We show that, when consumption externalities and habit-adjusted consumption are not perfect substitutes, a counter-cyclical income tax rate allows the competitive equilibrium to replicate the efficient path. Our analysis highlights the crucial role played by complementarities between externalities and habits in generating an inefficient dynamic equilibrium.
Abstract:
Introduction

In my thesis I argue that economic policy is all about economics and politics. Consequently, analysing and understanding economic policy ideally has at least two parts. The economics part is centered on the expected impact of a specific policy on the real economy, in terms of both efficiency and equity. The insights of this part indicate in which direction the fine-tuning of economic policies should go. However, the fine-tuning of economic policies will most likely be subject to political constraints. That is why, in the politics part, a much better understanding can be gained by taking into account how the incentives of politicians and special interest groups, as well as the role played by different institutional features, affect the formation of economic policies.

The first part and chapter of my thesis concentrates on the efficiency-related impact of economic policies: how does corporate income taxation in general, and corporate income tax progressivity in particular, affect the creation of new firms? Reduced progressivity and flat-rate taxes are in vogue. By 2009, 22 countries were operating flat-rate income tax systems, as were 7 US states and 14 Swiss cantons (for corporate income only). Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. In joint work, Marius Brülhart and I explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births in a panel of Swiss municipalities. Our results show that lower average tax rates and reduced complexity of the tax code promote firm births. Controlling for these effects, reduced progressivity inhibits firm births. Our reading of these results is that tax progressivity has an insurance effect that facilitates entrepreneurial risk-taking. The positive effects of lower tax levels and reduced complexity are estimated to be significantly stronger than the negative effect of reduced progressivity. To the extent that firm births reflect desirable entrepreneurial dynamism, it is not the flattening of tax schedules that is key to successful tax reforms, but the lowering of average tax burdens and the simplification of tax codes. Flatness per se is of secondary importance and even appears to be detrimental to firm births.

The second part of my thesis, which corresponds to the second and third chapters, concentrates on how economic policies are formed. By the nature of the analysis, these two chapters draw on a broader literature than the first chapter. Both economists and political scientists have done extensive research on how economic policies are formed, and researchers in both disciplines have recognised the importance of special interest groups trying to influence policy-making through various channels. In general, economists base their analysis on a formal and microeconomically founded approach, while abstracting from institutional details. In contrast, political scientists' frameworks are generally richer in terms of institutional features but lack the theoretical rigour of economists' approaches. I start from the economist's point of view. However, I try to borrow as much as possible from the findings of political science to gain a better understanding of how economic policies are formed in reality.
In the second chapter, I take a theoretical approach and focus on the institutional policy framework to explore how interactions between different political institutions affect the outcome of trade policy in the presence of special interest groups' lobbying. Standard political economy theory treats the government as a single institutional actor which sets tariffs by trading off social welfare against contributions from special interest groups seeking industry-specific protection from imports. However, these models lack important (institutional) features of reality. That is why, in my model, I split the government into a legislative and an executive branch, both of which can be lobbied by special interest groups. Furthermore, the legislature has the option to delegate its trade policy authority to the executive. I allow the executive to compensate the legislature in exchange for delegation. Despite ample anecdotal evidence, bargaining over the delegation of trade policy authority has not yet been formally modelled in the literature. I show that delegation has an impact on policy formation in that it leads to lower equilibrium tariffs compared to a standard model without delegation. I also show that delegation will only take place if the lobby is not strong enough to prevent it. Furthermore, the option to delegate increases the bargaining power of the legislature at the expense of the lobbies. These findings can therefore shed light on why the U.S. Congress often delegates trade policy authority to the executive.

In the final chapter of my thesis, my coauthor, Antonio Fidalgo, and I take a narrower approach and focus on policy-making at the level of the individual politician, exploring how connections to private firms and networks within parliament affect individual politicians' decision-making. Theories in the spirit of the model in the second chapter show how campaign contributions from lobbies to politicians can influence economic policies. There is an abundant empirical literature that analyses ties between firms and politicians based on campaign contributions. However, the evidence on the impact of campaign contributions is mixed, at best. In our paper, we analyse an alternative channel of influence in the form of personal connections between politicians and firms through board membership. We identify a direct effect of board membership on individual politicians' voting behaviour and an indirect leverage effect when politicians with board connections influence non-connected peers. We assess the importance of these two effects using a vote in the Swiss parliament on a government bailout of the national airline, Swissair, in 2001, which serves as a natural experiment. We find that both the direct effect of connections to firms and the indirect leverage effect had a strong and positive impact on the probability that a politician supported the government bailout.
Abstract:
Several European telecommunications regulatory agencies have recently introduced a fixed capacity charge (flat rate) to regulate access to the incumbent's network. The purpose of this paper is to show that the optimal capacity charge and the optimal access-minute charge analysed by Armstrong, Doyle, and Vickers (1996) have a similar structure and imply the same payment for the entrant. I extend the analysis to the case where there is a competitor with market power. In this case, the optimal capacity charge should be modified to prevent the entrant from cream-skimming the market by fixing a longer or a shorter peak period than the optimal one. Finally, I consider a multiproduct setting, where the effect of product differentiation is exacerbated.
Abstract:
For many services, consumers can choose among a range of optional tariffs that differ in their access and usage prices. Recent studies indicate that tariff-specific preferences may lead consumers to choose a tariff that does not minimize their expected billing rate. This study analyzes how tariff-specific preferences influence the responsiveness of consumers’ usage and tariff choice to changes in price. We show that consumer heterogeneity in tariff-specific preferences leads to heterogeneity in their sensitivity to price changes. Specifically, consumers with tariff-specific preferences are less sensitive to price increases of their preferred tariff than other consumers. Our results provide an additional reason why firms should offer multiple tariffs rather than a uniform nonlinear pricing plan to extract maximum consumer surplus.
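Since the argument turns on whether a chosen tariff minimizes the expected bill, a minimal sketch can make the comparison concrete. The prices, the Poisson usage model and the function names below are hypothetical illustrations, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tariffs: the flat tariff is a fixed monthly fee with unlimited
# usage; the measured tariff has a small access fee plus a per-unit price.
FLAT_FEE = 30.0       # currency units per month
ACCESS_FEE = 5.0      # currency units per month
UNIT_PRICE = 0.10     # currency units per unit of usage

def expected_bills(mean_usage, n_draws=100_000):
    """Simulate monthly usage and return the expected bill under each tariff."""
    usage = rng.poisson(mean_usage, size=n_draws)
    flat_bill = np.full(n_draws, FLAT_FEE)
    measured_bill = ACCESS_FEE + UNIT_PRICE * usage
    return flat_bill.mean(), measured_bill.mean()

for mu in (100, 250, 400):
    flat, measured = expected_bills(mu)
    cheaper = "flat" if flat < measured else "measured"
    print(f"mean usage {mu}: flat={flat:.2f}, measured={measured:.2f} "
          f"-> {cheaper} tariff minimizes the expected bill")
```

In such a setup, a consumer with a tariff-specific (flat-rate) preference would pick the flat tariff even at usage levels where the measured tariff has the lower expected bill.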
Abstract:
We study a family of models of tax evasion, where a flat-rate tax finances only the provision of public goods, neglecting audits and wage differences. We focus on the comparison of two modeling approaches. The first is based on optimizing agents, who are endowed with social preferences, their utility being the sum of private consumption and moral utility. The second approach involves agents acting according to simple heuristics. We find that while we encounter the traditionally shaped Laffer curve in the optimizing model, the heuristics models exhibit (linearly) increasing Laffer curves. This difference is related to a peculiar type of behavior emerging within the heuristics-based approach: a number of agents lurk in a moral state of limbo, alternating between altruism and selfishness.
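As a rough illustration of the heuristics-based approach, the following toy agent-based sketch traces out a simulated Laffer curve from a simple imitation rule; the rule, the agent counts and all parameter values are invented for illustration and do not reproduce the paper's model or its specific (increasing) Laffer-curve shapes.

```python
import numpy as np

rng = np.random.default_rng(1)

N, T = 500, 200                      # agents and periods (hypothetical sizes)
income = 1.0                         # identical wages, as in the model family
morale = rng.uniform(0.0, 1.0, N)    # exogenous tax morale per agent

def revenue_heuristic(tax_rate):
    """Toy heuristic dynamics: each period an agent declares either everything
    or nothing; it declares fully when its moral payoff (own morale times the
    previous period's average declaration) exceeds the material temptation to
    evade, i.e. the tax saved weighted by the agent's lack of morale."""
    declared = rng.uniform(0.0, 1.0, N)          # initial declared share
    for _ in range(T):
        avg_prev = declared.mean()               # social signal from last period
        declared = np.where(morale * avg_prev > tax_rate * (1.0 - morale), 1.0, 0.0)
    return tax_rate * income * declared.mean() * N

for rate in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"tax rate {rate:.1f} -> simulated revenue {revenue_heuristic(rate):.1f}")
```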
Abstract:
We study a family of tax evasion models in which a flat-rate tax finances only the provision of public goods. We focus on comparing two approaches. In the first, every worker has the same income and each year declares the amount that maximizes the sum of the utility from consumption financed by the income retained and the utility derived from declaring income. The latter utility is the product of three factors: the worker's exogenous tax morale, the average income declaration observed in their neighbourhood in the previous year, and the endogenous utility derived from their own declaration. In the second approach, agents act according to simple heuristic rules. While in the optimizing model we encounter traditionally shaped Laffer curves, the heuristics-based models produce (linearly) increasing Laffer curves. The reason for this difference is a peculiar type of behaviour that appears in the heuristics-based model: a number of agents end up in an unstable state in which they alternate between altruism and selfishness. ________ The authors study a family of models of tax evasion, where a flat-rate tax only finances the provision of public goods and audits and wage differences are neglected. The paper focuses on comparing two modelling approaches. The first is based on optimizing agents, endowed with social preferences, their utility being the sum of private consumption and moral utility. The second approach involves agents acting according to simple heuristics. While the traditionally shaped Laffer curves are encountered in the optimizing model, the heuristics models exhibit (linearly) increasing Laffer curves. This difference is related to a peculiar type of behaviour: within the agent-based approach a number of agents lurk in a moral state of limbo, alternating between altruism and selfishness.
Abstract:
Increasing pressure on water resources has led many countries to reconsider the mechanisms used to induce efficient water use, especially in irrigated agriculture. Setting the correct price of water is one of the mechanisms for making water allocation more efficient. This paper aims to analyse the economic, social and environmental impacts of water pricing policies. The methodology used was linear programming, applied to the Perímetro Irrigado do Vale de Caxito, Bengo Province, 45 km from Luanda, whose water source is the Dande river. Three scenarios concerning water pricing policies were tested: a simple volumetric rate, a variable volumetric rate, and a flat rate per unit area. The main conclusions show that, from the point of view of efficient water use in agriculture, the best results are obtained with the variable volumetric rate; from the social point of view, the simple volumetric rate gives the best results; the variable volumetric rate was the most penalizing method, quickly reducing the area of the most water-intensive crops, and is thus the best from the environmental point of view. All of the methods have negative aspects in terms of reducing total gross margin. Keywords: Water resources; Water pricing; Linear programming. Abstract: Increased pressure on water resources has led many countries to reconsider the mechanisms used in the induction of efficient water use, especially for irrigated agriculture, a major consumer of water. Establishing the correct price of water is one of the mechanisms for more efficient allocation of water. This paper aims to analyze the economic, social and environmental impacts of water price policies. The methodology used is linear programming, applied to the irrigated perimeter of the Caxito Valley, in Bengo Province, 45 kilometers from Luanda, which has the river Dande as its source. Three scenarios concerning water price policies were tested: simple volumetric rate, variable volumetric rate and flat rate per unit area. The main findings show that, from the point of view of the efficient use of water in agriculture, the best results are obtained with the variable volumetric rate; from the social point of view, the simple volumetric rate has the best results; the variable volumetric rate method proved to be the most penalizing, quickly reducing the area of the most water-consuming crops, and is the method under which the environmental objectives would be most readily achieved. All of the methods bring negative aspects in relation to the reduction of total gross margin. Keywords: Water resources; Water price; Linear programming.
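A minimal linear-programming sketch of the type of farm model described above, written with scipy; crop margins, water requirements, tariff levels and the scenario values are hypothetical, not data from the Caxito study.

```python
import numpy as np
from scipy.optimize import linprog

gross_margin = np.array([1600.0, 1000.0, 550.0])   # per hectare, by crop
water_use = np.array([12000.0, 7000.0, 3000.0])    # m3 per hectare, by crop
total_land = 100.0                                  # hectares available
total_water = 900_000.0                             # m3 available per season

def optimal_plan(volumetric_price=0.0, flat_rate_per_ha=0.0):
    """Maximise total margin net of water charges (linprog minimises,
    so the objective is negated)."""
    c = -(gross_margin - volumetric_price * water_use - flat_rate_per_ha)
    A_ub = np.vstack([np.ones(3), water_use])       # land and water constraints
    b_ub = np.array([total_land, total_water])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    return res.x, -res.fun

scenarios = [("no water charge", {}),
             ("volumetric 0.05 /m3", {"volumetric_price": 0.05}),
             ("volumetric 0.13 /m3", {"volumetric_price": 0.13}),
             ("flat 300 /ha", {"flat_rate_per_ha": 300.0})]
for label, kwargs in scenarios:
    areas, margin = optimal_plan(**kwargs)
    water = float(water_use @ areas)
    print(f"{label:22s} areas={np.round(areas, 1)} water={water:,.0f} m3 margin={margin:,.0f}")
```

With these illustrative numbers the flat per-hectare charge lowers total margin but leaves the cropping plan and water use unchanged, whereas a sufficiently high volumetric price pushes the plan toward the least water-intensive crop, which is the qualitative contrast the abstract describes.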
Abstract:
Background and aim: The usefulness of high-definition colonoscopy plus i-scan (HD+i-SCAN) for average-risk colorectal cancer screening has not been fully assessed. The adenoma detection rate and other measurements, such as the number of adenomas per colonoscopy and the flat adenoma detection rate, have been recognized as markers of colonoscopy quality. The aim of the present study was to compare the diagnostic performance of HD+i-SCAN with that of a standard-resolution white-light colonoscope. Methods: This is a retrospective analysis of a prospectively collected screening colonoscopy database. A comparative analysis of the diagnostic yield of HD+i-SCAN versus standard-resolution colonoscopy for average-risk colorectal screening was conducted. Results: During the period of study, 155/163 (95.1%) patients met the inclusion criteria. The mean age was 56.9 years. Sixty of 155 (39%) colonoscopies were performed using HD+i-SCAN. Adenoma detection rates during withdrawal for standard-resolution versus HD+i-SCAN colonoscopies were 29.5% and 30%, respectively (p = n.s.). Adenomas per colonoscopy for standard-resolution versus HD+i-SCAN colonoscopies were 0.46 (SD = 0.9) and 0.72 (SD = 1.3), respectively (p = n.s.). A greater number of flat adenomas were detected in the HD+i-SCAN group (6/60 vs. 2/95) (p < .05). Likewise, serrated adenomas/polyps per colonoscopy were also higher in the HD+i-SCAN group. Conclusions: HD+i-SCAN colonoscopy increases the flat adenoma detection rate and serrated adenomas/polyps per colonoscopy compared to standard colonoscopy in an average-risk screening population. HD+i-SCAN is a simple, readily available procedure that can be helpful even for experienced providers. The performance of HD+i-SCAN and the substantial prevalence of flat lesions in our average-risk screening cohort support its usefulness in improving the efficacy of screening colonoscopies.
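For the flat-adenoma comparison reported above (6/60 vs. 2/95), a two-by-two test such as Fisher's exact test could be run as follows; treating the counts as colonoscopies with at least one flat adenoma is an assumption, and the test actually used by the authors may differ.

```python
# Illustrative recomputation only; denominators and the authors' choice of
# test are assumptions, not taken from the study's methods section.
from scipy.stats import fisher_exact

table = [[6, 60 - 6],   # HD+i-SCAN: with / without a flat adenoma
         [2, 95 - 2]]   # standard resolution
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```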
Abstract:
We discuss the dynamics of the Universe within the framework of the massive graviton cold dark matter scenario (MGCDM), in which gravitons are geometrically treated as massive particles. In this modified gravity theory, the main effect of the gravitons is to alter the density evolution of the cold dark matter component in such a way that the Universe evolves to an accelerating expanding regime, as presently observed. Tight constraints on the main cosmological parameters of the MGCDM model are derived by performing a joint likelihood analysis involving the recent type Ia supernovae data, the cosmic microwave background shift parameter, and the baryonic acoustic oscillations as traced by the Sloan Digital Sky Survey red luminous galaxies. The linear evolution of small density fluctuations is also analyzed in detail. It is found that the growth factor of the MGCDM model is slightly different (~1-4%) from the one provided by the conventional flat Lambda CDM cosmology. The growth rates of clustering predicted by the MGCDM and Lambda CDM models are confronted with observations, and the corresponding best-fit values of the growth index (gamma) are also determined. By using the expectations of realistic future X-ray and Sunyaev-Zeldovich cluster surveys, we derive the dark matter halo mass function and the corresponding redshift distribution of cluster-size halos for the MGCDM model. Finally, we also show that the Hubble flow differences between the MGCDM and the Lambda CDM models provide a halo redshift distribution departing significantly from those predicted by other dark energy models. These results suggest that the MGCDM model can observationally be distinguished from Lambda CDM and also from a large number of dark energy models recently proposed in the literature.
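For reference, the growth rate and growth index mentioned above can be computed for a flat Lambda CDM baseline by integrating the linear growth equation; the sketch below assumes Omega_m = 0.3 and does not implement the MGCDM modification to the dark matter density evolution.

```python
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3                                   # present matter density parameter (assumed)
E2 = lambda a: Om0 * a**-3 + (1.0 - Om0)    # flat LCDM expansion rate squared, no radiation

def growth_ode(a, y):
    """Linear growth equation D'' + (3/a + dlnE/da) D' = 1.5 Om0 / (a^5 E^2) D."""
    D, dD = y
    dlnE_da = -1.5 * Om0 * a**-4 / E2(a)
    d2D = -(3.0 / a + dlnE_da) * dD + 1.5 * Om0 / (a**5 * E2(a)) * D
    return [dD, d2D]

a_grid = np.linspace(1e-3, 1.0, 2000)
sol = solve_ivp(growth_ode, (a_grid[0], a_grid[-1]), [a_grid[0], 1.0],
                t_eval=a_grid, rtol=1e-8)            # matter-era initial condition D ~ a
D = sol.y[0]
f = np.gradient(np.log(D), np.log(a_grid))           # growth rate f = dlnD/dlna
Om_a = Om0 * a_grid**-3 / E2(a_grid)
gamma = np.log(f[-1]) / np.log(Om_a[-1])              # growth index at z = 0
print(f"f(z=0) = {f[-1]:.3f}, gamma = {gamma:.3f}  (expect ~0.55 for flat LCDM)")
```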
Abstract:
Our group has recently proposed that low prenatal vitamin D may be a risk-modifying factor for schizophrenia. Climate variability affects vitamin D levels in a population via fluctuations in the amount of available UV radiation. In order to explore this hypothesis, we examined fluctuations in the birth rates of people with schizophrenia born between 1920 and 1967 and three sets of variables strongly associated with UV radiation. These included: (a) the Southern Oscillation Index (SOI), a marker of El Nino, which is the most prominent meteorological factor influencing Queensland weather; (b) measures of cloud cover; and (c) measures of sunshine. Schizophrenia births were extracted from the Queensland Mental Health register and corrected for background population birth rates. Schizophrenia birth rates had several apparently non-random features in common with the SOI. The prominent SOI fluctuation event that occurred between 1937 and 1943 is congruent with the most prominent fluctuation in schizophrenia birth rates. The relatively flat profile of SOI activity between 1927 and 1936 also corresponds to the flattest period in the schizophrenia time series. Both time series have prominent oscillations in the 3-4 year range between 1946 and 1960. Significant associations between schizophrenia birth rates and measures of both sunshine and cloud cover were identified, and all three time series shared periodicity in the 3-4 year range. The analyses suggest that the risk of schizophrenia is higher for those born during times of increased cloud cover, reduced sunshine and positive SOI. These ecological analyses provide initial support for the vitamin D hypothesis; however, alternative non-genetic candidate exposures also need to be considered. Other sites with year-to-year fluctuations in cloud cover and sunshine should examine patterns of association between these climate variables and schizophrenia birth rates. The Stanley Foundation supported this project.
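The kind of association examined above can be illustrated with a lagged correlation between two annual series; the arrays below are synthetic stand-ins, since neither the register data nor the SOI series is reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1920, 1968)
# Synthetic stand-ins: an SOI-like index with ~3.5-year periodicity and a
# birth-rate anomaly series weakly coupled to it (entirely illustrative).
soi = np.sin(2 * np.pi * years / 3.5) + 0.5 * rng.standard_normal(len(years))
births = 0.6 * soi + 0.8 * rng.standard_normal(len(years))

def lagged_corr(x, y, lag):
    """Pearson correlation of x(t) with y(t + lag)."""
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    elif lag < 0:
        x, y = x[-lag:], y[:lag]
    return np.corrcoef(x, y)[0, 1]

for lag in range(-2, 3):
    print(f"lag {lag:+d} yr: r = {lagged_corr(soi, births, lag):+.2f}")
```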
Abstract:
In order to understand the earthquake nucleation process, we need to understand the effective frictional behavior of faults with complex geometry and fault gouge zones. One important aspect of this is the interaction between the friction law governing the behavior of the fault at the microscopic level and the resulting macroscopic behavior of the fault zone. Numerical simulations offer a possibility to investigate the behavior of faults on many different scales and thus provide a means to gain insight into fault zone dynamics on scales which are not accessible to laboratory experiments. Numerical experiments have been performed to investigate the influence of the geometric configuration of faults with rate- and state-dependent friction at the particle contacts on the effective frictional behavior of these faults. The numerical experiments are designed to be similar to laboratory experiments by DIETERICH and KILGORE (1994) in which a slide-hold-slide cycle was performed between two blocks of material and the resulting peak friction was plotted against holding time. Simulations with a flat fault without fault gouge have been performed to verify the implementation; these showed close agreement with comparable laboratory experiments. The simulations performed with a fault containing gouge demonstrate a strong dependence of the critical slip distance D_c on the roughness of the fault surfaces and are in qualitative agreement with laboratory experiments.
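The healing effect probed by slide-hold-slide tests follows from the rate- and state-dependent friction law itself; the short sketch below uses the Dieterich "aging" evolution law, with illustrative parameter values rather than those of the simulations described above, and shows peak friction rising roughly with the logarithm of hold time.

```python
import numpy as np

mu0, a, b = 0.6, 0.010, 0.015   # reference friction and constitutive parameters (illustrative)
V0, Dc = 1e-6, 1e-5             # reference slip rate (m/s) and critical slip distance (m)

def peak_friction_after_hold(t_hold, V_slide=1e-6):
    """Aging law: d(theta)/dt = 1 - V*theta/Dc. With V ~ 0 during the hold,
    the state variable grows as theta ~ theta_ss + t_hold, and the friction at
    the instant re-sliding begins is mu = mu0 + a*ln(V/V0) + b*ln(V0*theta/Dc)."""
    theta_ss = Dc / V_slide                  # steady-state state variable before the hold
    theta = theta_ss + t_hold                # state grows linearly while held
    return mu0 + a * np.log(V_slide / V0) + b * np.log(V0 * theta / Dc)

for t in (1, 10, 100, 1000, 10000):          # hold times in seconds
    print(f"hold {t:>6d} s -> peak friction {peak_friction_after_hold(t):.4f}")
```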
Abstract:
Nucleation rates for tunneling processes in Minkowski and de Sitter space are investigated, taking into account one loop prefactors. In particular, we consider the creation of membranes by an antisymmetric tensor field, analogous to Schwinger pair production. This can be viewed as a model for the decay of a false (or true) vacuum at zero temperature in the thin wall limit. Also considered is the spontaneous nucleation of strings, domain walls, and monopoles during inflation. The instantons for these processes are spherical world sheets or world lines embedded in flat or de Sitter backgrounds. We find the contribution of such instantons to the semiclassical partition function, including the one loop corrections due to small fluctuations around the spherical world sheet. We suggest a prescription for obtaining, from the partition function, the distribution of objects nucleated during inflation. This can be seen as an extension of the usual formula, valid in flat space, according to which the nucleation rate is twice the imaginary part of the free energy. For the case of pair production, the results reproduce those that can be obtained using second quantization methods, confirming the validity of instanton techniques in de Sitter space. Throughout the paper, both the gravitational field and the antisymmetric tensor field are assumed external.
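The flat-space relation referred to above, that the nucleation rate is twice the imaginary part of the free energy, can be written as follows; this is the standard statement, with the semiclassical instanton form of the rate indicated only schematically, not the paper's de Sitter extension.

```latex
% Flat-space nucleation rate from the imaginary part of the free energy,
% with the semiclassical (instanton) form of the rate per unit volume.
\begin{equation}
  \Gamma \;=\; 2\,\mathrm{Im}\,F ,
  \qquad
  \frac{\Gamma}{V} \;\simeq\; A\, e^{-S_E} ,
\end{equation}
% where $S_E$ is the Euclidean action of the spherical world-sheet (or
% world-line) instanton and the prefactor $A$ collects the one-loop
% fluctuation determinant around it.
```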