941 results for Tobin's Q
Abstract:
This research seeks to understand to whom the senior officials of the Supreme Audit Institutions (Entidades Fiscalizadoras Superiores, EFS) of Brazil and Chile are accountable and with whom they align. The research focused on the units responsible for overseeing regulated public services. De Graaf's theoretical framework on the loyalties of public servants was adapted, and a comparative research design was adopted, using Q Methodology to contrast the perceptions of these entities' public servants regarding loyalty and attachment and their influence on the decision-making process. The results reveal differences in the dominant perceptions at the two EFS, but also similarities, mainly related to the dominance of the Weberian ideal of the public service.
Abstract:
I study the welfare cost of inflation and the effect on prices of a permanent increase in the interest rate. In the steady state, real money demand is homogeneous of degree one in income and its interest-rate elasticity is approximately −1/2. Consumers are indifferent between an economy with 10% p.a. inflation and one with zero inflation if their income is 1% higher in the first economy. A permanent increase in the interest rate makes the price level drop initially, with inflation adjusting slowly to its steady-state level.
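As a rough illustration (not the paper's own derivation), the stated properties are consistent with an iso-elastic money demand, and a Bailey-type consumer-surplus calculation then delivers welfare costs of the reported order of magnitude; the constant $A$ and the real rate $r$ below are assumptions used only for this sketch:
\[
\frac{M}{P} = A\,Y\,i^{-1/2},
\qquad
w(i) = \int_0^{i} A\,x^{-1/2}\,dx \;-\; i\,\bigl(A\,i^{-1/2}\bigr) = A\sqrt{i},
\]
so the cost of 10% annual inflation relative to price stability, as a fraction of income, is $w(r+0.10)-w(r)$; the 1% figure reported in the abstract corresponds to a particular calibration of $A$ and $r$.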
Abstract:
My dissertation focuses on dynamic aspects of coordination processes such as the reversibility of early actions, the option to delay decisions, and learning about the environment from observing other people's actions. This study proposes the use of tractable dynamic global games, in which players privately and passively learn about the true payoffs of their actions and can adjust early investment decisions as new information arrives, to investigate the consequences of liquidity shocks for the performance of a Tobin tax as a policy intended to foster coordination success (chapter 1), and the adequacy of a Tobin tax as a tool to reduce an economy's vulnerability to sudden stops (chapter 2). It then analyzes players' incentives to acquire costly information in a sequential decision setting (chapter 3). In chapter 1, a continuum of foreign agents decide whether or not to enter an investment project. A fraction λ of them are hit by liquidity restrictions in a second period and are either forced to withdraw early investment or precluded from investing in the interim period, depending on the actions they chose in the first period. Players not affected by the liquidity shock are able to revise early decisions. Coordination success is increasing in aggregate investment and decreasing in the aggregate volume of capital exit. Without liquidity shocks, aggregate investment is (in a pivotal contingency) invariant to frictions such as a tax on short-term capital. In this case, a Tobin tax always increases the incidence of success. In the presence of liquidity shocks, this invariance result no longer holds in equilibrium. A Tobin tax becomes harmful to aggregate investment, which may reduce the incidence of success if the economy does not benefit enough from avoiding capital reversals. It is shown that the Tobin tax that maximizes the ex-ante probability of successfully coordinated investment is decreasing in the liquidity shock. Chapter 2 studies the effects of a Tobin tax in the same setting as the global game model of chapter 1, except that the liquidity shock is stochastic, i.e., there is also aggregate uncertainty about the extent of the liquidity restrictions. It identifies conditions under which, in the unique equilibrium of the model with a low probability of liquidity shocks but large dry-ups, a Tobin tax is welfare improving, helping agents coordinate on the good outcome. The model provides a rationale for a Tobin tax in economies that are prone to sudden stops. The optimal Tobin tax tends to be larger when capital reversals are more harmful and when the fraction of agents hit by liquidity shocks is smaller. Chapter 3 focuses on information acquisition in a sequential decision game with payoff complementarity and information externality. When information is cheap relative to players' incentive to coordinate actions, only the first player chooses to process information; the second player learns about the true payoff distribution from observing the first player's decision and follows her action. Miscoordination requires that both players privately process information, which tends to happen when information is expensive and the prior knowledge about the payoff distribution has a large variance.
Abstract:
Illustration from the game "Ortotetris (http://www.loa.sead.ufscar.br/ortotetris.html)", developed by the team of the Learning Objects Laboratory of the Universidade Federal de São Carlos (LOA/UFSCar).
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Optimization techniques known as metaheuristics have achieved success in solving many problems classified as NP-hard. These methods use non-deterministic approaches that reach very good solutions but do not guarantee the determination of the global optimum. Beyond the inherent difficulties related to the complexity that characterizes optimization problems, metaheuristics still face the exploration/exploitation dilemma, which consists of choosing between a greedy search and a wider exploration of the solution space. One way to guide such algorithms during the search for better solutions is to supply them with more knowledge of the problem through an intelligent agent able to recognize promising regions and to identify when the search direction should be diversified. Accordingly, this work proposes the use of a Reinforcement Learning technique, the Q-learning algorithm, as an exploration/exploitation strategy for the GRASP (Greedy Randomized Adaptive Search Procedure) and Genetic Algorithm metaheuristics. The GRASP metaheuristic uses Q-learning instead of the traditional greedy-randomized algorithm in the construction phase. This replacement aims to improve the quality of the initial solutions used in the local search phase of GRASP, and it also provides the metaheuristic with an adaptive memory mechanism that allows the reuse of good previous decisions and avoids the repetition of bad ones. In the Genetic Algorithm, the Q-learning algorithm was used to generate an initial population of high fitness and, after a determined number of generations in which the diversity rate of the population falls below a certain limit L, it is also applied to supply one of the parents used in the genetic crossover operator. Another significant change in the hybrid genetic algorithm is the proposal of a mutually interactive cooperation process between the genetic operators and the Q-learning algorithm. In this interactive/cooperative process, the Q-learning algorithm receives an additional update of the matrix of Q-values based on the current best solution of the Genetic Algorithm. The computational experiments presented in this thesis compare the results obtained with traditional implementations of the GRASP metaheuristic and the Genetic Algorithm with those obtained using the proposed hybrid methods. Both algorithms were applied successfully to the symmetric Traveling Salesman Problem, which was modeled as a Markov decision process.
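As a rough, hypothetical sketch of the idea (not the thesis's implementation; the instance, parameters and function names below are illustrative), a GRASP for the symmetric TSP could replace the greedy-randomized construction with an epsilon-greedy choice over Q-values learned with the standard Q-learning update, where states are cities, actions are the next city to visit, and the reward is the negative edge length:

```python
import random
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def build_tour(cities, Q, alpha=0.4, gamma=0.8, eps=0.2):
    """Construction phase: pick the next city epsilon-greedily over Q-values,
    updating Q with the Q-learning rule along the way (reward = -edge length)."""
    n = len(cities)
    start = random.randrange(n)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        s = tour[-1]
        if random.random() < eps:                        # explore
            a = random.choice(list(unvisited))
        else:                                            # exploit learned values
            a = max(unvisited, key=lambda j: Q[s][j])
        r = -dist(cities[s], cities[a])
        best_next = max((Q[a][j] for j in unvisited - {a}), default=0.0)
        Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
        tour.append(a)
        unvisited.remove(a)
    return tour

def tour_length(tour, cities):
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(tour, cities):
    """Improvement phase: plain 2-opt local search."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, cities) < tour_length(tour, cities):
                    tour, improved = cand, True
    return tour

def grasp_qlearning(cities, iterations=30):
    """GRASP loop: Q is shared across iterations, acting as adaptive memory."""
    Q = [[0.0] * len(cities) for _ in cities]
    best = None
    for _ in range(iterations):
        tour = two_opt(build_tour(cities, Q), cities)
        if best is None or tour_length(tour, cities) < tour_length(best, cities):
            best = tour
    return best, tour_length(best, cities)

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(25)]
    _, length = grasp_qlearning(pts)
    print(f"best tour length: {length:.3f}")
```

Because the Q-value matrix persists across GRASP iterations, good edge choices from earlier restarts bias later constructions, which is the adaptive-memory role described above.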
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico
Abstract:
According to clinical and pre-clinical studies, oxidative stress and its consequences may be the cause of, or at least a contributing factor to, a large number of neurodegenerative diseases. These diseases include common and debilitating disorders characterized by progressive and irreversible loss of neurons in specific regions of the brain. The most common neurodegenerative diseases are Parkinson's disease, Huntington's disease, Alzheimer's disease and amyotrophic lateral sclerosis. Coenzyme Q(10) (CoQ(10)) has been extensively studied since its discovery in 1957. It is a component of the electron transport chain and participates in aerobic cellular respiration, generating energy in the form of adenosine triphosphate (ATP). The ability of CoQ(10) to act as an antioxidant or a pro-oxidant suggests that it also plays an important role in modulating cellular redox status under physiological and pathological conditions, as well as in the ageing process. In several animal models of neurodegenerative diseases, CoQ(10) has shown beneficial effects in reducing disease progression. However, further studies are needed to assess the outcome and effectiveness of CoQ(10) before exposing patients to unnecessary health risks at significant cost.
Abstract:
We address the q-deformed generalization of thermodynamic quantities by means of a q-algebra that describes a general algebra for bosons and fermions. The motivation for our study stems from an interest in strengthening our initial ideas and from a possible experimental application. Along the way, we employ a generalization of the recently proposed q-calculus formalism, based on a generalized sequence described by two independent positive real deformation parameters, q1 and q2, known as the Fibonacci oscillators. We apply the formalism to the well-known problem of Landau diamagnetism immersed in a D-dimensional space, a problem which by its nature still generates good discussion; its dependence on the number of dimensions D will allow us in the future to extend the application to extra-dimensional systems in areas such as modern cosmology, particle physics and string theory. We compare our results with some obtained experimentally. We also use the oscillator formalism for the Einstein and Debye solids, strengthening the interpretation of the q-deformation as a factor of disturbance or impurity that modifies the properties of a given system. Our results show that the insertion of two disorder parameters allows a wider range of adjustment, i.e., it makes it possible to change only the desired property, e.g., the thermal conductivity of a given element, without altering its essence.
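For reference, one common convention in the Fibonacci-oscillator literature (not necessarily the notation adopted in this work) replaces the occupation number $n$ by the two-parameter basic number
\[
[n]_{q_1,q_2} \;=\; \frac{q_1^{2n}-q_2^{2n}}{q_1^{2}-q_2^{2}},
\]
which reduces to the usual single-parameter q-number when one of the parameters is set to 1 and to the ordinary $n$ in the limit $q_1=q_2=1$; q-deformed thermodynamic quantities then follow from using $[n]_{q_1,q_2}$ in place of $n$ in the statistical sums.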
Abstract:
This chapter of the "Flavor in the era of LHC" workshop report discusses flavor-related issues in the production and decays of heavy states at the LHC at high momentum transfer Q, both from the experimental and the theoretical perspective. We review top quark physics, and discuss the flavor aspects of several extensions of the standard model, such as supersymmetry, little Higgs models or models with extra dimensions. This includes discovery aspects, as well as the measurement of several properties of these heavy states. We also present publicly available computational tools related to this topic.
Abstract:
Seismic wave dispersion and attenuation studies have become an important tool for lithology and fluid discrimination in hydrocarbon reservoirs. The processes associated with attenuation are complex and are encapsulated in a single quantitative description called the quality factor (Q). The present dissertation has the objective of comparing different approaches to Q determination and is divided in two parts. First, we performed performance and robustness tests of three different approaches for Q determination in the frequency domain: peak shift, centroid shift and spectral ratio. All tests were performed on a three-layered model. In the suite of tests performed here, we varied the thickness, Q and inclination of the layers for propagating pulses with central frequencies of 30, 40 and 60 Hz. We found that the centroid shift method produces robust results for the entire suite of tests. Second, we inverted for Q values using the peak and centroid shift methods with a sequential grid search algorithm. In this case, the centroid shift method also produced more robust results than the peak shift method, despite converging more slowly.
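As a minimal, self-contained illustration of one of these frequency-domain approaches (the spectral-ratio method), the sketch below estimates Q from the slope of the log spectral ratio between a synthetic Ricker pulse and its attenuated copy; the signals, frequency band and function names are hypothetical and are not taken from the dissertation:

```python
import numpy as np

def estimate_q_spectral_ratio(s1, s2, dt_travel, fs, fmin=10.0, fmax=80.0):
    """Spectral-ratio Q estimate: ln|S2(f)/S1(f)| = -pi*f*dt/Q + const,
    so Q follows from the slope of the log spectral ratio versus frequency."""
    n = len(s1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    a1 = np.abs(np.fft.rfft(s1))
    a2 = np.abs(np.fft.rfft(s2))
    band = (freqs >= fmin) & (freqs <= fmax) & (a1 > 0)
    log_ratio = np.log(a2[band] / a1[band])
    slope, _ = np.polyfit(freqs[band], log_ratio, 1)   # linear fit in the band
    return -np.pi * dt_travel / slope                  # Q from the slope

if __name__ == "__main__":
    # Synthetic example: a 40 Hz Ricker wavelet and a copy attenuated with Q = 50
    # over an extra travel time of 0.2 s.
    fs, q_true, dt_travel, f0 = 1000.0, 50.0, 0.2, 40.0
    t = np.arange(-0.25, 0.25, 1.0 / fs)
    ricker = (1 - 2 * (np.pi * f0 * t) ** 2) * np.exp(-(np.pi * f0 * t) ** 2)
    spec = np.fft.rfft(ricker)
    freqs = np.fft.rfftfreq(len(ricker), d=1.0 / fs)
    attenuated = np.fft.irfft(spec * np.exp(-np.pi * freqs * dt_travel / q_true),
                              n=len(ricker))
    q_est = estimate_q_spectral_ratio(ricker, attenuated, dt_travel, fs)
    print(f"estimated Q: {q_est:.1f}")
```

The peak-shift and centroid-shift approaches compared in the dissertation work from the same amplitude spectra but track, respectively, the downward drift of the dominant frequency and of the spectral centroid instead of fitting the log ratio.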