7 results for Relaxed clock

in the Digital Repository of Fundação Getúlio Vargas - FGV


Relevance:

10.00%

Abstract:

This paper contributes to the debate on whether the Brazilian public debt is sustainable in the long run by considering threshold effects on the Brazilian budget deficit. Using data from 1947 to 1999 and a threshold autoregressive model, we find evidence of delays in fiscal stabilization. As suggested in Alesina (1991), delayed stabilizations reflect the existence of political constraints blocking deficit cuts, which are relaxed only when the budget deficit reaches a sufficiently high level, deemed to be unsustainable. In particular, our results suggest that, in the absence of seignorage, fiscal authorities intervene to reduce the deficit only when the increase in the budget deficit reaches 1.74% of GDP. If seignorage is allowed, the threshold increases to 2.2%, suggesting that seignorage makes the government more tolerant of fiscal imbalances.
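The threshold mechanism described in this abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's estimation: it simulates a toy deficit-change series governed by a two-regime autoregressive rule (with an arbitrary threshold of 0.5, not the estimated 1.74%) and then recovers the threshold by grid search over the two-regime sum of squared residuals, the standard way to fit a TAR model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an illustrative deficit-change series (NOT the paper's data):
# stabilization (mean reversion) kicks in only above a threshold tau.
n, tau = 300, 0.5
y = np.zeros(n)
for t in range(1, n):
    rho = 0.5 if y[t - 1] <= tau else -0.4  # adjustment only above tau
    y[t] = rho * y[t - 1] + rng.normal(scale=0.5)

def fit_tar(y, candidates):
    """Grid-search the threshold minimizing the two-regime SSR of an AR(1)."""
    best_ssr, best_tau = np.inf, None
    x, z = y[1:], y[:-1]
    for c in candidates:
        lo, hi = z <= c, z > c
        if lo.sum() < 10 or hi.sum() < 10:  # require enough obs per regime
            continue
        ssr = 0.0
        for m in (lo, hi):
            rho_hat = (z[m] @ x[m]) / (z[m] @ z[m])  # OLS slope per regime
            ssr += ((x[m] - rho_hat * z[m]) ** 2).sum()
        if ssr < best_ssr:
            best_ssr, best_tau = ssr, c
    return best_tau

tau_hat = fit_tar(y, np.quantile(y, np.linspace(0.15, 0.85, 71)))
print(round(float(tau_hat), 2))
```

Restricting candidate thresholds to interior quantiles (here the 15th to 85th percentiles) is a common practical safeguard so each regime retains enough observations for estimation.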

Relevance:

10.00%

Abstract:

Behavioral Finance theory emerges as a new approach to financial markets, arguing that some events can be better explained once the constraints of investor rationality are relaxed. Concepts from psychology and limits to arbitrage are used to model the inefficiencies, creating the idea that it is possible to systematically beat the market. This work proposes a new model, deliberately simple in its implementation, to exploit simultaneously the abnormal returns arising from momentum and mean-reversion strategies. The idea of a long-run momentum effect stronger than the short-run one is introduced, but the empirical results show that the dynamics of the Brazilian market reject this concept. The model fails to deliver positive, risk-free returns.
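A combined momentum/mean-reversion score of the kind the dissertation explores can be sketched as follows. Everything here is hypothetical: the simulated assets, the `signal` function, and the 21/252-day horizons are illustrative choices, not the model actually tested on Brazilian data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative price paths for 5 hypothetical assets (not Brazilian market data).
rets = rng.normal(0.0, 0.02, size=(500, 5))
prices = 100 * np.exp(np.cumsum(rets, axis=0))

def signal(prices, short=21, long=252):
    """Combine long-horizon momentum with short-horizon reversal:
    favor assets with strong past-year gains but weak recent gains."""
    mom_long = prices[-1] / prices[-long] - 1.0    # ~1-year momentum
    mom_short = prices[-1] / prices[-short] - 1.0  # ~1-month move (reversal leg)
    return mom_long - mom_short

s = signal(prices)
ranks = np.argsort(s)  # candidate longs at the top, shorts at the bottom
print(ranks)
```

A long-short portfolio would buy the highest-ranked assets and sell the lowest-ranked; the abstract reports that, on Brazilian data, this style of strategy failed to deliver positive risk-free returns.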

Relevance:

10.00%

Abstract:

The sociology of organizations is still carried out under time-free analytical schemes (CLARK, 1985; HASSARD, 2000; GIDDENS, 2003). Yet temporal issues permeate every organization, making the concept of time of central importance for organizational studies. On this basis, this research set out to address the temporal dimension of work in organizations from the perspective of individuals. Since middle managers experience a double source of pressure, originating from top management and from the operational level of the organization, we decided to investigate how middle managers experience time at work. To uncover their temporal experience, interviews with 20 middle-management professionals working at companies operating in the city of São Paulo were analyzed using content analysis. The research material was collected through semi-structured, in-depth interviews. The analysis suggests that although all the interviewed professionals define time at work as an economic resource whose use should be optimized to the fullest, the temporal experience of middle managers is not homogeneous. Environmental factors common to all interviewees tend to bring their temporal experiences closer together. These factors, namely time compression, the sense of urgency, new technologies, characteristics intrinsic to the managerial role, and the organization of themselves, their companies, and their staff, are associated with the contemporary economic and social scenario. However, characteristics related to age, gender, values, and personal experiences, as well as the company's industry, also shape middle managers' temporal experience and contribute to diversifying the ways they experience and cope with time pressures.
In short, despite the shared environmental factors, notably the growing compression of time, human nature and the impermanence of social phenomena lay bare the complexity of middle managers' temporal experience at work. They also reveal that, despite the homogeneity, objectivity, and linearity represented by the clock, the icon of time in contemporary Western societies, heterogeneity, subjectivity, and cyclicality are part of workers' temporal experience.

Relevance:

10.00%

Abstract:

Aiming to better understand how what we call the "subjective aspects" of architectural design are evaluated, we selected and interviewed 7 architects, chosen for their work both in professional practice and in teaching. We first present some of the most important theoretical positions on evaluation, also seeking to show the sociological perspective on the subject. From the outset we observed diverse, divergent, and even conflicting points of view among the selected authors, which warned us that we might be entering contentious territory. As our working method, we interviewed the professors under the same conditions for all of them (a calm, relaxed setting, without haste or interruptions), at which point a set of questions was put to them. The answers were recorded on a tape recorder kept in plain view on the table. We assured the interviewees that their statements would not be attributed to them, thereby encouraging greater "fluency" in the answers and perhaps a little more "boldness" in the claims made. Then, starting from the various statements, we sought to highlight each interviewee's view of evaluation, and subsequently to narrow that analysis, focusing more specifically on the objective and subjective aspects in the evaluation of architectural designs. We conclude that, despite the many ways of evaluating, and even of not evaluating, design work, there are various attitudes, systems, methods, and so on, some even conflicting, employed by the different professors, which nevertheless form a common ground, a similar stance, among them. This identity stems from the finding that the interviewees were unanimous in seeing art as the result of the combination of cultural and ideological ingredients arising mainly from material conditions at a given historical moment.
In Architecture, all the more so because it is a utilitarian art, this "ideological engagement" becomes inevitable and, consequently, its evaluation, whether overt or underlying, will be made through that lens.

Relevance:

10.00%

Abstract:

Local provision of public services has the positive effect of increasing efficiency because each locality has its idiosyncrasies that determine a particular demand for public services. This dissertation addresses different aspects of the local demand for public goods and services and their relationship with political incentives. The text is divided into three essays. The first essay aims to test the existence of yardstick competition in education spending using panel data from Brazilian municipalities. The essay estimates two-regime spatial Durbin models with time and spatial fixed effects using maximum likelihood, where the regimes represent different electoral and educational accountability institutional settings. First, it is investigated whether lame-duck incumbents tend to engage in less strategic interaction as a result of the impossibility of reelection, which lowers the incentives for them to signal their type (good or bad) to the voters by mimicking their neighbors' expenditures. Additionally, it is evaluated whether the lack of electorate support faced by minority governments causes the incumbents to mimic the neighbors' spending to a greater extent to increase their odds of reelection. Next, the essay estimates the effects of the institutional change introduced by the disclosure in April 2007 of the Basic Education Development Index (known as IDEB) and its goals on strategic interaction at the municipality level. This institutional change potentially increased the incentives for incumbents to follow the national best practices in an attempt to signal their type to voters, thus reducing the importance of local information spillover. The same model is also tested using school inputs that are believed to improve students' performance in place of education spending. The results show evidence for yardstick competition in education spending. Spatial autocorrelation is lower among the lame ducks and higher among the incumbents with minority support (a smaller vote margin). In addition, the institutional change introduced by the IDEB reduced the spatial interaction in education spending and input-setting, thus diminishing the importance of local information spillover. The second essay investigates the role played by the geographic distance between the poor and non-poor in the local demand for income redistribution. In particular, the study provides an empirical test of the geographically limited altruism model proposed in Pauly (1973), incorporating the possibility of participation costs associated with the provision of transfers (Van de Walle, 1998). First, the discussion is motivated by allowing for an "iceberg cost" of participation in the programs for the poor individuals in Pauly's original model. Next, using data from the 2000 Brazilian Census and a panel of municipalities based on the National Household Sample Survey (PNAD) from 2001 to 2007, all the distance-related explanatory variables indicate that increased proximity between poor and non-poor is associated with better targeting of the programs (demand for redistribution). For instance, a 1-hour increase in the time the poor spend commuting reduces targeting by 3.158 percentage points. This result is similar to that of Ashworth, Heyndels and Smolders (2002) but is definitely not due to program leakages. To empirically disentangle participation costs from spatially restricted altruism effects, an additional test is conducted using unique panel data based on the 2004 and 2006 PNAD, which assess the number of benefits and the average benefit value received by beneficiaries. The estimates suggest that both cost and altruism play important roles in targeting determination in Brazil, and thus in the determination of the demand for redistribution. Lastly, the results indicate that 'size matters'; i.e., the budget for redistribution has a positive impact on targeting. The third essay aims to empirically test the validity of the median voter model for the Brazilian case. Information on municipalities is obtained from the Population Census and the Brazilian Supreme Electoral Court for the year 2000. First, the median voter's demand for local public services is estimated. The bundles of services offered by reelection candidates are identified as the expenditures realized during incumbents' first term in office. The assumption that candidates have perfect information about the median demand is relaxed, and a weaker hypothesis of rational expectations is imposed. Incumbents thus make mistakes about the median demand, referred to as misperception errors. At a given point in time, then, incumbents can provide a bundle (given by the amount of expenditures per capita) that differs from the median voter's demand for public services by a multiplicative error term, which is included in the residuals of the demand equation. Next, the impact of the absolute value of this misperception error on the electoral performance of incumbents is estimated using a selection model. The result suggests that the median voter model is valid for the case of Brazilian municipalities.
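The strategic-interaction regressions in the first essay hinge on the spatial lag Wy, the weighted average of neighbors' spending that enters the spatial Durbin specification y = ρWy + Xβ + WXθ + ε. A minimal sketch of that building block, using a made-up contiguity matrix for four hypothetical municipalities (not the dissertation's data or weights):

```python
import numpy as np

# Toy contiguity structure for 4 hypothetical municipalities (illustrative only):
# W[i, j] = 1 if i and j are neighbors, 0 otherwise.
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
W /= W.sum(axis=1, keepdims=True)  # row-normalize: each row sums to 1

# Hypothetical per-capita education spending in each municipality.
spending = np.array([120.0, 95.0, 110.0, 130.0])

# Spatial lag: each municipality's neighbors' average spending, the key
# regressor for detecting yardstick competition (mimicking of neighbors).
Wy = W @ spending
print(Wy)  # [102.5 125.  125.  102.5]
```

Row normalization makes the spatial lag interpretable as the neighbors' average, which is the usual convention in yardstick-competition studies.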

Relevance:

10.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we can model the heteroskedasticity of a linear combination of the errors. We show that this assumption can be satisfied without imposing strong assumptions on the errors in common DID applications. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative inference method that relies on strict stationarity and ergodicity of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference methods to linear factor models when there are few treated groups. 
We also derive conditions under which a permutation test for the synthetic control estimator proposed by Abadie et al. (2010) is robust to heteroskedasticity, and we propose a modification of the test statistic that provided a better heteroskedasticity correction in our simulations.
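The placebo logic referenced in this abstract can be illustrated with a toy permutation exercise: reassign the "treated" label to each group in turn and ask how extreme the actual DID estimate is relative to the placebo distribution. This is a generic Fisher-style sketch under an assumed data-generating process, not the authors' corrected inference method or statistic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative group-level means: 21 groups, 2 periods, group 0 "treated".
# The true treatment effect is zero, so the null hypothesis holds here.
G, effect = 21, 0.0
pre = rng.normal(size=G)
post = pre + rng.normal(scale=0.5, size=G)
post[0] += effect

did = post - pre                      # each group's pre/post change
stat = did[0] - did[1:].mean()        # simple DID estimate for the treated group

# Placebo permutation: pretend each group g was the treated one and
# recompute the estimate; the p-value is the share of placebo estimates
# at least as extreme as the actual one.
placebos = np.array([did[g] - np.delete(did, g).mean() for g in range(G)])
p_value = (np.abs(placebos) >= abs(stat)).mean()
print(round(float(p_value), 3))
```

The abstract's point is that tests of this family can over- or under-reject under heteroskedasticity (e.g., when group sizes differ), which motivates the proposed corrections.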

Relevance:

10.00%

Abstract:

Differences-in-Differences (DID) is one of the most widely used identification strategies in applied economics. However, how to draw inferences in DID models when there are few treated groups remains an open question. We show that the usual inference methods used in DID models might not perform well when there are few treated groups and errors are heteroskedastic. In particular, we show that when there is variation in the number of observations per group, inference methods designed to work when there are few treated groups tend to (under-) over-reject the null hypothesis when the treated groups are (large) small relative to the control groups. This happens because larger groups tend to have lower variance, generating heteroskedasticity in the group x time aggregate DID model. We provide evidence from Monte Carlo simulations and from placebo DID regressions with the American Community Survey (ACS) and the Current Population Survey (CPS) datasets to show that this problem is relevant even in datasets with large numbers of observations per group. We then derive an alternative inference method that provides accurate hypothesis testing in situations where there are few treated groups (or even just one) and many control groups in the presence of heteroskedasticity. Our method assumes that we know how the heteroskedasticity is generated, which is the case when it is generated by variation in the number of observations per group. With many pre-treatment periods, we show that this assumption can be relaxed. Instead, we provide an alternative application of our method that relies on assumptions about stationarity and convergence of the moments of the time series. Finally, we consider two recent alternatives to DID when there are many pre-treatment periods. We extend our inference method to linear factor models when there are few treated groups.
We also propose a permutation test for the synthetic control estimator that provided a better heteroskedasticity correction in our simulations than the test suggested by Abadie et al. (2010).