Abstract:
Over the last several years, lawmakers have been responding to several highly publicized child abduction, assault, and murder cases. While such cases remain rare in Iowa, the public debates they have generated are having far-reaching effects. Policy makers are responsible for controlling the nature of those effects. The challenges they face stem from the need to avoid primarily politically motivated responses and from the desire to make informed decisions that recognize both the strengths and the limitations of the criminal justice system as a vehicle for promoting safe and healthy families and communities. At its first meeting, the Task Force reached consensus that one of its standing goals is to provide nonpartisan guidance to help avoid or fix problematic sex offense policies and practices. Setting this goal was a response to concern over what can result when elected officials respond to sex offender-related issues that easily become emotionally laden and politically charged because of the universally held abhorrence of sex crimes against children. The meetings of the Task Force and of the various work groups it has formed have included some spirited and at times emotionally charged discussions, despite the above-stated ground rule. However, as described in the report, the Task Force's first set of recommendations and plans for further study were approved by consensus. It is hoped that in upcoming legislative deliberations, it will be remembered that the non-legislative members of the Task Force all agreed on the recommendations contained in this report. The topics discussed in this first report from the Task Force are limited to the study issues specifically named in H.F. 619, the Task Force's enabling legislation. However, other topics of concern were discussed by the Task Force because of their immediacy or because of their possible relationships with one or more of the Task Force's mandated study issues. For example, some probation/parole officers and others have reported that the 2,000-foot rule has had a negative influence on treatment participation and supervision compliance. While such concerns were noted, the Task Force did not take it upon itself to investigate them at this time, which would have broadened the agenda it was given by the General Assembly last session. As a result, the recently reinstated 2,000-foot rule, the new cohabitation/child endangerment law, and other issues of interest to Task Force members but outside the scope of their charge are not discussed in the body of this report. The issue of perhaps the greatest interest to most Task Force members that was not part of their charge was a belief in the benefit of viewing Iowa's efforts to protect children from sex crimes from as comprehensive a platform as possible. It has been suggested that much more can be done to prevent child-victim sex crimes than would be accomplished by concentrating only on what to do with offenders after a crime has occurred. To prevent child victimization, the policy provisions of H.F. 619 rely largely on incapacitation and on the future deterrent effects of increased penalties, more restrictive supervision practices, and greater public awareness of the risk presented by a segment of Iowa's known sex offenders.
For some offenders, these policies will no doubt prevent future sex crimes against children, and the Task Force has begun long-term studies to look for the desired results and for ways to improve those results through better supervision tools and more effective offender treatment. Unfortunately, many of the effects of the new policies may primarily influence persons who have already committed sex offenses against minors and who have already been caught doing so. Task Force members discussed the need for a range of preventive efforts and the need to think about sex crimes against children from other than just a "reaction-to-the-offender" perspective. While this topic is not addressed in the report that follows, it was suggested that some of the Task Force's discussions could be briefly shared through these opening comments. Along with incapacitation and deterrence, comprehensive approaches to the prevention of child-victim sex crimes would also involve making sure parents have the tools they need to detect signs of adults with sexual behavior problems, to help teach their children about warning signs, and to find the support they need for healthy parenting. School, faith-based, and other community organizations might benefit from stronger supports and better tools they can use to more effectively promote positive youth development and the learning of respect for others, respect for boundaries, and healthy relationships. All of us who have children, or who live in communities where there are children, need to understand the limitations of our justice system and the importance of our own ability to play a role in preventing sexual abuse and protecting children from sex offenders, who are often members of the child's own family. Over 1,000 incidents of child sexual abuse are confirmed or founded each year in Iowa, and most such acts take place in the child's home or the residence of the child's caretaker. Efforts to prevent child sexual abuse and to provide early interventions for children and families at risk could be strategically examined and strengthened. The Sex Offender Treatment and Supervision Task Force was established to provide assistance to the General Assembly. It will respond to legislative direction in adjusting the future plans laid out in this report. Those plans could be adjusted to broaden or narrow its scope or to assign different priority levels to its current areas of study. Further Task Force consideration of the recommendations it has already submitted could also be called for. In the meantime, it is hoped that the information and recommendations submitted through this report prove helpful.
Abstract:
According to the Taylor principle, a central bank should adjust the nominal interest rate by more than one-for-one in response to changes in current inflation. Most of the existing literature supports the view that by following this simple recommendation a central bank can avoid being a source of unnecessary fluctuations in economic activity. The present paper shows that this conclusion is not robust with respect to the modelling of capital accumulation. We use our insights to discuss the desirability of alternative interest rate rules. Our results suggest a reinterpretation of monetary policy under Volcker and Greenspan: an empirically plausible characterization of monetary policy can explain the stabilization of macroeconomic outcomes observed for the US economy in the early eighties; the Taylor principle in itself cannot.
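For reference, a conventional statement of the rule behind the principle, written in notation assumed here rather than taken from the paper:

```latex
% A standard Taylor-type rule (illustrative notation):
%   i_t   nominal interest rate      \pi_t  current inflation
%   r^*   equilibrium real rate      \pi^*  inflation target
%   y_t   output gap
\[
  i_t = r^* + \pi^* + \phi_\pi (\pi_t - \pi^*) + \phi_y \, y_t ,
  \qquad \phi_\pi > 1 .
\]
% The Taylor principle is the restriction \phi_\pi > 1: the nominal
% rate moves more than one-for-one with current inflation.
```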
Abstract:
General Introduction. This thesis can be divided into two main parts: the first one, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or non-reciprocal, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access of PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large degree. Part I. In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Celine Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows. First, in the case of PANEURO, the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also help move regionalism toward more openness and hence make it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs by a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs.
The methodology for this exercise is as follows. In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs. In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines (a schematic sketch of these three steps is given at the end of Part I below). This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and uniform tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of good value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the EU's Europe Agreements, with their current sets of RoOs, could be a model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "Single List" RoOs (used since 1997). Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly allocate sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU's RoOs. The estimates suggest that much of the market access conferred by the EAs (outside sensitive sectors) was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession.
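The fragment below is a minimal, self-contained sketch of the three-step exercise described for chapter 2. The data are synthetic, and the logit functional form, variable names and parameter values are assumptions made for illustration, not taken from the thesis.

```python
import numpy as np
from scipy.special import logit, expit
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                    # tariff lines
pref = rng.uniform(0.0, 0.15, n)           # preference margin per line
roo = rng.integers(1, 8, n).astype(float)  # R-index of restrictiveness (1-7)
weight = rng.lognormal(10.0, 1.0, n)       # trade value per line

# Utilization rises with the preference margin, falls with restrictiveness
u = expit(-0.5 + 25.0 * pref - 0.4 * roo + rng.normal(0.0, 0.3, n))
u = u.clip(0.01, 0.99)

# Step 1: estimate the utilization / preference / RoO relationship
X = sm.add_constant(np.column_stack([pref, roo]))
b0, b_pref, b_roo = sm.OLS(logit(u), X).fit().params

# Step 2: invert line by line for the restrictiveness level (read as an
# MFC equivalent) that reproduces each line's observed utilization rate
equiv = (logit(u) - b0 - b_pref * pref) / b_roo

# Step 3: a trade-weighted average across lines gives a single system-wide
# number, a candidate level for a uniform instrument
print(f"trade-weighted uniform equivalent: {np.average(equiv, weights=weight):.2f}")
```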
Part II. The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and Negative Binomial regressions, the count of revocations of AD measures is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; and (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases. Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
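As a rough illustration of the count-data leg of this test, the sketch below regresses a synthetic series of revocations on initiations lagged five years, interacted with a post-agreement dummy. All series, names and magnitudes are invented for the example; they are not the thesis's data or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
df = pd.DataFrame({"year": np.arange(1980, 2006)})
df["initiations"] = rng.poisson(30, len(df))
df["init_lag5"] = df["initiations"].shift(5)
df["post_ada"] = (df["year"] >= 1995).astype(int)

# Synthetic revocations: tied to five-year-old initiations only post-agreement
lam = np.exp(0.8 + 0.02 * df["init_lag5"].fillna(0) * df["post_ada"])
df["revocations"] = rng.poisson(lam)

d = df.dropna().copy()
d["lag5_x_post"] = d["init_lag5"] * d["post_ada"]
X = sm.add_constant(d[["init_lag5", "post_ada", "lag5_x_post"]])
fit = sm.GLM(d["revocations"], X, family=sm.families.Poisson()).fit()
print(fit.params)
# A strict five-year cycle would make revocations track initiations lagged
# five years one-for-one; the thesis finds coefficients far short of that
# benchmark even after the agreement.
```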
Abstract:
Introduction: Population ageing is a worldwide phenomenon that forces us to make radical changes at multiple levels of society. So far, studies have concluded that the health, both physical and mental, of prisoners in general and older prisoners in particular is worse than that of the general population. Prisoners are reported to age faster than adults in the community. However, to date, very little is known about the actual healthcare conditions of older prisoners, and almost no substantial knowledge is available concerning their patterns of healthcare use. Method: A quantitative study was conducted in four prisons for male prisoners in Switzerland, including two open and two closed prisons situated in different cantons. In this study, medical records of older prisoners (50+) were obtained from the respective authority with consent, and total anonymity was ensured. Data gathered from all available medical records included basic demographic information, education and prison sentencing. The healthcare data obtained were extensive, encompassing illness types, the number of visits to different healthcare providers and hospitals, and the corresponding reasons for and outcomes of these visits. All data were analysed using the statistical software SPSS 20.0. Results: Data were extracted for a total of 50 older prisoners living in Switzerland. The chosen prisons are located in German-speaking cantons. Preliminary results show that the average age was 56 years. For more than half, this was their first imprisonment. Nevertheless, a third of them were sentenced to measures (Art. 64 Swiss Criminal Code), which means that the length of the detention is indefinite; while release is possible, it is in most cases not very likely. This entails that these prisoners will grow old in prison and some will even spend their remaining years there. Concerning their health, a third of the sample reported respiratory and cardiovascular illnesses, and half reported suffering from some form of musculoskeletal pain. Older prisoners were prescribed on average only 3.5 medications, significantly fewer than the number of medications prescribed to younger prisoners, whose data were also sampled. Conclusion: Access to healthcare is a right granted to all prisoners through the principle of equivalence, which is generally exercised in Switzerland. Prisoners growing old in prison will represent a challenge for prison healthcare services.
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work were based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measures.
Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this cooperation. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on how individual payoffs are calculated.
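The structural measures discussed above (clustering coefficient, path length, degree heterogeneity) can be computed in a few lines. The sketch below does so on a toy Watts-Strogatz small-world graph rather than the thesis's coauthorship data; all parameter values are illustrative.

```python
import networkx as nx

# A small-world graph: a ring lattice with a fraction p of rewired edges
g = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.1, seed=42)

avg_clustering = nx.average_clustering(g)            # local structure
avg_path_len = nx.average_shortest_path_length(g)    # "small-world" closeness
degrees = [d for _, d in g.degree()]                 # degree heterogeneity

print(f"average clustering coefficient: {avg_clustering:.3f}")
print(f"average shortest path length:   {avg_path_len:.2f}")
print(f"degree range (min-max):         {min(degrees)}-{max(degrees)}")
```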
Abstract:
This research paper has been written with the intention of discussing the problem of discipline in Cape Verdean secondary schools. While many of us discuss the effects that student misbehavior has on the student, the school and society as a whole, very few of us seek solutions that would bear on the prevention and management of this problem, which each day becomes more complicated and harder to handle. This paper will discuss the need to better define discipline at the school level and identify the causes and factors that aggravate the problem; in addition, it will provide what I hope are useful strategies to better manage the problem as we make the effort to reclaim our schools and better educate our students. My research included surveys completed by teachers and students alike as they grappled with the question: what is discipline and how can we better manage discipline problems at our schools?
Abstract:
The tendency for public welfare spending to be increasingly aimed at the elderly has been pointed out for the US and other developed countries. While population ageing is a common trend, it is not obvious why the shift in spending exceeds the trend in ageing, or why per capita spending on the elderly increases. We show that this is the case in Spain, identify the losers from this development, discuss the policies that underlie this trend, and propose adjustments based on Musgrave's fixed proportions rule as an inter-generationally fair distribution.
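As a point of reference, Musgrave's fixed proportions rule is commonly stated as holding constant the ratio of per-capita benefits to per-capita net earnings; the notation below is an illustrative rendering, not taken from the paper.

```latex
% Musgrave's fixed-proportions ("fixed relative position") rule, in
% illustrative notation: b_t per-capita benefit of the retired, w_t
% per-capita gross earnings of workers, \tau_t per-worker contribution.
\[
  \frac{b_t}{w_t - \tau_t} = \kappa \qquad \text{for all } t ,
\]
% so demographic or productivity shocks move benefits and net earnings
% proportionally, sharing the adjustment between generations.
```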
Abstract:
This paper tests for the market environment within which US fiscal policy operates; that is, we test for the incompleteness of the US government bond market. We document the stochastic properties of US debt and deficits and then consider the ability of competing optimal tax models to account for this behaviour. We show that when a government pursues an optimal tax policy and issues a full set of contingent claims, the value of debt has the same or less persistence than other variables in the economy and declines in response to higher deficit shocks. By contrast, if governments only issue one-period risk-free bonds (incomplete markets), debt shows more persistence than other variables and increases in response to expenditure shocks. Maintaining the hypothesis of Ramsey behavior, the US data conflict with the complete-markets model.
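The sketch below is a stylized illustration of the two contrasting predictions: it simulates debt under each market structure and compares first-order persistence. The processes and parameters are invented for the example and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 2_000
g = np.zeros(T)                     # expenditure shock, stationary AR(1)
for t in range(1, T):
    g[t] = 0.8 * g[t - 1] + rng.normal()

debt_complete = -2.0 * g            # contingent debt falls when spending is high
debt_incomplete = np.cumsum(0.5 * g + rng.normal(size=T))  # near random walk

def ar1(x):
    """First-order persistence: OLS slope of x_t on x_{t-1}."""
    return np.polyfit(x[:-1], x[1:], 1)[0]

print(f"persistence of spending:          {ar1(g):.2f}")
print(f"persistence of debt (complete):   {ar1(debt_complete):.2f}")
print(f"persistence of debt (incomplete): {ar1(debt_incomplete):.2f}")
```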
Abstract:
We formulate an evolutionary learning process in the spirit of Young (1993a) for games of incomplete information. The process involves trembles. For many games, if the amount of trembling is small, play will be in accordance with the games' (semi-strict) Bayesian equilibria most of the time. This supports the notion of Bayesian equilibrium. Further, often play will most of the time be in accordance with exactly one Bayesian equilibrium. This gives a selection among the Bayesian equilibria. For two specific games of economic interest we characterize this selection. The first is an extension to incomplete information of the prototype strategic conflict known as Chicken. The second is an incomplete-information bilateral monopoly, which is also an extension to incomplete information of Nash's demand game, or a simple version of the so-called sealed-bid double auction. For both games, selection by evolutionary learning is in favor of Bayesian equilibria where some types of players fail to coordinate, such that the outcome is inefficient.
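As a loose illustration of trembling, Young-style adaptive play, the sketch below runs a complete-information simplification on the Chicken game; the paper's incomplete-information machinery is not reproduced, and the payoffs, memory length and tremble rate are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# Chicken payoffs for the row player; actions: 0 = Dare, 1 = Yield
payoff = np.array([[0.0, 7.0],
                   [2.0, 6.0]])
eps = 0.05            # tremble probability
memory = 10           # how much recent play is remembered

history = list(rng.integers(0, 2, memory))  # seed with random past actions
counts = np.zeros(2)
for _ in range(100_000):
    sample = rng.choice(history[-memory:], size=memory)  # sampled beliefs
    beliefs = np.bincount(sample, minlength=2) / memory
    best = int(np.argmax(payoff @ beliefs))              # myopic best reply
    action = int(rng.integers(0, 2)) if rng.random() < eps else best
    history.append(action)
    counts[action] += 1

print("long-run action frequencies (Dare, Yield):", counts / counts.sum())
```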
Abstract:
Intuitively, we think of perception as providing us with direct cognitive access to physical objects and their properties. But this common sense picture of perception becomes problematic when we notice that perception is not always veridical. In fact, reflection on illusions and hallucinations seems to indicate that perception cannot be what it intuitively appears to be. This clash between intuition and reflection is what generates the puzzle of perception. The task and enterprise of unravelling this puzzle took, and still takes, centre stage in the philosophy of perception. The goal of my dissertation is to make a contribution to this enterprise by formulating and defending a new structural approach to perception and perceptual consciousness. The argument for my structural approach is developed in several steps. Firstly, I develop an empirically inspired causal argument against naïve and direct realist conceptions of perceptual consciousness. Basically, the argument says that perception and hallucination can have the same proximal causes and must thus belong to the same mental kind. I emphasise that this insight gives us good reasons to abandon what we are instinctively driven to believe - namely that perception is directly about the outside physical world. The causal argument essentially highlights that the information that the subject acquires in perceiving a worldly object is always indirect. To put it another way, the argument shows that what we, as perceivers, are immediately aware of, is not an aspect of the world but an aspect of our sensory response to it. A view like this is traditionally known as a Representative Theory of Perception. As a second step, emphasis is put on the task of defending and promoting a new structural version of the Representative Theory of Perception; one that is immune to some major objections that have been standardly levelled at other Representative Theories of Perception. As part of this defence and promotion, I argue that it is only the structural features of perceptual experiences that are fit to represent the empirical world. This line of thought is backed up by a detailed study of the intriguing phenomenon of synaesthesia. More precisely, I concentrate on empirical cases of synaesthetic experiences and argue that some of them provide support for a structural approach to perception. The general picture that emerges in this dissertation is a new perspective on perceptual consciousness that is structural through and through.
Abstract:
We lay out a model of wage bargaining with two leading features: bargaining is ex post to relevant investments, and there is individual bargaining in firms without a union. We compare individual ex post bargaining to coordinated ex post bargaining and we analyze the effects on wage formation. As opposed to ex ante bargaining models, the costs of destroying the employment relationship play a crucial role in determining wages. High firing costs in particular yield a rent for employees. Our theory points to an employer size-wage effect that is independent of the production function and market power. We derive a simple least squares specification from the theoretical model that allows us to estimate components of the wage premium from coordination. We reject the hypothesis that labor coordination does not alter the extensive form of the bargaining game. Labor coordination substantially increases bargaining power but decreases labor's ability to pose costly threats to the firm.
Abstract:
Does worker mobility undermine governments' ability to redistribute income? This paper analyzes the experience of US states in recent decades. We build a tractable model where both migration decisions and redistribution policies are endogenous. We calibrate the model to match the skill premium and worker productivity at the state level, as well as the size and skill composition of migration flows. The calibrated model is able to reproduce the large changes in skill composition as well as key qualitative relationships between labor flows and redistribution policies observed in the data. Our results suggest that regional differences in labor productivity are an important determinant of interstate migration. We use the calibrated model to compare the cross-section of redistributive policies with and without worker mobility. The main result of the paper is that interstate migration has induced substantial convergence in tax rates across US states, but no race to the bottom. Skill-biased in-migration has reduced the skill premium and the need for tax-based redistribution in the states that would have had the highest tax rates in the absence of mobility.
Abstract:
We examine monetary policy in the Euro area from both theoretical and empirical perspectives. We discuss what theory tells us the strategy of central banks should be and contrast it with the one employed by the ECB. We review accomplishments (and failures) of monetary policy in the Euro area and suggest changes that would increase the correlation between words and actions, streamline the understanding that markets have of the policy process, and anchor expectation formation more strongly. We examine the transmission of monetary policy shocks in the Euro area and in some potential member countries and try to infer the likely effects when Turkey joins the EU first and the Euro area later. Much of the analysis here warns against having too high expectations of the economic gains that membership in the EU and the Euro club will produce.
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how to best quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, thus improving hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was to first develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrologic parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested in its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well-defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data. Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than other more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the obtained results allow us to have confidence for future developments in integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
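The following is a highly schematic sketch of the kind of simulated-annealing conditional simulation described above: a 2-D porosity field is perturbed until it matches a target spatial statistic while honoring fixed "log" values at two synthetic borehole columns. The grid, target variogram value, cooling schedule and all magnitudes are illustrative assumptions, not the thesis's actual algorithm or data.

```python
import numpy as np

rng = np.random.default_rng(0)
nrow, ncol = 20, 40
field = rng.normal(0.25, 0.05, (nrow, ncol))    # initial porosity guess

# Conditioning data, standing in for neutron porosity logs at two boreholes
cond = {(r, 5): 0.30 for r in range(nrow)} | {(r, 30): 0.20 for r in range(nrow)}
for (r, c), v in cond.items():
    field[r, c] = v

def objective(f):
    # Mismatch between lag-1 horizontal semivariance and an assumed target
    gamma = 0.5 * np.mean((f[:, 1:] - f[:, :-1]) ** 2)
    return (gamma - 5e-4) ** 2

T = 1e-6                                        # initial "temperature"
energy = objective(field)
for _ in range(50_000):
    r, c = rng.integers(nrow), rng.integers(ncol)
    if (r, c) in cond:
        continue                                # never perturb data cells
    old = field[r, c]
    field[r, c] += rng.normal(0.0, 0.01)        # random local perturbation
    new = objective(field)
    if new > energy and rng.random() > np.exp((energy - new) / T):
        field[r, c] = old                       # reject the uphill move
    else:
        energy = new                            # accept
    T *= 0.9999                                 # geometric cooling

print(f"final misfit: {objective(field):.3e}")
```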