81 results for mborayu (the spirit that unites us)
Abstract:
The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study, and much of the work that followed, was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflict situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks could explain why one finds higher levels of cooperation in populations of human beings or animals than classical game theory prescribes. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measures.
Furthermore, I extract and describe its community structure, taking into account the intensity of collaborations. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining it. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamics used and on the way individual payoffs are calculated.
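As a rough illustration of the two network properties invoked above, the sketch below (not part of the thesis; it uses the networkx library with arbitrary sizes) compares the clustering coefficient and mean shortest-path length of a Watts-Strogatz small-world graph with those of a size-matched random graph.

```python
# Illustrative only: clustering and typical distances in a small-world graph
# versus a size-matched random graph (arbitrary parameters, not from the thesis).
import networkx as nx

n, k, p = 1000, 6, 0.1                                      # nodes, neighbours, rewiring probability
sw = nx.watts_strogatz_graph(n, k, p, seed=1)               # small-world model
rnd = nx.gnm_random_graph(n, sw.number_of_edges(), seed=1)  # random graph of the same size

for name, g in [("small-world", sw), ("random", rnd)]:
    if not nx.is_connected(g):                              # measure the giant component only
        g = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name:12s} clustering = {nx.average_clustering(g):.3f}  "
          f"mean path length = {nx.average_shortest_path_length(g):.2f}")
```

The small-world graph typically retains a much higher clustering coefficient at a comparably short mean path length, which is exactly the combination of properties the abstract refers to.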
Abstract:
Intuitively, we think of perception as providing us with direct cognitive access to physical objects and their properties. But this common-sense picture of perception becomes problematic when we notice that perception is not always veridical. In fact, reflection on illusions and hallucinations seems to indicate that perception cannot be what it intuitively appears to be. This clash between intuition and reflection is what generates the puzzle of perception. The task and enterprise of unravelling this puzzle took, and still takes, centre stage in the philosophy of perception. The goal of my dissertation is to make a contribution to this enterprise by formulating and defending a new structural approach to perception and perceptual consciousness. The argument for my structural approach is developed in several steps. Firstly, I develop an empirically inspired causal argument against naïve and direct realist conceptions of perceptual consciousness. Basically, the argument says that perception and hallucination can have the same proximal causes and must thus belong to the same mental kind. I emphasise that this insight gives us good reasons to abandon what we are instinctively driven to believe - namely that perception is directly about the outside physical world. The causal argument essentially highlights that the information that the subject acquires in perceiving a worldly object is always indirect. To put it another way, the argument shows that what we, as perceivers, are immediately aware of is not an aspect of the world but an aspect of our sensory response to it. A view like this is traditionally known as a Representative Theory of Perception. As a second step, emphasis is put on the task of defending and promoting a new structural version of the Representative Theory of Perception; one that is immune to some major objections that have standardly been levelled at other Representative Theories of Perception. As part of this defence and promotion, I argue that it is only the structural features of perceptual experiences that are fit to represent the empirical world. This line of thought is backed up by a detailed study of the intriguing phenomenon of synaesthesia. More precisely, I concentrate on empirical cases of synaesthetic experiences and argue that some of them provide support for a structural approach to perception. The general picture that emerges in this dissertation is a new perspective on perceptual consciousness that is structural through and through.
Abstract:
Accurate characterization of the spatial distribution of hydrological properties in heterogeneous aquifers at a range of scales is a key prerequisite for reliable modeling of subsurface contaminant transport, and is essential for designing effective and cost-efficient groundwater management and remediation strategies. To this end, high-resolution geophysical methods have shown significant potential to bridge a critical gap in subsurface resolution and coverage between traditional hydrological measurement techniques such as borehole log/core analyses and tracer or pumping tests. An important and still largely unresolved issue, however, is how best to quantitatively integrate geophysical data into a characterization study in order to estimate the spatial distribution of one or more pertinent hydrological parameters, and thus improve hydrological predictions. Recognizing the importance of this issue, the aim of the research presented in this thesis was first to develop a strategy for the assimilation of several types of hydrogeophysical data having varying degrees of resolution, subsurface coverage, and sensitivity to the hydrological parameter of interest. In this regard, a novel simulated annealing (SA)-based conditional simulation approach was developed and then tested for its ability to generate realizations of porosity given crosshole ground-penetrating radar (GPR) and neutron porosity log data. This was done successfully for both synthetic and field data sets. A subsequent issue that needed to be addressed involved assessing the potential benefits and implications of the resulting porosity realizations in terms of groundwater flow and contaminant transport. This was investigated synthetically, assuming first that the relationship between porosity and hydraulic conductivity was well defined. Then, the relationship was itself investigated in the context of a calibration procedure using hypothetical tracer test data. Essentially, the relationship best predicting the observed tracer test measurements was determined given the geophysically derived porosity structure. Both of these investigations showed that the SA-based approach, in general, allows much more reliable hydrological predictions than the other, more elementary techniques considered. Further, the developed calibration procedure was seen to be very effective, even at the scale of tomographic resolution, for predictions of transport. This also held true at locations within the aquifer where only geophysical data were available. This is significant because the acquisition of hydrological tracer test measurements is clearly more complicated and expensive than the acquisition of geophysical measurements. Although the above methodologies were tested using porosity logs and GPR data, the findings are expected to remain valid for a large number of pertinent combinations of geophysical and borehole log data of comparable resolution and sensitivity to the hydrological target parameter. Moreover, the results obtained give us confidence in future developments of integration methodologies for geophysical and hydrological data to improve the 3-D estimation of hydrological properties.
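Purely as an illustration of the generic simulated-annealing idea mentioned above (this is not the thesis' conditioning algorithm; the grid, the swap move, and the profile-matching objective are all hypothetical), a minimal sketch could look like this:

```python
# Minimal, generic simulated-annealing sketch: porosity values are swapped between
# grid cells and a swap is kept with the Metropolis rule, so that a simple misfit
# to a hypothetical conditioning profile decreases on average as the field "cools".
import numpy as np

rng = np.random.default_rng(0)
nz, nx = 40, 60                                    # grid size (illustrative)
field = rng.normal(0.25, 0.05, (nz, nx))           # initial porosity guess
target = np.linspace(0.30, 0.20, nz)               # hypothetical vertical mean-porosity profile

def misfit(f):
    # toy objective standing in for real conditioning data (logs, GPR attributes, ...)
    return np.sum((f.mean(axis=1) - target) ** 2)

T, cooling = 1e-4, 0.9995
current = misfit(field)
for _ in range(50_000):
    i1, j1, i2, j2 = rng.integers(nz), rng.integers(nx), rng.integers(nz), rng.integers(nx)
    field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]      # propose a swap
    proposed = misfit(field)
    if proposed < current or rng.random() < np.exp((current - proposed) / T):
        current = proposed                                           # accept
    else:
        field[i1, j1], field[i2, j2] = field[i2, j2], field[i1, j1]  # reject: undo the swap
    T *= cooling

print(f"final misfit: {current:.4f}")
```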
Abstract:
OBJECTIVE: The primary aim of the study was to evaluate whether rheumatoid arthritis (RA) patients considered to be in remission according to clinical criteria sets still had persisting ultrasound (US) synovitis. We further intended to evaluate the capacity of our US score to discriminate between patients with clinically active disease and those in remission. METHODS: This is an observational study nested within the Swiss Clinical Quality Management in Rheumatic Diseases (SCQM) rheumatoid arthritis cohort. A validated US score (the SONAR score), based on semi-quantitative B-mode and power Doppler (PwD) scoring, was used as part of the regular clinical workup by rheumatologists in different clinical settings. To define clinically relevant synovitis, the same score was applied to 38 healthy controls and the 90th percentile was used as the cut-off for 'relevant' synovitis. RESULTS: Three hundred and seven patients had at least one US examination and concomitant clinical information on disease activity. More than a third of patients in both DAS28 and ACR/EULAR remission showed significant gray-scale synovitis (P=0.01 and 0.0002, respectively) and PwD activity (P=0.005 and 0.0005, respectively) when compared to controls. The capacity of US to discriminate between the two clinical remission groups and patients with active disease was only moderate. CONCLUSION: This observational study confirms that many patients considered to be in clinical remission according to the DAS and the ACR/EULAR definitions still have residual synovitis on US. The prognostic significance of US synovitis and the exact place of US in patients reaching clinical remission need to be further evaluated.
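The cut-off construction described in the methods is simple enough to sketch; the numbers below are simulated and purely illustrative (they are not the SCQM data):

```python
# Illustrative sketch: a 'relevant synovitis' cut-off defined as the 90th percentile
# of a SONAR-type score in healthy controls (all values simulated, not study data).
import numpy as np

rng = np.random.default_rng(0)
control_scores = rng.poisson(2, size=38)        # hypothetical scores of 38 healthy controls
cutoff = np.percentile(control_scores, 90)      # 90th percentile defines 'relevant' synovitis

patient_scores = rng.poisson(4, size=307)       # hypothetical scores of 307 patients
share_above = (patient_scores > cutoff).mean()
print(f"cut-off = {cutoff:.1f}; patients above cut-off: {share_above:.0%}")
```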
Abstract:
The study of the exotic blocks of the Hawasina Nappes (Sultanate of Oman) yields apposite data that allow us to propose a new paleogeographic evolution of the Oman margin in time and space. A revised classification of the exotic blocks into different paleogeographical units is presented. Two newly introduced stratigraphic groups, the Ramaq Group (Ordovician to Triassic) and the Al Buda'ah Group (upper Permian to Jurassic), are interpreted as tilted blocks related to the Oman continental margin. The Kawr Group (middle Triassic to Cretaceous) is redefined and interpreted as an atoll-type seamount. The paleogeography and paleoenvironments of these units are integrated into a new scheme of the Neotethyan rifting history. Breccias and olistoliths of the Hawasina series are interpreted to have originated from tectonic movements affecting the Oman margin and the Neotethyan ocean floor. The breccias of late Permian age were generated by the extensional processes affecting the margin and by the creation of the Neotethyan oceanic floor. The breccias of mid-late Triassic age coincide in time with the collision of the Cimmerian continents with Eurasia. In contrast, the breccias of late Jurassic and Cretaceous age are interpreted as resulting from the creation of new oceanic crust (Semail) off the Oman margin.
Abstract:
An enormous burst of interest in the public health burden from chronic disease in Africa has emerged as a consequence of efforts to estimate global population health. Detailed estimates are now published for Africa as a whole and for each country on the continent. These data have formed the basis for warnings about sharp increases in cardiovascular disease (CVD) in the coming decades. In this essay we briefly examine the trajectory of social development on the continent and its consequences for the epidemiology of CVD and potential control strategies. Since full vital registration has only been implemented in segments of South Africa and the island nations of Seychelles and Mauritius - formally part of WHO-AFRO - mortality data are extremely limited. Numerous sample surveys have been conducted, but they often lack standardization or objective measures of health status. Trend data are even less informative. However, using the best quality data available, age-standardized trends in CVD are downward, and in the case of stroke, sharply so. While acknowledging that the extremely limited available data cannot be used as the basis for inference to the continent, we raise the concern that general estimates based on imputation to fill in the missing mortality tables may be even more misleading. No immediate remedies to this problem can be identified; however, bilateral collaborative efforts to strengthen local educational institutions and governmental agencies rank as the highest priority for near-term development.
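For orientation, an 'age-standardized' rate of the kind referred to above is a weighted average of age-specific rates using a fixed standard population; the sketch below uses made-up numbers only:

```python
# Direct age standardisation with made-up rates and weights (illustration only).
import numpy as np

age_specific_rates = np.array([0.5, 2.0, 8.0, 30.0])      # e.g. CVD deaths per 1000, by age band (hypothetical)
standard_weights   = np.array([0.40, 0.30, 0.20, 0.10])   # shares of a fixed standard population (hypothetical)

asr = np.sum(age_specific_rates * standard_weights)       # age-standardized rate
print(f"age-standardized rate: {asr:.1f} per 1000")
```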
Abstract:
Deeply incised river networks are generally regarded as robust features that are not easily modified by erosion or tectonics. Although the reorganization of deeply incised drainage systems has been documented, its importance for the overall landscape evolution of mountain ranges and the factors that permit such reorganizations are poorly understood. To address this problem, we have explored the rapid drainage reorganization that affected the Cahabon River in Guatemala during the Quaternary. Sediment-provenance analysis, field mapping, and electrical resistivity tomography (ERT) imaging are used to reconstruct the geometry of the valley before the river was captured. Dating of the abandoned valley sediments by the Be-10/Al-26 burial method and geomagnetic polarity analysis allows us to determine the age of the capture events and then to quantify several processes, such as the rate of tectonic deformation of the paleovalley, the rate of propagation of post-capture drainage reversal, and the rate at which canyons that formed at the capture sites have propagated along the paleovalley. Transtensional faulting started 1 to 3 million years ago, produced ground tilting and ground faulting along the Cahabon River, and thus generated differential uplift rates of 0.3 ± 0.1 up to 0.7 ± 0.4 mm/yr along the river's course. The river responded to faulting by incising the areas of relative uplift and depositing a few tens of meters of sediment above the areas of relative subsidence. Then, the river experienced two captures and one avulsion between 700 ky and 100 ky ago. The captures breached high-standing ridges that separate the Cahabon River from its captors. Captures occurred at specific points where ridges are made permeable by fault damage zones and/or soluble rocks. Groundwater flow from the Cahabon River down to its captors likely increased the erosive power of the captors, thus promoting focused erosion of the ridges. Valley-fill formation and capture occurred in close temporal succession, suggesting a genetic link between the two. We suggest that the aquifers accumulated within the valley-fills, increased the head along the subterraneous system connecting the Cahabon River to its captors, and promoted their development. Upon capture, the breached valley experienced widespread drainage reversal toward the capture sites. We attribute the generalized reversal to the combined effects of groundwater sapping in the valley-fill, axial drainage obstruction by lateral fans, and tectonic tilting. Drainage reversal increased the size of the captured areas by a factor of 4 to 6. At the capture sites, 500 m deep canyons have been incised into the bedrock and are propagating upstream at a rate of 3 to 11 mm/yr while deepening at a rate of 0.7 to 1.5 mm/yr. At this rate, 1 to 2 million years will be necessary for headward erosion to completely erase the topographic expression of the paleovalley. It is concluded that the rapid reorganization of this drainage system was made possible by the way the river adjusted to the new tectonic strain field, which involved transient sedimentation along the river's course. If the river had escaped its early reorganization and had been given the time necessary to reach a new dynamic equilibrium, then the transient conditions that promoted capture would have vanished and its vulnerability to capture would have been strongly reduced.
Abstract:
In many bird populations, individuals display one of several genetically inherited colour morphs. Colour polymorphism can be maintained by several mechanisms, one of which is frequency-dependent selection with colour morphs signalling alternative mating strategies. One morph may be dominant and territorial, while another adopts a sneaky behaviour to gain access to fertile females. We tested this hypothesis in the barn owl Tyto alba, in which coloration varies from reddish-brown to white. This trait is heritable and is sensitive neither to the environment in which individuals live nor to body condition. In Switzerland, reddish-brown males were observed to feed their brood at a higher rate and to produce more offspring than white males. This observation led us to hypothesize that white males may equalise fitness by investing more effort in extra-pair copulations. This hypothesis predicts that lighter-coloured males produce more extra-pair young, have larger testes and higher levels of circulating testosterone. However, our results are not consistent with these three predictions. First, paternity analyses of 54 broods with a total of 211 offspring revealed that only one young was not sired by the male that was feeding it. Second, testes size was not correlated with male plumage coloration, suggesting that white males are not sexually more active. Finally, in nestlings, testosterone levels at the time of feather growth were not related to plumage coloration, suggesting that this androgen is not required for the expression of this plumage trait. Our study therefore indicates that in the barn owl colour polymorphism plays no role in the probability of producing extra-pair young.
Abstract:
In this study, a quantitative approach was used to investigate the role of D142, which belongs to the highly conserved E/DRY sequence, in the activation process of the alpha1B-adrenergic receptor (alpha1B-AR). Experimental and computer-simulated mutagenesis were performed by substituting all possible natural amino acids at the D142 site. The resulting congeneric set of proteins together with the finding that all the receptor mutants show various levels of constitutive (agonist-independent) activity enabled us to quantitatively analyze the relationships between structural/dynamic features and the extent of constitutive activity. Our results suggest that the hydrophobic/hydrophilic character of D142, which could be regulated by protonation/deprotonation of this residue, is an important modulator of the transition between the inactive (R) and active (R*) state of the alpha1B-AR. Our study represents an example of quantitative structure-activity relationship analysis of the activation process of a G protein-coupled receptor.
Contribution of the gap junction proteins Connexin40 and Connexin43 to the control of blood pressure
Abstract:
Cells in tissues and organs coordinate their activities by communicating with each other through intercellular channels named gap junctions. These channels are conduits between the cytoplasmic compartments of adjacent cells, allowing the exchange of small molecules that may be crucial for hormone secretion. Renin is normally secreted in a regulated manner by specific cells of the juxtaglomerular apparatus located within the renal cortex. Gap junctional communication may be required to maintain the accurate, coordinated functioning of renin-producing cells, especially as renin is of paramount importance for the control of blood pressure. Connexin43 (Cx43) and Cx40 form gap junctions that link the cells of the juxtaglomerular apparatus in vivo. Cx43 links the endothelial cells, whereas gap junctions made of Cx40 connect the endothelial cells, the renin-secreting cells, as well as the endothelial cells to the renin-secreting cells of the afferent arteriole. The observation that loss of Cx40 results in chronic hypertension associated with altered vasomotion and signal conduction along arterioles has led us to suggest that connexins may contribute to the control of blood pressure by participating in the integration of the various mechanical, osmotic and electrochemical stimuli involved in the control of renin secretion, and by mediating the adaptive changes of the vascular wall induced by elevated blood pressure and mechanical stress. We therefore postulated that the absence of Cx40 could have deleterious effects on the coordinated functioning of the renin-containing cells, hence accounting for hypertension. In the first part of my thesis, we reported that Cx40-deficient (Cx40-/-) mice are hypertensive due to increased plasma renin levels and increased numbers of renin-producing cells. In addition, we demonstrated that prostaglandins and nitric oxide, which are possible mediators in the regulation of renin secretion by the macula densa, play a critical role in the mechanisms controlling blood pressure in Cx40 knockout hypertensive mice. In view of previous studies that reported a vessel-specific increase in the expression of Cx43 during renin-dependent hypertension, we hypothesized that Cx43 channels are particularly well matched to integrate the response of the cells constituting the vascular wall to hypertensive conditions. Using transgenic mice in which Cx43 was replaced by Cx32, we showed that this replacement is associated with decreased expression and secretion of renin and prevents the renin-dependent hypertension that is normally induced in the 2K1C model. To gain insight into the regulation of connexins in two separate tissues exposed to the same fluid pressure, the second part of my thesis was dedicated to studying the impact of chronic hypertension and related hypertrophy on the expression of the cardiovascular connexins (Cx40, Cx37, Cx43 and Cx45) in the mouse aorta and heart. Our results documented that the expression of connexins is differentially regulated in the mouse aorta according to the models of hypertension. Thus, blood pressure induces mechanical forces that differentially alter the expression of vascular connexins as part of the adaptation of the aortic wall observed under pathological conditions. Altogether, these data provide the first evidence that intercellular communication mediated by gap junctions is required for proper renin secretion from the juxtaglomerular apparatus in order to control blood pressure.
Abstract:
General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below. The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, a comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, a comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, for which we have no a priori expectations about the signs of these effects. Therefore, popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition effects).
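A generic form of this scale-technique-composition decomposition (shown here only for orientation; the chapter's actual specification, which works with emission intensities per unit of labor varying across countries and sectors, is richer) writes total emissions as

$$
E \;=\; \sum_i x_i\, e_i \;=\; X \sum_i s_i\, e_i, \qquad X=\sum_i x_i,\quad s_i=\frac{x_i}{X},
$$

so that, in growth rates,

$$
\hat{E} \;=\; \underbrace{\hat{X}}_{\text{scale}}
\;+\; \underbrace{\sum_i \theta_i\, \hat{s}_i}_{\text{composition}}
\;+\; \underbrace{\sum_i \theta_i\, \hat{e}_i}_{\text{technique}},
\qquad \theta_i = \frac{s_i e_i}{\sum_j s_j e_j},
$$

where $x_i$ is the output of sector $i$, $e_i$ its emission intensity, $X$ total output, $s_i$ the output share of sector $i$, and $\theta_i$ its share in total emissions.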
We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (under the omission of price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour accordingly across sectors within each country (under the country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced by another 80% if emissions were to be minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Turning now to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to formally write present productivity as a function of past productivity and of other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it seems that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions.
First, it provides new estimates of the order of magnitude of the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
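The center-of-mass construction used in the last chapter can be sketched in a few lines; the cities and weights below are arbitrary placeholders (not the thesis data), and the projection back to the surface is simply the normalisation of the weighted 3-D mean.

```python
# Illustrative sketch of an economic center of gravity: cities are point masses on
# the unit sphere, weighted here by made-up GDP figures; the weighted 3-D mean is
# projected back onto the surface to obtain a latitude/longitude.
import numpy as np

cities = {                      # (latitude, longitude, economic weight), all illustrative
    "New York": (40.7, -74.0, 1.0),
    "London":   (51.5,  -0.1, 0.8),
    "Tokyo":    (35.7, 139.7, 1.2),
    "Shanghai": (31.2, 121.5, 0.9),
}

def to_xyz(lat, lon):
    lat, lon = np.radians(lat), np.radians(lon)
    return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

weights = np.array([w for _, _, w in cities.values()])
points = np.array([to_xyz(lat, lon) for lat, lon, _ in cities.values()])
center = np.average(points, axis=0, weights=weights)     # center of mass, inside the Earth

x, y, z = center / np.linalg.norm(center)                # project back to the surface
lat_c, lon_c = np.degrees(np.arcsin(z)), np.degrees(np.arctan2(y, x))
print(f"center of gravity near lat {lat_c:.1f}, lon {lon_c:.1f}")
```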
Abstract:
Knowledge of the roles of origin-related, environmental, sex, and age factors in host defence mechanisms is important to understand variation in parasite intensity. Because alternative components of parasite defence may be differently sensitive to various factors, they may not necessarily covary. Many components should therefore be considered to tackle the evolution of host-parasite interactions. In a population of barn owls (Tyto alba), we investigated the role of origin-related, environmental (i.e. year, season, nest of rearing, and body condition), sex, and age factors on 12 traits linked to immune responses [humoral immune responses towards sheep red blood cells (SRBC), human serum albumin (HSA) and tetanus toxoid (TT), and the T-cell-mediated immune response towards the mitogen phytohemagglutinin (PHA)], susceptibility to ectoparasites (number and fecundity of Carnus haemapterus, number of Ixodes ricinus), and disease symptoms (size of the bursa of Fabricius and spleen, proportion of proteins that are immunoglobulins, haematocrit and blood leucocyte concentration). Cross-fostering experiments allowed us to detect a heritable component of variation in only four out of nine immune and parasitic parameters (i.e. SRBC and HSA responses, haematocrit, and number of C. haemapterus). However, because nestlings were not always cross-fostered just after hatching, the finding that 44% of the immune and parasitic parameters were heritable is probably an overestimate. These experiments also showed that five out of these nine parameters were sensitive to the nest environment (i.e. SRBC and PHA responses, number of C. haemapterus, haematocrit and blood leucocyte concentration). Female nestlings were more infested by the blood-sucking fly C. haemapterus than their male nestmates, and their blood leucocyte concentration was lower. The effects of year, season, age (i.e. reflecting the degree of maturation of the immune system), brood size, position in the within-brood age hierarchy, and body mass differed strongly between the 12 parameters. Different components of host defence mechanisms are therefore not equally heritable or equally sensitive to environmental, sex, and age factors, potentially explaining why most of these components did not covary.
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As yet, however, the nature of this relationship has remained elusive because most of that work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters, and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. The ratio of the vertical to the lateral correlation length, however, remains roughly constant across all of these possible combinations, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
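As a concrete example of the second-order statistic in question, the sketch below computes the 2-D autocorrelation of an image with FFTs (the Wiener-Khinchin relation); the random image is only a stand-in for a processed, depth-migrated section.

```python
# Illustrative sketch: 2-D autocorrelation of a (synthetic) migrated reflection image.
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((256, 256))       # stand-in for a depth-migrated section

img = image - image.mean()
power = np.abs(np.fft.fft2(img)) ** 2         # power spectrum
acorr = np.real(np.fft.ifft2(power))          # circular autocorrelation (Wiener-Khinchin)
acorr = np.fft.fftshift(acorr) / acorr.max()  # zero lag in the centre, normalised to 1

# Vertical and lateral correlation lengths can be read off the decay of acorr along
# the two lag axes; it is their ratio (the aspect ratio) that the inversion recovers.
print("zero-lag value:", acorr[128, 128])
```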
Abstract:
Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. We explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births. The Swiss fiscal system offers sufficient intra-national variation in tax codes to allow us to estimate such effects with considerable precision. We find that high average taxes and complicated tax codes depress firm birth rates, while tax progressivity per se promotes firm births. The latter result supports the existence of an insurance effect from progressive corporate income taxes for risk averse entrepreneurs. However, implied elasticities with respect to the level and complexity of corporate taxes are an order of magnitude larger than elasticities with respect to the progressivity of tax schedules.
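A hedged sketch of the kind of count regression such estimates could come from (this is not the paper's actual specification; all variables and data below are simulated for illustration) is shown here:

```python
# Illustrative only: firm-birth counts regressed on tax level, progressivity and
# complexity with a Poisson GLM; coefficients act as semi-elasticities.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                             # hypothetical municipalities
tax_level     = rng.normal(0.20, 0.05, n)
progressivity = rng.normal(0.10, 0.03, n)
complexity    = rng.normal(0.00, 1.00, n)
mu = np.exp(3.0 - 4.0 * tax_level + 1.0 * progressivity - 0.3 * complexity)
firm_births = rng.poisson(mu)                       # simulated counts

X = sm.add_constant(np.column_stack([tax_level, progressivity, complexity]))
fit = sm.GLM(firm_births, X, family=sm.families.Poisson()).fit()
print(fit.params)                                   # estimated coefficients
```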
Abstract:
Here we summarize five articles that bring new advances in our knowledge of neuropathic pain and put them into perspective with our current understanding. The first uses a mechanism-based approach with a capsaicin test to stratify patients suffering from painful diabetic neuropathy before starting a topical clonidine treatment. The second reviews disinhibition as a critical mechanism and a promising target for chronic pain. The third evokes neuroglial interactions and their implications for the interplay between injuries in childhood and hypersensitivity in adulthood. The last articles remind us that interventional therapies, which are not always very invasive, have future potential in the therapy of frequent conditions such as head pain disorders.