9 results for Historically Black Colleges and Universities

in AMS Tesi di Dottorato - Alm@DL - Università di Bologna


Relevance: 100.00%

Abstract:

Thanks to the Chandra and XMM–Newton surveys, the hard X-ray sky is now probed down to a flux limit where the bulk of the X-ray background is almost completely resolved into discrete sources, at least in the 2–8 keV band. Extensive programs of multiwavelength follow-up observations showed that the large majority of hard X-ray selected sources are identified with Active Galactic Nuclei (AGN) spanning a broad range of redshifts, luminosities and optical properties. A sizable fraction of relatively luminous X-ray sources hosting an active, presumably obscured, nucleus would not have been easily recognized as such on the basis of optical observations, because they are characterized by “peculiar” optical properties. In my PhD thesis, I focus on the nature of two classes of hard X-ray selected “elusive” sources: those characterized by high X-ray-to-optical flux ratios and red optical-to-near-infrared colors, a fraction of which are associated with Type 2 quasars, and the X-ray bright optically normal galaxies, also known as XBONGs. In order to characterize the properties of these classes of elusive AGN, the datasets of several deep and large-area surveys have been fully exploited. The first class of “elusive” sources is characterized by X-ray-to-optical flux ratios (X/O) significantly higher than those generally observed for unobscured quasars and Seyfert galaxies. The properties of well-defined samples of high-X/O sources detected at bright X-ray fluxes suggest that X/O selection is highly efficient in sampling high-redshift obscured quasars. At the limits of deep Chandra surveys (∼10^(−16) erg cm^(−2) s^(−1)), high-X/O sources are generally characterized by extremely faint optical magnitudes, hence their spectroscopic identification is hardly feasible even with the largest telescopes. In this framework, a detailed investigation of their X-ray properties may provide useful information on the nature of this important component of the X-ray source population.
The X-ray data of the deepest X-ray observations ever performed, the Chandra deep fields, allow us to characterize the average X-ray properties of the high-X/O population. The results of spectral analysis clearly indicate that the high-X/O sources represent the most obscured component of the X-ray background. Their spectra are harder (Γ ∼ 1) than those of any other class of sources in the deep fields, and also harder than the XRB spectrum (Γ ≈ 1.4). In order to better understand AGN physics and evolution, a much better knowledge of the redshift, luminosity and spectral energy distributions (SEDs) of elusive AGN is of paramount importance. The recent COSMOS survey provides the necessary multiwavelength database to characterize the SEDs of a statistically robust sample of obscured sources. The combination of high X/O and red colors offers a powerful tool to select obscured luminous objects at high redshift. A large sample of X-ray emitting extremely red objects (R−K > 5) has been collected and their optical-infrared properties have been studied. In particular, using an appropriate SED-fitting procedure, the nuclear and host-galaxy components have been deconvolved over a large range of wavelengths, and optical nuclear extinctions, black hole masses and Eddington ratios have been estimated. It is important to remark that the combination of hard X-ray selection and extremely red colors is highly efficient in picking up highly obscured, luminous sources at high redshift. Although XBONGs do not represent a new source population, interest in the nature of these sources has gained renewed attention after the discovery of several examples in recent Chandra and XMM–Newton surveys. Even though several possibilities have been proposed in the recent literature to explain why a relatively luminous (L_X = 10^(42)–10^(43) erg s^(−1)) hard X-ray source does not leave any significant signature of its presence in terms of optical emission lines, the very nature of XBONGs is still a subject of debate.
Good-quality near-infrared photometric data (ISAAC/VLT) of 4 low-redshift XBONGs from the HELLAS2XMM survey have been used to search for the presence of the putative nucleus, applying the surface-brightness decomposition technique. In two out of the four sources, the presence of a weak nuclear component hosted by a bright galaxy has been revealed. The results indicate that moderate amounts of gas and dust, covering a large solid angle (possibly 4π) at the nuclear source, may explain the lack of optical emission lines. A weak nucleus not able to produce sufficient UV photons may provide an alternative or additional explanation. On the basis of an admittedly small sample, we conclude that XBONGs constitute a mixed bag rather than a new source population. When the presence of a nucleus is revealed, it turns out to be mildly absorbed and hosted by a bright galaxy.
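The X/O diagnostic used throughout this abstract can be sketched numerically. A minimal example, assuming the common convention X/O = log10(f_X) + R/2.5 + 5.5 for an X-ray flux in erg cm^(−2) s^(−1) and an R-band magnitude (the additive constant depends on the adopted band zero point, so the numbers are illustrative only):

```python
import math

def x_to_o(fx_cgs, r_mag):
    """X-ray-to-optical flux ratio log10(f_X / f_R).

    Uses the common convention log10(f_X) + R/2.5 + 5.5, where f_X is
    the X-ray flux in erg cm^-2 s^-1 and R the optical magnitude; the
    constant follows from the R-band zero point and varies slightly
    between surveys, so this is a sketch rather than a survey recipe.
    """
    return math.log10(fx_cgs) + r_mag / 2.5 + 5.5

# At a fixed X-ray flux, an optically bright counterpart sits in the
# "normal AGN" band (-1 < X/O < 1), while an optically faint one is
# pushed above the X/O > 1 "elusive" threshold.
typical = x_to_o(1e-14, 19.0)   # ≈ -0.9, ordinary AGN
elusive = x_to_o(1e-14, 26.0)   # ≈ 1.9, high-X/O candidate
```

This makes the selection effect explicit: at the deep-survey flux limit, pushing X/O above 1 requires optical magnitudes too faint for spectroscopy, exactly the regime the thesis addresses with X-ray spectral stacking instead.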

Relevance: 100.00%

Abstract:

In this Thesis, we investigate the cosmological co-evolution of supermassive black holes (BHs), Active Galactic Nuclei (AGN) and their host dark matter (DM) halos and galaxies, within the standard ΛCDM scenario. We employ analytic, semi-analytic and hybrid techniques and use the most recent observational data available to constrain the assumptions underlying our models. First, we focus on very simple analytic models in which the assembly of BHs is directly related to the merger history of DM haloes. For this purpose, we implement the two original analytic models of Wyithe & Loeb (2002, 2003), compare their predictions to the AGN luminosity function and clustering data, and discuss possible modifications to the models that improve the match to the observations. We then study more sophisticated semi-analytic models in which, however, baryonic physics is still neglected. Finally, we improve the hybrid simulation of De Lucia & Blaizot (2007), adding new semi-analytical prescriptions to describe the BH mass accretion rate during each merger event and its conversion into radiation, and compare the derived BH scaling relations, fundamental plane and mass function, and the AGN luminosity function with observations. All our results support the following scenario:
• The cosmological co-evolution of BHs, AGN and galaxies can be well described within the ΛCDM model.
• At redshifts z ≳ 1, the evolution history of DM haloes fully determines the overall properties of the BH and AGN populations. AGN emission is triggered mainly by DM halo major mergers and, on average, AGN shine at their Eddington luminosity.
• At redshifts z ≲ 1, BH growth decouples from halo growth. Galaxy major mergers cannot constitute the only trigger of accretion episodes in this phase.
• When a static hot halo has formed around a galaxy, a fraction of the hot gas continuously accretes onto the central BH, causing low-energy “radio” activity at the galactic centre, which prevents significant gas cooling, thus limiting the mass of the central galaxies and quenching star formation at late times.
• The cold gas fraction accreted by BHs at high redshift seems to be larger than at low redshift.
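The statement that AGN shine, on average, at their Eddington luminosity can be made concrete with the standard Eddington relation L_Edd ≈ 1.26×10^38 (M_BH/M_sun) erg/s. A minimal sketch (the numbers in the usage comments are illustrative, not values from the thesis):

```python
L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass (electron-scattering limit)

def eddington_luminosity(m_bh_msun):
    """Eddington luminosity (erg/s) for a black hole of given mass in solar masses."""
    return L_EDD_PER_MSUN * m_bh_msun

def eddington_ratio(l_bol, m_bh_msun):
    """Eddington ratio lambda = L_bol / L_Edd, the key quantity the
    models track across cosmic time."""
    return l_bol / eddington_luminosity(m_bh_msun)

# A 10^8 Msun BH radiating at 1.26e46 erg/s sits exactly at lambda = 1,
# the regime the models favour for merger-triggered AGN at z > 1.
lam = eddington_ratio(1.26e46, 1e8)
```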

Relevance: 100.00%

Abstract:

The study of supermassive black hole (SMBH) accretion during the active phase (when they shine as active galactic nuclei, AGN), and of its relation to host-galaxy growth, requires large datasets of AGN, including a significant fraction of obscured sources. X-ray data are strategic for AGN selection, because at X-ray energies the contamination from non-active galaxies is far less significant than in optical/infrared surveys, and the selection of obscured AGN, including a fraction of heavily obscured AGN, is much more effective. In this thesis, I present the results of the Chandra COSMOS Legacy survey, a 4.6 Ms X-ray survey covering the equatorial COSMOS area. The COSMOS Legacy depth (flux limit f = 2×10^(-16) erg cm^(-2) s^(-1) in the 0.5-2 keV band) is significantly better than that of other X-ray surveys of similar area, and paves the way for surveys with future facilities, like Athena and X-ray Surveyor. The final Chandra COSMOS Legacy catalog contains 4016 point-like sources, 97% of which have a redshift; 65% of the sources are optically obscured and potentially caught in the phase of main BH growth. We used the sample of 174 Chandra COSMOS Legacy sources at z>3 to place constraints on the BH formation scenario. We found a significant disagreement between our space density and the predictions of a physical model of AGN activation through major mergers. This suggests that in our luminosity range BH triggering through secular accretion is likely preferred to a major-merger triggering scenario. Thanks to its large statistics, the Chandra COSMOS Legacy dataset, combined with the other multiwavelength COSMOS catalogs, will be used to answer questions on a large number of astrophysical topics, with particular focus on SMBH accretion in different luminosity and redshift regimes.
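To see what the quoted flux limit means physically, an observed flux can be converted into a rest-frame luminosity via the luminosity distance. A minimal pure-Python sketch, assuming a flat ΛCDM cosmology (H0 = 70 km/s/Mpc, Ωm = 0.3, parameters of my choosing) and neglecting the k-correction:

```python
import math

C_KM_S = 299792.458      # speed of light [km/s]
CM_PER_MPC = 3.0857e24   # centimetres per megaparsec

def lum_distance_cm(z, h0=70.0, om=0.3, n=10000):
    """Luminosity distance in cm for a flat LCDM cosmology.

    Trapezoidal integration of the comoving distance
    D_C = (c/H0) * integral of dz'/E(z'), then D_L = (1+z) * D_C.
    """
    ol = 1.0 - om
    def inv_e(zp):
        return 1.0 / math.sqrt(om * (1.0 + zp) ** 3 + ol)
    dz = z / n
    integral = 0.5 * (inv_e(0.0) + inv_e(z))
    for i in range(1, n):
        integral += inv_e(i * dz)
    integral *= dz
    d_c_mpc = (C_KM_S / h0) * integral
    return (1.0 + z) * d_c_mpc * CM_PER_MPC

def x_ray_luminosity(flux_cgs, z):
    """Rest-frame luminosity (erg/s) from an observed flux, L = 4*pi*D_L^2*f,
    ignoring the k-correction for simplicity."""
    d_l = lum_distance_cm(z)
    return 4.0 * math.pi * d_l ** 2 * flux_cgs
```

At z = 3 this puts the 0.5-2 keV flux limit of 2×10^(-16) at a luminosity of order 10^43 erg/s, i.e. the survey reaches moderate-luminosity AGN even at the redshifts probed by the z>3 sample.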

Relevance: 100.00%

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E > 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources with those of higher luminosity and, thus, was also used to test emission-mechanism models; finally, the XMM–Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample of objects with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (~2–100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron-line equivalent width significantly differ between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM–Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration.
These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured using, for the first time, the 20-100 keV luminosity (EW ∝ L_(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, thereby suggesting a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, naturally emerges by supposing that the accretion disk penetrates the central corona at different depths depending on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks extending down to the last stable orbit, while the lower-accreting systems host truncated disks. On the other hand, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38) and 10^(43) erg s^(−1), i.e. covering a huge range of accretion rates. The least efficient systems have been supposed to host ADAFs without an accretion disk. However, the study of the X-CfA sample has also proved the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered. Thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Γ vs. R relation).
The present data require that no abrupt transition between the two regimes is present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to supermassive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM–Newton. The results show that the accretion flow can differ significantly between objects when it is analyzed in the appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system in IRAS 13197-1627, where strong light-bending effects have been measured. The accretion disk seems to form spiraling within the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recurrent modulations have been measured both in the continuum emission and in the broad emission-line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad emission-line component. Moreover, blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around supermassive black holes, there is matter which is not confined in the accretion disk and moves along the line of sight with velocities as large as v ~ 0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, of the formation of ejecta/jets, and of the rate of kinetic energy injected by AGNs into the ISM and IGM.
Future high energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest velocity outflows.
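The X-ray Baldwin effect measured above (EW ∝ L_(20-100)^(−0.22±0.05)) implies a definite scaling of iron-line strength with luminosity. A small sketch, with the slope taken from the quoted fit and purely illustrative reference values:

```python
def baldwin_ew(ew_ref, l_ref, l, slope=-0.22):
    """Scale an iron-line equivalent width with 20-100 keV luminosity
    assuming EW proportional to L**slope (slope from the fit quoted in
    the text; the reference EW and luminosities are illustrative)."""
    return ew_ref * (l / l_ref) ** slope

# One decade in luminosity suppresses the EW by a factor 10**(-0.22),
# i.e. about 0.60: brighter type I sources show weaker narrow FeK lines,
# consistent with a luminosity-dependent torus covering factor.
suppression = baldwin_ew(1.0, 1e43, 1e44)
```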

Relevance: 100.00%

Abstract:

In recent years, an ever-increasing degree of automation has been observed in most industrial processes. This trend is motivated by the demand for systems with high performance in terms of quality of the products/services generated, productivity, efficiency and low costs in design, realization and maintenance. This growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations and one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda and buy boxed products such as food or cigarettes. Another indication of this complexity is that the consortium of machine producers has estimated that there are around 350 types of manufacturing machine. A large number of manufacturing-machine industries are present in Italy, notably the packaging-machine industry; a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the “packaging valley”. Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. Often this is the case in large-scale systems, organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and of the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties associated with it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and to the different operational scenarios; obtaining a high quality of the final product through verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; managing in real time information on diagnostics, as a support to the maintenance operations of the machine. The facilities that designers can directly find on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focussing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, by contrast, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power-electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very “unstructured”. No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different “dynamical framework” of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to enlighten the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has lately been receiving this approach, as testified by the IEC 61131-3 and IEC 61499 standards, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also deal with other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and the former are more vulnerable by their own nature. The diagnosis and fault-isolation problem in a generic dynamical system consists in the design of a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults in the plant devices and of reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and, eventually, reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault-tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. In Chapter 2, a survey on the state of the software engineering paradigm applied to industrial automation is discussed. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to obtain better reusability and modularity of the control logic. In Chapter 5, a new approach based on Discrete Event Systems is presented for the problem of formal software verification, together with an active fault-tolerant control architecture using online diagnostics. Finally, conclusive remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader in understanding some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approach presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
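In the Discrete Event Systems setting described above, a plant or controller is modelled as a finite automaton whose dynamics are event transitions rather than time-driven equations, and formal verification of a safety property reduces to a reachability query on the state graph. A toy sketch (the states and events are invented for illustration and are not the thesis models):

```python
from collections import deque

# Hypothetical discrete-event model of one working station:
# (source state, event) -> destination state.
transitions = {
    ("idle", "start"): "running",
    ("running", "finish"): "idle",
    ("running", "fault"): "alarm",
    ("alarm", "reset"): "idle",
}

def reachable_states(initial):
    """Breadth-first exploration of the set of states reachable from
    the initial state; a safety check then asks whether a forbidden
    configuration belongs to this set."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for (src, _event), dst in transitions.items():
            if src == state and dst not in seen:
                seen.add(dst)
                queue.append(dst)
    return seen
```

On this model, `reachable_states("idle")` shows that the "alarm" state is reachable via the "fault" event, which is exactly the kind of query an active fault-tolerant supervisor must answer before deciding how to reconfigure.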

Relevance: 100.00%

Abstract:

Visual search and oculomotor behaviour are believed to be very relevant to athlete performance, especially in sports requiring refined visuo-motor coordination skills. Modern coaches believe that a correct visuo-motor strategy may be part of advanced training programs. In this thesis, two experiments are reported in which the gaze behaviour of expert and novice athletes was investigated while they performed a real sport-specific task. The experiments concern two different sports: judo and soccer. In each experiment, the number of fixations, fixation locations and mean fixation duration (ms) were considered. An observational analysis was carried out at the end to examine perceptual differences between near and far space. Purpose: The aim of the judo study was to delineate differences in gaze-behaviour characteristics between a population of athletes and one of non-athletes. Aspects specifically investigated were: search rate, search order and viewing time across different conditions in a real-world task. The second study was aimed at identifying gaze behaviour in varsity soccer goalkeepers while facing a penalty kick executed with the instep and the inside of the foot. An attempt was then made to compare the gaze strategies of expert judoka and soccer goalkeepers in order to delineate possible differences related to the different conditions of reacting to events occurring in near (peripersonal) or far (extrapersonal) space. Judo methods: A sample of 9 judoka (black belt) and 11 near-expert judoka (white belt) was studied. Eye movements were recorded at 500 Hz using a video-based eye tracker (EyeLink II). Each subject participated in 40 sessions of about 40 minutes. Gaze behaviour was quantified as the average number of locations fixated per trial, the average number of fixations per trial, and the mean fixation duration. Soccer methods: Seven (n = 7) intermediate-level males volunteered for the experiment. The kickers and goalkeepers had at least varsity-level soccer experience.
The vision-in-action (VIA) system (Vickers 1996; Vickers 2007) was used to collect the coupled gaze and motor behaviours of the goalkeepers. This system integrated input from a mobile eye-tracking system (Applied Sciences Laboratories) with an external video of the goalkeeper’s saving actions. The goalkeepers faced 30 penalty kicks on a synthetic pitch in accordance with the FIFA (2008) laws. Judo results: The results indicate that the expert group differed significantly from the near-expert group in fixation duration and number of fixations per trial. The expert judoka used a less exhaustive search strategy, involving fewer fixations of longer duration than their novice counterparts, and focused on central regions of the body. The results also showed that in defence and attack situations the expert group made a greater number of transitions than their novice counterparts. Soccer results: We found a significant main effect on the number of locations fixated across outcome (goal/save) but not across foot contact (instep/inside). Participants spent more time fixating the areas of interest in instep than in inside kicks, and in goal than in save situations. The means and standard errors of the search strategy as a function of foot contact and outcome indicate that most gaze sequences start and finish on the ball areas of interest. Conclusions: Expert goalkeepers tend to spend more time on inside-save than instep-save penalties, a difference that was reversed in scored penalty kicks. The judo results show that differences in visual behaviour related to the level of expertise appear mainly when the test presentation is continuous, lasts for a relatively long period of time, and presents a high level of uncertainty with regard to the chronology and nature of events. Expert judo performers “anchor” the fovea on central regions of the scene (lapel and face) while using peripheral vision to monitor the opponents’ limb movements.
The differences between judo and soccer gaze strategies are discussed in the light of the physiological and neuropsychological differences between near- and far-space perception.
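The gaze metrics used in both experiments (number of fixations per trial, number of distinct locations fixated, mean fixation duration) are simple aggregates of the raw fixation stream. A minimal sketch, with made-up sample data standing in for eye-tracker output:

```python
def fixation_stats(fixations):
    """Summarize one trial's fixations, given (location, duration_ms) pairs.

    Returns (number of fixations, number of distinct locations fixated,
    mean fixation duration in ms) -- the three measures reported per
    trial in the text.
    """
    n = len(fixations)
    locations = {loc for loc, _ in fixations}
    mean_ms = sum(d for _, d in fixations) / n if n else 0.0
    return n, len(locations), mean_ms

# Hypothetical trial: an expert-like pattern of few, long fixations
# anchored on central regions (face, lapel).
trial = [("face", 420), ("lapel", 380), ("face", 510), ("hands", 240)]
# → 4 fixations over 3 distinct locations, mean duration 387.5 ms
```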

Relevance: 100.00%

Abstract:

The analysis of apatite fission tracks is applied to the study of the syn- and post-collisional thermochronological evolution of a vast area that includes the Eastern Pontides, their continuation in the Lesser Caucasus of Georgia (Adjara-Trialeti zone) and northern Armenia, and the eastern Anatolian Plateau. The resulting database is then integrated with the data presented by Okay et al. (2010) for the Bitlis-Pütürge Massif, i.e. the western portion of the Bitlis-Zagros collision zone between Arabia and Eurasia. The mid-Miocene exhumation episode along the Black Sea coast and in the Lesser Caucasus of Armenia documented in this dissertation mirrors the age of collision between the Eurasian and Arabian plates along the Bitlis suture zone. We argue that tectonic stresses generated along the Bitlis collision zone were transmitted northward across eastern Anatolia and focused (i) at the rheological boundary between the Anatolian continental lithosphere and the (quasi-)oceanic lithosphere of the Black Sea, and (ii) along major pre-existing discontinuities such as the Sevan-Akera suture zone. The integration of present-day crustal dynamics (GPS-derived kinematics and the distribution of seismicity) and the thermochronological data presented here provides a comparison between short- and long-term deformation patterns for the entire eastern Anatolia-Transcaucasian region. Two successive stages of Neogene deformation of the northern foreland of the Arabia-Eurasia collision zone can be inferred. (i) Early and Middle Miocene: continental deformation was concentrated along the Arabia-Eurasia (Bitlis) collision zone, but tectonic stress was also transferred northward across eastern Anatolia, focusing along the eastern Black Sea continent-ocean rheological transition and along major pre-existing structural discontinuities.
(ii) Since Late-Middle Miocene time the westward translation of Anatolia and the activation of the North and Eastern Anatolian Fault systems have reduced efficient northward stress transfer.

Relevance: 100.00%

Abstract:

In the framework of micro-CHP (Combined Heat and Power) energy systems and the Distributed Generation (DG) concept, an Integrated Energy System (IES) was conceived and built, able to meet the energy and thermal requirements of specific users by using different types of fuel to feed several micro-CHP energy sources, with the integration of electric generators based on renewable energy sources (RES), electrical and thermal storage systems, and a control system. A 5 kWel Polymer Electrolyte Membrane Fuel Cell (PEMFC) has been studied. Using experimental data obtained from various measurement campaigns, the electrical and CHP performance of the PEMFC system has been determined. The effect of the water management of the anodic exhaust at variable FC loads has been analyzed, and the programming logic of the purge process was optimized, leading also to the determination of the optimal flooding times as the AC power delivered by the cell varies. Furthermore, the degradation mechanisms of the PEMFC system, in particular those due to flooding of the anodic side, have been assessed using an algorithm that treats the FC as a black box and is able to determine the amount of unreacted H2 and, therefore, the causes that produce it. Using experimental data covering a two-year time span, the ageing suffered by the FC system has been assessed and analyzed.
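The black-box estimate of unreacted hydrogen rests on Faraday's law: the stack current fixes the stoichiometric H2 consumption (N_cells·I/2F), and any excess in the measured feed is hydrogen lost unreacted, e.g. through anodic purges. A sketch with hypothetical stack parameters (cell count, current and feed rate are illustrative, not the thesis values):

```python
F = 96485.0  # Faraday constant [C/mol]

def h2_consumed_mol_s(current_a, n_cells):
    """Stoichiometric hydrogen consumption rate (mol/s) from Faraday's
    law: each H2 molecule supplies two electrons per cell."""
    return n_cells * current_a / (2.0 * F)

def h2_unreacted_fraction(supplied_mol_s, current_a, n_cells):
    """Black-box estimate of the unreacted hydrogen fraction: compare
    the measured feed with the consumption implied by the current."""
    consumed = h2_consumed_mol_s(current_a, n_cells)
    return (supplied_mol_s - consumed) / supplied_mol_s

# A 40-cell stack at 100 A consumes about 0.021 mol/s of H2; feeding
# 20% more than stoichiometric leaves 1/6 of the supply unreacted.
consumed = h2_consumed_mol_s(100.0, 40)
unreacted = h2_unreacted_fraction(1.2 * consumed, 100.0, 40)
```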

Relevance: 100.00%

Abstract:

This dissertation explores how diseases contributed to shaping historical institutions and how health and diseases still affect modern comparative development. The overarching goal of this investigation is to identify the channels linking geographic suitability to diseases and the emergence of historical and modern institutions, while tackling the endogeneity problems that traditionally undermine this literature. I attempt to do so by taking advantage of the vast amount of newly available historical data and of the richness of data accessible through geographic information systems (GIS). The first chapter of my thesis, 'Side Effects of Immunities: The African Slave Trade', proposes and tests a novel explanation for the origins of slavery in the tropical regions of the Americas. I argue that Africans were especially attractive for employment in tropical areas because they were immune to many of the diseases that were ravaging those regions. In particular, Africans' resistance to malaria increased the profitability of slaves coming from the most malarial parts of Africa. In the second chapter, 'Caste Systems and Technology in Pre-Modern Societies', I advance and test the hypothesis that caste systems, generally viewed as a hindrance to social mobility and development, were comparatively advantageous at an early stage of economic development. In the third chapter, 'Malaria as a Determinant of Modern Ethnolinguistic Diversity', I conjecture that in highly malarious areas the necessity to adapt and develop immunities specific to the local disease environment historically reduced mobility and increased isolation, thus leading to the formation of a higher number of distinct ethnolinguistic groups. In the final chapter, 'Malaria Risk and Civil Violence: A Disaggregated Analysis for Africa', I explore the relationship between malaria and violent conflict.
Using georeferenced data for Africa, the analysis shows that violent events are more frequent in areas where malaria risk is higher.