867 results for Multi-scale modelling


Relevance:

30.00%

Abstract:

Graphene, a monolayer of carbon atoms arranged in a honeycomb lattice, has been isolated from graphite only recently. This material shows very attractive physical properties, such as superior carrier mobility, current-carrying capability and thermal conductivity. Consequently, graphene has been the subject of intense investigation as a promising candidate for nanometer-scale devices in electronic applications. In this work, graphene nanoribbons (GNRs), narrow strips of graphene in which a band gap is induced by the quantum confinement of carriers in the transverse direction, have been studied. As experimental GNR-FETs are still far from ideal, mainly due to their large width and edge roughness, an accurate description of the physical phenomena occurring in these devices is required to obtain reliable predictions about the performance of these novel structures. A code has been developed for this purpose and used to investigate the performance of 1- to 15-nm-wide GNR-FETs. Given the importance of an accurate description of quantum effects in the operation of graphene devices, a full-quantum transport model has been adopted: the electron dynamics is described by a tight-binding (TB) Hamiltonian and transport is solved within the formalism of the non-equilibrium Green's functions (NEGF). Both ballistic and dissipative transport are considered; the electron-phonon interaction is included in the self-consistent Born approximation. In view of their different energy band gaps, narrow GNRs are expected to be suitable for logic applications, while wider ones could be promising candidates as channel material for radio-frequency applications.
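As an illustration of the transport formalism named above (a toy sketch, not the thesis code), the ballistic NEGF transmission through a short one-dimensional tight-binding chain can be computed with the Caroli formula; the wide-band-limit contact self-energies and their broadening are assumptions made here for brevity:

```python
import numpy as np

def transmission(E, N=10, t=-1.0, eta=0.05):
    """Ballistic NEGF transmission through an N-site 1D tight-binding chain.

    Wide-band-limit contact self-energies (constant broadening eta) are an
    illustrative assumption, not the thesis's GNR contact model."""
    H = np.zeros((N, N))
    for i in range(N - 1):
        H[i, i + 1] = H[i + 1, i] = t            # nearest-neighbour hopping
    sigma_L = np.zeros((N, N), complex); sigma_L[0, 0] = -1j * eta
    sigma_R = np.zeros((N, N), complex); sigma_R[-1, -1] = -1j * eta
    G = np.linalg.inv(E * np.eye(N) - H - sigma_L - sigma_R)   # retarded GF
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)  # contact broadening matrices
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    # Caroli formula: T(E) = Tr[Gamma_L G Gamma_R G^dagger]
    return np.trace(gamma_L @ G @ gamma_R @ G.conj().T).real

t_mid = transmission(0.0)   # inside the band (|E| < 2|t|)
t_out = transmission(3.0)   # outside the band
```

Inside the band the transmission stays finite, while outside the band it decays rapidly with chain length; a dissipative calculation would add electron-phonon self-energies to the same Green's-function equation.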


This work is devoted to the physical analysis and modelling of the atmospheric boundary layer in stable conditions. The main objective is to improve the turbulence parameterization schemes currently used in large-scale meteorological models. These turbulence parameterizations express the Reynolds stresses as functions of the mean fields (horizontal velocity components and potential temperature) by means of closures. Most closures have been developed for quasi-neutral cases, and the difficulty lies in treating the effect of stability rigorously. Two different turbulence closure models for the stable boundary layer, based on different assumptions, are studied in detail: a TKE-l scheme (Mellor-Yamada, 1982), which is used in the forecast model BOLAM (Bologna Limited Area Model), and a scheme recently developed by Mauritsen et al. (2007). The closure assumptions of the two schemes are analyzed with experimental data from the Cabauw tower in the Netherlands and from the CIBA site in Spain. The turbulence parameterization schemes are then embedded in a single-column model of the atmospheric boundary layer, in order to test their predictions free of external influences. The comparison between the different schemes is carried out on a case that is well documented in the literature, "GABLS1". To confirm the validity of the predictions, a three-dimensional dataset is created by simulating the same GABLS1 case with a Large Eddy Simulation; ARPS (Advanced Regional Prediction System) was used for this purpose. The stable stratification constrains the grid spacing, since the LES must run at a resolution high enough for the typical vertical scales of motion to be correctly resolved. The comparison of this three-dimensional dataset with the predictions of the turbulence schemes makes it possible to propose a set of new closures to improve the BOLAM turbulence model. The work was carried out at ISAC-CNR in Bologna and at LEGI in Grenoble.
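The TKE-l closure family mentioned above diagnoses an eddy viscosity from the turbulent kinetic energy and a master length scale, damped by stability. A minimal sketch with an assumed Blackadar-type length scale and a simple linear Richardson-number damping (illustrative choices, not the BOLAM closure):

```python
import math

KAPPA, L0 = 0.4, 100.0   # von Karman constant; asymptotic length (assumed)

def mixing_length(z):
    """Blackadar-type master length scale: l -> kappa*z near the ground,
    l -> L0 aloft, as used in TKE-l type schemes."""
    return KAPPA * z / (1.0 + KAPPA * z / L0)

def eddy_viscosity(z, tke, ri, c_m=0.55):
    """K_m = c_m * l * sqrt(TKE) * f(Ri).  The linear damping f(Ri) for
    stable stratification is an illustrative choice, not the actual
    Mellor-Yamada stability functions."""
    f = max(0.0, 1.0 - 5.0 * ri) if ri > 0.0 else 1.0
    return c_m * mixing_length(z) * math.sqrt(tke) * f
```

Increasing the Richardson number progressively shuts off the mixing, which is the behaviour the stable-boundary-layer closures above must represent rigorously.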


This thesis is devoted to the study of the properties of high-redshift galaxies in the epoch 1 < z < 3, when a substantial fraction of galaxy mass was assembled and the evolution of the star-formation rate density peaked. Following a multi-perspective approach and using the most recent, high-quality data available (spectra, photometry and imaging), the morphologies and the star-formation properties of high-redshift galaxies were investigated. Through an accurate morphological analysis, the build-up of the Hubble sequence was placed around z ~ 2.5. High-redshift galaxies appear, in general, much more irregular and asymmetric than local ones. Moreover, the occurrence of morphological k-correction is less pronounced than in the local Universe. Different star-formation rate indicators were also studied. The comparison of ultraviolet- and optical-based estimates with the values derived from the infrared luminosity showed that the traditional way of addressing dust obscuration is problematic at high redshifts, and new models of dust geometry and composition are required. Finally, by means of stacking techniques applied to rest-frame ultraviolet spectra of star-forming galaxies at z ~ 2, the warm phase of galactic-scale outflows was studied. Evidence was found of gas escaping at velocities of ~100 km/s. By studying the correlation of interstellar absorption-line equivalent widths with galaxy physical properties, the intensity of the outflow-related spectral features was shown to depend strongly on a combination of the velocity dispersion of the gas and its geometry.
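The stacking technique mentioned above boils down to resampling continuum-normalised spectra onto a common rest-frame grid and taking a median. A generic sketch with synthetic spectra sharing one absorption feature (wavelengths, line position and noise level are invented):

```python
import numpy as np

def stack_spectra(wave_grid, spectra):
    """Median-stack continuum-normalised spectra interpolated onto a common
    rest-frame wavelength grid (a generic sketch of the technique)."""
    resampled = []
    for wave, flux in spectra:
        norm = flux / np.median(flux)           # crude continuum normalisation
        resampled.append(np.interp(wave_grid, wave, norm))
    return np.median(np.array(resampled), axis=0)

# Toy example: three noisy spectra sharing one absorption line at 1260 A.
grid = np.linspace(1200.0, 1300.0, 101)
rng = np.random.default_rng(0)
specs = []
for _ in range(3):
    w = np.linspace(1200.0, 1300.0, 120)
    f = 1.0 + 0.05 * rng.standard_normal(w.size)
    f[np.abs(w - 1260.0) < 2.0] *= 0.5          # common absorption feature
    specs.append((w, f))
stacked = stack_spectra(grid, specs)
```

Features shared by the sample survive the median while uncorrelated noise is suppressed, which is what makes weak interstellar absorption lines measurable in the stack.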


This thesis investigates two distinct research topics. The main topic (Part I) is the computational modelling of cardiomyocytes derived from human stem cells, both embryonic (hESC-CM) and induced-pluripotent (hiPSC-CM). The aim of this research line is to develop models of the electrophysiology of hESC-CMs and hiPSC-CMs that integrate the available experimental data, and to obtain in-silico models that can be used to study, formulate new hypotheses about, and plan experiments on aspects not yet fully understood, such as the maturation process, the functionality of Ca2+ handling, and why hESC-CM/hiPSC-CM action potentials (APs) show some differences with respect to APs of adult cardiomyocytes. Chapter I.1 introduces the main concepts about hESC-CMs/hiPSC-CMs, the cardiac AP, and computational modelling. Chapter I.2 presents the hESC-CM AP model, able to simulate the maturation process through two developmental stages, Early and Late, based on experimental and literature data. Chapter I.3 describes the hiPSC-CM AP model, able to simulate the ventricular-like and atrial-like phenotypes. This model was used to assess which currents are responsible for the differences between the ventricular-like AP and the adult ventricular AP. The secondary topic (Part II) is the study of texture descriptors for biological image processing. Chapter II.1 provides an overview of important texture descriptors such as Local Binary Pattern and Local Phase Quantization; the non-binary coding and the multi-threshold approach are also introduced here. Chapter II.2 shows that the non-binary coding and the multi-threshold approach improve the classification performance on images of cellular and sub-cellular parts taken from six datasets. Chapter II.3 describes the case study of the classification of indirect immunofluorescence images of HEp-2 cells, used for the antinuclear antibody clinical test. Finally, the general conclusions are reported.
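The texture descriptors of Part II can be illustrated concretely. Below is a minimal sketch of the basic 8-neighbour Local Binary Pattern (radius 1, without the uniform-pattern mapping or the non-binary/multi-threshold extensions studied in the thesis):

```python
import numpy as np

def lbp_8_1(img):
    """Basic 8-neighbour Local Binary Pattern: threshold the 8 neighbours of
    each interior pixel at the centre value and pack the bits into a code."""
    img = np.asarray(img, dtype=float)
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    centre = img[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offs):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(int) << bit
    return codes

def lbp_histogram(img):
    """256-bin normalised LBP histogram, used as the texture feature vector."""
    codes = lbp_8_1(img)
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()
```

The normalised histogram is what a classifier actually consumes; the multi-threshold variants replace the single `>=` comparison with several tolerance bands.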


In the present work, a multi-physics simulation of an innovative safety system for light-water nuclear reactors is performed, with the aim of increasing the reliability of the main decay heat removal system. The system studied, denoted by the acronym PERSEO (in-Pool Energy Removal System for Emergency Operation), is able to remove the decay power from the primary side of a light-water nuclear reactor through a heat suppression pool. The experimental facility, located at the SIET laboratories (Piacenza), is an evolution of the Thermal Valve concept, in which the triggering valve is installed on the liquid side, on a line connecting the two pools at the bottom. During normal operation the valve is closed, while in emergency conditions it opens and the heat exchanger is flooded, with consequent heat transfer from the primary side to the pool side. In order to verify the correct system behavior during a long-term accidental transient, two main PERSEO experimental tests are analyzed. For this purpose, a coupling is implemented between the one-dimensional system code CATHARE, which reproduces the system-scale behavior, and the three-dimensional CFD code NEPTUNE CFD, which allows a full investigation of the pools and the injector. The coupling between the two codes is realized through the boundary conditions. In a first analysis, the facility is simulated with the system code CATHARE V2.5 to validate the results against the experimental data. The comparison of the numerical results shows a different void distribution during boiling conditions inside the heat suppression pool for the two cases of a single-volume and a three-volume nodalization scheme of the pool. Finally, to improve the investigation of the void distribution inside the pool and of the temperature stratification phenomena below the injector, two- and three-dimensional CFD models with a simplified geometry of the system are adopted.


Bioinformatics has, over the last few decades, played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem is to learn as much as possible about its coding regions. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As has recently been pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. This thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called "The Bologna Annotation Resource Plus" (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences, characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) within clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available). This is made possible by cluster-specific HMM profiles, which can be used to calculate reliable template-to-target alignments even for distantly related proteins (sequence identity < 30%). Other BAR+-based applications developed during my doctorate include the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
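The transfer-by-homology idea behind such clustering methods can be sketched in a few lines: build clusters from pairwise sequence identities and propagate GO terms inside each cluster. The identity threshold, the union-find clustering and the example accessions below are illustrative simplifications of BAR+'s far more stringent metric and statistical validation:

```python
from collections import defaultdict

def cluster_by_identity(pairs, threshold=0.9):
    """Single-linkage clustering from (seq_a, seq_b, identity) triples,
    using union-find; a stand-in for the all-against-all alignment metric."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for a, b, ident in pairs:
        find(a); find(b)                    # register both sequences
        if ident >= threshold:
            parent[find(a)] = find(b)       # merge their clusters
    clusters = defaultdict(set)
    for x in list(parent):
        clusters[find(x)].add(x)
    return list(clusters.values())

def transfer_annotations(clusters, go_terms):
    """Propagate the GO terms of annotated members to every cluster member."""
    annotated = {}
    for cl in clusters:
        terms = set().union(*(go_terms.get(s, set()) for s in cl))
        for s in cl:
            annotated[s] = terms
    return annotated

pairs = [("P1", "P2", 0.95), ("P2", "P3", 0.92), ("P3", "P4", 0.40)]
clusters = cluster_by_identity(pairs)
anno = transfer_annotations(clusters, {"P1": {"GO:0016787"}})
```

Here P3 inherits P1's GO term through the cluster, while the weakly related P4 stays unannotated; the statistical validation step in BAR+ is what makes such a transfer "safe" at scale.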


Reliable electronic systems, namely sets of reliable electronic devices connected to each other and working correctly together for the same functionality, are an essential ingredient for the large-scale commercial implementation of any technological advancement. Microelectronics technologies and new powerful integrated circuits provide noticeable improvements in performance and cost-effectiveness, and allow electronic systems to be introduced in increasingly diversified contexts. On the other hand, the opening of new fields of application leads to new, unexplored reliability issues. The development of semiconductor device and electrical models (such as the well-known SPICE models) able to describe the electrical behavior of devices and circuits is a useful means to simulate and analyze the functionality of new electronic architectures and new technologies. Moreover, it represents an effective way to point out the reliability issues arising from the employment of advanced electronic systems in new application contexts. In this thesis, the modeling and design of both advanced reliable circuits for general-purpose applications and devices for energy efficiency are considered. In more detail, the following activities have been carried out. First, reliability issues in terms of security of standard communication protocols in wireless sensor networks are discussed, and a new communication protocol that increases network security is introduced. Second, a novel scheme for the on-die measurement of either clock jitter or process parameter variations is proposed; the scheme can be used to evaluate both jitter and process parameter variations at low cost. Then, reliability issues in the field of "energy scavenging systems" are analyzed, with an accurate analysis and modeling of the effects of faults affecting circuits for energy harvesting from mechanical vibrations. Finally, the problem of modeling the electrical and thermal behavior of photovoltaic (PV) cells under hot-spot conditions is addressed through the development of a coupled electrical and thermal model.
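The electrical part of a PV cell model is commonly built on the single-diode equation; the sketch below (with illustrative parameter values, not the thesis model, which adds the thermal coupling needed for hot-spot conditions) scans the resulting I-V curve for the maximum power point:

```python
import math

def diode_current(v, i_ph=5.0, i_0=1e-9, n=1.3, t_cell=298.15):
    """Single-diode PV cell model: I = Iph - I0*(exp(qV/(nkT)) - 1).
    Parameter values are illustrative, not measured ones."""
    q, k = 1.602e-19, 1.381e-23
    return i_ph - i_0 * (math.exp(q * v / (n * k * t_cell)) - 1.0)

def max_power(v_step=0.001):
    """Scan the I-V curve from short circuit towards open circuit and
    return the (voltage, current) pair maximising the output power."""
    best = (0.0, 0.0)
    v = 0.0
    while True:
        i = diode_current(v)
        if i <= 0.0:          # past open-circuit voltage
            break
        if v * i > best[0] * best[1]:
            best = (v, i)
        v += v_step
    return best

v_mp, i_mp = max_power()
```

Under a hot spot, one shaded cell is driven into reverse bias by the rest of the string and dissipates power locally, which is why the thesis couples this electrical description to a thermal one.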


Theoretical models are developed for the continuous-wave and pulsed laser incision and cut of thin single and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is consequently divided in correspondence with the progressive removal of individual layers. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements are seen over previous works with the more accurate calculation of optical absorption and shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal is the result of both progressive short-pulse ablation and classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper are found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate. 
An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
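The one-dimensional steady-state model described above reduces, in its simplest form, to a power balance: absorbed beam power equals the enthalpy flux (heating plus vaporisation) of the material removed from the kerf. The sketch below uses this textbook-style balance with illustrative aluminium-like property values, not the thesis's multi-layer formulation:

```python
def cut_depth(power, speed, width, absorptivity, rho, cp, dT, Lv):
    """Steady-state power balance for laser incision:
        A * P = v * w * d * rho * (cp * dT + Lv)
    solved for the incision depth d (metres).  A single-layer sketch;
    all property values passed in are illustrative assumptions."""
    return absorptivity * power / (speed * width * rho * (cp * dT + Lv))

# Aluminium-like numbers (assumed): 100 W beam, 0.1 m/s scan, 50 um kerf.
d = cut_depth(power=100.0, speed=0.1, width=50e-6,
              absorptivity=0.1, rho=2700.0, cp=900.0,
              dT=2720.0 - 293.0, Lv=1.05e7)
```

The balance is linear in the absorbed power and inversely proportional to the scan speed, which is the qualitative trade-off behind choosing the average beam power needed for a complete cut.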


Workaholism is defined as the combination of two underlying dimensions: working excessively and working compulsively. The present thesis pursues three aims: 1) to test whether the interaction between environmental and personal antecedents may enhance workaholism; 2) to develop a questionnaire to assess overwork climate in the workplace; 3) to contrast focal employees' and coworkers' perceptions of the employees' workaholism and engagement. Concerning the first aim, the interaction between overwork climate and person characteristics (achievement motivation, perfectionism, conscientiousness, self-efficacy) was explored in a sample of 333 Dutch employees. The results of moderated regression analyses showed that the interaction between overwork climate and person characteristics is related to workaholism. The second aim was pursued through two interrelated studies. In Study 1 the Overwork Climate Scale (OWCS) was developed and tested using a principal component analysis (N = 395) and a confirmatory factor analysis (N = 396). Two overwork climate dimensions were distinguished: overwork endorsement and lacking overwork rewards. In Study 2 the total sample (N = 791) was used to explore the association of overwork climate with two types of working hard: work engagement and workaholism. Lacking overwork rewards was negatively associated with engagement, whereas overwork endorsement showed a positive association with workaholism. Concerning the third aim, using a sample of 73 dyads composed of focal employees and their coworkers, a multitrait-multimethod matrix and a correlated trait-correlated method model, the CT-C(M-1) model, were examined. The results showed considerable agreement between raters on focal employees' engagement and workaholism. In contrast, a significant difference was observed for the cognitive dimension of workaholism, working compulsively. Moreover, further evidence was provided for the discriminant validity of engagement and workaholism. Overall, workaholism appears to be a negative work-related state that can be better explained by adopting a multi-causal and multi-rater approach.
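The rater-agreement analysis rests on relating self-reports to coworker reports across dyads. The thesis uses a full CT-C(M-1) model; the sketch below shows only the underlying idea, a plain Pearson correlation on invented 1-5 ratings:

```python
import math

def pearson(x, y):
    """Pearson correlation between two paired rating series, used here as a
    simple self-other agreement index (the CT-C(M-1) model additionally
    separates trait and method variance)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Toy dyad data (invented): focal employees' self-ratings vs coworker
# ratings of working excessively on a 1-5 scale.
self_ratings = [4.0, 2.5, 3.5, 5.0, 1.5, 3.0]
coworker_ratings = [3.5, 2.0, 4.0, 4.5, 2.0, 3.0]
r = pearson(self_ratings, coworker_ratings)
```

A high correlation across dyads is what "considerable agreement between raters" means operationally; the multitrait-multimethod matrix extends this to every trait-rater combination.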


Several countries have acquired, over the past decades, large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has increased dramatically, proving that these systems are appropriate for large-scale, efficient groundwater surveying. We start with the processing and inversion of two AEM datasets from two different systems flown over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III survey (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground-based, careful processing, inversion, post-processing, data integration and data calibration is the proper approach, capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, from geological surveys and universities to private companies, which often hold large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction. In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, compared to having only a ground-based TEM dataset and/or only borehole data.


Basic concepts and definitions relative to Lagrangian Particle Dispersion Models (LPDMs) for the description of turbulent dispersion are introduced. The study focusses on LPDMs that use, as input for the large-scale motion, fields produced by Eulerian models, with the small-scale motions described by Lagrangian Stochastic Models (LSMs). Data from two different dynamical models have been used: a Large Eddy Simulation (LES) and a General Circulation Model (GCM). After reviewing the small-scale closure adopted by each Eulerian model, the development and implementation of appropriate LSMs is outlined. The basic requirement of every LPDM used in this work is its fulfilment of the Well Mixed Condition (WMC). For the description of dispersion in the GCM domain, a stochastic model of Markov order 0, consistent with the eddy-viscosity closure of the dynamical model, is implemented. An LSM of Markov order 1, more suitable for shorter timescales, has been implemented for the description of the unresolved motion of the LES fields. Different assumptions on the small-scale correlation time are made. Tests of the LSM on GCM fields suggest that an interpolation algorithm able to maintain analytical consistency between the diffusion coefficient and its derivative is mandatory if the model is to satisfy the WMC. A dynamical time-step selection scheme based on the shape of the diffusion coefficient is also introduced, and the criteria for the integration step selection are discussed. Absolute and relative dispersion experiments are performed with various unresolved-motion settings for the LSM on LES data, and the results are compared with laboratory data. The study shows that the unresolved turbulence parameterization has a negligible influence on the absolute dispersion, while it affects the contribution of relative dispersion and meandering to the absolute dispersion, as well as the Lagrangian correlation.
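The Well Mixed Condition for a Markov order-0 model can be demonstrated directly: with an inhomogeneous diffusivity K(z), the random displacement step needs the drift term K'(z) dt, otherwise an initially uniform tracer distribution does not stay uniform. A minimal sketch with an assumed linear K(z) profile and reflective boundaries:

```python
import math
import random

def rdm_step(z, dt, k, dkdz, z_top):
    """One step of a Markov order-0 (random displacement) model:
        dz = K'(z) dt + sqrt(2 K(z) dt) * xi,   xi ~ N(0, 1).
    The drift term K'(z) dt is what enforces the Well Mixed Condition."""
    z_new = z + dkdz(z) * dt + math.sqrt(2.0 * k(z) * dt) * random.gauss(0.0, 1.0)
    z_new = abs(z_new)                    # reflect at the ground
    if z_new > z_top:
        z_new = 2.0 * z_top - z_new       # reflect at the top
    return z_new

def well_mixed_fraction(n=2000, steps=400, dt=0.05, z_top=1.0, seed=1):
    """Release uniformly mixed particles in K(z) = 0.1 + 0.2 z (analytic
    derivative) and return the fraction ending in the lower half of the
    domain; the WMC demands it stays close to 0.5."""
    random.seed(seed)
    k = lambda z: 0.1 + 0.2 * z
    dkdz = lambda z: 0.2
    zs = [random.uniform(0.0, z_top) for _ in range(n)]
    for _ in range(steps):
        zs = [rdm_step(z, dt, k, dkdz, z_top) for z in zs]
    return sum(1 for z in zs if z < 0.5 * z_top) / n

frac = well_mixed_fraction()
```

Dropping the `dkdz(z) * dt` term makes particles accumulate where K is small, which is exactly the failure mode the interpolation-consistency requirement above guards against.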


Chlorinated solvents have been the most ubiquitous organic contaminants found in groundwater over the last five decades. They generally reach groundwater as a Dense Non-Aqueous Phase Liquid (DNAPL). This phase can migrate through aquifers, and also through aquitards, in ways that aqueous contaminants cannot. The complex phase partitioning which chlorinated solvent DNAPLs can undergo (i.e. to the dissolved, vapor or sorbed phase), as well as their transformations (e.g. degradation), depend on the physico-chemical properties of the contaminants themselves and on the features of the hydrogeological system. The main goal of the thesis is to provide new knowledge for future investigations of sites contaminated by DNAPLs in alluvial settings, proposing innovative investigative approaches and emphasizing some of the key issues and main criticalities of this kind of contaminant in such settings. To achieve this goal, the hydrogeologic setting below the city of Ferrara (Po plain, northern Italy), which is affected by scattered contamination by chlorinated solvents, has been investigated at different scales (regional and site-specific), from both an intrinsic (i.e. groundwater flow systems) and a specific (i.e. chlorinated solvent DNAPL behavior) point of view. Detailed investigations were carried out in particular in one selected test site, known as the "Caretti site", where high-resolution vertical profiles of different kinds of data were collected by means of multilevel monitoring systems and other innovative sampling and analytical techniques. This allowed a deep geological and hydrogeological knowledge of the system to be achieved and the architecture of the contaminants to be reconstructed in detail in relation to the features of the hosting porous medium. The results achieved in this thesis are useful not only at the local scale, e.g. to interpret the origin of contamination at other sites in the Ferrara area, but also at the global scale, to guide future remediation and protection actions in similar hydrogeologic settings.


An extensive study of the morphology and dynamics of the equatorial ionosphere over South America is presented here. A multi-parametric approach is used to describe the physical characteristics of the ionosphere in the regions where the combination of the thermospheric electric field and the horizontal geomagnetic field creates the so-called Equatorial Ionization Anomalies. Ground-based measurements from GNSS receivers are used to link the Total Electron Content (TEC), its spatial gradients and the phenomenon known as scintillation, which can lead to GNSS signal degradation or even to a GNSS 'loss of lock'. A new algorithm to highlight the features characterizing the TEC distribution is developed in the framework of this thesis, and the results obtained are validated and used to improve the performance of a GNSS positioning technique (long-baseline RTK). In addition, the correlation between scintillation and the dynamics of the ionospheric irregularities is investigated. By means of software implemented for this thesis, the velocity of the ionospheric irregularities is evaluated using high-sampling-rate GNSS measurements. The results highlight the parallel behaviour of the amplitude scintillation index (S4) occurrence and the zonal velocity of the ionospheric irregularities, at least during severe scintillation conditions (post-sunset hours). This suggests that scintillations are driven by TEC gradients as well as by the dynamics of the ionospheric plasma. Finally, given the importance of such studies for technological applications (e.g. GNSS high-precision applications), a validation of the NeQuick model (the model used in the new GALILEO satellites for TEC modelling) is performed. The NeQuick performance improves dramatically when data from HF radar sounding (ionograms) are ingested. A custom-designed algorithm, based on image recognition techniques, is developed to properly select the ingested data, leading to a further improvement of the NeQuick performance.
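The amplitude scintillation index mentioned above has a compact definition that can be stated directly. A minimal sketch (the invented five-sample windows stand in for the usual minute-long windows of high-rate, detrended intensity data):

```python
import math

def s4_index(intensity):
    """Amplitude scintillation index:
        S4 = sqrt((<I^2> - <I>^2) / <I>^2)
    computed over a window of (normally detrended) signal intensity samples."""
    n = len(intensity)
    mean_i = sum(intensity) / n
    mean_i2 = sum(i * i for i in intensity) / n
    return math.sqrt(max(0.0, mean_i2 - mean_i ** 2) / mean_i ** 2)

quiet = [1.0, 1.01, 0.99, 1.0, 1.0]    # nearly constant intensity
fading = [1.0, 0.2, 1.8, 0.4, 1.6]     # deep, fast fades (invented values)
```

Quiet conditions give S4 near zero, while deep fading pushes it towards (and past) the ~0.5 level commonly treated as severe scintillation.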


In this work, simulations of liquids at the molecular level were carried out using different multiscale techniques. These allow an effective description of the liquid that requires less computer time and can thus capture phenomena on longer time and length scales.

A key ingredient is a simplified ("coarse-grained") model, derived in a systematic procedure from simulations of the detailed model such that selected properties of the detailed model (e.g. the pair correlation function, the pressure) are reproduced.

Algorithms were investigated that allow a simultaneous coupling of the detailed and the simplified model ("Adaptive Resolution Scheme", AdResS). Here the detailed model is used in a predefined subvolume of the liquid (e.g. near a surface), while the rest is described with the simplified model.

For this purpose a method (the "thermodynamic force") was developed that makes the coupling possible even when the two models are in different thermodynamic states. In addition, a novel coupling algorithm (H-AdResS) was described, which formulates the coupling through a Hamiltonian; in this algorithm a correction analogous to the thermodynamic force is possible at lower computational cost.

As an application of these basic techniques, path-integral molecular dynamics (MD) simulations of water were studied. With this method it is possible to include quantum effects of the nuclei (delocalization, zero-point energy) in the simulation. First, a multiscale technique ("force matching") was used to extract an effective interaction from a detailed simulation based on density functional theory. The path-integral MD simulation improves the description of the intramolecular structure in comparison with experimental data. The model is also suitable for simultaneous coupling within one simulation, in which a water molecule (described by 48 point particles in the path-integral MD model) is coupled to a simplified model (a single point particle). In this way a water-vacuum interface could be simulated in which only the surface is described with the path-integral model and the rest with the simplified model.
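The AdResS coupling described above hinges on a smooth resolution weight w(x) that switches between the detailed and the coarse-grained model across a hybrid zone. Below is a minimal sketch of a common cos² choice and the pairwise force interpolation used in force-based AdResS (H-AdResS instead mixes the Hamiltonians); the exact functional form varies between implementations:

```python
import math

def adress_weight(x, x_lo, x_hi):
    """Resolution weight w(x): 1 in the atomistic region (x <= x_lo),
    0 in the coarse-grained region (x >= x_hi), and a smooth cos^2 ramp
    in the hybrid zone in between (a common, but not unique, choice)."""
    if x <= x_lo:
        return 1.0
    if x >= x_hi:
        return 0.0
    return math.cos(0.5 * math.pi * (x - x_lo) / (x_hi - x_lo)) ** 2

def mixed_force(f_at, f_cg, w_i, w_j):
    """Force-based AdResS pair force:
        F_ij = w_i * w_j * F_at + (1 - w_i * w_j) * F_cg
    so each pair smoothly interpolates between the two descriptions."""
    lam = w_i * w_j
    return lam * f_at + (1.0 - lam) * f_cg
```

Because w varies smoothly, molecules gain or lose their detailed degrees of freedom gradually as they cross the hybrid zone; the thermodynamic-force correction then removes the spurious density modulation this interpolation would otherwise cause.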


Urban centers contribute significantly to anthropogenic air pollution, although they cover only a minor fraction of the Earth's land surface. Since the worldwide degree of urbanization is steadily increasing, the anthropogenic contribution of urban centers to air pollution is expected to become more substantial in future air quality assessments. The main objective of this thesis was to obtain a more profound insight into the dispersion and deposition of aerosol particles from 46 individual major population centers (MPCs), as well as into their regional and global influence on the atmospheric distribution of several aerosol types. For the first time, this was assessed within one model framework, for which the global model EMAC was applied with different representations of aerosol particles. First, in an approach with passive tracers and a setup in which the results depend only on the source location and on the size and solubility of the tracers, several metrics and a regional climate classification were used to quantify the major outflow pathways, both vertically and horizontally, and to compare the balance between pollution export away from, and pollution build-up around, the source points. Then, in a more comprehensive approach, the anthropogenic emissions of key trace species were changed at the MPC locations to determine the cumulative impact of the MPC emissions on the atmospheric aerosol burdens of black carbon, particulate organic matter, sulfate, and nitrate. Ten different mono-modal passive aerosol tracers were continuously released at the same constant rate at each emission point. The results clearly showed that, on average, about five times more mass is advected quasi-horizontally at low levels than is exported into the upper troposphere. The strength of the low-level export is mainly determined by the location of the source, while the vertical transport is mainly governed by the lifting potential and the solubility of the tracers.
Similar to insoluble gas phase tracers, the low-level export of aerosol tracers is strongest at middle and high latitudes, while the regions of strongest vertical export differ between aerosol (temperate winter dry) and gas phase (tropics) tracers. The emitted mass fraction that is kept around MPCs is largest in regions where aerosol tracers have short lifetimes; this mass is also critical for assessing the impact on humans. However, the number of people who live in a strongly polluted region around urban centers depends more on the population density than on the size of the area which is affected by strong air pollution. Another major result was that fine aerosol particles (diameters smaller than 2.5 micrometer) from MPCs undergo substantial long-range transport, with about half of the emitted mass being deposited beyond 1000 km away from the source. In contrast to this diluted remote deposition, there are areas around the MPCs which experience high deposition rates, especially in regions which are frequently affected by heavy precipitation or are situated in poorly ventilated locations. Moreover, most MPC aerosol emissions are removed over land surfaces. In particular, forests experience more deposition from MPC pollutants than other land ecosystems. In addition, it was found that the generic treatment of aerosols has no substantial influence on the major conclusions drawn in this thesis. Moreover, in the more comprehensive approach, it was found that emissions of black carbon, particulate organic matter, sulfur dioxide, and nitrogen oxides from MPCs influence the atmospheric burden of various aerosol types very differently, with impacts generally being larger for secondary species, sulfate and nitrate, than for primary species, black carbon and particulate organic matter. 
While the changes in the burdens of sulfate, black carbon, and particulate organic matter show an almost linear response to changes in the emission strength, the formation of nitrate was found to be contingent on many more factors, e.g. the abundance of sulfuric acid, than the strength of the nitrogen oxide emissions alone. The generic tracer experiments were further extended to conduct the first global-scale assessment of the cumulative risk of contamination from multiple nuclear reactor accidents. For this, several factors had to be taken into account: the probability of major accidents, the cumulative deposition field of the radionuclide cesium-137, and a threshold value that defines contamination. By collecting the necessary data and accounting for uncertainties, it was found that the risk is highest in western Europe, the eastern US, and Japan, where contamination by major accidents is expected on average about every 50 years.
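The cumulative risk combines an accident frequency with a per-event probability that local deposition exceeds the contamination threshold. A minimal sketch, treating accidents as independent Poisson events with invented illustrative numbers (the actual assessment derives the per-event probability from the computed cesium-137 deposition fields):

```python
import math

def prob_contamination(n_reactors, p_accident_per_reactor_year,
                       p_local_exceed, years):
    """Probability that a given location is contaminated at least once
    within `years`, modelling major accidents as independent Poisson
    events:  P = 1 - exp(-rate * years).
    `p_local_exceed` is the per-accident probability that deposition at
    this location exceeds the contamination threshold (here an assumed
    number; in the assessment it comes from the deposition fields)."""
    rate = n_reactors * p_accident_per_reactor_year * p_local_exceed
    return 1.0 - math.exp(-rate * years)

# Illustrative only: 100 relevant reactors, one major accident per
# 10^4 reactor-years, 10% chance a given accident contaminates this spot.
p50 = prob_contamination(100, 1e-4, 0.1, 50)
```

The exponential form makes the risk saturate rather than grow without bound, and it shows directly how the result scales with reactor density, which is why the computed risk peaks in the densely nuclearized regions named above.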