961 results for short-range ordering
Abstract:
Hypernuclear physics is currently attracting renewed interest, due to the important role of hypernuclear spectroscopy (hyperon-hyperon and hyperon-nucleon interactions) as a unique tool to describe the baryon-baryon interactions in a unified way and to understand the origin of their short-range part.

Hypernuclear research will be one of the main topics addressed by the {\sc PANDA} experiment at the planned Facility for Antiproton and Ion Research {\sc FAIR}. Thanks to the use of stored $\overline{p}$ beams, copious production of double $\Lambda$ hypernuclei is expected at the {\sc PANDA} experiment, which will enable high-precision $\gamma$ spectroscopy of such nuclei for the first time. At {\sc PANDA}, excited states of $\Xi^-$ hypernuclei will be used as a basis for the formation of double $\Lambda$ hypernuclei. For their detection, a dedicated hypernuclear detector setup is planned. This setup consists of a primary nuclear target for the production of $\Xi^{-}+\overline{\Xi}$ pairs, a secondary active target for the hypernuclei formation and the identification of associated decay products, and a germanium detector array to perform $\gamma$ spectroscopy.

In the present work, the feasibility of performing high-precision $\gamma$ spectroscopy of double $\Lambda$ hypernuclei at the {\sc PANDA} experiment has been studied by means of a Monte Carlo simulation. For this purpose, the design and simulation of the dedicated detector setup, as well as of the mechanism to produce double $\Lambda$ hypernuclei, have been optimized together with the performance of the whole system. In addition, the production yields of double hypernuclei in excited particle-stable states have been evaluated within a statistical decay model.

A strategy for the unique assignment of various newly observed $\gamma$-transitions to specific double hypernuclei has been successfully implemented by combining the predicted energy spectra of each target with the measurement of the momenta of two pions from the subsequent weak decays of a double hypernucleus.
% Indeed, based on this Monte Carlo simulation, the analysis of the statistical decay of $^{13}_{\Lambda\Lambda}$B has been performed.
% As a result, three $\gamma$-transitions associated with the double hypernucleus $^{11}_{\Lambda\Lambda}$Be
% and with the single hyperfragments $^{4}_{\Lambda}$H and $^{9}_{\Lambda}$Be have been well identified.

For the background handling, a method based on time measurement has also been implemented. However, the fraction of tagged events related to the production of $\Xi^{-}+\overline{\Xi}$ pairs varies between 20% and 30% of the total number of produced events of this type. As a consequence, further work is needed to increase the tagging efficiency by a factor of 2.

The contribution of the background reactions to the radiation damage of the germanium detectors has also been studied within the simulation. Additionally, a test to check the degradation of the energy resolution of the germanium detectors in the presence of a magnetic field has been performed. No significant degradation of the energy resolution or of the electronics was observed. A correlation between rise time and pulse shape has been used to correct the measured energy.

Based on the present results, performing $\gamma$ spectroscopy of double $\Lambda$ hypernuclei at the {\sc PANDA} experiment appears feasible. A further improvement of the statistics is needed for the background rejection studies. Moreover, a more realistic layout of the hypernuclear detectors has been suggested, using the results of these studies, to achieve a better balance between the physical and the technical requirements.
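As an illustration of the production mechanism outlined above, the commonly quoted reaction chain for double $\Lambda$ hypernuclei at an antiproton facility can be sketched as follows (a generic outline based on standard hypernuclear physics, not an expression taken from this thesis):
\[
\overline{p} + p \rightarrow \Xi^- + \overline{\Xi}^{+}, \qquad
\Xi^-_{\text{(captured)}} + p \rightarrow \Lambda + \Lambda,
\]
where the $\Xi^-$ produced in the primary target is slowed down and captured in the secondary active target before converting on a proton (releasing roughly 28 MeV), and the two $\Lambda$ hyperons may then be bound by a secondary-target nucleus to form a double $\Lambda$ hypernucleus whose $\gamma$-transitions are recorded by the germanium array.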
Abstract:
Self-organising pervasive ecosystems of devices are set to become a major vehicle for delivering infrastructure and end-user services. The inherent complexity of such systems poses new challenges to those who want to master it by applying engineering principles. The recent growth in the number and distribution of devices with decent computational and communication capabilities, which suddenly accelerated with the massive diffusion of smartphones and tablets, is delivering a world with a much higher density of devices in space. Communication technologies, too, seem to be focussing on short-range device-to-device (P2P) interactions, with technologies such as Bluetooth and Near-Field Communication gaining greater adoption. Locality and situatedness become key to providing the best possible experience to users, and the classic model of a centralised, enormously powerful server gathering and processing data becomes less and less efficient as device density grows. Accomplishing complex global tasks without a centralised controller responsible for aggregating data, however, is challenging. In particular, there is a local-to-global issue that makes the application of engineering principles difficult: designing device-local programs that, through interaction, guarantee a certain global service level. In this thesis, we first analyse the state of the art in coordination systems, then motivate the work by describing the main issues of pre-existing tools and practices and by identifying the improvements that would benefit the design of such complex software ecosystems. The contribution can be divided into three main branches. First, we introduce a novel simulation toolchain for pervasive ecosystems, designed to allow good expressiveness while retaining high performance. Second, we leverage existing coordination models and patterns in order to create new spatial structures. Third, we introduce a novel language, based on the existing "Field Calculus" and integrated with the aforementioned toolchain, designed to be usable for practical aggregate programming.
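To make the local-to-global issue concrete, here is a minimal, self-contained Python sketch (a generic illustration of aggregate-style computation, not code from this thesis or its toolchain): every device runs the same purely local rule, repeatedly taking the minimum of its neighbours' values plus one, and the emergent global result is a hop-count distance field from a source device.

```python
# Toy illustration of a local rule producing a global result (hop-count gradient).
# Hypothetical example; unrelated to the thesis's actual simulator or language.

INF = float("inf")

def gradient(neighbours, source, rounds=50):
    """neighbours: dict mapping device id -> list of neighbouring device ids."""
    value = {d: (0 if d == source else INF) for d in neighbours}
    for _ in range(rounds):  # synchronous rounds stand in for asynchronous gossip
        value = {
            d: 0 if d == source else min(
                (value[n] + 1 for n in neighbours[d]), default=INF
            )
            for d in neighbours
        }
    return value

if __name__ == "__main__":
    # A small line topology: 0 - 1 - 2 - 3
    topo = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    print(gradient(topo, source=0))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

The point of the sketch is that the program text is entirely device-local, while the guarantee one actually cares about (every device eventually holds its distance to the source) is a global property of the interacting ensemble.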
Abstract:
The rapid development in the field of lighting and illumination allows low energy consumption and has driven a rapid growth in the use and development of solid-state sources. As the efficiency of these devices increases and their cost decreases, they are predicted to become the dominant source for general illumination in the short term. The objective of this thesis is to study, through extensive simulations in realistic scenarios, the feasibility and exploitation of visible light communication (VLC) for vehicular ad hoc network (VANET) applications. A brief introduction presents the new scenario of smart cities, in which visible light communication will become a fundamental enabling technology for future communication systems. Specifically, this thesis focuses on the acquisition of several, frequent, and small data packets from vehicles, exploited as sensors of the environment. The use of vehicles as sensors is a new paradigm that enables efficient environmental monitoring and improved traffic management. In most cases, the sensed information must be collected at a remote control centre, and one of the most challenging aspects is the uplink acquisition of data from vehicles. My thesis discusses the opportunity to take advantage of short-range vehicle-to-vehicle (V2V) and vehicle-to-roadside (V2R) communications to offload the cellular networks. More specifically, it discusses the system design and assesses the obtainable cellular resource saving, considering the impact of the percentage of vehicles equipped with short-range communication devices, of the number of deployed road side units, and of the adopted routing protocol. Where short-range communications are concerned, WAVE/IEEE 802.11p is considered as the standard for VANETs. Its use together with VLC is considered in urban vehicular scenarios to let vehicles communicate without involving the cellular network. The study is conducted by simulation, considering both a simulation platform (SHINE, simulation platform for heterogeneous interworking networks) developed within the Wireless communication Laboratory (Wilab) of the University of Bologna and CNR, and the network simulator ns-3, trying to realistically represent all the wireless network communication aspects. Specifically, the vehicular VLC system was simulated and introduced into ns-3 by creating a new module for the simulator, which will help to study VLC applications in VANETs. The final observations are intended to encourage further research in the area and to optimize the performance of future VLC system applications.
Abstract:
We consider stochastic individual-based models for the social behaviour of groups of animals. In these models the trajectory of each animal is given by a stochastic differential equation with interaction. The social interaction is contained in the drift term of the SDE. We consider a global aggregation force and a short-range repulsion force. The repulsion range and strength are rescaled with the number of animals N. We show that as N tends to infinity the stochastic fluctuations disappear and a smoothed version of the empirical process converges uniformly towards the solution of a nonlinear, nonlocal partial differential equation of advection-reaction-diffusion type. The rescaling of the repulsion in the individual-based model implies that the corresponding term in the limit equation is local, while the aggregation term remains non-local. Moreover, we discuss the effect of a predator on the system and derive an analogous convergence result. The predator acts as a repulsive force. Different laws of motion for the predator are considered.
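A minimal sketch of the type of interacting particle system described above, with the notation (interaction kernels $F_a$, $F_r^N$, noise intensity $\sigma$) assumed for illustration rather than taken from the thesis:
\[
dX^{i,N}_t \;=\; \Bigg[\frac{1}{N}\sum_{j=1}^{N} F_a\big(X^{j,N}_t - X^{i,N}_t\big)
\;+\; \frac{1}{N}\sum_{j=1}^{N} F_r^{N}\big(X^{j,N}_t - X^{i,N}_t\big)\Bigg]\,dt \;+\; \sigma\, dW^{i}_t,
\qquad i = 1,\dots,N,
\]
where $F_a$ is the global aggregation force, $F_r^{N}$ is a repulsion kernel whose range shrinks and whose strength grows with $N$, and $W^{1},\dots,W^{N}$ are independent Brownian motions. In the limit $N\to\infty$ a smoothed version of the empirical measure $\tfrac{1}{N}\sum_i \delta_{X^{i,N}_t}$ converges to the solution of an advection-reaction-diffusion equation in which the repulsion term has become local while the aggregation term stays non-local.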
Abstract:
Heusler compounds are a large class of materials exhibiting diverse fundamental phenomena, together with the possibility of being specifically tailored for various engineering demands. The present work discusses magnetic noncollinearity in the family of noncentrosymmetric ferrimagnetic Mn2-based Heusler compounds. Based on the obtained experimental and theoretical results, the Mn2YZ Heusler family is expected to provide promising candidates for the formation of a skyrmion lattice. The work focuses on a Mn2RhSn bulk polycrystalline sample, which serves as a prototype. It crystallizes in the tetragonal noncentrosymmetric structure (No. 119, I-4m2), which enables the anisotropic Dzyaloshinskii-Moriya (DM) exchange coupling. An additional short-range modulation, induced by the competing nearest and next-nearest interplane Heisenberg exchange, is suppressed above 80 K. This allows long-range modulations to develop in the ideal ferrimagnetic structure within the ab crystallographic planes and thus favors the occurrence of a skyrmion lattice within the temperature range 80 K ≤ T ≤ 270 K. The studies of Mn2RhSn were extended to a broad composition range and continued on thin-film samples.
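For orientation, the competition mentioned above is commonly written as a generic spin model combining Heisenberg and Dzyaloshinskii-Moriya terms; the expression below is a textbook-style sketch with assumed notation, not the specific model of this work:
\[
\mathcal{H} \;=\; -\sum_{\langle i,j\rangle} J_{ij}\,\mathbf{S}_i\cdot\mathbf{S}_j
\;+\; \sum_{\langle i,j\rangle} \mathbf{D}_{ij}\cdot\big(\mathbf{S}_i\times\mathbf{S}_j\big),
\]
where competing signs of the nearest- and next-nearest-neighbour interplane couplings $J_{ij}$ can produce short-range modulations, while the DM vectors $\mathbf{D}_{ij}$, allowed by the noncentrosymmetric I-4m2 structure, favour long-wavelength twisting of the ferrimagnetic order of the kind that supports skyrmion lattices.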
Abstract:
This work presents a detailed study of fundamental properties of the calcite CaCO3(10.4) surface and related mineral surfaces, made possible not only by the use of non-contact atomic force microscopy but primarily by the measurement of force fields. The absolute surface orientation, as well as the underlying atomic-scale process, could be successfully identified for the calcite (10.4) surface.
The adsorption of chiral molecules on calcite is relevant in the field of biomineralisation, which makes an understanding of the surface symmetry indispensable. Measuring the surface force field at the atomic level is a central aspect of this. Such a force map not only sheds light on the interaction of the surface with molecules, which is important for biomineralisation, but also offers the possibility of identifying atomic-scale processes and thereby surface properties.
The introduction of a highly flexible measurement protocol ensures a reliable measurement of the surface force field that is not available commercially. The conversion of the raw ∆f data into the vertical force Fz is, however, not a trivial procedure, especially when smoothing of the data is considered. This work describes in detail how Fz can be calculated correctly for the experimental conditions of this work. It is further described how the lateral forces Fy and the dissipation Γ were obtained, in order to exploit the full potential of this measurement method.
To understand atomic-scale processes on surfaces, the short-range chemical forces Fz,SR are of utmost importance. For this, long-range contributions must be fitted to Fz and subtracted from it. This is, however, an error-prone task, which was mastered in this work by finding three independent criteria that determine the onset zcut of Fz,SR, a quantity of central importance for this task. A thorough error analysis shows that the mutual deviation of the lateral forces is the criterion that yields trustworthy Fz,SR. This is the first time that a study has provided a criterion for the determination of zcut, complemented by a detailed error analysis.
With the knowledge of Fz,SR and Fy it was possible to identify one of the fundamental properties of the CaCO3(10.4) surface: the absolute surface orientation. A strong tilt of the imaged objects
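The separation of short-range forces described above can be summarised schematically as follows (a generic formulation with assumed notation, not the exact expressions of the thesis):
\[
F_{z,\mathrm{SR}}(z) \;=\; F_z(z) \;-\; F_{z,\mathrm{LR}}^{\mathrm{fit}}(z),
\]
where $F_{z,\mathrm{LR}}^{\mathrm{fit}}$ is a long-range model (e.g. van der Waals and electrostatic contributions) fitted to the measured $F_z$ only at tip-sample distances $z \ge z_{\mathrm{cut}}$, where chemical forces are negligible, and then extrapolated and subtracted at smaller distances. The choice of the onset $z_{\mathrm{cut}}$ therefore directly controls how trustworthy the extracted short-range chemical force is, which is why independent criteria for fixing it matter.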
Abstract:
Bluetooth wireless technology is a robust short-range communications system designed for low power and low cost, with a nominal range of 10 meters. It operates in the 2.4 GHz Industrial Scientific Medical (ISM) band and employs two techniques for minimizing interference: a frequency hopping scheme, which nominally splits the 2.400-2.485 GHz band into 79 frequency channels, and a time division duplex (TDD) scheme, which is used to switch to a new frequency channel on 625 μs boundaries. During normal operation a Bluetooth device is active on a different frequency channel every 625 μs, thus minimizing the chance of continuous interference impacting the performance of the system. The smallest unit of a Bluetooth network is called a piconet and can have a maximum of eight nodes. Bluetooth devices must assume one of two roles within a piconet, master or slave: the master governs quality of service and the frequency hopping schedule within the piconet, and the slave follows the master's schedule. A piconet must have a single master and up to 7 active slaves. By allowing devices to hold roles in multiple piconets through time multiplexing, i.e. slave/slave or master/slave, the Bluetooth technology allows multiple piconets to be interconnected into larger networks called scatternets. The Bluetooth technology is explored in the context of enabling ad-hoc networks. The Bluetooth specification provides flexibility in the scatternet formation protocol, outlining only the mechanisms necessary for future protocol implementations. A new protocol for scatternet formation and maintenance - mscat - is presented and its performance is evaluated using a Bluetooth simulator. The free variables manipulated in this study include device activity and the probabilities of devices performing discovery procedures. The relationship between the role a device has in the scatternet and its probability of performing discovery was examined and related to the scatternet topology formed. The results show that mscat creates dense network topologies for networks of 30, 50 and 70 nodes. The mscat protocol results in approximately a 33% increase in slaves/piconet and a reduction of approximately 12.5% in average roles/node. For 50-node scenarios, the set of parameters which produces the best outcome is an unconnected node inquiry probability (UP) of 10%, a master node inquiry probability (MP) of 80% and a slave inquiry probability (SP) of 40%. The mscat protocol extends the Bluetooth specification for the formation and maintenance of scatternets in an ad-hoc network.
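As a concrete reading of the piconet constraints above, here is a minimal Python sketch of the bookkeeping they imply (an illustrative toy, not the mscat implementation or a Bluetooth stack): one master per piconet, at most 7 active slaves, and nodes possibly holding roles in several piconets of a scatternet.

```python
# Toy piconet/scatternet bookkeeping reflecting the constraints described above.
# Hypothetical structure for illustration only.

MAX_ACTIVE_SLAVES = 7  # Bluetooth limit per piconet

class Piconet:
    def __init__(self, master):
        self.master = master      # exactly one master per piconet
        self.slaves = []          # up to 7 active slaves

    def add_slave(self, node):
        if len(self.slaves) >= MAX_ACTIVE_SLAVES:
            raise ValueError("piconet already has 7 active slaves")
        self.slaves.append(node)

class Scatternet:
    def __init__(self):
        self.piconets = []

    def roles_per_node(self):
        """Average number of roles per node, one of the metrics reported for mscat."""
        roles = {}
        for p in self.piconets:
            roles[p.master] = roles.get(p.master, 0) + 1
            for s in p.slaves:
                roles[s] = roles.get(s, 0) + 1
        return sum(roles.values()) / len(roles) if roles else 0.0

if __name__ == "__main__":
    net = Scatternet()
    p1, p2 = Piconet(master="A"), Piconet(master="B")
    p1.add_slave("B")   # node B is a slave in p1 and the master of p2: a bridge node
    p2.add_slave("C")
    net.piconets = [p1, p2]
    print(f"average roles/node: {net.roles_per_node():.2f}")
```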
Abstract:
Cell competition is the short-range elimination of slow-dividing cells through apoptosis when they are confronted with a faster-growing population. It is based on the comparison of relative cell fitness between neighboring cells and is a striking example of tissue adaptability that could play a central role in developmental error correction and cancer progression, in both Drosophila melanogaster and mammals. Cell competition has led to the discovery of multiple pathways that affect cell fitness and drive cell elimination. The diversity of these pathways could reflect unrelated phenomena, yet recent evidence suggests some common wiring and the existence of a bona fide fitness comparison pathway.
Abstract:
Terbium-149 is among the most interesting therapeutic nuclides for medical applications. It decays by emission of short-range α-particles (Eα = 3.967 MeV) with a half-life of 4.12 h. The goal of this study was to investigate the anticancer efficacy of a 149Tb-labeled DOTA-folate conjugate (cm09) using folate receptor (FR)-positive cancer cells in vitro and in tumor-bearing mice. 149Tb was produced at the ISOLDE facility at CERN. Radiolabeling of cm09 with purified 149Tb resulted in a specific activity of ~1.2 MBq/nmol. In vitro assays performed with 149Tb-cm09 revealed reduced KB cell viability in an FR-specific and activity-concentration-dependent manner. Tumor-bearing mice were injected with saline only (group A) or with 149Tb-cm09 (group B: 2.2 MBq; group C: 3.0 MBq). A significant tumor growth delay was found in treated animals, resulting in an increased average survival time for mice which received 149Tb-cm09 (B: 30.5 d; C: 43 d) compared to untreated controls (A: 21 d). Analysis of blood parameters revealed no signs of acute toxicity to the kidneys or liver in treated mice over the time of investigation. These results demonstrated the potential of folate-based α-radionuclide therapy in tumor-bearing mice.
Abstract:
Short-range nucleon-nucleon correlations in nuclei (NN SRC) carry important information on nuclear structure and dynamics. NN SRC have been extensively probed through two-nucleon knock-out reactions in both pion and electron scattering experiments. We report here on the detection of two-nucleon knock-out events from neutrino interactions and discuss their topological features as possibly involving NN SRC content in the target argon nuclei. The ArgoNeuT detector in the Main Injector neutrino beam at Fermilab has recorded a sample of 30 fully reconstructed charged-current events in which the leading muon is accompanied by a pair of protons at the interaction vertex, 19 of which have both protons above the Fermi momentum of the Ar nucleus. Out of these 19 events, four are found with the two protons in a strictly back-to-back, high-momentum configuration directly observed in the final state and can be associated with pionless nucleon-resonance mechanisms involving a pre-existing short-range-correlated np pair in the nucleus. Another fraction (four events) of the remaining 15 events has a reconstructed back-to-back configuration of an np pair in the initial state, a signature compatible with a one-body quasi-elastic interaction on a neutron in an SRC pair. The detection of these two subsamples of the collected (mu- + 2p) events suggests that mechanisms directly involving nucleon-nucleon SRC pairs in the nucleus are active and can be efficiently explored in neutrino-argon interactions with the LAr TPC technology.
Abstract:
Determining the role of different precipitation periods for peak discharge generation is crucial both for projecting future changes in flood probability and for short- and medium-range flood forecasting. In this study, catchment-averaged daily precipitation time series are analyzed prior to annual peak discharge events (floods) in Switzerland. The high number of floods considered – more than 4000 events from 101 catchments have been analyzed – allows us to derive significant information about the role of antecedent precipitation for peak discharge generation. Based on the analysis of precipitation time series, a new separation of flood-related precipitation periods is proposed: (i) the period 0 to 1 day before flood days, when the maximum flood-triggering precipitation rates are generally observed, (ii) the period 2 to 3 days before flood days, when longer-lasting synoptic situations generate "significantly higher than normal" precipitation amounts, and (iii) the period from 4 days to 1 month before flood days, when previous wet episodes may have already preconditioned the catchment. The novelty of this study lies in the separation of antecedent precipitation into the precursor antecedent precipitation (4 days before floods or earlier, called PRE-AP) and the short-range precipitation (0 to 3 days before floods, a period when precipitation is often driven by one persistent weather situation, e.g. a stationary low-pressure system). A precise separation of "antecedent" and "peak-triggering" precipitation is not attempted. Instead, the strict definition of antecedent precipitation periods permits a direct comparison of all catchments. The precipitation accumulating 0 to 3 days before an event is the most relevant for floods in Switzerland. PRE-AP precipitation has only a weak and region-specific influence on flood probability. Floods were significantly more frequent after wet PRE-AP periods only in the Jura Mountains, in the western and eastern Swiss plateau, and at the outlet of large lakes. As a general rule, wet PRE-AP periods enhance the flood probability in catchments with gentle topography, high infiltration rates, and large storage capacity (karstic cavities, deep soils, large reservoirs). In contrast, floods were significantly less frequent after wet PRE-AP periods in glacial catchments because of reduced melt. For the majority of catchments, however, no significant correlation between precipitation amounts and flood occurrence is found when the last 3 days before floods are omitted from the precipitation amounts. Moreover, PRE-AP was not higher for extreme floods than for frequent annual floods and was very close to climatology for all floods. The fact that floods are neither significantly more frequent nor more intense after wet PRE-AP periods is a clear indicator of a short discharge memory of Pre-Alpine, Alpine and South Alpine Swiss catchments. Our study raises the question of whether the impact of long-term precursory precipitation on floods in such catchments is overestimated in the general perception. The results suggest that considering a 3-4 day precipitation period should be sufficient to represent (understand, reconstruct, model, project) Swiss Alpine floods.
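A minimal sketch of how the precipitation periods defined above can be accumulated from a daily catchment-averaged series (illustrative code with assumed variable names, not the study's actual processing chain):

```python
# Illustrative accumulation of precipitation windows relative to a flood day.
# Window definitions follow the text: short-range = 0-3 days before the flood,
# PRE-AP = 4 days to about 1 month (here 30 days) before the flood.

def precipitation_windows(daily_precip, flood_index):
    """daily_precip: sequence of daily catchment-averaged precipitation [mm].
    flood_index: index of the flood day within daily_precip."""
    def window_sum(first_lag, last_lag):
        start = max(flood_index - last_lag, 0)
        end = flood_index - first_lag + 1  # inclusive of first_lag
        return sum(daily_precip[start:end])

    return {
        "triggering_0_1d": window_sum(0, 1),   # 0-1 days before the flood day
        "synoptic_2_3d": window_sum(2, 3),     # 2-3 days before
        "short_range_0_3d": window_sum(0, 3),  # combined short-range period
        "PRE_AP_4d_30d": window_sum(4, 30),    # precursor antecedent precipitation
    }

if __name__ == "__main__":
    import random
    random.seed(0)
    series = [random.uniform(0, 20) for _ in range(120)]  # fake daily data [mm]
    print(precipitation_windows(series, flood_index=100))
```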
Abstract:
We consider a three-dimensional effective theory of Polyakov lines derived previously from lattice Yang-Mills theory and QCD by means of a resummed strong-coupling expansion. The effective theory is useful for investigations of the phase structure, with a sign problem mild enough to allow simulations also at finite density. In this work we present a numerical method to determine improved values for the effective couplings directly from correlators of 4d Yang-Mills theory. For values of the gauge coupling up to the vicinity of the phase transition, the dominant short-range effective couplings are well described by their corresponding strong-coupling series. We also provide numerical results for the longer-range interactions, for Polyakov lines in higher representations, and for four-point interactions, and discuss the growing significance of non-local contributions as the lattice gets finer. Within this approach the critical Yang-Mills coupling βc is reproduced to better than one percent from a one-coupling effective theory on Nτ = 4 lattices, while up to five couplings are needed on Nτ = 8 for the same accuracy.
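For reference, the generic structure of such a one-coupling Polyakov-line effective theory is a nearest-neighbour interaction of traced Polyakov lines; the expression below is a schematic sketch with assumed normalisation and measure, not the precise effective action derived in the paper:
\[
Z \;\sim\; \int \prod_{x} dL_x \;\exp\Big[\lambda_1 \sum_{\langle x,y\rangle}\big(L_x L_y^{*} + L_x^{*} L_y\big)\Big],
\qquad L_x = \mathrm{Tr}\, W_x,
\]
where $W_x$ is the temporal Wilson line (Polyakov loop) at spatial site $x$ and the effective coupling $\lambda_1(\beta, N_\tau)$ is obtained either from its strong-coupling series or, as in this work, directly from 4d Yang-Mills correlators; higher representations, longer-range terms and four-point interactions enter as additional couplings.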
Abstract:
The precision of late Paleocene to middle Eocene nannofossil datums is investigated by means of quantitative methods and correlated to the magnetic polarity stratigraphy, using sequences from the Northwest Pacific, Southeast Atlantic and Italy. It is the rule rather than the exception to find tails of very reduced abundances prior to, or after, a range of consistent and higher abundances. The absolutely first or final occurrence of a species therefore seldom provides a synchronous datum when material from different geographic areas is compared. On the other hand, synchroneity is often confirmed when the initial sharp rise or the final sharp decline in abundance is used as the datum level. The use of datums not employed in the two principal existing nannofossil zonal schemes can substantially improve the biostratigraphic resolution. Two established zonal markers show abundance patterns making them unsuitable as datums: the first occurrences of Ellipsolithus macellus (base NP4, diachronous) and Tribrachiatus nunnii (base NP10 and Paleocene/Eocene boundary, too rare and of too short a range in open-ocean sections). The first occurrence of either Fasciculithus spp. or Sphenolithus spp. is a better marker near the base of NP4. The first occurrence of Discoaster diastypus at 56.6 Ma represents a suitable replacement for recognition of the Paleocene/Eocene boundary.
Abstract:
Recent works (Evelpidou et al., 2012) suggest that the modern tidal notch is disappearing worldwide due to sea level rise over the last century. In order to assess this hypothesis, we measured modern tidal notches at several sites along the Mediterranean coasts. We report observations on tidal notches cut along carbonate coasts at 73 sites in Italy, France, Croatia, Montenegro, Greece, Malta and Spain, plus additional observations carried out outside the Mediterranean. At each site, we measured notch width and depth, and we described the characteristics of the biological rim at the base of the notch. We correlated these parameters with wave energy, tide gauge datasets and rock lithology. Our results suggest that considering 'the development of tidal notches the consequence of midlittoral bioerosion' (as done in Evelpidou et al., 2012) is a simplification that can lead to misleading results, such as stating that notches are disappearing. Important roles in notch formation can also be played by wave action, the rate of karst dissolution, salt weathering, and wetting and drying cycles. Notch formation can, of course, also be augmented and favoured by bioerosion, which can, in particular cases, be the main process of notch formation and development. Our dataset shows that notches are carved by an ensemble of processes rather than by a single one, both today and in the past, and that it is difficult, if not impossible, to disentangle them and establish which one prevails. We therefore show that tidal notches are still forming, challenging the hypothesis that sea level rise has drowned them.
Abstract:
ABSTRACT Short-range atmospheric dispersion of ammonia (NH3) emitted by agricultural sources and its subsequent deposition to soil and vegetation can lead to the degradation of sensitive ecosystems and acidification of the soil. Atmospheric concentrations and dry deposition rates of NH3 are generally highest near the emission source, and so environmental impacts on sensitive ecosystems are often largest at these locations. Under European legislation, several member states use short-range atmospheric dispersion models to estimate the impact of ammonia emissions on nearby designated nature conservation sites. A recent review of assessment methods for short-range impacts of NH3 recommended an intercomparison of the different models to identify whether there are notable differences between the assessment approaches used in different European countries. Based on this recommendation, this thesis compares and evaluates the atmospheric concentration predictions of several models used in these impact assessments for various real and hypothetical scenarios, including Mediterranean meteorological conditions. In addition, various inverse dispersion modelling techniques for the estimation of NH3 emission rates are compared and evaluated, and a simple screening model to calculate the NH3 concentration and dry deposition rate at a sensitive ecosystem located close to an NH3 source was developed. The model intercomparison evaluated four atmospheric dispersion models (ADMS 4.1; AERMOD v07026; OPS-st v3.0.3 and LADD v2010) for a range of hypothetical case studies representing the atmospheric dispersion from several agricultural NH3 source types. The best agreement between the mean annual concentration predictions of the models was found for simple scenarios with area and volume sources. The agreement between the predictions of the models was worst for the scenario representing the dispersion from a mechanically ventilated livestock house, for which ADMS predicted significantly smaller concentrations than the other models. The reason for these differences appears to be the interaction of different plume-rise and boundary layer parameterisations. All four dispersion models were applied to two real case studies of dispersion of NH3 from pig farms in Falster (Denmark) and North Carolina (USA). The mean annual concentration predictions of the models were similar for the USA case study (emissions from naturally ventilated pig houses and a slurry lagoon). The comparison of model predictions with mean annual measured concentrations and the application of established statistical model acceptability criteria led to the conclusion that all four models performed acceptably for this case study. This was not the case for the Danish case study (mechanically ventilated pig house), for which the LADD model did not perform acceptably due to the lack of plume-rise processes in the model. Regulatory dispersion models often perform poorly in low wind speed conditions because the dispersion theory they are based on is not applicable at low wind speeds. For situations with frequent low wind speed periods, current modelling guidance for regulatory assessments is to use a model that can handle these conditions in an acceptable way. This may not always be possible due to insufficient meteorological data, and so the only option may be to carry out the assessment using a more common regulatory model, such as the advanced Gaussian models ADMS or AERMOD. 
In order to assess the suitability of these models for low wind conditions, they were applied to a Mediterranean case study that included many periods of low wind speed. The case study was the dispersion of NH3 emitted by a pig farm in Segovia, Central Spain, for which mean monthly atmospheric NH3 concentration measurements were made at 21 locations surrounding the farm, as well as high-temporal-resolution concentration measurements at one location during a one-week campaign. Two strategies to improve the model performance for low wind speed conditions were tested. These were ‘no zero wind’ (NZW), which replaced calm periods with the minimum threshold wind speed of the model, and ‘accumulated calm emissions’ (ACE), which forced the model to emit the total emissions accumulated during a calm period in the first subsequent non-calm hour. Due to large uncertainties in the model input data (NH3 emission rates, source exit velocities, boundary layer parameters), the case study was also used to assess model prediction uncertainty and how this uncertainty can be taken into account in model evaluations. A dynamic emission model modified for the Mediterranean climate was used to estimate the temporal variability in NH3 emission rates, and a comparison was made between the simulations using the dynamic emissions and a constant emission rate. Prediction uncertainty due to model input uncertainty was 67-98% of the mean value for ADMS and 53-83% of the mean value for AERMOD. Most of this uncertainty was due to source emission rate uncertainty (~50%), followed by uncertainty in the meteorological conditions (~10-20%) and uncertainty in exit velocities (~5-10%). AERMOD predicted higher concentrations than ADMS, and more of the simulations met the model acceptability criteria when compared with the annual mean measured concentrations. However, the ADMS predictions were better correlated spatially with the measurements. The use of dynamic emission estimates improved the performance of ADMS but worsened the performance of AERMOD, and the application of the strategies to improve model performance had similarly contradictory effects. In order to compare different inverse modelling techniques, several models (ADMS, LADD and WindTrax) were applied to a non-agricultural case study of a penguin colony in Antarctica. This case study was used since it gave the opportunity to provide the first experimentally derived emission factor for an Antarctic penguin colony and also had the advantage of negligible background concentrations. There was sufficient agreement between the emission estimates obtained from the three models to define an emission factor for the penguin colony (1.23 g NH3 per breeding pair per day, with an uncertainty range of 0.8-2.54 g NH3 per breeding pair per day). This emission estimate compared favourably to the value of 0.98 g NH3 per breeding pair per day (95% confidence interval: 0.2-2.4 g NH3 per breeding pair per day) obtained using a simple micrometeorological technique (aerodynamic gradient). Further application of the inverse modelling techniques to a range of agricultural case studies also demonstrated good agreement between the emission estimates. It is concluded, therefore, that inverse dispersion modelling is a robust technique for estimating NH3 emission rates. 
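At its simplest, the inverse dispersion approach referred to above amounts to rescaling a forward model run; the following sketch (hypothetical function and variable names, not the interface or workflow of ADMS, LADD or WindTrax) shows the basic idea for one source and several receptors.

```python
# Minimal sketch of inverse dispersion modelling: run the forward model with a
# unit emission rate, then scale that rate by a least-squares fit of modelled
# to measured concentrations. Illustrative only; assumed names and numbers.

def estimate_emission_rate(measured, modelled_unit):
    """measured: concentrations observed at receptors (e.g. ug m-3).
    modelled_unit: concentrations predicted at the same receptors for a
    unit emission rate (e.g. 1 g s-1). Returns the emission rate estimate."""
    num = sum(m * c for m, c in zip(modelled_unit, measured))
    den = sum(m * m for m in modelled_unit)
    return num / den  # least-squares scaling factor, in the unit-rate units

if __name__ == "__main__":
    measured = [4.2, 2.9, 1.1, 0.6]        # example measured NH3 concentrations
    modelled_unit = [3.5, 2.4, 1.0, 0.5]   # forward-model output for a unit rate
    rate = estimate_emission_rate(measured, modelled_unit)
    print(f"estimated emission rate: {rate:.2f} (in units of the unit run)")
```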
Screening models that can provide a quick and approximate estimate of environmental impacts are a useful tool for impact assessments because they can be used to filter out cases that potentially have a minimal environmental impact, allowing resources to be focussed on potentially more damaging cases. The Simple Calculation of Ammonia Impact Limits (SCAIL) model was developed as a screening model to provide an estimate of the mean NH3 concentration and dry deposition rate downwind of an agricultural source. This screening tool, based on the LADD model, was evaluated and calibrated with several experimental datasets and then validated using independent concentration measurements made near sources. Overall, SCAIL performed acceptably according to established statistical criteria. This work has identified situations where the concentration predictions of dispersion models are similar and other situations where the predictions are significantly different. Some models are simply not designed to simulate certain scenarios, since they do not include the relevant processes or the scenarios are beyond the limits of their applicability. An example is the LADD model, which is not applicable to sources with significant exit velocity since the model does not include a plume-rise parameterisation. The testing of a simple scheme combining a momentum-driven plume rise and increased turbulence at the source improved model performance, but more testing is required. Even models that are applicable and include the relevant processes do not always give similar predictions, and the reasons for this need to be investigated. AERMOD, for example, predicts higher concentrations than ADMS for dispersion from mechanically ventilated livestock housing. There is evidence to suggest that ADMS underestimates concentrations in these situations due to a high wind speed threshold. Conversely, there is also evidence that AERMOD overestimates concentrations in these situations due to overestimation at low wind speeds. However, a simple modification to the meteorological pre-processor appears to improve the performance of the model. It is important that these differences between the predictions of the models are taken into account in regulatory assessments. This can be done by applying the most suitable model for the assessment in question or, better still, by using multiple or hybrid models.