954 results for Electromagnetic fields
Abstract:
In recent years, thanks to technological advances, electromagnetic methods for non-invasive shallow subsurface characterization have been increasingly used in many areas of environmental and geoscience applications. Among the geophysical electromagnetic methods, Ground Penetrating Radar (GPR) has received unprecedented attention over the last few decades thanks to its versatility, ease of handling, non-invasive nature, high resolving power, and fast implementation, which allow it to obtain high-resolution electromagnetic parameter information in both space and time. The main focus of this thesis is to perform dielectric site characterization in an efficient and accurate way by studying in depth the physical phenomenon behind a recently developed GPR approach, the so-called early-time technique, which infers the electrical properties of the soil in the proximity of the antennas. In particular, the early-time approach is based on the amplitude analysis of the early-time portion of the GPR waveform using a fixed-offset, ground-coupled antenna configuration in which the separation between the transmitting and receiving antennas is on the order of the dominant pulse wavelength. Amplitude information can be extracted from the early-time signal through complex trace analysis, computing the instantaneous-amplitude attributes over a selected time duration of the early-time signal. Basically, if the acquired GPR signal is taken as the real part of a complex trace, and the imaginary part is the quadrature component obtained by applying a Hilbert transform to the GPR trace, then the amplitude envelope is the absolute value of the resulting complex trace (also known as the instantaneous amplitude).
Analysing laboratory data, numerical simulations and natural field conditions, and summarising the overall results embodied in this thesis, the early-time GPR technique can be suggested as an effective method to estimate physical properties of the soil in a fast and non-invasive way.
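The envelope computation described in this abstract can be sketched in a few lines (a minimal illustration with a synthetic trace, not the thesis code; sampling rate, pulse shape and the 15 ns early-time window are assumed values):

```python
import numpy as np
from scipy.signal import hilbert

# The instantaneous amplitude of a trace is the magnitude of its analytic
# signal, whose imaginary part is the Hilbert transform (quadrature
# component) of the recorded trace.
fs = 8e9                                   # assumed 8 GS/s sampling
t = np.arange(0, 50e-9, 1.0 / fs)          # 50 ns record
# Synthetic GPR-like pulse: 500 MHz carrier under a Gaussian envelope
trace = np.exp(-((t - 10e-9) / 3e-9) ** 2) * np.cos(2 * np.pi * 500e6 * t)

analytic = hilbert(trace)                  # trace + i * H(trace)
envelope = np.abs(analytic)                # instantaneous amplitude

# An early-time attribute could then be, e.g., the mean envelope over a
# selected early-time window (the first 15 ns here, an assumed choice):
early = envelope[t < 15e-9].mean()
print(f"mean early-time amplitude: {early:.3f}")
```

The peak of `envelope` recovers the Gaussian pulse envelope rather than the oscillating carrier, which is why the attribute is computed on the envelope and not on the raw trace.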
Abstract:
Over the past decades, several countries have acquired large amounts of area-covering Airborne Electromagnetic (AEM) data. The contribution of airborne geophysics to both groundwater resource mapping and management has dramatically increased, proving how appropriate those systems are for large-scale and efficient groundwater surveying. We start with the processing and inversion of two AEM datasets acquired with two different systems over the Spiritwood Valley Aquifer area, Manitoba, Canada: the AeroTEM III dataset (commissioned by the Geological Survey of Canada in 2010) and the "Full waveform VTEM" dataset, collected and tested over the same survey area during the fall of 2011. We demonstrate that, in the presence of multiple datasets, both AEM and ground data, proper processing, inversion, post-processing, data integration and data calibration constitute the approach capable of providing reliable and consistent resistivity models. Our approach can be of interest to many end users, ranging from geological surveys and universities to private companies, which often own large geophysical databases to be interpreted for geological and/or hydrogeological purposes. In this study we investigate in depth the role of the integration of several complementary types of geophysical data collected over the same survey area. We show that data integration can improve inversions, reduce ambiguity and deliver high-resolution results. We further use the final, most reliable output resistivity models as a solid basis for building a knowledge-driven 3D geological voxel-based model. A voxel approach allows a quantitative understanding of the hydrogeological setting of the area, and it can be further used to estimate aquifer volumes (i.e. the potential amount of groundwater resources) as well as for hydrogeological flow model prediction.
In addition, we investigated the impact of an AEM dataset on hydrogeological mapping and 3D hydrogeological modeling, comparing it to having only a ground-based TEM dataset and/or only borehole data.
Abstract:
The study defines a new farm classification and identifies arable land management practices. These aspects, together with several indicators, are taken into account to estimate the sustainability level of farms under organic and conventional regimes. The data source is the Italian Farm Accountancy Data Network (RICA) for the years 2007-2011, which samples structural and economic information. Environmental data have been added to better describe the farm context. The new farm classification describes holdings by general information and farm structure. The general information comprises the adopted regime and the farm location in terms of administrative region, slope and phyto-climatic zone. The farm structures describe the presence of the main productive processes and land covers recorded in the FADN database. The farms, grouped by homogeneous farm structure or farm typology, are evaluated in terms of sustainability. The farm model MAD has been used to estimate a list of indicators, which mainly describe the environmental and economic areas of sustainability. Finally, arable lands are considered in order to identify arable land managements and crop rotations. Each arable land has been classified by crop pattern, and crop rotation management has then been analysed by spatial and temporal approaches. The analysis reports a high variability within regimes. Farm structure influences indicator levels more than the regime does, and it is not always possible to compare the two regimes. However, some differences between organic and conventional agriculture have been found: organic farm structures show different frequencies and geographical locations than conventional ones, and different connections among arable lands and farm structures have also been identified.
Abstract:
Among all possible realizations of quark and antiquark assemblies, the nucleon (the proton and the neutron) is the most stable of all hadrons and has consequently been the subject of intensive studies. Its mass, shape, radius and more complex representations of its internal structure have been measured for several decades using different probes. The proton (spin 1/2) is described by the electric GE and magnetic GM form factors, which characterise its internal structure. The simplest way to measure the proton form factors consists in measuring the angular distribution of electron-proton elastic scattering, accessing the so-called space-like region where q^2 < 0. Using the crossed channel antiproton proton <--> e+e-, one accesses another kinematical region, the so-called time-like region where q^2 > 0. However, due to the antiproton proton <--> e+e- threshold q^2_th, only the kinematical domain q^2 > q^2_th > 0 is available. To access the unphysical region, one may use the antiproton proton --> pi0 e+e- reaction, where the pi0 takes away a part of the system energy, allowing q^2 to be varied between q^2_th and almost 0. This thesis aims to show the feasibility of such measurements with the PANDA detector, which will be installed on the new high-intensity antiproton ring at the FAIR facility in Darmstadt. To describe the antiproton proton --> pi0 e+e- reaction, a Lagrangian-based approach is developed. The 5-fold differential cross section is determined and related to linear combinations of hadronic tensors. Under the assumption of one-nucleon exchange, the hadronic tensors are expressed in terms of the two complex proton electromagnetic form factors. An extraction method is developed which provides access to the proton electromagnetic form factor ratio R = |GE|/|GM| and, for the first time in an unpolarized experiment, to the cosine of the phase difference. Such measurements have never been performed in the unphysical region up to now.
Extended simulations were performed to show how the ratio R and the cosine can be extracted from the positron angular distribution. Furthermore, a model is developed for the antiproton proton --> pi0 pi+ pi- background reaction, considered the most dangerous one. The background-to-signal cross section ratio was estimated under different combinations of cuts on the particle identification information from the different detectors and on the kinematic fits. The background contribution can be reduced to the percent level or even less, with a corresponding signal efficiency ranging from a few percent to 30%. The precision of the determination of the ratio R and of the cosine is estimated from the expected counting rates via the Monte Carlo method. A part of this thesis is also dedicated to more technical work: the study of the prototype of the electromagnetic calorimeter and the determination of its resolution.
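The idea of extracting a form factor ratio from an angular distribution can be illustrated with a toy fit (a hedged sketch with invented kinematics and a simplified one-photon-exchange shape for the time-like annihilation channel, not the PANDA analysis code; `tau`, `R_true` and the normalization are assumed numbers):

```python
import numpy as np
from scipy.optimize import curve_fit

# Simplified time-like angular shape vs. cos(theta) of the lepton:
#   dN/dcos(theta) ∝ (1 + cos^2 theta)|GM|^2 + (1/tau) sin^2 theta |GE|^2
# so the shape alone constrains R = |GE|/|GM|.
tau = 1.5                      # assumed q^2 / (4 m_p^2)
R_true = 0.8                   # assumed |GE|/|GM|

def shape(cos_t, R, norm):
    sin2 = 1.0 - cos_t ** 2
    return norm * ((1.0 + cos_t ** 2) + (R ** 2 / tau) * sin2)

rng = np.random.default_rng(0)
cos_t = np.linspace(-0.9, 0.9, 19)
counts = rng.poisson(shape(cos_t, R_true, 1e4))   # counting statistics

popt, pcov = curve_fit(shape, cos_t, counts, p0=[1.0, 1e4],
                       sigma=np.sqrt(counts))
print(f"fitted R = {popt[0]:.2f} +/- {np.sqrt(pcov[0, 0]):.2f}")
```

The fit recovers R from the relative weight of the `sin^2` term, which is the same logic the thesis applies, with full radiative and detector effects, to the simulated positron angular distributions.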
Abstract:
This work deals with the optical resonances of metallic nanoparticles located a few nanometers away from a metallic interface. The electromagnetic interaction in this "sphere-on-plane" geometry gives rise to interesting optical phenomena. It creates a special electromagnetic eigenmode, also called the gap mode, which is essentially localized in the nanogap between sphere and surface. In the quasistatic approximation, the resonance position depends only on the material, the environment, the film-sphere distance and the sphere radius itself. Theoretical calculations predict a large enhancement of the electromagnetic field in this region under resonance conditions. To investigate the optical properties of these systems, an efficient plasmon-mediated dark-field mode for confocal scanning microscopy through thin metal films was developed, which exploits the enhancement by surface plasmons in both the excitation and the emission process. This guarantees high-quality dark-field images of the sphere-on-plane systems through the metal films and facilitates the spectroscopy of individual resonators. The optical investigations are complemented by a combination of atomic force and scanning electron microscopy, so that the shape and size of the investigated resonators can be determined in all three dimensions and correlated with their optical resonances. The performance of the newly developed mode is demonstrated for a reference system of polystyrene spheres on a gold film, where particles of identical size indeed show the expected identical resonance. For an all-gold sphere-on-plane system, in which the gap is created by a self-assembled monolayer of 2-aminoethanethiol, the resonances of gold particles produced by reduction of chloroauric acid are compared with those of ideal gold spheres.
The latter are obtained from the conventional gold particles by additional irradiation with a picosecond Nd:YAG laser. Among the non-irradiated particles, with their multitude of different shapes, only one third of the investigated resonators show the behavior predicted by theory, without any correlation with their shape or size. For the irradiated gold spheres a marked improvement occurs: all resonators agree with the theoretical calculations. A change of the surface roughness of the film, on the other hand, shows no influence on the resonances. Although the combination of gold spheres and very smooth metal films provides a very well-defined sample geometry, the experimentally determined linewidths of the resonances are still considerably larger than the calculated ones. The scatter of the data, even for these samples, points to further factors influencing the gap modes, such as the exact shape of the gap. The high field enhancements associated with the nanogaps are investigated by placing a dye-loaded polyphenylene dendrimer in the gap of an all-silver sphere-on-plane system. The dendrimer shell consists solely of phenyl-phenyl bonds and, through the resulting rigidity of the molecule, guarantees outstanding shape stability without being optically active itself. Its 16 dithiolane end groups at the same time provide the necessary affinity to the silver. In this way the dye located in the core can be placed in the gap between the metal structures with a precision of a few nanometers. The chosen perylene dye, in turn, is characterized by high photostability and fluorescence quantum yield. For all investigated particles a strong fluorescence signal is found, at least 1000 times stronger than that of the dye-coated metal film.
The profile of the fluorescence excitation spectrum varies between the particles and, compared with the free dye, shows an additional emission at higher frequencies, referred to in the literature as "hot luminescence". When investigating the scattering behavior of the resonators, two different types can again be distinguished: first, cases that, apart from the described line broadening, agree with an ideal sphere-on-plane geometry, and then others that deviate strongly from it. The changes in the fluorescence excitation spectra of the bound dye point to physical mechanisms that play a role at these small metal/dye distances and that go beyond a simple wavelength-dependent enhancement.
Abstract:
Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the energy content of the Universe is still unknown. A potential solution to the "missing mass problem" is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with a large target mass and a low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach its goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, electromagnetic and nuclear. From the analysis of the simulations, the background level has been found fully acceptable for the purposes of the experiment: about 1 background event in a 2 tonne-year exposure. Using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross section limit has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked with the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two, completely acceptable considering the intrinsic differences between the two statistical methods. The thesis thus proves that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about two orders of magnitude with respect to current experiments.
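The logic of turning an (almost) background-free exposure into a cross-section limit can be sketched as follows (a hedged illustration, not the thesis analysis: for zero observed events the maximum-gap criterion reduces to the simple Poisson condition, and the reference cross section and event yield below are invented numbers used only to show the scaling):

```python
import numpy as np
from scipy.optimize import brentq

# With zero observed events, the 90% CL upper limit mu on the expected
# signal solves 1 - exp(-mu) = 0.9, i.e. mu = ln(10) ≈ 2.30 events.
def zero_event_limit(cl=0.9):
    return brentq(lambda mu: 1.0 - np.exp(-mu) - cl, 1e-6, 20.0)

mu_limit = zero_event_limit(0.9)

# Translate the event limit into a cross-section limit, assuming
# (hypothetically) that a reference cross section of 1e-45 cm^2 would
# yield 100 signal events in the 2 tonne-year exposure.
sigma_ref, n_ref = 1e-45, 100.0
sigma_limit = sigma_ref * mu_limit / n_ref
print(f"mu_limit = {mu_limit:.2f} events, sigma_limit = {sigma_limit:.2e} cm^2")
```

The full Maximum Gap and Likelihood Ratio constructions used in the thesis additionally exploit the event distribution and the residual background expectation, which is why they reach a slightly stronger limit than this naive zero-event scaling.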
Abstract:
Coastal sand dunes are a valuable resource, first of all as a defense against storm waves and saltwater intrusion; moreover, these morphological elements constitute a unique transitional ecosystem between the sea and the land. Research on dune systems has been a strong branch of coastal science since the last century. Nowadays this branch has assumed even more importance for two reasons: on one side, the advent of brand new technologies, especially related to remote sensing, has expanded the possibilities available to researchers; on the other side, intense urbanization has strongly limited the dunes' possibilities for development and fragmented what remained from the last century. This is particularly true in the Ravenna area, where industrialization, combined with the tourist economy and intense subsidence, has left only a few residual dune ridges still active. In this work, three different foredune ridges along the Ravenna coast have been studied with laser scanning technology. The research was not limited to analysing volume or spatial differences, but also tried to find new ways and new features to monitor this environment. Moreover, the author planned a series of tests to validate data from the Terrestrial Laser Scanner (TLS), with the additional aim of finalizing a methodology to test 3D survey accuracy. The data acquired by TLS were then applied, on the one hand, to test some brand new applications, such as the Digital Shoreline Analysis System (DSAS) and Computational Fluid Dynamics (CFD), to prove their efficacy in this field; on the other hand, the author used TLS data to look for correlations with meteorological indexes (forcing factors) linked to sea and wind (Fryberger's method), applying statistical tools such as Principal Component Analysis (PCA).
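The statistical step of relating dune change to forcing factors via PCA can be sketched with plain numpy (a hypothetical illustration with synthetic indices, not the thesis data or any specific toolbox; the variable names and correlations are invented):

```python
import numpy as np

# Synthetic monitoring series: two forcing indices and a dune response.
rng = np.random.default_rng(2)
n = 48                                     # e.g. monthly surveys
wind_energy = rng.normal(0, 1, n)          # Fryberger-style drift potential
storm_surge = 0.6 * wind_energy + rng.normal(0, 0.8, n)
dvol = -0.5 * storm_surge + rng.normal(0, 0.5, n)  # dune volume change

# PCA = eigen-decomposition of the correlation matrix of the
# standardized variables.
X = np.column_stack([wind_energy, storm_surge, dvol])
X = (X - X.mean(0)) / X.std(0)
eigval, eigvec = np.linalg.eigh(np.cov(X.T))       # ascending eigenvalues
explained = eigval[::-1] / eigval.sum()            # descending order
print("variance explained by PC1..PC3:", np.round(explained, 2))
```

A dominant first component loading jointly on the forcing indices and on the volume change is the kind of signal such an analysis looks for.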
Abstract:
One of the open questions of present-day physics is the understanding of systems out of equilibrium. In contrast to equilibrium physics, no formalism is currently known in this area that allows a systematic description of the different systems. To deepen the understanding of such systems, this work studies two different systems that show strongly nonlinear behavior under an external field: on the one hand, the behavior of particles under the influence of an externally applied force, and on the other hand, the behavior of a system near the critical point under shear. The model system in the first part of the work is a binary Yukawa mixture, which exhibits a glass transition at low temperatures. This leads to a strongly increasing relaxation time of the system, so that nonlinear behavior is observed relatively quickly even for small forces. Depending on the applied constant force, three regimes with strongly different particle behavior are identified in this work. In the second part of the work, the Ising model under shear is considered. Near the critical point, the fluctuations in this system are influenced by the applied shear field. As a consequence, the system becomes strongly anisotropic and one finds two different correlation lengths that diverge with different exponents. The usual isotropic finite-size scaling formalism can therefore no longer be applied to this system. This work shows how it can be generalized to the anisotropic case and how the critical points, as well as the associated critical exponents, can then be computed.
Abstract:
This thesis deals with three different physical models, each involving a random component linked to a cubic lattice. First, a model is studied which is used in numerical calculations of Quantum Chromodynamics. In these calculations, random gauge fields are distributed on the bonds of the lattice. The formulation of the model is fitted into the mathematical framework of ergodic operator families. We prove that, for small coupling constants, the ergodicity of the underlying probability measure is indeed ensured and that the integrated density of states of the Wilson-Dirac operator exists. The physical situations treated in the next two chapters are more similar to one another. In both cases the principal idea is to study a fermion system in a cubic crystal with impurities, modeled by a random potential located at the lattice sites. In the second model we apply the Hartree-Fock approximation to such a system. For the case of reduced Hartree-Fock theory at positive temperature and fixed chemical potential, we consider the limit of an infinite system and show the existence and uniqueness of minimizers of the Hartree-Fock functional. In the third model we formulate the fermion system algebraically via C*-algebras. The question posed here is how to calculate the heat production of the system under the influence of an external electromagnetic field. We show that the heat production corresponds exactly to what is empirically predicted by Joule's law in the regime of linear response.
Abstract:
The thesis investigates the nucleon structure probed by the electromagnetic interaction. Among the most basic observables reflecting the electromagnetic structure of the nucleon are the form factors, which have been studied by means of elastic electron-proton scattering with ever-increasing precision for several decades. In the timelike region, corresponding to proton-antiproton annihilation into an electron-positron pair, the present experimental information is much less accurate. However, high-precision form factor measurements are planned for the near future. About 50 years after the first pioneering measurements of the electromagnetic form factors, polarization experiments stirred up the field, since their results were found to be in striking contradiction to the findings of previous form factor investigations based on unpolarized measurements. Triggered by the conflicting results, a whole new field emerged, studying the influence of two-photon exchange corrections to elastic electron-proton scattering, which appeared as the most likely explanation of the discrepancy. The main part of this thesis deals with theoretical studies of two-photon exchange, investigated particularly with regard to form factor measurements in the spacelike as well as in the timelike region. An extraction of the two-photon amplitudes in the spacelike region through a combined analysis of the results of unpolarized cross section measurements and polarization experiments is presented. Furthermore, predictions of the two-photon exchange effects on the e+p/e-p cross section ratio are given for several new experiments which are currently ongoing. The two-photon exchange corrections are also investigated in the timelike region, in the process pbar{p} -> e+e-, by means of two factorization approaches. These corrections are found to be smaller than those obtained for the spacelike scattering process.
The influence of the two-photon exchange corrections on cross section measurements, as well as on asymmetries that allow direct access to the two-photon exchange contribution, is discussed. Furthermore, one of the factorization approaches is applied to investigate two-boson exchange effects in parity-violating electron-proton scattering. In the last part of the underlying work, the process pbar{p} -> pi0 e+e- is analyzed with the aim of determining the form factors in the so-called unphysical timelike region below the two-nucleon production threshold. For this purpose, a phenomenological model is used which provides a good description of the available data on the real photoproduction process pbar{p} -> pi0 gamma.
Abstract:
A climatological field is a mean gridded field that represents the monthly or seasonal behaviour of an ocean parameter. This instrument makes it possible to understand the physical conditions and processes of the ocean waters and their impact on the world climate. To construct a climatological field, it is necessary to perform a climatological analysis on a historical dataset. In this dissertation, we have constructed the temperature and salinity fields of the Mediterranean Sea using the SeaDataNet 2 dataset. The dataset contains about 140000 CTD, bottle, XBT and MBT profiles, covering the period from 1900 to 2013. The temperature and salinity climatological fields are produced with the DIVA software, which uses a Variational Inverse Method and a Finite Element numerical technique to interpolate data on a regular grid. Our results are also compared with a previous version of the climatological fields, and the goodness of our climatologies is assessed according to the criteria suggested by Murphy (1993). Finally, the temperature and salinity seasonal cycle of the Mediterranean Sea is described.
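The basic idea of a monthly climatological field, averaging scattered historical observations onto a regular grid month by month, can be fixed with a minimal sketch (synthetic data and simple bin-averaging only; DIVA's variational analysis with finite elements is far more elaborate than this):

```python
import numpy as np

# Hypothetical scattered surface observations (lon, lat, month, value).
rng = np.random.default_rng(1)
n = 5000
lon = rng.uniform(-6, 36, n)      # Mediterranean-like longitudes
lat = rng.uniform(30, 46, n)
month = rng.integers(1, 13, n)
sst = 15 + 8 * np.sin((month - 4) * np.pi / 6) + rng.normal(0, 0.5, n)

lon_edges = np.arange(-6, 36.5, 0.5)   # regular half-degree grid
lat_edges = np.arange(30, 46.5, 0.5)

def monthly_climatology(m):
    """Mean field on the regular grid for calendar month m."""
    sel = month == m
    s, _, _ = np.histogram2d(lon[sel], lat[sel],
                             bins=[lon_edges, lat_edges], weights=sst[sel])
    c, _, _ = np.histogram2d(lon[sel], lat[sel],
                             bins=[lon_edges, lat_edges])
    with np.errstate(invalid="ignore"):
        return s / c                   # NaN where a cell has no data

aug = monthly_climatology(8)
print("August mean:", round(float(np.nanmean(aug)), 2))
```

The empty (NaN) cells are precisely where an interpolating analysis such as DIVA improves on plain binning, by spreading information to data-poor cells under smoothness constraints.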
Abstract:
This work presents an extensive study of fundamental properties of the calcite CaCO3(10.4) surface and related mineral surfaces, made possible not only by the use of non-contact atomic force microscopy, but mainly by the measurement of force fields. The absolute surface orientation, as well as the underlying atomic-scale process, could be successfully identified for the calcite (10.4) surface. The adsorption of chiral molecules on calcite is relevant in the field of biomineralization, which makes an understanding of the surface symmetry indispensable; the measurement of the surface force field at the atomic level is a central aspect of this. Such a force map not only illuminates the interaction of the surface with molecules, which is important for biomineralization, but also offers the possibility of identifying atomic-scale processes and thus surface properties. The introduction of a highly flexible measurement protocol ensures the reliable measurement of the surface force field, which is not commercially available. The conversion of the raw Δf data into the vertical force Fz is, however, not a trivial task, especially when smoothing of the data is considered. This work describes in detail how Fz can be computed correctly for the experimental conditions of this work. It further describes how the lateral forces Fy and the dissipation Γ were obtained, in order to exploit the full potential of this measurement method. To understand atomic-scale processes on surfaces, the short-range chemical forces Fz,SR are of utmost importance. Long-range contributions must be fitted to Fz and subtracted from it. This is, however, an error-prone task, which was mastered in this work by finding three independent criteria that determine the onset zcut of Fz,SR, a quantity of central importance for this task.
A detailed error analysis shows that using the deviation of the lateral forces from one another as the criterion yields trustworthy Fz,SR. This is the first time that a study has provided a criterion for the determination of zcut, completed with a detailed error analysis. With the knowledge of Fz,SR and Fy it was possible to identify one of the fundamental properties of the CaCO3(10.4) surface: the absolute surface orientation. A strong tilt of the imaged objects
Abstract:
Radio relics are diffuse synchrotron sources generally located in the peripheries of merging galaxy clusters. According to the current leading scenario, relics trace gigantic cosmological shock waves, crossing the intra-cluster medium, where particle acceleration occurs. The relic/shock connection is supported by several observational facts, including the spatial coincidence between relics and shocks found in the X-rays. Under the assumption that particles are accelerated at the shock front and are subsequently deposited and then age downstream of the shock, Markevitch et al. (2005) proposed a method to constrain the magnetic field strength in radio relics. Measuring the thickness of radio relics at different frequencies makes it possible to derive combined constraints on the velocity of the downstream flow and on the magnetic field, which in turn determines particle aging. We elaborate on this idea to infer first constraints on magnetic fields in cluster outskirts. We consider three models of particle aging and develop a geometric model to take into account the contribution to the relic transverse size due to the projection of the shock surface on the plane of the sky. We selected three well-studied radio relics in the clusters A 521, CIZA J2242.8+5301 and 1RXS J0603.3+4214. These relics have been chosen primarily because they are seen almost edge-on and because the Mach number of the shock associated with them is measured by X-ray observations, thus allowing us to break the degeneracy between magnetic field and downstream velocity in the method. For the first two clusters, our method is consistent with a pure radiative aging model, allowing us to derive constraints on the relics' magnetic field strength.
In the case of 1RXS J0603.3+4214 we find that particle lifetimes are consistent with a pure radiative aging model under some conditions; however, we also collect evidence for downstream particle re-acceleration in the relic's W-region and for a magnetic field decaying downstream in its E-region. Our estimates of the magnetic field strength in the relics in A 521 and CIZA J2242.8+5301 provide unique information on the field properties in cluster outskirts. The constraints derived for these relics, together with the lower limits on the magnetic field that we derived from the lack of inverse Compton X-ray emission from the sources, have been combined with the constraints from Faraday rotation studies of the Coma cluster. The overall results suggest that the spatial profile of the magnetic field energy density is broader than that of the thermal gas, implying that the ε_th/ε_B ratio decreases with cluster radius. Alternatively, radio relics could trace dynamically active regions where the magnetic field strength is biased high with respect to the average value in the cluster volume.
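The aging argument behind the method can be made concrete with a back-of-the-envelope sketch (not the thesis code; the downstream speed and the cluster parameters below are assumed, round numbers): the downstream width of a relic at frequency ν is roughly l = v_d · t_age(B, ν), with the standard synchrotron plus inverse-Compton lifetime.

```python
import numpy as np

def t_age_yr(B, nu, z):
    """Radiative lifetime in years for B in microgauss and nu in MHz,
    using the standard synchrotron + inverse-Compton loss formula."""
    B_cmb = 3.25 * (1 + z) ** 2          # equivalent CMB field, microgauss
    return 3.2e10 * np.sqrt(B) / (B ** 2 + B_cmb ** 2) / np.sqrt((1 + z) * nu)

z, nu = 0.19, 1400.0                     # CIZA J2242.8+5301-like numbers
v_d = 1000e5                             # assumed downstream speed, cm/s
for B in (1.0, 5.0, 10.0):
    t = t_age_yr(B, nu, z)               # yr
    width_kpc = v_d * t * 3.15e7 / 3.086e21
    print(f"B={B:4.1f} uG  t={t / 1e6:5.1f} Myr  width~{width_kpc:5.1f} kpc")
```

Because the lifetime is non-monotonic in B (inverse-Compton losses dominate at low B, synchrotron losses at high B), a measured relic width at known Mach number generally yields two allowed field values, which is why the thesis combines several frequencies and independent limits.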
Abstract:
The spectral shape of the X-ray background requires the existence of a large number of moderately obscured AGN, in addition to heavily obscured AGN whose column density exceeds the Compton limit (Nh>10^24 cm^(-2)). Because of their nature, these objects are difficult to observe, so a multi-band approach must be adopted in order to detect them. In this thesis work we studied 29 sources observed in the CDF-S and 10 in the CDF-N at 0.07
Abstract:
Aggregate programming is a paradigm that supports the programming of systems of devices, adaptive and possibly large-scale, as a whole -- as aggregates. The prevailing approach in this context is based on the field calculus, a formal calculus that allows aggregate programs to be defined through the functional composition of computational fields, laying the groundwork for the specification of robust self-organization patterns. Aggregate programming is currently supported, more or less partially and mainly for simulation, by dedicated DSLs (cf. Protelis), but no frameworks for mainstream languages aimed at application development exist. Yet such support would be desirable to reduce adoption time and effort, to simplify access to the paradigm when building real systems, and to foster research in the field itself. The present work consists in the development, starting from a prototype of the operational semantics of the field calculus, of a framework for aggregate programming in Scala. The choice of Scala as the host language stems from technical and practical reasons. Scala is a modern language, interoperable with Java, that integrates the object-oriented and functional paradigms well, has an expressive type system, and provides advanced features for the development of libraries and DSLs. Moreover, the possibility of relying, in Scala, on a solid actor framework such as Akka is another driving factor, given the need to bridge the abstraction gap inherent in the development of a distributed middleware. This thesis presents a framework that achieves a threefold goal: the construction of a Scala library that implements the semantics of the field calculus correctly and completely, the realization of an Akka-based distributed platform on which to develop applications, and the exposure of a general and flexible API able to support different scenarios.
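The core notion of "functional composition of computational fields" can be conveyed with a toy sketch (written in Python for brevity rather than in the thesis's Scala, and not reflecting the framework's actual API: here a field is simply a map from device to value, and programs compose fields pointwise):

```python
# Toy model: a computational field assigns a value to every device;
# aggregate programs are built by composing fields with pure functions.
devices = [0, 1, 2, 3]

def const(v):
    """Constant field: the same value at every device."""
    return {d: v for d in devices}

def fmap(f, *fields):
    """Pointwise (functional) composition of fields."""
    return {d: f(*(fld[d] for fld in fields)) for d in devices}

# A sensed field (hypothetical temperatures) combined with a threshold
# field yields a new Boolean field over the same devices.
temperature = {0: 19.0, 1: 22.5, 2: 24.1, 3: 18.2}
too_hot = fmap(lambda t, thr: t > thr, temperature, const(21.0))
print(too_hot)
```

The field calculus adds to this picture constructs for state over time and neighbor interaction, which is what makes robust self-organization patterns expressible; the pointwise composition above is only its simplest ingredient.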