988 results for Cosmic physics
Abstract:
A polar stratospheric cloud submodel has been developed and incorporated in a general circulation model including atmospheric chemistry (ECHAM5/MESSy). The formation and sedimentation of polar stratospheric cloud (PSC) particles can thus be simulated, as well as the heterogeneous chemical reactions that take place on the PSC particles. For solid PSC particle sedimentation, the need for a tailor-made algorithm has been elucidated. A sedimentation scheme based on first-order approximations of vertical mixing ratio profiles has been developed. It produces relatively little numerical diffusion and copes well with divergent or convergent sedimentation velocity fields. For the determination of solid PSC particle sizes, an efficient algorithm has been adapted. It assumes a monodisperse radius distribution and thermodynamic equilibrium between the gas phase and the solid particle phase. This scheme, though relatively simple, is shown to produce particle number densities and radii within the observed range. The combined effects of the representations of sedimentation and of solid PSC particles on the vertical redistribution of H2O and HNO3 are investigated in a series of tests. The formation of solid PSC particles, especially those consisting of nitric acid trihydrate, has been discussed extensively in recent years. Three particle formation schemes in accordance with the most widely used approaches have been identified and implemented. For the evaluation of PSC occurrence, a new data set with unprecedented spatial and temporal coverage was available. A quantitative method for the comparison of simulation results and observations is developed and applied. It reveals that the relative PSC sighting frequency can be reproduced well with the PSC submodel, whereas the detailed modelling of individual PSC events is beyond the scope of coarse global-scale models. In addition to the development and evaluation of new PSC submodel components, parts of existing simulation programs have been improved, e.g. a method for the assimilation of meteorological analysis data in the general circulation model, the liquid PSC particle composition scheme, and the calculation of heterogeneous reaction rate coefficients. The interplay of these model components is demonstrated in a simulation of stratospheric chemistry with the coupled general circulation model. Tests against recent satellite data show that the model successfully reproduces the Antarctic ozone hole.
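As an aside not taken from the thesis: a minimal sketch of a flux-form, first-order upwind sedimentation step (the generic textbook approach, not the tailor-made scheme developed here; the function name and toy profile are hypothetical):

```python
import numpy as np

def sediment_upwind(q, w, dz, dt):
    """One explicit first-order upwind step for downward sedimentation of
    a mixing-ratio profile q (index 0 = top).  w is the per-level settling
    speed, so divergent or convergent velocity fields are allowed."""
    flux = w * q                      # downward flux leaving each level
    dq = -flux * dt / dz              # loss from each level
    dq[1:] += flux[:-1] * dt / dz     # gain from the level above
    return q + dq

# toy HNO3-like mixing-ratio peak on 40 vertical levels
q = np.exp(-0.5 * ((np.arange(40) - 10) / 3.0) ** 2)
w = np.linspace(0.8, 1.2, 40)         # height-dependent fall speed (m/s)
for _ in range(20):
    q = sediment_upwind(q, w, dz=100.0, dt=50.0)  # CFL: w*dt/dz < 1
```

A scheme of this simple form is quite diffusive; reducing that diffusion while handling varying fall speeds is precisely what motivates the tailor-made algorithm described above.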
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models that are later simulated in 2D. In this class of models, the flow of charged particles, namely negatively charged electrons and so-called holes (quasi-particles of positive charge), as well as their energy distributions, is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. The continuity of the discrete normal fluxes is the most important property of this discretization from the user's perspective. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators. At that stage, a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators completely fail to reduce the error in an iterative mesh refinement process.
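As an aside not taken from the thesis: the estimate-mark-refine loop driven by a posteriori indicators typically uses bulk (Doerfler) marking; a minimal sketch under that assumption, with hypothetical names and stand-in indicators:

```python
import numpy as np

def doerfler_mark(eta, theta=0.5):
    """Mark the smallest set of cells whose squared local error
    indicators account for the fraction theta of the total error
    (bulk/Doerfler marking, a standard choice in adaptive FEM)."""
    order = np.argsort(eta ** 2)[::-1]          # largest indicators first
    cumulative = np.cumsum(eta[order] ** 2)
    n_marked = np.searchsorted(cumulative, theta * cumulative[-1]) + 1
    return order[:n_marked]

# hypothetical cycle: solve -> estimate -> mark -> refine
eta = np.random.rand(200)                       # stand-in local indicators
marked = doerfler_mark(eta)
print(f"refining {marked.size} of {eta.size} cells")
```

In the dual weighted residual method mentioned above, the indicators would additionally be weighted by a discrete dual solution, so that refinement targets the error in the functional output rather than in a global norm.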
Abstract:
This Thesis focuses on the X-ray study of the inner regions of Active Galactic Nuclei (AGN), in particular on the formation of high-velocity winds by the accretion disk itself. Constraining the physical parameters of AGN winds is of paramount importance both for understanding the physics of the accretion/ejection flow onto supermassive black holes (SMBHs), and for quantifying the amount of feedback between the SMBH and its environment across cosmic time. The sources selected for the present study are BAL, mini-BAL, and NAL QSOs, known to host high-velocity winds associated with the AGN nuclear regions. Observationally, a three-fold strategy has been adopted: - substantial samples of distant sources have been analyzed through spectral, photometric, and statistical techniques, to gain insights into their mean properties as a population; - a moderately sized sample of bright sources has been studied through detailed X-ray spectral analysis, to give a first flavor of the general spectral properties of these sources, also from a temporally resolved point of view; - the best nearby candidate has been thoroughly studied using the most sophisticated spectral analysis techniques applied to a large dataset with a high S/N ratio, to understand the details of the physics of its accretion/ejection flow. There are three main channels through which this Thesis has been developed: - [Archival Studies]: the XMM-Newton public archival data have been extensively used to analyze both a large sample of distant BAL QSOs, and several individual bright sources, either BAL, mini-BAL, or NAL QSOs. - [New Observational Campaign]: I proposed and was awarded new X-ray pointings of the mini-BAL QSOs PG 1126-041 and PG 1351+640 during the XMM-Newton AO-7 and AO-8. These produced the largest X-ray observational campaign ever performed on a mini-BAL QSO (PG 1126-041), including the longest exposure so far. Thanks to this exceptional dataset, a wealth of information has been obtained on both the intrinsic continuum and the complex reprocessing media present in the inner regions of this AGN. Furthermore, the field of temporally resolved X-ray spectral analysis has finally been opened for mini-BAL QSOs. - [Theoretical Studies]: some issues concerning the connection between theories and observations of AGN accretion disk winds have been investigated, through theoretical arguments and studies of synthetic absorption line profiles.
Abstract:
The atmospheric muon charge ratio, defined as the number of positive over negative charged muons, is an interesting quantity for the study of high-energy hadronic interactions in the atmosphere and of the nature of the primary cosmic rays. The measurement of the charge ratio in the TeV muon energy range allows one to study hadronic interactions in kinematic regions not yet explored at accelerators. The OPERA experiment is a hybrid electronic detector/emulsion apparatus, located in the underground Gran Sasso Laboratory at an average depth of 3800 meters water equivalent (m.w.e.). OPERA is the first large magnetized detector that can measure the muon charge ratio at the LNGS depth, with a wide acceptance for cosmic ray muons coming from above. In this thesis, the muon charge ratio is measured using the spectrometers of the OPERA detector in the highest energy region. The charge ratio was computed separately for single and for multiple muon events, in order to select primary cosmic ray samples differing in energy and composition. The measurement as a function of the surface muon energy is used to infer parameters characterizing particle production in the atmosphere, which will be used to constrain Monte Carlo predictions. Finally, the experimental results are interpreted in terms of cosmic ray and particle physics models.
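As an illustration of the quantity itself (with illustrative counts, not OPERA results): the charge ratio and its statistical error follow directly from the definition, assuming independent Poisson-distributed counts:

```python
import math

def charge_ratio(n_plus, n_minus):
    """Muon charge ratio R = N+/N- with the statistical uncertainty
    from independent Poisson-distributed counts (no systematics)."""
    r = n_plus / n_minus
    sigma = r * math.sqrt(1.0 / n_plus + 1.0 / n_minus)
    return r, sigma

r, dr = charge_ratio(650_000, 470_000)   # hypothetical event counts
print(f"R = {r:.3f} +/- {dr:.3f}")
```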
Abstract:
The subject of this thesis lies in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data, and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of the assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. Electrical impedance tomography (EIT) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. A promising result is that one can qualitatively reconstruct the conductivity inside the cross-section of a human chest. Even though the human volunteer is neither two-dimensional nor circular, such reconstructions can be useful in medical applications: monitoring for lung problems such as accumulating fluid or a collapsed lung, and noninvasive monitoring of heart function and blood flow.
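As an aside not from the thesis: the simplest linearised reconstruction step is a Tikhonov-regularised least-squares solve; the sketch below shows that generic update (the integral-equation method of the first approach is not reproduced here, and all names and shapes are hypothetical):

```python
import numpy as np

def linearized_eit_step(J, dV, alpha=1e-3):
    """Solve min ||J ds - dV||^2 + alpha ||ds||^2 for the conductivity
    perturbation ds, where J is the sensitivity (Jacobian) matrix mapping
    conductivity changes to boundary-voltage changes."""
    A = J.T @ J + alpha * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ dV)

# stand-in shapes: 208 boundary measurements, 576 pixels in the domain
J = np.random.randn(208, 576)
dV = np.random.randn(208)
dsigma = linearized_eit_step(J, dV)
```

The regularisation parameter alpha controls the trade-off between data fit and stability; some regularisation is essential because the EIT inverse problem is severely ill-posed.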
Abstract:
In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. By using a database containing GPS measurements of individual paths (position, velocity, and covered distance at a spatial scale of 2 km or a time scale of 30 s), which covers 2% of the private vehicles in Italy, we succeed in determining some statistical empirical laws pointing out "universal" characteristics of human mobility. By developing simple stochastic models suggesting possible explanations of the empirical observations, we are able to indicate the key quantities and cognitive features that rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law emerging from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is then spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities that are performed, and those of the networks describing people's common use of space to the fractional dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks, but are also influenced by a hierarchy among the activities performed. We discover that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of the average travel times. We propose an assimilation model to resolve the intrinsic scattering of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
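As an aside not from the thesis: checking Benford's law on a sample of elapsed times amounts to comparing leading-digit frequencies with log10(1 + 1/d); a sketch with stand-in data in place of the GPS database:

```python
import numpy as np

def leading_digit(x):
    """First significant digit of each positive value in x."""
    x = np.asarray(x, dtype=float)
    return (x / 10 ** np.floor(np.log10(x))).astype(int)

benford = np.log10(1 + 1 / np.arange(1, 10))   # P(d) for d = 1..9

# stand-in for inter-trip times; values spanning several decades
times = np.random.lognormal(mean=7.0, sigma=1.5, size=100_000)
observed = np.bincount(leading_digit(times), minlength=10)[1:10]
observed = observed / observed.sum()
for d, (o, b) in enumerate(zip(observed, benford), start=1):
    print(f"digit {d}: observed {o:.3f}  Benford {b:.3f}")
```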
Abstract:
The IceCube neutrino telescope, located at the South Pole, detects high-energy neutrinos via the charged- and neutral-current weak interactions. The analysis relies on a comparison with Monte Carlo simulations, whose production is coordinated globally. In Mainz, such simulations were realized for the first time within the architecture of the Worldwide LHC Computing Grid (WLCG), opening up the possibility of distributing Monte Carlo computations to other German computing farms (CEs) with IceCube authorization. Atmospheric muons are recorded at a rate of more than 1000 events per second. A correct interpretation of this dominant signal, which must be reduced by a factor of 10^6 in order to extract the actual neutrino signal, is therefore of great importance. Dedicated simulations with the CORSIKA software environment were carried out to determine the production height of atmospheric muons as a function of energy and zenith angle. IceCube muon rates were compared with weather data from the European Centre for Medium-Range Weather Forecasts (ECMWF), and correlations between seasonal as well as short-term variations of the atmospheric temperature and the muon rates were demonstrated. In addition, a search for periodic effects in the atmosphere, caused e.g. by meteorological gravity waves, was performed on the IceCube data by means of a Fourier analysis. So far, no significant evidence for the existence of gravity waves at the South Pole could be established.
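As an aside not from the thesis: the Fourier search for periodic atmospheric effects reduces to a periodogram of the muon-rate time series; a sketch with synthetic stand-in data (the real input would be IceCube rates and ECMWF temperatures):

```python
import numpy as np

# synthetic hourly muon-rate series over one year, with a seasonal cycle
hours = np.arange(24 * 365)
rate = 1000 * (1 + 0.08 * np.sin(2 * np.pi * hours / (24 * 365)))
rate += np.random.normal(0, 5, hours.size)     # counting noise

# periodogram of the de-meaned series; significant peaks at shorter
# periods would hint at effects such as meteorological gravity waves
spectrum = np.abs(np.fft.rfft(rate - rate.mean())) ** 2
freqs = np.fft.rfftfreq(hours.size, d=1.0)     # cycles per hour
peak = freqs[np.argmax(spectrum)]
print(f"dominant period: {1 / peak / 24:.1f} days")
```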
Abstract:
Millisecond Pulsars (MSPs) are fast rotating, highly magnetized neutron stars. According to the "canonical recycling scenario", MSPs form in binary systems containing a neutron star which is spun up through mass accretion from the evolving companion. The final stage therefore consists of a binary made of a MSP and the core of the deeply peeled companion. In recent years, however, an increasing number of systems deviating from these expectations has been discovered, strongly indicating that our understanding of MSPs is far from complete. The identification of the optical companions to binary MSPs is crucial to constrain the formation and evolution of these objects. In dense environments such as Globular Clusters (GCs), it also allows us to gain insights into the cluster's internal dynamics. By using deep photometric data, acquired both from space and from ground-based telescopes, we identified five new companions to MSPs, three of them located in GCs and two in the Galactic field. The three new identifications in GCs increased the number of such objects known before this Thesis by 50%. They are all non-degenerate stars, at odds with the expectations of the "canonical recycling scenario". These results therefore suggest either that transitory phases should also be taken into account, or that dynamical processes, such as exchange interactions, play a crucial role in the evolution of MSPs. We also performed a spectroscopic follow-up of the companion to PSR J1740-5340A in the GC NGC 6397, confirming that it is a deeply peeled star descended from a ~0.8 Msun progenitor. This nicely confirms the theoretical expectations about the formation and evolution of MSPs.
Abstract:
The last decade has witnessed the establishment of a Standard Cosmological Model, which is based on two fundamental assumptions: the first is the existence of a new non-relativistic kind of particles, i.e. the Dark Matter (DM), which provides the potential wells in which structures form, while the second is the presence of the Dark Energy (DE), the simplest form of which is represented by the Cosmological Constant Λ, that sources the acceleration in the expansion of our Universe. These two features are summarized by the acronym ΛCDM, used to refer to the present Standard Cosmological Model. Although the Standard Cosmological Model shows a remarkably successful agreement with most of the available observations, it presents some longstanding unsolved problems. A possible way to solve these problems is the introduction of a dynamical Dark Energy, in the form of a scalar field ϕ. In coupled DE models, the scalar field ϕ features a direct interaction with matter in different regimes. Cosmic voids are large under-dense regions of the Universe devoid of matter. Being nearly empty of matter, their dynamics is supposed to be dominated by DE, to the nature of which the properties of cosmic voids should therefore be very sensitive. This thesis work is devoted to the statistical and geometrical analysis of cosmic voids in large N-body simulations of structure formation in the context of alternative competing cosmological models. In particular, we used the ZOBOV code (see ref. Neyrinck 2008), a publicly available void finder algorithm, to identify voids in the halo catalogues extracted from the CoDECS simulations (see ref. Baldi 2012), the largest N-body simulations to date of interacting DE models. We identify suitable criteria to produce void catalogues with the aim of comparing the properties of these objects in interacting DE scenarios to the standard ΛCDM model, at different redshifts. This thesis work is organized as follows: in chapter 1, the Standard Cosmological Model as well as the main properties of cosmic voids are introduced. In chapter 2, we present the scalar field scenario. In chapter 3, the tools, methods, and criteria by which a void catalogue is created are described, while in chapter 4 we discuss the statistical properties of the cosmic voids included in our catalogues. In chapter 5, the geometrical properties of the catalogued cosmic voids are presented by means of their stacked profiles. In chapter 6, we summarize our results and propose further developments of this work.
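As an aside not from the thesis: stacked profiles of the kind presented in chapter 5 are built, in essence, by rescaling tracer distances by each void radius and averaging shell densities over the stack; a minimal sketch with stand-in catalogues (the real input would be the ZOBOV voids and the CoDECS halo catalogues; periodic boundaries are ignored here):

```python
import numpy as np

def stacked_profile(centers, radii, halo_pos, n_bins=20):
    """Stacked radial density profile: count tracers in shells of the
    rescaled distance r / R_void and divide by the stacked shell volume."""
    edges = np.linspace(0.0, 2.0, n_bins + 1)
    counts = np.zeros(n_bins)
    volumes = np.zeros(n_bins)
    for c, R in zip(centers, radii):
        r = np.linalg.norm(halo_pos - c, axis=1) / R
        counts += np.histogram(r, bins=edges)[0]
        volumes += 4 / 3 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3) * R ** 3
    return 0.5 * (edges[1:] + edges[:-1]), counts / volumes

rng = np.random.default_rng(0)                  # stand-in catalogues
centers = rng.uniform(0, 100, (50, 3))
radii = rng.uniform(5, 15, 50)
halos = rng.uniform(0, 100, (20_000, 3))
r, density = stacked_profile(centers, radii, halos)
```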
Abstract:
This thesis reports on the creation and analysis of many-body states of interacting fermionic atoms in optical lattices. The realized system can be described by the Fermi-Hubbard Hamiltonian, which is an important model for correlated electrons in modern condensed matter physics. In this way, ultra-cold atoms can be utilized as a quantum simulator to study solid state phenomena. The use of a Feshbach resonance in combination with a blue-detuned optical lattice and a red-detuned dipole trap enables independent control over all relevant parameters in the many-body Hamiltonian. By measuring the in-situ density distribution and the doublon fraction, it has been possible to identify both metallic and insulating phases in the repulsive Hubbard model, including the experimental observation of the fermionic Mott insulator. In the attractive case, the appearance of strong correlations has been detected via an anomalous expansion of the cloud that is caused by the formation of non-condensed pairs. By monitoring the in-situ density distribution of initially localized atoms during free expansion in a homogeneous optical lattice, a strong influence of interactions on the out-of-equilibrium dynamics within the Hubbard model has been found. The reported experiments pave the way for future studies of magnetic order and fermionic superfluidity in a clean and well-controlled experimental system.
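For reference (the standard textbook form, not quoted from the thesis), the single-band Fermi-Hubbard Hamiltonian realized in such experiments reads, with tunnelling amplitude J and on-site interaction U,

```latex
\hat{H} = -J \sum_{\langle i,j \rangle, \sigma}
          \left( \hat{c}^{\dagger}_{i\sigma} \hat{c}_{j\sigma} + \mathrm{h.c.} \right)
        + U \sum_{i} \hat{n}_{i\uparrow} \hat{n}_{i\downarrow}
```

In the trapped-atom realization an additional confinement term proportional to the local trap potential appears; the Feshbach resonance mentioned above tunes U, while the lattice depth sets J.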
Abstract:
The Standard Model of elementary particle physics was developed to describe the fundamental particles which constitute matter and the interactions between them. The Large Hadron Collider (LHC) at CERN in Geneva was built to solve some of the remaining open questions in the Standard Model and to explore physics beyond it, by colliding two proton beams at world-record centre-of-mass energies. The ATLAS experiment is designed to reconstruct particles and their decay products originating from these collisions. The precise reconstruction of particle trajectories plays an important role in the identification of particle jets which originate from bottom quarks (b-tagging). This thesis describes the step-wise commissioning of the ATLAS track reconstruction and b-tagging software and one of the first measurements of the b-jet production cross section in pp collisions at sqrt(s)=7 TeV with the ATLAS detector. The performance of the track reconstruction software was studied in great detail, first using data from cosmic ray showers and then collisions at sqrt(s)=900 GeV and 7 TeV. The good understanding of the track reconstruction software allowed a very early deployment of the b-tagging algorithms. First studies of these algorithms and the measurement of the b-tagging efficiency in the data are presented. They agree well with predictions from Monte Carlo simulations. The b-jet production cross section was measured with the 2010 dataset recorded by the ATLAS detector, employing muons in jets to estimate the fraction of b-jets. The measurement is in good agreement with the Standard Model predictions.
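For orientation (the generic relation behind any such counting measurement, not the thesis's exact formula): a cross section is extracted from event counts as

```latex
\sigma_b = \frac{N_{\mathrm{jets}} \cdot f_b}{\epsilon_b \cdot \mathcal{L}_{\mathrm{int}}}
```

where f_b is the b-jet fraction estimated from the muons in jets, \epsilon_b the selection (b-tagging) efficiency, and \mathcal{L}_{\mathrm{int}} the integrated luminosity.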
Abstract:
In this thesis, the phenomenology of the Randall-Sundrum setup is investigated. In this context models with and without an enlarged SU(2)_L x SU(2)_R x U(1)_X x P_{LR} gauge symmetry, which removes corrections to the T parameter and to the Z b_L \bar b_L coupling, are compared with each other. The Kaluza-Klein decomposition is formulated within the mass basis, which allows for a clear understanding of various model-specific features. A complete discussion of tree-level flavor-changing effects is presented. Exact expressions for five dimensional propagators are derived, including Yukawa interactions that mediate flavor-off-diagonal transitions. The symmetry that reduces the corrections to the left-handed Z b \bar b coupling is analyzed in detail. In the literature, Randall-Sundrum models have been used to address the measured anomaly in the t \bar t forward-backward asymmetry. However, it will be shown that this is not possible within a natural approach to flavor. The rare decays t \to cZ and t \to ch are investigated, where in particular the latter could be observed at the LHC. A calculation of \Gamma_{12}^{B_s} in the presence of new physics is presented. It is shown that the Randall-Sundrum setup allows for an improved agreement with measurements of A_{SL}^s, S_{\psi\phi}, and \Delta\Gamma_s. For the first time, a complete one-loop calculation of all relevant Higgs-boson production and decay channels in the custodial Randall-Sundrum setup is performed, revealing a sensitivity to large new-physics scales at the LHC.
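For context (the standard form, not quoted from the thesis), the Randall-Sundrum background underlying all of the above is the warped five-dimensional metric

```latex
ds^2 = e^{-2k|y|} \, \eta_{\mu\nu} \, dx^{\mu} dx^{\nu} + dy^2
```

whose exponential warp factor generates large hierarchies of scales from order-one input parameters.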
Abstract:
The formation and evolution of galaxy bulges is a greatly debated topic in modern astrophysics. One approach to address this issue is to look at the Galactic bulge, the closest to us. According to some theoretical models, our bulge was built up from the merger of substructures formed from the instability and fragmentation of a proto-disk in the early phases of Galactic evolution. We may have discovered the remnant of one of these substructures: the stellar system Terzan 5. Terzan 5 hosts two stellar populations with different iron abundances, suggesting it was once far more massive than it is today. Moreover, its peculiar chemistry resembles that observed only in the Galactic bulge. In this Thesis we perform a detailed photometric and spectroscopic analysis of this cluster to determine its formation and evolutionary histories. From the photometric point of view, we built a high-resolution differential reddening map in the direction of Terzan 5, and we measured relative proper motions to separate its member population from the contaminating field stars. This work is the necessary prerequisite for measuring the absolute ages of the Terzan 5 populations via the turn-off luminosity method. From the spectroscopic point of view, we measured abundances for more than 600 stars belonging to Terzan 5 and its surroundings, in order to build the largest field-decontaminated metallicity distribution for this system. We find that the metallicity distribution is extremely wide (more than 1 dex), and we discovered a third, metal-poor and alpha-enhanced population with average [Fe/H] = -0.8. The striking similarity between Terzan 5 and the bulge in terms of their chemical formation and evolution revealed by this Thesis suggests that Terzan 5 formed in situ with the bulge itself. In particular, its metal-poor populations trace the early stages of the bulge formation, while its most metal-rich component contains crucial information on the bulge's more recent evolution.
Abstract:
The Large Magellanic Cloud (LMC) is widely considered the first step of the cosmological distance ladder, since it contains many different distance indicators. An accurate determination of the distance to the LMC allows one to calibrate these distance indicators, which are then used to measure the distance to far objects. The main goal of this thesis is to study the distance and structure of the LMC, as traced by different distance indicators. For these purposes three types of distance indicators were chosen: Classical Cepheids, "hot" eclipsing binaries, and RR Lyrae stars. These objects belong to different stellar populations tracing, in turn, different sub-structures of the LMC. The RR Lyrae stars (age > 10 Gyr) are distributed smoothly and likely trace the halo of the LMC. Classical Cepheids are young objects (age 50-200 Myr), mainly located in the bar and spiral arm of the galaxy, while "hot" eclipsing binaries mainly trace the star forming regions of the LMC. Furthermore, we have chosen these distance indicators for our study because the calibration of their zero-points is based on fundamental geometric methods. The ESA cornerstone mission Gaia, launched on 19 December 2013, will measure trigonometric parallaxes for one billion stars with an accuracy of 20 micro-arcsec at V=15 mag and 200 micro-arcsec at V=20 mag, and will thus allow us to calibrate the zero-points of Classical Cepheids, eclipsing binaries, and RR Lyrae stars with unprecedented precision.
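For reference (standard relations, not quoted from the thesis): the distance indicators above enter the ladder through the distance modulus, and Gaia's trigonometric parallaxes fix its zero-point via

```latex
\mu = m - M = 5 \log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right),
\qquad d\,[\mathrm{pc}] = \frac{1}{\varpi\,[\mathrm{arcsec}]}
```

so a 20 micro-arcsec parallax accuracy translates into percent-level distances to stars at a kiloparsec.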
Abstract:
Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistically intense, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent Particle-in-Cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied, and the dependence of the self-injection threshold on laser intensity and wake velocity is characterized for a range of parameters of interest for laser-plasma acceleration. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on Graphics Processing Unit (GPU) clusters, is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
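As an aside not from the thesis: the central particle update in PIC codes is the Boris velocity push; a non-relativistic sketch (LPA codes such as jasmine use the relativistic variant, and the names here are hypothetical):

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """One Boris step: half electric kick, magnetic rotation, half kick.
    The standard leapfrog-compatible momentum update in PIC codes."""
    v_minus = v + 0.5 * q_over_m * dt * E       # first half electric kick
    t = 0.5 * q_over_m * dt * B                 # rotation vector
    s = 2 * t / (1 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)    # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * q_over_m * dt * E     # second half electric kick

v_new = boris_push(np.array([1e6, 0.0, 0.0]),   # m/s
                   np.array([0.0, 1e5, 0.0]),   # V/m
                   np.array([0.0, 0.0, 1.0]),   # T
                   q_over_m=-1.76e11, dt=1e-12)
```

On GPUs the difficulty is usually not this arithmetic but the memory access pattern of the accompanying charge and current deposition, a commonly cited reason why core PIC algorithms need redesigning for massively parallel architectures.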