9 results for dark energy experiments
in AMS Tesi di Laurea - Alm@DL - Università di Bologna
Abstract:
The cosmological constant Λ does not seem to be a satisfactory explanation of the late-time accelerated expansion of the Universe, for which substantial observational evidence exists; it has therefore become necessary in recent years to consider alternative models of dark energy, understood as the cause of the accelerated expansion. In the study of dark energy models, it is important to understand which quantities can be determined from observational data without assuming any hypothesis about the cosmological model; such quantities were determined in Amendola, Kunz et al., 2012. The same paper further showed that it is possible to establish a relation between the model-independent parameters and the anisotropic stress η, which can also be expressed as a combination of the functions appearing in the most general Lagrangian for scalar-tensor theories, the Horndeski Lagrangian. In the present thesis, the Fisher matrix formalism is used to forecast the constraints that it will be possible to place on the anisotropic stress η in the future, starting from the estimated uncertainties of the galaxy clustering and weak lensing measurements to be performed by the European Space Agency Euclid mission, scheduled for launch in 2020. Constraints coming from type Ia supernova observations are also considered. The forecast is performed for two cases: (a) η depends on redshift only, and (b) η is constant and equal to one, as in the ΛCDM model.
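As a rough illustration of the forecasting technique named above, the minimal sketch below assembles a Gaussian Fisher matrix for a toy two-parameter model of η(z) and inverts it to obtain marginalized errors. The parametrization, redshift bins, and per-bin uncertainties are hypothetical placeholders, not the thesis setup.

```python
# Minimal Fisher-matrix forecast sketch (illustrative, not the thesis code).
# Model, fiducial values, and uncertainties are hypothetical placeholders.
import numpy as np

def model(z, theta):
    """Toy observable: eta(z) = eta0 + eta1 * z / (1 + z)."""
    eta0, eta1 = theta
    return eta0 + eta1 * z / (1.0 + z)

def fisher_matrix(z, theta_fid, sigma, eps=1e-5):
    """F_ij = sum_k d(mu_k)/d(theta_i) * d(mu_k)/d(theta_j) / sigma_k^2
    for Gaussian, uncorrelated redshift bins."""
    n = len(theta_fid)
    grads = np.empty((n, len(z)))
    for i in range(n):
        dtheta = np.zeros(n); dtheta[i] = eps
        grads[i] = (model(z, theta_fid + dtheta) - model(z, theta_fid - dtheta)) / (2 * eps)
    return np.einsum('ik,jk,k->ij', grads, grads, 1.0 / sigma**2)

z_bins = np.linspace(0.7, 2.0, 8)      # hypothetical survey redshift bins
sigma = 0.05 * np.ones_like(z_bins)    # hypothetical 5% error per bin
F = fisher_matrix(z_bins, np.array([1.0, 0.0]), sigma)  # fiducial: eta = 1 (ΛCDM-like)
cov = np.linalg.inv(F)
print("marginalized 1-sigma errors:", np.sqrt(np.diag(cov)))
```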
Abstract:
We have extended the Boltzmann code CLASS and studied a specific scalar-tensor dark energy model: Induced Gravity.
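For context, this is how CLASS is typically driven through its Python wrapper, classy; the sketch below shows the standard call pattern only. The Induced Gravity coupling parameter name is a hypothetical placeholder, since actual parameter names depend on the modified CLASS branch used in the thesis.

```python
# Hedged sketch of driving CLASS via its Python wrapper, classy.
from classy import Class

cosmo = Class()
cosmo.set({
    'output': 'tCl,pCl,lCl,mPk',   # CMB spectra and matter power spectrum
    'lensing': 'yes',
    'h': 0.67, 'omega_b': 0.02236, 'omega_cdm': 0.1202,
    # 'gamma_IG': 0.0008,          # hypothetical Induced Gravity coupling; name
                                   # depends on the modified CLASS branch
})
cosmo.compute()
cls = cosmo.lensed_cl(2500)        # dict with 'ell', 'tt', 'ee', ...
pk = cosmo.pk(0.1, 0.0)            # P(k = 0.1 1/Mpc, z = 0)
cosmo.struct_cleanup()
cosmo.empty()
```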
Abstract:
The last decade has witnessed the establishment of a Standard Cosmological Model, based on two fundamental assumptions: the first is the existence of a new, non-relativistic kind of particle, Dark Matter (DM), which provides the potential wells in which structures form; the second is the presence of Dark Energy (DE), the simplest form of which is the Cosmological Constant Λ, which sources the accelerated expansion of our Universe. These two features are summarized by the acronym ΛCDM, the abbreviation used to refer to the present Standard Cosmological Model. Although the Standard Cosmological Model shows remarkably successful agreement with most of the available observations, it presents some longstanding unsolved problems. A possible way to solve these problems is the introduction of a dynamical Dark Energy in the form of a scalar field ϕ. In coupled DE models, the scalar field ϕ features a direct interaction with matter in different regimes. Cosmic voids are large, under-dense regions of the Universe, almost devoid of matter. Being nearly empty of matter, their dynamics is expected to be dominated by DE, to whose nature the properties of cosmic voids should therefore be very sensitive. This thesis work is devoted to the statistical and geometrical analysis of cosmic voids in large N-body simulations of structure formation in the context of alternative, competing cosmological models. In particular, we used the ZOBOV code (Neyrinck 2008), a publicly available void finder algorithm, to identify voids in the halo catalogues extracted from the CoDECS simulations (Baldi 2012), the largest N-body simulations of interacting Dark Energy models to date. We identify suitable criteria to produce void catalogues, with the aim of comparing the properties of these objects in interacting DE scenarios to the standard ΛCDM model at different redshifts. This thesis is organized as follows: in chapter 1, the Standard Cosmological Model and the main properties of cosmic voids are introduced. In chapter 2, we present the scalar field scenario. In chapter 3, the tools, methods, and criteria by which a void catalogue is created are described, while in chapter 4 we discuss the statistical properties of the cosmic voids included in our catalogues. In chapter 5, the geometrical properties of the catalogued cosmic voids are presented by means of their stacked profiles. In chapter 6, we summarize our results and propose further developments of this work.
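To make the stacked-profile statistic of chapter 5 concrete, here is a minimal sketch of how a stacked void density profile can be computed from a void catalogue and a tracer catalogue. The array layouts, the shell binning, and the periodic-box assumption are illustrative assumptions, not the thesis pipeline.

```python
# Illustrative stacked void density profile (not the thesis code).
import numpy as np

def stacked_profile(void_centres, void_radii, tracers, box, n_bins=15, r_max=3.0):
    """Mean tracer density in shells of r/R_eff, averaged over all voids and
    normalized by the mean density of a periodic box of side `box`."""
    n_mean = len(tracers) / box**3
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    vols = np.zeros(n_bins)
    for c, R in zip(void_centres, void_radii):
        d = tracers - c
        d -= box * np.round(d / box)          # minimum-image convention
        r = np.linalg.norm(d, axis=1) / R     # distance in units of R_eff
        counts += np.histogram(r, bins=edges)[0]
        vols += 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3) * R**3
    centres = 0.5 * (edges[1:] + edges[:-1])
    return centres, counts / vols / n_mean    # rho(r/R_eff) / rho_mean
```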
Abstract:
Ordinary matter makes up only a few percent of the total mass-energy content of the Universe, which is instead largely dominated by "dark" components. The standard model used to describe them is the ΛCDM model. Although it appears consistent with most of the currently available data, it suffers from some fundamental problems that remain unsolved to this day, leaving room for the study of alternative cosmological models. This thesis aims to study a recently proposed model, called "Multi-coupled Dark Energy" (McDE), which features modified interactions with respect to the ΛCDM model. In particular, Dark Matter is composed of two different particle species with opposite couplings to a scalar field responsible for Dark Energy. The background evolution and the linear perturbations turn out to be indistinguishable from those of the ΛCDM model. This thesis presents, for the first time, a series of "zoomed" numerical simulations. They feature several regions at different resolutions, centred on a single cluster of interest, which make it possible to study an individual structure in detail without excessively increasing the required computing time. A code called ZInCo, which I developed specifically for this thesis, is also presented here for the first time. The code produces initial conditions suitable for cosmological simulations, with different resolution regions, independent of the chosen cosmological model and preserving all the features of the power spectrum imposed on them. ZInCo was used to produce initial conditions for a series of numerical simulations of the McDE model which, thanks to the high resolution achieved, show for the first time that the cluster segregation effect occurs significantly earlier than previously estimated. Moreover, the radial density profiles obtained show a central flattening in the early stages of segregation. This latter effect could help solve the "cusp-core" problem of the ΛCDM model and constrain the possible values of the coupling.
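As a purely conceptual illustration of the multi-resolution idea just described (not the ZInCo algorithm itself, which operates on initial conditions while preserving the imposed power spectrum), the toy sketch below merges 2×2×2 blocks of particles outside a spherical zoom region into single particles that conserve total mass and centre-of-mass velocity. The grid layout and all names are assumptions.

```python
# Toy sketch of resolution degrading outside a zoom region (not ZInCo).
import numpy as np

def degrade_outside(pos, vel, mass, centre, r_zoom, n_side):
    """pos, vel: (N, 3); mass: (N,); particles assumed laid out on an
    n_side^3 grid with n_side even. Returns indices of particles kept at
    full resolution plus a list of merged (position, velocity, mass) tuples."""
    idx = np.arange(len(pos)).reshape(n_side, n_side, n_side)
    keep, merged = [], []
    for i in range(0, n_side, 2):
        for j in range(0, n_side, 2):
            for k in range(0, n_side, 2):
                block = idx[i:i+2, j:j+2, k:k+2].ravel()
                com = np.average(pos[block], weights=mass[block], axis=0)
                if np.linalg.norm(com - centre) < r_zoom:
                    keep.extend(block)   # inside zoom region: full resolution
                else:
                    merged.append((com,
                                   np.average(vel[block], weights=mass[block], axis=0),
                                   mass[block].sum()))
    return keep, merged
```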
Abstract:
Cosmic voids are vast, underdense regions emerging between the elements of the cosmic web and dominating the large-scale structure of the Universe. Void number counts and density profiles have been demonstrated to provide powerful cosmological probes. Indeed, thanks to their low-density nature and their very large sizes, voids represent natural laboratories to test alternative dark energy scenarios, modifications of gravity, and the presence of massive neutrinos. Despite the increasing use of cosmic voids in cosmology, a commonly accepted definition of these objects has not yet been reached. For this reason, different void finding algorithms have been proposed over the years. Void finders based on density or geometrical criteria are affected by intrinsic uncertainties, and in recent years new solutions have been explored to face these issues. The most interesting is based on the idea of identifying void positions through the dynamics of the mass tracers, without performing any direct reconstruction of the density field. The goal of this thesis is to provide an efficient void finder algorithm based on dynamical criteria. The Back-in-time void finder (BitVF) we present uses tracers as test particles, whose orbits are reconstructed from their present, clustered configuration back to the homogeneous and isotropic distribution expected in the early Universe. Once the displacement field is reconstructed, the density field is computed as its divergence, and void centres are then identified as local minima of this field. In this thesis we applied the developed void finding algorithm to simulations. From the resulting void samples we computed different void statistics, comparing the results to those obtained with VIDE, the most popular void finder. BitVF proved able to produce more reliable void samples than VIDE. The BitVF algorithm will be a fundamental tool for precision cosmology, especially with upcoming galaxy surveys.
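The core step of the dynamical approach described above can be sketched compactly: given a reconstructed displacement field on a grid, estimate the density contrast from its divergence (to linear order, δ ≈ -∇·Ψ) and take void centres as local minima. The grid layout and the smoothing choice below are illustrative assumptions, not the BitVF implementation.

```python
# Minimal sketch of the dynamical void-finding idea (not the BitVF code).
import numpy as np
from scipy.ndimage import gaussian_filter, minimum_filter

def void_centres_from_displacement(psi, cell):
    """psi: (3, nx, ny, nz) displacement components on a regular grid;
    cell: grid spacing. Returns grid indices of underdense local minima
    of delta = -div(psi)."""
    div = sum(np.gradient(psi[a], cell, axis=a) for a in range(3))
    delta = gaussian_filter(-div, sigma=1.0)            # mild smoothing
    is_min = delta == minimum_filter(delta, size=3)     # minima over 3^3 cells
    return np.argwhere(is_min & (delta < 0.0))          # keep underdense minima
```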
Abstract:
The ΛCDM model is the simplest, yet so far the most successful, cosmological model for describing the evolution of the Universe. It is based on Einstein's theory of General Relativity and explains the accelerated expansion of the Universe by introducing the cosmological constant Λ, which represents the contribution of so-called dark energy, an entity about which very little is known with certainty. Alternative theoretical models describing the effects of this mysterious quantity have nevertheless been proposed, introducing for example additional degrees of freedom, as in Horndeski theory. The main goal of this thesis is to study these models using the tensor computer algebra suite xAct. In particular, our aim is to implement a universal procedure that allows one to derive, starting from the action, the equations of motion and the time evolution of any generic model.
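xAct is a Mathematica package, so as a language-neutral illustration of the same step, deriving equations of motion from an action by variation, here is a SymPy sketch for a scalar field on a fixed FLRW background. This is a minisuperspace toy, not the full Horndeski computation.

```python
# SymPy analogue of the action-to-equations-of-motion step (toy example).
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
a = sp.Function('a', positive=True)(t)   # scale factor, treated as external
phi = sp.Function('phi')(t)              # scalar field
V = sp.Function('V')                     # generic potential V(phi)

# Minisuperspace Lagrangian: L = a^3 (phi_dot^2 / 2 - V(phi))
L = a**3 * (sp.Rational(1, 2) * sp.diff(phi, t)**2 - V(phi))

# Euler-Lagrange equation; rearranged it reads
#   phi'' + 3 (a'/a) phi' + V'(phi) = 0
eom = euler_equations(L, [phi], [t])[0]
print(sp.simplify(eom))
```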
Abstract:
This thesis is based on two studies related to floating wave energy conversion (WEC) devices and turbulent fountains. The ability of the open-source CFD software OpenFOAM® to simulate these phenomena has been studied, and the CFD models have been compared with physical experimental results. The first study presents a model of a WEC device, called MoonWEC, which is patented by the University of Bologna. The CFD model of the MoonWEC under the action of waves has been simulated using OpenFOAM, and the results are promising. The reliability of the CFD model is confirmed by laboratory experiments, conducted at the University of Bologna, for which a small-scale prototype of the MoonWEC was made from wood and brass. The second part of the thesis concerns turbulent fountains, which form when a heavier source fluid is injected upward into a lighter ambient fluid, or a lighter source fluid is injected downward into a heavier ambient fluid. For this study, the first case was considered for the laboratory experiments and the corresponding CFD model. The vertical release of source fluids of different densities into a quiescent, uniform ambient fluid from a circular source was studied in laboratory experiments conducted at the University of Parma, and a CFD model was set up to reproduce them. Favourable results have been observed from the OpenFOAM simulations for the turbulent fountains as well, indicating that OpenFOAM can be a reliable tool for the simulation of such phenomena.
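For readers unfamiliar with fountain dynamics, the injection regime is conventionally characterized by the source densimetric Froude number, which also sets the classic forced-fountain rise-height scaling. The sketch below works one example; the numbers are hypothetical illustration values, not the Parma experimental parameters.

```python
# Worked example: source parameters of a turbulent fountain
# (heavier fluid injected upward into lighter ambient fluid).
import math

g = 9.81                              # gravitational acceleration, m/s^2
rho_src, rho_amb = 1030.0, 1000.0     # source / ambient density, kg/m^3 (hypothetical)
r0, w0 = 0.005, 0.50                  # source radius (m), injection velocity (m/s)

g_prime = g * (rho_src - rho_amb) / rho_amb   # reduced gravity, m/s^2
Fr = w0 / math.sqrt(g_prime * r0)             # densimetric Froude number
z_ss = 2.46 * Fr * r0                         # classic forced-fountain rise height (Turner 1966)
print(f"g' = {g_prime:.3f} m/s^2, Fr = {Fr:.1f}, steady rise height ~ {z_ss:.3f} m")
```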
Abstract:
The scientific success of the LHC experiments at CERN depends strongly on the availability of computing resources which efficiently store, process, and analyse the amount of data collected every year. This is ensured by the Worldwide LHC Computing Grid infrastructure, which connects computing centres distributed all over the world through high-performance networks. The LHC has an ambitious experimental programme for the coming years, which includes large investments and improvements both in the detector hardware and in the software and computing systems, in order to deal with the huge increase in the event rate expected in the High Luminosity LHC (HL-LHC) phase, and consequently with the huge amount of data that will be produced. In recent years, the role of Artificial Intelligence has become relevant in the High Energy Physics (HEP) world. Machine Learning (ML) and Deep Learning algorithms have been used successfully in many areas of HEP, such as online and offline reconstruction programs, detector simulation, object reconstruction and identification, and Monte Carlo generation, and they will surely be crucial in the HL-LHC phase. This thesis aims at contributing to a CMS R&D project concerning a Machine Learning "as a Service" solution for HEP (MLaaS4HEP). It consists of a data service able to perform an entire ML pipeline (reading data, processing data, training ML models, serving predictions) in a completely model-agnostic fashion, directly using ROOT files of arbitrary size from local or distributed data sources. The framework has been updated with new features in the data preprocessing phase, allowing more flexibility for the user. Since the MLaaS4HEP framework is experiment-agnostic, the ATLAS Higgs Boson ML challenge was chosen as the physics use case, with the aim of testing MLaaS4HEP and the contributions made in this work.
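To illustrate the kind of pipeline MLaaS4HEP automates, the sketch below reads branches from a ROOT file with uproot, builds a feature matrix, and trains a simple classifier. The file name, tree name, and branch names are hypothetical placeholders; the real service streams files of arbitrary size rather than loading them whole.

```python
# Hedged sketch of a ROOT-to-ML pipeline (illustrative, not MLaaS4HEP itself).
import uproot
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

tree = uproot.open("events.root")["ntuple"]    # hypothetical file and tree names
arrays = tree.arrays(["pt", "eta", "phi", "mass", "label"], library="np")
X = np.column_stack([arrays[b] for b in ("pt", "eta", "phi", "mass")])
y = arrays["label"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)   # any model would slot in here
print("test accuracy:", clf.score(X_te, y_te))
```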
Abstract:
All structures are subjected to various loading conditions and combinations. For offshore structures, these loads include permanent loads, hydrostatic pressure, and wave, current, and wind loads. Typically, the sea environments of different geographical regions are characterized by the 100-year wave height, surface currents, and wind speeds. The main problems associated with the commonly used deterministic method are that not all waves have the same period and that the actual stochastic nature of the marine environment is not taken into account. Offshore steel structure fatigue design is done using the DNVGL-RP-0005:2016 standard, which supersedes the DNV-RP-C203 standard (2012). Fatigue analysis is necessary for oil- and gas-producing offshore steel structures, which were first constructed in the Gulf of Mexico (in the 1930s) and later in the North Sea (in the 1960s). Fatigue strength is commonly described by S-N curves obtained from laboratory experiments. The rapid development of the offshore wind industry has driven exploration into deeper ocean areas and the adoption of new support structure concepts, such as full lattice tower systems, among others. The optimal design of offshore wind support structures, including foundation, turbine tower, and transition piece components, taking into consideration economy, safety, and even the environment, is a critical challenge. In this study, the fatigue design challenges of transition pieces from decommissioned platforms for offshore wind energy are discussed. The fatigue resistance of the material and structural components under uniaxial and multiaxial loading is introduced together with the new fatigue design rules, considering the combination of global and local modelling using finite element analysis software.
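The S-N-curve approach mentioned above is usually combined with the Palmgren-Miner linear damage rule, summing the damage contributions of each stress-range bin of the long-term load history. The sketch below works one such calculation; the curve constants and the stress histogram are hypothetical placeholders, and design values must be taken from the applicable DNV standard.

```python
# Minimal S-N / Palmgren-Miner fatigue damage sketch (illustrative values only).
import math

def cycles_to_failure(delta_sigma, log_a=12.164, m=3.0):
    """One-slope S-N curve: log10(N) = log_a - m * log10(delta_sigma [MPa]).
    Constants here are placeholders, not a specific DNV design curve."""
    return 10.0 ** (log_a - m * math.log10(delta_sigma))

# Hypothetical long-term histogram: (stress range in MPa, cycles per year)
histogram = [(80.0, 1.0e4), (40.0, 2.0e5), (20.0, 3.0e6)]

# Miner's rule: D = sum(n_i / N_i); failure is predicted when D reaches 1.
damage_per_year = sum(n / cycles_to_failure(s) for s, n in histogram)
print(f"Miner damage/year = {damage_per_year:.4f}, "
      f"fatigue life ~ {1.0 / damage_per_year:.1f} years")
```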