998 results for dark-energy


Relevance:

60.00%

Publisher:

Abstract:

The goal of this thesis is to analyze the possibility of using early-type galaxies (ETGs) to place evolutionary and cosmological constraints, both by disentangling whether mass or environment is the main driver of ETG evolution, and by developing a technique to constrain H(z) and the cosmological parameters from the ETG age-redshift relation. The (U-V) rest-frame color distribution is studied as a function of mass and environment for two samples of ETGs up to z = 1, extracted from the zCOSMOS survey with a new selection criterion. The color distributions and the slopes of the color-mass and color-environment relations show a strong dependence on mass and a minor dependence on environment. A spectral analysis of the D4000 and Hδ features validates this result: the main driver of galaxy evolution is galaxy mass, with environment playing a subdominant but non-negligible role. The age distribution of ETGs is also analyzed as a function of mass, providing strong evidence for a downsizing scenario. The possibility of setting cosmological constraints from the age-redshift relation is then examined, discussing the relevant degeneracies and model dependencies. A new approach is developed to minimize the impact of systematics on the “cosmic chronometer” method. Analyzing theoretical models, it is demonstrated that the D4000 feature correlates almost linearly with age at fixed metallicity, depending only weakly on the assumed models or star formation history (SFH). The analysis of an SDSS sample of ETGs shows that the differential D4000 evolution of the galaxies can be used to set constraints on cosmological parameters in an almost model-independent way. The resulting values of the Hubble constant and of the dark energy equation-of-state (EoS) parameter are fully compatible with the latest results, with a comparable error budget.
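
For orientation, the “cosmic chronometer” method named above rests on a standard relation (not specific to this thesis): for passively evolving galaxies acting as clocks, the expansion rate follows from the differential age evolution,

\[
H(z) = -\frac{1}{1+z}\,\frac{dz}{dt},
\]

so measuring the age difference dt between ETGs at nearby redshifts (here traced by the differential D4000 evolution) yields H(z) without assuming a cosmological model for distances.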

Relevance:

60.00%

Publisher:

Abstract:

In this thesis we present the implementation of the quadratic maximum likelihood (QML) method, ideally suited to estimate the angular power spectrum of the cross-correlation between cosmic microwave background (CMB) and large-scale structure (LSS) maps, as well as their individual auto-spectra. This tool is an optimal method (unbiased and with minimum variance) in pixel space and goes beyond all previous harmonic analyses in the literature. We describe the implementation of the QML method in the BolISW code and demonstrate its accuracy on simulated maps through a Monte Carlo analysis. We apply this optimal estimator to WMAP 7-year and NRAO VLA Sky Survey (NVSS) data and explore the robustness of the angular power spectrum estimates obtained by the QML method. Taking into account the shot noise and one of the systematics (declination correction) in NVSS, we can safely use most of the information contained in this survey. Conversely, we neglect the noise in temperature, since WMAP is already cosmic-variance dominated on large scales. Because of a discrepancy in the galaxy auto-spectrum between the estimates and the theoretical model, we use two different galaxy distributions: one with a constant bias $b$ and one with a redshift-dependent bias $b(z)$. Finally, we use the angular power spectrum estimates obtained by the QML method to derive constraints on the dark energy critical density in a flat $\Lambda$CDM model with different likelihood prescriptions. Using just the cross-correlation between WMAP7 and NVSS maps at 1.8° resolution, we show that $\Omega_\Lambda$ accounts for about 70\% of the total energy density, disfavouring an Einstein-de Sitter Universe at more than 2$\sigma$ confidence level (CL).
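
For reference, and without claiming it is the exact BolISW implementation: the standard QML construction (Tegmark 1997) builds, from a data vector $x$ with covariance $C$ and noise covariance $N$, the quadratic quantities

\[
y_\ell = x^T E_\ell\, x - \mathrm{tr}(N E_\ell), \qquad
E_\ell = \frac{1}{2}\, C^{-1} \frac{\partial C}{\partial C_\ell}\, C^{-1},
\]

and obtains unbiased, minimum-variance spectrum estimates by weighting with the inverse Fisher matrix,

\[
\hat{C}_\ell = \sum_{\ell'} \left(F^{-1}\right)_{\ell\ell'} y_{\ell'}, \qquad
F_{\ell\ell'} = \frac{1}{2}\,\mathrm{tr}\!\left( C^{-1} \frac{\partial C}{\partial C_\ell}\, C^{-1} \frac{\partial C}{\partial C_{\ell'}} \right).
\]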

Relevance:

60.00%

Publisher:

Abstract:

Astronomical observations of luminosity distances derived from Type Ia supernovae, the CMB spectrum and the global matter distribution provide evidence for the accelerated expansion of the Universe. This cosmic acceleration might be due to an exotic fluid filling the Universe, known as dark energy. Such models have given rise to a collection of new cosmological evolutions, future singularities being the most perplexing ones (“big rip”, “sudden singularities”, . . .).

Relevance:

60.00%

Publisher:

Abstract:

A significant observational effort has been directed at investigating the nature of the so-called dark energy. In this dissertation we derive constraints on dark energy models using three different observables: measurements of the Hubble rate H(z) (compiled by Meng et al. in 2015); the distance moduli of 580 Type Ia supernovae (Union2.1 compilation, 2011); and observations of baryon acoustic oscillations (BAO) and the cosmic microwave background (CMB), through the so-called CMB/BAO ratio for six BAO peaks (one determined from 6dFGS data, two from SDSS and three from WiggleZ). The statistical analysis used the minimum-χ² method (marginalized or minimized over h whenever possible) to constrain the cosmological parameters Ωm, ω and δω0. These tests were applied to two parameterizations of the parameter ω of the dark energy equation of state, p = ωρ (here, p is the pressure and ρ is the energy density of the component). In one, ω is constant and less than −1/3, known as the XCDM model; in the other, the equation-of-state parameter varies with redshift, which we call the GS model. This last model is based on arguments arising from the theory of cosmological inflation. For comparison, the ΛCDM model was also analyzed. Comparing cosmological models with different observations leads to different best-fit configurations. Thus, to rank the observational viability of the different theoretical models we use two information criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). The Fisher matrix tool was incorporated into our tests to provide the uncertainties of the parameters of each theoretical model. We found that the complementarity of the tests is necessary in order to avoid degenerate parametric spaces. From the minimization process we found (at 68% CL), for the XCDM model, best-fit parameters Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, while for the GS model the best fits are Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059. Performing a marginalization we found (at 68% CL), for the XCDM model, best-fit parameters Ωm = 0.28 ± 0.012 and ωX = −1.01 ± 0.052, while for the GS model the best fits are Ωm = 0.28 ± 0.011 and δω0 = 0.00 ± 0.059.
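
A minimal sketch of the kind of χ² fit described above, for the flat XCDM model against H(z) data. This is illustrative only: the data arrays are hypothetical placeholders (not the Meng et al. compilation), and H0 is simply fitted rather than marginalized over as in the dissertation.

```python
# Sketch: chi^2 fit of flat XCDM, H(z) = H0*sqrt(Om*(1+z)^3 + (1-Om)*(1+z)^(3*(1+w)))
import numpy as np
from scipy.optimize import minimize

z = np.array([0.1, 0.4, 0.9, 1.3])           # hypothetical redshifts
Hobs = np.array([69.0, 82.0, 115.0, 160.0])  # hypothetical H(z) [km/s/Mpc]
sig = np.array([5.0, 8.0, 10.0, 15.0])       # hypothetical 1-sigma errors

def H_xcdm(z, H0, Om, w):
    """Hubble rate in a flat universe with constant dark energy EoS w."""
    return H0 * np.sqrt(Om * (1 + z)**3 + (1 - Om) * (1 + z)**(3 * (1 + w)))

def chi2(params):
    H0, Om, w = params
    return np.sum(((Hobs - H_xcdm(z, H0, Om, w)) / sig)**2)

best = minimize(chi2, x0=[70.0, 0.3, -1.0], method="Nelder-Mead")
print(best.x)  # best-fit (H0, Om, w)
```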

Relevance:

60.00%

Publisher:

Abstract:

We present DES14X3taz, a new hydrogen-poor superluminous supernova (SLSN-I) discovered by the Dark Energy Survey (DES) supernova program, with additional photometric data provided by the Survey Using DECam for Superluminous Supernovae. Spectra obtained using the Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy on the Gran Telescopio CANARIAS show that DES14X3taz is an SLSN-I at z = 0.608. Multi-color photometry reveals a double-peaked light curve: a blue and relatively bright initial peak that fades rapidly prior to the slower rise of the main light curve. Our multi-color photometry allows us, for the first time, to show that the initial peak cools from 22,000 to 8000 K over 15 rest-frame days, and is faster and brighter than any published core-collapse supernova, reaching 30% of the bolometric luminosity of the main peak. No physical 56Ni-powered model can fit this initial peak. We show that a shock-cooling model, followed by a magnetar driving the second phase of the light curve, can adequately explain the entire light curve of DES14X3taz. Models involving the shock-cooling of extended circumstellar material at a distance of 400 R⊙ are preferred over the cooling of shock-heated surface layers of a stellar envelope. We compare DES14X3taz to the few double-peaked SLSN-I events in the literature. Although the rise times and characteristics of these initial peaks differ, there exists the tantalizing possibility that they can be explained by one physical interpretation.
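
As background, and not a result of this paper: magnetar-driven models of the kind invoked for the second peak typically power the light curve with the dipole spin-down luminosity (e.g. Kasen & Bildsten 2010),

\[
L_{\rm mag}(t) = \frac{E_p}{t_p}\,\frac{1}{\left(1 + t/t_p\right)^2},
\]

where $E_p$ is the initial rotational energy of the neutron star and $t_p$ its spin-down timescale; fitting these two parameters (plus ejecta properties) to the main peak is what “a magnetar driving the second phase” amounts to.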

Relevance:

60.00%

Publisher:

Abstract:

A black hole's response to external perturbations carries significant information about these exotic objects. Its response shortly after the initial `kick' is known to be ruled by damped oscillations of the perturbing field, called quasinormal modes (QNMs), followed by late-time decaying tails, and is characteristic of the background black hole spacetime. In the last three decades, several shortcomings have emerged in Einstein's General Theory of Relativity (GTR). Such issues come especially from observational cosmology and quantum field theory. In the first case, for example, the observed accelerated expansion of the universe and the hypothesized mysterious dark energy still lack a satisfactory explanation. Secondly, GTR is a classical theory which does not work as a fundamental theory when one wants to achieve a full quantum description of gravity. Due to these facts, modifications of GTR or alternative theories of gravity have been considered. Two potential approaches towards these problems are the quintessence model for dark energy and Hořava-Lifshitz (HL) gravity. Quintessence is a dynamical model of dark energy which is often realized by a scalar field mechanism. HL gravity is a recently proposed theory of gravity which is power-counting renormalizable. The two models are considered potential candidates for explaining these issues.
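
For orientation (the standard textbook form, not specific to this thesis): during the ringdown phase the perturbing field at a fixed point behaves as a superposition of damped oscillations,

\[
\psi(t) \simeq \sum_n A_n\, e^{-i\omega_n t}, \qquad \omega_n = \omega_R - i\,\omega_I \quad (\omega_I > 0),
\]

where the real part sets the oscillation frequency and the imaginary part the damping rate; both are fixed by the background spacetime (and, in modified gravity, by the extra fields), which is why QNM spectra can discriminate between theories.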

Relevance:

60.00%

Publisher:

Abstract:

Heading into the 2020s, Physics and Astronomy are undergoing experimental revolutions that will reshape our picture of the fabric of the Universe. The Large Hadron Collider (LHC), the largest particle physics project in the world, produces 30 petabytes of data annually that need to be sifted through, analysed, and modelled. In astrophysics, the Large Synoptic Survey Telescope (LSST) will be taking a high-resolution image of the full sky every 3 days, leading to data rates of 30 terabytes per night over ten years. These experiments endeavour to answer the question of why 96% of the content of the universe currently eludes our physical understanding. Both the LHC and LSST share the 5-dimensional nature of their data, with position, energy and time being the fundamental axes. This talk will present an overview of the experiments and the data they gather, and outline the challenges in extracting information. The strategies commonly employed are very similar to those of industrial data science problems (e.g., data filtering, machine learning, statistical interpretation) and provide a seed for exchange of knowledge between academia and industry.

Speaker Biography: Professor Mark Sullivan. Mark Sullivan is a Professor of Astrophysics in the Department of Physics and Astronomy. Mark completed his PhD at Cambridge, and following postdoctoral study in Durham, Toronto and Oxford, now leads a research group at Southampton studying dark energy using exploding stars called "type Ia supernovae". Mark has many years' experience of research that involves repeatedly imaging the night sky to track the arrival of transient objects, involving significant challenges in data handling, processing, classification and analysis.

Relevance:

60.00%

Publisher:

Abstract:

Using our anholonomic frame deformation method, we show how generic off-diagonal cosmological solutions, depending in general on all spacetime coordinates and undergoing a phase of ultra-slow contraction, can be constructed in massive gravity. In this paper, new classes of locally anisotropic and (in)homogeneous cosmological metrics with open and closed spatial geometries are found and studied. The late-time acceleration is present due to effective cosmological terms induced by nonlinear off-diagonal interactions and the graviton mass. The off-diagonal cosmological metrics and related Stückelberg fields are constructed in explicit form up to nonholonomic frame transforms of the Friedmann–Lemaître–Robertson–Walker (FLRW) coordinates. We show that the solutions include matter, graviton mass and other effective sources modeling nonlinear gravitational and matter field interactions in modified and/or massive gravity, with polarization of physical constants and deformations of metrics, which may explain certain dark energy and dark matter effects. We state and analyze the conditions under which such configurations mimic interesting solutions in general relativity and its modifications, and recast the general Painlevé–Gullstrand and FLRW metrics. Finally, we elaborate on a reconstruction procedure for a subclass of off-diagonal cosmological solutions which describe cyclic and ekpyrotic universes, with an emphasis on open issues and observable signatures.
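
For context (the textbook form, not taken from this paper): the diagonal FLRW metric that such off-diagonal solutions deform is

\[
ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\varphi^2\right)\right],
\]

with scale factor $a(t)$ and spatial curvature $k = 0, \pm 1$; “off-diagonal” means the constructed metrics carry nonzero mixed components that cannot be removed by the frame transforms considered.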

Relevance:

60.00%

Publisher:

Abstract:

Magnetic fields are ubiquitous in galaxy cluster atmospheres and have a variety of astrophysical and cosmological consequences. Magnetic fields can contribute to the pressure support of clusters, affect thermal conduction, and modify the evolution of bubbles driven by active galactic nuclei (AGN). However, we currently do not fully understand the origin and evolution of these fields throughout cosmic time. Furthermore, we do not have a general understanding of the relationship between magnetic field strength and topology and other cluster properties, such as mass and X-ray luminosity. We can now begin to answer some of these questions using large-scale cosmological magnetohydrodynamic (MHD) simulations of the formation of galaxy clusters, including the seeding and growth of magnetic fields. Using large-scale cosmological simulations with the FLASH code, combined with a simplified model of the acceleration of the cosmic rays responsible for the generation of radio halos, we find that the galaxy cluster frequency distribution and the expected number counts of radio halos from upcoming low-frequency surveys are strongly dependent on the strength of magnetic fields. Thus, a more complete understanding of the origin and evolution of magnetic fields is necessary to understand and constrain models of diffuse synchrotron emission from clusters. One favored model for generating magnetic fields is the amplification of weak seed fields in AGN accretion disks and their subsequent injection into cluster atmospheres via AGN-driven jets and bubbles. However, current large-scale cosmological simulations cannot directly include the physical processes associated with the accretion and feedback processes of AGN, or the seeding and merging of the associated supermassive black holes (SMBHs). Thus, we must include these effects as subgrid models. In order to carefully study the growth of magnetic fields in clusters via AGN-driven outflows, we present a systematic study of SMBH and AGN subgrid models. Using dark-matter-only cosmological simulations, we find that many important quantities, such as the relationship between SMBH mass and galactic bulge velocity dispersion and the merger rate of black holes, are highly sensitive to the subgrid model assumptions for SMBHs. In addition, using MHD calculations of an isolated cluster, we find that magnetic field strengths, extent, topology, and relationship to other gas quantities such as temperature and density are also highly dependent on the chosen model of accretion and feedback. We use these systematic studies of SMBHs and AGN to inform and constrain our choice of subgrid models, and we use those results to outline a fully cosmological MHD simulation to study the injection and growth of magnetic fields in clusters of galaxies. This simulation will be the first to study the birth and evolution of magnetic fields using a fully closed accretion-feedback cycle, with as few assumptions as possible and a clearer understanding of the effects of the various parameter choices.
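
As a reference point (the standard empirical parametrization, not a result of this thesis): the SMBH mass versus bulge velocity dispersion relation mentioned above is usually written as a power law,

\[
\log_{10}\!\left(\frac{M_{\rm BH}}{M_\odot}\right) = \alpha + \beta\, \log_{10}\!\left(\frac{\sigma}{200\ {\rm km\,s^{-1}}}\right),
\]

with observed slopes typically quoted in the range $\beta \sim 4$–$5$; checking whether a subgrid SMBH model reproduces this relation is a common calibration test.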

Relevance:

60.00%

Publisher:

Abstract:

A substantial fraction of the Universe's volume is dominated by almost empty space. Alongside the luminous filamentary structures of the cosmic web, there are vast and smooth regions that have remained outside the cosmology spotlight during the past decades: cosmic voids. Although essentially devoid of matter, voids enclose fundamental information about the cosmological framework and have gradually become an effective and competitive cosmological probe. In this Thesis work we present fundamental results on the cosmological exploitation of voids. We focus on the number density of voids as a function of their radius, known as the void size function, developing an effective pipeline for its cosmological usage. We propose a new parametrisation of the most widely used theoretical void size function to model voids identified in the distribution of biased tracers (i.e. dark matter haloes, galaxies and galaxy clusters), a step of fundamental importance for extending the analysis to real survey data. We then apply this methodology to study voids in alternative cosmological scenarios. First, we exploit voids with the aim of breaking the degeneracies between cosmological scenarios characterised by modified gravity and the inclusion of massive neutrinos. Second, we analyse voids in the perspective of the Euclid survey, focusing on the constraining power of void abundances on dynamical dark energy models with massive neutrinos. Moreover, we explore other void statistics, such as void profiles and void clustering (i.e. the void-galaxy and void-void correlations), providing cosmological forecasts for the Euclid mission. We finally focus on probe combination, highlighting the strong potential of jointly analysing multiple void statistics and of combining the void size function with different cosmological probes. Our results show the fundamental role of void analysis in constraining the parameters of the cosmological model and pave the way for future studies on this topic.
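
For reference, the theoretical void size function referred to above is commonly written in the excursion-set form (e.g. Sheth & van de Weygaert 2004; the Vdn variant of Jennings, Li & Hu 2013 rescales the volume term), which analyses of this kind re-parametrise:

\[
\frac{dn}{d\ln R} = \frac{f(\sigma)}{V(R)}\, \frac{d\ln \sigma^{-1}}{d\ln R},
\]

where $\sigma(R)$ is the rms of the linear density field on scale $R$, $V(R)$ the void volume, and $f(\sigma)$ the multiplicity function encoding the underdensity threshold for void formation.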

Relevance:

60.00%

Publisher:

Abstract:

Cosmic voids are vast underdense regions emerging between the elements of the cosmic web and dominating the large-scale structure of the Universe. Void number counts and density profiles have been demonstrated to provide powerful cosmological probes. Indeed, thanks to their low-density nature and their very large sizes, voids represent natural laboratories to test alternative dark energy scenarios, modifications of gravity and the presence of massive neutrinos. Despite the increasing use of cosmic voids in cosmology, a commonly accepted definition for these objects has not yet been reached. For this reason, different void finding algorithms have been proposed over the years. Void finder algorithms based on density or geometrical criteria are affected by intrinsic uncertainties. In recent years, new solutions have been explored to face these issues. The most interesting is based on the idea of identifying void positions through the dynamics of the mass tracers, without performing any direct reconstruction of the density field. The goal of this Thesis is to provide a performant void finder algorithm based on dynamical criteria. The Back-in-time void finder (BitVF) we present uses tracers as test particles, and their orbits are reconstructed from their actual clustered configuration back to the homogeneous and isotropic distribution expected at the Universe's early epochs. Once the displacement field is reconstructed, the density field is computed as its divergence. Consequently, void centres are identified as local minima of this field. In this Thesis work we apply the developed void finding algorithm to simulations. From the resulting void samples we compute different void statistics, comparing the results to those obtained with VIDE, the most popular void finder. BitVF proves able to produce more reliable void samples than VIDE. The BitVF algorithm will be a fundamental tool for precision cosmology, especially with upcoming galaxy surveys.
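
A minimal sketch of the final two steps described above (illustrative only, not the BitVF code; the regular grid layout and the linear-order relation $\delta \approx -\nabla\cdot\Psi$ are assumptions):

```python
# Sketch: density contrast from the divergence of a displacement field,
# then void-centre candidates as local minima of that field.
import numpy as np
from scipy.ndimage import minimum_filter

def density_from_displacement(psi, spacing=1.0):
    """psi: array of shape (3, nx, ny, nz) holding the displacement components."""
    div = sum(np.gradient(psi[i], spacing, axis=i) for i in range(3))
    return -div  # delta ~ -div(Psi) at linear order (Zel'dovich approximation)

def local_minima(delta, size=3):
    """Flag grid cells that are local minima of the density field."""
    return delta == minimum_filter(delta, size=size)

# usage: delta = density_from_displacement(psi)
#        centres = np.argwhere(local_minima(delta))
```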

Relevance:

60.00%

Publisher:

Abstract:

The ΛCDM model is the simplest, yet so far most successful, cosmological model for describing the evolution of the Universe. It is based on Einstein's theory of General Relativity and explains the accelerated expansion of the Universe by introducing the cosmological constant Λ, which represents the contribution of so-called dark energy, an entity about which very little is known with certainty. Alternative theoretical models have nevertheless been proposed to describe the effects of this mysterious quantity, for example by introducing additional degrees of freedom, as in Horndeski theory. The main goal of this thesis is to study these models using the tensor computer algebra system xAct. In particular, our aim is to implement a universal procedure that allows one to derive, starting from the action, the equations of motion and the time evolution of any generic model.
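
In equation form, the "universal procedure" described here amounts to automating the variational derivation of field equations. For example (standard GR with Λ, shown only for orientation), varying

\[
S = \int d^4x\, \sqrt{-g}\left[\frac{R - 2\Lambda}{16\pi G} + \mathcal{L}_m\right]
\]

with respect to $g^{\mu\nu}$ gives

\[
G_{\mu\nu} + \Lambda\, g_{\mu\nu} = 8\pi G\, T_{\mu\nu},
\]

while a Horndeski-type action adds the scalar field's equation of motion and extra terms in the metric equations.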

Relevance:

40.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

40.00%

Publisher:

Abstract:

We present the results and discussion of a study of a possible suppression of the extragalactic neutrino flux during its propagation, due to a nonstandard interaction with a dark matter candidate field. In particular, we study the interaction of neutrinos with an ultra-light scalar field. It is shown that the extragalactic neutrino flux may be suppressed by such an interaction, providing a mechanism to reduce the ultra-high-energy neutrino flux. We calculate both the case of non-self-conjugate and that of self-conjugate ultra-light dark matter. In the first case, the suppression is independent of the neutrino and dark matter masses. We conclude that care must be taken when explaining limits on the neutrino flux through source acceleration mechanisms only, since there could be other mechanisms for the reduction of the neutrino flux, such as absorption during propagation [1].
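
In standard form (a generic attenuation relation, not the paper's specific cross section): absorption during propagation suppresses the flux exponentially with the optical depth,

\[
\Phi(E) = \Phi_0(E)\, e^{-\tau(E)}, \qquad \tau(E) = \int n_\chi(l)\, \sigma_{\nu\chi}(E)\, dl,
\]

where $n_\chi$ is the number density of the dark matter background along the line of sight and $\sigma_{\nu\chi}$ the neutrino-dark-matter cross section.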