11 results for cosmological parameters from CMBR
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
The objective of the work is to evaluate the potential of navigation satellite signals for retrieving basic atmospheric parameters. A thorough study has been performed of the assumptions more or less explicitly contained in the common processing steps of navigation signals. A probabilistic procedure has been designed for measuring vertical discretised profiles of pressure, temperature and water vapour, together with their associated errors. Numerical experiments on a synthetic dataset have been performed with the main objective of quantifying the information that can be gained from such an approach, using entropy and relative entropy as test metrics. To this aim, a simulator of the phase delay and bending of a GNSS signal travelling across the atmosphere has been developed.
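The information-gain metric mentioned above can be illustrated with a minimal sketch: the relative entropy (Kullback-Leibler divergence) between a posterior and a prior over a discretised atmospheric state. The distributions and the bit-based logarithm below are illustrative choices, not taken from the thesis.

```python
import numpy as np

def relative_entropy(posterior, prior):
    """Discrete Kullback-Leibler divergence D(posterior || prior) in bits.

    Quantifies the information gained on a discretised atmospheric state
    when prior knowledge is updated with observations.
    """
    p = np.asarray(posterior, dtype=float)
    q = np.asarray(prior, dtype=float)
    p = p / p.sum()            # normalise to proper distributions
    q = q / q.sum()
    mask = p > 0               # terms with p == 0 contribute zero
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# A flat prior over 4 discretised states vs. a posterior sharpened by data:
prior = [0.25, 0.25, 0.25, 0.25]
posterior = [0.7, 0.1, 0.1, 0.1]
gain = relative_entropy(posterior, prior)   # > 0 bits of information gained
```

A zero gain means the observations added nothing beyond the prior; larger values quantify how much the measurement sharpened the retrieved profile.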
Abstract:
Since the end of the 19th century, geodesy has contributed greatly to the knowledge of regional tectonics and fault movement through its ability to measure, at sub-centimetre precision, the relative positions of points on the Earth’s surface. Nowadays the systematic analysis of geodetic measurements in actively deforming regions therefore represents one of the most important tools in the study of crustal deformation over different temporal scales [e.g., Dixon, 1991]. This dissertation focuses on motion that can be observed geodetically with classical terrestrial position measurements, particularly triangulation and leveling observations. The work is divided into two sections: an overview of the principal methods for estimating the long-term accumulation of elastic strain from terrestrial observations, and an overview of the principal methods for rigorously inverting surface coseismic deformation fields for source geometry, with tests on synthetic deformation data sets and applications to two different tectonically active regions of the Italian peninsula. For the analysis of long-term elastic strain accumulation, triangulation data were available from a geodetic network across the Messina Straits area (southern Italy) for the period 1971–2004. From the resulting angle changes, the shear strain rates and the orientation of the principal axes of the strain rate tensor were estimated. The computed average annual shear strain rates for the period 1971–2004 are γ˙1 = 113.89 ± 54.96 nanostrain/yr and γ˙2 = -23.38 ± 48.71 nanostrain/yr, with the orientation of the most extensional strain (θ) at N140.80° ± 19.55°E. These results suggest that the first-order strain field of the area is dominated by extension in the direction perpendicular to the trend of the Straits, supporting the hypothesis that the Messina Straits could represent an area of active, concentrated deformation.
The orientation of θ agrees well with GPS deformation estimates calculated over shorter time intervals, is consistent with previous preliminary GPS estimates [D’Agostino and Selvaggi, 2004; Serpelloni et al., 2005], and is also similar to the direction of the slip vector of the 1908 (MW 7.1) earthquake [e.g., Boschi et al., 1989; Valensise and Pantosti, 1992; Pino et al., 2000; Amoruso et al., 2002]. Thus, the measured strain rate can be attributed to active extension across the Messina Straits, corresponding to a relative extension rate ranging from < 1 mm/yr up to ~2 mm/yr within the portion of the Straits covered by the triangulation network. These results are consistent with the hypothesis that the Messina Straits is an important active geological boundary between the Sicilian and Calabrian domains, and support previous preliminary GPS-based estimates of strain rates across the Straits, which show that the active deformation is distributed over a wider area. Finally, preliminary dislocation modelling has shown that, although the current geodetic measurements do not resolve the geometry of the dislocation models, they do resolve well the rate of interseismic strain accumulation across the Messina Straits and give useful information about the locking depth of the shear zone. Geodetic data, triangulation and leveling measurements of the 1976 Friuli (NE Italy) earthquake, were available for the inversion of coseismic source parameters. From the observed angle and elevation changes, the source parameters of the seismic sequence were estimated in a joint inversion using a simulated annealing algorithm. The computed optimal uniform-slip elastic dislocation model consists of a 30° north-dipping shallow (depth 1.30 ± 0.75 km) fault plane with an azimuth of 273°, accommodating reverse dextral slip of about 1.8 m.
The hypocentral location and inferred fault plane of the main event are thus consistent with the activation of Periadriatic overthrusts or other related thrust faults, such as the Gemona-Kobarid thrust. The geodetic data set therefore excludes the source solutions of Aoudia et al. [2000], Peruzza et al. [2002] and Poli et al. [2002], which consider the Susans-Tricesimo thrust as the source of the May 6 event. The best-fit source model is instead more consistent with the solution of Pondrelli et al. [2001], which proposed the activation of other thrusts located further north of the Susans-Tricesimo thrust, probably on Periadriatic-related thrust faults. The main characteristics of the leveling and triangulation data are well fit by the optimal single-fault model; that is, these results are consistent with a first-order rupture process characterized by the progressive rupture of a single fault system. A single uniform-slip fault model does not, however, seem to reproduce some minor complexities of the observations, and some residual signals not modelled by the optimal single-fault-plane solution were observed. In particular, the single-fault-plane model does not reproduce some minor features of the leveling deformation field along route 36, south of the main uplift peak; a second fault seems necessary to reproduce these residual signals. By assuming movements along some mapped thrusts located south of the inferred optimal single-plane solution, the residual signal has been successfully modelled. In summary, the inversion results presented in this Thesis are consistent with the activation of Periadriatic-related thrusts for the main events of the sequence, and with a minor role of the southward thrust systems of the middle Tagliamento plain.
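The simulated-annealing search used for the joint inversion can be sketched generically as follows. The misfit function, step sizes and cooling schedule here are illustrative stand-ins, not the actual geodetic misfit or tuning of the thesis.

```python
import math
import random

def simulated_annealing(misfit, x0, step, t0=1.0, cooling=0.995,
                        n_iter=5000, seed=42):
    """Generic simulated-annealing minimiser: always accept downhill moves,
    accept uphill moves with Boltzmann probability exp(-dF/T), and cool T
    geometrically.  Returns the best parameter vector visited."""
    rng = random.Random(seed)
    x, fx = list(x0), misfit(x0)
    best, fbest = list(x), fx
    t = t0
    for _ in range(n_iter):
        # perturb one randomly chosen parameter
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step[i], step[i])
        fc = misfit(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best, fbest

# Toy misfit with its minimum at (dip, depth) = (30, 1.3), purely illustrative:
toy = lambda p: (p[0] - 30.0) ** 2 + (p[1] - 1.3) ** 2
sol, val = simulated_annealing(toy, x0=[45.0, 5.0], step=[5.0, 1.0])
```

In the real inversion the misfit would compare predicted and observed angle and elevation changes for a candidate dislocation model; the annealing machinery itself is unchanged.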
Abstract:
The goal of this thesis is to analyze the possibility of using early-type galaxies (ETGs) to place evolutionary and cosmological constraints, both by disentangling whether mass or environment is the main driver of ETG evolution, and by developing a technique to constrain H(z) and the cosmological parameters from the ETG age-redshift relation. The (U-V) rest-frame color distribution is studied as a function of mass and environment for two samples of ETGs up to z=1, extracted from the zCOSMOS survey with a new selection criterion. The color distributions and the slopes of the color-mass and color-environment relations are studied, finding a strong dependence on mass and a minor dependence on environment. The spectral analysis performed on the D4000 and Hδ features validates the previous analysis. The main driver of galaxy evolution is found to be the galaxy mass, with the environment playing a subdominant but non-negligible role. The age distribution of ETGs is also analyzed as a function of mass, providing strong evidence in support of a downsizing scenario. The possibility of setting cosmological constraints from the age-redshift relation is then studied, discussing the relevant degeneracies and model dependencies. A new approach is developed, aiming to minimize the impact of systematics on the “cosmic chronometer” method. Analyzing theoretical models, it is demonstrated that the D4000 feature correlates almost linearly with age at fixed metallicity, depending only weakly on the assumed models or the chosen SFH. The analysis of an SDSS sample of ETGs shows that the differential D4000 evolution of the galaxies can be used to constrain cosmological parameters in an almost model-independent way. Values of the Hubble constant and of the dark energy EoS parameter are found which are not only fully compatible with the latest results, but also carry a comparable error budget.
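The cosmic chronometer method rests on the relation H(z) = -1/(1+z) dz/dt, so a differential age between two nearby redshifts yields H directly, without integrating a cosmological model. A minimal sketch, with made-up ages and redshifts for illustration only:

```python
def hubble_from_ages(z1, z2, t1_gyr, t2_gyr):
    """H at the mean redshift from the differential age of two passively
    evolving galaxy populations, via H(z) = -dz/dt / (1+z).
    Returns H in km/s/Mpc."""
    gyr_to_s = 3.156e16           # seconds in one Gyr
    mpc_to_km = 3.086e19          # kilometres in one Mpc
    zm = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / ((t2_gyr - t1_gyr) * gyr_to_s)   # 1/s (negative)
    h_si = -dz_dt / (1.0 + zm)                           # 1/s
    return h_si * mpc_to_km                              # km/s/Mpc

# Illustrative input: the higher-redshift population is 0.7 Gyr younger.
h = hubble_from_ages(0.20, 0.28, 11.0, 10.3)   # ~90 km/s/Mpc for these numbers
```

In the thesis the age difference is traced through the differential D4000 evolution rather than absolute ages, which is what makes the approach nearly model-independent.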
Abstract:
Redshift Space Distortions (RSD) are an apparent anisotropy in the distribution of galaxies due to their peculiar motions. These features are imprinted in the correlation function of galaxies, which describes how these structures are distributed around each other. RSD can be represented by a distortion parameter $\beta$, which is strictly related to the growth of cosmic structures. For this reason, measurements of RSD can be exploited to constrain cosmological parameters, such as the neutrino mass. Neutrinos are neutral subatomic particles that come in three flavours: the electron, the muon and the tau neutrino. Their mass differences can be measured in oscillation experiments. Information on the absolute scale of the neutrino mass can come from cosmology, since neutrinos leave a characteristic imprint on the large-scale structure of the universe. The aim of this thesis is to provide constraints on the accuracy with which the neutrino mass can be estimated when exploiting measurements of RSD. In particular, we describe how the error on the neutrino mass estimate depends on three fundamental parameters of a galaxy redshift survey: the density of the catalogue, the bias of the sample considered and the volume observed. To do this we make use of the BASICC simulation, from which we extract a series of dark matter halo catalogues characterized by different values of bias, density and volume. These mock data are analysed via a Markov Chain Monte Carlo procedure, in order to estimate the neutrino mass fraction, using the software package CosmoMC, suitably modified. In this way we are able to extract a fitting formula describing our measurements, which can be used to forecast the precision reachable with this kind of observation in future surveys such as Euclid.
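The MCMC estimation step can be sketched with a bare-bones Metropolis sampler. The Gaussian toy likelihood for the neutrino mass fraction below is purely illustrative and far simpler than the full CosmoMC analysis of the clustering data.

```python
import math
import random

def metropolis(loglike, x0, step, n, seed=1):
    """Minimal Metropolis sampler: Gaussian proposals, accept with
    probability min(1, L(cand)/L(current))."""
    rng = random.Random(seed)
    x, lx = x0, loglike(x0)
    chain = []
    for _ in range(n):
        cand = x + rng.gauss(0.0, step)
        lc = loglike(cand)
        if rng.random() < math.exp(min(0.0, lc - lx)):
            x, lx = cand, lc
        chain.append(x)
    return chain

# Toy posterior for the neutrino mass fraction f_nu: Gaussian likelihood
# centred on 0.02 with sigma 0.005 (invented numbers), flat prior f_nu >= 0.
def loglike(f):
    if f < 0:
        return float("-inf")
    return -0.5 * ((f - 0.02) / 0.005) ** 2

chain = metropolis(loglike, x0=0.05, step=0.01, n=20000)
burned = chain[2000:]                       # discard burn-in
mean_f = sum(burned) / len(burned)          # posterior mean estimate
```

The posterior mean and spread of the chain play the role of the neutrino-mass constraint; in the thesis the likelihood instead compares the measured redshift-space correlation function with its model prediction.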
Abstract:
The purpose of this Thesis is to develop a robust and powerful method to classify galaxies from large surveys, in order to establish and confirm the connections between the principal observational parameters of the galaxies (spectral features, colours, morphological indices), and to help unveil the evolution of these parameters from $z \sim 1$ to the local Universe. Within the framework of the zCOSMOS-bright survey, and making use of its large database of objects ($\sim 10\,000$ galaxies in the redshift range $0 < z \lesssim 1.2$) and its high reliability in the determination of redshifts and spectral properties, we first adopt and extend the \emph{classification cube method}, as developed by Mignoli et al. (2009), to exploit the bimodal properties of galaxies (spectral, photometric and morphological) separately, and then to combine these three subclassifications. We use this classification method as a test for a newly devised statistical classification, based on Principal Component Analysis and the Unsupervised Fuzzy Partition clustering method (PCA+UFP), which is able to define the galaxy population by exploiting its natural global bimodality, considering up to 8 different properties simultaneously. The PCA+UFP analysis is a very powerful and robust tool to probe the nature and evolution of galaxies in a survey. It allows the classification of galaxies to be defined with smaller uncertainties, with the flexibility to be adapted to different parameters: being a fuzzy classification, it avoids the problems of a hard classification such as the classification cube presented in the first part of this work. The PCA+UFP method can easily be applied to different datasets: it does not rely on the nature of the data, and for this reason it can be successfully employed with other observables (magnitudes, colours) or derived properties (masses, luminosities, SFRs, etc.). The agreement between the two classification cluster definitions is very high.
``Early'' and ``late'' type galaxies are well defined by their spectral, photometric and morphological properties, both when considering them separately and then combining the classifications (classification cube), and when treating them as a whole (PCA+UFP cluster analysis). Differences arise in the definition of outliers: the classification cube is much more sensitive to single measurement errors or misclassifications in one property than the PCA+UFP cluster analysis, in which errors are ``averaged out'' during the process. This method allowed us to observe the \emph{downsizing} effect taking place in the PC spaces: the migration from the blue cloud towards the red clump happens at higher redshifts for galaxies of larger mass. The determination of the transition mass $M_{\mathrm{cross}}$ is in good agreement with other values in the literature.
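A minimal sketch of the PCA-plus-fuzzy-clustering idea, using plain fuzzy c-means in place of the UFP algorithm and synthetic two-population "galaxies" in four correlated properties (all numbers invented for illustration):

```python
import numpy as np

def pca(X, n_comp=2):
    """Project data onto its first principal components via SVD."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_comp].T

def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means: returns membership matrix (n, c) and centres."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(X), c))
    u /= u.sum(axis=1, keepdims=True)          # random soft memberships
    for _ in range(n_iter):
        w = u ** m
        centres = (w.T @ X) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))          # standard FCM membership update
        u = inv / inv.sum(axis=1, keepdims=True)
    return u, centres

# Two synthetic populations ("red" and "blue") in 4 mock properties:
rng = np.random.default_rng(1)
red = rng.normal([2.0, 1.5, 0.8, 3.0], 0.2, size=(100, 4))
blue = rng.normal([0.5, 0.3, 0.2, 1.0], 0.2, size=(100, 4))
X = np.vstack([red, blue])

u, centres = fuzzy_cmeans(pca(X, 2), c=2)
labels = u.argmax(axis=1)      # hard labels from the soft memberships
```

The soft memberships `u` are the point of the fuzzy approach: borderline objects keep intermediate membership in both clusters instead of being forced to one side, which is what "averages out" single-property errors.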
Abstract:
Weak lensing experiments such as the future ESA-accepted mission Euclid aim to measure cosmological parameters with unprecedented accuracy. It is important to assess the precision that can be obtained in these measurements by applying analysis software to mock images that contain the many sources of noise present in real data. In this Thesis, we present a method for simulating observations that produces realistic images of the sky according to the characteristics of the instrument and of the survey. We then use these images to test the performance of the Euclid mission. In particular, we concentrate on the precision of the photometric redshift measurements, which are key data for cosmic shear tomography. We calculate the fraction of the total observed sample that must be discarded to reach the required level of precision, equal to 0.05(1+z) for a galaxy with measured redshift z, for different ancillary ground-based observations. The results highlight the importance of u-band observations, especially to discriminate between low (z < 0.5) and high (z ~ 3) redshifts, and the need for good observing sites, with seeing FWHM < 1.0 arcsec. We then construct an optimal filter to detect galaxy clusters in photometric catalogues of galaxies, and test it on the COSMOS field, obtaining 27 lensing-confirmed detections. Applying this algorithm to mock Euclid data, we verify the possibility of detecting clusters with masses above 10^14.2 solar masses with a low rate of false detections.
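The photometric-redshift quality cut can be illustrated on synthetic data: draw mock redshifts with a well-measured core plus a tail of catastrophic outliers, then measure the fraction failing the 0.05(1+z) requirement. The error levels and outlier rate below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z_true = rng.uniform(0.2, 3.0, n)

# Illustrative error model: a 3% (1+z) Gaussian core plus a 5% tail of
# catastrophic outliers (e.g. low-/high-z confusion without u-band data).
dz = rng.normal(0.0, 0.03 * (1 + z_true))
outlier = rng.random(n) < 0.05
dz[outlier] += rng.choice([-1.0, 1.0], outlier.sum()) * 0.5
z_phot = z_true + dz

# Per-object cut at the 0.05 (1 + z) precision requirement:
good = np.abs(z_phot - z_true) < 0.05 * (1 + z_phot)
discarded_fraction = 1.0 - good.mean()
```

With these toy numbers roughly 14% of objects fail the cut, nearly all of the catastrophic tail plus the wings of the core scatter; in practice the discarded fraction must be estimated from the photo-z error estimates rather than from the (unknown) true redshifts.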
Abstract:
Waste management is an important issue in our society, and Waste-to-Energy incineration plants have been playing a significant role in recent decades, with increasing importance in Europe. One of the main issues posed by waste combustion is the generation of air contaminants. Acid gases, mainly hydrogen chloride and sulfur oxides, are of particular concern due to their potential impact on the environment and on human health. Therefore, in the present study the main available technological options for flue gas treatment were analyzed, focusing on dry treatment systems, which are increasingly applied in Municipal Solid Waste (MSW) incinerators. An operational model was proposed to describe and optimize the acid gas removal process. It was applied to an existing MSW incineration plant, where acid gases are neutralized in a two-stage dry treatment system. This process is based on the injection of powdered calcium hydroxide and sodium bicarbonate in reactors followed by fabric filters. HCl and SO2 conversions were expressed as functions of the reactant flow rates, calculating the model parameters from literature and plant data. Implementation in process simulation software allowed the identification of optimal operating conditions, taking into account the reactant feed rates, the amount of solid products and the recycling of the sorbent. Alternative configurations of the reference plant were also assessed. The applicability of the operational model was extended by also developing a fundamental approach to the problem: a predictive model describing the mass transfer and kinetic phenomena governing acid gas neutralization with solid sorbents. The rate-controlling steps were identified through the reproduction of literature data, allowing the description of acid gas removal in the case study analyzed. A laboratory device was also designed and started up to determine the required model parameters.
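A toy version of an operational removal model for the two-stage system can be sketched by assuming an illustrative conversion law X = 1 - exp(-k·SR) per stage. The actual model in the study is calibrated on plant and literature data; the functional form and the k values below are made up.

```python
import math

def acid_gas_conversion(sr, k):
    """Illustrative conversion-vs-feed-ratio law X = 1 - exp(-k*SR), where
    SR is the stoichiometric ratio of sorbent to acid gas and k lumps the
    mass-transfer/kinetic efficiency of the reactor stage (assumed form,
    not the calibrated model of the study)."""
    return 1.0 - math.exp(-k * sr)

def two_stage_removal(c_in, sr1, k1, sr2, k2):
    """Acid gas concentration after a Ca(OH)2 stage followed by a NaHCO3
    polishing stage, each with its own feed ratio and efficiency."""
    c_mid = c_in * (1.0 - acid_gas_conversion(sr1, k1))
    return c_mid * (1.0 - acid_gas_conversion(sr2, k2))

# Example: 1000 mg/Nm3 HCl at the inlet, hydrated-lime stage at SR = 1.5,
# bicarbonate stage at SR = 1.2 (k values are invented):
c_out = two_stage_removal(1000.0, sr1=1.5, k1=1.2, sr2=1.2, k2=2.0)
```

The optimisation problem then becomes choosing SR1 and SR2 to meet the emission limit on `c_out` at minimum total cost of sorbent feed and residue disposal.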
Abstract:
This thesis work was divided into three parts. The main topic was the "Study of the antioxidant fraction of oils obtained from olives using different technological systems and parameters". It is well known that the oxidative quality of an olive oil depends not only on its fatty acid composition but also on the presence of compounds characterized by high antioxidant activity, namely the phenolic substances. Phenolic compounds therefore contribute decisively to the shelf life of extra virgin olive oil. Moreover, strong correlations have been found between some of these substances and the positive sensory attributes of bitterness and pungency. It should also be stressed that the antioxidant power of the phenolic compounds of virgin olive oils has in recent years attracted considerable interest, since it is correlated with protection against certain pathologies, such as vascular, degenerative and tumoural diseases. The phenolic content of olive oils depends on several factors: cultivar, cultivation method, ripeness of the olives and, of course, the technological operations, since these can change the amount of such compounds that is extracted. In the light of the above, we evaluated the influence of agronomic factors (organic, integrated and conventional farming methods) and technological factors (lowering the temperature of the raw material, addition of processing aids during crushing and malaxation, comparison of three extra virgin olive oils obtained with different technological systems) on the phenolic content of edible oils obtained from olives (papers 1, 3 and 4).
Besides the phenolic substances, olive oils contain other compounds with notable chemical and nutritional properties; among these are the phytosterols, i.e. the sterols typical of the plant kingdom, which represent the quantitatively most important fraction of the unsaponifiable matter after the hydrocarbons. The qualitative and quantitative sterol composition of an olive oil is one of the most important analytical characteristics in the assessment of its genuineness; indeed, the sterol fraction differs significantly according to botanical origin and is therefore used to distinguish oils and their blends from one another. The main sterol in olive oil is β-sitosterol; the presence of this compound in amounts lower than 90% is a rough indicator of the addition of some other oil. β-sitosterol is also important from a health standpoint, since it opposes the absorption of cholesterol. While the literature contains numerous works on the antioxidant power of a series of compounds present in virgin olive oil (the already mentioned polyphenols, but also carotenoids and tocopherols), and research showing that other compounds can instead promote lipid oxidation, little information is available on the antioxidant power of sterols and 4-methylsterols. For this reason we evaluated the sterol composition of extra virgin olive oils obtained with different extraction technologies and the influence of these substances on their oxidative stability (paper 2). It has recently been reported in the literature that cellular lipids detected by nuclear magnetic resonance (NMR) spectroscopy are of strategic importance from a functional and metabolic point of view.
These lipids have, on the one hand, been associated with the development of malignant neoplastic cells and with cell death; on the other hand, they have also proved to be messengers of benign processes such as the activation and proliferation of normal cell growth. Within this line of research a collaboration arose between the Department of Biochemistry "G. Moruzzi" and the Department of Food Science of the University of Bologna. The lipochemistry group of the Department of Food Science, headed by Prof. Giovanni Lercker, has long been engaged in the study of lipid fractions by means of the main chromatographic techniques. The objective of this collaboration was to characterize the total lipid fraction extracted from healthy and neoplastic human kidney tissues through the combined use of several analytical techniques: nuclear magnetic resonance (1H and 13C NMR), thin-layer chromatography (TLC), high-performance liquid chromatography (HPLC) and gas chromatography (GC) (papers 5, 6 and 7).
Abstract:
The first part of the thesis concerns the study of inflation in the context of a theory of gravity called "Induced Gravity", in which the gravitational coupling varies in time according to the dynamics of the very same scalar field (the "inflaton") driving inflation, while taking on the value measured today after the end of inflation. Through the analytical and numerical analysis of scalar and tensor cosmological perturbations we show that the model leads to consistent predictions for a broad variety of symmetry-breaking inflaton potentials, once a dimensionless parameter entering the action is properly constrained. We also discuss the average expansion of the Universe after inflation (when the inflaton undergoes coherent oscillations about the minimum of its potential) and determine the effective equation of state. Finally, we analyze the resonant and perturbative decay of the inflaton during (p)reheating. The second part is devoted to the study of a proposal for a quantum theory of gravity dubbed "Horava-Lifshitz (HL) Gravity", which relies on power-counting renormalizability while explicitly breaking Lorentz invariance. We test two variants of the theory ("projectable" and "non-projectable") on a cosmological background and with the inclusion of scalar field matter. By inspecting the quadratic action for the linear scalar cosmological perturbations, we determine the actual number of propagating degrees of freedom and find that the theory, being endowed with fewer symmetries than General Relativity, admits an extra gravitational degree of freedom which is potentially unstable. More specifically, we conclude that in the case of projectable HL Gravity the extra mode is either a ghost or a tachyon, whereas in the case of non-projectable HL Gravity the extra mode can be made well behaved for suitable choices of a pair of free dimensionless parameters and, moreover, turns out to decouple from the low-energy physics.
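For reference, in standard General Relativity the time-averaged equation of state of an inflaton coherently oscillating about the minimum of a potential $V(\phi) \propto \phi^{n}$ is the classic virial result

```latex
\langle w \rangle \;=\; \frac{\langle p \rangle}{\langle \rho \rangle}
\;=\; \frac{n-2}{n+2},
```

so a quadratic minimum ($n = 2$) gives matter-like average expansion ($\langle w \rangle = 0$) and a quartic one ($n = 4$) gives radiation-like expansion ($\langle w \rangle = 1/3$). This is quoted only as the reference GR behaviour; the induced-gravity dynamics analyzed in the thesis modifies the oscillation phase, which is precisely why the effective equation of state must be recomputed there.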
Abstract:
The aim of this study was to investigate cortisol and progesterone (P4) trends in hair from birth up to post-weaning in Italian trotter foals. Hair sampling is non-invasive, and hair concentrations provide retrospective information on integrated hormone secretion over periods of several months. Samples were collected at birth and then every 30 days, collecting only regrowth hair, up to post-weaning. From birth to 3 months, foal cortisol falls from 47.64±5.6 to 4.9±0.68 pg/mg (mean±standard error), due to the interruption of the foetal-placental connection and progressive adaptation to extrauterine life. From the third month of life to post-weaning, concentrations do not vary significantly, indicating no chronic activation of the HPA axis. Hair P4 decreases significantly between the first two samples (from 469.68±72.54 to 184.65±35.42 pg/mg). At 2 months (111.78±37.13 pg/mg) and 3 months (35.96±6.33 pg/mg) hair concentrations do not show significant differences. These concentrations are not due to interactions of the utero-placental tissues with the foals: the animals are still prepubertal, and P4 is not produced by the adrenals as a result of high stress. We can therefore hypothesize that the source of foal hair P4 could be the milk suckled from the mares. The high individual variability in hair at 2 and 3 months is due to a gradual and individual change in foal diet, from milk to solid food, and to the fact that mares progressively refuse suckling. From the fourth month to post-weaning, the P4 concentration in hair remains around 37.56±6.45 pg/mg. In conclusion, hair collected at birth, giving information about the last period of gestation, could be used along with traditional matrices to evaluate foal maturity. Hair cortisol could give indications about the foal's capacity to adapt to extra-uterine life. Finally, milk, acting as a carrier of nutrients and energy and taking on the character of a nutraceutical, could give fundamental information about parental care.
Abstract:
We have used kinematic models in two Italian regions to reproduce surface interseismic velocities obtained from InSAR and GPS measurements. We have adopted a block modeling (BM) approach to evaluate which fault system is actively accommodating the ongoing deformation in both areas. For the Umbria-Marche Apennines, we find that the tectonic extension observed by GPS measurements is explained by the active contribution of at least two fault systems, one of which is the Alto Tiberina fault (ATF). We have also estimated the interseismic coupling distribution for the ATF using a 3D surface; the result shows an interesting correlation between the microseismicity and the uncoupled fault portions. The second area analyzed is the Gargano promontory, for which we have used the available InSAR and GPS velocities jointly. First we aligned the two datasets to the same terrestrial reference frame; then, using a simple dislocation approach, we estimated the fault parameters that best reproduce the available data, obtaining a solution corresponding to the Mattinata fault. Subsequently, we considered both GPS and InSAR datasets within a BM analysis in order to evaluate whether the Mattinata fault may accommodate the deformation occurring in the central Adriatic due to the relative motion between the North-Adriatic and South-Adriatic plates. We find that the deformation occurring in that region should be accommodated by more than one fault system; this is, however, difficult to detect, given the poor coverage of geodetic measurements offshore of the Gargano promontory. Finally, we have also estimated the interseismic coupling distribution for the Mattinata fault, obtaining a shallow coupling pattern. Both coupling distributions found using the BM approach have been tested by means of checkerboard resolution tests, which demonstrate that the recovered coupling patterns depend on the positions of the geodetic data.
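The kind of linear inversion underlying coupling estimation can be sketched with damped least squares on synthetic data: surface velocities d are linear in the per-patch coupling parameters m through a Green's matrix G, d = Gm + noise. The random G below is only a stand-in for true elastic dislocation Green's functions, and all sizes and noise levels are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_patch = 60, 10
G = rng.normal(size=(n_obs, n_patch))            # stand-in Green's functions
m_true = rng.uniform(0.0, 1.0, n_patch)          # true coupling per fault patch
d = G @ m_true + rng.normal(0.0, 0.05, n_obs)    # noisy "geodetic" velocities

# Damped (Tikhonov-regularised) least squares:
#   m_est = argmin ||G m - d||^2 + lam ||m||^2
lam = 0.1
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_patch), G.T @ d)
```

A checkerboard resolution test of the kind mentioned above amounts to replacing `m_true` with an alternating 0/1 pattern, regenerating `d`, and checking which patches of the pattern survive the inversion given the actual station geometry encoded in G.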